Dataset columns (one field per line in each record below): repo (string, 1 distinct value), number (int64, 1–25.3k), state (string, 2 values), title (string, 1–487 chars), body (string, 0–234k chars), created_at (string, 19 chars), closed_at (string, 19 chars), comments (string, 0–293k chars).
transformers
11,625
closed
CUDA error: an illegal memory access was encountered
## Environment info <!-- You can run the command `transformers-cli env` and copy-and-paste its output below. Don't forget to fill out the missing fields in that output! --> - `transformers` version: 4.3.3 - Platform: Ubuntu 18.04.5 LTS (x86_64) - Python version: 3.8 (64-bit runtime) - PyTorch version (GPU?): 1.7.1 with gpu - Tensorflow version (GPU?): - Using GPU in script?: yes - Using distributed or parallel set-up in script?: No - CUDA/cuDNN version: NVIDIA-SMI 440.33.01 Driver Version: 440.33.01 CUDA Version: 10.2 - GPU models and configuration: RTX 2080Ti ### Who can help <!-- Your issue will be replied to more quickly if you can figure out the right person to tag with @ If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**. Please tag fewer than 3 people. @LysandreJik maybe you can help me? Models: - albert, bert, xlm: @LysandreJik - blenderbot, bart, marian, pegasus, encoderdecoder, t5: @patrickvonplaten, @patil-suraj - longformer, reformer, transfoxl, xlnet: @patrickvonplaten - fsmt: @stas00 - funnel: @sgugger - gpt2: @patrickvonplaten, @LysandreJik - rag: @patrickvonplaten, @lhoestq - tensorflow: @Rocketknight1 Library: - benchmarks: @patrickvonplaten - deepspeed: @stas00 - ray/raytune: @richardliaw, @amogkam - text generation: @patrickvonplaten - tokenizers: @LysandreJik - trainer: @sgugger - pipelines: @LysandreJik Documentation: @sgugger Model hub: - for issues with a model report at https://discuss.huggingface.co/ and tag the model's creator. HF projects: - datasets: [different repo](https://github.com/huggingface/datasets) - rust tokenizers: [different repo](https://github.com/huggingface/tokenizers) Examples: - maintained examples (not research project or legacy): @sgugger, @patil-suraj - research_projects/bert-loses-patience: @JetRunner - research_projects/distillation: @VictorSanh --> ## Information Model I am using (Bert, XLNet ...): Glyce -Bert, Glyce is a Chinese char representation based on Chinese glyph information. The problem arises when using: * [ ] the official example scripts: (give details below) * [x] my own modified scripts: (give details below) The tasks I am working on is: * [ ] an official GLUE/SQUaD task: (give the name) * [x] my own task or dataset: (give details below) It‘s a Chinese STS task ## To reproduce Steps to reproduce the behavior: 1.install transformers==4.3.3 tqdm sklearn tensorboardX apex, zhconv==1.4.0 pypinyin==0.34.1 pywubi==0.0.2 boto3 botocore overrides 2.git clone https://github.com/ShannonAI/glyce.git cd glyce python3 setup.py develop 3.git clone https://github.com/zyh3826/GlyceBertTest.git modify dataset path and model path in config.yaml 4.download BERT-Base, Chinese at https://storage.googleapis.com/bert_models/2018_11_03/chinese_L-12_H-768_A-12.zip convert tf checkpoint to pytorch; copy the bert's pytorch_model.bin to glyce_bert/; modify glyph_config.bert_model in glyce_bert/bert_config.json; 5. python3 my_trainer.py and wait. if you use LCQMC/dev.txt as train, eval, test.txt as test dataset the error will happen at the third evaluation loop <!-- If you have code snippets, error messages, stack traces please provide them here as well. Important! Use code tags to correctly format your code. 
See https://help.github.com/en/github/writing-on-github/creating-and-highlighting-code-blocks#syntax-highlighting Do not use screenshots, as they are hard to read and (more importantly) don't allow others to copy-and-paste your code.--> > Traceback (most recent call last): File "my_trainer.py", line 201, in <module> trainer.train() File "my_trainer.py", line 146, in train self.log(eval_dataloader, loss_item, outputs, labels, global_step, lr) File "/source/code/zhaoyhy/AI/src/toolFunction/trainer/trainer.py", line 289, in log eval_report = self.evaluate(eval_dataloader) File "/source/code/zhaoyhy/AI/src/toolFunction/trainer/trainer.py", line 227, in evaluate outputs, labels, loss = self.evaluate_step(batch) File "my_trainer.py", line 185, in evaluate_step outputs, _ = self.model(**inputs) File "/usr/local/lib/python3.8/dist-packages/torch/nn/modules/module.py", line 727, in _call_impl result = self.forward(*input, **kwargs) File "/source/code/zhaoyhy/AI/src/STS/GlyceBERT/model.py", line 30, in forward encoded_layers, _, glyph_cls_loss = self.glyph_transformer( File "/usr/local/lib/python3.8/dist-packages/torch/nn/modules/module.py", line 727, in _call_impl result = self.forward(*input, **kwargs) File "/source/code/zhaoyhy/AI/src/STS/GlyceBERT/glyce_transformer.py", line 36, in forward outputs = self.bert_model(**inputs) File "/usr/local/lib/python3.8/dist-packages/torch/nn/modules/module.py", line 727, in _call_impl result = self.forward(*input, **kwargs) File "/usr/local/lib/python3.8/dist-packages/transformers/models/bert/modeling_bert.py", line 939, in forward extended_attention_mask: torch.Tensor = self.get_extended_attention_mask(attention_mask, input_shape, device) File "/usr/local/lib/python3.8/dist-packages/transformers/modeling_utils.py", line 275, in get_extended_attention_mask extended_attention_mask = extended_attention_mask.to(dtype=self.dtype) # fp16 compatibility RuntimeError: CUDA error: an illegal memory access was encountered ## Expected behavior no error <!-- A clear and concise description of what you would expect to happen. -->
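A generic first debugging step for this class of error (not taken from the thread; the names below are placeholders from the traceback): illegal-memory-access errors are raised asynchronously, so the `.to()` call shown above is often not the real faulting op. Forcing synchronous kernel launches usually reveals the true culprit, and out-of-range token ids fed to an embedding layer are a common cause.

```python
import os

# Set before torch touches CUDA so kernel launches become synchronous and the
# traceback points at the kernel that actually faulted.
os.environ["CUDA_LAUNCH_BLOCKING"] = "1"

# A common culprit to rule out inside evaluate_step (placeholder names):
# assert input_ids.max().item() < model.config.vocab_size
```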
05-07-2021 08:08:43
05-07-2021 08:08:43
transformers
11,624
closed
Fix comment in run_clm_no_trainer.py
# What does this PR do? Fix comment in run_clm_no_trainer.py
05-07-2021 05:28:31
05-07-2021 05:28:31
transformers
11,623
closed
When I use run_ner.py to fine-tune a model based on BERT, I cannot predict any entities
## Environment info <!-- You can run the command `transformers-cli env` and copy-and-paste its output below. Don't forget to fill out the missing fields in that output! --> - `transformers` version: 4.6.0.dev0 - Platform: windows 7 - Python version: 3.8.8 - PyTorch version (GPU?): - Tensorflow version (GPU?): 2.4.1 (False) - Using GPU in script?: No - Using distributed or parallel set-up in script?: No Steps to reproduce the behavior: 1. Use the first 8 items in sample.json as train.json; 2. Use item 9 in sample.json as test.json; 3. Use run_ner.py to fine tune bert model; `python run_ner.py --model_name_or_path bert-base-uncased --train_file transformers_train.json --validation_file transformers_test.json --output_dir /tmp/transformers_8 --do_train --do_eval` 4. Use item 10 in sample.json to predict; ## Expected behavior 1. All prediction results are LABEL_5; ` [{'word': 'clinton', 'score': 0.48847222328186035, 'entity': 'LABEL_5', 'index': 1, 'start': 0, 'end': 7}, {'word': 'flew', 'score': 0.5289798974990845, 'entity': 'LABEL_5', 'index': 2, 'start': 8, 'end': 12}, {'word': 'in', 'score': 0.5855543613433838, 'entity': 'LABEL_5', 'index': 3, 'start': 13, 'end': 15}, {'word': 'by', 'score': 0.5611563324928284, 'entity': 'LABEL_5', 'index': 4, 'start': 16, 'end': 18}, {'word': 'helicopter', 'score': 0.464128702878952, 'entity': 'LABEL_5', 'index': 5, 'start': 19, 'end': 29}, {'word': 'from', 'score': 0.4999557435512543, 'entity': 'LABEL_5', 'index': 6, 'start': 30, 'end': 34}, {'word': 'michigan', 'score': 0.4572168290615082, 'entity': 'LABEL_5', 'index': 7, 'start': 35, 'end': 43}, {'word': 'city', 'score': 0.52681964635849, 'entity': 'LABEL_5', 'index': 8, 'start': 44, 'end': 48}, {'word': ',', 'score': 0.5917312502861023, 'entity': 'LABEL_5', 'index': 9, 'start': 49, 'end': 50}, {'word': 'indiana', 'score': 0.4882311224937439, 'entity': 'LABEL_5', 'index': 10, 'start': 51, 'end': 58}, {'word': ',', 'score': 0.43045851588249207, 'entity': 'LABEL_5', 'index': 11, 'start': 59, 'end': 60}, {'word': 'after', 'score': 0.5621852874755859, 'entity': 'LABEL_5', 'index': 12, 'start': 61, 'end': 66}, {'word': 'ending', 'score': 0.45101162791252136, 'entity': 'LABEL_5', 'index': 13, 'start': 67, 'end': 73}, {'word': 'a', 'score': 0.5589421987533569, 'entity': 'LABEL_5', 'index': 14, 'start': 74, 'end': 75}, {'word': 'four', 'score': 0.46818190813064575, 'entity': 'LABEL_5', 'index': 15, 'start': 76, 'end': 80}, {'word': '-', 'score': 0.5488259196281433, 'entity': 'LABEL_5', 'index': 16, 'start': 80, 'end': 81}, {'word': 'day', 'score': 0.5397554636001587, 'entity': 'LABEL_5', 'index': 17, 'start': 81, 'end': 84}, {'word': ',', 'score': 0.36657819151878357, 'entity': 'LABEL_5', 'index': 18, 'start': 85, 'end': 86}, {'word': '55', 'score': 0.32759979367256165, 'entity': 'LABEL_5', 'index': 19, 'start': 87, 'end': 89}, {'word': '##9', 'score': 0.4324667155742645, 'entity': 'LABEL_5', 'index': 20, 'start': 89, 'end': 90}, {'word': '-', 'score': 0.45314016938209534, 'entity': 'LABEL_5', 'index': 21, 'start': 90, 'end': 91}, {'word': 'mile', 'score': 0.4748324751853943, 'entity': 'LABEL_5', 'index': 22, 'start': 91, 'end': 95}, {'word': 'trip', 'score': 0.492055207490921, 'entity': 'LABEL_5', 'index': 23, 'start': 96, 'end': 100}, {'word': 'aboard', 'score': 0.5214760303497314, 'entity': 'LABEL_5', 'index': 24, 'start': 101, 'end': 107}, {'word': 'a', 'score': 0.5448480844497681, 'entity': 'LABEL_5', 'index': 25, 'start': 108, 'end': 109}, {'word': 'campaign', 'score': 0.4928675889968872, 'entity': 
'LABEL_5', 'index': 26, 'start': 110, 'end': 118}, {'word': 'train', 'score': 0.359994113445282, 'entity': 'LABEL_5', 'index': 27, 'start': 119, 'end': 124}, {'word': 'from', 'score': 0.4995194673538208, 'entity': 'LABEL_5', 'index': 28, 'start': 125, 'end': 129}, {'word': 'washington', 'score': 0.6212663054466248, 'entity': 'LABEL_5', 'index': 29, 'start': 130, 'end': 140}, {'word': '.', 'score': 0.42734676599502563, 'entity': 'LABEL_5', 'index': 30, 'start': 141, 'end': 142}] ` 2. After the model training is completed, Precision and F-Score are 0.0; ` 05/07/2021 08:51:22 - INFO - __main__ - *** Evaluate *** [INFO|trainer.py:510] 2021-05-07 08:51:22,162 >> The following columns in the ev aluation set don't have a corresponding argument in ‘BertForTokenClassification .forward’ and have been ignored: ner, words. [INFO|trainer.py:2008] 2021-05-07 08:51:22,166 >> ***** Running Evaluation ***** [INFO|trainer.py:2010] 2021-05-07 08:51:22,166 >> Num examples = 1 [INFO|trainer.py:2013] 2021-05-07 08:51:22,168 >> Batch size = 8 0%| | 0/1 [00:00<?, ?it/s]D :\Tool\Install\Python\lib\site-packages\seqeval\metrics\v1.py:57: UndefinedMetri cWarning: Precision and F-score are ill-defined and being set to 0.0 in labels w ith no predicted samples. Use `zero_division` parameter to control this behavior . _warn_prf(average, modifier, msg_start, len(result)) D:\Tool\Install\Python\lib\site-packages\seqeval\metrics\v1.py:57: UndefinedMetr icWarning: Precision and F-score are ill-defined and being set to 0.0 due to no predicted samples. Use `zero_division` parameter to control this behavior. _warn_prf(average, modifier, msg_start, len(result)) 100%|█████████████████████████████████████ ███████| 1/1 [00:00<00:00, 13.69it/s] [INFO|trainer_pt_utils.py:898] 2021-05-07 08:51:22,351 >> ***** eval metrics *** ** [INFO|trainer_pt_utils.py:903] 2021-05-07 08:51:22,352 >> epoch = 3.0 [INFO|trainer_pt_utils.py:903] 2021-05-07 08:51:22,352 >> eval_accuracy = 0.8 [INFO|trainer_pt_utils.py:903] 2021-05-07 08:51:22,353 >> eval_f1 = 0.0 [INFO|trainer_pt_utils.py:903] 2021-05-07 08:51:22,353 >> eval_loss = 1.1879 [INFO|trainer_pt_utils.py:903] 2021-05-07 08:51:22,354 >> eval_precision = 0.0 [INFO|trainer_pt_utils.py:903] 2021-05-07 08:51:22,354 >> eval_recall = 0.0 [INFO|trainer_pt_utils.py:903] 2021-05-07 08:51:22,355 >> eval_runtime = 0:00:00.11 [INFO|trainer_pt_utils.py:903] 2021-05-07 08:51:22,355 >> eval_samples = 1 [INFO|trainer_pt_utils.py:903] 2021-05-07 08:51:22,361 >> eval_samples_per_sec ond = 8.925 ` Is there not enough data set or is there a wrong setting? Thank you. @sgugger
05-07-2021 01:51:04
05-07-2021 01:51:04
It's just a matter of data: a lot of the words are labelled with "not entity", so in the absence of more data the model tends to learn to output that label for every word.
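A quick way to see that imbalance on such a tiny training set (a sketch: the JSON-lines format and loading code are assumptions, but the file name and the `ner` field come from the report above):

```python
import json
from collections import Counter

# Count tag frequencies in the 8-sentence training file; if the outside / "not entity"
# tag dominates by a wide margin, the model will tend to predict it everywhere.
with open("transformers_train.json") as f:
    examples = [json.loads(line) for line in f if line.strip()]

tag_counts = Counter(tag for example in examples for tag in example["ner"])
print(tag_counts.most_common())
```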
transformers
11,622
closed
[TokenClassification] Label realignment for subword aggregation
# What does this PR do? Fixes #10263, #10763 See also #10568 ## Before submitting - [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case). - [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section? - [x] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link to it if that's the case. - [ ] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/master/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/master/docs#writing-source-documentation). - [x] Did you write any new necessary tests? ## Who can review? @LysandreJik @sgugger ## What capabilities have been added ? label realignment: token predictions for subwords can be realigned with 4 different strategies `first` (default): the prediction for the first token in the word is assigned to all subword tokens `max`: the highest confidence prediction among the subword tokens is assigned to all subword tokens `average`: the average pool of the predictions for all subwords is assigned to all subword tokens ## What are the expected changes from the current behavior? New flag `aggregation_strategy` enables realignment. Already existing flag `ignore_subwords` actually enables merging subwords. ## Example use cases with code sample enabled by the PR ``` ner = transformers.pipeline( 'ner', model='elastic/distilbert-base-cased-finetuned-conll03-english', tokenizer='elastic/distilbert-base-cased-finetuned-conll03-english', ignore_labels=[], ignore_subwords=False, aggregation_strategy='average' ) ner('Mark Musterman') [ { 'word': 'Mark', 'score': 0.999686598777771, 'index': 1, 'start': 0, 'end': 4, 'is_subword': False, 'entity': 'B-PER' }, { 'word': 'Must', 'score': 0.9995412826538086, 'index': 2, 'start': 5, 'end': 9, 'is_subword': False, 'entity': 'I-PER' }, { 'word': '##erman', 'score': 0.9996127486228943, 'index': 3, 'start': 9, 'end': 14, 'is_subword': True, 'entity': 'I-PER' } ] ner = transformers.pipeline( 'ner', model='elastic/distilbert-base-cased-finetuned-conll03-english', tokenizer='elastic/distilbert-base-cased-finetuned-conll03-english', ignore_labels=[], ignore_subwords=True, aggregation_strategy='average' ) ner('Mark Musterman') [ { 'word': 'Mark', 'score': 0.999686598777771, 'index': 1, 'start': 0, 'end': 4, 'is_subword': False, 'entity': 'B-PER' }, { 'word': 'Musterman', 'score': 0.9995412826538086, 'index': 2, 'start': 5, 'end': 9, 'is_subword': False, 'entity': 'I-PER' } ] ``` ## Previous use cases with code sample that see the behavior changes ``` ner = transformers.pipeline( 'ner', model='elastic/distilbert-base-cased-finetuned-conll03-english', tokenizer='elastic/distilbert-base-cased-finetuned-conll03-english', ignore_labels=[], ignore_subwords=True ) ner('Mark Musterman') [ { 'word': 'Mark', 'score': 0.999686598777771, 'entity': 'B-PER', 'index': 1, 'start': 0, 'end': 4 }, { 'word': 'Must', 'score': 0.9995412826538086, 'entity': 'I-PER', 'index': 2, 'start': 5, 'end': 9 }, { 'word': '##erman', 'score': 0.9996127486228943, 'entity': 'I-PER', 'index': 3, 'start': 9, 'end': 14 } ] ```
05-07-2021 01:27:40
05-07-2021 01:27:40
@sgugger the failed test (`test_gpt2_model_past_large_inputs`) seems unrelated to the changes in this PR. Any thought on what might be going on and how to resolve?<|||||>No, it's just flaky, don't worry!<|||||>Hey @francescorubbo, @Narsil pointed out a few issues with the current implementation that we'll take a look at today/tomorrow. Namely, the code becomes a bit complex as we keep adding features to that pipeline so it might be time for a slightly larger refactor, and some code is model-specific, such as this line which wouldn't work on non BERT-like tokenizers: ``` subwords[0]["word"] += "".join([sub["word"].split("##")[1] for sub in subwords[1:]]) ``` We're taking a look at what can be done and will come back to you in a bit. Thanks again for your patience.<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread. Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored.
transformers
11,621
closed
Fix usage of head masks by PT encoder-decoder models' `generate()` function
This PR adds missing arguments `head_mask`, `decoder_head_mask` and `cross_attn_head_mask` into `prepare_inputs_for_generation` function of **PyTorch encoder-decoder models** so that these args will be used during the generation when `generate()` function is called. EDIT: Need to fix the new test for ProphetNet <hr> ### Example ```python out = bart.generate(input_ids, ...) tokenizer.decode(out[0], ...) ``` ``` >>> 'The Eiffel Tower in Paris has been officially opened to the public.' ``` **Behaviour before the PR:** ```python out = bart.generate(input_ids, decoder_head_mask=decoder_head_mask, ...) tokenizer.decode(out[0], ...) ``` ```diff - >>> 'The Eiffel Tower in Paris has been officially opened to the public.' ``` **Behaviour after the PR:** ```python out = bart.generate(input_ids, decoder_head_mask=decoder_head_mask, ...) tokenizer.decode(out[0], ...) ``` ```diff + >>> 'The Eiffel Tower in Paris has been officially opened to the public for the first time since it was completed in 1903.' ``` <hr> **Reviewers:** @patrickvonplaten
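For illustration, a sketch of how a caller might build and pass such a mask once this is merged (the checkpoint and the choice of masking the first decoder layer are arbitrary examples, not taken from the PR):

```python
import torch
from transformers import BartForConditionalGeneration, BartTokenizer

tokenizer = BartTokenizer.from_pretrained("facebook/bart-large-cnn")
model = BartForConditionalGeneration.from_pretrained("facebook/bart-large-cnn")

input_ids = tokenizer("The Eiffel Tower reopened to visitors today.", return_tensors="pt").input_ids

# Shape (num_decoder_layers, num_heads); 1.0 keeps a head, 0.0 masks it.
decoder_head_mask = torch.ones(model.config.decoder_layers, model.config.decoder_attention_heads)
decoder_head_mask[0, :] = 0.0  # mask every head in the first decoder layer

out = model.generate(input_ids, decoder_head_mask=decoder_head_mask, num_beams=4, max_length=30)
print(tokenizer.decode(out[0], skip_special_tokens=True))
```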
05-06-2021 21:48:02
05-06-2021 21:48:02
Hey @stancld, Thanks a lot for this contribution! Could we add one test to verify that generation works with head_mask for all encoder-decoder models? I think it could be added to `test_generation_utils.py` <|||||>Hey @patrickvonplaten, I've added one test. At this moment, there are two little issues I'm gonna handle later today so that all encoder-decoder models will pass this new test.<|||||>Hi @patrickvonplaten, sorry for being silent for a while as I've been a bit too busy. As you suggest, I skip the test for `ProphetNetForConditionalGeneration` model and now all the tests pass :)
transformers
11,620
closed
Fix RNG saves in distributed mode.
# What does this PR do? The newly introduced RNG state saving in the Trainer checkpoints can lead to an error when a process with local_rank != 0 reaches the end of the function before the process with local_rank=0 has reached the beginning: in this case the subfolder "checkpoint-xxx" has not been created yet, so saving inside it fails. This PR fixes that. Fixes #11618
05-06-2021 20:48:09
05-06-2021 20:48:09
transformers
11,619
closed
[cuda ext tests] fixing tests
Fixing several tests for the CUDA extension jobs, single- and multi-GPU. One failure I couldn't reproduce locally; I posted about it here: https://github.com/huggingface/transformers/issues/11618 @LysandreJik or @sgugger
05-06-2021 20:00:51
05-06-2021 20:00:51
transformers
11,618
closed
[fairscale] rng states saving fails in an extended multi-gpu test
@sgugger, following up on the RNG states PR ``` RUN_SLOW=1 pytest tests/extended/test_trainer_ext.py::TestTrainerExt::test_run_seq2seq_fully_sharded_ddp_fp16 -sv ``` fails on multi-gpu scheduled CI: ``` File "/__w/transformers/transformers/examples/pytorch/translation/run_translation.py", line 589, in <module> main() File "/__w/transformers/transformers/examples/pytorch/translation/run_translation.py", line 522, in main train_result = trainer.train(resume_from_checkpoint=checkpoint) File "/__w/transformers/transformers/src/transformers/trainer.py", line 1316, in train self._maybe_log_save_evaluate(tr_loss, model, trial, epoch) File "/__w/transformers/transformers/src/transformers/trainer.py", line 1397, in _maybe_log_save_evaluate self._save_checkpoint(model, trial, metrics=metrics) File "/__w/transformers/transformers/src/transformers/trainer.py", line 1534, in _save_checkpoint stderr: Saving model checkpoint to /tmp/tmpb6tfhvqb/checkpoint-1 torch.save(rng_states, os.path.join(output_dir, f"rng_state_{local_rank}.pth")) File "/opt/conda/lib/python3.8/site-packages/torch/serialization.py", line 369, in save with _open_file_like(f, 'wb') as opened_file: File "/opt/conda/lib/python3.8/site-packages/torch/serialization.py", line 230, in _open_file_like return _open_file(name_or_buffer, mode) File "/opt/conda/lib/python3.8/site-packages/torch/serialization.py", line 211, in __init__ super(_open_file, self).__init__(open(name, mode)) stderr: FileNotFoundError: [Errno 2] No such file or directory: '/tmp/tmpb6tfhvqb/checkpoint-1/rng_state_1.pth' ``` https://github.com/huggingface/transformers/runs/2514006341?check_suite_focus=true Oddly enough it works fine for me if I try it locally.
05-06-2021 19:58:24
05-06-2021 19:58:24
Found the root cause: the "checkpoint-1" folder is only created by process 0 when calling `model.save_pretrained`, so if process 1 is faster and arrives at `_save_checkpoint` first, it will not see this folder (there is almost no saving on process 1, so it goes through this method quickly). Will send a fix shortly.
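A minimal sketch of the kind of guard that fix needs (not the exact patch; the paths and state dict below are placeholders mirroring the traceback):

```python
import os

import torch

output_dir = "/tmp/tmp_example/checkpoint-1"  # placeholder checkpoint folder
local_rank = 1                                 # placeholder non-zero rank
rng_states = {"python": 0, "numpy": None}      # placeholder RNG state payload

# Safe even if rank 0 already created the folder; prevents the race described above.
os.makedirs(output_dir, exist_ok=True)
torch.save(rng_states, os.path.join(output_dir, f"rng_state_{local_rank}.pth"))
```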
transformers
11,617
closed
Adding TFWav2Vec2Model
Adds a TensorFlow version of Wav2Vec2 https://github.com/huggingface/transformers/issues/11603 # What does this PR do? <!-- Congratulations! You've made it this far! You're not quite done yet though. Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution. Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change. Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost. --> <!-- Remove if not applicable --> Fixes # (issue) ## Before submitting - [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case). - [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section? - [x] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link to it if that's the case. - [x] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/master/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/master/docs#writing-source-documentation). - [x] Did you write any new necessary tests? ## Who can review? @patrickvonplaten @LysandreJik Anyone in the community is free to review the PR once the tests have passed. Feel free to tag members/contributors who may be interested in your PR. <!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @ If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**. Please tag fewer than 3 people. Models: - albert, bert, xlm: @LysandreJik - blenderbot, bart, marian, pegasus, encoderdecoder, t5: @patrickvonplaten, @patil-suraj - longformer, reformer, transfoxl, xlnet: @patrickvonplaten - fsmt: @stas00 - funnel: @sgugger - gpt2: @patrickvonplaten, @LysandreJik - rag: @patrickvonplaten, @lhoestq - tensorflow: @LysandreJik Library: - benchmarks: @patrickvonplaten - deepspeed: @stas00 - ray/raytune: @richardliaw, @amogkam - text generation: @patrickvonplaten - tokenizers: @n1t0, @LysandreJik - trainer: @sgugger - pipelines: @LysandreJik Documentation: @sgugger HF projects: - datasets: [different repo](https://github.com/huggingface/datasets) - rust tokenizers: [different repo](https://github.com/huggingface/tokenizers) Examples: - maintained examples (not research project or legacy): @sgugger, @patil-suraj - research_projects/bert-loses-patience: @JetRunner - research_projects/distillation: @VictorSanh -->
05-06-2021 19:57:14
05-06-2021 19:57:14
Hi @will-rice, just an FYI, there's a CookieCutter template that allows you to automatically create a lot of files (modeling files, test files, documentation pages) for you. You can then edit those files. See [here](https://github.com/huggingface/transformers/tree/master/templates/adding_a_new_model) for more info. <|||||>@patrickvonplaten Some of the layers used in the Pytorch version only exist in TensorFlow [addons](https://www.tensorflow.org/addons) and I imagine that we do not want to introduce an additional dependency. Is this normally handled with custom versions of layers I need to be included with the model code? <|||||>> @patrickvonplaten Some of the layers used in the Pytorch version only exist in TensorFlow [addons](https://www.tensorflow.org/addons) and I imagine that we do not want to introduce an additional dependency. Is this normally handled with custom versions of layers I need to be included with the model code? Hey @will-rice, Is it the group norm layer? Maybe we can only implement the code for `config.feat_extract_norm == "layer"` in a first step and once this works we can see what to do next<|||||>@patrickvonplaten It's the group norm layer. I'm out until Saturday, but I will try your suggestion then. Thank you!<|||||>~@patrickvonplaten I'm almost finished with this. However, I'm having trouble getting the `Wav2Vec2PositionalConvEmbedding` to match. I may have it fixed by the time you see this, but if you have any tips on porting the PyTorch version to TensorFlow, I'd appreciate it.~ Edit: Figured it out. Just cleaning up and finishing tests.<|||||>@LysandreJik @patrickvonplaten I believe this is ready for a first look. The `run_tests_tf` workflow that is failing gives the error message `FAILED tests/test_hf_api.py::HfApiEndpointsTest::test_create_and_delete_repo`. The other one, `run_tests_torch` workflow seems to time out with the message: `Too long with no output (exceeded 10m0s): context deadline exceeded` I'm not completely sure how my code affects these tests. If there is something I missed please let me know.<|||||>This looks very nice! Rerunning the tests as the errors seem to be unrelated. <|||||>Hey @will-rice, Amazing work! This is not an easy model to implement, and it seems that we are already very close to merging this PR :-) The main thing to change is to make the tests independent of PyTorch so that they are actually run at `run_tests_tf` and not skipped because PT is not installed. I think the easiest would be to just "translate" the PT tests into a TF format. Also, we should try to test both encoder architecture the "normal" Wav2Vec2Encoder as well as the "robust" Wav2Vec2RobustEncoder` model - ideally we make to testing classes here just as in PyTorch. I did a small change to make your code work correctly for the model conversion for and uploaded TF weights to both: - https://huggingface.co/facebook/wav2vec2-base-960h - and https://huggingface.co/facebook/wav2vec2-large-960h-lv60-self (weights will be uploaded in 5min - sorry my internet connection is slow today) Could you maybe also add a hard-coded test for https://huggingface.co/facebook/wav2vec2-large-960h-lv60-self similar to the one in PyTorch and switch the "inofficial" weights in the tests to `facebook/wav2vec2-base-960h` and `facebook/wav2vec2-large-960h-lv60-self` ? <|||||>@patrickvonplaten When I use `"facebook/wav2vec2-large-960h-lv60-self"` most of the layer weights don't get loaded for the `feature_extractor`. 
I'm using the exact same model class though, so I would think the weight names wouldn't have changed. Did you have any issues converting the weights other than my `layer_norm` mistake that you fixed? This seems like it would be related to why `TFWav2Vec2PositionalConvEmbedding` only matches when loaded directly from the pytorch layer weights.<|||||>``` tf_layer = tf_model.wav2vec2.encoder.pos_conv_embed pt_layer = pt_model.wav2vec2.encoder.pos_conv_embed tf_weight_v, tf_weight_g, tf_bias = tf_layer.conv.weights for layer, conv in zip(tf_layer.weights, tf_layer.conv.weights): tf.assert_equal(layer, conv) pt_weight_v = pt_layer.conv.weight_v.detach().numpy().transpose(2, 1, 0) pt_weight_g = pt_layer.conv.weight_g.detach().numpy().transpose(2, 1, 0) pt_bias = pt_layer.conv.bias.detach().numpy() tf_layer.set_weights([pt_weight_v, pt_weight_g, pt_bias]) assert np.allclose(tf_weight_v, pt_weight_v) assert np.allclose(tf_weight_g, pt_weight_g) assert np.allclose(tf_bias, pt_bias) ``` This is what I have to do to get the final outputs to pass `assert_near`. I was able to fix this in my uploaded weights by doing the above. I don't think this should be necessary, but I'm having trouble tracking down exactly where the problem occurs. It looks like the weights for this layer aren't getting translated to TensorFlow well. Or the weights in the Pytorch model get modified by weight norm after being loaded.<|||||>Ah yeah the conversion for conv_norm in the pos_embeddings layer might be a bit problematic there indeed. It should show up with a failing: ``` tests/test_modeling_tf_wav2vec2.py::TFWav2Vec2ModelTest::test_pt_tf_model_equivalence ``` test. This might be a bit difficult to debug, actually. I think it'd be best to first refactor the TF tests to not include any PyTorch code and add test for the "robust" TFWav2Vec2 model as well. Once the "normal" TF tests all pass, we can take a look at the conversion afterwards together :-) <|||||>@patrickvonplaten I pushed what I have. I had a lot of issues with the behavior of the scatter operation between the two frameworks. The Pytorch version seems to perform intuitively, but the TensorFlow ops have some differences that I don't completely understand. It may be a conceptual deficiency on my part so if you have any suggestions, I would greatly appreciate them.<|||||>@patrickvonplaten Thank you so much for all of the debugging. Looking at the changes, I've learned a lot and will be sure apply this in the future.<|||||>Thanks for all the work @will-rice! Happy that the commit diffs help a bit! The final changes were super tricky and very specific to how we convert weights between PyTorch and Tensorflow ;-)<|||||>Hey! I just reviewed and it looks good. Also, don't worry - the tensorflow `scatter` ops are quite counterintuitive; in one case I avoided them by building a tf.sparse() matrix and then densifying it, which was probably not the most efficient but it did work! What problems were you encountering with the scatter updates? If you can identify specific problems I can help you write a function that gets the desired behaviour, but only if you think it's necessary.<|||||>Test failure is unrelated<|||||>Merging - awesome work @will-rice. Feel free to tweet about the addition of TFWav2Vec2 - I'm sure the community would be interested in hearing about it :-)<|||||>Hi @will-rice 👋 Thanks for adding this model all these months ago! 
I realise it's been quite a while, but I was wondering if you were able to answer two questions I have about the design decisions in this port: 1. Additional arguments For `TFWav2Vec2ForCTC`, the signature for `call` has some additional arguments which weren't in the equivalent PyTorch `forward` method. For example, `input_embeds` [is included here](https://github.com/huggingface/transformers/blob/d438eee030398e084ecf42b24a7453bcd7764d36/src/transformers/models/wav2vec2/modeling_tf_wav2vec2.py#L1547) but [not in the PT model](https://github.com/huggingface/transformers/blob/d438eee030398e084ecf42b24a7453bcd7764d36/src/transformers/models/wav2vec2/modeling_wav2vec2.py#L1413). This argument also doesn't appear to be used once passed to the main layer. Is there a reason for including these? 2. Missing arguments There are also some arguments which are in the function signature for `forward` in the PyTorch model e.g. `mask_time_indices` [here](https://github.com/huggingface/transformers/blob/d438eee030398e084ecf42b24a7453bcd7764d36/src/transformers/models/wav2vec2/modeling_wav2vec2.py#LL1009C2-L1009C2), which are taken from `kwargs` for the [TF model here](https://github.com/huggingface/transformers/blob/d438eee030398e084ecf42b24a7453bcd7764d36/src/transformers/models/wav2vec2/modeling_tf_wav2vec2.py#L1221). Is there a reason for only passing these in through kwargs? <|||||>@amyeroberts I honestly do not remember why I did it like that. It looks like `input_embeds` should be removed and then `mask_time_indices` added as an argument to `call`.<|||||>OK, no worries. Thanks for the quick reply! :)
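On the scatter point raised in this thread, a small self-contained comparison (illustrative only, not taken from the model code) of the sparse-tensor workaround against TF's direct scatter op:

```python
import tensorflow as tf

indices = tf.constant([[0, 2], [1, 0]], dtype=tf.int64)  # (row, col) positions to set
updates = tf.constant([5.0, 7.0])
dense_shape = [2, 4]

# Workaround mentioned above: build a sparse tensor, then densify it.
via_sparse = tf.sparse.to_dense(tf.sparse.reorder(tf.sparse.SparseTensor(indices, updates, dense_shape)))

# Direct op: scatter the updates into a zero tensor.
via_scatter = tf.tensor_scatter_nd_update(tf.zeros(dense_shape), indices, updates)

tf.debugging.assert_near(via_sparse, via_scatter)  # both give the same dense result
```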
transformers
11,616
closed
Wrong default value of the "ignore_index" argument in CrossEntropyLoss for loss calculation in models' forward method
Hi guys! You calculate loss in models forward() methods as follows: `loss_fct = CrossEntropyLoss()` for example: [BertForMaskedLM::forward](https://github.com/huggingface/transformers/blob/master/src/transformers/models/bert/modeling_bert.py#L1328) [BartForConditionalGeneration:forward](https://github.com/huggingface/transformers/blob/master/src/transformers/models/bart/modeling_bart.py#L1305) that means you don't take into account labels ids = -100. However this is something one usually doesn't expect, because usually pad_id != -100. In my opinion DEFAULT behavior should be: ## Expected default behavior `loss_fct = CrossEntropyLoss(ignore_index=self.config.pad_token_id)` that is ignore deposits from paddings in loss. At least this is what tensorflow [tutorial](https://www.tensorflow.org/tutorials/text/transformer#loss_and_metrics) is talking about. I'm aware about [note](https://github.com/huggingface/transformers/blob/master/src/transformers/models/bart/modeling_bart.py#L1271-L1272) in documentation and possible work around may be `labels[labels == model.config.pad_token_id] = -100` but it could be additional not obvious step for ordinary user I'm afraid. ### Who can help - @LysandreJik - @patrickvonplaten Thanks!
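For concreteness, a toy sketch of the two equivalent options discussed here (the shapes and pad id are made up):

```python
import torch
import torch.nn as nn

pad_token_id = 0                          # e.g. model.config.pad_token_id
labels = torch.tensor([[5, 8, 2, 0, 0]])  # toy label ids, last two positions are padding
logits = torch.randn(1, 5, 10)            # toy (batch, seq_len, vocab_size) logits

# Option 1: tell the loss to skip the pad id directly.
loss_a = nn.CrossEntropyLoss(ignore_index=pad_token_id)(logits.view(-1, 10), labels.view(-1))

# Option 2: keep the default ignore_index (-100) and remap the labels, as the docs describe.
remapped = labels.clone()
remapped[remapped == pad_token_id] = -100
loss_b = nn.CrossEntropyLoss()(logits.view(-1, 10), remapped.view(-1))

assert torch.isclose(loss_a, loss_b)
```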
05-06-2021 19:18:04
05-06-2021 19:18:04
@LysandreJik @sgugger - we could add a warning that fires if one hasn't set `-100` in the labels, but has padding tokens instead - what do you think? <|||||>Would it not hurt performance to do the check at each forward?<|||||>My opinion on this is that the documentation explicitly states how the `labels` should be created and which value is ignored in the loss, so I'm fine with leaving it like this. Also `-100` is the default ignored index in the `CrossEntropyLoss` PyTorch object so it doesn't seem too arcane of a choice. We could throw a warning the first time it's detected (and not on subsequent calls) if it's really important.<|||||>Thank you for consideration I suspect a lot of people using your library out of the box (usual situation in production) will miss this step (replacing padding ids by -100), at least because it is absent in examples (as far as I know) and will always get wrong (a slightly bigger) loss :-) But this is my opinion and I could be mistaken of course.<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread. Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored.
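A rough sketch of the "warn once" idea floated in this thread (a hypothetical helper, not library code; running the check only until it fires keeps the per-forward cost negligible):

```python
import warnings

import torch

_warned_pad_in_labels = False

def maybe_warn_pad_in_labels(labels: torch.Tensor, pad_token_id) -> None:
    """Warn the first time labels contain the pad token id instead of -100."""
    global _warned_pad_in_labels
    if _warned_pad_in_labels or pad_token_id is None or pad_token_id == -100:
        return
    if (labels == pad_token_id).any():
        warnings.warn(
            "`labels` contain the pad token id; those positions are NOT ignored by the "
            "default loss. Replace them with -100 if that is not intended."
        )
        _warned_pad_in_labels = True
```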
transformers
11,615
closed
[Lazy init] Fix edge cases
This PR fixes the flaky CircleCI failures in tests such as `tests/test_modeling_xlnet.py::XLNetModelTest::test_save_load_fast_init_to_base`. Luckily, the test caught those edge cases ;-)
05-06-2021 18:07:02
05-06-2021 18:07:02
transformers
11,614
closed
Vectorized Numpy to Torch based Functions for SpecAugment
# What does this PR do? <!-- Congratulations! You've made it this far! You're not quite done yet though. Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution. Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change. Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost. --> <!-- Remove if not applicable --> Fixes #10459 ## Before submitting - [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case). - [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section? - [x] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link to it if that's the case. - [ ] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/master/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/master/docs#writing-source-documentation). - [ ] Did you write any new necessary tests? ## Who can review? Anyone in the community is free to review the PR once the tests have passed. Feel free to tag members/contributors who may be interested in your PR. @patrickvonplaten <!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @ If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**. Please tag fewer than 3 people. Models: - albert, bert, xlm: @LysandreJik - blenderbot, bart, marian, pegasus, encoderdecoder, t5: @patrickvonplaten, @patil-suraj - longformer, reformer, transfoxl, xlnet: @patrickvonplaten - fsmt: @stas00 - funnel: @sgugger - gpt2: @patrickvonplaten, @LysandreJik - rag: @patrickvonplaten, @lhoestq - tensorflow: @LysandreJik Library: - benchmarks: @patrickvonplaten - deepspeed: @stas00 - ray/raytune: @richardliaw, @amogkam - text generation: @patrickvonplaten - tokenizers: @n1t0, @LysandreJik - trainer: @sgugger - pipelines: @LysandreJik Documentation: @sgugger HF projects: - datasets: [different repo](https://github.com/huggingface/datasets) - rust tokenizers: [different repo](https://github.com/huggingface/tokenizers) Examples: - maintained examples (not research project or legacy): @sgugger, @patil-suraj - research_projects/bert-loses-patience: @JetRunner - research_projects/distillation: @VictorSanh -->
05-06-2021 14:52:46
05-06-2021 14:52:46
Following are the average test results for the `_compute_mask_indices` function when ran 100 times. Each X.1 subtest case is calculated with `attention_mask = None` and each X.2 subtest case is calculated with `attention_mask` calculated with the following code: ``` attention_mask = torch.ones((batch_size, sequence_length), device=torch_device, dtype=torch.long) attention_mask[:, -sequence_length // 2 :] = 0 ``` 1) Test - 100 times batch_size = 4 sequence_length = 60 mask_prob = 0.5 mask_length = 1 Test **1.1** - Result - seconds New Code GPU: 0.002872414588928223 New Code CPU: 0.0006633639335632324 Old Code: 0.0003826594352722168 Test **1.2** - Result - seconds New Code GPU: 0.002973439693450928 New Code CPU: 0.0006422805786132813 Old Code: 0.0004153728485107422 2) Test - 100 times batch_size = 100 sequence_length = 60 mask_prob = 0.5 mask_length = 1 Test **2.1** - Result - seconds New Code GPU: 0.0663988971710205 New Code CPU: 0.014422652721405029 Old Code: 0.008053600788116455 Test **2.2** - Result - seconds New Code GPU: 0.06568058252334595 New Code CPU: 0.01404146671295166 Old Code: 0.008796172142028809 3) Test - 100 times batch_size = 1000 sequence_length = 60 mask_prob = 0.5 mask_length = 1 Test **3.1** - Result - seconds New Code GPU: 0.6623778533935547 New Code CPU: 0.14311392545700075 Old Code: 0.08917582988739013 Test **3.2** - Result - seconds New Code GPU: 0.6566315603256225 New Code CPU: 0.13569485664367675 Old Code: 0.08646429538726806 4) Test - 100 times batch_size = 4 sequence_length = 1000 mask_prob = 0.5 mask_length = 1 Test **4.1** - Result - seconds New Code GPU: 0.0031879472732543944 New Code CPU: 0.0013749027252197266 Old Code: 0.00248842716217041 Test **4.2** - Result - seconds New Code GPU: 0.0031322765350341795 New Code CPU: 0.0010571050643920898 Old Code: 0.0015622496604919434 5) Test - 100 times batch_size = 4 sequence_length = 60 mask_prob = 0.5 mask_length = 4 Test **5.1** - Result - seconds New Code GPU: 0.003424525260925293 New Code CPU: 0.0008220672607421875 Old Code: 0.0003489851951599121 Test **5.2** - Result - seconds New Code GPU: 0.0034962940216064454 New Code CPU: 0.0007469034194946289 Old Code: 0.0003824186325073242 6) Test - 100 times batch_size = 4 sequence_length = 1000 mask_prob = 0.5 mask_length = 4 Test **6.1** - Result - seconds New Code GPU: 0.003502027988433838 New Code CPU: 0.0014672994613647461 Old Code: 0.0017711663246154786 Test **6.2** - Result - seconds New Code GPU: 0.0034971165657043455 New Code CPU: 0.0011277437210083009 Old Code: 0.0011361241340637207 7) Test - 100 times batch_size = 128 sequence_length = 1000 mask_prob = 0.5 mask_length = 4 Test **7.1** - Result - seconds New Code GPU: 0.10527128219604492 New Code CPU: 0.04762232780456543 Old Code: 0.052808206081390384 Test **7.2** - Result - seconds New Code GPU: 0.1032623028755188 New Code CPU: 0.03513101100921631 Old Code: 0.03523270606994629<|||||>Hey @01-vyom, Great job - thanks a lot for tackling this issue! One question regarding the benchmarking it looks like CPU is faster than GPU -> is this actually the case? This would not be very good as it means that performing this computation on GPU would slow training down. Also the statistics about the "old code" does it refer to the old code being run on CPU or GPU? <|||||>> Hey @01-vyom, > > Great job - thanks a lot for tackling this issue! > > One question regarding the benchmarking it looks like CPU is faster than GPU -> is this actually the case? 
This would not be very good as it means that performing this computation on GPU would slow training down. Also the statistics about the "old code" does it refer to the old code being run on CPU or GPU? While running with smaller size inputs i.e. inputs with lesser batch size and mask length, this is the case. But, as we increase our mask length and batch size, this implementation is much efficient. Also, while using GPU, I think it is ineffcient because the tensors are being transfered back and forth from CPU and GPU. Old-code being run on CPU as it was only using numpy.<|||||>> While running with smaller size inputs i.e. inputs with lesser batch size and mask length, this is the case. But, as we increase our mash length and batch size, this implementation is much efficient. Also, while using GPU, I think it is ineffcient because the tensors are being transfered back and forth from CPU and GPU. Old-code being run on CPU as it was only using numpy. Let me run some benchmarks on my side to see :-) Ideally we don't have to run anything on CPU <|||||>I have made all the changes. @patrickvonplaten any updates on the benchmarks?<|||||>Hey @01-vyom, I couldn't see a big speed-up really when running on GPU -> I think the goal should really be to completely get rid of the for loop. I implemented a simpler version here: https://github.com/huggingface/transformers/pull/11764 which I'm benchmarking now. Will let you know!<|||||>Hey @01-vyom, Sorry I merged the PR now since it was more or less ready. I'm sorry to not have used your PR here! I'm sure we'll find other cool possible contributions for Wav2Vec2 though soon - will ping you if I find something interesting :-) <|||||>No problem. Learned a lot from this issue.
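For reference, a rough timing harness of the kind these averages imply (a sketch; the commented call assumes a `compute_mask_indices` implementation taking the shape, probability and mask length used in the tests above):

```python
import time

import torch

def bench(fn, n=100, cuda=False):
    """Return average seconds per call over n runs, synchronizing around CUDA work."""
    if cuda:
        torch.cuda.synchronize()
    start = time.perf_counter()
    for _ in range(n):
        fn()
    if cuda:
        torch.cuda.synchronize()
    return (time.perf_counter() - start) / n

# e.g. bench(lambda: compute_mask_indices((4, 60), mask_prob=0.5, mask_length=1), n=100)
```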
transformers
11,613
closed
Re-styling in seq2seq attention
# What does this PR do? After commenting on the same things in multiple new models, I looked around and found that everyone was copying code I don't like from `BartAttention`. So I'm fixing the source to avoid seeing it in the future :-)
05-06-2021 14:26:07
05-06-2021 14:26:07
Tests should be fixed now as well: https://github.com/huggingface/transformers/pull/11615
transformers
11,612
closed
Error when using Adafactor without a learning rate
Hi, I get these strange errors when I use the Adafactor. This code will result in this (expected) error: ``` optimizer = Adafactor(model.parameters(), scale_parameter=True, relative_step=True, warmup_init=True, lr=1e-4) ``` > ValueError: Cannot combine manual `lr` and `relative_step=True` options however, if I do not set a manual learn rate I get a different error. Btw: This code is recommended in the [documentation](https://huggingface.co/transformers/main_classes/optimizer_schedules.html?highlight=others%20reported%20following%20combination%20work%20well). ``` optimizer = Adafactor(model.parameters(), scale_parameter=True, relative_step=True, warmup_init=True, lr=None) # same for optimizer = Adafactor(model.parameters(), scale_parameter=True, relative_step=True, warmup_init=True) ``` will return this error > TypeError: unsupported operand type(s) for *: 'NoneType' and 'float' ## Environment info <!-- You can run the command `transformers-cli env` and copy-and-paste its output below. Don't forget to fill out the missing fields in that output! --> - `transformers` version: 4.5.1 - Platform: Linux - Python version: 3.7.1 - PyTorch version (GPU?): 1.8.0+cu111 and 1.8.1+cu111 - Tensorflow version (GPU?): - - Using GPU in script?: no - Using distributed or parallel set-up in script?: no ### Who can help Trainer: @sgugger
05-06-2021 11:45:46
05-06-2021 11:45:46
This was added by @jsrozner and @stas00 in #10526, so pinging them here.<|||||>Thank you @sgugger for the feedback. I install the latest transformers version from source using: ```pip install git+https://github.com/huggingface/transformers``` and set the recommended parameters from the patch: optimizer = Adafactor(model.parameters(), scale_parameter=False, relative_step=True, warmup_init=True, lr=None) > TypeError: unsupported operand type(s) for *: 'NoneType' and 'float' However, the error message remains the same. Can you give me a hint where I can address this issue? For reference this is the code that I am using: ```python model = AutoModelForTokenClassification.from_pretrained(model_checkpoint, num_labels=len(label_2_id)) args = TrainingArguments( output_dir=f"models/{run_name}/checkpoints", run_name=run_name, evaluation_strategy = "epoch", per_device_train_batch_size=batch_size, per_device_eval_batch_size=batch_size, gradient_accumulation_steps=1, num_train_epochs=2, report_to=["tensorboard"], logging_dir='runs/'+run_name, logging_first_step=True, logging_steps=100, save_steps= 10000, save_total_limit=10, seed=16, fp16=True ) optimizer = Adafactor(model.parameters(), scale_parameter=False, relative_step=True, warmup_init=True, lr=None) lrs = get_constant_schedule_with_warmup(optimizer,100) data_collator = DataCollatorForTokenClassification(tokenizer) trainer = Trainer( model, args, train_dataset=tokenized_dataset_train, eval_dataset=tokenized_dataset_val, data_collator=data_collator, tokenizer=tokenizer, compute_metrics=compute_metrics_sklearn, optimizers=(optimizer,lrs) ) ``` <|||||>@oliverguhr, please always post a full traceback for errors. It's impossible otherwise to know where the error came from, please refer to https://github.com/huggingface/transformers/blob/master/ISSUES.md#the-github-issues item (3). The actual recommendation is: ``` Adafactor(model.parameters(), scale_parameter=False, relative_step=False, warmup_init=False, lr=1e-3) ``` The alternative one I saved because others said it worked well for them. Once you post the full traceback then we can see why it fails. Thank you! p.s. colab notebook reproducing the problem is even better <|||||>Thanks for looking at this @stas00 Here is a traceback and this [is a colab notebook to reproduce the issue](https://colab.research.google.com/drive/1DFsmXObv8JVvGRbX8uc6_sfPfbAnKKMx?usp=sharing). Hint: Depending on setting ``` lrs = get_constant_schedule_with_warmup(optimizer,100) ``` or ``` lrs = none ``` ` get_constant_schedule_with_warmup` fails directly or `trainer.train()`. 
``` --------------------------------------------------------------------------- TypeError Traceback (most recent call last) <ipython-input-18-031302865887> in <module>() 12 13 optimizer = Adafactor(model.parameters(), scale_parameter=False, relative_step=True, warmup_init=True, lr=None) ---> 14 lrs = get_constant_schedule_with_warmup(optimizer,100) 15 16 training_args = TrainingArguments( 5 frames /usr/local/lib/python3.7/dist-packages/transformers/optimization.py in get_constant_schedule_with_warmup(optimizer, num_warmup_steps, last_epoch) 67 return 1.0 68 ---> 69 return LambdaLR(optimizer, lr_lambda, last_epoch=last_epoch) 70 71 /usr/local/lib/python3.7/dist-packages/torch/optim/lr_scheduler.py in __init__(self, optimizer, lr_lambda, last_epoch, verbose) 201 len(optimizer.param_groups), len(lr_lambda))) 202 self.lr_lambdas = list(lr_lambda) --> 203 super(LambdaLR, self).__init__(optimizer, last_epoch, verbose) 204 205 def state_dict(self): /usr/local/lib/python3.7/dist-packages/torch/optim/lr_scheduler.py in __init__(self, optimizer, last_epoch, verbose) 75 self.verbose = verbose 76 ---> 77 self.step() 78 79 def state_dict(self): /usr/local/lib/python3.7/dist-packages/torch/optim/lr_scheduler.py in step(self, epoch) 150 if epoch is None: 151 self.last_epoch += 1 --> 152 values = self.get_lr() 153 else: 154 warnings.warn(EPOCH_DEPRECATION_WARNING, UserWarning) /usr/local/lib/python3.7/dist-packages/torch/optim/lr_scheduler.py in get_lr(self) 249 250 return [base_lr * lmbda(self.last_epoch) --> 251 for lmbda, base_lr in zip(self.lr_lambdas, self.base_lrs)] 252 253 /usr/local/lib/python3.7/dist-packages/torch/optim/lr_scheduler.py in <listcomp>(.0) 249 250 return [base_lr * lmbda(self.last_epoch) --> 251 for lmbda, base_lr in zip(self.lr_lambdas, self.base_lrs)] 252 253 TypeError: unsupported operand type(s) for *: 'NoneType' and 'float' ```<|||||>Thank you for creating the reproducible colab notebook, @oliverguhr - that's very helpful. So when you use `Adafactor(model.parameters(), scale_parameter=False, relative_step=False, warmup_init=False, lr=None)` the learning rate scheduling is performed internally by the optimizer and so there is no need for a scheduler. But I see that barebones HF Trainer doesn't support training w/o a scheduler. So we aren't quite supporting this option then and perhaps we should. Regardless of the outcome we should document the conclusion of this thread in the Adafactor docstring. So here are a few ideas meanwhile: 1. Create a dummy scheduler that always returns a fixed lr, example: ``` from torch.optim.lr_scheduler import LambdaLR class DummyLR(LambdaLR): def __init__(self, optimizer, lr=0): for group in optimizer.param_groups: group['initial_lr'] = lr lr_lambda = lambda x: lr super().__init__(optimizer, lr_lambda) for group in optimizer.param_groups: del group['initial_lr'] def get_dummy_schedule(optimizer): return DummyLR(optimizer) lrs = get_dummy_schedule(optimizer) ``` Let me know if this unblocks you a bit. 2. Alternatively, if you want to be able to access lr outside of optimizer, here a proxy scheduler that pulls the LR out of the optimizer at run time, rather than feeding the optimizer. 
``` from torch.optim.lr_scheduler import LambdaLR class AdafactorSchedule(LambdaLR): def __init__(self, optimizer, initial_lr=0): for group in optimizer.param_groups: group['initial_lr'] = initial_lr lr_lambda = lambda x: initial_lr super().__init__(optimizer, lr_lambda) for group in optimizer.param_groups: del group['initial_lr'] def get_lr(self): opt = self.optimizer lrs = [opt._get_lr(group, opt.state[group["params"][0]]) for group in opt.param_groups if group["params"][0].grad is not None] if len(lrs) == 0: lrs = self.base_lrs # if called before stepping # print(f"lr={lrs}") return lrs def get_adafactor_schedule(optimizer): return AdafactorSchedule(optimizer) optimizer = Adafactor(model.parameters(), scale_parameter=False, relative_step=True, warmup_init=True, lr=None) scheduler = get_adafactor_schedule(optimizer) ``` clearly this is a quick hack, but it seems to work. it returns `initial_lr` during startup and the actual `lr` during stepping (disable the debug print to see). As you can see I had to hack `initial_lr` into it since optimizer doesn't have any lr until it starts stepping. If this is desired than we could add `Adafactor.get_scheduler()` which would return the above. Perhaps it needs to assert if `lr != None` I haven't looked that close. If you like the 2nd solution feel free to clean it up, and making a PR, perhaps getting rid of `LambdaLR` to not need the `group['initial_lr']` hack, going straight for the ` _LRScheduler` super class. 3. Make HF `Trainer` support `scheduler=None` - that would be hard for the loggers and other places that expect being able to get the value for lr. I think a clean version of the 2nd solution is probably more suitable. <|||||>So @sgugger suggests the 3rd option. For that will have to track down all the cases where the scheduler is used and condition those on `scheduler != None` Not sure about back-compat though since we auto-create a scheduler if it's not passed: https://github.com/huggingface/transformers/blob/33fd83bc01e781633dad58a9de6c91591e2fc786/src/transformers/trainer.py#L817-L829 <|||||>Another proposition from @sgugger is: > this can be handled with the `lr_scheduler_type` argument: we could add an acceptable value "no" that would leave the scheduler at `None`.<|||||>@stas00 Sorry for the late reply and thanks for your feedback. The DummyLR worked for me, but this parameter combination did not improve my results, maybe these parameter settings are kind of an edge case. Regarding the 3rd option: Would it possible to check if ```lr_scheduler is None``` and ```optimizer is Adafactor``` and then auto-create an instance of the "AdafactorScheduler"? This could eliminate the need to check all the other parts of the code that rely on the LR value from the opimzier. <|||||>> Regarding the 3rd option: Would it possible to check if lr_scheduler is None and optimizer is Adafactor and then auto-create an instance of the "AdafactorScheduler"? This could eliminate the need to check all the other parts of the code that rely on the LR value from the opimzier. @sgugger, what is your take - `AdafactorScheduler` is the hack I posted here: https://github.com/huggingface/transformers/issues/11612#issuecomment-833888170 I'm happy with either way, but let's resolve it one way or another.<|||||>Mmm, the `lr_scheduler` is always None by default. 
Can we add a value `"adafactor"` that would use that `AdafactorScheduler`?<|||||>@oliverguhr, we went with the `AdafactorSchedule` - please check that it works for you https://github.com/huggingface/transformers/pull/12123
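A usage sketch of what was eventually merged in #12123 (the toy linear module stands in for a real model; the pair can then be handed to the Trainer via its `optimizers` argument):

```python
import torch
from transformers.optimization import Adafactor, AdafactorSchedule

model = torch.nn.Linear(4, 2)  # stand-in for a real model

optimizer = Adafactor(
    model.parameters(), scale_parameter=True, relative_step=True, warmup_init=True, lr=None
)
lr_scheduler = AdafactorSchedule(optimizer)  # proxy scheduler so Trainer logging can read the lr

# Trainer(model=..., args=..., optimizers=(optimizer, lr_scheduler))
```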
transformers
11,611
closed
Fix typo in docstring
# What does this PR do? ## Before submitting - [x] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case). - [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section? - [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link to it if that's the case. - [ ] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/master/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/master/docs#writing-source-documentation). - [ ] Did you write any new necessary tests?
05-06-2021 11:17:41
05-06-2021 11:17:41
transformers
11,610
closed
In "Question Answering" separate context from question
# 🚀 Feature request ## Motivation Often one may have many questions about the same content. In the current implementation, the function will be called multiple times with the same context but with different questions, which costs a lot of computation time. ## Idea I was wondering if there's a way to speed up runtime by splitting the context and question, instead of a single question_answering() call that takes both arguments. This way the parts of the computation that depend only on the context could be calculated once and re-used many times when asking several questions about the same content.
05-06-2021 09:59:08
05-06-2021 09:59:08
This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread. Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored.<|||||>Any update on this? <|||||>Currently there is no way to handle this in the pipelines. You should be able to achieve this pretty simply by using a model/tokenizer without the pipeline. See here for an example of how to do that: https://huggingface.co/transformers/task_summary.html#extractive-question-answering<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread. Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored.<|||||>@LysandreJik In the link you had provided it looks like all the computation is being done in the loop that iterates over the questions, so no work is being done only once for the context outside the loop. Why do we have to tokenize the same context for every question?<|||||>@yoeldk did you figure this out?<|||||>> @yoeldk did you figure this out? Not really, but I'm now dealing with other things so I decided to leave it
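For context, here is a minimal sketch of the model/tokenizer route pointed to above (the checkpoint name is only an example). Note that it still re-encodes the context together with every question, which is exactly the redundancy this issue asks about:

```python
import torch
from transformers import AutoTokenizer, AutoModelForQuestionAnswering

model_name = "distilbert-base-cased-distilled-squad"  # example QA checkpoint
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForQuestionAnswering.from_pretrained(model_name)

context = "The Eiffel Tower was completed in 1889 and is located in Paris."
questions = ["When was the Eiffel Tower completed?", "Where is the Eiffel Tower located?"]

for question in questions:
    # each question is encoded jointly with the context, so the context is re-tokenized every time
    inputs = tokenizer(question, context, return_tensors="pt", truncation=True)
    with torch.no_grad():
        outputs = model(**inputs)
    start = torch.argmax(outputs.start_logits)
    end = torch.argmax(outputs.end_logits) + 1
    print(question, "->", tokenizer.decode(inputs["input_ids"][0][start:end]))
```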
transformers
11,609
closed
[RAG] ModuleNotFoundError: No module named 'git' when finetuning the model
## Environment info - `transformers` version: 4.6.0.dev0 - Platform: Linux-3.10.0-1127.10.1.el7.x86_64-x86_64-with-debian-buster-sid - Python version: 3.6.13 - PyTorch version (GPU?): 1.8.1+cu102 (False) - Tensorflow version (GPU?): 2.4.1 (False) - Using GPU in script?: no - Using distributed or parallel set-up in script?: no ### Who can help - rag: @patrickvonplaten, @lhoestq ## Information I get an import error ModuleNotFoundError: No module named 'git' when running the RAG finetuning script. I am not sure if the module `git` is somewhere in the project or must be installed. I am using a virtual environment built with `pip install -e '[.dev]'` from `/transformers` and `pip install -r requirements.txt` from /rag. The problem arises when using: [RAG] finetune_rag.py I'm using the original `finetune_rag.sh` script (here below) but without GPU and less iterations. I also modified the `finetune_ray.py` path in the command below from `examples/rag/finetune_rag.py` to just `finetune_rag.py` since there is no `examples/rag` directory in the repo. I call the script from the `transformers/examples/research_projects/rag/` directory. export PYTHONPATH="../":"${PYTHONPATH}" python finetune_rag.py \ --data_dir mydatadir \ --output_dir myoutdir \ --model_name_or_path facebook/rag-sequence-base \ --model_type rag_sequence \ --fp16 \ --profile \ --do_train \ --do_predict \ --n_val -1 \ --train_batch_size 8 \ --eval_batch_size 1 \ --max_source_length 128 \ --max_target_length 25 \ --val_max_target_length 25 \ --test_max_target_length 25 \ --label_smoothing 0.1 \ --dropout 0.1 \ --attention_dropout 0.1 \ --weight_decay 0.001 \ --adam_epsilon 1e-08 \ --max_grad_norm 0.1 \ --lr_scheduler polynomial \ --learning_rate 3e-05 \ --num_train_epochs 10 \ --warmup_steps 500 \ --gradient_accumulation_steps 1 \` The tasks I am working on is: just to run the simplest finetuning exercise possible to get familiar with the haggingface library. ## To reproduce Steps to reproduce the behavior: 1. `cd transformers/examples/research_projects/rag/` 2. `bash finetune_rag.sh` (the same code shown above) Error: `Traceback (most recent call last): File "finetune_rag.py", line 40, in <module> from callbacks_rag import ( # noqa: E402 # isort:skipq File "/nlu/users/giovanni_bonetta/transformers/examples/research_projects/rag/callbacks_rag.py", line 11, in <module> from utils_rag import save_json File "/nlu/users/giovanni_bonetta/transformers/examples/research_projects/rag/utils_rag.py", line 14, in <module> import git ModuleNotFoundError: No module named 'git' ` ## Expected behavior have the finetuning script recognising `git` and running.
05-06-2021 09:55:45
05-06-2021 09:55:45
Indeed you need to install `pip install GitPython` to make it work I created a PR to add it to the requirements.txt of the RAG example<|||||>Thanks @lhoestq . I confirm that it works after installing GitPython.
transformers
11,608
closed
Is it correct to load weights from task A to train task B
> I want to use 2 tasks of modelling including (a) Causal language modelling & > (b) Masked language modelling for training my newly added tokens My pseudo-code is below ``` ## add new tokens to the tokenizer model_name = "vinai/phobert-large" model = AutoModelForCausalLM.from_pretrained(model_name) tokenizer = AutoTokenizer.from_pretrained(model_name) with open("tokenizer_vocab.txt", "r") as wr_file: new_tokens = wr_file.read().splitlines() _________________ ## Train the (a) task - Causal language modeling added_tokens = tokenizer.add_tokens(new_tokens) model.resize_token_embeddings(len(tokenizer)) ## train the model following the Hugging Face notebook https://colab.research.google.com/github/huggingface/notebooks/blob/master/examples/language_modeling.ipynb#scrollTo=JAscNNUD3l-P ## save the model model.save_pretrained('weights_tokenizer/') tokenizer.save_pretrained('weights_tokenizer/') ___________________ ## Train the (b) task - Masked language modeling model = AutoModelForMaskedLM.from_pretrained("weights_tokenizer/") tokenizer = AutoTokenizer.from_pretrained("weights_tokenizer/") ``` _My question here is whether it is correct to train the (b) task with the weights from (a) (because I think I can somehow further enrich the tokenizer)._ Or is there any solution that lets me train my tokenizer on both of these two tasks? And could I then use the weights (from training on both tasks above) to train my model on a task (c)? I do appreciate your time and sharing.
05-06-2021 07:42:49
05-06-2021 07:42:49
Hello, thanks for opening an issue! We try to keep the GitHub issues for bugs/feature requests. Could you ask your question on the [forum](https://discuss.huggingface.co) instead? Thanks!<|||||>Thanks for your reply @LysandreJik Ly
transformers
11,607
closed
Fix Python version
cc @lhoestq
05-06-2021 06:50:06
05-06-2021 06:50:06
transformers
11,606
closed
Added Feature: Prefix decoding for wav2vec2 models
# What does this PR do? Added the code for prefix decoding for wav2vec2 based models. <!-- Congratulations! You've made it this far! You're not quite done yet though. Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution. Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change. Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost. --> <!-- Remove if not applicable --> Fixes #11283 ## Before submitting - [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case). - [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section? - [x] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link to it if that's the case. - [ ] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/master/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/master/docs#writing-source-documentation). - [ ] Did you write any new necessary tests? ## Who can review? @patrickvonplaten @patil-suraj <!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @ If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**. Please tag fewer than 3 people. Models: - albert, bert, xlm: @LysandreJik - blenderbot, bart, marian, pegasus, encoderdecoder, t5: @patrickvonplaten, @patil-suraj - longformer, reformer, transfoxl, xlnet: @patrickvonplaten - fsmt: @stas00 - funnel: @sgugger - gpt2: @patrickvonplaten, @LysandreJik - rag: @patrickvonplaten, @lhoestq - tensorflow: @LysandreJik Library: - benchmarks: @patrickvonplaten - deepspeed: @stas00 - ray/raytune: @richardliaw, @amogkam - text generation: @patrickvonplaten - tokenizers: @n1t0, @LysandreJik - trainer: @sgugger - pipelines: @LysandreJik Documentation: @sgugger HF projects: - datasets: [different repo](https://github.com/huggingface/datasets) - rust tokenizers: [different repo](https://github.com/huggingface/tokenizers) Examples: - maintained examples (not research project or legacy): @sgugger, @patil-suraj - research_projects/bert-loses-patience: @JetRunner - research_projects/distillation: @VictorSanh -->
05-06-2021 06:21:17
05-06-2021 06:21:17
- [x] Currently the code supports prefix decoding without LM. I am still working to integrate the kenlm version. Problem faced currently: I created a custom kenlm and tried to run the code, but it stops without throwing any error at line `results = self.decoder.decode(emissions_ptr, T, N)` I am currently trying to fix it. (RESOLVED) - [x] Shall I create a .sh or .txt file to guide on how to install flashlight dependencies?<|||||>Performance: Model:- `facebook/wav2vec2-base-960h` Dataset:- `timit_asr`, `clean`, `test[:5%]` Viterbi Decoding:- `wer: 0.115` KenLM Decoding:- `wer: 0.098` <|||||>Wuhuhu! This is an amazing contribution @deepang17 - Super exciting to merge this notebook :-) And yes, it would be great if you could add a section to the README.md that explains how to use your script + maybe with some results (using Prefix decoding vs. not using it on *e.g.* Timit_asr and/or Librispeech evaluation - kinda like you already did above). I'm also very happy to help you run some evals! <|||||>Thank you for the appreciation. I will do the required changes to `README.md` and push a commit soon.<|||||>> * [x] Currently the code supports prefix decoding without LM. I am still working to integrate the kenlm version. > > Problem faced currently: I created a custom kenlm and tried to run the code, but it stops without throwing any error at line `results = self.decoder.decode(emissions_ptr, T, N)` I am currently trying to fix it. (RESOLVED) > > * [x] Shall I create a .sh or .txt file to guide on how to install flashlight dependencies? @deepang17 Did you pushed that fix? I've tried your code and it is crushing at the "self.decoder.decode". What was your fix? What is the status of this PR? <|||||>> > * [x] Currently the code supports prefix decoding without LM. I am still working to integrate the kenlm version. > > > > Problem faced currently: I created a custom kenlm and tried to run the code, but it stops without throwing any error at line `results = self.decoder.decode(emissions_ptr, T, N)` I am currently trying to fix it. (RESOLVED) > > > > * [x] Shall I create a .sh or .txt file to guide on how to install flashlight dependencies? > > @deepang17 > Did you pushed that fix? I've tried your code and it is crushing at the "self.decoder.decode". What was your fix? > > What is the status of this PR? You can fix it by replacing `!cmake .. -DCMAKE_BUILD_TYPE=Release -DKENLM_MAX_ORDER=20 -DCMAKE_POSITION_INDEPENDENT_CODE=ON` to `!cmake ..`<|||||>> > > * [x] Currently the code supports prefix decoding without LM. I am still working to integrate the kenlm version. > > > > > > Problem faced currently: I created a custom kenlm and tried to run the code, but it stops without throwing any error at line `results = self.decoder.decode(emissions_ptr, T, N)` I am currently trying to fix it. (RESOLVED) > > > > > > * [x] Shall I create a .sh or .txt file to guide on how to install flashlight dependencies? > > > > > > @deepang17 > > Did you pushed that fix? I've tried your code and it is crushing at the "self.decoder.decode". What was your fix? > > What is the status of this PR? > > You can fix it by replacing `!cmake .. -DCMAKE_BUILD_TYPE=Release -DKENLM_MAX_ORDER=20 -DCMAKE_POSITION_INDEPENDENT_CODE=ON` to `!cmake ..` Can you please publish a Google Colab or a bash script to do the installation? I could't figure out where to do the change you suggested in the build, I'v used the Google Colab example from flashlight.<|||||>@deepang17 Thank you for your amazing work! 
I made Google Colab to reproduce this pull request. @samuelazran You can check this. https://colab.research.google.com/drive/1HHEBS3I4biQ8ZDyfJDtHi4E4onOtYe46?usp=sharing Viterbi decoding works well, but KenLM decoding has the following error. ``` File "run_wav2vec2_eval_with_lm.py", line 292, in <module> main() File "run_wav2vec2_eval_with_lm.py", line 281, in main results = selected_dataset.map(map_to_result) File "/usr/local/lib/python3.7/dist-packages/datasets/arrow_dataset.py", line 1606, in map desc=desc, File "/usr/local/lib/python3.7/dist-packages/datasets/arrow_dataset.py", line 176, in wrapper out: Union["Dataset", "DatasetDict"] = func(self, *args, **kwargs) File "/usr/local/lib/python3.7/dist-packages/datasets/fingerprint.py", line 397, in wrapper out = func(self, *args, **kwargs) File "/usr/local/lib/python3.7/dist-packages/datasets/arrow_dataset.py", line 1911, in _map_single example = apply_function_on_filtered_inputs(example, i, offset=offset) File "/usr/local/lib/python3.7/dist-packages/datasets/arrow_dataset.py", line 1826, in apply_function_on_filtered_inputs function(*fn_args, effective_indices, **fn_kwargs) if with_indices else function(*fn_args, **fn_kwargs) File "run_wav2vec2_eval_with_lm.py", line 265, in map_to_result decoder = W2lKenLMDecoder(eval_args, target_dictionary) File "run_wav2vec2_eval_with_lm.py", line 201, in __init__ self.lm = KenLM(args.kenlm_model, self.word_dict) TypeError: __init__(): incompatible constructor arguments. The following argument types are supported: 1. flashlight.lib.text.flashlight_lib_text_decoder.KenLM(path: str, usr_token_dict: fl::lib::text::Dictionary) Invoked with: None, <flashlight.lib.text.flashlight_lib_text_dictionary.Dictionary object at 0x7fe0ef7294b0> ``` @deepang17 Do you know this error? It exactly gives as an argument of flashlight.lib.text.flashlight_lib_text_decoder.KenLM the dict obtained from flashlight.lib.text.dictionary.create_word_dict.<|||||>@deepang17 - do you have updates regarding the README.md script? :-) I can take over the PR by next week otherwise!<|||||>Hello @patrickvonplaten, Sorry for the delay. I was occupied due to some personal issues. I am on the verge of completing the README.md. I will commit the updated README soon.<|||||>@deepang17 Any updates?<|||||>@deepang17 Any updates? <|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread. Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored.<|||||>This PR seems to be stuck since quite some time now. Is anyone interested in finishing / testing this PR? Might be better to start fresh otherwise with a blog post ) colab that explains how to make a complete ASR end-to-end system - cc @anton-l <|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread. Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored.<|||||>@patrickvonplaten I'm in! I've searched this topic but it seems there is no official implementation on this topic, and It would be so nice to add this feature. If this feature is still in the backlog, I would be happy to contribute. 
Looking forward to hearing back from you!<|||||>Hey @hbasafa, I'm now working on this topic full time. We will most likely foster a closer collaboration between [pyctcdecode](https://github.com/kensho-technologies/pyctcdecode) and Transformers. [Here](https://github.com/patrickvonplaten/Wav2Vec2_PyCTCDecode) is a GitHub repo that shows how to use `pyctcdecode` with Wav2Vec2 for LM-supported decoding. It works quite well with KenLM.<|||||>Nice one! I will check it out. As I was in a hurry, I've already used [this code](https://github.com/hbasafa/py-ctc-decode), which can easily be installed via pip. A code sample is also provided [here](https://github.com/hbasafa/wav2vec_decode). Now I am also focusing on adding other decoding strategies there. Thank you for sharing! @patrickvonplaten<|||||>> Hey @hbasafa, > > I'm now working on this topic full time. > > We will most likely foster a closer collaboration between [pyctcdecode](https://github.com/kensho-technologies/pyctcdecode) and Transformers. [Here](https://github.com/patrickvonplaten/Wav2Vec2_PyCTCDecode) is a GitHub repo that shows how to use `pyctcdecode` with Wav2Vec2 for LM-supported decoding. It works quite well with KenLM. Hi @patrickvonplaten - this is great news. Where is the best place to follow your progress?<|||||>This PR: https://github.com/huggingface/transformers/pull/14339 It all depends a bit on how fast we can merge a `load_from_hf_hub` function into `pyctcdecode`
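As an illustration of the `pyctcdecode` route mentioned above (a hedged sketch rather than code from this PR; the checkpoint is an example and the KenLM path is a placeholder):

```python
import torch
from pyctcdecode import build_ctcdecoder
from transformers import Wav2Vec2ForCTC, Wav2Vec2Processor

processor = Wav2Vec2Processor.from_pretrained("facebook/wav2vec2-base-960h")
model = Wav2Vec2ForCTC.from_pretrained("facebook/wav2vec2-base-960h")

# vocabulary ordered by token id, as the decoder expects
vocab = sorted(processor.tokenizer.get_vocab().items(), key=lambda item: item[1])
labels = [token for token, _ in vocab]

# placeholder path to a KenLM arpa/binary model
decoder = build_ctcdecoder(labels, kenlm_model_path="path/to/lm.arpa")

def transcribe(speech_array, sampling_rate=16_000):
    inputs = processor(speech_array, sampling_rate=sampling_rate, return_tensors="pt")
    with torch.no_grad():
        logits = model(inputs.input_values).logits[0].cpu().numpy()
    # beam search with the LM over the per-frame logits
    return decoder.decode(logits)
```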
transformers
11,605
closed
fix typo in command
# What does this PR do? <!-- Congratulations! You've made it this far! You're not quite done yet though. Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution. Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change. Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost. --> ## Before submitting - [x] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case). - [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section? - [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link to it if that's the case. - [ ] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/master/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/master/docs#writing-source-documentation). - [ ] Did you write any new necessary tests? <!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @ If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**. Please tag fewer than 3 people. Models: - albert, bert, xlm: @LysandreJik - blenderbot, bart, marian, pegasus, encoderdecoder, t5: @patrickvonplaten, @patil-suraj - longformer, reformer, transfoxl, xlnet: @patrickvonplaten - fsmt: @stas00 - funnel: @sgugger - gpt2: @patrickvonplaten, @LysandreJik - rag: @patrickvonplaten, @lhoestq - tensorflow: @LysandreJik Library: - benchmarks: @patrickvonplaten - deepspeed: @stas00 - ray/raytune: @richardliaw, @amogkam - text generation: @patrickvonplaten - tokenizers: @n1t0, @LysandreJik - trainer: @sgugger - pipelines: @LysandreJik Documentation: @sgugger HF projects: - datasets: [different repo](https://github.com/huggingface/datasets) - rust tokenizers: [different repo](https://github.com/huggingface/tokenizers) Examples: - maintained examples (not research project or legacy): @sgugger, @patil-suraj - research_projects/bert-loses-patience: @JetRunner - research_projects/distillation: @VictorSanh -->
05-06-2021 04:29:43
05-06-2021 04:29:43
transformers
11,604
closed
Model type to AutoModelForQuestionAnswering incorrect
`model = AutoModelForQuestionAnswering.from_pretrained("dbmdz/german-gpt2") ValueError: Unrecognized configuration class <class 'transformers.models.gpt2.configuration_gpt2.GPT2Config'> for this kind of AutoModel: AutoModelForQuestionAnswering. Model type should be one of BigBirdConfig, ConvBertConfig, LEDConfig, DistilBertConfig, AlbertConfig, CamembertConfig, BartConfig, MBartConfig, LongformerConfig, XLMRobertaConfig, RobertaConfig, SqueezeBertConfig, BertConfig, XLNetConfig, FlaubertConfig, MobileBertConfig, XLMConfig, ElectraConfig, ReformerConfig, FunnelConfig, LxmertConfig, MPNetConfig, DebertaConfig, DebertaV2Config, IBertConfig.` How can I convert the model type? If that's not possible, I need suggestions for how to solve this problem ;(
05-06-2021 03:40:04
05-06-2021 03:40:04
`AutoModelForQuestionAnswering` expects a model with a span classification head on top, GPT2 is an auto-regressive language model and does not have `ForQuestionAnswering` class defined for it. So it can't be used with `AutoModelForQuestionAnswering`<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread. Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored.
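To make the reply above concrete, a small hedged sketch (the checkpoint names are only examples): GPT-2 checkpoints load through the causal-LM auto class, while `AutoModelForQuestionAnswering` needs a checkpoint from one of the architectures listed in the error.

```python
from transformers import AutoModelForCausalLM, AutoModelForQuestionAnswering, AutoTokenizer

# GPT-2 style checkpoints work with the causal LM auto class
gpt2_model = AutoModelForCausalLM.from_pretrained("dbmdz/german-gpt2")

# extractive QA needs an architecture with a span-classification head,
# e.g. a BERT/ELECTRA/RoBERTa-style checkpoint fine-tuned on SQuAD-like data
qa_tokenizer = AutoTokenizer.from_pretrained("deepset/roberta-base-squad2")
qa_model = AutoModelForQuestionAnswering.from_pretrained("deepset/roberta-base-squad2")
```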
transformers
11,603
closed
TFWav2Vec2Model
# 🌟 New model addition I want a Tensorflow version of Wav2Vec2 and it's something I'm willing to work on. I just wanted to make sure no one else was working on it. I looked at the issues, PR's, and source code. The only mention of TFWav2Vec2Model is in the current docs so just wanted to double-check no one was working on this before I dive in. > vocab_size (int, optional, defaults to 32) – Vocabulary size of the Wav2Vec2 model. Defines the number of different tokens that can be represented by the inputs_ids passed when calling Wav2Vec2Model or TFWav2Vec2Model. ## Model description Tensorflow port of Wav2Vec2 <!-- Important information --> ## Open source status Tensorflow version of existing Transformers Model * [x] the model implementation is available: (give details) * [x] the model weights are available: (give details) * [ ] who are the authors: (mention them, if possible by @gh-username)
05-06-2021 01:29:15
05-06-2021 01:29:15
Sure, there's a [general conversion script](https://github.com/huggingface/transformers/blob/master/src/transformers/convert_pytorch_checkpoint_to_tf2.py) that allows you to convert the weights from a PyTorch model to its TF counterpart. You only need to make sure that all the attributes of the TF model correspond to the names of the PyTorch model. cc @patrickvonplaten <|||||>@will-rice, adding TFWav2Vec2Model would be a great addition indeed -> I'm more than happy to help you on the PR
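As a hedged sketch of how a finished port is typically exercised (assuming a `TFWav2Vec2Model` class ends up in the library, which is what this issue proposes), the cross-framework loader can pull the PyTorch weights directly:

```python
from transformers import TFWav2Vec2Model  # only available once the TF port exists

# load the published PyTorch checkpoint into the TensorFlow architecture
tf_model = TFWav2Vec2Model.from_pretrained("facebook/wav2vec2-base-960h", from_pt=True)
tf_model.save_pretrained("./wav2vec2-base-960h-tf")
```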
transformers
11,602
closed
Seq2SeqTrainer not working for a list of inputs: TypeError: can't convert np.ndarray of type numpy.object_
## Information <!-- Important information --> Model I am using Bert2Bert, training using Seq2SeqTrainer on Google Colab. ## Details I am trying to use a Bert2Bert model on multiple-choice qa dataset using Seq2SeqTrainer. My whole code is given in the [following]( https://colab.research.google.com/drive/1bqwHa2guKVGDn_8cBehI3BkAl1Xz_7-O?usp=sharing) I convert question-choices-label as follow: > input: (Question, option1), (Question, option2), ... target: label I have generated the tokens for input using tokenizer.batch_encode_plus(.) method as: ``` max_length = 128 def convert_to_commonsense_qa_features(example_batch): num_examples = len(example_batch["question"]) num_choices = len(example_batch["choices"][0]["text"]) features = {} for example_i in range(num_examples): choices_inputs = tokenizer.batch_encode_plus( list(zip( [example_batch["question"][example_i]] * num_choices, example_batch["choices"][example_i]["text"], )), max_length=max_length, pad_to_max_length=True, ) for k, v in choices_inputs.items(): if k not in features: features[k] = [] features[k].append(v) labels2id = {char: i for i, char in enumerate("ABCDE")} # Dummy answers for test if example_batch["answerKey"][0]: features["labels"] = [labels2id[ans] for ans in example_batch["answerKey"]] else: features["labels"] = [0] * num_examples return features convert_func_dict = { "commonsense_qa": convert_to_commonsense_qa_features, } ``` `for` the input, when I print input_ids, it is given the following form (ndarray): ``` array([array([ 101, 1996, 2237, 4580, 2001, 1037, 2524, 5271, 2005, 1996, 2613, 4263, 1010, 2009, 2001, 2157, 2279, 2000, 1037, 2152, 4125, 2054, 1029, 102, 9282, 2458, 102, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0]), array([ 101, 1996, 2237, 4580, 2001, 1037, 2524, 5271, 2005, 1996, 2613, 4263, 1010, 2009, 2001, 2157, 2279, 2000, 1037, 2152, 4125, 2054, 1029, 102, 4545, 2311, 102, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0]), ``` And I train the model as: ``` training_args = Seq2SeqTrainingArguments( output_dir='./models/tpu', per_device_train_batch_size=8, per_device_eval_batch_size=8, predict_with_generate=True, do_train=True, do_eval=True, logging_steps=100, # set to 1000 for full training warmup_steps=2000, # set to 2000 for full training overwrite_output_dir=True, num_train_epochs = 10, save_steps = 12180, fp16=True, ) # instantiate trainer trainer = Seq2SeqTrainer( model=bert2bert, tokenizer=tokenizer, args=training_args, #compute_metrics=compute_metrics, train_dataset=features_dict["commonsense_qa"]["train"], eval_dataset=features_dict["commonsense_qa"]["validation"], ) trainer.train() But it produces: ``` > `TypeError: can't convert np.ndarray of type numpy.object_. The only supported types are: float64, float32, float16, complex64, complex128, int64, int32, int16, int8, uint8, and bool. I could not fix it. Any ideas? ## Checklist - [ ] I have read the migration guide in the readme. 
([pytorch-transformers](https://github.com/huggingface/transformers#migrating-from-pytorch-transformers-to-transformers); [pytorch-pretrained-bert](https://github.com/huggingface/transformers#migrating-from-pytorch-pretrained-bert-to-transformers)) - [ ] I checked if a related official extension example runs on my machine.
05-06-2021 00:43:00
05-06-2021 00:43:00
Could you please post the full error? From what I can see here, the dataset should return tensors, not NumPy arrays. This could be the issue.
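One hedged debugging direction, consistent with the reply above and reusing the variable names from the notebook quoted in the issue: make every feature column rectangular (truncate as well as pad) and ask the dataset to return torch tensors rather than nested NumPy object arrays.

```python
# inside convert_to_commonsense_qa_features: pad *and* truncate so every choice
# has exactly max_length ids, which keeps the arrays rectangular
choices_inputs = tokenizer.batch_encode_plus(
    list(zip([example_batch["question"][example_i]] * num_choices,
             example_batch["choices"][example_i]["text"])),
    max_length=max_length,
    padding="max_length",
    truncation=True,
)

# after .map(...): hand the Trainer tensors instead of lists/NumPy arrays
columns = ["input_ids", "attention_mask", "labels"]
features_dict["commonsense_qa"]["train"].set_format(type="torch", columns=columns)
features_dict["commonsense_qa"]["validation"].set_format(type="torch", columns=columns)
```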
transformers
11,601
closed
Using `TFGPT2LMHeadModel.generate` with tf.distribute.TPUStrategy and tf.function
## High-level description I was wondering what were the steps that one is supposed to use to be able to do call `strategy.run` on `model.generate` with tf.distribute.TPUStrategy. Right now, I get an error message telling me that TPUStrategy doesn't support pure eager functions, so I tried to put `model.__call__` and `model.generate` in a `tf.function`, in such a way: ```python model.__call__ = tf.function(model.__call__, experimental_relax_shapes=True) model.generate = tf.function(model.generate, experimental_relax_shapes=True) ``` but it doesn't work, when trying to jit either of just one of these or both (or neither). ## Environment info - `transformers` version: 4.5.1 - Platform: Ubuntu 20.04 LTS - Python version: 3.8.5 - PyTorch version (GPU?): N/A - Tensorflow version (GPU?): Tensorflow 2.5.0rc2 - Using GPU in script?: Using TPUs - Using distributed or parallel set-up in script?: tf.distribute.TPUStrategy with 8 TPUs (single TPU host) ### Who can help @Rocketknight1, maybe @LysandreJik ## To reproduce Steps to reproduce the behavior, run this very small script: ```python import sys import transformers import tensorflow as tf TPU_NAME = "" # This is dependant on your setup. This is not the problem. Works perfectly with other things. MODEL_KEY = "distilgpt2" EXAMPLE_SENTENCE = "I like pizza, because it's " cr = tf.distribute.cluster_resolver.TPUClusterResolver.connect(tpu=TPU_NAME) strategy = tf.distribute.TPUStrategy(cr) with strategy.scope(): model = transformers.TFGPT2LMHeadModel.from_pretrained(MODEL_KEY) tokenizer = transformers.AutoTokenizer.from_pretrained(MODEL_KEY) tokens = tokenizer.encode(EXAMPLE_SENTENCE, return_tensors="tf") # This doesn't work (See *Error Message 1*, in the next section) strategy.run(model.generate, args=(tokens,)) # So we try to jit the __call__ function: model.__call__ = tf.function( model.__call__, experimental_relax_shapes=True, ) # The following also doesn't work (See *Error Message 2*, in the next section) strategy.run(model.generate, args=(tokens,)) # We try jitting generate model.generate = tf.function( model.generate, experimental_relax_shapes=True, ) # Once more, it doesn't work (See *Error Message 3*, in the next section) # Only jitting `model.generate` gives the same error. strategy.run(model.generate, args=(tokens,)) ``` ## Error messages: ### Error message 1: No method is jitted. ``` --------------------------------------------------------------------------- NotImplementedError Traceback (most recent call last) <ipython-input-9-44e0c5fc714c> in <module> ----> 1 strategy.run(model.generate, args=(tokens,)) ~/.local/lib/python3.8/site-packages/tensorflow/python/distribute/tpu_strategy.py in run(self, fn, args, kwargs, options) 392 objects, or `Tensor`s (for example, if running on a single replica). 393 """ --> 394 validate_run_function(fn) 395 396 fn, args, kwargs = _maybe_partial_apply_variables(fn, args, kwargs) ~/.local/lib/python3.8/site-packages/tensorflow/python/distribute/tpu_strategy.py in validate_run_function(fn) 98 and not isinstance(fn, function.ConcreteFunction) \ 99 and not (callable(fn) and isinstance(fn.__call__, def_function.Function)): --> 100 raise NotImplementedError( 101 "TPUStrategy.run(fn, ...) does not support pure eager " 102 "execution. please make sure the function passed into " NotImplementedError: TPUStrategy.run(fn, ...) does not support pure eager execution. please make sure the function passed into `strategy.run` is a `tf.function` or `strategy.run` is called inside a `tf.function` if eager behavior is enabled. 
``` ### Error message 2: Only `model.__call__` is jitted. Same error as #1 ``` --------------------------------------------------------------------------- NotImplementedError Traceback (most recent call last) <ipython-input-9-44e0c5fc714c> in <module> ----> 1 strategy.run(model.generate, args=(tokens,)) ~/.local/lib/python3.8/site-packages/tensorflow/python/distribute/tpu_strategy.py in run(self, fn, args, kwargs, options) 392 objects, or `Tensor`s (for example, if running on a single replica). 393 """ --> 394 validate_run_function(fn) 395 396 fn, args, kwargs = _maybe_partial_apply_variables(fn, args, kwargs) ~/.local/lib/python3.8/site-packages/tensorflow/python/distribute/tpu_strategy.py in validate_run_function(fn) 98 and not isinstance(fn, function.ConcreteFunction) \ 99 and not (callable(fn) and isinstance(fn.__call__, def_function.Function)): --> 100 raise NotImplementedError( 101 "TPUStrategy.run(fn, ...) does not support pure eager " 102 "execution. please make sure the function passed into " NotImplementedError: TPUStrategy.run(fn, ...) does not support pure eager execution. please make sure the function passed into `strategy.run` is a `tf.function` or `strategy.run` is called inside a `tf.function` if eager behavior is enabled. ``` ### Error message 3: This is when either just `model.generate` is jitted, or both are jitted. ``` WARNING:tensorflow:AutoGraph could not transform <bound method Socket.send of <zmq.sugar.socket.Socket object at 0x7fa6913fd640>> and will run it as-is. Please report this to the TensorFlow team. When filing the bug, set the verbosity to 10 (on Linux, `export AUTOGRAPH_VERBOSITY=10`) and attach the full output. Cause: module, class, method, function, traceback, frame, or code object was expected, got cython_function_or_method To silence this warning, decorate the function with @tf.autograph.experimental.do_not_convert WARNING: AutoGraph could not transform <bound method Socket.send of <zmq.sugar.socket.Socket object at 0x7fa6913fd640>> and will run it as-is. Please report this to the TensorFlow team. When filing the bug, set the verbosity to 10 (on Linux, `export AUTOGRAPH_VERBOSITY=10`) and attach the full output. Cause: module, class, method, function, traceback, frame, or code object was expected, got cython_function_or_method To silence this warning, decorate the function with @tf.autograph.experimental.do_not_convert Setting `pad_token_id` to 50256 (first `eos_token_id`) to generate sequence WARNING:tensorflow:The parameters `output_attentions`, `output_hidden_states` and `use_cache` cannot be updated when calling a model.They have to be set to True/False in the config object (i.e.: `config=XConfig.from_pretrained('name', output_attentions=True)`). WARNING:tensorflow:The parameter `return_dict` cannot be set in graph mode and will always be set to `True`. WARNING:tensorflow:From /home/jules/.local/lib/python3.8/site-packages/tensorflow/python/ops/array_ops.py:5043: calling gather (from tensorflow.python.ops.array_ops) with validate_indices is deprecated and will be removed in a future version. Instructions for updating: The `validate_indices` argument has no effect. Indices are always validated on CPU and never validated on GPU. 
--------------------------------------------------------------------------- OperatorNotAllowedInGraphError Traceback (most recent call last) <ipython-input-9-44e0c5fc714c> in <module> ----> 1 strategy.run(model.generate, args=(tokens,)) ~/.local/lib/python3.8/site-packages/tensorflow/python/distribute/tpu_strategy.py in run(self, fn, args, kwargs, options) 400 fn = autograph.tf_convert(fn, autograph_ctx.control_status_ctx()) 401 options = options or distribute_lib.RunOptions() --> 402 return self.extended.tpu_run(fn, args, kwargs, options) 403 404 def experimental_assign_to_logical_device(self, tensor, logical_device_id): ~/.local/lib/python3.8/site-packages/tensorflow/python/distribute/tpu_strategy.py in tpu_run(self, fn, args, kwargs, options) 1426 def tpu_run(self, fn, args, kwargs, options=None): 1427 func = self._tpu_function_creator(fn, options) -> 1428 return func(args, kwargs) 1429 1430 def _tpu_function_creator(self, fn, options): ~/.local/lib/python3.8/site-packages/tensorflow/python/eager/def_function.py in __call__(self, *args, **kwds) 887 888 with OptionalXlaContext(self._jit_compile): --> 889 result = self._call(*args, **kwds) 890 891 new_tracing_count = self.experimental_get_tracing_count() ~/.local/lib/python3.8/site-packages/tensorflow/python/eager/def_function.py in _call(self, *args, **kwds) 931 # This is the first call of __call__, so we have to initialize. 932 initializers = [] --> 933 self._initialize(args, kwds, add_initializers_to=initializers) 934 finally: 935 # At this point we know that the initialization is complete (or less ~/.local/lib/python3.8/site-packages/tensorflow/python/eager/def_function.py in _initialize(self, args, kwds, add_initializers_to) 761 self._graph_deleter = FunctionDeleter(self._lifted_initializer_graph) 762 self._concrete_stateful_fn = ( --> 763 self._stateful_fn._get_concrete_function_internal_garbage_collected( # pylint: disable=protected-access 764 *args, **kwds)) 765 ~/.local/lib/python3.8/site-packages/tensorflow/python/eager/function.py in _get_concrete_function_internal_garbage_collected(self, *args, **kwargs) 3048 args, kwargs = None, None 3049 with self._lock: -> 3050 graph_function, _ = self._maybe_define_function(args, kwargs) 3051 return graph_function 3052 ~/.local/lib/python3.8/site-packages/tensorflow/python/eager/function.py in _maybe_define_function(self, args, kwargs) 3442 3443 self._function_cache.missed.add(call_context_key) -> 3444 graph_function = self._create_graph_function(args, kwargs) 3445 self._function_cache.primary[cache_key] = graph_function 3446 ~/.local/lib/python3.8/site-packages/tensorflow/python/eager/function.py in _create_graph_function(self, args, kwargs, override_flat_arg_shapes) 3277 arg_names = base_arg_names + missing_arg_names 3278 graph_function = ConcreteFunction( -> 3279 func_graph_module.func_graph_from_py_func( 3280 self._name, 3281 self._python_function, ~/.local/lib/python3.8/site-packages/tensorflow/python/framework/func_graph.py in func_graph_from_py_func(name, python_func, args, kwargs, signature, func_graph, autograph, autograph_options, add_control_dependencies, arg_names, op_return_value, collections, capture_by_value, override_flat_arg_shapes) 997 _, original_func = tf_decorator.unwrap(python_func) 998 --> 999 func_outputs = python_func(*func_args, **func_kwargs) 1000 1001 # invariant: `func_outputs` contains only Tensors, CompositeTensors, ~/.local/lib/python3.8/site-packages/tensorflow/python/eager/def_function.py in wrapped_fn(*args, **kwds) 670 # the function a weak reference to 
itself to avoid a reference cycle. 671 with OptionalXlaContext(compile_with_xla): --> 672 out = weak_wrapped_fn().__wrapped__(*args, **kwds) 673 return out 674 ~/.local/lib/python3.8/site-packages/tensorflow/python/framework/func_graph.py in wrapper(*args, **kwargs) 984 except Exception as e: # pylint:disable=broad-except 985 if hasattr(e, "ag_error_metadata"): --> 986 raise e.ag_error_metadata.to_exception(e) 987 else: 988 raise OperatorNotAllowedInGraphError: in user code: /home/jules/.local/lib/python3.8/site-packages/tensorflow/python/distribute/tpu_strategy.py:1448 replicated_fn * result[0] = fn(*replica_args, **replica_kwargs) /home/jules/.local/lib/python3.8/site-packages/transformers/generation_tf_utils.py:399 generate * output = self._generate_no_beam_search( /home/jules/.local/lib/python3.8/site-packages/transformers/generation_tf_utils.py:455 _generate_no_beam_search * while cur_len < max_length: /home/jules/.local/lib/python3.8/site-packages/tensorflow/python/autograph/operators/control_flow.py:858 while_stmt _py_while_stmt(test, body, get_state, set_state, opts) /home/jules/.local/lib/python3.8/site-packages/tensorflow/python/autograph/operators/control_flow.py:952 _py_while_stmt while test(): /home/jules/.local/lib/python3.8/site-packages/tensorflow/python/framework/ops.py:900 __bool__ self._disallow_bool_casting() /home/jules/.local/lib/python3.8/site-packages/tensorflow/python/framework/ops.py:503 _disallow_bool_casting self._disallow_when_autograph_enabled( /home/jules/.local/lib/python3.8/site-packages/tensorflow/python/framework/ops.py:489 _disallow_when_autograph_enabled raise errors.OperatorNotAllowedInGraphError( OperatorNotAllowedInGraphError: using a `tf.Tensor` as a Python `bool` is not allowed: AutoGraph did convert this function. This might indicate you are trying to use an unsupported feature. ```
05-06-2021 00:12:08
05-06-2021 00:12:08
This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread. Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored.
transformers
11,600
closed
[consistent use] `F` vs. `nn.functional`
We use 3 different ways of doing the same: 1. `F.foo()` 2. `nn.functional.foo()` 3. `torch.nn.functional.foo()` and these could also be imported: 4. `from torch.nn.functional import foo; foo()` Asking others it appears that `F` is not quite liked, so it's 2, 3 or 4. 2 and 3 often lead to longer lines which autoformatter wraps, leading to 3 lines of code instead of 1 and which gives less readable code. So it seems that option 4 might be the best outcome. For 2, the global update would be easy: ``` find . -type d -name ".git" -prune -o -type f -exec perl -pi -e 's|from torch.nn import functional as F||' {} \; find . -type d -name ".git" -prune -o -type f -exec perl -pi -e 's|import torch.nn.functional as F||' {} \; find . -type d -name ".git" -prune -o -type f -exec perl -pi -e 's| F\.| nn.functional.|g' {} \; make fixup ``` For 4, it will take much more work, but can be semi-automated. @LysandreJik, @sgugger, @patrickvonplaten
05-05-2021 19:27:46
05-05-2021 19:27:46
I personally vote for 4, and if there is any risk of ambiguity, doing ``` from torch.nn.functional import foo as torch_foo ``` <|||||>2 or 4 for me, but I don't have strong feeling about this issue<|||||>2 looks good to me but no strong feelings either<|||||>I further analyzed the codebase and it looks like we use `nn.` almost exclusively, for all other cases outside of `nn.functional`, with an occasional `torch.nn.` So I'd say 2 is probably the most consistent way, unless we start explicitly importing `Parameter`, `Embedding`, `ModuleList` from `nn`. I know the additional "functional" string pushes some single lines into autoformatter's 3 lines mode. But for consistency I think sticking to just a single: ``` from torch import nn ``` is a goodness. @sgugger, will option 2 work for you?<|||||>If we stick to 2, then I will also normalize `s|torch.nn.foo|nn.foo|` calls unless you object. <|||||>That works for me.<|||||>Status: merged: - [x] src https://github.com/huggingface/transformers/pull/12124 - [x] examples https://github.com/huggingface/transformers/pull/12156 - [x] templates https://github.com/huggingface/transformers/pull/12153 - [x] tests https://github.com/huggingface/transformers/pull/12155 - [x] docs https://github.com/huggingface/transformers/pull/12161<|||||>just need to merge docs, but all is done here.
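For illustration, the convention option 2 settles on is a single canonical import with the functional call spelled out (this snippet is only an example, not code from the library):

```python
import torch
from torch import nn

def masked_softmax(scores: torch.Tensor, mask: torch.Tensor) -> torch.Tensor:
    # spell out the functional call instead of aliasing it as `F`
    scores = scores.masked_fill(mask == 0, float("-inf"))
    return nn.functional.softmax(scores, dim=-1)
```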
transformers
11,599
closed
Auto modelcard
# What does this PR do? This PR adds functionality in the Trainer to auto-generate model cards and some utilities to do the same without the Trainer if people are not using it. In passing, the old `ModelCard` class is deprecated (to be removed in v5). As an example [here](https://huggingface.co/sgugger/test-glue-mrpc) is a repo that is generated by the `run_glue` script with this new functionality, using the following command on a machine with 2 GPUs: ``` accelerate launch examples/pytorch/text-classification/run_glue.py \ --model_name_or_path bert-base-cased \ --task_name mrpc \ --do_train \ --do_eval \ --learning_rate 2e-5 \ --per_device_train_batch_size 16 \ --per_device_eval_batch_size 16 \ --evaluation_strategy epoch \ --logging_strategy epoch \ --weight_decay 1e-2 \ --output_dir ~/tmp/test-glue-mrpc \ --overwrite_output_dir \ --push_to_hub ``` I've only adjusted the glue example for now, will do the others once we have settled on an API.
05-05-2021 19:06:15
05-05-2021 19:06:15
transformers
11,598
closed
Add the ImageClassificationPipeline
This PR adds the `ImageClassificationPipeline`. It is tested on DeiT and ViT and should enable the inference API for these models. Since I encountered an issue with the `AutoExtractor`, I fixed it as seen with @sgugger (namely switched it from using names such as `"deit"` to using the configuration class as a key, similarly to what we do in tokenizers and the other auto classes. Please let me know if you would like me to split this PR into multiple PRs for simpler reviews, happy to do so. @NielsRogge, happy to have your review.
05-05-2021 18:36:13
05-05-2021 18:36:13
LGTM, thank you for adding this.<|||||>I should have addressed all comments. @Narsil, if you could review once again when you have time, would love your feedback on the updated tests and on the updated `__init__` method.
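A quick usage sketch of the new pipeline (the checkpoint and image path are placeholders):

```python
from transformers import pipeline

classifier = pipeline("image-classification", model="google/vit-base-patch16-224")
predictions = classifier("path/to/cat.jpg")  # list of {"label": ..., "score": ...} dicts
print(predictions[:3])
```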
transformers
11,597
closed
Is the "dummy inputs and standard lengths" implication, when using tracing for exporting a model, still true?
Hi, We're experimenting with exporting a CamemBERT model via tracing. We used a relatively short sentence as the sample input, then used the resulting traced model for inference on a sentence longer than the sample input. According to https://huggingface.co/transformers/serialization.html#dummy-inputs-and-standard-lengths, this should have raised an error. However, it seems to just work for us 🤔. Are we missing something, or is that section of the docs out of date, so the implication no longer applies? Thanks, Yair
05-05-2021 12:49:22
05-05-2021 12:49:22
This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread. Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored.
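For reference, a sketch of the experiment described above, assuming the standard TorchScript export flow (this is illustrative and not an answer to whether the docs section is out of date):

```python
import torch
from transformers import AutoTokenizer, AutoModel

tokenizer = AutoTokenizer.from_pretrained("camembert-base")
model = AutoModel.from_pretrained("camembert-base", torchscript=True)
model.eval()

# trace with a short sample sentence
short = tokenizer("Une phrase courte.", return_tensors="pt")
traced = torch.jit.trace(model, (short["input_ids"], short["attention_mask"]))

# run the traced model on a sentence longer than the tracing sample
longer = tokenizer(
    "Une phrase nettement plus longue que celle utilisée pour tracer le modèle.",
    return_tensors="pt",
)
with torch.no_grad():
    outputs = traced(longer["input_ids"], longer["attention_mask"])
```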
transformers
11,596
closed
fix head_mask for albert encoder part(`AlbertTransformer`)
# What does this PR do? <!-- Congratulations! You've made it this far! You're not quite done yet though. Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution. Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change. Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost. --> <!-- Remove if not applicable --> Fixes #8323 Fix a subtle bug for using a submodule `AlbertTransformer` in `AlbertModel`. ## Who can review? @LysandreJik
05-05-2021 12:47:14
05-05-2021 12:47:14
Great, thanks for the fix @baeseongsu ! Could you run the code quality tool to ensure the PR fits the quality requirements? ``` pip install -U -e .[quality] make fixup ```<|||||>@LysandreJik Sure 👍
transformers
11,595
closed
Accept tensorflow-rocm package when checking TF availability
When working [on AMD GPUs the TensorFlow package is called `tensorflow-rocm`](https://rocmdocs.amd.com/en/latest/Deep_learning/Deep-learning.html#tensorflow). This trivial PR just adds that to the list of accepted package names when checking for TensorFlow availability in `src/transformers/file_utils.py`. At least the basic benchmark `run_benchmark_tf.py` seems to work fine on AMD MI50 and MI100 GPUs.
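A hedged sketch of the kind of availability check being extended here; this is illustrative only, not the actual `file_utils.py` code:

```python
import importlib.util
from importlib import metadata

_tf_available = importlib.util.find_spec("tensorflow") is not None
_tf_version = None
if _tf_available:
    # the import name is always "tensorflow", but the installed distribution may be
    # tensorflow, tensorflow-cpu, tensorflow-gpu, tf-nightly or tensorflow-rocm
    for candidate in ("tensorflow", "tensorflow-cpu", "tensorflow-gpu", "tf-nightly", "tensorflow-rocm"):
        try:
            _tf_version = metadata.version(candidate)
            break
        except metadata.PackageNotFoundError:
            continue
    _tf_available = _tf_version is not None
```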
05-05-2021 12:42:12
05-05-2021 12:42:12
transformers
11,594
open
Distil BART for text simplification
# 🌟 New model addition ## Model description Not completed yet; I will post an update as soon as it is done. If anyone has any leads on this, please share. It would be a great help.
05-05-2021 09:49:51
05-05-2021 09:49:51
Can you provide a link to a paper, an implementation, and optionally weights, if possible?<|||||>Hello @NielsRogge, I am building the model on the basis of DistilBERT. I am still in the initial phase, but if you want to read more about it, please go through this link: https://huggingface.co/transformers/_modules/transformers/models/distilbert/modeling_tf_distilbert.html#TFDistilBertModel
transformers
11,593
closed
[WIP] HF style example flax version
# What does this PR do? <!-- Congratulations! You've made it this far! You're not quite done yet though. Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution. Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change. Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost. --> <!-- Remove if not applicable --> `run_flax_glue.py` is simplified and made more similar to `run_glue_no_trainer.py`. ## Before submitting - [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case). - [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section? - [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link to it if that's the case. - [ ] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/master/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/master/docs#writing-source-documentation). - [ ] Did you write any new necessary tests? ## Who can review? Anyone in the community is free to review the PR once the tests have passed. Feel free to tag members/contributors who may be interested in your PR. <!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @ If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**. Please tag fewer than 3 people. Models: - albert, bert, xlm: @LysandreJik - blenderbot, bart, marian, pegasus, encoderdecoder, t5: @patrickvonplaten, @patil-suraj - longformer, reformer, transfoxl, xlnet: @patrickvonplaten - fsmt: @stas00 - funnel: @sgugger - gpt2: @patrickvonplaten, @LysandreJik - rag: @patrickvonplaten, @lhoestq - tensorflow: @LysandreJik Library: - benchmarks: @patrickvonplaten - deepspeed: @stas00 - ray/raytune: @richardliaw, @amogkam - text generation: @patrickvonplaten - tokenizers: @n1t0, @LysandreJik - trainer: @sgugger - pipelines: @LysandreJik Documentation: @sgugger HF projects: - datasets: [different repo](https://github.com/huggingface/datasets) - rust tokenizers: [different repo](https://github.com/huggingface/tokenizers) Examples: - maintained examples (not research project or legacy): @sgugger, @patil-suraj - research_projects/bert-loses-patience: @JetRunner - research_projects/distillation: @VictorSanh -->
05-05-2021 09:44:46
05-05-2021 09:44:46
This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread. Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored.
transformers
11,592
closed
Cannot use multiple GPUs to finetune RAG using sample code with customized knowledge
## Environment info <!-- You can run the command `transformers-cli env` and copy-and-paste its output below. Don't forget to fill out the missing fields in that output! --> - `transformers` version: 4.5.1 - Platform: linux - Python version: 3.7 - PyTorch version (GPU?): torch1.8.1with cuda10.2 - Using GPU in script?: yes - Using distributed or parallel set-up in script?: yes @patrickvonplaten, @lhoestq ## Information Model I am using: RAG-sequence-base The problem arises when using: * the official example scripts: - using finetune_rag.sh(finetune_rag_ray.sh) at examples/research_projects/rag The tasks I am working on is: *my own task or dataset: A simple test input csv as mentioned in readme of finetune rag repo is used to create my own knowledge. [test.csv](https://github.com/huggingface/transformers/files/6426148/test.csv) ## To reproduce Steps to reproduce the behavior: 1. create a database using use_knowledge_database.py with any sample csv. 2. run finetune_rag.sh using this database, with multiple GPUs 3. stuck at loading knowledge database, not sure its about loading indexes or what. configs I use ``` --data_dir $DATA_DIR \ --output_dir $OUTPUT_DIR \ --model_name_or_path $MODEL_NAME_OR_PATH \ --model_type rag_sequence \ --fp16 \ --gpus 4 \ --profile \ --do_train \ --do_predict \ --n_val -1 \ --train_batch_size 8 \ --eval_batch_size 1 \ --max_source_length 128 \ --max_target_length 25 \ --val_max_target_length 25 \ --test_max_target_length 25 \ --label_smoothing 0.1 \ --dropout 0.1 \ --attention_dropout 0.1 \ --weight_decay 0.001 \ --adam_epsilon 1e-08 \ --max_grad_norm 0.1 \ --lr_scheduler polynomial \ --learning_rate 3e-05 \ --num_train_epochs 100 \ --warmup_steps 500 \ --gradient_accumulation_steps 1 \ --index_name custom \ --passages_path ../try/my_knowledge_dataset \ --index_path ../try/my_knowledge_dataset_hnsw_index.faiss ``` I'm using 4 Tesla T4's as my GPUs, with `faiss-cpu==1.6.3, dataset==1.0.1, pyarrow==0.17.1`, and switching to ray won't solve this problem either. ## Expected behavior finish loading index and proceed training
05-05-2021 09:38:24
05-05-2021 09:38:24
Hi ! Is there an error message ? Is there any CPU activity when it gets stuck (maybe the index is just being loaded) ?<|||||>I'm not getting any error message, and yes is there is CPU activity. But given the fact that my dataset is quite small(check test.csv), would it really that much time(>5mins) to load it? And by the way, where would the index by loaded it? I'm confused by the instruction to use faiss-cpu. Would gpu be better? Would be able to provide you with some screenshot of this issue later.<|||||>I notice for single-gpu training there is not such steps as initializing a retriever, and if I use ray, I would get stuck at here: ![image](https://user-images.githubusercontent.com/43310105/117157908-939a0400-adf1-11eb-952f-640b3986b40d.png) At this time by `nvidia-smi` ![image](https://user-images.githubusercontent.com/43310105/117158075-af050f00-adf1-11eb-85ba-c9bb97d52382.png) while by `top` ![image](https://user-images.githubusercontent.com/43310105/117158246-d52aaf00-adf1-11eb-93eb-5e4774b1afef.png) I'm not sure how to further illustrate this issue, is this because my CPU gets overloaded?<|||||>For such tasks CPU usage often means that FAISS (the indexing library) is doing something. Did you try interrupting the program to see if the stacktrace could help us locate at which line the code is stuck ?<|||||>In the beginning of this stuck, I can use ctrl+C to interrupt, which gives the following lines: ``` ^CTraceback (most recent call last): File "examples/research_projects/rag/finetune_rag.py", line 625, in <module> main(args) File "examples/research_projects/rag/finetune_rag.py", line 597, in main profiler=pl.profiler.AdvancedProfiler() if args.profile else None, File "/root/transformers-master/examples/research_projects/rag/lightning_base.py", line 389, in generic_train trainer.fit(model) File "/root/anaconda3/envs/huggingface/lib/python3.7/site-packages/pytorch_lightning/trainer/trainer.py", line 440, in fit results = self.accelerator_backend.train() File "/root/anaconda3/envs/huggingface/lib/python3.7/site-packages/pytorch_lightning/accelerators/ddp_accelerator.py", line 138, in train results = self.ddp_train(process_idx=self.task_idx, model=model) File "/root/anaconda3/envs/huggingface/lib/python3.7/site-packages/pytorch_lightning/accelerators/ddp_accelerator.py", line 249, in ddp_train self.model_to_device(model, process_idx) File "/root/anaconda3/envs/huggingface/lib/python3.7/site-packages/pytorch_lightning/accelerators/ddp_accelerator.py", line 178, in model_to_device model.cuda(self.trainer.root_gpu) File "/root/anaconda3/envs/huggingface/lib/python3.7/site-packages/pytorch_lightning/utilities/device_dtype_mixin.py", line 124, in cuda return super().cuda(device=device) File "/root/anaconda3/envs/huggingface/lib/python3.7/site-packages/torch/nn/modules/module.py", line 491, in cuda return self._apply(lambda t: t.cuda(device)) File "/root/anaconda3/envs/huggingface/lib/python3.7/site-packages/torch/nn/modules/module.py", line 387, in _apply module._apply(fn) File "/root/anaconda3/envs/huggingface/lib/python3.7/site-packages/torch/nn/modules/module.py", line 387, in _apply module._apply(fn) File "/root/anaconda3/envs/huggingface/lib/python3.7/site-packages/torch/nn/modules/module.py", line 387, in _apply module._apply(fn) [Previous line repeated 4 more times] File "/root/anaconda3/envs/huggingface/lib/python3.7/site-packages/torch/nn/modules/module.py", line 409, in _apply param_applied = fn(param) File 
"/root/anaconda3/envs/huggingface/lib/python3.7/site-packages/torch/nn/modules/module.py", line 491, in <lambda> return self._apply(lambda t: t.cuda(device)) KeyboardInterrupt Traceback (most recent call last): File "/root/transformers-master/examples/research_projects/rag/finetune_rag.py", line 625, in <module> main(args) File "/root/transformers-master/examples/research_projects/rag/finetune_rag.py", line 597, in main profiler=pl.profiler.AdvancedProfiler() if args.profile else None, File "/root/transformers-master/examples/research_projects/rag/lightning_base.py", line 389, in generic_train trainer.fit(model) File "/root/anaconda3/envs/huggingface/lib/python3.7/site-packages/pytorch_lightning/trainer/trainer.py", line 440, in fit results = self.accelerator_backend.train() File "/root/anaconda3/envs/huggingface/lib/python3.7/site-packages/pytorch_lightning/accelerators/ddp_accelerator.py", line 138, in train results = self.ddp_train(process_idx=self.task_idx, model=model) File "/root/anaconda3/envs/huggingface/lib/python3.7/site-packages/pytorch_lightning/accelerators/ddp_accelerator.py", line 249, in ddp_train self.model_to_device(model, process_idx) File "/root/anaconda3/envs/huggingface/lib/python3.7/site-packages/pytorch_lightning/accelerators/ddp_accelerator.py", line 178, in model_to_device model.cuda(self.trainer.root_gpu) File "/root/anaconda3/envs/huggingface/lib/python3.7/site-packages/pytorch_lightning/utilities/device_dtype_mixin.py", line 124, in cuda return super().cuda(device=device) File "/root/anaconda3/envs/huggingface/lib/python3.7/site-packages/torch/nn/modules/module.py", line 491, in cuda return self._apply(lambda t: t.cuda(device)) File "/root/anaconda3/envs/huggingface/lib/python3.7/site-packages/torch/nn/modules/module.py", line 387, in _apply module._apply(fn) File "/root/anaconda3/envs/huggingface/lib/python3.7/site-packages/torch/nn/modules/module.py", line 387, in _apply module._apply(fn) File "/root/anaconda3/envs/huggingface/lib/python3.7/site-packages/torch/nn/modules/module.py", line 387, in _apply module._apply(fn) [Previous line repeated 4 more times] File "/root/anaconda3/envs/huggingface/lib/python3.7/site-packages/torch/nn/modules/module.py", line 409, in _apply param_applied = fn(param) File "/root/anaconda3/envs/huggingface/lib/python3.7/site-packages/torch/nn/modules/module.py", line 491, in <lambda> return self._apply(lambda t: t.cuda(device)) ``` And when checking `top`, I see that among red-marked processes(2635, 2636), 2636 stays all the time while 2635 come and goes ![image](https://user-images.githubusercontent.com/43310105/117175751-c1874480-ae01-11eb-81bd-46a636024553.png) I cannot use ctrl+C to interrupt it after a while, and when I try kill the process, result: ``` 2021-05-06 00:28:50,962 WARNING worker.py:1115 -- The autoscaler failed with the following error: Terminated with signal 15 File "/root/anaconda3/envs/huggingface/lib/python3.7/site-packages/ray/_private/monitor.py", line 376, in <module> monitor.run() File "/root/anaconda3/envs/huggingface/lib/python3.7/site-packages/ray/_private/monitor.py", line 284, in run self._run() File "/root/anaconda3/envs/huggingface/lib/python3.7/site-packages/ray/_private/monitor.py", line 202, in _run time.sleep(AUTOSCALER_UPDATE_INTERVAL_S) 2021-05-06 00:28:50,962 WARNING worker.py:1115 -- The autoscaler failed with the following error: Terminated with signal 15 File "/root/anaconda3/envs/huggingface/lib/python3.7/site-packages/ray/_private/monitor.py", line 376, in <module> monitor.run() File 
"/root/anaconda3/envs/huggingface/lib/python3.7/site-packages/ray/_private/monitor.py", line 284, in run self._run() File "/root/anaconda3/envs/huggingface/lib/python3.7/site-packages/ray/_private/monitor.py", line 202, in _run time.sleep(AUTOSCALER_UPDATE_INTERVAL_S) 2021-05-06 00:28:50,967 WARNING worker.py:1115 -- The autoscaler failed with the following error: Terminated with signal 15 File "/root/anaconda3/envs/huggingface/lib/python3.7/site-packages/ray/_private/monitor.py", line 376, in <module> monitor.run() File "/root/anaconda3/envs/huggingface/lib/python3.7/site-packages/ray/_private/monitor.py", line 284, in run self._run() File "/root/anaconda3/envs/huggingface/lib/python3.7/site-packages/ray/_private/monitor.py", line 202, in _run time.sleep(AUTOSCALER_UPDATE_INTERVAL_S) ```<|||||>This line `self.model_to_device(model, process_idx)` says that the model is being loaded on GPU. I'm not sure why it would make the script hang though. Pinging @shamanez who has been playing with this script for #10410 If it really is because of the model loading to GPU then you may need to wait a bit more (though I'm surprised it could take that much time).<|||||>As far as I'm concerned, these lines doesn't mean anything, which might just result from my interrupting it too early. The latter ones in moniter.py seems more like its reason to get stuck.<|||||>And by the way, when I use ray, I use finetune_rag_ray.sh in the same dir, with gpus=4 and num_retrievers=2. Using pytorch dist results in similar problems(but torch always have only one retriever, and when I set num_retrievers=1 when using ray, I still get stuck), while using gpus=1 could do finetuning with no bugs.<|||||>@lhoestq @Caplimbo So what you are saying is pypi can't even start the training loop right... Can you please let me know the size of your passage set and faiss index ? Also the RAM <|||||>I'm using a passage set with about 30M, and so is the index, which is obtained by the `test.csv' I attached in the issue(see the first comment). RAM is 128G, so I guess it's not a RAM issue... Have you ever tested using multi-gpus with a customized index?<|||||>yeah, it worked for me. 30Million passages right? On Thu, May 6, 2021 at 12:30 PM Caplimbo ***@***.***> wrote: > I'm using a passage set with about 30M, and so is the index, which is > obtained by the `test.csv' I attached in the issue(see the first comment). > RAM is 32G, so I guess it's not a RAM issue... Have you ever tested using > multi-gpus with a customized index? > > — > You are receiving this because you were mentioned. > Reply to this email directly, view it on GitHub > <https://github.com/huggingface/transformers/issues/11592#issuecomment-833139689>, > or unsubscribe > <https://github.com/notifications/unsubscribe-auth/AEA4FGTOHMBJKOMG2V5KQE3TMHPIXANCNFSM44EKNA2A> > . > -- [image: Augmented Human Lab] <http://www.ahlab.org/> [image: uni] <https://www.auckland.ac.nz/en/abi.html> Gayal Shamane Ph.D. Candidate Augmented Human Lab Auckland Bioengineering Institute | The University of Auckland <|||||>> yeah, it worked for me. 30Million passages right? > […](#) > On Thu, May 6, 2021 at 12:30 PM Caplimbo ***@***.***> wrote: I'm using a passage set with about 30M, and so is the index, which is obtained by the `test.csv' I attached in the issue(see the first comment). RAM is 32G, so I guess it's not a RAM issue... Have you ever tested using multi-gpus with a customized index? — You are receiving this because you were mentioned. 
Reply to this email directly, view it on GitHub <[#11592 (comment)](https://github.com/huggingface/transformers/issues/11592#issuecomment-833139689)>, or unsubscribe <https://github.com/notifications/unsubscribe-auth/AEA4FGTOHMBJKOMG2V5KQE3TMHPIXANCNFSM44EKNA2A> . > -- [image: Augmented Human Lab] <http://www.ahlab.org/> [image: uni] <https://www.auckland.ac.nz/en/abi.html> Gayal Shamane Ph.D. Candidate Augmented Human Lab Auckland Bioengineering Institute | The University of Auckland Nope, just a size of 30MB. I attached the file `test.csv` in my original comment, and I use use_own_knowledge_dataset.py to process it. And by the way, I have to remove the `git` related codes since I have poor connection to github, could this cause potential problems?<|||||>Or could you please share your environment settings? I am using `dataset==1.0.1` and `pyarrow==0.17.1` since higher versions would report errors when using ray, which seems to be the same with what mentioned here https://discuss.ray.io/t/cant-pickle-pyarrow-dataset-expression/1685/7. Anyway, training on single GPU works smoothly for me, so don't know what might be the problem.<|||||>Wow super weired! I will check it out. On Thu, May 6, 2021, 13:03 Caplimbo ***@***.***> wrote: > yeah, it worked for me. 30Million passages right? > … <#m_5522908282532654251_> > On Thu, May 6, 2021 at 12:30 PM Caplimbo *@*.***> wrote: I'm using a > passage set with about 30M, and so is the index, which is obtained by the > `test.csv' I attached in the issue(see the first comment). RAM is 32G, so I > guess it's not a RAM issue... Have you ever tested using multi-gpus with a > customized index? — You are receiving this because you were mentioned. > Reply to this email directly, view it on GitHub <#11592 (comment) > <https://github.com/huggingface/transformers/issues/11592#issuecomment-833139689>>, > or unsubscribe > https://github.com/notifications/unsubscribe-auth/AEA4FGTOHMBJKOMG2V5KQE3TMHPIXANCNFSM44EKNA2A > . > -- [image: Augmented Human Lab] http://www.ahlab.org/ [image: uni] > https://www.auckland.ac.nz/en/abi.html Gayal Shamane Ph.D. Candidate > Augmented Human Lab Auckland Bioengineering Institute | The University of > Auckland > > Nope, just a size of 30MB. I attached the file test.csv in my original > comment, and I use use_own_knowledge_dataset.py to process it. And by the > way, I have to remove the git related codes since I have poor connection > to github, could this cause potential problems? > > — > You are receiving this because you were mentioned. > Reply to this email directly, view it on GitHub > <https://github.com/huggingface/transformers/issues/11592#issuecomment-833150369>, > or unsubscribe > <https://github.com/notifications/unsubscribe-auth/AEA4FGTSWP5IR6X5QZ2QY73TMHTERANCNFSM44EKNA2A> > . > <|||||>Don't know if it might help, but when using native pytorch dist, I get the following before stucked: ![image](https://user-images.githubusercontent.com/43310105/117236448-4bafc700-ae5b-11eb-8ce3-a18f10f91d6c.png) with `nvidia-smi` results: ![image](https://user-images.githubusercontent.com/43310105/117236498-61bd8780-ae5b-11eb-8339-816494e6052b.png) And these PIDs cannot be killed, and I don't know why <|||||>And by the way, I also encounters similar problems(cannot go through further training) when using BART and a `Trainer`. When I use torch1.8.1with cuda 10.2, I get no output and stucks, but if I switch to cuda11.1, one extra line would occur before stucking: ``` PyTorch version 1.8.1+cu111 available. begin loading model ... 
end loading model! begin trainning.. 0%| | 0/1000 [00:00<?, ?it/s]/root/anaconda3/envs/huggingface/lib/python3.7/site-packages/torch/nn/parallel/_functions.py:65: UserWarning: Was asked to gather along dimension 0, but all input tensors were scalars; will instead unsqueeze and return a vector. warnings.warn('Was asked to gather along dimension 0, but all ' ``` And in that case, if I use cuda 11.1, finetuning RAG using pytorch dist would provide extra lines of code: ```initializing ddp: GLOBAL_RANK: 3, MEMBER: 4/4 INFO:lightning:initializing ddp: GLOBAL_RANK: 3, MEMBER: 4/4 INFO:root:Added key: store_based_barrier_key:1 to store for rank: 3 INFO:root:Added key: store_based_barrier_key:1 to store for rank: 0 in ddp connection init port -1 INFO:distributed_pytorch_retriever:initializing retrieval INFO:distributed_pytorch_retriever:dist initialized in ddp connection init port -1 in ddp connection init port -1 INFO:distributed_pytorch_retriever:initializing retrieval INFO:distributed_pytorch_retriever:initializing retrieval INFO:distributed_pytorch_retriever:dist initialized INFO:distributed_pytorch_retriever:dist initialized in ddp connection init port -1 INFO:distributed_pytorch_retriever:initializing retrieval INFO:distributed_pytorch_retriever:dist initialized INFO:root:Added key: store_based_barrier_key:2 to store for rank: 0 INFO:root:Added key: store_based_barrier_key:2 to store for rank: 1 INFO:root:Added key: store_based_barrier_key:2 to store for rank: 3 INFO:root:Added key: store_based_barrier_key:2 to store for rank: 2 INFO:distributed_pytorch_retriever:dist not initialized / main Loading index from ../covid_QA/try/my_knowledge_dataset_hnsw_index.faiss Loaded FaissIndex embeddings from ../covid_QA/try/my_knowledge_dataset_hnsw_index.faiss in ddp end init port -1 in ddp end init port -1 in ddp end init port -1 in ddp end init port -1 ``` But still stucks afterwards. Here I add few extra lines to see what happens, but so far no results I can get<|||||>Could you try changing the `--distributed-port` just in case ?<|||||>Will do later, but what values should I test? <|||||>Any value between 4000 and 40000, just to make just it's not an issue with the default value -1<|||||>No difference, still stucks at the same step<|||||>@Ihoestq I switched the cuda version and can perform training on multiple GPUs now, seems there is an issue with Tesla T4 with cuda 10.2 However, I did a little modification to RAG, and now I can only train it with one GPU, more GPUs would report a OOM problem. Any possible reasons for this?<|||||>What kinda of change you did ? On Mon, May 10, 2021, 18:38 Caplimbo ***@***.***> wrote: > @Ihoestq I switched the cuda version and can perform training on multiple > GPUs now, seems there is an issue with Tesla T4 with cuda 10.2 > > However, I did a little modification to RAG, and now I can only train it > with one GPU, more GPUs would report a OOM problem. Any possible reasons > for this? > > — > You are receiving this because you were mentioned. > Reply to this email directly, view it on GitHub > <https://github.com/huggingface/transformers/issues/11592#issuecomment-836247855>, > or unsubscribe > <https://github.com/notifications/unsubscribe-auth/AEA4FGWONRQDTXQE32BBJMLTM55OBANCNFSM44EKNA2A> > . > <|||||>> What kinda of change you did ? 
> […](#) > On Mon, May 10, 2021, 18:38 Caplimbo ***@***.***> wrote: @Ihoestq I switched the cuda version and can perform training on multiple GPUs now, seems there is an issue with Tesla T4 with cuda 10.2 However, I did a little modification to RAG, and now I can only train it with one GPU, more GPUs would report a OOM problem. Any possible reasons for this? — You are receiving this because you were mentioned. Reply to this email directly, view it on GitHub <[#11592 (comment)](https://github.com/huggingface/transformers/issues/11592#issuecomment-836247855)>, or unsubscribe <https://github.com/notifications/unsubscribe-auth/AEA4FGWONRQDTXQE32BBJMLTM55OBANCNFSM44EKNA2A> . I added one additional input to the model, and send it to the retriever to modify the retrival process. This is just one scalar tensor per sample, so I don't think itself would cause problem... <|||||>So other than the question, you add another tensor right? Then did you also changed what you get output the retriever? Then did you also modify the input to answer generator? On Mon, May 10, 2021, 18:42 Caplimbo ***@***.***> wrote: > What kinda of change you did ? > … <#m_2194681267189680230_> > On Mon, May 10, 2021, 18:38 Caplimbo *@*.***> wrote: @Ihoestq I switched > the cuda version and can perform training on multiple GPUs now, seems there > is an issue with Tesla T4 with cuda 10.2 However, I did a little > modification to RAG, and now I can only train it with one GPU, more GPUs > would report a OOM problem. Any possible reasons for this? — You are > receiving this because you were mentioned. Reply to this email directly, > view it on GitHub <#11592 (comment) > <https://github.com/huggingface/transformers/issues/11592#issuecomment-836247855>>, > or unsubscribe > https://github.com/notifications/unsubscribe-auth/AEA4FGWONRQDTXQE32BBJMLTM55OBANCNFSM44EKNA2A > . > > I added one additional input to the model, and send it to the retriever to > modify the retrival process. This is just one scalar tensor per sample, so > I don't think itself would cause problem... > > — > You are receiving this because you were mentioned. > Reply to this email directly, view it on GitHub > <https://github.com/huggingface/transformers/issues/11592#issuecomment-836250760>, > or unsubscribe > <https://github.com/notifications/unsubscribe-auth/AEA4FGUKHBBA73T5IUIXSADTM554NANCNFSM44EKNA2A> > . > <|||||>The changes of output to the retriever is minor. For example, suppose one document we retrieve before encoding it has test "This is example text", what I did is prepend it to something like "<special token> This is example text" , and then encode it to get context_input_ids. After this operation, nothing more was done to the generator input.<|||||>The OOM issue didn't happen at the beginning of an epoch by the way, it's like after 10-20 steps that it happens. When using original RAG,~~I didn't see such behavior of increasing memory demands, does this mean I have something wrong with my own implementation?~~ It's the same even if I use the original version of RAG But so far as I can see, using ray for example, with batchsize=1 on each GPU, occupies almost the same amout of GPU memories as training with batchsize=2 on a single GPU. 
I suspect the parallel implementation also loads part of the retrieval process on GPU?<|||||>And by the way, since I have to to some pretraining to the generator part, I separately trained(or tuned) a BART from its pretrained weight on huggingface, and then plunge it in to RAG by `RagSequenceForGeneration.from_pretrained_question_encoder_generator`. I'm a little bit worried that RAG model weights provided on huggingface has a different setting with the original BART weight provided, and maybe that's why I cannot get the loss go down like an original RAG does. If that's the case, Should I wait until it converges(like, in 100 epoches?), or should I first separate the generator part from RAG and do my pretraining(tuning) on it? And how can I save the generator part separately?<|||||>> The changes of output to the retriever is minor. For example, suppose one document we retrieve before encoding it has test "This is example text", what I did is prepend it to something like " This is example text" , and then encode it to get context_input_ids. After this operation, nothing more was done to the generator input. This might be due to your input size is a bit too much longer. and GPU allocation changes according to the length of your input during the training. So sometimes after few steps, you can get an OOM error. This is kinda the answer to your second issue.<|||||>> The OOM issue didn't happen at the beginning of an epoch by the way, it's like after 10-20 steps that it happens. When using original RAG,~I didn't see such behavior of increasing memory demands, does this mean I have something wrong with my own implementation?~ > It's the same even if I use the original version of RAG > But so far as I can see, using ray for example, with batchsize=1 on each GPU, occupies almost the same amout of GPU memories as training with batchsize=2 on a single GPU. I suspect the parallel implementation also loads part of the retrieval process on GPU? No, when using RAY nothing gets loaded into the GPU. You can see it by using the top command. If your index is around 20 GB, you can find retriever workers occupy that amount of memory.<|||||>But it's weird that when using only 1 GPU, I can deploy a batch size of 2 with no OOM, while using 2 GPUs leads to OOM even if batch size (per GPU) is 1. Here batchsize1 OOM only occurs in my modified rag, but with original rag I still get OOM with batch size 2 per device when using multiple GPUs.<|||||>I think this is due to low GPU memory. Try to use two 32GB ones. I assume the initialization of the DDP process would consume bit more memory. @Caplimbo when you run with a single GPU can you send me the memory usage?<|||||>@shamanez Sadly I don't have such powerful GPUs. When I use a single GPU, the memory usage of batch size of 2 should be around 15026MB(as far as I can recall, cannot check it now since I'm using all GPUs for training), while with 4GPUs, I managed to train it with batch size 1 per GPU, with a memory usage of 15072MB/GPU. I have to reduce `max_target_length` from 25 to 24, otherwise OOM.<|||||>yeah, make sense. <|||||>Really? I don't see why using batch size 1 per GPU on a multiple GPU setting would require more memory than using batch size 2 on a single GPU...<|||||>Can u send me a screen shot of memory use when you are using a single gpu with batch size one ? On Wed, May 12, 2021, 23:07 Caplimbo ***@***.***> wrote: > Really? 
I don't see why using batch size 1 per GPU on a multiple GPU > setting would require more memory than using batch size 2 on a single GPU... > > — > You are receiving this because you were mentioned. > Reply to this email directly, view it on GitHub > <https://github.com/huggingface/transformers/issues/11592#issuecomment-839683794>, > or unsubscribe > <https://github.com/notifications/unsubscribe-auth/AEA4FGVIENWUXQI2H7CJEZDTNJON7ANCNFSM44EKNA2A> > . > <|||||>Fine, will do after this round of training is over. Maybe in a day orz.<|||||>At the moment just send me one screen shot. With nvidia-smil. On Wed, May 12, 2021, 23:10 Caplimbo ***@***.***> wrote: > Fine, will do after this round of training is over. Maybe in a day orz. > > — > You are receiving this because you were mentioned. > Reply to this email directly, view it on GitHub > <https://github.com/huggingface/transformers/issues/11592#issuecomment-839686105>, > or unsubscribe > <https://github.com/notifications/unsubscribe-auth/AEA4FGXSV6IRXU6PHAQ6FNDTNJO3DANCNFSM44EKNA2A> > . > <|||||>You mean now with multiple GPU training and batch size 1 on each GPU? then it's like: ![image](https://user-images.githubusercontent.com/43310105/117966668-c181ca00-b356-11eb-9504-ec87e61f4c64.png) <|||||>See gpu memory is almost up to the limit. So I assume during the DDP the master GPUs requires bit more memory. This causes an OOM error. In my lab I have a 2 11GB GPUs. Sometimes I also observe the same. On Wed, May 12, 2021, 23:18 Caplimbo ***@***.***> wrote: > You mean now with multiple GPU training and batch size 1 on each GPU? then > it's like: > [image: image] > <https://user-images.githubusercontent.com/43310105/117966668-c181ca00-b356-11eb-9504-ec87e61f4c64.png> > > — > You are receiving this because you were mentioned. > Reply to this email directly, view it on GitHub > <https://github.com/huggingface/transformers/issues/11592#issuecomment-839690195>, > or unsubscribe > <https://github.com/notifications/unsubscribe-auth/AEA4FGTUTX5H3JY4C6TKBKDTNJPX5ANCNFSM44EKNA2A> > . > <|||||>> See gpu memory is almost up to the limit. So I assume during the DDP the master GPUs requires bit more memory. This causes an OOM error. In my lab I have a 2 11GB GPUs. Sometimes I also observe the same. > […](#) > On Wed, May 12, 2021, 23:18 Caplimbo ***@***.***> wrote: You mean now with multiple GPU training and batch size 1 on each GPU? then it's like: [image: image] <https://user-images.githubusercontent.com/43310105/117966668-c181ca00-b356-11eb-9504-ec87e61f4c64.png> — You are receiving this because you were mentioned. Reply to this email directly, view it on GitHub <[#11592 (comment)](https://github.com/huggingface/transformers/issues/11592#issuecomment-839690195)>, or unsubscribe <https://github.com/notifications/unsubscribe-auth/AEA4FGTUTX5H3JY4C6TKBKDTNJPX5ANCNFSM44EKNA2A> . Sure, when using pytorch for distributed training such behavior is quite usual, but when using ray... I don't know for sure. Will provide you with more infomation once I can do again single GPU training.<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread. Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored.
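A quick way to separate "the index is slow to load" from "training is hung" is to load the passages dataset and FAISS index on their own, outside the finetuning script. Below is a minimal sketch using the 🤗 Datasets API; the paths and the `embeddings` column name follow the `use_own_knowledge_dataset.py` convention and may differ in your setup.

```python
import time
from datasets import load_from_disk

start = time.time()
# same values as passed via --passages_path and --index_path (adjust as needed)
passages = load_from_disk("../try/my_knowledge_dataset")
passages.load_faiss_index("embeddings", "../try/my_knowledge_dataset_hnsw_index.faiss")
print(f"Loaded {len(passages)} passages and the HNSW index in {time.time() - start:.1f}s")
```

If this returns in seconds while the finetuning script still hangs, the bottleneck is more likely in the DDP/Ray retriever initialization than in FAISS itself.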
transformers
11,591
closed
add importlib_metadata and huggingface_hub as dependency in the conda recipe
# What does this PR do? <!-- Congratulations! You've made it this far! You're not quite done yet though. Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution. Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change. Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost. --> <!-- Remove if not applicable --> Fixes #11399 ## Before submitting - [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case). - [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section? - [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link to it if that's the case. - [ ] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/master/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/master/docs#writing-source-documentation). - [ ] Did you write any new necessary tests? ## Who can review? Anyone in the community is free to review the PR once the tests have passed. Feel free to tag members/contributors who may be interested in your PR. <!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @ If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**. Please tag fewer than 3 people. Models: - albert, bert, xlm: @LysandreJik - blenderbot, bart, marian, pegasus, encoderdecoder, t5: @patrickvonplaten, @patil-suraj - longformer, reformer, transfoxl, xlnet: @patrickvonplaten - fsmt: @stas00 - funnel: @sgugger - gpt2: @patrickvonplaten, @LysandreJik - rag: @patrickvonplaten, @lhoestq - tensorflow: @LysandreJik Library: - benchmarks: @patrickvonplaten - deepspeed: @stas00 - ray/raytune: @richardliaw, @amogkam - text generation: @patrickvonplaten - tokenizers: @n1t0, @LysandreJik - trainer: @sgugger - pipelines: @LysandreJik Documentation: @sgugger HF projects: - datasets: [different repo](https://github.com/huggingface/datasets) - rust tokenizers: [different repo](https://github.com/huggingface/tokenizers) Examples: - maintained examples (not research project or legacy): @sgugger, @patil-suraj - research_projects/bert-loses-patience: @JetRunner - research_projects/distillation: @VictorSanh -->
05-05-2021 06:45:56
05-05-2021 06:45:56
Thanks @LysandreJik.
transformers
11,590
closed
evaluation in TFTrainer does not run on GPU
## Environment info <!-- You can run the command `transformers-cli env` and copy-and-paste its output below. Don't forget to fill out the missing fields in that output! --> - `transformers` version: 4.5.1 - Platform: ubuntu 18.04 - Python version: 3.6.9 - PyTorch version (GPU?): - Tensorflow version (GPU?): tensorflow-gpu==2.4.1 - Using GPU in script?: Yes - Using distributed or parallel set-up in script?: No ### Who can help <!-- Your issue will be replied to more quickly if you can figure out the right person to tag with @ If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**. Please tag fewer than 3 people. Models: - albert, bert, xlm: @LysandreJik - blenderbot, bart, marian, pegasus, encoderdecoder, t5: @patrickvonplaten, @patil-suraj - longformer, reformer, transfoxl, xlnet: @patrickvonplaten - fsmt: @stas00 - funnel: @sgugger - gpt2: @patrickvonplaten, @LysandreJik - rag: @patrickvonplaten, @lhoestq - tensorflow: @Rocketknight1 Library: - benchmarks: @patrickvonplaten - deepspeed: @stas00 - ray/raytune: @richardliaw, @amogkam - text generation: @patrickvonplaten - tokenizers: @LysandreJik - trainer: @sgugger - pipelines: @LysandreJik Documentation: @sgugger Model hub: - for issues with a model report at https://discuss.huggingface.co/ and tag the model's creator. HF projects: - datasets: [different repo](https://github.com/huggingface/datasets) - rust tokenizers: [different repo](https://github.com/huggingface/tokenizers) Examples: - maintained examples (not research project or legacy): @sgugger, @patil-suraj - research_projects/bert-loses-patience: @JetRunner - research_projects/distillation: @VictorSanh --> @patil-suraj @Rocketknight1 ## Information I'm using TFT5ForConditionalGeneration for masked language modelling task. During training GPU utilisation is above 95% but as soon as evaluation starts it goes to 0%. Evaluation is slow. Even though [evaluate function is in strategy.scope()](https://github.com/huggingface/transformers/blob/c065025c4720a87783ca504a9018454893d00649/src/transformers/trainer_tf.py#L580). it does not use gpu. The problem arises when using: * [ ] the official example scripts: (give details below) I'm using the official example script of TFTrainer and modified `run_tf_glue.py` a bit for custom data input. The tasks I am working on is: * [ ] my own task or dataset: (give details below) Final train_dataset and eval_dataset (input to TFTrainer) have the form `({"input_ids": , "attention_mask": ,"decoder_attention_mask": }, labels)` ## To reproduce Steps to reproduce the behavior: I tried reproducing the error using run_tf_squad.py and run_tf_glue.py but both the scripts gave error as the inputs to the trainer were not compatible. Only MRPC task worked, but it had only 400 examples in evaluation so hard to determine. Rest of them simply didn't work, there was an error. If possible I would like to contribute to TFTrainer in terms of running evaluation on GPU and processing squad and glue dataset to match dimensions to TFTrainer inputs. Guidance is really appreciated. <!-- If you have code snippets, error messages, stack traces please provide them here as well. Important! Use code tags to correctly format your code. 
See https://help.github.com/en/github/writing-on-github/creating-and-highlighting-code-blocks#syntax-highlighting Do not use screenshots, as they are hard to read and (more importantly) don't allow others to copy-and-paste your code.--> ## Expected behavior <!-- A clear and concise description of what you would expect to happen. -->
05-04-2021 22:38:17
05-04-2021 22:38:17
Hi, TF maintainer here! We're currently in the process of rewriting the examples. We're deprecating TFTrainer and using more native Keras. Rewriting our GLUE examples is coming up very soon on my to-do list. That said, the `run_text_classification.py` script is updated to our new standards - feel free to try that and just adapt the input data to use GLUE instead of your own inputs. If that doesn't work for you, I'll try to get the real GLUE script done soon!<|||||>Hello, @Rocketknight1, great job, looking forward to the new version of the real GLUE script. I found that `TFTrainer` is still used in `run_text_classification.py`; will `TFTrainer` be abandoned?<|||||>Are you sure? I can't see it anywhere: https://github.com/huggingface/transformers/blob/master/examples/tensorflow/text-classification/run_text_classification.py<|||||>Sorry, you're right! I got another file here: https://github.com/huggingface/transformers/blob/master/examples/legacy/text-classification/run_tf_text_classification.py Thank you, I will refer to it.
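Since `TFTrainer` is being deprecated in favor of native Keras, evaluation on GPU comes for free from `model.fit`/`model.evaluate`. The sketch below follows the spirit of the rewritten TF examples with a generic classification head and a tiny toy dataset, not the T5 masked-LM setup from the report; the exact compile/loss wiring has changed across versions, so treat it as an assumption-laden starting point.

```python
import tensorflow as tf
from transformers import AutoTokenizer, TFAutoModelForSequenceClassification

print("GPUs visible to TF:", tf.config.list_physical_devices("GPU"))

tokenizer = AutoTokenizer.from_pretrained("bert-base-cased")
model = TFAutoModelForSequenceClassification.from_pretrained("bert-base-cased", num_labels=2)

# tiny toy dataset standing in for a real (features, labels) tf.data pipeline
enc = tokenizer(["a great movie", "a terrible movie"], padding=True, return_tensors="np")
dataset = tf.data.Dataset.from_tensor_slices((dict(enc), [1, 0])).batch(2)

model.compile(
    optimizer=tf.keras.optimizers.Adam(learning_rate=3e-5),
    loss=tf.keras.losses.SparseCategoricalCrossentropy(from_logits=True),
)
model.fit(dataset, epochs=1)
print(model.evaluate(dataset))  # evaluation runs on the GPU whenever TF can see one
```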
transformers
11,589
closed
Fix failing test on Windows Platform
- test_tokenization_wav2vec2.py fails on Windows because the test assumes forward-slash paths; fixed by changing the directory separator when running on Windows.
- tests/test_utils_check_copies.py fails on Windows because the test splits text on "\n" while Windows files use "\r\n"; fixed by using universal newlines.
- tests/extended/test_trainer_ext.py fails on Windows due to a "\r" character encoding issue; fixed by specifying utf-8 encoding.

Fixes the Windows test failures reported in #11586.
05-04-2021 22:31:36
05-04-2021 22:31:36
Hey @Lynx1820, Thanks a lot for the PR! Could you run `make style` to make the tests pass? :-)
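For reference, the three fixes boil down to standard Windows-portability patterns. A small hedged sketch of what they look like in general (not the actual diff):

```python
import os
import tempfile

tmpdir = tempfile.mkdtemp()

# 1. build expected paths with os.path.join instead of hard-coding "/"
expected_files = [os.path.join(tmpdir, name) for name in ("vocab.json", "special_tokens_map.json")]

# 2. split on universal newlines so "\r\n" files behave like "\n" files
text = "line one\r\nline two\n"
assert text.splitlines() == ["line one", "line two"]

# 3. always pass an explicit encoding so Windows does not fall back to cp1252
path = os.path.join(tmpdir, "output.txt")
with open(path, "w", encoding="utf-8") as f:
    f.write("héllo")
with open(path, "r", encoding="utf-8") as f:
    assert f.read() == "héllo"
```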
transformers
11,588
closed
[trainer] document resume randomness
This PR:

- documents what awesomeness @sgugger added in https://github.com/huggingface/transformers/pull/11582
- plus what to do if one wants full determinism

@sgugger
05-04-2021 20:16:01
05-04-2021 20:16:01
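For context, here is a hedged sketch of what a "full determinism" setup typically involves on the PyTorch side. The exact recommendations are in the doc section this PR adds; treat this as a generic illustration rather than the documented recipe.

```python
import torch
from transformers import set_seed

set_seed(42)  # seeds Python's random, NumPy and torch (CPU and CUDA)

# make cuDNN pick deterministic kernels instead of the fastest ones
torch.backends.cudnn.deterministic = True
torch.backends.cudnn.benchmark = False

# optionally fail loudly on any op without a deterministic implementation
# (PyTorch >= 1.8; may also require CUBLAS_WORKSPACE_CONFIG=:4096:8 in the environment)
# torch.use_deterministic_algorithms(True)
```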
transformers
11,587
closed
Removes SageMakerTrainer code but keeps class as wrapper
# What does this PR do? This PR removes the `SageMakerTrainer`-specific code, since it has been merged into `Trainer` in #10975. We had issues and bugs in `SageMakerTrainer` but not in `Trainer`. Since this class is already deprecated and will be removed in v5 of Transformers, I removed the code for the next release. `SageMakerTrainer` will still work, since all functionality has been migrated to `Trainer`. I tested it with `run_glue.py`.
05-04-2021 17:12:33
05-04-2021 17:12:33
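"Keeping the class as a wrapper" roughly amounts to a thin deprecation shim around `Trainer`. A hedged sketch of that pattern (not the exact code in the PR):

```python
import warnings
from transformers import Trainer

class SageMakerTrainer(Trainer):
    """Deprecated: all SageMaker-specific behaviour now lives in Trainer itself."""

    def __init__(self, *args, **kwargs):
        warnings.warn(
            "SageMakerTrainer is deprecated and will be removed in v5 of Transformers; use Trainer instead.",
            FutureWarning,
        )
        super().__init__(*args, **kwargs)
```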
transformers
11,586
closed
Windows Test Errors
There are some errors that I encounter when running tests on Windows. **tests\test_benchmark.py** ``` obj = <MemoryMeasureProcess(MemoryMeasureProcess-8, initial)> file = <_io.BufferedWriter name=10>, protocol = None def dump(obj, file, protocol=None): '''Replacement for pickle.dump() using ForkingPickler.''' ForkingPickler(file, protocol).dump(obj) AttributeError: Can't pickle local object 'measure_peak_memory_cpu.<locals>.MemoryMeasureProcess' ``` **tests/test_modeling_led.py** `RuntimeError: 0INTERNAL ASSERT FAILED at "..\\torch\\csrc\\jit\\ir\\alias_analysis.cpp":532, please report a bug to PyTorch. We don't have an op for aten::constant_pad_nd but it isn't a special case. Argument types: Tensor, int[], bool,` **tests/test_modeling_bart.py** The timeout-decorator uses SIGALRM, which causes an error on Windows. Can we skip this tests on Windows? **test_tokenization_wav2vec2.py** - makes the assumption that paths are separated with forward slashes, but Windows uses backslash. ` AssertionError: Sequences differ: ['added_tokens.json', 'special_tokens_map.j[39 chars]son'] != ['C:\\Users\\AzureUser\\AppData\\Local\\Tem[267 chars]son']` **tests/test_utils_check_copies.py** ``` tests\test_utils_check_copies.py:76: in check_copy_consistency self.assertTrue(len(check_copies.is_copy_consistent(fname)) == 0) AssertionError: False is not true ``` **tests/extended/test_trainer_ext.py** ``` def encode(self, input, final=False): return codecs.charmap_encode(input,self.errors,encoding_table)[0] UnicodeEncodeError: 'charmap' codec can't encode characters in position 0-59: character maps to <undefined> ``` [additional-logs.txt](https://github.com/huggingface/transformers/files/6422688/windows-logs.txt) ## Environment info - `transformers` version: master - Platform: Windows - Python version: - PyTorch version (GPU?): 1.8.1 10.2cuda - Tensorflow version (GPU?): none - Using GPU in script?: 10.2cuda @sgugger, @patil-suraj
05-04-2021 17:02:03
05-04-2021 17:02:03
The tests are not run on Windows, so there is absolutely no guarantee any of them passes there.<|||||>> The tests are not run on Windows, so there is absolutely no guarantee any of them passes there. Isn't Windows a supported platform? If so, would it be helpful to run HF tests on Windows as well? <|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread. Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored.
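Regarding skipping the SIGALRM-based tests, the usual pattern is a platform guard; a generic sketch (not the project's actual decorators) is shown below.

```python
import sys
import unittest

@unittest.skipIf(sys.platform == "win32", "timeout-decorator relies on SIGALRM, which Windows does not provide")
class BartTimeoutTests(unittest.TestCase):
    def test_generate_finishes_in_time(self):
        self.assertTrue(True)  # placeholder body

# The multiprocessing pickling error has a similar flavour: Windows uses the "spawn"
# start method, which can only pickle classes defined at module level, so a class like
# MemoryMeasureProcess would need to move out of the enclosing function.
```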
transformers
11,585
closed
[template runner CI] copies need to be fixed too
This PR https://github.com/huggingface/transformers/pull/11475 makes the model-templates job fail because it doesn't fix the copies, so `make fixup` fails, see: https://github.com/huggingface/transformers/pull/11475/checks?check_run_id=2502230586 Also, it's unclear what purpose `git fetch origin master:master` serves here: https://github.com/huggingface/transformers/blob/2ce0fb84cc500a26b0c45bec1f8a42e33d13e05d/.github/workflows/model-templates.yml#L59 @LysandreJik, @sgugger

05-04-2021 16:47:07
05-04-2021 16:47:07
transformers
11,584
closed
Punctuation in Wav2Vec2
I'm playing around with Wav2Vec2, and some of the sentences contain punctuation (like apostrophes, etc.), but I don't see any periods, question marks, commas, etc. in my output. I'm not sure if this is a new feature or just a way of handling the processing, but would it be possible to get punctuation in the Wav2Vec2 outputs?
05-04-2021 16:29:28
05-04-2021 16:29:28
Hey @krrishdholakia, Wav2Vec2 was fine-tuned mostly on Librispeech which does not contain any punctuation so it'll be very difficult for the model to predict punctuation. You could take a pre-trained Wav2Vec2 model and fine-tune it on a downstream task that contains punctuation. This should then just work fine<|||||>I came across [this](https://huggingface.co/flexudy/t5-small-wav2vec2-grammar-fixer) t5 model trained to correct grammar outputs of wav2vec2. Not sure if this is what you were looking for @krrishdholakia, but you can pass the text to [this](https://huggingface.co/flexudy/t5-small-wav2vec2-grammar-fixer) model and get back punctuated sentences. <|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread. Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored.
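If retraining is not an option, a post-processing pass with the seq2seq model mentioned above is a lightweight alternative. A hedged sketch follows; check that model's card for the exact input format or task prefix it expects, since feeding plain text as done here is an assumption.

```python
from transformers import AutoModelForSeq2SeqLM, AutoTokenizer

model_name = "flexudy/t5-small-wav2vec2-grammar-fixer"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForSeq2SeqLM.from_pretrained(model_name)

asr_text = "hey how are you doing today i am fine thanks"  # raw Wav2Vec2 transcription
inputs = tokenizer(asr_text, return_tensors="pt")
output_ids = model.generate(**inputs, max_length=64, num_beams=4, early_stopping=True)
print(tokenizer.decode(output_ids[0], skip_special_tokens=True))
```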
transformers
11,583
closed
Wrong results for GLUE task STS-B
## Environment info - `transformers` version: 4.6.0.dev0 - Platform: Linux-4.13.0-26-generic-x86_64-with-glibc2.10 - Python version: 3.8.8 - PyTorch version (GPU?): 1.8.1 (True) - Tensorflow version (GPU?): not installed (NA) - Using GPU in script?: True - Using distributed or parallel set-up in script?: False ### Who can help @sgugger, @patil-suraj ## Information Model I am using (Bert, XLNet ...): bert-base-cased The problem arises when using: * [ ] the official example scripts: examples/pytorch/text-classification/run_glue.py The tasks I am working on is: * [ ] an official GLUE/SQUaD task: STS-B ## To reproduce export TASK_NAME=stsb python run_glue.py \ --model_name_or_path bert-base-cased \ --task_name $TASK_NAME \ --do_train \ --do_eval \ --max_seq_length 128 \ --per_device_train_batch_size 32 \ --learning_rate 2e-5 \ --num_train_epochs 3 \ --output_dir /tmp/$TASK_NAME/ ## Expected behavior According to the README, Pearson/Spearman corr. should be close to 88.64/88.48 However, I get Pearson/Spearman corr. equal to 28.36/27.70 The following warning occurs: python3.8/site-packages/torch/nn/modules/loss.py:528: UserWarning: Using a target size (torch.Size([32])) that is different to the input size (torch.Size([32, 1])). This will likely lead to incorrect results due to broadcasting. Please ensure they have the same size. return F.mse_loss(input, target, reduction=self.reduction)
05-04-2021 15:42:13
05-04-2021 15:42:13
Are you sure you don't have any bad checkpoints in where your `output_dir` points to? Just executed the command and it works perfectly fine on my end. You can add `--overwrite_output_dir` to your command to make sure to overwrite what is there.<|||||>> Are you sure you don't have any bad checkpoints in where your `output_dir` points to? Just executed the command and it works perfectly fine on my end. You can add `--overwrite_output_dir` to your command to make sure to overwrite what is there. I'm sure the problem is not `output_dir`, it points to an empty directory in the beginning. I've rerun the code adding `--overwrite_output_dir` just to make sure, still getting the same results. The results for other GLUE tasks are all correct, only problem with STS-B.
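The UserWarning in the report is the usual culprit when STS-B correlations collapse like this: an (N, 1) prediction tensor against an (N,) target silently broadcasts to an (N, N) comparison inside MSE loss. A small sketch of the pitfall, independent of `run_glue.py` itself:

```python
import torch
import torch.nn.functional as F

logits = torch.randn(32, 1)   # regression head output
labels = torch.randn(32)      # STS-B similarity scores

broadcast_loss = F.mse_loss(logits, labels)             # compares a (32, 32) grid -> wrong signal
correct_loss = F.mse_loss(logits.squeeze(-1), labels)   # element-wise, as intended
print(broadcast_loss.item(), correct_loss.item())
```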
transformers
11,582
closed
Reproducible checkpoint
# What does this PR do? This PR fixes reproducibility when resuming training from a checkpoint for PyTorch >= 1.6 and non-TPU trainings. The current implementation in the Trainer had two flaws: First, when going over the data at the beginning (skipping epochs and batches), the first shuffling was the same in the original training or resuming from the checkpoint, but not the subsequent shufflings. This is because those shufflings were determined by the global torch RNG which had a different state at the end of epoch 1 in a full training vs just after the shuffle of epoch 1 in the resumed training. I did not identify the exact reason but there was one or several calls to the global torch RNG during epoch 1 of the full training and when resuming we only called for the shuffle of epoch 1. To fix this, a generator is now used to determine the shuffling; this way the generator is set with the same seed in both cases (full training or resuming from checkpoint) and then has the exact same number of calls (one shuffle per epoch), so the generator ends up in the same state in the full training or when resuming from a checkpoint. The second thing that was different is the CUDA RNG states, which are used in the dropout layers of the models. To fix this, all RNG states are saved when saving a checkpoint and restored after the data-skipping phase when resuming training. Fixes #11504 Fixes #11323
05-04-2021 14:57:12
05-04-2021 14:57:12
@stas00 Could you double check the last changes are all good with you? They include loading/saving all RNG states (and separately in distributed training) as you suggested.<|||||>Everything looks great, @sgugger. Given the fragile nature of this added feature, it'd be relatively easy to break, so may I suggest that at least some rudimentary test would go a long way to prevent this? (Fragile not because of how it was coded, but because it's just fragile in its own random nature.) Perhaps duplicate test_trainer's resume test and generate a random number with each of np, pt, and py before and after resume?<|||||>Added a new doc section in a separate PR https://github.com/huggingface/transformers/pull/11588
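For readers who want the gist of the fix without reading the diff, here is a hedged sketch of the two ingredients: a dedicated seeded generator for the per-epoch shuffle, and saving/restoring every RNG state alongside the checkpoint. Names and structure are illustrative, not the Trainer's actual code.

```python
import random
import numpy as np
import torch
from torch.utils.data import RandomSampler

# 1. shuffling driven by its own generator, so unrelated RNG calls cannot desync it
generator = torch.Generator()
generator.manual_seed(42)
sampler = RandomSampler(range(1000), generator=generator)

# 2. checkpoint and restore every RNG state
def get_rng_states():
    states = {
        "python": random.getstate(),
        "numpy": np.random.get_state(),
        "cpu": torch.get_rng_state(),
    }
    if torch.cuda.is_available():
        states["cuda"] = torch.cuda.get_rng_state_all()
    return states

def set_rng_states(states):
    random.setstate(states["python"])
    np.random.set_state(states["numpy"])
    torch.set_rng_state(states["cpu"])
    if torch.cuda.is_available() and "cuda" in states:
        torch.cuda.set_rng_state_all(states["cuda"])
```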
transformers
11,581
closed
unable to save model1.h5. I am using huggingface distilbert
Hi team, I was trying to save my model with `model.save("model1.h5")`, but it didn't work. I tried a different method, `model.save("model1")`, so that it saves as a .pb file, but when I try to load it I get the error below. Please let me know how to solve it: how can I save the model after combining DistilBERT with my own model? ~\.conda\envs\bot2\lib\site-packages\tensorflow\python\keras\engine\training.py in get_config(self) 2229 2230 def get_config(self): -> 2231 raise NotImplementedError 2232 2233 @classmethod NotImplementedError:
05-04-2021 13:56:58
05-04-2021 13:56:58
You should use the `save_pretrained` method to save the models.<|||||>When I tried to save the model with `model.save_pretrained("./model/")`, I got the error `AttributeError: 'Functional' object has no attribute 'save_pretrained'`.<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread. Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored.
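The `AttributeError` in the follow-up happens because `save_pretrained` lives on the Transformers model, not on a Keras `Functional` model that merely wraps it. A hedged sketch of both options; the wrapper layer name in the last comment is hypothetical.

```python
from transformers import TFDistilBertModel

# 1. saving/loading the transformer itself
backbone = TFDistilBertModel.from_pretrained("distilbert-base-uncased")
backbone.save_pretrained("./distilbert_saved")            # writes config.json + tf_model.h5
restored = TFDistilBertModel.from_pretrained("./distilbert_saved")

# 2. if the transformer sits inside a custom Keras model, save the whole thing as
#    weights (the architecture is rebuilt from your own code), e.g.:
# keras_model.save_weights("my_classifier_weights.h5")
# keras_model.load_weights("my_classifier_weights.h5")
#    ...and/or call save_pretrained on the wrapped transformer layer itself:
# keras_model.get_layer("tf_distil_bert_model").save_pretrained("./distilbert_saved")
```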
transformers
11,580
closed
ValueError: Unable to create tensor, you should probably activate truncation and/or padding with 'padding=True' 'truncation=True' to have batched tensors with the same length
## Environment info - `transformers` version: 4.4.0 - Ubuntu 18.04 - Python version: 3.7.4 - PyTorch version: 1.8.1 - Using Colab @Nithin-Holla @LysandreJik Models: Wav2vec 2.0 ## Information Model I am using : Wav2vec 2.0 The problem arises when using: * [Theses steps](https://colab.research.google.com/github/patrickvonplaten/notebooks/blob/master/Fine_Tune_XLSR_Wav2Vec2_on_Turkish_ASR_with_%F0%9F%A4%97_Transformers.ipynb#scrollTo=e7cqAWIayn6w) * on my own data, changing all .map functions with mine: ``` import pandas as pd import os from tqdm import tqdm import random train_data = pd.read_csv(os.path.join(os.getcwd(), "data", "wav_file", "train.tsv")) test_data = pd.read_csv(os.path.join(os.getcwd(), "data", "wav_file", "valid.tsv")) from datasets import ClassLabel import random import pandas as pd from IPython.display import display, HTML def show_random_elements(dataset, num_examples=10): assert num_examples <= len(dataset), "Can't pick more elements than there are in the dataset." picks = [] for _ in range(num_examples): pick = random.randint(0, len(dataset)-1) while pick in picks: pick = random.randint(0, len(dataset)-1) picks.append(pick) df = pd.DataFrame(dataset["sentence"][picks]) display(HTML(df.to_html())) print(df) show_random_elements(train_data, num_examples=20) show_random_elements(test_data, num_examples=2) import re chars_to_ignore_regex = '[\,\?\.\!\-\;\:\"\“\%\‘\”\�]' for element in train_data.index: train_data["sentence"][element] = re.sub(chars_to_ignore_regex, '', train_data["sentence"][element]).lower() + " " for element in test_data.index: test_data["sentence"][element] = re.sub(chars_to_ignore_regex, '', test_data["sentence"][element]).lower() + " " show_random_elements(train_data, num_examples=20) show_random_elements(test_data, num_examples=2) all_text = "" for element in train_data.index: all_text += train_data["sentence"][element].replace("î", "i") vocab_train = list(set(all_text)) all_text = "" for element in test_data.index: all_text += test_data["sentence"][element].replace("î", "i") vocab_test = list(set(all_text)) vocab_list = list(set(vocab_train) | set(vocab_test)) vocab_dict = {v: k for k, v in enumerate(vocab_list)} vocab_dict vocab_dict["|"] = vocab_dict[" "] del vocab_dict[" "] vocab_dict vocab_dict["[UNK]"] = len(vocab_dict) vocab_dict["[PAD]"] = len(vocab_dict) len(vocab_dict) import json with open(os.path.join(os.getcwd(), "data", "wav_file", "vocab.json"), 'w') as vocab_file: json.dump(vocab_dict, vocab_file) from transformers import Wav2Vec2CTCTokenizer tokenizer = Wav2Vec2CTCTokenizer("/content/drive/MyDrive/wav2vec_commonVoice (copy)/vocab.json", unk_token="[UNK]", pad_token="[PAD]", word_delimiter_token="|", truncation=True, padding=True) from transformers import Wav2Vec2FeatureExtractor feature_extractor = Wav2Vec2FeatureExtractor(feature_size=1, sampling_rate=16000, padding=True, padding_value=0.0, do_normalize=True, return_attention_mask=True) from transformers import Wav2Vec2Processor processor = Wav2Vec2Processor(feature_extractor=feature_extractor, tokenizer=tokenizer) train = [] for element in train_data.index: dico = dict() dico["path"] = train_data["path"][element] dico["sentence"] = train_data["sentence"][element] train.append(dico) train[0] test = [] for element in test_data.index: dico = dict() dico["path"] = test_data["path"][element] dico["sentence"] = test_data["sentence"][element] test.append(dico) test[0] import torchaudio def speech_file_to_array_fn(batch): speech_array, sampling_rate = 
torchaudio.load(os.path.join(os.getcwd(), "data", "wav_file", batch["path"])) batch["speech"] = speech_array[0].numpy() batch["sampling_rate"] = sampling_rate batch["target_text"] = batch["sentence"] del batch['path'] del batch["sentence"] return batch train_batch = [] for audio in tqdm(train): train_batch.append(speech_file_to_array_fn(audio)) test_batch = [] for audio in tqdm(test): test_batch.append(speech_file_to_array_fn(audio)) print(len(train_batch)) print(len(test_batch)) import IPython.display as ipd import numpy as np import random rand_int = random.randint(0, len(train_batch)-1) ipd.Audio(data=np.asarray(train_batch[rand_int]["speech"]), autoplay=True, rate=16000) rand_int = random.randint(0, len(train_batch)-1) print("Target text:", train_batch[rand_int]["target_text"]) print("Input array shape:", np.asarray(train_batch[rand_int]["speech"]).shape) print("Sampling rate:", train_batch[rand_int]["sampling_rate"]) train_final = [] batch = dict() for element in range(0, len(train_batch) - 8, 8): batch["speech"] = [] batch["sampling_rate"] = [] batch["target_text"] = [] for batch_size in range(8): batch["speech"].append(train_batch[element + batch_size]["speech"]) batch["target_text"].append(train_batch[element + batch_size]["target_text"]) batch["sampling_rate"].append(train_batch[element + batch_size]["sampling_rate"]) batch["input_values"] = processor(batch["speech"], sampling_rate=batch["sampling_rate"][0]).input_values with processor.as_target_processor(): batch["labels"] = processor(batch["target_text"]).input_ids train_final.append(batch) batch_test = dict() test_final = [] for element in range(0, len(test_batch) - 8, 8): batch_test["speech"] = [] batch_test["sampling_rate"] = [] batch_test["target_text"] = [] for batch_size in range(8): batch_test["speech"].append(test_batch[element + batch_size]["speech"]) batch_test["target_text"].append(test_batch[element + batch_size]["target_text"]) batch_test["sampling_rate"].append(test_batch[element + batch_size]["sampling_rate"]) batch_test["input_values"] = processor(batch["speech"], sampling_rate=batch_test["sampling_rate"][0]).input_values with processor.as_target_processor(): batch_test["labels"] = processor(batch_test["target_text"]).input_ids test_final.append(batch_test) import torch from dataclasses import dataclass, field from typing import Any, Dict, List, Optional, Union @dataclass class DataCollatorCTCWithPadding: """ Data collator that will dynamically pad the inputs received. Args: processor (:class:`~transformers.Wav2Vec2Processor`) The processor used for proccessing the data. padding (:obj:`bool`, :obj:`str` or :class:`~transformers.tokenization_utils_base.PaddingStrategy`, `optional`, defaults to :obj:`True`): Select a strategy to pad the returned sequences (according to the model's padding side and padding index) among: * :obj:`True` or :obj:`'longest'`: Pad to the longest sequence in the batch (or no padding if only a single sequence if provided). * :obj:`'max_length'`: Pad to a maximum length specified with the argument :obj:`max_length` or to the maximum acceptable input length for the model if that argument is not provided. * :obj:`False` or :obj:`'do_not_pad'` (default): No padding (i.e., can output a batch with sequences of different lengths). max_length (:obj:`int`, `optional`): Maximum length of the ``input_values`` of the returned list and optionally padding length (see above). 
max_length_labels (:obj:`int`, `optional`): Maximum length of the ``labels`` returned list and optionally padding length (see above). pad_to_multiple_of (:obj:`int`, `optional`): If set will pad the sequence to a multiple of the provided value. This is especially useful to enable the use of Tensor Cores on NVIDIA hardware with compute capability >= 7.5 (Volta). """ processor: Wav2Vec2Processor padding: Union[bool, str] = True truncated: Union[bool, str] = True max_length: Optional[int] = None max_length_labels: Optional[int] = None pad_to_multiple_of: Optional[int] = None pad_to_multiple_of_labels: Optional[int] = None def __call__(self, features: List[Dict[str, Union[List[int], torch.Tensor]]]) -> Dict[str, torch.Tensor]: # split inputs and labels since they have to be of different lenghts and need # different padding methods input_features = [{"input_values": feature["input_values"]} for feature in features] label_features = [{"input_ids": feature["labels"]} for feature in features] batch = self.processor.pad( input_features, padding=self.padding, max_length=self.max_length, pad_to_multiple_of=self.pad_to_multiple_of, return_tensors="pt", ) with self.processor.as_target_processor(): labels_batch = self.processor.pad( label_features, padding=self.padding, max_length=self.max_length_labels, pad_to_multiple_of=self.pad_to_multiple_of_labels, return_tensors="pt", ) # replace padding with -100 to ignore loss correctly labels = labels_batch["input_ids"].masked_fill(labels_batch.attention_mask.ne(1), -100) batch["labels"] = labels return batch data_collator = DataCollatorCTCWithPadding(processor=processor, padding=True) from jiwer import wer def compute_metrics(pred): pred_logits = pred.predictions pred_ids = np.argmax(pred_logits, axis=-1) pred.label_ids[pred.label_ids == -100] = processor.tokenizer.pad_token_id pred_str = processor.batch_decode(pred_ids) # we do not want to group tokens when computing the metrics label_str = processor.batch_decode(pred.label_ids, group_tokens=False) wer = wer(predictions=pred_str, references=label_str) return {"wer": wer} from transformers import Wav2Vec2ForCTC model = Wav2Vec2ForCTC.from_pretrained( "facebook/wav2vec2-base", attention_dropout=0.1, hidden_dropout=0.1, feat_proj_dropout=0.0, mask_time_prob=0.05, layerdrop=0.1, gradient_checkpointing=True, ctc_loss_reduction="mean", pad_token_id=processor.tokenizer.pad_token_id, vocab_size=len(processor.tokenizer) ) model.freeze_feature_extractor() from transformers import TrainingArguments training_args = TrainingArguments( output_dir="/home/kamil/wav2vec_commonVoice/Checkpoints", # output_dir="./wav2vec2-large-xlsr-turkish-demo", group_by_length=True, per_device_train_batch_size=16, gradient_accumulation_steps=2, evaluation_strategy="steps", num_train_epochs=1000, fp16=True, save_steps=400, eval_steps=400, logging_steps=400, learning_rate=0.00005, warmup_steps=500, save_total_limit=2 ) from transformers import Trainer trainer = Trainer( model=model, data_collator=data_collator, args=training_args, compute_metrics=compute_metrics, train_dataset=train_final, eval_dataset=test_final, tokenizer=processor.feature_extractor, ) trainer.train() ``` The tasks I am working on is: * Finetuning the model my own task or dataset: (give details below) ## To reproduce Since Im fintuning on my own data, I can't use load_dataset method from datasets. So I changed some .map functions (as shown above) so I can have the same data structure to finetune the model. 
I had the following error: `ValueError: Unable to create tensor, you should probably activate truncation and/or padding with 'padding=True' 'truncation=True' to have batched tensors with the same length.` occured chen the train part is launched. Any ideas ? Thanks
05-04-2021 12:28:47
05-04-2021 12:28:47
@Kamilbentounes Were you able to resolve this error? If so, how did you do it? I'm also having the same error.<|||||>I am having the same error again in the following notebook. It occurs on cell # 3 https://colab.research.google.com/github/huggingface/notebooks/blob/master/course/chapter7/section5_pt.ipynb<|||||>You could load your local 'csv' files with 🤗 [datasets load_dataset](https://huggingface.co/docs/datasets/loading.html#local-and-remote-files). I hope this helps.
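Building on the last comment: local TSV/CSV files can go straight through `datasets.load_dataset`, which keeps the data in the shape the notebook's `.map` functions expect. The file names and the tab separator below are assumptions based on the report.

```python
from datasets import load_dataset

data_files = {"train": "data/wav_file/train.tsv", "test": "data/wav_file/valid.tsv"}
dataset = load_dataset("csv", data_files=data_files, sep="\t")

print(dataset)
print(dataset["train"][0])  # expects columns such as "path" and "sentence"

# from here the original notebook's dataset.map(...) calls can be reused as-is,
# instead of hand-rolling Python lists of pre-batched dicts.
```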
transformers
11,579
closed
Support saving (and loading) models to a remote bucket
# 🚀 Feature request Support saving and loading models to and from remote buckets, e.g. Google Cloud Storage and Amazon S3. ## Motivation Currently this is not supported, making it more difficult to train in AWS/GCP.
05-04-2021 12:22:42
05-04-2021 12:22:42
For reference, pandas leverages `s3fs` and `gcsfs`. A generic implementation is available through `fsspec.open()` (see [docs](https://filesystem-spec.readthedocs.io/en/latest/usage.html#higher-level)). Would the maintainers be open to that? I would be happy to contribute.<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread. Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored.<|||||>This seems to be a useful feature. Any thoughts on implementing it?<|||||>[Hugging Face Datasets](https://huggingface.co/docs/datasets/filesystems.html) supports this feature. It really looks like a useful feature that `PreTrainedModel` doesn't support today.
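Until (or unless) this lands natively, a workable pattern is to `save_pretrained` to a local directory and copy it with `fsspec`, mirroring what pandas does. A hedged sketch: the bucket path is a placeholder, `s3fs`/`gcsfs` must be installed for the corresponding protocol, and the exact directory nesting produced by `put`/`get` may need adjusting.

```python
import tempfile
import fsspec
from transformers import AutoModel, AutoTokenizer

model = AutoModel.from_pretrained("bert-base-cased")
tokenizer = AutoTokenizer.from_pretrained("bert-base-cased")

# upload: save locally, then copy the directory to the bucket
with tempfile.TemporaryDirectory() as tmp_dir:
    model.save_pretrained(tmp_dir)
    tokenizer.save_pretrained(tmp_dir)
    fs = fsspec.filesystem("s3")  # or "gcs" for Google Cloud Storage
    fs.put(tmp_dir, "my-bucket/models/bert-base-cased", recursive=True)

# download: copy the directory back, then from_pretrained on the local copy
with tempfile.TemporaryDirectory() as tmp_dir:
    fs = fsspec.filesystem("s3")
    fs.get("my-bucket/models/bert-base-cased", tmp_dir, recursive=True)
    model = AutoModel.from_pretrained(tmp_dir)
```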
transformers
11,578
closed
How to use `model.generate` with a custom model that has additional parameters?
Hi, I'm working on a custom `BartForConditionalGeneration` model that has additional parameters, as follows. Parameters with the `'pos_'` prefix are the ones I added personally.
```
class SynSemBartForConditionalGeneration(BartPretrainedModel):
    ...
    def forward(
        self,
        input_ids=None,
        attention_mask=None,
        decoder_input_ids=None,
        decoder_attention_mask=None,
        head_mask=None,
        decoder_head_mask=None,
        encoder_outputs=None,
        past_key_values=None,
        inputs_embeds=None,
        decoder_inputs_embeds=None,
        labels=None,
        use_cache=None,
        output_attentions=None,
        output_hidden_states=None,
        return_dict=None,
        pos_input_ids=None,
        pos_attention_mask=None,
        pos_head_mask=None,
        pos_inputs_embeds=None,
        pos_output_attentions=None,
        pos_output_hidden_states=None,
        pos_return_dict=None,
    ):
```
I trained this model successfully, thanks to the huggingface Trainer. Then I tried to run `model.generate(input_ids=input_ids, pos_input_ids=pos_input_ids)`. I thought `pos_input_ids` was **"Additional model specific kwargs"** and that it would **be forwarded to the `forward` function** of my model. However, it keeps giving me `TypeError: forward() got an unexpected keyword argument 'pos_input_ids'`. I would appreciate any help 😥
05-04-2021 12:16:43
05-04-2021 12:16:43
Hi there, Could you post the full stack trace ? Also, does the `encoder`s forward has this argument? Because if you see here, the extra `kwargs` are passed to the encoder as well https://github.com/huggingface/transformers/blob/09b0bcfea98eb9e863248fd29566b6d1caf9a0ea/src/transformers/generation_utils.py#L410-L414<|||||>Does the full stack trace mean full error message? ``` Traceback (most recent call last): File "<stdin>", line 1, in <module> File "/home/yeoun/.local/lib/python3.6/site-packages/torch/autograd/grad_mode.py", line 27, in decorate_context return func(*args, **kwargs) File "/home/yeoun/.local/lib/python3.6/site-packages/transformers/generation_utils.py", line 927, in generate model_kwargs = self._prepare_encoder_decoder_kwargs_for_generation(input_ids, model_kwargs) File "/home/yeoun/.local/lib/python3.6/site-packages/transformers/generation_utils.py", line 412, in _prepare_encoder_decoder_kwargs_for_generation model_kwargs["encoder_outputs"]: ModelOutput = encoder(input_ids, return_dict=True, **encoder_kwargs) File "/home/yeoun/.local/lib/python3.6/site-packages/torch/nn/modules/module.py", line 889, in _call_impl result = self.forward(*input, **kwargs) TypeError: forward() got an unexpected keyword argument 'pos_input_ids' ``` `encoder`'s `forward` doesn't have this argument. I connected two separate encoders, all has the same parameters as the original Bart encoder, but in the `model`'s `forward` method I used 'pos_' prefix for the inputs of the second encoder to distinct them. ``` class SynSemBartModel(BartPretrainedModel): ... def forward( self, input_ids=None, attention_mask=None, decoder_input_ids=None, decoder_attention_mask=None, head_mask=None, decoder_head_mask=None, encoder_outputs=None, past_key_values=None, inputs_embeds=None, decoder_inputs_embeds=None, use_cache=None, output_attentions=None, output_hidden_states=None, return_dict=None, pos_input_ids=None, pos_attention_mask=None, pos_head_mask=None, pos_inputs_embeds=None, pos_output_attentions=None, pos_output_hidden_states=None, pos_return_dict=None, pos_encoder_outputs=None, ): ... encoder_outputs = self.encoder( input_ids=input_ids, attention_mask=attention_mask, head_mask=head_mask, inputs_embeds=inputs_embeds, output_attentions=output_attentions, output_hidden_states=output_hidden_states, return_dict=return_dict, ) ... pos_encoder_outputs = self.pos_encoder( input_ids=pos_input_ids, attention_mask=pos_attention_mask, head_mask=pos_head_mask, inputs_embeds=pos_inputs_embeds, output_attentions=pos_output_attentions, output_hidden_states=pos_output_hidden_states, return_dict=pos_return_dict, ) ``` I added two `encoders`' outputs and then feed into the `decoder`. Should I change my model structure in order to use this function? or is there any alternatives? Thanks <|||||>In this case, you could override the `_prepare_encoder_decoder_kwargs_for_generation` method and get the necessary output from both your encoder, add them, and then return that as the `last_hidden_state` of the encoder. The output is an instance of `BaseModelOutputClass`. Another option is to use the generation methods separately, where you compute the necessary `encoder_outputs` and then pass that to `beam_serach` or whatever method you are using. 
Please refer to [beam_search](https://huggingface.co/transformers/main_classes/model.html#transformers.generation_utils.GenerationMixin.beam_search) docstring to see an example<|||||>It works after I override `_prepare_encoder_decoder_kwargs_for_generation` and `prepare_inputs_for_generation` method. Thanks a lot!
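For readers landing here with the same problem, below is a hedged sketch of the override that resolved it. The two-argument signature of `_prepare_encoder_decoder_kwargs_for_generation` follows the traceback above and has changed in later transformers versions, and the attribute names `self.model.encoder` / `self.model.pos_encoder` are assumptions about the custom model rather than real library API.
```python
# Sketch: compute both encoder outputs ourselves so generate() never forwards pos_* kwargs
# to an encoder that does not accept them.
from transformers import BartForConditionalGeneration
from transformers.modeling_outputs import BaseModelOutput


class SynSemBartForConditionalGeneration(BartForConditionalGeneration):  # assumed base class
    def _prepare_encoder_decoder_kwargs_for_generation(self, input_ids, model_kwargs):
        # Pull out the custom inputs so the stock encoder never sees them.
        pos_input_ids = model_kwargs.pop("pos_input_ids", None)
        pos_attention_mask = model_kwargs.pop("pos_attention_mask", None)

        encoder_kwargs = {
            k: v for k, v in model_kwargs.items() if not k.startswith("decoder_")
        }
        enc_out = self.model.encoder(input_ids, return_dict=True, **encoder_kwargs)
        pos_out = self.model.pos_encoder(
            pos_input_ids, attention_mask=pos_attention_mask, return_dict=True
        )

        # Sum the two encoders, mirroring what the custom forward() does at train time.
        combined = enc_out.last_hidden_state + pos_out.last_hidden_state
        model_kwargs["encoder_outputs"] = BaseModelOutput(last_hidden_state=combined)
        return model_kwargs
```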
transformers
11,577
closed
potential mismatch between `save_pretrained` and `from_pretrained` for `AutoTokenizer`
## Environment info - `transformers` version: 4.5.1 - Platform: Linux-4.15.0-48-generic-x86_64-with-debian-buster-sid - Python version: 3.7.3 - PyTorch version (GPU?): 1.6.0+cu101 (True) - Tensorflow version (GPU?): not installed (NA) - Using GPU in script?: No - Using distributed or parallel set-up in script?: No ### Who can help Since it is related to tokenizer, I guess @LysandreJik could help to have a look. ## Information Model I am using (Bert, XLNet ...): `AutoTokenizer` I tried to save the official pretrained tokenizer `bert-base-multilingual-cased` into a local folder so I can load it from this folder next time. I got the official `bert-base-multilingual-cased` tokenizer: ```python from transformers import AutoTokenizer tokenizer = AutoTokenizer.from_pretrained( pretrained_model_name_or_path='bert-base-multilingual-cased', use_fast=True, ) ``` The tokenizer looks like this: ``` PreTrainedTokenizerFast(name_or_path='bert-base-multilingual-cased', vocab_size=119547, model_max_len=512, is_fast=True, padding_side='right', special_tokens={'unk_token': '[UNK]', 'sep_token': '[SEP]', 'pad_token': '[PAD]', 'cls_token': '[CLS]', 'mask_token': '[MASK]'}) ``` Tried to save it locally: ```python tokenizer.save_pretrained('bert-base-multilingual-cased-good') ``` The folder structure after saving: ``` bert-base-multilingual-cased-good/ ├── special_tokens_map.json ├── tokenizer_config.json └── vocab.txt ``` However, when I tried to load the tokenizer from the local folder, I got this error: ```python fail_tokenizer = AutoTokenizer.from_pretrained( pretrained_model_name_or_path='bert-base-multilingual-cased-good', use_fast=True, ) ``` error message: ``` OSError: Can't load config for 'bert-base-multilingual-cased-good'. Make sure that: - 'bert-base-multilingual-cased-good' is a correct model identifier listed on 'https://huggingface.co/models' - or 'bert-base-multilingual-cased-good' is the correct path to a directory containing a config.json file ``` Seems the code was looking for `config.json` under this folder. :thinking: According to the codes below, I guess the assumption should be correct: - https://github.com/huggingface/transformers/blob/04ab2ca639ee6fd1002ce0a498d892245f0c9093/src/transformers/configuration_utils.py#L456 - https://github.com/huggingface/transformers/blob/04ab2ca639ee6fd1002ce0a498d892245f0c9093/src/transformers/file_utils.py#L218 After googling, I found an answer from stackoverflow that we could just rename the `tokenizer_config.json` as `config.json`. That sounded nice, so I just tried it, now my folder structure became: (I use a new folder `bert-base-multilingual-cased-bad` just for showcase) ``` bert-base-multilingual-cased-bad ├── config.json ├── special_tokens_map.json └── vocab.txt ``` Then I tried to load from it: ```python bad_tokenizer = AutoTokenizer.from_pretrained( pretrained_model_name_or_path='bert-base-multilingual-cased-bad', use_fast=True, ) ``` It works! :100: However, now the tokenizer became very weird: ``` PreTrainedTokenizerFast(name_or_path='bert-base-multilingual-cased-bad', vocab_size=119547, model_max_len=1000000000000000019884624838656, is_fast=True, padding_side='right', special_tokens={'unk_token': '[UNK]', 'sep_token': '[SEP]', 'pad_token': '[PAD]', 'cls_token': '[CLS]', 'mask_token': '[MASK]'}) ``` Well, the `model_max_len` became `1000000000000000019884624838656` instead of `512`. After further checking other attributes that were included in the original `tokenizer_config.json`, I found even `do_lower_case` now becomes `True`, scary... 
:fearful: I bet the `bad_tokenizer` didn't take into account the settings from `config.json`, it might just initialize with the default settings instead. So I tried to copy `config.json` and renamed it as `tokenizer_config.json`, now my folder structure looks like this: (I got both `config.json` and `tokenizer_config.json` although they are just the same file with different names) ``` bert-base-multilingual-cased-work ├── config.json ├── special_tokens_map.json ├── tokenizer_config.json └── vocab.txt ``` Although it looks weird, it works as expected: (I use a new folder `bert-base-multilingual-cased-work` just for showcase) ```python work_tokenizer = AutoTokenizer.from_pretrained( pretrained_model_name_or_path='bert-base-multilingual-cased-work', use_fast=True, ) ``` Now the tokenizer seems fine: ``` PreTrainedTokenizerFast(name_or_path='bert-base-multilingual-cased-work', vocab_size=119547, model_max_len=512, is_fast=True, padding_side='right', special_tokens={'unk_token': '[UNK]', 'sep_token': '[SEP]', 'pad_token': '[PAD]', 'cls_token': '[CLS]', 'mask_token': '[MASK]'}) ``` ### Why I think this could be severe? Initially, I just renamed `tokenizer_config.json` as `config.json` and deployed my NLP model (that's why I need to save the tokenizer locally) without checking the tokenizer in detail. However, I found the results from the deployed model didn't match with the local ones. So I did some investigation and found that this issue. If you also encountered similar issues, maybe you can try to check your tokenizer. I think it doesn't make sense that the code looks for `config.json` in function `from_pretrained` while there is no `config.json` generated when function `save_pretrained` is called (https://github.com/huggingface/transformers/blob/04ab2ca639ee6fd1002ce0a498d892245f0c9093/src/transformers/tokenization_utils_base.py#L110). It's better to unify the config naming system, otherwise, it might cause severe issues that are very hard to spot.
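As a hedged side note (not part of the original report): with the transformers version above, one way to sidestep the `config.json` lookup is to reload the saved folder through the concrete tokenizer class instead of `AutoTokenizer`, since the class no longer needs a model config to be resolved. Whether every setting such as `model_max_length` round-trips exactly depends on the version, so the expectation below is an assumption.
```python
# Sketch: save and reload the tokenizer via its concrete class, no config.json required.
from transformers import BertTokenizerFast

tokenizer = BertTokenizerFast.from_pretrained("bert-base-multilingual-cased")
tokenizer.save_pretrained("bert-base-multilingual-cased-local")

reloaded = BertTokenizerFast.from_pretrained("bert-base-multilingual-cased-local")
print(reloaded.model_max_length)  # expected: 512, matching the original tokenizer
```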
05-04-2021 10:15:39
05-04-2021 10:15:39
This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread. Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored.<|||||>Faced with the same issue<|||||>Hi! This is close to something @SaulLu is working on. Thanks for opening this issue, we're keeping an eye on it and will resolve it.<|||||>> Hi! This is close to something @SaulLu is working on. Thanks for opening this issue, we're keeping an eye on it and will resolve it. Thank you! 🤩<|||||>This is fixed on main! Closing 😉 <|||||>This issue still persists. Downloaded files from https://huggingface.co/bert-base-multilingual-cased/tree/main and tried to create a tokenizer using from_pretrained from local files <img width="947" alt="Screenshot 2023-06-01 at 17 26 29" src="https://github.com/huggingface/transformers/assets/22324507/943d1b83-8801-4b61-8bd0-ae7ef639a926"> <|||||>Hey! If your issue is about a potential mismatch (but not about the files) please open a separate issue with a reproducing script so that we can help you.
transformers
11,576
closed
no connection error
- `transformers` version: 4.5.0 - Platform: linux - Python version: 3.8 - PyTorch version (GPU?): 1.7.1 - Tensorflow version (GPU?): - Using GPU in script?: yes - Using distributed or parallel set-up in script?: Models: mbart and mt5 @patrickvonplaten, @patil-suraj output: ValueError: Connection error, and we cannot find the requested files in the cached path. Please try again or make sure your Internet connection is on.
05-04-2021 09:57:40
05-04-2021 09:57:40
Please put the code you used that led to this error.<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread. Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored.
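Since the thread never got a reproduction, here is only a hedged workaround pattern for flaky connections: download the checkpoint once while online, then load it from the local cache with `local_files_only=True` (a `from_pretrained` argument available in this transformers version). The checkpoint name below is just an example.
```python
# Sketch: avoid network lookups for an already-cached mBART checkpoint.
from transformers import AutoModelForSeq2SeqLM, AutoTokenizer

model_name = "facebook/mbart-large-cc25"  # any checkpoint that was downloaded before

tokenizer = AutoTokenizer.from_pretrained(model_name, local_files_only=True)
model = AutoModelForSeq2SeqLM.from_pretrained(model_name, local_files_only=True)
```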
transformers
11,575
closed
longform
## Environment info <!-- You can run the command `transformers-cli env` and copy-and-paste its output below. Don't forget to fill out the missing fields in that output! --> - `transformers` version: - Platform: - Python version: - PyTorch version (GPU?): - Tensorflow version (GPU?): - Using GPU in script?: - Using distributed or parallel set-up in script?: ### Who can help <!-- Your issue will be replied to more quickly if you can figure out the right person to tag with @ If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**. Please tag fewer than 3 people. Models: - albert, bert, xlm: @LysandreJik - blenderbot, bart, marian, pegasus, encoderdecoder, t5: @patrickvonplaten, @patil-suraj - longformer, reformer, transfoxl, xlnet: @patrickvonplaten - fsmt: @stas00 - funnel: @sgugger - gpt2: @patrickvonplaten, @LysandreJik - rag: @patrickvonplaten, @lhoestq - tensorflow: @Rocketknight1 Library: - benchmarks: @patrickvonplaten - deepspeed: @stas00 - ray/raytune: @richardliaw, @amogkam - text generation: @patrickvonplaten - tokenizers: @LysandreJik - trainer: @sgugger - pipelines: @LysandreJik Documentation: @sgugger Model hub: - for issues with a model report at https://discuss.huggingface.co/ and tag the model's creator. HF projects: - datasets: [different repo](https://github.com/huggingface/datasets) - rust tokenizers: [different repo](https://github.com/huggingface/tokenizers) Examples: - maintained examples (not research project or legacy): @sgugger, @patil-suraj - research_projects/bert-loses-patience: @JetRunner - research_projects/distillation: @VictorSanh --> ## Information Model I am using (Bert, XLNet ...): The problem arises when using: * [ ] the official example scripts: (give details below) * [ ] my own modified scripts: (give details below) The tasks I am working on is: * [ ] an official GLUE/SQUaD task: (give the name) * [ ] my own task or dataset: (give details below) ## To reproduce Steps to reproduce the behavior: 1. 2. 3. <!-- If you have code snippets, error messages, stack traces please provide them here as well. Important! Use code tags to correctly format your code. See https://help.github.com/en/github/writing-on-github/creating-and-highlighting-code-blocks#syntax-highlighting Do not use screenshots, as they are hard to read and (more importantly) don't allow others to copy-and-paste your code.--> ## Expected behavior <!-- A clear and concise description of what you would expect to happen. -->
05-04-2021 06:39:50
05-04-2021 06:39:50
transformers
11,574
closed
Number of BART layers is confusing in the Pretrained models page
In the [Pretrained models](https://huggingface.co/transformers/pretrained_models.html) page, facebook/bart-large is described as 24-layer, 1024-hidden, 16-heads, 406M parameters, and facebook/bart-base as 12-layer, 768-hidden, 16-heads, 139M parameters. I think this is a little confusing because bart-base consists of 6 encoder layers and 6 decoder layers, and bart-large consists of 12 layers each. (You can check in [the fairseq repo](https://github.com/pytorch/fairseq/tree/master/examples/bart)) I think specifying the layers as: "N encoder layers, N decoder layers" would be a lot clearer
05-04-2021 04:10:14
05-04-2021 04:10:14
Hi @mnskim Usually, we specify the total number of layers in a model instead of the layers of an individual component (i.e. encoder or decoder). And seq2seq models like BART and T5 usually use the same number of layers in both the encoder and decoder, i.e. 6-6 or 12-12. We could specify the encoder and decoder layers if they are different. But yeah, if you think specifying encoder and decoder layers is clearer then feel free to open a PR :) We could write 12 layers (6 encoder and 6 decoder layers). Thanks. <|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread. Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored.<|||||>There is another issue with this page. facebook/bart-base actually contains 12 heads, not 16 heads. Unfortunately, the file for this page has been removed, so it cannot be updated anymore. The wrong page is still available on Google, though. ![](https://user-images.githubusercontent.com/68557794/160236798-5fb25524-5851-48a1-b6c6-605b9afb197d.png) ![](https://user-images.githubusercontent.com/68557794/160236815-40cd6335-cc5b-40f4-93a5-59a2f7d6788f.png)
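For anyone who wants to double-check the split for a given checkpoint, a minimal sketch (independent of the docs page) is to read the counts straight from the config:
```python
# Sketch: print encoder/decoder depth and head counts for both BART checkpoints.
from transformers import AutoConfig

for name in ["facebook/bart-base", "facebook/bart-large"]:
    cfg = AutoConfig.from_pretrained(name)
    print(
        name,
        "encoder_layers:", cfg.encoder_layers,
        "decoder_layers:", cfg.decoder_layers,
        "attention_heads:", cfg.encoder_attention_heads,
    )
```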
transformers
11,573
closed
Make quality scripts work when one backend is missing.
# What does this PR do? This PR makes the quality scripts issue a warning instead of erroring when one backend is missing (which shouldn't be the case since the contributing guide says to run `pip install -e .[dev]` but we never know). It's also fine since a user is unlikely to make changes to a specific backend of the library if it's not even installed. Fixes #11570
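A minimal sketch of the pattern this PR describes (not the actual diff): detect a missing backend with `importlib.util.find_spec` and downgrade the failure to a warning so the remaining checks still run.
```python
# Sketch: skip backend-specific repo checks with a warning instead of a hard error.
import importlib.util
import warnings


def backend_available(module_name: str) -> bool:
    return importlib.util.find_spec(module_name) is not None


def run_backend_checks():
    for backend, module in [("PyTorch", "torch"), ("TensorFlow", "tensorflow"), ("Flax", "flax")]:
        if not backend_available(module):
            warnings.warn(f"{backend} is not installed; skipping its quality checks.")
            continue
        # ... run the checks that need this backend here ...


if __name__ == "__main__":
    run_backend_checks()
```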
05-04-2021 00:09:18
05-04-2021 00:09:18
The CI will have all backends installed. This takes care of contributors that cannot install one backend for one reason or another (for instance if jax/flax was not available on Windows which wouldn't surprise me).<|||||>> The CI will have all backends installed. All is good then. I totally agree, that not needing to force someone to install libs they don't need is a goodness. Very awesome!<|||||>Oh very good idea, that way we get the best of both worlds! Thanks for the suggestion, will apply that this morning and merge.
transformers
11,572
closed
[Deepspeed] fix resize_token_embeddings
The introduction of `model.resize_token_embeddings(len(tokenizer))` in https://github.com/huggingface/transformers/commit/57c8e822f7faa1c19f9926338f21f3aab2269997#diff-09777f56cee1060a535a72ce99a6c96cdb7f330c8cc3f9dcca442b3f7768237a uncovered a bug in the Deepspeed integration. We have to create the new resized embedding with the same dtype as the original; if not, deepspeed will fail to initialize with the error `"Model must initialized in fp16 mode for ZeRO Stage 3."`. This PR fixes that bug. @sgugger
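Below is a hedged sketch of the idea rather than the literal patch: when building the resized embedding, create it with the dtype and device of the old weight before copying the existing rows, so a model that was already cast to fp16 for ZeRO-3 stays fp16.
```python
# Sketch: resize an embedding while preserving the original parameter's dtype/device.
import torch
import torch.nn as nn


def resize_embedding(old: nn.Embedding, new_num_tokens: int) -> nn.Embedding:
    old_num_tokens, dim = old.weight.shape
    new = nn.Embedding(new_num_tokens, dim)
    # The crux of the fix: match the old weight's dtype and device (e.g. fp16 on GPU).
    new.to(old.weight.device, dtype=old.weight.dtype)
    num_to_copy = min(old_num_tokens, new_num_tokens)
    with torch.no_grad():
        new.weight[:num_to_copy, :] = old.weight[:num_to_copy, :]
    return new
```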
05-03-2021 19:15:07
05-03-2021 19:15:07
transformers
11,571
closed
reformer-enwik8 Fine-tuning
Could you please provide a demo for training and fine-tuning the 'reformer-enwik8' model? @patrickvonplaten
05-03-2021 18:08:23
05-03-2021 18:08:23
Hi, you could use [this](https://colab.research.google.com/github/patrickvonplaten/notebooks/blob/master/PyTorch_Reformer.ipynb) notebook as a reference; it shows how to train reformer on the crime-and-punishment data. You can usually find such examples in the community notebooks section [here](https://huggingface.co/transformers/community.html). Also, please use the [forum](https://discuss.huggingface.co/) to ask such questions.<|||||>@patil-suraj Thank you for your reply. Actually, I do not know how to build the enwik8 dataset; it does not need a tokenizer, which is different from the others. I am a beginner. <|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread. Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored.
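Since enwik8 is modelled at the character/byte level, no subword tokenizer is needed; a hedged sketch of one simple byte-level encoding is below. The `+2` id offset (reserving 0 and 1 for special tokens) is an assumption for illustration, not necessarily the exact scheme used by the `reformer-enwik8` checkpoint.
```python
# Sketch: byte-level "tokenization" for enwik8-style character language modelling.
import torch


def encode(text: str) -> torch.Tensor:
    # Shift every byte by 2 so ids 0 and 1 stay free for special tokens.
    return torch.tensor([[b + 2 for b in text.encode("utf-8")]])


def decode(ids: torch.Tensor) -> str:
    return bytes(max(i - 2, 0) for i in ids.squeeze(0).tolist()).decode("utf-8", errors="ignore")


sample = "Anarchism is a political philosophy."
ids = encode(sample)
print(ids.shape)    # torch.Size([1, 36])
print(decode(ids))  # round-trips back to the original string
```
From there, fixed-length chunks of the encoded text can be fed to the Reformer as `input_ids`, much as the crime-and-punishment notebook does with its character mapping.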
transformers
11,570
closed
[fixup/style] requires TF but doesn't say that cleanly
It looks like `make fixup` et al, require tf (and pt), but fail in unexpected way when the requirements are missing: ``` python utils/check_repo.py Checking all models are properly tested. Checking all objects are properly documented. Checking all models are in at least one auto class. Traceback (most recent call last): File "/home/michael/projects/transformers/utils/check_repo.py", line 481, in <module> check_repo_quality() File "/home/michael/projects/transformers/utils/check_repo.py", line 477, in check_repo_quality check_all_models_are_auto_configured() File "/home/michael/projects/transformers/utils/check_repo.py", line 290, in check_all_models_are_auto_configured all_auto_models = get_all_auto_configured_models() File "/home/michael/projects/transformers/utils/check_repo.py", line 253, in get_all_auto_configured_models for attr_name in dir(transformers.models.auto.modeling_tf_auto): File "/home/michael/projects/transformers/src/transformers/file_utils.py", line 1690, in __getattr__ raise AttributeError(f"module {self.__name__} has no attribute {name}") AttributeError: module transformers.models.auto has no attribute modeling_tf_auto make: *** [Makefile:35 : extra_quality_checks] Erreur 1 ``` Thank you, @michaelbenayoun for flagging this Should we add a small script first that checks all the requirements so that the error is not misleading, but something like: "need to install `pip install -e .[dev]` to develop `transformers`"? @sgugger, @LysandreJik
05-03-2021 16:48:59
05-03-2021 16:48:59
The same thing happens with flax.<|||||>This should take care of all the deps: ``` pip install -e .[dev] ``` Please let us know if it didn't.<|||||>I don't think we need a new script for that. Maybe add the check inside the script that fails (`check_all_models_are_auto_configured`) and issue a warning if not all backends are detected (I don't think we need to error out, since it's unlikely the user will bring changes that break a backend when that backend is not even installed)? I can do this later this evening or tomorrow.<|||||>Also with a bit of some further massaging of `setup.py`'s `extras`, we could automate this - basically need to be able to load `extras[dev]` outside of `setup.py`, so the check could be to simply import everything that is in `extras[dev]`.<|||||>Note that this specific script only relies on the model backends only, so no need for the whole of dev yet.<|||||>If it's easier - then by all means. I just thought that if we already maintain `extras[dev]` then any dev tool could just have that as pre-requisite.
transformers
11,569
closed
Fix metric computation in `run_glue_no_trainer`
# What does this PR do? There is a problem in the metric computation in `run_glue_no_trainer`, which always takes the argmax of the predictions regardless of the task: when the task is a regression, the argmax should not be taken. Fixes #11555
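A hedged sketch of the kind of change this implies (not necessarily the exact diff): only take the argmax for classification and squeeze the logits for the regression task (STS-B).
```python
# Sketch: pick predictions according to whether the GLUE task is a regression.
import numpy as np


def to_predictions(logits: np.ndarray, is_regression: bool) -> np.ndarray:
    return np.squeeze(logits) if is_regression else np.argmax(logits, axis=-1)


print(to_predictions(np.array([[0.3], [2.7], [4.1]]), is_regression=True))      # [0.3 2.7 4.1]
print(to_predictions(np.array([[0.2, 0.8], [1.5, 0.1]]), is_regression=False))  # [1 0]
```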
05-03-2021 15:21:36
05-03-2021 15:21:36
transformers
11,568
closed
`TypeError: TextInputSequence must be str` from Fast Tokenizer
### Bug On version 4.5.1, trying to use fast tokenizer for Roberta, and got the above error. Weirdly, saw `transformers/models/gpt2/tokenization_gpt2_fast.py` in the traceback even no `gpt2` models are involved. **Script to reproduce** ``` from transformers import AutoTokenizer from transformers.data.processors.utils import InputExample if __name__ == "__main__": tokenizer = AutoTokenizer.from_pretrained( "roberta-base", use_fast=True ) MAX_LENGTH = 256 LABEL_LIST = [0, 1] OUTPUT_MODE = "classificaiton" inputs = ["Ututu goes public.", "This moon is huge."] examples = [InputExample(guid=str(index), text_a=text, label=None) for index, text in enumerate(inputs)] # this throws TypeError: TextInputSequence must be str batch_encoding = tokenizer( [(example.text_a, example.text_b) for example in examples], max_length=MAX_LENGTH, padding='max_length', truncation=True ) print(batch_encoding) ``` **Traceback** ``` Traceback (most recent call last): File "tests/integration/tmp_test.py", line 22, in <module> truncation=True File "/usr/local/lib/python3.7/site-packages/transformers/tokenization_utils_base.py", line 2271, in __call__ **kwargs, File "/usr/local/lib/python3.7/site-packages/transformers/tokenization_utils_base.py", line 2456, in batch_encode_plus **kwargs, File "/usr/local/lib/python3.7/site-packages/transformers/models/gpt2/tokenization_gpt2_fast.py", line 163, in _batch_encode_plus return super()._batch_encode_plus(*args, **kwargs) File "/usr/local/lib/python3.7/site-packages/transformers/tokenization_utils_fast.py", line 388, in _batch_encode_plus is_pretokenized=is_split_into_words, TypeError: TextInputSequence must be str ``` Saw a similar issue raised for QnA pipeline, but I'm not doing QnA here. Thoughts?
05-03-2021 14:57:11
05-03-2021 14:57:11
You're sending this to your tokenizer: `[(example.text_a, example.text_b) for example in examples]` But this is: `[('Ututu goes public.', None), ('This moon is huge.', None)]`. A tokenizer cannot handle `None`, only text.<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread. Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored.
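In other words, when `text_b` is `None` the fast tokenizer should receive plain strings rather than `(text, None)` tuples. A small hedged sketch of both working call patterns:
```python
# Sketch: batch-encode single sentences (or real pairs) with a fast tokenizer.
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("roberta-base", use_fast=True)
texts = ["Ututu goes public.", "This moon is huge."]

# Single segments: pass a list of strings, not (text, None) tuples.
single = tokenizer(texts, max_length=256, padding="max_length", truncation=True)

# Sentence pairs: only build tuples when a second segment really exists.
pairs = [("Ututu goes public.", "The company announced it today.")]
paired = tokenizer(pairs, max_length=256, padding="max_length", truncation=True)

print(len(single["input_ids"]), len(paired["input_ids"]))  # 2 1
```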
transformers
11,567
closed
Temporary files from an interrupted download litter the disk
## Environment info - `transformers` version: 4.5.1 - Platform: Windows-10-10.0.19041-SP0 - Python version: 3.9.4 - PyTorch version (GPU?): 1.8.1+cpu (False) - Tensorflow version (GPU?): not installed (NA) - Using GPU in script?: no - Using distributed or parallel set-up in script?: no ## Information Model I am using (Bert, XLNet ...): EleutherAI/gpt-neo-125M The problem arises when using: * the official example scripts ## To reproduce Steps to reproduce the behavior: 1. run the transformers library code snippet from https://huggingface.co/EleutherAI/gpt-neo-2.7B:
```
from transformers import AutoTokenizer, AutoModelForCausalLM
tokenizer = AutoTokenizer.from_pretrained("EleutherAI/gpt-neo-2.7B")
model = AutoModelForCausalLM.from_pretrained("EleutherAI/gpt-neo-2.7B")
```
2. abort as it's downloading 3. rerun: it starts anew 4. behold a tmp file remaining until you remove it manually ## Expected behavior The download should resume where it left off, or the temp file should be deleted when the process is aborted, or at least when it is rerun. (It would also be nice if the download location was allocated on the same disk partition as the calling .py - I put my pycharm project on a partition with enough space expecting the model file to also go there; instead the remaining space on my primary partition was eaten.) (It would also be nice if the files ended up with the same name as the model.)
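Until the temp-file handling changes, two existing `from_pretrained` arguments are worth noting (hedged, since they do not fully solve the report): `resume_download=True` asks the downloader to continue a partial file, and `cache_dir` moves the cache onto a partition with enough space. The path below is a placeholder.
```python
# Sketch: retry the download with resume enabled and a cache on a roomier partition.
from transformers import AutoModelForCausalLM, AutoTokenizer

cache_dir = "D:/hf-cache"  # placeholder directory on a large partition

tokenizer = AutoTokenizer.from_pretrained(
    "EleutherAI/gpt-neo-2.7B", cache_dir=cache_dir, resume_download=True
)
model = AutoModelForCausalLM.from_pretrained(
    "EleutherAI/gpt-neo-2.7B", cache_dir=cache_dir, resume_download=True
)
```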
05-03-2021 14:10:53
05-03-2021 14:10:53
This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread. Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored.
transformers
11,566
closed
Fixes a useless warning in `generate`.
# What does this PR do? We removed `max_length` as a necessary parameter for `greedy`/`sample`/`beam_search`/`group_beam_search`. Unfortunately, an extra warning made its way into that PR, which is unwarranted. Fixes #11371 ## Before submitting - [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case). - [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section? - [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link to it if that's the case. - [ ] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/master/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/master/docs#writing-source-documentation). - [ ] Did you write any new necessary tests? ## Who can review? @stas00 @LysandreJik
05-03-2021 12:43:16
05-03-2021 12:43:16
Sorry about that ! I'll wait for @patrickvonplaten before merging.
transformers
11,565
closed
hyperparameter_search raytune: ModuleNotFoundError: No module named 'datasets_modules'
## Environment info <!-- You can run the command `transformers-cli env` and copy-and-paste its output below. Don't forget to fill out the missing fields in that output! --> - `transformers` version: 4.4.2 - Platform: Linux-4.15.0-142-generic-x86_64-with-glibc2.10 - Python version: 3.8.8 - PyTorch version (GPU?): 1.6.0 (True) - Tensorflow version (GPU?): not installed (NA) - Using GPU in script?: yes - Using distributed or parallel set-up in script?: no ### Who can help @richardliaw, @amogkam ## Information Model I am using (Bert, XLNet ...): Bert (neuralmind/bert-base-portuguese-cased) The problem arises when using: * [ ] the official example scripts: (give details below) * [ x ] my own modified scripts: (give details below) The tasks I am working on is: * [ x ] an official GLUE/SQUaD task: (give the name) * [ x ] my own task or dataset: (give details below) I'm running a modified run_ner example to use trainer.hyperparameter_search with raytune. I'm using my own datasets, but I have run into the same issue using other glue scripts and official glue datasets, such as the ones other people ran into here: [https://discuss.huggingface.co/t/using-hyperparameter-search-in-trainer/785/34](https://discuss.huggingface.co/t/using-hyperparameter-search-in-trainer/785/34) [https://discuss.huggingface.co/t/using-hyperparameter-search-in-trainer/785/35](https://discuss.huggingface.co/t/using-hyperparameter-search-in-trainer/785/35) [Colab from @piegu ](https://colab.research.google.com/drive/1I3VNCUVat3qEXxXxoY0Z_xp_viWaOuYZ?usp=sharing) At first I was using the run_ner and transformers version from the current 4.6.0-dev branch, but I ran into the same issue as reported here: #11249 So I downgraded transformers and ray to 4.4.2 and 1.2.0 (creating a fresh conda environment), and made the necessary adjustments to the run_ner script, to become compatible with 4.4.2. ## To reproduce Steps to reproduce the behavior: This is the full code from the script: ``` #!/usr/bin/env python # coding: utf-8 import json import logging import os import sys import copy from dataclasses import dataclass, field from typing import Optional, Dict, Any import numpy as np from datasets import ClassLabel, load_dataset, load_metric from ray import tune from ray.tune.integration.wandb import WandbLogger from ray.tune.logger import DEFAULT_LOGGERS from ray.tune.schedulers import PopulationBasedTraining import transformers from transformers import ( AutoConfig, AutoModelForTokenClassification, AutoTokenizer, DataCollatorForTokenClassification, HfArgumentParser, PreTrainedTokenizerFast, Trainer, TrainingArguments, set_seed, ) from transformers.trainer_utils import get_last_checkpoint, is_main_process from transformers.utils import check_min_version # Will error if the minimal version of Transformers is not installed. Remove at your own risks. check_min_version("4.4.0") logger = logging.getLogger(__name__) @dataclass class RayArguments: """[summary] """ time_budget_h: str = field( metadata={"help": "Time budget in hours."} ) @dataclass class ModelArguments: """ Arguments pertaining to which model/config/tokenizer we are going to fine-tune from. 
""" model_name_or_path: str = field( metadata={"help": "Path to pretrained model or model identifier from huggingface.co/models"} ) config_name: Optional[str] = field( default=None, metadata={"help": "Pretrained config name or path if not the same as model_name"} ) tokenizer_name: Optional[str] = field( default=None, metadata={"help": "Pretrained tokenizer name or path if not the same as model_name"} ) cache_dir: Optional[str] = field( default=None, metadata={"help": "Where do you want to store the pretrained models downloaded from huggingface.co"}, ) model_revision: str = field( default="main", metadata={"help": "The specific model version to use (can be a branch name, tag name or commit id)."}, ) use_auth_token: bool = field( default=False, metadata={ "help": "Will use the token generated when running `transformers-cli login` (necessary to use this script " "with private models)." }, ) @dataclass class DataTrainingArguments: """ Arguments pertaining to what data we are going to input our model for training and eval. """ task_name: Optional[str] = field(default="ner", metadata={"help": "The name of the task (ner, pos...)."}) dataset_name: Optional[str] = field( default=None, metadata={"help": "The name of the dataset to use (via the datasets library)."} ) dataset_config_name: Optional[str] = field( default=None, metadata={"help": "The configuration name of the dataset to use (via the datasets library)."} ) train_file: Optional[str] = field( default=None, metadata={"help": "The input training data file (a csv or JSON file)."} ) validation_file: Optional[str] = field( default=None, metadata={"help": "An optional input evaluation data file to evaluate on (a csv or JSON file)."}, ) test_file: Optional[str] = field( default=None, metadata={"help": "An optional input test data file to predict on (a csv or JSON file)."}, ) overwrite_cache: bool = field( default=False, metadata={"help": "Overwrite the cached training and evaluation sets"} ) preprocessing_num_workers: Optional[int] = field( default=None, metadata={"help": "The number of processes to use for the preprocessing."}, ) pad_to_max_length: bool = field( default=False, metadata={ "help": "Whether to pad all samples to model maximum sentence length. " "If False, will pad the samples dynamically when batching to the maximum length in the batch. More " "efficient on GPU but very bad for TPU." }, ) max_train_samples: Optional[int] = field( default=None, metadata={ "help": "For debugging purposes or quicker training, truncate the number of training examples to this " "value if set." }, ) max_val_samples: Optional[int] = field( default=None, metadata={ "help": "For debugging purposes or quicker training, truncate the number of validation examples to this " "value if set." }, ) max_test_samples: Optional[int] = field( default=None, metadata={ "help": "For debugging purposes or quicker training, truncate the number of test examples to this " "value if set." }, ) label_all_tokens: bool = field( default=False, metadata={ "help": "Whether to put the label for one word on all tokens of generated by that word or just on the " "one (in which case the other tokens will have a padding index)." 
}, ) return_entity_level_metrics: bool = field( default=False, metadata={"help": "Whether to return all the entity levels during evaluation or just the overall ones."}, ) def __post_init__(self): if self.dataset_name is None and self.train_file is None and self.validation_file is None: raise ValueError("Need either a dataset name or a training/validation file.") else: if self.train_file is not None: extension = self.train_file.split(".")[-1] assert extension in ["csv", "json"], "`train_file` should be a csv or a json file." if self.validation_file is not None: extension = self.validation_file.split(".")[-1] assert extension in ["csv", "json"], "`validation_file` should be a csv or a json file." self.task_name = self.task_name.lower() def compute_objective(metrics: Dict[str, float]) -> float: """ The default objective to maximize/minimize when doing an hyperparameter search. It is the evaluation loss if no metrics are provided to the :class:`~transformers.Trainer`, the sum of all metrics otherwise. Args: metrics (:obj:`Dict[str, float]`): The metrics returned by the evaluate method. Return: :obj:`float`: The objective to minimize or maximize """ metrics = copy.deepcopy(metrics) loss = metrics.pop("eval_loss", None) _ = metrics.pop("epoch", None) # Remove speed metrics speed_metrics = [m for m in metrics.keys() if m.endswith("_runtime") or m.endswith("_samples_per_second")] for sm in speed_metrics: _ = metrics.pop(sm, None) return loss if len(metrics) == 0 else sum(metrics.values()) def main(): parser = HfArgumentParser((ModelArguments, DataTrainingArguments, TrainingArguments, RayArguments)) model_args, data_args, training_args, ray_args = parser.parse_args_into_dataclasses() # Detecting last checkpoint. last_checkpoint = None if os.path.isdir(training_args.output_dir) and training_args.do_train and not training_args.overwrite_output_dir: last_checkpoint = get_last_checkpoint(training_args.output_dir) if last_checkpoint is None and len(os.listdir(training_args.output_dir)) > 0: raise ValueError( f"Output directory ({training_args.output_dir}) already exists and is not empty. " "Use --overwrite_output_dir to overcome." ) elif last_checkpoint is not None: logger.info( f"Checkpoint detected, resuming training at {last_checkpoint}. To avoid this behavior, change " "the `--output_dir` or add `--overwrite_output_dir` to train from scratch." ) # Setup logging logging.basicConfig( format="%(asctime)s - %(levelname)s - %(name)s - %(message)s", datefmt="%m/%d/%Y %H:%M:%S", handlers=[logging.StreamHandler(sys.stdout)], ) logger.setLevel(logging.INFO if is_main_process(training_args.local_rank) else logging.WARN) # Log on each process the small summary: logger.warning( f"Process rank: {training_args.local_rank}, device: {training_args.device}, n_gpu: {training_args.n_gpu}" + f"distributed training: {bool(training_args.local_rank != -1)}, 16-bits training: {training_args.fp16}" ) # Set the verbosity to info of the Transformers logger (on main process only): if is_main_process(training_args.local_rank): transformers.utils.logging.set_verbosity_info() transformers.utils.logging.enable_default_handler() transformers.utils.logging.enable_explicit_format() logger.info("Training/evaluation parameters %s", training_args) # Set seed before initializing model. 
set_seed(training_args.seed) # Get the datasets: you can either provide your own CSV/JSON/TXT training and evaluation files (see below) # or just provide the name of one of the public datasets available on the hub at https://huggingface.co/datasets/ # (the dataset will be downloaded automatically from the datasets Hub). # # For CSV/JSON files, this script will use the column called 'text' or the first column if no column called # 'text' is found. You can easily tweak this behavior (see below). # # In distributed training, the load_dataset function guarantee that only one local process can concurrently # download the dataset. if data_args.dataset_name is not None: # Downloading and loading a dataset from the hub. datasets = load_dataset(data_args.dataset_name, data_args.dataset_config_name) else: data_files = {} if data_args.train_file is not None: data_files["train"] = data_args.train_file if data_args.validation_file is not None: data_files["validation"] = data_args.validation_file if data_args.test_file is not None: data_files["test"] = data_args.test_file extension = data_args.train_file.split(".")[-1] datasets = load_dataset(extension, data_files=data_files) # See more about loading any type of standard or custom dataset (from files, python dict, pandas DataFrame, etc) at # https://huggingface.co/docs/datasets/loading_datasets.html. if training_args.do_train: column_names = datasets["train"].column_names features = datasets["train"].features else: column_names = datasets["validation"].column_names features = datasets["validation"].features text_column_name = "tokens" if "tokens" in column_names else column_names[0] label_column_name = ( f"{data_args.task_name}_tags" if f"{data_args.task_name}_tags" in column_names else column_names[1] ) # In the event the labels are not a `Sequence[ClassLabel]`, we will need to go through the dataset to get the # unique labels. def get_label_list(labels): unique_labels = set() for label in labels: unique_labels = unique_labels | set(label) label_list = list(unique_labels) label_list.sort() return label_list if isinstance(features[label_column_name].feature, ClassLabel): label_list = features[label_column_name].feature.names # No need to convert the labels since they are already ints. label_to_id = {i: i for i in range(len(label_list))} else: label_list = get_label_list(datasets["train"][label_column_name]) label_to_id = {l: i for i, l in enumerate(label_list)} num_labels = len(label_list) # Load pretrained model and tokenizer # # Distributed training: # The .from_pretrained methods guarantee that only one local process can concurrently # download model & vocab. 
config = AutoConfig.from_pretrained( model_args.config_name if model_args.config_name else model_args.model_name_or_path, num_labels=num_labels, finetuning_task=data_args.task_name, cache_dir=model_args.cache_dir, revision=model_args.model_revision, use_auth_token=True if model_args.use_auth_token else None, ) tokenizer = AutoTokenizer.from_pretrained( model_args.tokenizer_name if model_args.tokenizer_name else model_args.model_name_or_path, cache_dir=model_args.cache_dir, use_fast=True, revision=model_args.model_revision, use_auth_token=True if model_args.use_auth_token else None, model_max_length=512 ) model = AutoModelForTokenClassification.from_pretrained( model_args.model_name_or_path, from_tf=bool(".ckpt" in model_args.model_name_or_path), config=config, cache_dir=model_args.cache_dir, revision=model_args.model_revision, use_auth_token=True if model_args.use_auth_token else None, ) # Tokenizer check: this script requires a fast tokenizer. if not isinstance(tokenizer, PreTrainedTokenizerFast): raise ValueError( "This example script only works for models that have a fast tokenizer. Checkout the big table of models " "at https://huggingface.co/transformers/index.html#bigtable to find the model types that meet this " "requirement" ) # Preprocessing the dataset # Padding strategy padding = "max_length" if data_args.pad_to_max_length else False # Tokenize all texts and align the labels with them. def tokenize_and_align_labels(examples): tokenized_inputs = tokenizer( examples[text_column_name], padding=padding, truncation=True, # We use this argument because the texts in our dataset are lists of words (with a label for each word). is_split_into_words=True, ) labels = [] for i, label in enumerate(examples[label_column_name]): word_ids = tokenized_inputs.word_ids(batch_index=i) previous_word_idx = None label_ids = [] for word_idx in word_ids: # Special tokens have a word id that is None. We set the label to -100 so they are automatically # ignored in the loss function. if word_idx is None: label_ids.append(-100) # We set the label for the first token of each word. elif word_idx != previous_word_idx: label_ids.append(label_to_id[label[word_idx]]) # For the other tokens in a word, we set the label to either the current label or -100, depending on # the label_all_tokens flag. 
else: label_ids.append(label_to_id[label[word_idx]] if data_args.label_all_tokens else -100) previous_word_idx = word_idx labels.append(label_ids) tokenized_inputs["labels"] = labels return tokenized_inputs if training_args.do_train: if "train" not in datasets: raise ValueError("--do_train requires a train dataset") train_dataset = datasets["train"] if data_args.max_train_samples is not None: train_dataset = train_dataset.select(range(data_args.max_train_samples)) train_dataset = train_dataset.map( tokenize_and_align_labels, batched=True, num_proc=data_args.preprocessing_num_workers, load_from_cache_file=not data_args.overwrite_cache, ) if training_args.do_eval: if "validation" not in datasets: raise ValueError("--do_eval requires a validation dataset") eval_dataset = datasets["validation"] if data_args.max_val_samples is not None: eval_dataset = eval_dataset.select(range(data_args.max_val_samples)) eval_dataset = eval_dataset.map( tokenize_and_align_labels, batched=True, num_proc=data_args.preprocessing_num_workers, load_from_cache_file=not data_args.overwrite_cache, ) if training_args.do_predict: if "test" not in datasets: raise ValueError("--do_predict requires a test dataset") test_dataset = datasets["test"] if data_args.max_test_samples is not None: test_dataset = test_dataset.select(range(data_args.max_test_samples)) test_dataset = test_dataset.map( tokenize_and_align_labels, batched=True, num_proc=data_args.preprocessing_num_workers, load_from_cache_file=not data_args.overwrite_cache, ) # Data collator data_collator = DataCollatorForTokenClassification(tokenizer, pad_to_multiple_of=8 if training_args.fp16 else None) # Metrics metric = load_metric("seqeval") def compute_metrics(p): predictions, labels = p predictions = np.argmax(predictions, axis=2) # Remove ignored index (special tokens) true_predictions = [ [label_list[p] for (p, l) in zip(prediction, label) if l != -100] for prediction, label in zip(predictions, labels) ] true_labels = [ [label_list[l] for (p, l) in zip(prediction, label) if l != -100] for prediction, label in zip(predictions, labels) ] results = metric.compute(predictions=true_predictions, references=true_labels) if data_args.return_entity_level_metrics: # Unpack nested dictionaries final_results = {} for key, value in results.items(): if isinstance(value, dict): for n, v in value.items(): final_results[f"{key}_{n}"] = v else: final_results[key] = value return final_results else: return { "precision": results["overall_precision"], "recall": results["overall_recall"], "f1": results["overall_f1"], "accuracy": results["overall_accuracy"], } def model_init(): model = AutoModelForTokenClassification.from_pretrained( model_args.model_name_or_path, from_tf=bool(".ckpt" in model_args.model_name_or_path), config=config, cache_dir=model_args.cache_dir, revision=model_args.model_revision, use_auth_token=True if model_args.use_auth_token else None, ) return model class CustomTrainer(Trainer): def __init__(self, *args, **kwargs): super(CustomTrainer, self).__init__(*args, **kwargs) def _hp_search_setup(self, trial: Any): try: trial.pop('wandb', None) except AttributeError: pass super(CustomTrainer, self)._hp_search_setup(trial) # Initialize our Trainer trainer = CustomTrainer( model_init=model_init, args=training_args, train_dataset=train_dataset, eval_dataset=eval_dataset if training_args.do_eval else None, compute_metrics=compute_metrics, tokenizer=tokenizer, data_collator=data_collator, ) # Hyperparameter Search def hp_space_fn(*args, **kwargs): config = { "seed": 
tune.choice([42, 43, 44]), "weight_decay": tune.choice([0.0, 0.1, 0.2, 0.3]), "adam_epsilon": tune.choice([1e-6, 1e-7, 1e-8]), "max_grad_norm": tune.choice([1.0, 2.0]), "warmup_steps": tune.choice([50, 100, 500, 1000]), "learning_rate": tune.choice([2e-5, 3e-5, 4e-5, 5e-5]), "num_train_epochs": tune.quniform(0.0, 8.0, 0.5), } wandb_config = { "wandb": { "project": "hf-ner-testing", "api_key": os.environ.get("API_KEY"), "log_config": True } } config.update(wandb_config) return config time_budget_h = 60 * 60 * int(ray_args.time_budget_h) best_run = trainer.hyperparameter_search( direction="maximize", backend="ray", scheduler=PopulationBasedTraining( time_attr='time_total_s', metric='eval_f1', mode='max', perturbation_interval=600.0 ), hp_space=hp_space_fn, loggers=DEFAULT_LOGGERS + (WandbLogger,), time_budget_s=time_budget_h, keep_checkpoints_num=1, checkpoint_score_attr='eval_f1', compute_objective=compute_objective ) output_params_file = os.path.join( training_args.output_dir, "best_run.json" ) with open(output_params_file, "w") as f: json.dump( best_run.hyperparameters, f, indent=4) return best_run if __name__ == "__main__": main() ``` And these are the args I used for running it: ``` --model_name_or_path neuralmind/bert-base-portuguese-cased --train_file train.json --validation_file dev.json --output_dir output --do_train --do_eval --evaluation_strategy steps --per_device_train_batch_size=2 --per_device_eval_batch_size=2 --time_budget_h 2 ``` This is the full output log: ``` /media/discoD/anaconda3/envs/transformers/bin/python /media/discoD/pycharm-community-2019.2/plugins/python-ce/helpers/pydev/pydevd.py --multiproc --qt-support=auto --client 127.0.0.1 --port 38419 --file /media/discoD/repositorios/transformers_pedro/examples/pytorch/token-classification/run_ner_hp_search_442.py --model_name_or_path neuralmind/bert-base-portuguese-cased --train_file train.json --validation_file dev.json --output_dir transformers-hp --do_train --do_eval --evaluation_strategy steps --per_device_train_batch_size=2 --per_device_eval_batch_size=2 --time_budget_h 2 Connected to pydev debugger (build 211.7142.13) 05/03/2021 08:10:04 - WARNING - __main__ - Process rank: -1, device: cuda:0, n_gpu: 1distributed training: False, 16-bits training: False 05/03/2021 08:10:04 - INFO - __main__ - Training/evaluation parameters TrainingArguments(output_dir=transformers-hp, overwrite_output_dir=False, do_train=True, do_eval=True, do_predict=False, evaluation_strategy=IntervalStrategy.STEPS, prediction_loss_only=False, per_device_train_batch_size=2, per_device_eval_batch_size=2, gradient_accumulation_steps=1, eval_accumulation_steps=None, learning_rate=5e-05, weight_decay=0.0, adam_beta1=0.9, adam_beta2=0.999, adam_epsilon=1e-08, max_grad_norm=1.0, num_train_epochs=3.0, max_steps=-1, lr_scheduler_type=SchedulerType.LINEAR, warmup_ratio=0.0, warmup_steps=0, logging_dir=runs/May03_08-10-04_user-XPS-8700, logging_strategy=IntervalStrategy.STEPS, logging_first_step=False, logging_steps=500, save_strategy=IntervalStrategy.STEPS, save_steps=500, save_total_limit=None, no_cuda=False, seed=42, fp16=False, fp16_opt_level=O1, fp16_backend=auto, fp16_full_eval=False, local_rank=-1, tpu_num_cores=None, tpu_metrics_debug=False, debug=False, dataloader_drop_last=False, eval_steps=500, dataloader_num_workers=0, past_index=-1, run_name=transformers-hp, disable_tqdm=False, remove_unused_columns=True, label_names=None, load_best_model_at_end=False, metric_for_best_model=None, greater_is_better=None, ignore_data_skip=False, sharded_ddp=[], 
deepspeed=None, label_smoothing_factor=0.0, adafactor=False, group_by_length=False, report_to=['tensorboard', 'wandb'], ddp_find_unused_parameters=None, dataloader_pin_memory=True, skip_memory_metrics=False, _n_gpu=1) 05/03/2021 08:10:04 - WARNING - datasets.builder - Using custom data configuration default-438421c06175ed26 05/03/2021 08:10:04 - WARNING - datasets.builder - Reusing dataset json (/home/user/.cache/huggingface/datasets/json/default-438421c06175ed26/0.0.0/83d5b3a2f62630efc6b5315f00f20209b4ad91a00ac586597caee3a4da0bef02) [INFO|configuration_utils.py:463] 2021-05-03 08:10:06,050 >> loading configuration file https://huggingface.co/neuralmind/bert-base-portuguese-cased/resolve/main/config.json from cache at /home/user/.cache/huggingface/transformers/e716e2151985ba669e7197b64cdde2552acee146494d40ffaf0688a3f152e6ed.18a0b8b86f3ebd4c8a1d8d6199178feae9971ff5420f1d12f0ed8326ffdff716 [INFO|configuration_utils.py:499] 2021-05-03 08:10:06,063 >> Model config BertConfig { "architectures": [ "BertForMaskedLM" ], "attention_probs_dropout_prob": 0.1, "directionality": "bidi", "finetuning_task": "ner", "gradient_checkpointing": false, "hidden_act": "gelu", "hidden_dropout_prob": 0.1, "hidden_size": 768, "id2label": { "0": "LABEL_0", "1": "LABEL_1", "2": "LABEL_2", "3": "LABEL_3", "4": "LABEL_4", "5": "LABEL_5", "6": "LABEL_6", "7": "LABEL_7", "8": "LABEL_8", "9": "LABEL_9", "10": "LABEL_10", "11": "LABEL_11", "12": "LABEL_12" }, "initializer_range": 0.02, "intermediate_size": 3072, "label2id": { "LABEL_0": 0, "LABEL_1": 1, "LABEL_10": 10, "LABEL_11": 11, "LABEL_12": 12, "LABEL_2": 2, "LABEL_3": 3, "LABEL_4": 4, "LABEL_5": 5, "LABEL_6": 6, "LABEL_7": 7, "LABEL_8": 8, "LABEL_9": 9 }, "layer_norm_eps": 1e-12, "max_position_embeddings": 512, "model_type": "bert", "num_attention_heads": 12, "num_hidden_layers": 12, "output_past": true, "pad_token_id": 0, "pooler_fc_size": 768, "pooler_num_attention_heads": 12, "pooler_num_fc_layers": 3, "pooler_size_per_head": 128, "pooler_type": "first_token_transform", "position_embedding_type": "absolute", "transformers_version": "4.4.2", "type_vocab_size": 2, "use_cache": true, "vocab_size": 29794 } [INFO|configuration_utils.py:463] 2021-05-03 08:10:06,767 >> loading configuration file https://huggingface.co/neuralmind/bert-base-portuguese-cased/resolve/main/config.json from cache at /home/user/.cache/huggingface/transformers/e716e2151985ba669e7197b64cdde2552acee146494d40ffaf0688a3f152e6ed.18a0b8b86f3ebd4c8a1d8d6199178feae9971ff5420f1d12f0ed8326ffdff716 [INFO|configuration_utils.py:499] 2021-05-03 08:10:06,777 >> Model config BertConfig { "architectures": [ "BertForMaskedLM" ], "attention_probs_dropout_prob": 0.1, "directionality": "bidi", "gradient_checkpointing": false, "hidden_act": "gelu", "hidden_dropout_prob": 0.1, "hidden_size": 768, "initializer_range": 0.02, "intermediate_size": 3072, "layer_norm_eps": 1e-12, "max_position_embeddings": 512, "model_type": "bert", "num_attention_heads": 12, "num_hidden_layers": 12, "output_past": true, "pad_token_id": 0, "pooler_fc_size": 768, "pooler_num_attention_heads": 12, "pooler_num_fc_layers": 3, "pooler_size_per_head": 128, "pooler_type": "first_token_transform", "position_embedding_type": "absolute", "transformers_version": "4.4.2", "type_vocab_size": 2, "use_cache": true, "vocab_size": 29794 } [INFO|tokenization_utils_base.py:1702] 2021-05-03 08:10:09,936 >> loading file https://huggingface.co/neuralmind/bert-base-portuguese-cased/resolve/main/vocab.txt from cache at 
/home/user/.cache/huggingface/transformers/aa6d50227b77416b26162efcf0cc9e9a702d13920840322060a2b41a44a8aff4.af25fb1e29ad0175300146695fd80069be69b211c52fa5486fa8aae2754cc814 [INFO|tokenization_utils_base.py:1702] 2021-05-03 08:10:09,936 >> loading file https://huggingface.co/neuralmind/bert-base-portuguese-cased/resolve/main/tokenizer.json from cache at None [INFO|tokenization_utils_base.py:1702] 2021-05-03 08:10:09,937 >> loading file https://huggingface.co/neuralmind/bert-base-portuguese-cased/resolve/main/added_tokens.json from cache at /home/user/.cache/huggingface/transformers/9188d297517828a862f4e0b0700968574ca7ad38fbc0832c409bf7a9e5576b74.5cc6e825eb228a7a5cfd27cb4d7151e97a79fb962b31aaf1813aa102e746584b [INFO|tokenization_utils_base.py:1702] 2021-05-03 08:10:09,937 >> loading file https://huggingface.co/neuralmind/bert-base-portuguese-cased/resolve/main/special_tokens_map.json from cache at /home/user/.cache/huggingface/transformers/eecc45187d085a1169eed91017d358cc0e9cbdd5dc236bcd710059dbf0a2f816.dd8bd9bfd3664b530ea4e645105f557769387b3da9f79bdb55ed556bdd80611d [INFO|tokenization_utils_base.py:1702] 2021-05-03 08:10:09,938 >> loading file https://huggingface.co/neuralmind/bert-base-portuguese-cased/resolve/main/tokenizer_config.json from cache at /home/user/.cache/huggingface/transformers/f1a9ba41d40e8c6f5ba4988aa2f7702c3b43768183e4b82483e04f2848841ecf.a6c00251b9344c189e2419373d6033016d0cd3d87ea59f6c86069046ac81956d [INFO|modeling_utils.py:1051] 2021-05-03 08:10:10,709 >> loading weights file https://huggingface.co/neuralmind/bert-base-portuguese-cased/resolve/main/pytorch_model.bin from cache at /home/user/.cache/huggingface/transformers/1e42c907c340c902923496246dae63e33f64955c529720991b7ec5543a98e442.fa492fca6dcee85bef053cc60912a211feb1f7173129e4eb1a5164e817f2f5f2 [WARNING|modeling_utils.py:1158] 2021-05-03 08:10:13,606 >> Some weights of the model checkpoint at neuralmind/bert-base-portuguese-cased were not used when initializing BertForTokenClassification: ['cls.predictions.bias', 'cls.predictions.transform.dense.weight', 'cls.predictions.transform.dense.bias', 'cls.predictions.transform.LayerNorm.weight', 'cls.predictions.transform.LayerNorm.bias', 'cls.predictions.decoder.weight', 'cls.seq_relationship.weight', 'cls.seq_relationship.bias'] - This IS expected if you are initializing BertForTokenClassification from the checkpoint of a model trained on another task or with another architecture (e.g. initializing a BertForSequenceClassification model from a BertForPreTraining model). - This IS NOT expected if you are initializing BertForTokenClassification from the checkpoint of a model that you expect to be exactly identical (initializing a BertForSequenceClassification model from a BertForSequenceClassification model). [WARNING|modeling_utils.py:1169] 2021-05-03 08:10:13,607 >> Some weights of BertForTokenClassification were not initialized from the model checkpoint at neuralmind/bert-base-portuguese-cased and are newly initialized: ['classifier.weight', 'classifier.bias'] You should probably TRAIN this model on a down-stream task to be able to use it for predictions and inference. 
100%|██████████| 7/7 [00:02<00:00, 3.06ba/s] 100%|██████████| 2/2 [00:00<00:00, 3.13ba/s] [INFO|modeling_utils.py:1051] 2021-05-03 08:10:19,160 >> loading weights file https://huggingface.co/neuralmind/bert-base-portuguese-cased/resolve/main/pytorch_model.bin from cache at /home/user/.cache/huggingface/transformers/1e42c907c340c902923496246dae63e33f64955c529720991b7ec5543a98e442.fa492fca6dcee85bef053cc60912a211feb1f7173129e4eb1a5164e817f2f5f2 [WARNING|modeling_utils.py:1158] 2021-05-03 08:10:22,280 >> Some weights of the model checkpoint at neuralmind/bert-base-portuguese-cased were not used when initializing BertForTokenClassification: ['cls.predictions.bias', 'cls.predictions.transform.dense.weight', 'cls.predictions.transform.dense.bias', 'cls.predictions.transform.LayerNorm.weight', 'cls.predictions.transform.LayerNorm.bias', 'cls.predictions.decoder.weight', 'cls.seq_relationship.weight', 'cls.seq_relationship.bias'] - This IS expected if you are initializing BertForTokenClassification from the checkpoint of a model trained on another task or with another architecture (e.g. initializing a BertForSequenceClassification model from a BertForPreTraining model). - This IS NOT expected if you are initializing BertForTokenClassification from the checkpoint of a model that you expect to be exactly identical (initializing a BertForSequenceClassification model from a BertForSequenceClassification model). [WARNING|modeling_utils.py:1169] 2021-05-03 08:10:22,280 >> Some weights of BertForTokenClassification were not initialized from the model checkpoint at neuralmind/bert-base-portuguese-cased and are newly initialized: ['classifier.weight', 'classifier.bias'] You should probably TRAIN this model on a down-stream task to be able to use it for predictions and inference. [INFO|trainer.py:482] 2021-05-03 08:10:24,327 >> The following columns in the training set don't have a corresponding argument in `BertForTokenClassification.forward` and have been ignored: ner_tags, tokens. [INFO|trainer.py:482] 2021-05-03 08:10:24,334 >> The following columns in the evaluation set don't have a corresponding argument in `BertForTokenClassification.forward` and have been ignored: ner_tags, tokens. [INFO|integrations.py:184] 2021-05-03 08:10:24,396 >> No `resources_per_trial` arg was passed into `hyperparameter_search`. Setting it to a default value of 1 CPU and 1 GPU for each trial. 2021-05-03 08:10:25,807 INFO services.py:1172 -- View the Ray dashboard at http://127.0.0.1:8265 2021-05-03 08:10:27,788 WARNING function_runner.py:540 -- Function checkpointing is disabled. This may result in unexpected behavior when using checkpointing features or certain schedulers. To enable, set the train function arguments to be `func(config, checkpoint_dir=None)`. 
== Status == Memory usage on this node: 21.2/31.4 GiB PopulationBasedTraining: 0 checkpoints, 0 perturbs Resources requested: 1/8 CPUs, 1/1 GPUs, 0.0/7.67 GiB heap, 0.0/2.64 GiB objects (0/1.0 accelerator_type:GTX) Result logdir: /home/user/ray_results/_inner_2021-05-03_08-10-27 Number of trials: 1/20 (1 RUNNING) +--------------------+----------+-------+----------------+-----------------+-----------------+--------------------+--------+----------------+----------------+ | Trial name | status | loc | adam_epsilon | learning_rate | max_grad_norm | num_train_epochs | seed | warmup_steps | weight_decay | |--------------------+----------+-------+----------------+-----------------+-----------------+--------------------+--------+----------------+----------------| | _inner_2a8cd_00000 | RUNNING | | 1e-06 | 4e-05 | 2 | 3 | 42 | 500 | 0 | +--------------------+----------+-------+----------------+-----------------+-----------------+--------------------+--------+----------------+----------------+ wandb: Currently logged in as: pvcastro (use `wandb login --relogin` to force relogin) 2021-05-03 08:10:31,794 ERROR trial_runner.py:616 -- Trial _inner_2a8cd_00000: Error processing event. Traceback (most recent call last): File "/media/discoD/anaconda3/envs/transformers/lib/python3.8/site-packages/ray/tune/trial_runner.py", line 586, in _process_trial results = self.trial_executor.fetch_result(trial) File "/media/discoD/anaconda3/envs/transformers/lib/python3.8/site-packages/ray/tune/ray_trial_executor.py", line 609, in fetch_result result = ray.get(trial_future[0], timeout=DEFAULT_GET_TIMEOUT) File "/media/discoD/anaconda3/envs/transformers/lib/python3.8/site-packages/ray/_private/client_mode_hook.py", line 47, in wrapper return func(*args, **kwargs) File "/media/discoD/anaconda3/envs/transformers/lib/python3.8/site-packages/ray/worker.py", line 1456, in get raise value.as_instanceof_cause() ray.exceptions.RayTaskError(TuneError): ray::ImplicitFunc.train_buffered() (pid=4311, ip=172.16.9.2) File "python/ray/_raylet.pyx", line 480, in ray._raylet.execute_task File "python/ray/_raylet.pyx", line 432, in ray._raylet.execute_task.function_executor File "/media/discoD/anaconda3/envs/transformers/lib/python3.8/site-packages/ray/tune/trainable.py", line 167, in train_buffered result = self.train() File "/media/discoD/anaconda3/envs/transformers/lib/python3.8/site-packages/ray/tune/trainable.py", line 226, in train result = self.step() File "/media/discoD/anaconda3/envs/transformers/lib/python3.8/site-packages/ray/tune/function_runner.py", line 366, in step self._report_thread_runner_error(block=True) File "/media/discoD/anaconda3/envs/transformers/lib/python3.8/site-packages/ray/tune/function_runner.py", line 512, in _report_thread_runner_error raise TuneError( ray.tune.error.TuneError: Trial raised an exception. 
Traceback: ray::ImplicitFunc.train_buffered() (pid=4311, ip=172.16.9.2) File "/media/discoD/anaconda3/envs/transformers/lib/python3.8/site-packages/ray/tune/function_runner.py", line 248, in run self._entrypoint() File "/media/discoD/anaconda3/envs/transformers/lib/python3.8/site-packages/ray/tune/function_runner.py", line 315, in entrypoint return self._trainable_func(self.config, self._status_reporter, File "/media/discoD/anaconda3/envs/transformers/lib/python3.8/site-packages/ray/tune/function_runner.py", line 576, in _trainable_func output = fn() File "/media/discoD/anaconda3/envs/transformers/lib/python3.8/site-packages/ray/tune/function_runner.py", line 651, in _inner inner(config, checkpoint_dir=None) File "/media/discoD/anaconda3/envs/transformers/lib/python3.8/site-packages/ray/tune/function_runner.py", line 644, in inner fn_kwargs[k] = parameter_registry.get(prefix + k) File "/media/discoD/anaconda3/envs/transformers/lib/python3.8/site-packages/ray/tune/registry.py", line 167, in get return ray.get(self.references[k]) File "/media/discoD/anaconda3/envs/transformers/lib/python3.8/site-packages/ray/_private/client_mode_hook.py", line 47, in wrapper return func(*args, **kwargs) File "/media/discoD/anaconda3/envs/transformers/lib/python3.8/site-packages/ray/serialization.py", line 245, in deserialize_objects self._deserialize_object(data, metadata, object_ref)) File "/media/discoD/anaconda3/envs/transformers/lib/python3.8/site-packages/ray/serialization.py", line 192, in _deserialize_object return self._deserialize_msgpack_data(data, metadata_fields) File "/media/discoD/anaconda3/envs/transformers/lib/python3.8/site-packages/ray/serialization.py", line 170, in _deserialize_msgpack_data python_objects = self._deserialize_pickle5_data(pickle5_data) File "/media/discoD/anaconda3/envs/transformers/lib/python3.8/site-packages/ray/serialization.py", line 158, in _deserialize_pickle5_data obj = pickle.loads(in_band, buffers=buffers) ModuleNotFoundError: No module named 'datasets_modules' (pid=4311) 2021-05-03 08:10:31,755 ERROR function_runner.py:254 -- Runner Thread raised error. 
(pid=4311) Traceback (most recent call last): (pid=4311) File "/media/discoD/anaconda3/envs/transformers/lib/python3.8/site-packages/ray/tune/function_runner.py", line 248, in run (pid=4311) self._entrypoint() (pid=4311) File "/media/discoD/anaconda3/envs/transformers/lib/python3.8/site-packages/ray/tune/function_runner.py", line 315, in entrypoint (pid=4311) return self._trainable_func(self.config, self._status_reporter, (pid=4311) File "/media/discoD/anaconda3/envs/transformers/lib/python3.8/site-packages/ray/tune/function_runner.py", line 576, in _trainable_func (pid=4311) output = fn() (pid=4311) File "/media/discoD/anaconda3/envs/transformers/lib/python3.8/site-packages/ray/tune/function_runner.py", line 651, in _inner Result for _inner_2a8cd_00000: {} (pid=4311) inner(config, checkpoint_dir=None) (pid=4311) File "/media/discoD/anaconda3/envs/transformers/lib/python3.8/site-packages/ray/tune/function_runner.py", line 644, in inner (pid=4311) fn_kwargs[k] = parameter_registry.get(prefix + k) (pid=4311) File "/media/discoD/anaconda3/envs/transformers/lib/python3.8/site-packages/ray/tune/registry.py", line 167, in get (pid=4311) return ray.get(self.references[k]) (pid=4311) File "/media/discoD/anaconda3/envs/transformers/lib/python3.8/site-packages/ray/_private/client_mode_hook.py", line 47, in wrapper (pid=4311) return func(*args, **kwargs) (pid=4311) File "/media/discoD/anaconda3/envs/transformers/lib/python3.8/site-packages/ray/worker.py", line 1448, in get (pid=4311) values, debugger_breakpoint = worker.get_objects( (pid=4311) File "/media/discoD/anaconda3/envs/transformers/lib/python3.8/site-packages/ray/worker.py", line 319, in get_objects (pid=4311) return self.deserialize_objects(data_metadata_pairs, (pid=4311) File "/media/discoD/anaconda3/envs/transformers/lib/python3.8/site-packages/ray/worker.py", line 282, in deserialize_objects (pid=4311) return context.deserialize_objects(data_metadata_pairs, object_refs) (pid=4311) File "/media/discoD/anaconda3/envs/transformers/lib/python3.8/site-packages/ray/serialization.py", line 245, in deserialize_objects (pid=4311) self._deserialize_object(data, metadata, object_ref)) (pid=4311) File "/media/discoD/anaconda3/envs/transformers/lib/python3.8/site-packages/ray/serialization.py", line 192, in _deserialize_object (pid=4311) return self._deserialize_msgpack_data(data, metadata_fields) (pid=4311) File "/media/discoD/anaconda3/envs/transformers/lib/python3.8/site-packages/ray/serialization.py", line 170, in _deserialize_msgpack_data (pid=4311) python_objects = self._deserialize_pickle5_data(pickle5_data) (pid=4311) File "/media/discoD/anaconda3/envs/transformers/lib/python3.8/site-packages/ray/serialization.py", line 158, in _deserialize_pickle5_data (pid=4311) obj = pickle.loads(in_band, buffers=buffers) (pid=4311) ModuleNotFoundError: No module named 'datasets_modules' (pid=4311) Exception in thread Thread-2: (pid=4311) Traceback (most recent call last): (pid=4311) File "/media/discoD/anaconda3/envs/transformers/lib/python3.8/threading.py", line 932, in _bootstrap_inner (pid=4311) self.run() (pid=4311) File "/media/discoD/anaconda3/envs/transformers/lib/python3.8/site-packages/ray/tune/function_runner.py", line 267, in run (pid=4311) raise e (pid=4311) File "/media/discoD/anaconda3/envs/transformers/lib/python3.8/site-packages/ray/tune/function_runner.py", line 248, in run (pid=4311) self._entrypoint() (pid=4311) File "/media/discoD/anaconda3/envs/transformers/lib/python3.8/site-packages/ray/tune/function_runner.py", line 315, in 
entrypoint (pid=4311) return self._trainable_func(self.config, self._status_reporter, (pid=4311) File "/media/discoD/anaconda3/envs/transformers/lib/python3.8/site-packages/ray/tune/function_runner.py", line 576, in _trainable_func (pid=4311) output = fn() (pid=4311) File "/media/discoD/anaconda3/envs/transformers/lib/python3.8/site-packages/ray/tune/function_runner.py", line 651, in _inner (pid=4311) inner(config, checkpoint_dir=None) (pid=4311) File "/media/discoD/anaconda3/envs/transformers/lib/python3.8/site-packages/ray/tune/function_runner.py", line 644, in inner (pid=4311) fn_kwargs[k] = parameter_registry.get(prefix + k) (pid=4311) File "/media/discoD/anaconda3/envs/transformers/lib/python3.8/site-packages/ray/tune/registry.py", line 167, in get (pid=4311) return ray.get(self.references[k]) (pid=4311) File "/media/discoD/anaconda3/envs/transformers/lib/python3.8/site-packages/ray/_private/client_mode_hook.py", line 47, in wrapper (pid=4311) return func(*args, **kwargs) (pid=4311) File "/media/discoD/anaconda3/envs/transformers/lib/python3.8/site-packages/ray/worker.py", line 1448, in get (pid=4311) values, debugger_breakpoint = worker.get_objects( (pid=4311) File "/media/discoD/anaconda3/envs/transformers/lib/python3.8/site-packages/ray/worker.py", line 319, in get_objects (pid=4311) return self.deserialize_objects(data_metadata_pairs, (pid=4311) File "/media/discoD/anaconda3/envs/transformers/lib/python3.8/site-packages/ray/worker.py", line 282, in deserialize_objects (pid=4311) return context.deserialize_objects(data_metadata_pairs, object_refs) (pid=4311) File "/media/discoD/anaconda3/envs/transformers/lib/python3.8/site-packages/ray/serialization.py", line 245, in deserialize_objects (pid=4311) self._deserialize_object(data, metadata, object_ref)) (pid=4311) File "/media/discoD/anaconda3/envs/transformers/lib/python3.8/site-packages/ray/serialization.py", line 192, in _deserialize_object (pid=4311) return self._deserialize_msgpack_data(data, metadata_fields) (pid=4311) File "/media/discoD/anaconda3/envs/transformers/lib/python3.8/site-packages/ray/serialization.py", line 170, in _deserialize_msgpack_data (pid=4311) python_objects = self._deserialize_pickle5_data(pickle5_data) (pid=4311) File "/media/discoD/anaconda3/envs/transformers/lib/python3.8/site-packages/ray/serialization.py", line 158, in _deserialize_pickle5_data (pid=4311) obj = pickle.loads(in_band, buffers=buffers) (pid=4311) ModuleNotFoundError: No module named 'datasets_modules' Problem at: /media/discoD/anaconda3/envs/transformers/lib/python3.8/site-packages/ray/tune/integration/wandb.py 197 run python-BaseException CondaError: KeyboardInterrupt Process finished with exit code 137 (interrupted by signal 9: SIGKILL) ```
05-03-2021 11:17:14
05-03-2021 11:17:14
Looks like using --dataset_name parameters prevents other tasks from working, causing the same error. But using custom datasets with --train_file and --validation_file passing csv/json paths seems to work. However, this doesn't happen for NER dataset. I'm getting an error for --dataset_name conll2003 or using custom json, with the following structure (one dict per line): ``` {"tokens": [], "ner_tags": []}, {"tokens": [], "ner_tags": []}, {"tokens": [], "ner_tags": []}, ```<|||||>Is this being run on a single node?<|||||>Yes @amogkam , on my own desktop running ubuntu on a GTX 1070<|||||>Looks like this is being caused by `load_dataset`. My understanding of this [code](https://github.com/huggingface/datasets/blob/master/src/datasets/load.py) that `load_dataset` dynamically creates a new module in the driver process. However, in the remote processes that Ray Tune is creating, it is trying to import this dynamically created module somewhere and cannot recognize it. @lhoestq @albertvillanova do you have more information on what could be going on here? Do you know where this import of the dynamically created module is happening?<|||||>@amogkam and this dynamic module creation doesn't happen with certain custom datasets? [This gist](https://gist.github.com/ruanchaves/3015d0d1a790d45472396b67d0879e64) from @ruanchaves works OK in the same conda environment, passing a --train_file and a --validation_file as csvs with a sentence and a label (sentence1 and label for column names).<|||||>Yes the `datasets` library downloads the script that allows to load the dataset passed as `--dataset_name` such as "conll2003". For csv files it doesn't require additional modules. Basically the `datasets` library downloads the script, store it into the a `datasets_modules` directory in the cache and add this dir to the python path (if it's not been already added before).<|||||>Where does this `datasets_modules` get imported? It looks like this is happening somewhere in the `Trainer` right?<|||||>@lhoestq so it's not necessary for csv files, but for json the same doesn't apply? Are there any workarounds I could use for the moment with my NER dataset? <|||||>Hi @pvcastro, Generally, in order to load a specific dataset named "data_args.dataset_name", the corresponding module file "data_args.dataset_name.py" is required. However, the library `datasets` comes with some pre-packed modules: "csv", "json", "pandas" and "text" (you can see them here: https://github.com/huggingface/datasets/tree/master/src/datasets/packaged_modules). In order to use this pre-packed modules, you have to pass their name as dataset_name (for example: `load_dataset("json", ...)`. You have an example here: https://huggingface.co/docs/datasets/loading_datasets.html#json-files<|||||>Thanks for the info @albertvillanova ! I'm already doing this for this scenario...I'm using a NER dataset converted to json. Do JSON files loaded this way also use `datasets_modules` somehow?<|||||>No, but you have to pass "json" as the dataset name to `load_dataset`. Please, have a look at the example in the link above because it is exactly your use case: a JSON dataset loaded by using the pre-packed json module.<|||||>I see @albertvillanova . 
From this code I got from examples/run_ner, this is being done: ``` data_files = {} if data_args.train_file is not None: data_files["train"] = data_args.train_file if data_args.validation_file is not None: data_files["validation"] = data_args.validation_file if data_args.test_file is not None: data_files["test"] = data_args.test_file extension = data_args.train_file.split(".")[-1] datasets = load_dataset(extension, data_files=data_files) ```<|||||>Exactly. As you see, if for example your train file is: ```python data_args.train_file = "train.json" ``` then `extension` will be equal to `"json"`: ```python extension = data_args.train_file.split(".")[-1] ``` As I told you, you have to pass "json" as the dataset name parameter to `load_dataset(dataset_name,...`<|||||>Right. Just to be clear @albertvillanova , are you just being thorough in your explanation, or are you trying to say that my script **should** be working, because I'm already doing `load_dataset("json",...)`? Thanks!<|||||>What I am saying is just that if you want that that one of the pre-packaged modules is used (json, csv, etc.), you have to be sure that the dataset name passed to `load_dataset` must be "json", "csv", ... If on the contrary you pass a custom `--dataset_name`, then the library will need to download the corresponding module.<|||||>@amogkam I believe the dataset_modules path is added upon `load_dataset`, which can occur before the creation of the Trainer. To support this, I think we need to allow the custom path to be added before the invocation of each trial. <|||||>@richardliaw Is there a way to make it work if I am using a custom dataset, not from json or csv? I am using a dataset generator<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread. Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored.<|||||>@richardliaw @amogkam anyone working on this?<|||||>ah yes! will put on todo list. On Fri, Jun 25, 2021 at 8:14 AM Pedro Vitor Quinta de Castro < ***@***.***> wrote: > @richardliaw <https://github.com/richardliaw> @amogkam > <https://github.com/amogkam> anyone working on this? > > — > You are receiving this because you were mentioned. > Reply to this email directly, view it on GitHub > <https://github.com/huggingface/transformers/issues/11565#issuecomment-868571102>, > or unsubscribe > <https://github.com/notifications/unsubscribe-auth/ABCRZZLICPENUETSL5PXSJLTUSMNHANCNFSM44AUIF4Q> > . > <|||||>@Yard1 is going to take a look at this this week.<|||||>Please assign me!
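For illustration, a minimal sketch of the two loading paths discussed in this thread (the JSON file names are placeholders): the pre-packaged `json` loader ships with the `datasets` library and needs no extra module, whereas a named dataset such as `conll2003` makes `datasets` download a loading script into the dynamically created `datasets_modules` package, which is the module the remote Ray Tune trial processes fail to import in the traceback above. ```python from datasets import load_dataset

# Pre-packaged loader: no loading script is downloaded, so no `datasets_modules`
# package is involved. The file names stand in for the user's own NER-style JSON files.
data_files = {"train": "train.json", "validation": "dev.json"}
json_datasets = load_dataset("json", data_files=data_files)

# Named dataset: this downloads conll2003's loading script into the cache and
# imports it through the dynamically created `datasets_modules` package.
conll_datasets = load_dataset("conll2003")
```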
transformers
11,564
closed
Adds Flax BERT finetuning example on GLUE
# What does this PR do? Adds Flax BERT finetuning example which finetunes on one of the GLUE tasks. I evaluated all tasks 5 times and added the average runs, the best run. and stdev in a table in the README. I used the seed of the best run as the default. I also ran all experiments on three devices: 8 Cloud TPU-v3, 1 Cloud TPU-v3, 1 P100 GPU. I compared the runtimes and put them in another table in the README. This PR was discussed over Slack with @patrickvonplaten and @sgugger . ## Before submitting - [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case). - [X] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section? - [X] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link to it if that's the case. - [ ] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/master/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/master/docs#writing-source-documentation). - [ ] Did you write any new necessary tests?
05-03-2021 11:03:03
05-03-2021 11:03:03
Changes were done according to discussion [here](https://github.com/huggingface/transformers/pull/11593)<|||||>Test failures are unrelated
transformers
11,563
closed
Remove `datasets` submodule.
Remove submodule.
05-03-2021 10:02:21
05-03-2021 10:02:21
transformers
11,562
closed
[Wav2Vec2] Fix convert
# What does this PR do? <!-- Congratulations! You've made it this far! You're not quite done yet though. Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution. Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change. Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost. --> <!-- Remove if not applicable --> Correct conversion script. ## Before submitting - [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case). - [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section? - [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link to it if that's the case. - [ ] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/master/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/master/docs#writing-source-documentation). - [ ] Did you write any new necessary tests? ## Who can review? Anyone in the community is free to review the PR once the tests have passed. Feel free to tag members/contributors who may be interested in your PR. <!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @ If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**. Please tag fewer than 3 people. Models: - albert, bert, xlm: @LysandreJik - blenderbot, bart, marian, pegasus, encoderdecoder, t5: @patrickvonplaten, @patil-suraj - longformer, reformer, transfoxl, xlnet: @patrickvonplaten - fsmt: @stas00 - funnel: @sgugger - gpt2: @patrickvonplaten, @LysandreJik - rag: @patrickvonplaten, @lhoestq - tensorflow: @LysandreJik Library: - benchmarks: @patrickvonplaten - deepspeed: @stas00 - ray/raytune: @richardliaw, @amogkam - text generation: @patrickvonplaten - tokenizers: @n1t0, @LysandreJik - trainer: @sgugger - pipelines: @LysandreJik Documentation: @sgugger HF projects: - datasets: [different repo](https://github.com/huggingface/datasets) - rust tokenizers: [different repo](https://github.com/huggingface/tokenizers) Examples: - maintained examples (not research project or legacy): @sgugger, @patil-suraj - research_projects/bert-loses-patience: @JetRunner - research_projects/distillation: @VictorSanh -->
05-03-2021 09:38:51
05-03-2021 09:38:51
transformers
11,561
closed
AttributeError: 'TrainingArguments' object has no attribute 'resume_from_checkpoint' in training GPT2
## Environment info <!-- You can run the command `transformers-cli env` and copy-and-paste its output below. Don't forget to fill out the missing fields in that output! --> `transformers` version: 4.6.0.dev0 - Platform: Windows-10-10.0.19041-SP0 - Python version: 3.8.9 - PyTorch version (GPU?): 1.8.1+cpu (False) - Tensorflow version (GPU?): 2.4.1 (False) - Using GPU in script?: yes - Using distributed or parallel set-up in script?: no ### Who can help @patrickvonplaten @sgugger ## Information Model I am using: GPT-2 The problem arises when using: * [X] the official example scripts: (give details below) * [ ] my own modified scripts: (give details below) The tasks I am working on is: * [ ] an official GLUE/SQUaD task: (give the name) * [X] my own task or dataset: (give details below) ## To reproduce Steps to reproduce the behavior: I am running ``` subprocess.call([python_path, 'transformers/examples/pytorch/language-modeling/run_clm.py', '--model_type', 'gpt2', '--model_name_or_path', 'gpt2', '--train_file', 'train.txt', '--do_train', '--validation_file', 'eval.txt', '--do_eval', '--per_gpu_train_batch_size', '1', '--save_steps', '-1', '--num_train_epochs', '2', '--output_dir', output_dir]) ``` and after the downloads/initialization phase I obtain the following: ``` Traceback (most recent call last): File "C:\Program Files\JetBrains\PyCharm 2021.1.1\plugins\python\helpers\pydev\pydevd.py", line 1483, in _exec pydev_imports.execfile(file, globals, locals) # execute the script File "C:\Program Files\JetBrains\PyCharm 2021.1.1\plugins\python\helpers\pydev\_pydev_imps\_pydev_execfile.py", line 18, in execfile exec(compile(contents+"\n", file, 'exec'), glob, loc) File "transformers/examples/pytorch/language-modeling/run_clm.py", line 459, in <module> main() File "transformers/examples/pytorch/language-modeling/run_clm.py", line 417, in main if training_args.resume_from_checkpoint is not None: AttributeError: 'TrainingArguments' object has no attribute 'resume_from_checkpoint' ``` I tried to debug it and in effect the `TrainingArguments` object doesn't have the attribute `resume_from_checkpoint`. I see that it is indicated as an optional field, but on google colab, with the same command, it is present, with value None, as expected. ## Expected behavior It should start the training, and the weird thing is that running it on Google Colab it works perfectly.
05-03-2021 09:14:32
05-03-2021 09:14:32
You don't have the latest master installed and the examples are synced with it. The easiest way is probably to do: ``` pip uninstall transformers pip install git+https://github.com/huggingface/transformers ``` to upgrade to the latest! Otherwise you can do an [editable install](https://huggingface.co/transformers/installation.html#editable-install) to avoid those problems in the future.<|||||>You are right, I even tried to pull from the repo, but I didn't realize my installation was not synched with it. Now it should be, using the editable install. Thank you!
transformers
11,560
closed
Error In Fine-Tuning Transformer XL ValueError: The two structures don't have the same sequence length. Input structure has length 3, while shallow structure has length 2.
## Environment info - `transformers` version: 4.5.1 - Platform: Linux-4.19.112+-x86_64-with-Ubuntu-18.04-bionic - Python version: 3.7.10 - PyTorch version (GPU?): 1.8.1+cu101 (False) - Tensorflow version (GPU?): 2.4.1 (False) - Using GPU in script?: No - Using distributed or parallel set-up in script?: No ### Who can help @patrickvonplaten ## Information I am performing a type of machine translation task in which I have to translate English Sentences to Hinglish Sentences. I am trying to use the pre-trained Transformer-XL Model by fine tuning it on my custom dataset. Here is my code: ``` import pandas as pd import tensorflow as tf from transformers import TransfoXLTokenizer from transformers import TFTransfoXLModel import numpy as np from sklearn.model_selection import train_test_split #Loading data dataFrame = pd.read_csv("data.csv") dataFrame.head(3) #-----Output 1----- #Splitting Dataset X = dataFrame['English'] Y = dataFrame['Hinglish'] X_train, X_test, Y_train, Y_test = train_test_split(X, Y, test_size = 0.2, random_state = 42) #Tokenization tokenizer = TransfoXLTokenizer.from_pretrained('transfo-xl-wt103') tokenizer.pad_token = tokenizer.eos_token XTrainEncodings = tokenizer(X_train.to_list(), max_length = 150, padding = True) XTestEncodings = tokenizer(X_test.to_list(), max_length = 150, padding = True) YTrainEncodings = tokenizer(Y_train.to_list(), max_length = 150, padding = True) YTestEncodings = tokenizer(Y_test.to_list(), max_length = 150, padding = True) print("XTrainEncodings : ", XTrainEncodings) print("YTrainEncodings : ", YTrainEncodings) #-----Output 2----- #Converting to Tensors X_train = tf.data.Dataset.from_tensor_slices((dict(XTrainEncodings), (dict(YTrainEncodings)))) X_test = tf.data.Dataset.from_tensor_slices((dict(XTestEncodings), (dict(YTestEncodings)))) print(X_train) #-----Output 3----- #Fine Tuning model = TFTransfoXLModel.from_pretrained('transfo-xl-wt103') optimizer = tf.keras.optimizers.Adam(learning_rate = 5e-5) model.compile(optimizer = optimizer, loss = tf.losses.SparseCategoricalCrossentropy(), metrics = ['accuracy']) history = model.fit(X_train.batch(1), epochs = 2, batch_size = 1, validation_data = X_test.batch(1)) ``` **Outputs** ``` -----Output 1----- English Hinglish How are you ? Tum kaise ho ? I am fine. Main theek hoon ...... -----Output 2----- XTrainEncodings : {'input_ids': [[4241, 0, 0, 0, 0, 0], [4827, 37, 304, 788, 0, 0],.... YTrainEncodings : {'input_ids': [[13762, 0, 0, 0, 0], [71271, 24, 33289, 788, 0],.... 
-----Output 3----- <TensorSliceDataset shapes: ({input_ids: (6,)}, {input_ids: (5,)}), types: ({input_ids: tf.int32}, {input_ids: tf.int32})> ``` **Error** ``` ValueError: in user code: /usr/local/lib/python3.7/dist-packages/tensorflow/python/keras/engine/training.py:805 train_function * return step_function(self, iterator) /usr/local/lib/python3.7/dist-packages/tensorflow/python/keras/engine/training.py:795 step_function ** outputs = model.distribute_strategy.run(run_step, args=(data,)) /usr/local/lib/python3.7/dist-packages/tensorflow/python/distribute/distribute_lib.py:1259 run return self._extended.call_for_each_replica(fn, args=args, kwargs=kwargs) /usr/local/lib/python3.7/dist-packages/tensorflow/python/distribute/distribute_lib.py:2730 call_for_each_replica return self._call_for_each_replica(fn, args, kwargs) /usr/local/lib/python3.7/dist-packages/tensorflow/python/distribute/distribute_lib.py:3417 _call_for_each_replica return fn(*args, **kwargs) /usr/local/lib/python3.7/dist-packages/tensorflow/python/keras/engine/training.py:788 run_step ** outputs = model.train_step(data) /usr/local/lib/python3.7/dist-packages/tensorflow/python/keras/engine/training.py:758 train_step self.compiled_metrics.update_state(y, y_pred, sample_weight) /usr/local/lib/python3.7/dist-packages/tensorflow/python/keras/engine/compile_utils.py:387 update_state self.build(y_pred, y_true) /usr/local/lib/python3.7/dist-packages/tensorflow/python/keras/engine/compile_utils.py:318 build self._metrics, y_true, y_pred) /usr/local/lib/python3.7/dist-packages/tensorflow/python/util/nest.py:1163 map_structure_up_to **kwargs) /usr/local/lib/python3.7/dist-packages/tensorflow/python/util/nest.py:1245 map_structure_with_tuple_paths_up_to expand_composites=expand_composites) /usr/local/lib/python3.7/dist-packages/tensorflow/python/util/nest.py:878 assert_shallow_structure input_length=len(input_tree), shallow_length=len(shallow_tree))) ValueError: The two structures don't have the same sequence length. Input structure has length 3, while shallow structure has length 2. ``` Please help me in detecting the reason and solving the error. Also I want to know whether I am following a correct way to achieve my task or I am missing something. Thanks
05-03-2021 08:22:45
05-03-2021 08:22:45
This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread. Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored.<|||||>Hey @rajgar114, Sorry to answer so late! It's quite difficult to reproduce an error that occurs in a full training loop. Could you by chance provide a minimal reproducible code snippet? Ideally one that doesn't require any dataset but just a tensorflow dummy tensor?<|||||>Sorry, I didn't get that. I have pasted the complete code. You can try to reproduce that even on 2-3 line of dataset as given in Output:- ``` English Hinglish How are you ? Tum kaise ho ? I am fine. Main theek hoon ``` This can be easily converted to tensors as I have used above. This can act as dummy tensors after tokenization.<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread. Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored.
transformers
11,559
closed
fix the mlm longformer example by changing [MASK] to <mask>
This PR fixes an official Huggingface example in the Longformer model ## Before submitting - [x ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case). @Rocketknight1 this PR is to address https://github.com/huggingface/transformers/issues/11488#issuecomment-830065245 Is this what you meant?
05-03-2021 07:49:14
05-03-2021 07:49:14
Is there any way to look at the "new" docs locally? Because I don't know how to test my change if it's docs related<|||||>Yes! You can run `make docs` at the root of your clone. You can also run `make html` in the `docs` folder :)<|||||>You'll need to do `pip install .[docs]` beforehand to have all the appropriate dependencies.<|||||>thanks! It's looking good: ![image](https://user-images.githubusercontent.com/11276933/116860790-61767f80-ac02-11eb-8a53-5533669951a8.png) <|||||>Great job, thanks for this! I made one minor change - I ran the `black` code formatter to make sure the code fits our style guidelines. You can do this yourself in future by running `pip install -U -e .[quality]` in the project root to install the code formatters, and then `make style` to reformat the code, but it's a very minor detail. Once the tests check out I'll merge it. Thanks again for making my job as a maintainer easier!<|||||>No problem, this is what open source is about :D
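For reference, a minimal sketch of the corrected mask-filling usage (the input sentence is made up; the checkpoint is the public `allenai/longformer-base-4096`): Longformer uses a RoBERTa-style tokenizer, so its mask token is `<mask>` rather than BERT's `[MASK]`, which is exactly what this PR fixes in the docs example. ```python import torch
from transformers import LongformerTokenizer, LongformerForMaskedLM

tokenizer = LongformerTokenizer.from_pretrained("allenai/longformer-base-4096")
model = LongformerForMaskedLM.from_pretrained("allenai/longformer-base-4096")

# tokenizer.mask_token is "<mask>"; a literal "[MASK]" string would not be
# recognized as the mask token by this tokenizer.
text = f"The quick brown fox jumps over the {tokenizer.mask_token} dog."
inputs = tokenizer(text, return_tensors="pt")

with torch.no_grad():
    logits = model(**inputs).logits

# locate the masked position and decode the highest-scoring token for it
mask_index = (inputs.input_ids == tokenizer.mask_token_id).nonzero(as_tuple=True)[1]
predicted_id = logits[0, mask_index].argmax(dim=-1)
print(tokenizer.decode(predicted_id))
```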
transformers
11,558
closed
[Flax BERT/Roberta] few small fixes
# What does this PR do? This PR fixes a few docs in Flax models.
05-03-2021 07:21:35
05-03-2021 07:21:35
nit question: Any reason why we are accepting `bias_init` as a parameter while in all other modules we just use it directly in `setup`? https://github.com/huggingface/transformers/blob/a5d2967bd8a5ed2456c593fa9eb5d9c0d726ae7a/src/transformers/models/bert/modeling_flax_bert.py#L480-L483
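To make the nit concrete, here is a generic Flax sketch of the two patterns being compared; it is a simplified stand-in, not the actual `modeling_flax_bert.py` code, and the class and attribute names are invented for illustration. ```python from typing import Callable

import flax.linen as nn
import jax
import jax.numpy as jnp


class HeadWithBiasInitAttribute(nn.Module):
    # Pattern questioned above: the initializer is exposed as a module attribute.
    vocab_size: int
    bias_init: Callable[..., jnp.ndarray] = jax.nn.initializers.zeros

    def setup(self):
        self.bias = self.param("bias", self.bias_init, (self.vocab_size,))

    def __call__(self, hidden_states):
        return hidden_states + self.bias


class HeadWithInlineInit(nn.Module):
    # Pattern used elsewhere: the initializer is chosen directly inside setup.
    vocab_size: int

    def setup(self):
        self.bias = self.param("bias", jax.nn.initializers.zeros, (self.vocab_size,))

    def __call__(self, hidden_states):
        return hidden_states + self.bias
``` Exposing the initializer as an attribute only matters if callers are expected to override it; otherwise the inline form keeps the modules uniform.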
transformers
11,557
closed
DPR with ELECTRA models
# 🚀 Feature request Dense Passage Retrieval should work with any language model as encoder, in particular also with ELECTRA models. However, the current implementation of the class `DPREncoder` seems to be limited to BERT models: https://github.com/huggingface/transformers/blob/a5d2967bd8a5ed2456c593fa9eb5d9c0d726ae7a/src/transformers/models/dpr/modeling_dpr.py#L155 In my understanding, ELECTRA does not provide a pooled_output by default. Therefore, the following line breaks because there are not enough values to unpack (expected 2, got 1): https://github.com/huggingface/transformers/blob/a5d2967bd8a5ed2456c593fa9eb5d9c0d726ae7a/src/transformers/models/dpr/modeling_dpr.py#L181 ## Motivation I would like to train a Dense Passage Retrieval model with transformers and FARM. It works with BERT models as encoders of queries and passages but not with ELECTRA models. I expect the ELECTRA DPR model to outperform the BERT DPR model with regard to retrieval performance. The code for training DPR is similar to this example [here](https://github.com/deepset-ai/haystack/blob/master/tutorials/Tutorial9_DPR_training.py). If the handling of the pooled_output is really the only reason that prevents ELECTRA models from being used, I could imagine working on this myself but I'd be happy to get some guidance first. Maybe there are other problems that I overlooked.
05-03-2021 07:03:36
05-03-2021 07:03:36
This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread. Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored.
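One possible direction, sketched purely for illustration (the class name is hypothetical and this is not the actual `DPREncoder`): since ELECTRA returns no pooled output, an encoder wrapper could pool the first ([CLS]) token of the sequence output itself before handing both tensors to DPR. ```python import torch
from torch import nn
from transformers import ElectraModel


class ElectraDPREncoderSketch(nn.Module):
    """Hypothetical encoder: mimics DPR's (sequence_output, pooled_output) contract."""

    def __init__(self, model_name: str = "google/electra-base-discriminator"):
        super().__init__()
        self.electra = ElectraModel.from_pretrained(model_name)

    def forward(self, input_ids, attention_mask=None):
        outputs = self.electra(input_ids=input_ids, attention_mask=attention_mask)
        sequence_output = outputs.last_hidden_state
        # ELECTRA has no pooler, so take the first-token hidden state as a
        # stand-in for the pooled output that the DPR code expects to unpack.
        pooled_output = sequence_output[:, 0, :]
        return sequence_output, pooled_output
``` Whether first-token pooling is good enough for DPR-quality retrieval would still need to be validated experimentally.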
transformers
11,556
closed
FlaxGPT2
# What does this PR do? [From Patrick:] This PR adds GPT2 for Flax and lays the groundwork for `generation` in Flax. The model is a straightforward translation from PyTorch with two exceptions: 1.) the `past_key_values/caching` is more complex since `generation` has to compile to be usable on TPU with Flax/Jax. This means that the cache cannot grow dynamically, but has to be a static, padded tensor that is filled during generation. This means that the cache can now only be used as follows: i) decide on a `max_length` to which the model should generate at most and call `model.init_cache(batch_size, max_length)`. This will then add a `"cache"` variable to the expected variables by the model and returns a 0 padded tensor of shape `(batch_size, max_length)`. ii) This cache is then subsequently passed & retrieved similar to how it's done in PyTorch: `logits, past_key_values = model(input_ids, past_key_values=cache)`. When doing so, internally, the `cache` is marked as `mutable` so that the Flax model can step-by-step fill the padded tensor. We also need to pass a `_use_cache` boolean internally to tell the model to initialize the cache in the first place. iii) The implemented caching system differs slightly from the original flax caching system [here](https://github.com/google/flax/blob/6111a4b2c5cb56520549c6dfc911135fbd8dbb7f/flax/linen/attention.py#L252) since we want to allow the very first decoding pass to take an `input_prompt` that is longer than 1 (*e.g.* for GPT3-like few-shot prompting) - see: https://github.com/google/flax/issues/1317. iv) A test verifies that compiled generation works as expected v) This also means that when using the cache, the `attention_mask` has to have the same static shape as the cache, I guess we could have better error messages here to, but I think this should mostly be handled by the generation method which will be added in a follow-up PR: https://github.com/huggingface/transformers/pull/11685. 2.) Since in Flax we rely on `flax.linen.dot_product_attention`, we are adding the attention mask once whereas in PyTorch we are setting certain values to the padding value via `torch.where(...)` [here](https://github.com/huggingface/transformers/blob/c73e35323d5805bff27c5dbd9a5691008be1316a/src/transformers/models/gpt2/modeling_gpt2.py#L187) and are adding large negative numbers [here](https://github.com/huggingface/transformers/blob/c73e35323d5805bff27c5dbd9a5691008be1316a/src/transformers/models/gpt2/modeling_gpt2.py#L191). This is unnecessarily costly in Flax and only matters for unrealistic cases where tokens in the middle of the sequence are padded. For all realistic cases PT and Flax behaves exactly the same, which is shown in the overwritten comparison tests.
05-03-2021 04:54:06
05-03-2021 04:54:06
@LysandreJik @sgugger - this PR is blocking a couple of other things. Let me know if you guys aren't really ok with some of those things
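Based on the caching description in the PR body above, a rough usage sketch follows; the prompt and lengths are made up, and the exact call signature (in particular the `position_ids` handling) should be checked against the merged code. ```python import jax.numpy as jnp
from transformers import FlaxGPT2LMHeadModel, GPT2Tokenizer

tokenizer = GPT2Tokenizer.from_pretrained("gpt2")
model = FlaxGPT2LMHeadModel.from_pretrained("gpt2")

batch_size, max_length = 1, 16
input_ids = jnp.array(tokenizer("Hello, my dog", return_tensors="np").input_ids)
prompt_length = input_ids.shape[-1]

# i) allocate the static, zero-padded cache for at most `max_length` positions
past_key_values = model.init_cache(batch_size, max_length)

# the attention mask must share the cache's static length
attention_mask = jnp.ones((batch_size, max_length), dtype="i4")
position_ids = jnp.arange(prompt_length)[None, :]

# ii) the first pass may consume a whole prompt (length > 1) and fills the cache
outputs = model(
    input_ids,
    attention_mask=attention_mask,
    position_ids=position_ids,
    past_key_values=past_key_values,
)
logits, past_key_values = outputs.logits, outputs.past_key_values

# later passes feed one new token at a time, reusing the updated cache
next_token = logits[:, -1].argmax(-1)[:, None]
next_position = jnp.array([[prompt_length]])
outputs = model(
    next_token,
    attention_mask=attention_mask,
    position_ids=next_position,
    past_key_values=past_key_values,
)
```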
transformers
11,555
closed
NaN Pearson/Spearman corr. when fine-tuning BERT with example code and commands
## Environment info <!-- You can run the command `transformers-cli env` and copy-and-paste its output below. Don't forget to fill out the missing fields in that output! --> - `transformers` version: 4.5.1 - Platform: Google COLAB and Ubuntu 18 LTS (tested both) - Python version: 3.7 and 3.6 - PyTorch version (GPU?): 1.8.1 with 1 GPU - Tensorflow version (GPU?): - Using GPU in script?: Yes - Using distributed or parallel set-up in script?: No ### Who can help <!-- Your issue will be replied to more quickly if you can figure out the right person to tag with @ If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**. Please tag fewer than 3 people. Models: - albert, bert, xlm: @LysandreJik - blenderbot, bart, marian, pegasus, encoderdecoder, t5: @patrickvonplaten, @patil-suraj - longformer, reformer, transfoxl, xlnet: @patrickvonplaten - fsmt: @stas00 - funnel: @sgugger - gpt2: @patrickvonplaten, @LysandreJik - rag: @patrickvonplaten, @lhoestq - tensorflow: @Rocketknight1 Library: - benchmarks: @patrickvonplaten - deepspeed: @stas00 - ray/raytune: @richardliaw, @amogkam - text generation: @patrickvonplaten - tokenizers: @LysandreJik - trainer: @sgugger - pipelines: @LysandreJik Documentation: @sgugger Model hub: - for issues with a model report at https://discuss.huggingface.co/ and tag the model's creator. HF projects: - datasets: [different repo](https://github.com/huggingface/datasets) - rust tokenizers: [different repo](https://github.com/huggingface/tokenizers) Examples: - maintained examples (not research project or legacy): @sgugger, @patil-suraj - research_projects/bert-loses-patience: @JetRunner - research_projects/distillation: @VictorSanh --> @sgugger, @patil-suraj ## Information Model I am using (Bert, XLNet ...): `bert-base-cased` The problem arises when using: * [x] the official example scripts: transformers/examples/pytorch/text-classification/run_glue_no_trainer.py * [ ] my own modified scripts: (give details below) The tasks I am working on is: * [x] an official GLUE/SQUaD task: STS-B * [ ] my own task or dataset: (give details below) Here is the training log I got with the command for `run_glue_no_trainer.py` given in [README](https://github.com/huggingface/transformers/tree/master/examples/pytorch/text-classification#pytorch-version-no-trainer) ``` 2021-05-03 00:56:13.263483: I tensorflow/stream_executor/platform/default/dso_loader.cc:49] Successfully opened dynamic library libcudart.so.11.0 05/03/2021 00:56:14 - INFO - __main__ - Distributed environment: NO Num processes: 1 Process index: 0 Local process index: 0 Device: cuda Use FP16 precision: False Downloading and preparing dataset glue/stsb (download: 784.05 KiB, generated: 1.09 MiB, post-processed: Unknown size, total: 1.86 MiB) to /root/.cache/huggingface/datasets/glue/stsb/1.0.0/dacbe3125aa31d7f70367a07a8a9e72a5a0bfeb5fc42e75c9db75b96da6053ad... Downloading: 100% 803k/803k [00:00<00:00, 993kB/s] Dataset glue downloaded and prepared to /root/.cache/huggingface/datasets/glue/stsb/1.0.0/dacbe3125aa31d7f70367a07a8a9e72a5a0bfeb5fc42e75c9db75b96da6053ad. Subsequent calls will reuse this data. 
loading configuration file https://huggingface.co/bert-base-cased/resolve/main/config.json from cache at /root/.cache/huggingface/transformers/a803e0468a8fe090683bdc453f4fac622804f49de86d7cecaee92365d4a0f829.a64a22196690e0e82ead56f388a3ef3a50de93335926ccfa20610217db589307 Model config BertConfig { "architectures": [ "BertForMaskedLM" ], "attention_probs_dropout_prob": 0.1, "finetuning_task": "stsb", "gradient_checkpointing": false, "hidden_act": "gelu", "hidden_dropout_prob": 0.1, "hidden_size": 768, "id2label": { "0": "LABEL_0" }, "initializer_range": 0.02, "intermediate_size": 3072, "label2id": { "LABEL_0": 0 }, "layer_norm_eps": 1e-12, "max_position_embeddings": 512, "model_type": "bert", "num_attention_heads": 12, "num_hidden_layers": 12, "pad_token_id": 0, "position_embedding_type": "absolute", "transformers_version": "4.5.1", "type_vocab_size": 2, "use_cache": true, "vocab_size": 28996 } loading configuration file https://huggingface.co/bert-base-cased/resolve/main/config.json from cache at /root/.cache/huggingface/transformers/a803e0468a8fe090683bdc453f4fac622804f49de86d7cecaee92365d4a0f829.a64a22196690e0e82ead56f388a3ef3a50de93335926ccfa20610217db589307 Model config BertConfig { "architectures": [ "BertForMaskedLM" ], "attention_probs_dropout_prob": 0.1, "gradient_checkpointing": false, "hidden_act": "gelu", "hidden_dropout_prob": 0.1, "hidden_size": 768, "initializer_range": 0.02, "intermediate_size": 3072, "layer_norm_eps": 1e-12, "max_position_embeddings": 512, "model_type": "bert", "num_attention_heads": 12, "num_hidden_layers": 12, "pad_token_id": 0, "position_embedding_type": "absolute", "transformers_version": "4.5.1", "type_vocab_size": 2, "use_cache": true, "vocab_size": 28996 } loading file https://huggingface.co/bert-base-cased/resolve/main/vocab.txt from cache at /root/.cache/huggingface/transformers/6508e60ab3c1200bffa26c95f4b58ac6b6d95fba4db1f195f632fa3cd7bc64cc.437aa611e89f6fc6675a049d2b5545390adbc617e7d655286421c191d2be2791 loading file https://huggingface.co/bert-base-cased/resolve/main/tokenizer.json from cache at /root/.cache/huggingface/transformers/226a307193a9f4344264cdc76a12988448a25345ba172f2c7421f3b6810fddad.3dab63143af66769bbb35e3811f75f7e16b2320e12b7935e216bd6159ce6d9a6 loading file https://huggingface.co/bert-base-cased/resolve/main/added_tokens.json from cache at None loading file https://huggingface.co/bert-base-cased/resolve/main/special_tokens_map.json from cache at None loading file https://huggingface.co/bert-base-cased/resolve/main/tokenizer_config.json from cache at /root/.cache/huggingface/transformers/ec84e86ee39bfe112543192cf981deebf7e6cbe8c91b8f7f8f63c9be44366158.ec5c189f89475aac7d8cbd243960a0655cfadc3d0474da8ff2ed0bf1699c2a5f loading weights file https://huggingface.co/bert-base-cased/resolve/main/pytorch_model.bin from cache at /root/.cache/huggingface/transformers/092cc582560fc3833e556b3f833695c26343cb54b7e88cd02d40821462a74999.1f48cab6c959fc6c360d22bea39d06959e90f5b002e77e836d2da45464875cda Some weights of the model checkpoint at bert-base-cased were not used when initializing BertForSequenceClassification: ['cls.predictions.bias', 'cls.predictions.transform.dense.weight', 'cls.predictions.transform.dense.bias', 'cls.predictions.decoder.weight', 'cls.seq_relationship.weight', 'cls.seq_relationship.bias', 'cls.predictions.transform.LayerNorm.weight', 'cls.predictions.transform.LayerNorm.bias'] - This IS expected if you are initializing BertForSequenceClassification from the checkpoint of a model trained on another task or with another 
architecture (e.g. initializing a BertForSequenceClassification model from a BertForPreTraining model). - This IS NOT expected if you are initializing BertForSequenceClassification from the checkpoint of a model that you expect to be exactly identical (initializing a BertForSequenceClassification model from a BertForSequenceClassification model). Some weights of BertForSequenceClassification were not initialized from the model checkpoint at bert-base-cased and are newly initialized: ['classifier.weight', 'classifier.bias'] You should probably TRAIN this model on a down-stream task to be able to use it for predictions and inference. 100% 6/6 [00:00<00:00, 17.48ba/s] 100% 2/2 [00:00<00:00, 21.17ba/s] 100% 2/2 [00:00<00:00, 23.50ba/s] 05/03/2021 00:56:24 - INFO - __main__ - Sample 468 of the training set: {'attention_mask': [1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1], 'input_ids': [101, 138, 1825, 1110, 4518, 6870, 1105, 18700, 1113, 170, 2727, 1104, 19915, 119, 102, 138, 1685, 4648, 1110, 2807, 1113, 170, 3267, 119, 102], 'labels': 0.0, 'token_type_ids': [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1]}. 05/03/2021 00:56:24 - INFO - __main__ - Sample 3376 of the training set: {'attention_mask': [1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1], 'input_ids': [101, 1124, 8121, 1224, 1142, 1989, 1113, 1103, 1148, 1352, 3143, 1118, 170, 1646, 1697, 119, 102, 1828, 6096, 8121, 1113, 9667, 1113, 1103, 1148, 1352, 3143, 1118, 1126, 1237, 1697, 119, 102], 'labels': 3.0, 'token_type_ids': [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1]}. 05/03/2021 00:56:24 - INFO - __main__ - Sample 3943 of the training set: {'attention_mask': [1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1], 'input_ids': [101, 17448, 118, 21145, 10788, 1107, 1497, 17192, 3170, 102, 19553, 17448, 118, 21145, 4876, 3243, 1166, 1497, 17192, 3170, 102], 'labels': 3.799999952316284, 'token_type_ids': [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1]}. 05/03/2021 00:56:28 - INFO - __main__ - ***** Running training ***** 05/03/2021 00:56:28 - INFO - __main__ - Num examples = 5749 05/03/2021 00:56:28 - INFO - __main__ - Num Epochs = 3 05/03/2021 00:56:28 - INFO - __main__ - Instantaneous batch size per device = 32 05/03/2021 00:56:28 - INFO - __main__ - Total train batch size (w. parallel, distributed & accumulation) = 32 05/03/2021 00:56:28 - INFO - __main__ - Gradient Accumulation steps = 1 05/03/2021 00:56:28 - INFO - __main__ - Total optimization steps = 540 33% 179/540 [00:24<00:52, 6.93it/s]/usr/local/lib/python3.7/dist-packages/scipy/stats/stats.py:3508: PearsonRConstantInputWarning: An input array is constant; the correlation coefficent is not defined. 
warnings.warn(PearsonRConstantInputWarning()) /usr/local/lib/python3.7/dist-packages/numpy/lib/function_base.py:2559: RuntimeWarning: invalid value encountered in true_divide c /= stddev[:, None] /usr/local/lib/python3.7/dist-packages/numpy/lib/function_base.py:2560: RuntimeWarning: invalid value encountered in true_divide c /= stddev[None, :] 05/03/2021 00:56:55 - INFO - /usr/local/lib/python3.7/dist-packages/datasets/metric.py - Removing /root/.cache/huggingface/metrics/glue/stsb/default_experiment-1-0.arrow 05/03/2021 00:56:55 - INFO - __main__ - epoch 0: {'pearson': nan, 'spearmanr': nan} 66% 359/540 [00:51<00:25, 7.12it/s]05/03/2021 00:57:22 - INFO - /usr/local/lib/python3.7/dist-packages/datasets/metric.py - Removing /root/.cache/huggingface/metrics/glue/stsb/default_experiment-1-0.arrow 05/03/2021 00:57:22 - INFO - __main__ - epoch 1: {'pearson': nan, 'spearmanr': nan} 100% 539/540 [01:19<00:00, 7.31it/s]05/03/2021 00:57:50 - INFO - /usr/local/lib/python3.7/dist-packages/datasets/metric.py - Removing /root/.cache/huggingface/metrics/glue/stsb/default_experiment-1-0.arrow 05/03/2021 00:57:50 - INFO - __main__ - epoch 2: {'pearson': nan, 'spearmanr': nan} Configuration saved in /tmp/stsb/config.json Model weights saved in /tmp/stsb/pytorch_model.bin 100% 540/540 [01:22<00:00, 6.51it/s] ``` ## To reproduce Steps to reproduce the behavior: 1. `git clone https://github.com/huggingface/transformers.git` 2. `pip install -r transformers/examples/pytorch/text-classification/requirements.txt` 3. `pip install transformers` 4. Execute the command given in [README](https://github.com/huggingface/transformers/tree/master/examples/pytorch/text-classification#pytorch-version-no-trainer) ``` mkdir /tmp/stsb/ -p python transformers/examples/pytorch/text-classification/run_glue_no_trainer.py \ --model_name_or_path bert-base-cased \ --task_name stsb \ --max_length 128 \ --per_device_train_batch_size 32 \ --learning_rate 2e-5 \ --num_train_epochs 3 \ --output_dir /tmp/stsb/ ``` <!-- If you have code snippets, error messages, stack traces please provide them here as well. Important! Use code tags to correctly format your code. See https://help.github.com/en/github/writing-on-github/creating-and-highlighting-code-blocks#syntax-highlighting Do not use screenshots, as they are hard to read and (more importantly) don't allow others to copy-and-paste your code.--> ## Expected behavior Seeing about 88.64 and 88.48 for Person and Spearman correlations respectively, but I consistently observed through multiple runs that both values are NaN <!-- A clear and concise description of what you would expect to happen. -->
05-03-2021 04:50:11
05-03-2021 04:50:11
Oh yes, we forgot to add a special case for sts-b (which is a regression task) in this script. Will send a fix later this morning! Thanks for flagging!<|||||>Thank you @sgugger for the prompt fix!
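The actual patch landed in the linked script, but the shape of the fix is roughly the sketch below (names are illustrative): STS-B is a regression task, so the model needs `num_labels=1`, float labels, and an evaluation loop that does not `argmax` the logits. With a single-logit head, `argmax` always returns 0, which is exactly the constant-input situation that makes the Pearson/Spearman correlations NaN. ```python import torch
from datasets import load_metric


def evaluate_glue(model, eval_dataloader, task_name: str):
    """Evaluation loop sketch: regression tasks keep raw logits, others take argmax."""
    is_regression = task_name == "stsb"  # STS-B predicts a float similarity score
    metric = load_metric("glue", task_name)
    model.eval()

    for batch in eval_dataloader:
        with torch.no_grad():
            outputs = model(**batch)
        # Keep the raw score for regression; argmax only makes sense for classification.
        predictions = (
            outputs.logits.squeeze(-1) if is_regression else outputs.logits.argmax(dim=-1)
        )
        metric.add_batch(predictions=predictions, references=batch["labels"])

    return metric.compute()
```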
transformers
11,549
closed
Bugs when trying to train a T5 model from scratch in the run_summarization.py script
## Environment info <!-- You can run the command `transformers-cli env` and copy-and-paste its output below. Don't forget to fill out the missing fields in that output! --> - `transformers` version:4.6.0 - Platform: Macbook - Python version: 3.7 - PyTorch version (GPU?):GPU - Using GPU in script?:yes ### Who can help Models: - t5: @patrickvonplaten, @patil-suraj Examples: - maintained examples (from examples/pytorch/summarization): @sgugger, @patil-suraj ## Information I am trying to train a t5 model from scratch without loading pre-trained weights. I am not sure how to train a model from scratch using my custom CSV dataset through changing run_summarization.py script. I got a bug with decoder input. How to specify decoder_inputs when using run_summarization.py. Model I am using (T5): The problem arises when using: * [x] my own modified scripts: (give details below) The tasks I am working on is: * [x] my own task or dataset: (running a custom dataset, with one column "text" and one column "summary" in the customer CSV file) ## To reproduce Steps to reproduce the behavior: 1. creating customed CSV data file as readme said 2. comment loading model weight script, and load a T5Model with config 3. using command : python examples/pytorch/summarization/run_summarization.py \ --model_name_or_path t5-small\ --do_train \ --do_eval \ --do_predict\ --train_file coms_train.csv\ --validation_file coms_val.csv\ --test_file coms_test.csv\ --source_prefix "summarize: "\ --output_dir ./t5_output \ --per_device_train_batch_size=4 \ --per_device_eval_batch_size=4 \ --overwrite_output_dir \ --predict_with_generate <pre lang="python"> config = AutoConfig.from_pretrained( model_args.config_name if model_args.config_name else model_args.model_name_or_path, cache_dir=model_args.cache_dir, revision=model_args.model_revision, use_auth_token=True if model_args.use_auth_token else None, ) tokenizer = AutoTokenizer.from_pretrained( model_args.tokenizer_name if model_args.tokenizer_name else model_args.model_name_or_path, cache_dir=model_args.cache_dir, use_fast=model_args.use_fast_tokenizer, revision=model_args.model_revision, use_auth_token=True if model_args.use_auth_token else None, ) # model = AutoModelForSeq2SeqLM.from_pretrained( # model_args.model_name_or_path, # from_tf=bool(".ckpt" in model_args.model_name_or_path), # config=config, # cache_dir=model_args.cache_dir, # revision=model_args.model_revision, # use_auth_token=True if model_args.use_auth_token else None, # ) model = T5Model(config = config) </pre> Bugs: ```{r} File "examples/pytorch/summarization/run_summarization.py", line 596, in <module> main() File "examples/pytorch/summarization/run_summarization.py", line 531, in main train_result = trainer.train(resume_from_checkpoint=checkpoint) File "/Users/leo/.pyenv/versions/3.7.7/lib/python3.7/site-packages/transformers/trainer.py", line 1192, in train tr_loss += self.training_step(model, inputs) File "/Users/leo/.pyenv/versions/3.7.7/lib/python3.7/site-packages/transformers/trainer.py", line 1590, in training_step loss = self.compute_loss(model, inputs) File "/Users/leo/.pyenv/versions/3.7.7/lib/python3.7/site-packages/transformers/trainer.py", line 1622, in compute_loss outputs = model(**inputs) File "/Users/leo/.pyenv/versions/3.7.7/lib/python3.7/site-packages/torch/nn/modules/module.py", line 889, in _call_impl result = self.forward(*input, **kwargs) File "/Users/leo/.pyenv/versions/3.7.7/lib/python3.7/site-packages/transformers/models/t5/modeling_t5.py", line 1343, in forward 
return_dict=return_dict, File "/Users/leo/.pyenv/versions/3.7.7/lib/python3.7/site-packages/torch/nn/modules/module.py", line 889, in _call_impl result = self.forward(*input, **kwargs) File "/Users/leo/.pyenv/versions/3.7.7/lib/python3.7/site-packages/transformers/models/t5/modeling_t5.py", line 877, in forward raise ValueError(f"You have to specify either {err_msg_prefix}inputs or {err_msg_prefix}inputs_embeds") ValueError: You have to specify either decoder_inputs or decoder_inputs_embeds ``` <!-- If you have code snippets, error messages, stack traces please provide them here as well. Important! Use code tags to correctly format your code. See https://help.github.com/en/github/writing-on-github/creating-and-highlighting-code-blocks#syntax-highlighting Do not use screenshots, as they are hard to read and (more importantly) don't allow others to copy-and-paste your code.--> ## Expected behavior I want to train T5model with my customer CSV dataset without loading pretrained weights. I only want to train their original model architecture without pre-trained model. Thanks!
05-03-2021 00:41:23
05-03-2021 00:41:23
Hi there, This is because you are using `T5Model`. When using `T5Model`, `decoder_input_ids` should be passed directly, but the example scripts pass `labels`, which `T5Model` doesn't accept. Also, for this task `T5ForConditionalGeneration` should be used instead of `T5Model`; it will take `labels` and prepare the `decoder_input_ids` by shifting the `labels`. <|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread. Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored.
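A minimal sketch of the swap described above for a from-scratch setup (the `t5-small` config is just an assumed example; any T5 config works):

```python
from transformers import AutoConfig, T5ForConditionalGeneration

# size the randomly initialized model after t5-small
config = AutoConfig.from_pretrained("t5-small")

# unlike T5Model, the conditional-generation head accepts `labels` and derives
# `decoder_input_ids` from them internally, which is what run_summarization.py relies on
model = T5ForConditionalGeneration(config=config)
```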
transformers
11,548
closed
Why is run_translation.py automatically running on the CPU?
- `transformers` version: 4.5.0 - Platform: linux - Python version: 3.8 - PyTorch version (GPU?): 1.7.1 - Tensorflow version (GPU?): - Using GPU in script?: yes - Using distributed or parallel set-up in script?: yes @patrickvonplaten, @patil-suraj Models: mbart-large-50
05-02-2021 18:41:18
05-02-2021 18:41:18
Could you please post the command that you used?<|||||>I ran the following command for Telugu-to-Telugu translation: python examples/pytorch/translation/run_translation.py --model_name_or_path facebook/mbart-large-50 --do_train --do_eval --do_predict --source_lang te_IN --target_lang te_IN --test_file /home/aniruddha/mbart/ --train_file "/content/te_train_mbart.json" --validation_file "/content/te_dev_mbart.json" --source_prefix "translate Telugu to Telugu: " --output_dir "/content/transformers/examples/pytorch/summarization/chkpt" --overwrite_output_dir yes --per_device_train_batch_size=1 --per_device_eval_batch_size=2 --predict_with_generate yes<|||||>@patil-suraj <|||||>Hi! Is your CUDA environment correctly set up? What is the output of the following in your environment? ``` python -c "import torch;print(torch.cuda.is_available())" ```<|||||>True<|||||>Then it should run on GPU. Why do you say it doesn't?<|||||>I have the same issue. How did you solve it?<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread. Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored.
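One way to sanity-check that the example script will actually pick up the GPU is to inspect the device the `Trainer` resolves; a quick sketch (the `output_dir` value is only a placeholder):

```python
import torch
from transformers import TrainingArguments

print(torch.cuda.is_available(), torch.cuda.device_count())

# the example scripts place the model on TrainingArguments.device;
# this should report a CUDA device and n_gpu >= 1 when the GPU is visible
args = TrainingArguments(output_dir="tmp")
print(args.device, args.n_gpu)
```

If this reports a CPU device, the GPU is most likely hidden from the process (for example via `CUDA_VISIBLE_DEVICES`).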
transformers
11,547
closed
tokenizer not padding input_ids
### Tokenizer Bug Tested on RoBERTa and BERT tokenizer of version `4.5.1`, when calling `tokenizer()` with parameter set to pad sequence to `max_length`, the return value does not pad `input_ids`, only `token_type_ids` and `attention_masks`. This result in a downstream error in model `forward` step, where it complains about tensor size mismatch. **Scripts to reproduce** ``` from typing import Union from transformers import AutoTokenizer from transformers.data.processors.glue import glue_convert_examples_to_features from transformers.data.processors.utils import InputExample if __name__ == '__main__': tokenizer = AutoTokenizer.from_pretrained( "roberta-base", model_input_names=["token_type_ids", "attention_mask"], use_fast=False ) MAX_LENGTH = 256 LABEL_LIST = [0, 1] OUTPUT_MODE = 'classificaiton' inputs = ["Ututu goes public.", "This moon is huge."] examples = [InputExample(guid=str(index), text_a=text, label=None) for index, text in enumerate(inputs)] # test with tokenizer - `input_ids` in the return value is not padded to max_length, `token_type_ids` and `attention_mask` are. batch_encoding = tokenizer( [(example.text_a, example.text_b) for example in examples], max_length=MAX_LENGTH, padding='max_length', truncation=True ) print(batch_encoding) ## test with transformers.data.processor function - `input_ids` in the return value is not padded to max_length, `token_type_ids` and `attention_mask` are. # features = glue_convert_examples_to_features(examples, # label_list=LABEL_LIST, # tokenizer=tokenizer, # max_length=MAX_LENGTH, # output_mode=OUTPUT_MODE) # print(f"input_ids: {features[0].input_ids}") # print(f"attention_mask: {features[0].attention_mask}") ``` **Output** ``` {'input_ids': [[0, 41967, 1182, 257, 1411, 285, 4, 2], [0, 713, 6950, 16, 1307, 4, 2]], 'token_type_ids': [[0, 0, 0, 0, 0, 0, 0, 0, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1], [0, 0, 0, 0, 0, 0, 0, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1]], 'attention_mask': [[1, 1, 1, 1, 1, 1, 1, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 
0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0], [1, 1, 1, 1, 1, 1, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0]]} ``` **Snippet of the Downstream Error** ``` ... > attention_scores = attention_scores + attention_mask E RuntimeError: The size of tensor a (10) must match the size of tensor b (256) at non-singleton dimension 3 /usr/local/lib/python3.7/site-packages/transformers/models/roberta/modeling_roberta.py:246: RuntimeError ```
05-02-2021 13:33:47
05-02-2021 13:33:47
Hello! I believe this comes from the fact that you're not passing `input_ids` as a `model_input_names` value. The tokenizer is unaware it should pad this value, so it doesn't. If this code sample was working in previous versions, it's because of a bug that ignored the `input_ids` and always padded them whether they were actually needed by the model or not. I recommend either removing the explicit definition of `model_input_names`, or adding `input_ids` in the list you're passing to the tokenizer.<|||||>Right on. This script worked on version 2.x previously. Closing issue, thx for your help!
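A minimal sketch of the fix described above, keeping `input_ids` in `model_input_names` so the tokenizer knows it should pad that field:

```python
from transformers import AutoTokenizer

# either drop model_input_names entirely, or make sure "input_ids" is listed
tokenizer = AutoTokenizer.from_pretrained(
    "roberta-base",
    model_input_names=["input_ids", "attention_mask"],
    use_fast=False,
)

batch = tokenizer(
    ["Ututu goes public.", "This moon is huge."],
    max_length=256,
    padding="max_length",
    truncation=True,
)
print(len(batch["input_ids"][0]))  # 256 once input_ids is declared as a model input
```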
transformers
11,546
closed
T5 fp16 crashes on the CPU (but works on CUDA)
## Environment info * transformers version: 4.5.1 * Platform: Linux-4.15.0-126-generic-x86_64-with-glibc2.10 (ubuntu 18.04) * Python version: 3.8.0 * PyTorch version (GPU?): 1.7.1 (True) * Tensorflow version (GPU?): not installed (NA) * Using GPU in script?: Nvidia Quadro RTX 8000 * Using distributed or parallel set-up in script?: no ### Who can help Models: - t5: @patrickvonplaten, @patil-suraj ## Information I am using T5EncoderModel. The problem arises when using: * [ ] the official example scripts: (give details below) * [x] my own modified scripts: (give details below) The tasks I am working on is: * [ ] an official GLUE/SQUaD task: (give the name) * [x] my own task or dataset: (give details below) ## To reproduce Steps to reproduce the behavior: 1. Try to generate embeddings with half precision T5EncoderModel on CUDA, see that it works (script below) 2. Try the same thing on the CPU, see that it crashes (`RuntimeError: "baddbmm__mkl" not implemented for 'Half'`) ```python import torch from numpy import ndarray from transformers import T5EncoderModel, T5Tokenizer # This is protein sequence; Each character is one amino acid which is equivalent to one word protein = "MKKLFVVLVVMPLIYGDNFPCSKLTNRTIGNHWNLIETFLLNYSSRLPPNSDVVLGDYFPTVQPWFNCIRNNSNDLYVTLENLKALYWDYAKETITWNHKQRLNVVVNGYPYSITVTTTRNFNSAEGAIICICKGSPPTTTTESSLTCNWGSECRLNHKFPICPSNSESNCGNMLYGLQWFADE" def embed( sequence: str, model: T5EncoderModel, tokenizer: T5Tokenizer, ) -> ndarray: # Every amino acid is a "word" sequence = " ".join(list(sequence)) ids = tokenizer.batch_encode_plus( [sequence], add_special_tokens=True, padding="longest" ) tokenized_sequences = torch.tensor(ids["input_ids"]).to(model.device) attention_mask = torch.tensor(ids["attention_mask"]).to(model.device) with torch.no_grad(): embeddings = model(input_ids=tokenized_sequences, attention_mask=attention_mask) return embeddings[0].cpu().numpy() def main(): model_name = "Rostlab/prot_t5_xl_uniref50" tokenizer = T5Tokenizer.from_pretrained(model_name, do_lower_case=False) model = T5EncoderModel.from_pretrained(model_name) model = model.half() # This passes model = model.to(torch.device("cuda")).eval() embed(protein, model, tokenizer) # This fails model = model.to(torch.device("cpu")).eval() embed(protein, model, tokenizer) if __name__ == "__main__": main() ``` ``` Traceback (most recent call last): File "test-data/t5_cpu.py", line 44, in <module> main() File "test-data/t5_cpu.py", line 40, in main embed(protein, model, tokenizer) File "test-data/t5_cpu.py", line 23, in embed embeddings = model(input_ids=tokenized_sequences, attention_mask=attention_mask) File "/mnt/project/seqvec-search/.venv/lib/python3.8/site-packages/torch/nn/modules/module.py", line 727, in _call_impl result = self.forward(*input, **kwargs) File "/mnt/project/seqvec-search/.venv/lib/python3.8/site-packages/transformers/models/t5/modeling_t5.py", line 1728, in forward encoder_outputs = self.encoder( File "/mnt/project/seqvec-search/.venv/lib/python3.8/site-packages/torch/nn/modules/module.py", line 727, in _call_impl result = self.forward(*input, **kwargs) File "/mnt/project/seqvec-search/.venv/lib/python3.8/site-packages/transformers/models/t5/modeling_t5.py", line 948, in forward layer_outputs = layer_module( File "/mnt/project/seqvec-search/.venv/lib/python3.8/site-packages/torch/nn/modules/module.py", line 727, in _call_impl result = self.forward(*input, **kwargs) File "/mnt/project/seqvec-search/.venv/lib/python3.8/site-packages/transformers/models/t5/modeling_t5.py", line 631, in forward 
self_attention_outputs = self.layer[0]( File "/mnt/project/seqvec-search/.venv/lib/python3.8/site-packages/torch/nn/modules/module.py", line 727, in _call_impl result = self.forward(*input, **kwargs) File "/mnt/project/seqvec-search/.venv/lib/python3.8/site-packages/transformers/models/t5/modeling_t5.py", line 538, in forward attention_output = self.SelfAttention( File "/mnt/project/seqvec-search/.venv/lib/python3.8/site-packages/torch/nn/modules/module.py", line 727, in _call_impl result = self.forward(*input, **kwargs) File "/mnt/project/seqvec-search/.venv/lib/python3.8/site-packages/transformers/models/t5/modeling_t5.py", line 477, in forward scores = torch.matmul( RuntimeError: "baddbmm__mkl" not implemented for 'Half' ``` ## Expected behavior Computing embeddings also works on the CPU and gives approximately the same results as on the GPU
05-02-2021 08:07:03
05-02-2021 08:07:03
AFAIK there is no possibility to run half-precision (fp16) on CPU in general (cc @stas00)<|||||>Last I looked pytorch didn't support that: https://github.com/pytorch/pytorch/issues/48245 But there is a design discussion going here: https://github.com/pytorch/pytorch/issues/55374 which discusses AMP for bf16 and other dtypes, but also to make AMP working on CPU. AMP will automatically use any ops that support the target dtype and fall back to fp32 when it can't, so that the user doesn't have to guess. And the resulting code will be portable hardware-wise. If I understand correctly, it's relatively straightforward to have various ops implemented on cuda, since it's the same architecture more or less (other than older vs newer arch). But on CPU you have a much wider variety of different architectures, even if you just look at AMD vs. Intel, so it's possible that some op can be implemented on this CPU, but not that one. <|||||>This is still relevant, as https://github.com/pytorch/pytorch/issues/55374 is still unresolved<|||||>Probably the best way to make things shift is to comment in the pytorch land and voice the need/importance. I don't think there is anything that we could do to fix this in the `transformers` side at the moment. Please correct me if I'm wrong.<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread. Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored.
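A minimal sketch of the practical workaround implied by the discussion: only cast the model to fp16 when it will run on CUDA, and keep (or convert back to) fp32 on the CPU:

```python
import torch
from transformers import T5EncoderModel, T5Tokenizer

model_name = "Rostlab/prot_t5_xl_uniref50"  # same checkpoint as in the report above
tokenizer = T5Tokenizer.from_pretrained(model_name, do_lower_case=False)
model = T5EncoderModel.from_pretrained(model_name)

if torch.cuda.is_available():
    model = model.half().to("cuda").eval()  # fp16 is fine on the GPU
else:
    model = model.float().to("cpu").eval()  # CPU kernels for fp16 matmul are missing
```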
transformers
11,545
closed
Struggle to change `num_labels` of roberta-large-mnli
- `transformers` version: 4.4.0 - Platform: Linux-5.8.0-45-generic-x86_64-with-debian-bullseye-sid - Python version: 3.7.10 - PyTorch version (GPU?): 1.6.0 (True) - Tensorflow version (GPU?): not installed (NA) - Using GPU in script?: yes - Using distributed or parallel set-up in script?: no I am trying to build a simple binary classifier based on `roberta-large-mnli`, which is initially fitted to three labels. I found that the following snippet should work, but it doesn't `config = AutoConfig.from_pretrained('roberta-large-mnli', num_labels=2)` `model = AutoModelForSequenceClassification.from_pretrained('roberta-large-mnli', config=config)` It fails with the following error message `RuntimeError: Error(s) in loading state_dict for RobertaForSequenceClassification: size mismatch for classifier.out_proj.weight: copying a param with shape torch.Size([3, 1024]) from checkpoint, the shape in current model is torch.Size([2, 1024]). size mismatch for classifier.out_proj.bias: copying a param with shape torch.Size([3]) from checkpoint, the shape in current model is torch.Size([2]).` I would prefer to build the solution around `AutoModel` if possible. @LysandreJik @sgugger
05-02-2021 06:28:58
05-02-2021 06:28:58
Please have a look at [link](https://stackoverflow.com/questions/67158554/fine-tuning-models-classifier-layer-with-new-label/67187807#67187807). You can not resize a pre-trained model via config. You need to do it by yourself.<|||||>@cronoik , Thank you very for your response! I've changed the code to the following `config = AutoConfig.from_pretrained(self.pretrained_bert, output_attentions=False, output_hidden_states=False)` `self.model = AutoModelForSequenceClassification.from_pretrained(self.pretrained_bert, config=config)` `if self.n_classes != self.model.num_labels:` ` self.model.classifier.out_proj.weight = nn.Parameter(torch.randn(self.n_classes,768))` ` self.model.classifier.out_proj.bias = nn.Parameter(torch.randn(self.n_classes))` However, it seems like this change doesn't make any difference. `print(self.model)` still outputs the same head `(classifier): RobertaClassificationHead(` ` (dense): Linear(in_features=768, out_features=768, bias=True)` ` (dropout): Dropout(p=0.1, inplace=False)` ` (out_proj): Linear(in_features=768, out_features=3, bias=True)` ` )`<|||||>This is just static information that does not affect the code. Just run the model as you did before. It will now return `self.n_classes` values instead of 3. You can change the output with: ``` classifier.out_proj.out_features =2 ```<|||||>for some reason it still fails with the error `py37/lib/python3.7/site-packages/transformers/modeling_roberta.py", line 1011, in forward` ` loss = loss_fct(logits.view(-1, self.num_labels), labels.view(-1))` `RuntimeError: shape '[-1, 3]' is invalid for input of size 16` it seems like it still expects 3 labels, and 16 is twice the batch size <|||||>Have you changed the config as well? self.num_labels is initialized from the config. I have tested it with roberta-large-mnli and it is working fine.<|||||>Nope, I tried the following, and apparently it fails, because `num_labels=2` doesn't fit the num_labels from the config itself. `config = AutoConfig.from_pretrained('roberta-large-mnli', num_labels=2)` `self.model = AutoModelForSequenceClassification.from_pretrained(self.pretrained_bert, config=config)` <|||||>@cronoik I've changed `num_labels` of the models and it started running. I thought that replacing the head classifier is a routine process, but it seems not. I wonder if there is more "universal" approach, where there is no need to specify hidden size, `out_proj` and etc.?<|||||>Sorry, I think this is a bit of a misunderstanding. I thought that you wanted to reuse the weights of the classification head but in the example, you showed that you reinitialized them randomly. In this case, the "standard" procedure is the following (AFAIK): ``` from transformers import AutoModelForSequenceClassification m = AutoModelForSequenceClassification.from_pretrained('roberta-large-mnli') c = AutoConfig.from_pretrained('roberta-large-mnli') s = m.state_dict s = {k:v for k,v in s.items() if not k.startswith('classifier')} c.num_labels = 2 m2 = AutoModelForSequenceClassification.from_pretrained(None, config = c, state_dict=s) ```<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread. Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored.
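A cleaned-up sketch of the "standard" procedure from the last comment above, with the missing `AutoConfig` import added and `state_dict()` actually called (otherwise `s.items()` fails on the bound method):

```python
from transformers import AutoConfig, AutoModelForSequenceClassification

# load the pre-trained 3-label model once to grab its weights
m = AutoModelForSequenceClassification.from_pretrained("roberta-large-mnli")
c = AutoConfig.from_pretrained("roberta-large-mnli")

# drop the 3-way classification head, keep everything else
s = {k: v for k, v in m.state_dict().items() if not k.startswith("classifier")}

c.num_labels = 2
m2 = AutoModelForSequenceClassification.from_pretrained(None, config=c, state_dict=s)
```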
transformers
11,544
closed
'return_dict_in_generate' and 'output_scores' argument in BartForConditionalGeneration.generate()
Dear all I was wondering if I can ask you some questions about how to use `.generate()` for BART or other pre-trained models. The example code is, ``` from transformers import BartTokenizer, BartForConditionalGeneration, BartConfig path = 'facebook/bart-large'\ model = BartForConditionalGeneration.from_pretrained(path) tokenizer = BartTokenizer.from_pretrained(path) ARTICLE_TO_SUMMARIZE = "My friends are cool but they eat too many carbs." inputs = tokenizer([ARTICLE_TO_SUMMARIZE], max_length=1024, return_tensors='pt') # Generate Summary summary_ids = model.generate( inputs['input_ids'], num_beams=4, num_return_sequences=2, max_length=5, early_stopping=True, output_scores=True, return_dict_in_generate=True, ) print(summary_ids.keys()) print(summary_ids['sequences']) print(summary_ids['sequences_scores']) print(len(summary_ids['scores'][0])) print(summary_ids['scores'][0].size()) print([tokenizer.decode(g, skip_special_tokens=True, clean_up_tokenization_spaces=False) for g in summary_ids['sequences']]) ``` Then, the output is, ``` odict_keys(['sequences', 'sequences_scores', 'scores']) tensor([[ 2, 2387, 2387, 964, 2], [ 2, 2387, 4, 4, 2]]) tensor([-0.8599, -0.9924]) 4 torch.Size([4, 50265]) ['MyMy friends', 'My..'] ``` Do not worry about poor performance, ['MyMy friends', 'My..'], since I am only trying to understand how this works. So, the question is, 1. `return_dict_in_generate=True` returns `['sequences']`, but together with `output_scores=True`, it returns `['sequences', 'sequences_scores', 'scores']`. There are other arguments, like `output_attentions` or `output_hidden_states`. [BART BartForConditionalGeneration documents](https://huggingface.co/transformers/model_doc/bart.html#bartforconditionalgeneration) do not explain anything about `.generate()`. So, I searched further and found [Utilities for Generation](https://huggingface.co/transformers/internal/generation_utils.html?highlight=generate) that seems to talk about generating outputs using `.generate()` and [Huggingface transformers model](https://huggingface.co/transformers/main_classes/model.html?highlight=generate#transformers.generation_utils.GenerationMixin.generate) that seems to talk about the general methods of base classes, PreTrainedModel, but there is no document that shows what each variable, ['sequences', 'sequences_scores', 'scores'], actually work or how they are computed. Where is the documents for this? 2. Is `sequences_scores` computed as, \sum_{t} \log p(y_{t} | x, y_{t<})? 3. How do you get `sequences_scores` from `scores`? My initial guess was to apply softmax on `scores` in `dim=1`, then get `topk` with `k=1`, but this does not give me very weird answer. ``` import torch sm = torch.nn.functional.softmax(summary_ids['scores'][0], dim=1) topk = sm.topk(k=1, dim=1) print(sm) print(topk) print(summary_ids['sequences'][0]) ``` which comes out as ``` tensor([[1.2851e-04, 8.8341e-12, 2.4085e-06, ..., 3.9426e-12, 2.8815e-12, 1.0564e-08], [1.9899e-05, 1.9899e-05, 1.9899e-05, ..., 1.9899e-05, 1.9899e-05, 1.9899e-05], [1.9899e-05, 1.9899e-05, 1.9899e-05, ..., 1.9899e-05, 1.9899e-05, 1.9899e-05], [1.9899e-05, 1.9899e-05, 1.9899e-05, ..., 1.9899e-05, 1.9899e-05, 1.9899e-05]]) torch.return_types.topk( values=tensor([[9.9271e-01], [1.9899e-05], [1.9899e-05], [1.9899e-05]]), indices=tensor([[2387], [ 0], [ 0], [ 0]])) tensor([ 2, 2387, 2387, 964, 2]) ``` First token 2387 appears to be correct, but from the second, the probability is 1.9899e-05, which is just equivalent to 1/len(tokenizer). 
This suggests that all tokens are equally likely to be generated. So, how do you get `sequences_scores` from `scores`? 4. How do I get the conditional probability of every output token? For example, if `.generate()` outputs [I, am, student], how do I get [Pr(I | x), Pr(am | x, I), Pr(student | x, I, am)]? Initially, I thought this was 'scores', but I am not sure now. 5. Since I find it difficult to find documentation on `.generate()` or on any of the points above, is this something that experienced NLP researchers or programmers would just be able to guess? Thank you in advance
05-02-2021 01:47:54
05-02-2021 01:47:54
Hi @ktr0921, please use the [forum](https://discuss.huggingface.co/) to ask such questions; [this](https://discuss.huggingface.co/t/generation-probabilities-how-to-compute-probabilities-of-output-scores-for-gpt2/3175) thread might help.
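For the per-token conditional probabilities asked about in question 4, a rough sketch with greedy decoding (`num_beams=1`); note that with beam search the entries of `scores` are processed per-beam scores and do not map to token probabilities this directly:

```python
import torch
from transformers import BartTokenizer, BartForConditionalGeneration

tokenizer = BartTokenizer.from_pretrained("facebook/bart-large")
model = BartForConditionalGeneration.from_pretrained("facebook/bart-large")

inputs = tokenizer(["My friends are cool but they eat too many carbs."], return_tensors="pt")
out = model.generate(
    inputs["input_ids"],
    do_sample=False,
    num_beams=1,
    max_length=10,
    output_scores=True,
    return_dict_in_generate=True,
)

# out.scores holds one (batch_size, vocab_size) tensor of logits per generated step
for step, step_scores in enumerate(out.scores):
    probs = torch.softmax(step_scores, dim=-1)
    token_id = out.sequences[0, step + 1].item()  # position 0 is the decoder start token
    print(tokenizer.decode([token_id]), probs[0, token_id].item())
```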
transformers
11,543
closed
Error when trying to use GPTNeo model
## Environment info - `transformers` version: 4.6.0.dev0 - Platform: Windows-10-10.0.19041-SP0 - Python version: 3.8.7 - PyTorch version (GPU?): not installed (NA) - Tensorflow version (GPU?): 2.4.0 (False) - Using GPU in script?: Not yet - Using distributed or parallel set-up in script?: yes ### Who can help Models: - gpt2: @patrickvonplaten - tensorflow: @Rocketknight1 ## Information The problem arises when using: * [x] the official example scripts: (give details below) * [ ] my own modified scripts: (give details below) Using GPTNeo The tasks I am working on is: * [ ] an official GLUE/SQUaD task: (give the name) * [x] my own task or dataset: (give details below) I want to try to convert GPT-Neo to a TF.js model for a class ## To reproduce Steps to reproduce the behavior: 1. pip install transformers 2. import the necessary parts 3. run file Here's the error I get after running: ``` model = GPTNeoForCausalLM.from_pretrained("EleutherAI/gpt-neo-1.3B") AttributeError: type object 'GPTNeoForCausalLM' has no attribute 'from_pretrained ``` ## Expected behavior The model should be saved in a new folder.
05-02-2021 00:23:10
05-02-2021 00:23:10
I cannot reproduce the error. `GPTNeoForCausalLM` clearly has a `from_pretrained(...)` method<|||||>Not sure what I'm doing wrong then. I'll post another snippet. ``` from transformers import GPT2Tokenizer, GPTNeoForCausalLM, pipeline import tensorflowjs tokenizer = GPT2Tokenizer.from_pretrained('EleutherAI/gpt-neo-1.3B') #model = pipeline('text-generation', model='EleutherAI/gpt-neo-1.3B') model = GPTNeoForCausalLM.from_pretrained("EleutherAI/gpt-neo-1.3B") model.save("./test_gptneo") ```<|||||>Hi there, this is because PyTorch is not installed. Currently, GPT Neo is only implemented in PyTorch (`pt`), and PyTorch needs to be installed if you want to use a model implemented in it. When it's not installed, a dummy object is returned, which is the reason for the above error.<|||||>It would be nice to output a warning when a dummy object is imported, because that is quite an opaque error otherwise.<|||||>@LysandreJik The issue is that the `from_pretrained` method which raises the warning in dummy objects is not implemented for `GPTNeoForCausalLM` https://github.com/huggingface/transformers/blob/a5d2967bd8a5ed2456c593fa9eb5d9c0d726ae7a/src/transformers/utils/dummy_pt_objects.py#L1509-L1512 There are a few more dummy objects which don't have this method implemented.<|||||>Thanks all, after installing PyTorch, the script does run, only for me to discover that it doesn't have a .save method. Guess I should have looked into that. Any pointers on how I could convert this model to Tensorflow, or even better, TF.js? I might have overlooked a method while browsing through the scripts.<|||||>All Transformers models have a `save_pretrained` method; `save` is the wrong method name.<|||||>Were you able to convert the model to Tensorflow?
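A minimal sketch of the saving step, assuming PyTorch is installed so that the real model class (and not a dummy object) is imported:

```python
from transformers import GPTNeoForCausalLM

model = GPTNeoForCausalLM.from_pretrained("EleutherAI/gpt-neo-1.3B")
model.save_pretrained("./test_gptneo")  # save_pretrained, not save
```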
transformers
11,541
closed
Pegasus tokenizer does not have bos token, cannot pretrain
## Environment info <!-- You can run the command `transformers-cli env` and copy-and-paste its output below. Don't forget to fill out the missing fields in that output! --> - `transformers` version: - Platform: Ubuntu 18.04 - Python version: 3.8 - PyTorch version (GPU?): 1.7.1 with GPU (CUDA 10.1) - Tensorflow version (GPU?): N/A - Using GPU in script?: Yes - Using distributed or parallel set-up in script?: No ### Who can help @patrickvonplaten @LysandreJik ## Information Model I am using (Bert, XLNet ...): Pegasus The problem arises when using: * [ ] the official example scripts: (give details below) * [x] my own modified scripts: (give details below) The tasks I am working on is: * [ ] an official GLUE/SQUaD task: SQUaD * [x] my own task or dataset: (give details below) ## To reproduce Steps to reproduce the behavior: I am trying to re-create the basic objective of pre-training with Pegasus. I believe the issue is with the bos token: it does not exist, as per this PR: https://github.com/huggingface/transformers/pull/8731/files. However, it does exist in the [original paper](https://arxiv.org/pdf/1912.08777.pdf) (it's `<s>`) Steps to reproduce: ``` model_name = 'google/pegasus-cnn_dailymail' from transformers import PegasusForConditionalGeneration, PegasusTokenizer model = PegasusForConditionalGeneration.from_pretrained(model_name) ## Taken from paper: input_string = ["Pegasus is <mask_2> . <mask_1> it <mask_2> the model ."] input_ids = tokenizer(input_string, add_special_tokens=False, return_tensors="pt").input_ids print(input_ids) ## tensor([[51881, 117, 3, 110, 107, 2, 126, 3, 109, 861, 110, 107]]) decoder_input_string = ["<s> It is pure white . "] decoder_input_ids = tokenizer(decoder_input_string, add_special_tokens=False, return_tensors="pt", bos_token='<s>').input_ids print(decoder_input_ids) ## tensor([[ 110, 105, 116, 2314, 168, 117, 3763, 695, 110, 107]]) labels_string = ["It is pure white . 
</s>"] labels = tokenizer(labels_string, add_special_tokens=False, return_tensors="pt").input_ids print(labels) ## tensor([[ 168, 117, 3763, 695, 110, 107, 1]]) loss = model(input_ids=input_ids, decoder_input_ids=decoder_input_ids, labels=labels)[0] ``` In the final line, I get the following error: ``` ValueError Traceback (most recent call last) <ipython-input-32-13dda8f18c44> in <module> ----> 1 loss = model(input_ids=input_ids, decoder_input_ids=decoder_input_ids, labels=labels)[0] /home/ubuntu/anaconda3/envs/pytorch_new/lib/python3.8/site-packages/torch/nn/modules/module.py in _call_impl(self, *input, **kwargs) 725 result = self._slow_forward(*input, **kwargs) 726 else: --> 727 result = self.forward(*input, **kwargs) 728 for hook in itertools.chain( 729 _global_forward_hooks.values(), /home/ubuntu/anaconda3/envs/pytorch_new/lib/python3.8/site-packages/transformers/models/pegasus/modeling_pegasus.py in forward(self, input_ids, attention_mask, decoder_input_ids, decoder_attention_mask, head_mask, decoder_head_mask, encoder_outputs, past_key_values, inputs_embeds, decoder_inputs_embeds, labels, use_cache, output_attentions, output_hidden_states, return_dict) 1285 if labels is not None: 1286 loss_fct = CrossEntropyLoss() -> 1287 masked_lm_loss = loss_fct(lm_logits.view(-1, self.config.vocab_size), labels.view(-1)) 1288 1289 if not return_dict: /home/ubuntu/anaconda3/envs/pytorch_new/lib/python3.8/site-packages/torch/nn/modules/module.py in _call_impl(self, *input, **kwargs) 725 result = self._slow_forward(*input, **kwargs) 726 else: --> 727 result = self.forward(*input, **kwargs) 728 for hook in itertools.chain( 729 _global_forward_hooks.values(), /home/ubuntu/anaconda3/envs/pytorch_new/lib/python3.8/site-packages/torch/nn/modules/loss.py in forward(self, input, target) 959 960 def forward(self, input: Tensor, target: Tensor) -> Tensor: --> 961 return F.cross_entropy(input, target, weight=self.weight, 962 ignore_index=self.ignore_index, reduction=self.reduction) 963 /home/ubuntu/anaconda3/envs/pytorch_new/lib/python3.8/site-packages/torch/nn/functional.py in cross_entropy(input, target, weight, size_average, ignore_index, reduce, reduction) 2466 if size_average is not None or reduce is not None: 2467 reduction = _Reduction.legacy_get_string(size_average, reduce) -> 2468 return nll_loss(log_softmax(input, 1), target, weight, None, ignore_index, None, reduction) 2469 2470 /home/ubuntu/anaconda3/envs/pytorch_new/lib/python3.8/site-packages/torch/nn/functional.py in nll_loss(input, target, weight, size_average, ignore_index, reduce, reduction) 2259 2260 if input.size(0) != target.size(0): -> 2261 raise ValueError('Expected input batch_size ({}) to match target batch_size ({}).' 2262 .format(input.size(0), target.size(0))) 2263 if dim == 2: ValueError: Expected input batch_size (10) to match target batch_size (7). ``` it seems to me like the issue is while tokenizing `decoder_input_ids`: the `<s>` gets tokenized as 4 different indexes `110, 105, 116, 2314` instead of just one. This is because there is no bos_token in the tokenizer. ## Expected behavior ValueError should not be thrown and the `decoder_input_ids` should have same length as `labels`, allowing `model(...)` call to work correctly.
05-01-2021 21:34:47
05-01-2021 21:34:47
Hi there, yes, PEGASUS has no BOS token; instead it uses `pad_token_id` as the `decoder_start_token_id`. Also, you won't need to pass `decoder_input_ids`: if you pass the `labels`, the model prepares the `decoder_input_ids` by shifting the `labels` and setting the first token to the pad token.<|||||>Hi @patil-suraj, thanks for answering! Do you mind sharing a small working snippet? I'm new to this library and this model in particular is a bit difficult to understand. <|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread. Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored.<|||||>Hi @patil-suraj . Could you please provide a small working snippet of the code you mentioned? <|||||>Hi @adivekar-utexas sorry to only answer now. ```python from transformers import PegasusTokenizer, PegasusForConditionalGeneration tokenizer = PegasusTokenizer.from_pretrained("google/pegasus-large") model = PegasusForConditionalGeneration.from_pretrained("google/pegasus-large") src_text = "This is source text" tgt_text = "This is target text" inputs = tokenizer(src_text, return_tensors="pt") inputs["labels"] = tokenizer(tgt_text, return_tensors="pt").input_ids outputs = model(**inputs) ```<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread. Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored.<|||||>Hi, I am trying to use GSG in a BART environment. If I understand GSG correctly, there must be multiple target sentences, not just one as in the example above. So, are the shapes of the decoder input and labels as below (if there are 2 masked sentences)? - decoder input string : \<pad> It is the first masked sentence.\<pad>It is the second masked sentence. - decoder label string : It is the first masked sentence.\<pad>It is the second masked sentence.\</s>
transformers
11,540
closed
Fix examples in M2M100 docstrings
# What does this PR do? Replaces `tok` with `tokenizer` so examples can run with copy-paste ## Before submitting - [x] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case). - [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section? - [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link to it if that's the case. - [ ] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/master/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/master/docs#writing-source-documentation). - [ ] Did you write any new necessary tests? ## Who can review? Anyone in the community is free to review the PR once the tests have passed. Feel free to tag members/contributors who may be interested in your PR. <!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @ If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**. Please tag fewer than 3 people. Models: - albert, bert, xlm: @LysandreJik - blenderbot, bart, marian, pegasus, encoderdecoder, t5: @patrickvonplaten, @patil-suraj - longformer, reformer, transfoxl, xlnet: @patrickvonplaten - fsmt: @stas00 - funnel: @sgugger - gpt2: @patrickvonplaten, @LysandreJik - rag: @patrickvonplaten, @lhoestq - tensorflow: @LysandreJik Library: - benchmarks: @patrickvonplaten - deepspeed: @stas00 - ray/raytune: @richardliaw, @amogkam - text generation: @patrickvonplaten - tokenizers: @n1t0, @LysandreJik - trainer: @sgugger - pipelines: @LysandreJik Documentation: @sgugger HF projects: - datasets: [different repo](https://github.com/huggingface/datasets) - rust tokenizers: [different repo](https://github.com/huggingface/tokenizers) Examples: - maintained examples (not research project or legacy): @sgugger, @patil-suraj - research_projects/bert-loses-patience: @JetRunner - research_projects/distillation: @VictorSanh --> @patil-suraj
05-01-2021 20:12:54
05-01-2021 20:12:54
transformers
11,539
closed
Unable to load DistilBertModel during training
I'm following the example to train a DistilBert model from scratch from: examples/distillation/README.md [https://github.com/huggingface/transformers/blob/1ab8dc44b3d84ed1894f5b6a6fab58fb39298fc7/examples/distillation/README.md](url) during preparation of data converted my json training data to text and generated binarized_text using script: ``` python scripts/binarized_data.py \ --file_path data/dump.txt \ --tokenizer_type bert \ --tokenizer_name bert-base-uncased \ --dump_file data/binarized_text ``` After that I got pickle file generated in data folder, created **serialization_dir/my_first_training** folder. now when I have all the pre processed data and trying to train to get distilbert model using script: ``` python train.py \ --student_type distilbert \ --student_config training_configs/distilbert-base-uncased.json \ --teacher_type bert \ --teacher_name bert-base-uncased \ --alpha_ce 5.0 --alpha_mlm 2.0 --alpha_cos 1.0 --alpha_clm 0.0 --mlm \ --freeze_pos_embs \ -- n_gpu 1 \ --dump_path serialization_dir/my_first_training \ --data_file data/binarized_text.bert-base-uncased.pickle \ --token_counts data/token_counts.bert-base-uncased.pickle \ --force # overwrites the `dump_path` if it already exists. ``` I am getting following error even thoe I have **dump_path, data_file**: `train.py: error: the following arguments are required: --dump_path, --data_file` full stack: ``` 2021-04-28 11:55:58.684284: W tensorflow/stream_executor/platform/default/dso_loader.cc:60] Could not load dynamic library 'libcudart.so.11.0'; dlerror: libcudart.so.11.0: cannot open shared object file: No such file or directory 2021-04-28 11:55:58.684326: I tensorflow/stream_executor/cuda/cudart_stub.cc:29] Ignore above cudart dlerror if you do not have a GPU set up on your machine. usage: train.py [-h] [--force] --dump_path DUMP_PATH --data_file DATA_FILE --student_type {distilbert,roberta,gpt2} --student_config STUDENT_CONFIG [--student_pretrained_weights STUDENT_PRETRAINED_WEIGHTS] --teacher_type {bert,roberta,gpt2} --teacher_name TEACHER_NAME [--temperature TEMPERATURE] [--alpha_ce ALPHA_CE] [--alpha_mlm ALPHA_MLM] [--alpha_clm ALPHA_CLM] [--alpha_mse ALPHA_MSE] [--alpha_cos ALPHA_COS] [--mlm] [--mlm_mask_prop MLM_MASK_PROP] [--word_mask WORD_MASK] [--word_keep WORD_KEEP] [--word_rand WORD_RAND] [--mlm_smoothing MLM_SMOOTHING] [--token_counts TOKEN_COUNTS] [--restrict_ce_to_mask] [--freeze_pos_embs] [--freeze_token_type_embds] [--n_epoch N_EPOCH] [--batch_size BATCH_SIZE] [--group_by_size] [--gradient_accumulation_steps GRADIENT_ACCUMULATION_STEPS] [--warmup_prop WARMUP_PROP] [--weight_decay WEIGHT_DECAY] [--learning_rate LEARNING_RATE] [--adam_epsilon ADAM_EPSILON] [--max_grad_norm MAX_GRAD_NORM] [--initializer_range INITIALIZER_RANGE] [--fp16] [--fp16_opt_level FP16_OPT_LEVEL] [--n_gpu N_GPU] [--local_rank LOCAL_RANK] [--seed SEED] [--log_interval LOG_INTERVAL] [--checkpoint_interval CHECKPOINT_INTERVAL] train.py: error: the following arguments are required: --dump_path, --data_file ``` @patil-suraj
05-01-2021 19:42:21
05-01-2021 19:42:21
Does removing the space between `--` and `n_gpu` help?<|||||>Yes @LysandreJik, I was trying to use GPU device 1; I later made the device changes in code. Thanks!
transformers
11,538
closed
[Wav2vec2] Fixed tokenization mistakes while adding single-char tokens to tokenizer
# What does this PR do? <!-- Congratulations! You've made it this far! You're not quite done yet though. Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution. Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change. Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost. --> <!-- Remove if not applicable --> Fixes #10622 ## Before submitting - [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case). - [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section? - [x] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link to it if that's the case. - [ ] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/master/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/master/docs#writing-source-documentation). - [ ] Did you write any new necessary tests? ## Who can review? @patrickvonplaten <!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @ If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**. Please tag fewer than 3 people. Models: - albert, bert, xlm: @LysandreJik - blenderbot, bart, marian, pegasus, encoderdecoder, t5: @patrickvonplaten, @patil-suraj - longformer, reformer, transfoxl, xlnet: @patrickvonplaten - fsmt: @stas00 - funnel: @sgugger - gpt2: @patrickvonplaten, @LysandreJik - rag: @patrickvonplaten, @lhoestq - tensorflow: @LysandreJik Library: - benchmarks: @patrickvonplaten - deepspeed: @stas00 - ray/raytune: @richardliaw, @amogkam - text generation: @patrickvonplaten - tokenizers: @n1t0, @LysandreJik - trainer: @sgugger - pipelines: @LysandreJik Documentation: @sgugger HF projects: - datasets: [different repo](https://github.com/huggingface/datasets) - rust tokenizers: [different repo](https://github.com/huggingface/tokenizers) Examples: - maintained examples (not research project or legacy): @sgugger, @patil-suraj - research_projects/bert-loses-patience: @JetRunner - research_projects/distillation: @VictorSanh -->
05-01-2021 16:43:35
05-01-2021 16:43:35
Hey @Muktan, Thanks a lot for working on this! Could you add a test that shows how your code solves the issue? :-) It should be in `tests/test_tokenization_wav2vec2.py`<|||||>Welcome @patrickvonplaten, I will add a test that shows how the code solves the issue.
transformers
11,537
closed
[Flax] Add FlaxBart models
# What does this PR do? This PR adds Flax implementation of BART and classes for various downstream tasks. Fixes #11478. Most of the code is inspired by the Flax implementation of BERT and PyTorch implementation of BART. From Suraj: A couple of important points to note: - The seq2seq API is slightly different from the PyTorch BART model in that the `__call__` method of `FlaxBART` does not accept `encoder_outputs`. In PT if `encoder_outputs` is passed then the encoder is skipped and only the decoder is called. This is not supported in `FlaxBART` to prevent any unintended issues during JIT compilation as skipping module or passing different inputs to function causes re-compilation. Also, the idiomatic way of accessing intermediate modules in Flax models is to expose explicit methods. So the API is as follows - the `__call__` method expects both the encoder and decoder inputs and does forward pass through both the modules - Every model has a `encode` and `decode` method which should be called if one wants to run just the encoder or decoder. The `decode` method only returns the decoder outputs and for `*ForConditionalGeneration` modules it also returns the `logits` ```python # runs encoder and decoder model(input_ids, decoder_input_ids) # just run the encoder encoder_outputs = model.encode(input_ids) # run the decoder decoder_outputs = model.decode(decoder_input_ids, encoder_outputs) ``` - For now `past_key_values` caching is only implemented in the self-attention layer in the decoder i.e it's not implemented for the cross-attention layer. <hr> **Reviewers:** @patrickvonplaten @sgugger @patil-suraj (and whoever else in the community)
05-01-2021 08:58:32
05-01-2021 08:58:32
Hey @stancld, One important thing we should implement before merging is the caching mechanism, similar to how it's done in GPT2: https://github.com/huggingface/transformers/blob/0b0a598452b02278075a75f84b5ca7bb457224ad/src/transformers/models/gpt2/modeling_flax_gpt2.py#L139. @patil-suraj, could you help here maybe? :-) <|||||>Sure @patrickvonplaten ! @stancld let me know if you need help here or I could take this if you are okay with it :)<|||||>@patrickvonplaten a couple of last questions about the API - Right now the `FlaxBartPretrainedModel.__call__` method also accepts the `encoder_outputs`, but I think it would be cleaner to not do that, as we already have `encode/decode` methods, and let the user use `encode` or `decode` if they want to run just one part of the model. So we could make the `decode` method available for every model (right now it's only available for the `ForConditionalGeneration` model) and it'll return the decoder outputs, and for `*ForConditionalGeneration` models it'll also return the `logits`. - The `decode` method returns `FlaxSeq2SeqLM` output, which includes both the encoder and decoder outputs, but when calling `decode` the user already has `encoder_outputs`, so maybe we should just return decoder outputs since the `decode` method only runs the decoder. What do you think? <|||||>IMO: 1) Agree, happy to remove `encoder_outputs` as an input argument from `call` & make `decode` available for all models 2) Yes, it doesn't make too much sense to include all the encoder-related output here!
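A rough usage sketch of the seq2seq API described in the PR description and the discussion above (the `facebook/bart-base` checkpoint is only an example, and the exact released signatures may differ slightly from this sketch):

```python
from transformers import BartTokenizer, FlaxBartForConditionalGeneration

tokenizer = BartTokenizer.from_pretrained("facebook/bart-base")
model = FlaxBartForConditionalGeneration.from_pretrained("facebook/bart-base")

inputs = tokenizer(["My friends are cool but they eat too many carbs."], return_tensors="np")
decoder_inputs = tokenizer(["My friends are cool"], return_tensors="np")

# full forward pass: __call__ expects both encoder and decoder inputs
outputs = model(
    input_ids=inputs["input_ids"],
    decoder_input_ids=decoder_inputs["input_ids"],
)

# or run the two halves separately via the explicit encode/decode methods
encoder_outputs = model.encode(input_ids=inputs["input_ids"])
decoder_outputs = model.decode(decoder_inputs["input_ids"], encoder_outputs)
```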
transformers
11,536
closed
Adafactor gives RuntimeError: tensors must be 2-D
## Environment info - `transformers` version: 4.2.2 (also tried with the latest version v.4.5.1) - Platform: Linux-4.4.0-1127-aws-x86_64-with-debian-stretch-sid - Python version: 3.6.13 - PyTorch version (GPU?): 1.7.1 (True) - Tensorflow version (GPU?): not installed (NA) - Using GPU in script?: True - Using distributed or parallel set-up in script?: False ### Who can help @sgugger @patrickvonplaten ## Information Model I am using (Bert, XLNet ...): The problem arises when using: * [ ] the official example scripts: (give details below) * [x] my own modified scripts: (give details below) The tasks I am working on is: * [ ] an official GLUE/SQUaD task: (give the name) * [x] my own task or dataset: (give details below) ## To reproduce Steps to reproduce the behavior: In my code, I replaced AdamW (which is working just fine) with **Adafactor** and then I get an error (see below). The code is using also gradient checkpointing. Using **Adafactor from FairSeq** works **well** ``` # Replacing AdamW # optimizer = AdamW([{'params': model.parameters()}], lr=args.lr, eps=args.epsilon) # with Adafactor optimizer = Adafactor( [{'params': model.parameters()}], lr=None, eps=(1e-30, 1e-3), clip_threshold=1.0, decay_rate=-0.8, beta1=None, weight_decay=0.0, relative_step=True, scale_parameter=True, warmup_init=True ) ``` Output: ``` home/ubuntu/transformers/src/transformers/optimization.py:557: UserWarning: This overload of add_ is deprecated: add_(Number alpha, Tensor other) Consider using one of the following signatures instead: add_(Tensor other, *, Number alpha) (Triggered internally at /opt/conda/conda-bld/pytorch_1607370116979/work/torch/csrc/utils/python_arg_parser.cpp:882.) exp_avg_sq_row.mul_(beta2t).add_(1.0 - beta2t, update.mean(dim=-1)) 0%|▎ | 19/6858 [00:37<3:42:15, 1.95s/it] Traceback (most recent call last): File "main.py", line 519, in <module> main() File "main.py", line 510, in main train(allincl_model, epoch, optimizer, scheduler, criterion) File "main.py", line 384, in train optimizer.step() File "/home/ubuntu/transformers/src/transformers/optimization.py", line 561, in step update = self._approx_sq_grad(exp_avg_sq_row, exp_avg_sq_col) File "/home/ubuntu/transformers/src/transformers/optimization.py", line 492, in _approx_sq_grad return torch.mm(r_factor.unsqueeze(-1), c_factor.unsqueeze(0)) RuntimeError: tensors must be 2-D ```
05-01-2021 07:22:16
05-01-2021 07:22:16
This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread. Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored.<|||||>Got the same problem. Have you solved it yet?<|||||>Finally I got to solve this problem. This error is caused by 3-D parameters. When the optimizer gets a `[dim1, dim2, dim3]` parameter, [transformers/optimization.py Line 544](https://github.com/huggingface/transformers/blob/master/src/transformers/optimization.py#L544) sets `state["exp_avg_sq_row"]` as `[dim1, dim2]` and `state["exp_avg_sq_col"]` as `[dim1, dim3]`. Then the two parameters in [line 508](https://github.com/huggingface/transformers/blob/master/src/transformers/optimization.py#L508) become `[dim1, dim2, 1]` and `[1, dim1, dim3]`, and the error occurs. To solve this issue, I create my own adafactor optimizer and change line 506-508 to ``` r_factor = (exp_avg_sq_row / exp_avg_sq_row.mean(dim=-1, keepdim=True)).rsqrt_().unsqueeze(-1) c_factor = exp_avg_sq_col.unsqueeze(-2).rsqrt() return torch.mul(r_factor, c_factor) ``` according to [fairseq's implementation](https://github.com/pytorch/fairseq/blob/main/fairseq/optim/adafactor.py#L159).<|||||>Actually having the same problem<|||||>@ybch14 - do you think this could also be fixed in `transformers` Adafactor implementation?<|||||>> @ybch14 - do you think this could also be fixed in `transformers` Adafactor implementation? Definitely, just change line 506-508 of [transformers/optimization.py](https://github.com/huggingface/transformers/blob/master/src/transformers/optimization.py#506) as I mentioned above then all done! I'm creating my custom optimizer just because I'm not familiar with pull request process and in a hurry with my development needs. I would really appreciate it if you can help initiate a pull request. I will attach my local test code here to help your local test: ``` import torch import torch.nn as nn import torch.nn.functional as F from transformers.optimization import Adafactor class Model(nn.Module): def __init__(self): super(Model, self).__init__() self.w = nn.Parameter(torch.randn(2, 3, 4), requires_grad=True) def forward(self): return self.w.mean().sigmoid() device = torch.device("cuda") target = torch.tensor(1.).to(device) model = Model().to(device) y = model() loss = F.binary_cross_entropy(y, target) loss.backward() optimizer = Adafactor(model.parameters(), scale_parameter=True, relative_step=True, warmup_init=True, lr=None) optimizer.step() ``` <|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread. Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored.<|||||>Thanks a lot for your help here @ybch14 ! I've opened a PR to fix it just like you suggested and it seems to work just fine :-)<|||||>BTW, we have some guidelines here on how you can open pull requests: https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md<|||||>@patrickvonplaten Thank you for your PR and hope pytorch gets better :)
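A small sketch showing why the fairseq-style reshaping quoted above also handles 3-D parameters (the shapes follow the row/column statistics described in the comment):

```python
import torch

def approx_sq_grad(exp_avg_sq_row, exp_avg_sq_col):
    # fairseq-style factored second-moment reconstruction; broadcasting over the
    # leading dimension makes it work for parameters with more than two dims
    r_factor = (exp_avg_sq_row / exp_avg_sq_row.mean(dim=-1, keepdim=True)).rsqrt_().unsqueeze(-1)
    c_factor = exp_avg_sq_col.unsqueeze(-2).rsqrt()
    return torch.mul(r_factor, c_factor)

# for a [dim1, dim2, dim3] parameter, Adafactor keeps row stats of shape
# [dim1, dim2] and column stats of shape [dim1, dim3]
row_stats = torch.rand(2, 3)
col_stats = torch.rand(2, 4)
print(approx_sq_grad(row_stats, col_stats).shape)  # torch.Size([2, 3, 4])
```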
transformers
11,535
closed
Vectorized NumPy-based functions to Torch-based functions for SpecAugment
# What does this PR do? <!-- Congratulations! You've made it this far! You're not quite done yet though. Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution. Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change. Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost. --> <!-- Remove if not applicable --> Fixes #10459 ## Before submitting - [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case). - [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section? - [x] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link to it if that's the case. - [ ] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/master/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/master/docs#writing-source-documentation). - [ ] Did you write any new necessary tests? ## Who can review? Anyone in the community is free to review the PR once the tests have passed. Feel free to tag members/contributors who may be interested in your PR. @patrickvonplaten <!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @ If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**. Please tag fewer than 3 people. Models: - albert, bert, xlm: @LysandreJik - blenderbot, bart, marian, pegasus, encoderdecoder, t5: @patrickvonplaten, @patil-suraj - longformer, reformer, transfoxl, xlnet: @patrickvonplaten - fsmt: @stas00 - funnel: @sgugger - gpt2: @patrickvonplaten, @LysandreJik - rag: @patrickvonplaten, @lhoestq - tensorflow: @LysandreJik Library: - benchmarks: @patrickvonplaten - deepspeed: @stas00 - ray/raytune: @richardliaw, @amogkam - text generation: @patrickvonplaten - tokenizers: @n1t0, @LysandreJik - trainer: @sgugger - pipelines: @LysandreJik Documentation: @sgugger HF projects: - datasets: [different repo](https://github.com/huggingface/datasets) - rust tokenizers: [different repo](https://github.com/huggingface/tokenizers) Examples: - maintained examples (not research project or legacy): @sgugger, @patil-suraj - research_projects/bert-loses-patience: @JetRunner - research_projects/distillation: @VictorSanh -->
05-01-2021 07:07:53
05-01-2021 07:07:53
Following are the average test results for the `_compute_mask_indices` function when ran 100 times. Each X.1 subtest case is calculated with `attention_mask = None` and each X.2 subtest case is calculated with `attention_mask` calculated with the following code: ``` attention_mask = torch.ones((batch_size, sequence_length), device=torch_device, dtype=torch.long) attention_mask[:, -sequence_length // 2 :] = 0 ``` 1) Test - 100 times batch_size = 4 sequence_length = 60 mask_prob = 0.5 mask_length = 1 Test **1.1** - Result - seconds New Code GPU: 0.002872414588928223 New Code CPU: 0.0006633639335632324 Old Code: 0.0003826594352722168 Test **1.2** - Result - seconds New Code GPU: 0.002973439693450928 New Code CPU: 0.0006422805786132813 Old Code: 0.0004153728485107422 2) Test - 100 times batch_size = 100 sequence_length = 60 mask_prob = 0.5 mask_length = 1 Test **2.1** - Result - seconds New Code GPU: 0.0663988971710205 New Code CPU: 0.014422652721405029 Old Code: 0.008053600788116455 Test **2.2** - Result - seconds New Code GPU: 0.06568058252334595 New Code CPU: 0.01404146671295166 Old Code: 0.008796172142028809 3) Test - 100 times batch_size = 1000 sequence_length = 60 mask_prob = 0.5 mask_length = 1 Test **3.1** - Result - seconds New Code GPU: 0.6623778533935547 New Code CPU: 0.14311392545700075 Old Code: 0.08917582988739013 Test **3.2** - Result - seconds New Code GPU: 0.6566315603256225 New Code CPU: 0.13569485664367675 Old Code: 0.08646429538726806 4) Test - 100 times batch_size = 4 sequence_length = 1000 mask_prob = 0.5 mask_length = 1 Test **4.1** - Result - seconds New Code GPU: 0.0031879472732543944 New Code CPU: 0.0013749027252197266 Old Code: 0.00248842716217041 Test **4.2** - Result - seconds New Code GPU: 0.0031322765350341795 New Code CPU: 0.0010571050643920898 Old Code: 0.0015622496604919434 5) Test - 100 times batch_size = 4 sequence_length = 60 mask_prob = 0.5 mask_length = 4 Test **5.1** - Result - seconds New Code GPU: 0.003424525260925293 New Code CPU: 0.0008220672607421875 Old Code: 0.0003489851951599121 Test **5.2** - Result - seconds New Code GPU: 0.0034962940216064454 New Code CPU: 0.0007469034194946289 Old Code: 0.0003824186325073242 6) Test - 100 times batch_size = 4 sequence_length = 1000 mask_prob = 0.5 mask_length = 4 Test **6.1** - Result - seconds New Code GPU: 0.003502027988433838 New Code CPU: 0.0014672994613647461 Old Code: 0.0017711663246154786 Test **6.2** - Result - seconds New Code GPU: 0.0034971165657043455 New Code CPU: 0.0011277437210083009 Old Code: 0.0011361241340637207 7) Test - 100 times batch_size = 128 sequence_length = 1000 mask_prob = 0.5 mask_length = 4 Test **7.1** - Result - seconds New Code GPU: 0.10527128219604492 New Code CPU: 0.04762232780456543 Old Code: 0.052808206081390384 Test **7.2** - Result - seconds New Code GPU: 0.1032623028755188 New Code CPU: 0.03513101100921631 Old Code: 0.03523270606994629<|||||>Hey @01-vyom, It looks like the git history is messed up :-/ Sorry about that! This often happens when one does a wrong `git merge` -> could you maybe open a new PR with a clear git history / git diff? Thanks a lot!<|||||>Ok sure<|||||>I am closing this PR.<|||||>@patrickvonplaten created the new PR.
transformers
11,534
closed
How to run transformer models like t5-small, facebook/bart-large-cnn without loading pretrained weights?
Same as the title. When using run_summarization.py, how can I run transformer models like t5-small or facebook/bart-large-cnn without loading pre-trained weights? I only want to train the original model architecture from scratch, without the pre-trained model. Thanks!
04-30-2021 17:35:28
04-30-2021 17:35:28
For this you could either initialize a random model, save it, and pass its path as the `model_name_or_path` arg. Or modify the script to create a random model instead of a pre-trained one, i.e. to init the model use ``` model = AutoModelForSeq2SeqLM(config) ``` instead of using `.from_pretrained`.<|||||>> For this you could either initialize a random model, save it, and pass its path as the `model_name_or_path` arg. > Or modify the script to create a random model instead of a pre-trained one, i.e. to init the model use > > ``` > model = AutoModelForSeq2SeqLM(config) > ``` > > instead of using `.from_pretrained`. Thanks for your swift reply. I already tried to use model = AutoModelForSeq2SeqLM(config) instead of using .from_pretrained, but it fails: ``` File "examples/pytorch/summarization/run_summarization.py", line 358, in main model = AutoModelForSeq2SeqLM(config) TypeError: __init__() takes 1 positional argument but 2 were given ``` It seems I should use ```model = T5ForConditionalGeneration(config=config)``` or ```model = BartForConditionalGeneration(config=config)``` when I want to train a Bart or T5 model from scratch without loading pretrained weights. Is that right? Thank you very much!<|||||>Ohh sorry, it should be `AutoModelForSeq2SeqLM.from_config(...)`, and yeah, you could also use the individual classes if you want.<|||||>> Ohh sorry, it should be `AutoModelForSeq2SeqLM.from_config(...)`, and yeah, you could also use the individual classes if you want. Thanks for your reply. One quick question: if I use ```AutoModelForSeq2SeqLM.from_config(...)```, when I mention t5-small or t5-base or t5-large, is it the same among these models? Also, if I use ```model = T5ForConditionalGeneration(config=config)```, which model am I using? t5-small or t5-base or t5-large? Thank you very much!<|||||>> which model am I using? t5-small or t5-base or t5-large? This depends on the `config`; this initializes the model according to the values in the `config`, so if the `config` is that of `t5-small`, the model will be of `t5-small` size with random weights.<|||||>> > which model am I using? t5-small or t5-base or t5-large? > > This depends on the `config`; this initializes the model according to the values in the `config`, so if the `config` is that of `t5-small`, the model will be of `t5-small` size with random weights. I see, thank you so much! Last question, sorry for asking so many times lol. I am trying to train T5-large from scratch, but it is very slow even though I use a GPU. Do you know how to run run_summarization.py with multiple GPUs? Thank you very much! <|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread. Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored.
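Putting the suggestions above together, a minimal sketch of building a randomly initialized model from an existing architecture's config and saving it so its path can be passed as `model_name_or_path` (the checkpoint name and output directory below are just examples):
```python
from transformers import AutoConfig, AutoModelForSeq2SeqLM, AutoTokenizer

# Load only the configuration (architecture hyper-parameters), not the pre-trained weights.
config = AutoConfig.from_pretrained("t5-small")

# Build a model with that architecture and randomly initialized weights.
model = AutoModelForSeq2SeqLM.from_config(config)

# The example scripts still need a tokenizer, so save one alongside the model.
tokenizer = AutoTokenizer.from_pretrained("t5-small")

model.save_pretrained("./t5-small-random")
tokenizer.save_pretrained("./t5-small-random")
```
The resulting `./t5-small-random` directory can then be passed to `run_summarization.py` as `--model_name_or_path`.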
transformers
11,533
closed
Update training tutorial
# What does this PR do? This PR rewrites the training tutorial, which needed a bit of a refresher. It uses a simple example on the IMDB dataset (for basic text classification) with fine-tuning using: - Trainer - Keras - Raw training loop in PyTorch One last section with the raw training loop in TensorFlow could be added in a follow-up PR.
04-30-2021 16:43:28
04-30-2021 16:43:28
Re: freezing layers, someone pointed me to your answer here as well: From your deepspeed talk: https://youtu.be/RG-yV5zgqjQ?t=2450 I guess it's still a bit counterintuitive that the randomly initialized head doesn't cause havoc on the weights of base layers? Just to make sure, it's not that you unfreeze the whole thing after only training the head first? You just train the entire thing unfrozen from the very beginning? I've shared this with several folks and there was enough surprise that it could be worth mentioning this explicitly in this tutorial, as this is something I think lots of folks may not know! P.S. if it is the case that you unfreeze the entire thing from the very beginning this is such a surprising result I feel like it's worth a blog post or something! <|||||>This is a surprise for someone who comes from the fastai community, but insisting a lot on this for a user who doesn't even know what freezing layers is won't be helpful either, which is why I'm not mentioning it in the new version. I'll try to find a compromise between the two :-) And yes, Transformers models are usually fine-tuned without freezing anything, which is what is done in all the research papers for GLUE/Squad etc. Training only the randomly initialized head usually does not achieve anything good and the state it ends in is so bad you can't recover by fine-tuning the whole model after.<|||||>@sgugger thank you for clarifying, and thanks for your patience! I'm really glad I learned about this today as I have been doing it wrong all along. Seems like an interesting research project to find out "why"
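For readers landing on this thread, a minimal sketch of what freezing the base model would look like in PyTorch; as discussed above, the tutorial (and most papers) fine-tune everything, so this is only illustrative:
```python
from transformers import AutoModelForSequenceClassification

model = AutoModelForSequenceClassification.from_pretrained("bert-base-cased", num_labels=2)

# Freeze the pre-trained encoder so only the randomly initialized classification head trains.
for param in model.base_model.parameters():
    param.requires_grad = False
```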
transformers
11,532
closed
Files not accessible via IPv6
In certain cases (such as [1]), users only have access to the internet via IPv6. Unfortunately huggingface.co (or the domain hosting the files) does not have AAAA records and is not reachable from IPv6, causing `ValueError: Connection error, and we cannot find the requested files in the cached path. Please try again or make sure your Internet connection is on.` when triggering downloads: ``` from transformers import AutoTokenizer AutoTokenizer.from_pretrained('facebook/mbart-large-cc25') ``` [1] https://docs.aws.amazon.com/AmazonECS/latest/developerguide/task-networking.html#task-networking-vpc-dual-stack "One of the benefits of using a VPC in dual-stack mode is that tasks that are assigned an IPv6 address are able to access the internet as long as the VPC is configured with either an internet gateway or an egress-only internet gateway. NAT gateways are not needed."
04-30-2021 15:58:51
04-30-2021 15:58:51
@n1t0 might be knowledgeable about this :)<|||||>@n1t0 depending where huggingface.co is hosted, solving this issue may just boil down to turning on dual-stack support in the hosting provider's console.<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread. Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored.<|||||>This issue hasn't been addressed <|||||>Might be interesting for @sterchelen as well as @n1t0 <|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread. Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored.<|||||>This issue hasn't been addressed<|||||>we are looking into this<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread. Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored.<|||||>This issue hasn't been addressed Just commenting to keep the bot from closing the issue.<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread. Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored.<|||||>This issue hasn't been addressed. This can only be addressed by the huggingface team.<|||||>@leezu Can you try again now?<|||||>Thank you @n1t0. I verified `AutoTokenizer.from_pretrained` and `AutoModel.from_pretrained` from an IPv6-only instance (no IPv4 route to internet) works now. Thank you for enabling dual-stack support on your end!<|||||>Yay great job on this @n1t0! You should tweet about it :)
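For anyone wanting to verify reachability from their own host, a rough standard-library check:
```python
import socket

# Raises socket.gaierror as long as no AAAA record is resolvable from this host.
print(socket.getaddrinfo("huggingface.co", 443, socket.AF_INET6))

# The IPv4 lookup, by contrast, succeeds, which is why dual-stack hosts never saw the problem.
print(socket.getaddrinfo("huggingface.co", 443, socket.AF_INET))
```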
transformers
11,531
open
Adding custom tokens makes the T5Tokenizer always strip spaces
## Environment info - `transformers` version: 4.5.1 - Platform: Linux-3.10.0-957.5.1.el7.x86_64-x86_64-with-centos-7.6.1810-Core - Python version: 3.6.13 - PyTorch version (GPU?): 1.7.1 (True) - Tensorflow version (GPU?): not installed (NA) - Using GPU in script?: No - Using distributed or parallel set-up in script?: No If it helps, here's also my `pip-chill`: ```text black==19.10b0 corrupt-text==0.0.1 en-core-web-sm==3.0.0 fairseq==1.0.0a0+f6f220e flake8==3.9.0 pep8==1.7.1 pip-chill==1.0.1 rope==0.14.0 sentencepiece==0.1.95 torchtext==0.8.0 transformers==4.5.1 wikiextractor==3.0.5 ``` Note that `corrupt-text` is a custom library, and the problem persists even when it's uninstalled. It has nothing to do with the problem, as can be seen in the **to reproduce** section. ### Who can help Since it's a tokenizer issue, probably @LysandreJik. ## Information I'm using the `T5Tokenizer`. After adding custom tokens, if the input is tokenized and they're found in the text, they will have stripped spaces around them even if I explicitly give the `add_tokens` and `add_special_tokens` a list of `AddedToken` objects with `lstrip` and `rstrip` explicitly set to `False`. The problem arises when using: * [ ] the official example scripts: (give details below) * [x] my own modified scripts: (give details below) Check out the **to reproduce** section to get an example of a code that doesn't work. The tasks I am working on is: * [ ] an official GLUE/SQUaD task: (give the name) * [x] my own task or dataset: (give details below) It's not really relevant for this problem but the code is, once again, in the **to reproduce** section. This is likely related to https://github.com/huggingface/transformers/issues/7901. ## To reproduce Try running this code: ```python from transformers import T5Tokenizer from tokenizers import AddedToken text = "Bruh doits <do_not_touch>" tokenizer = T5Tokenizer.from_pretrained("t5-small") tokenizer.add_tokens([AddedToken("doits", lstrip=False, rstrip=False)]) tokenizer.add_special_tokens( { "additional_special_tokens": [ AddedToken("<do_not_touch>", lstrip=False, rstrip=False) ] } ) tokens = tokenizer.tokenize(text) ids = tokenizer( text, add_special_tokens=False, padding=False, truncation=False, return_attention_mask=False, )["input_ids"] print(f"Text: {text}") print(f"Tokens: {tokens}") print(f"IDs: {ids}") print(f"Text after: {tokenizer.convert_tokens_to_string(tokens)}") ``` You will get this: ```text Text: Bruh doits <do_not_touch> Tokens: ['▁', 'Bru', 'h', 'doits', '<do_not_touch>'] IDs: [3, 9465, 107, 32100, 32101] Text after: Bruhdoits<do_not_touch> ``` ## Expected behavior We should get: ```text Text: Bruh doits <do_not_touch> Tokens: ['▁', 'Bru', 'h', '▁', 'doits', '▁', '<do_not_touch>'] IDs: [3, 9465, 107, 3, 32100, 3, 32101] Text after: Bruh doits <do_not_touch> ``` EDIT: Updated the code to have `rstrip=False`, since I made the mistake originally, but still acts the same.
04-30-2021 15:55:42
04-30-2021 15:55:42
This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread. Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored.<|||||>The issue still persists and tokenizers in general still act weird with special tokens and whitespace.<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread. Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored.<|||||>Hello @LysandreJik, is there any update on this? We are also facing issues with added tokens for both Rust and Python tokenizers when using the default mt5 tokenizer. Similar to the issues above, we experience inconsistent behavior with spaces in the immediate surroundings of added tokens. ``` tokenizer_fast = MT5TokenizerFast.from_pretrained("google/mt5-base") tokenizer = MT5Tokenizer.from_pretrained("google/mt5-base") tokenizer_fast.add_tokens("<new_token>") tokenizer.add_tokens("<new_token>") text = "This is a test <new_token>." tokens = tokenizer_fast.tokenize(text) print(tokens) tokenizer_fast.convert_tokens_to_string(tokens) ``` `['▁This', '▁is', '▁', 'a', '▁test', '▁', '<new_token>', '▁', '.']` `'This is a test <new_token> .'` For the fast tokenizer, a space is inserted after the added token. For the slow one, spaces in front of added tokens are also removed: ``` tokens = tokenizer.tokenize(text) print(tokens) tokenizer.convert_tokens_to_string(tokens) ``` `['▁This', '▁is', '▁', 'a', '▁test', '<new_token>', '▁', '.']` `'This is a test<new_token> .'` At least for the Python tokenizer, I believe the problem lies in the way texts with added tokens are passed to the underlying sentence_piece tokenizer. The texts are basically split by added tokens and the remaining parts are individually passed to sp. By default, the sp tokenizer adds a space at the start of each sequence and removes spaces at the end: ``` tokenizer.sp_model.encode("A test ", out_type=str) ``` `['▁A', '▁test']` When tokens are converted back into a single string, only the space at the very first position is removed, but not when there is an added token in front of it: ``` tokenizer.sp_model.decode_pieces(['▁This', '▁is', '▁', 'a', '▁test', '<new_token>', '▁', '.']) ``` `'This is a test<new_token> .'` For the slow tokenizer, we could modify the tokens manually to e.g. take into account spaces in the original string. Unfortunately we lack the Rust skills to do this for the fast tokenizer. Are there any plans to adjust this in the near future (since this issue still has the WIP tag)?<|||||>Pinging @SaulLu <|||||>Hey! This is being discussed in the PR linked above! Sorry for the late reply<|||||>Regarding the default MT5 problem with the addition of a space, this is being handled here: #24565. The problem is not because of stripping left/right for punctuation, but `rstrip` and `lstrip` are indeed ignored<|||||>Fixing the rust tokenizer: it's a hack so I might have to change the rust code, but for now the following will strip anything on the right and left, giving the expected results.
```python
class T5Converter(SpmConverter):
    def vocab(self, proto):
        num_extra_ids = self.original_tokenizer._extra_ids
        vocab = [(piece.piece, piece.score) for piece in proto.pieces]
        vocab += [(f"<extra_id_{i}>_", 0.0) for i in range(num_extra_ids - 1, -1, -1)]
        return vocab
    ..........
```
I tested:
```python
>>> from transformers import AutoTokenizer
>>> tokenizer = AutoTokenizer.from_pretrained("google/mt5-small", from_slow=True)
>>> tokenizer.tokenize("Hello, <extra_id_0>, ")
['▁Hello', ',', '▁<extra_id_0>', ',', '▁']
```
transformers
11,530
closed
generate text with inputs_embeds (instead of input_ids) for T5.
model.generate() supports input_ids only ``` outs = model.model.generate(input_ids=batch['source_ids'], attention_mask=batch['source_mask'], output_scores=True, max_length=model.model_arguments.max_output_seq_length) preds_cleaned = [model.tokenizer.decode(ids, skip_special_tokens=True, clean_up_tokenization_spaces=True) for ids in outs] ``` It would be good to have the functionality of generating text from embeddings. model.forward() allows passing inputs_embeds instead of input_ids.
04-30-2021 15:53:04
04-30-2021 15:53:04
One possible solution is to get the `encoder_outputs` by passing `inputs_embeds` to `encoder` and then passing that `encoder_outputs` to `.generate`, so for example ``` with torch.no_grad(): encoder_outputs = model.get_encoder()(inputs_embeds=input_embeds) gen_ids = model.generate(input_ids=None, encoder_outputs=encoder_outputs) ```<|||||>Thanks.
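Expanding the workaround above into a fuller sketch (the embedding lookup shown here is just one way to obtain `inputs_embeds`, and `t5-small` is only an example checkpoint):
```python
import torch
from transformers import T5ForConditionalGeneration, T5Tokenizer

tokenizer = T5Tokenizer.from_pretrained("t5-small")
model = T5ForConditionalGeneration.from_pretrained("t5-small")

input_ids = tokenizer("translate English to German: Hello world", return_tensors="pt").input_ids

# Any tensor of shape (batch, seq_len, d_model) works; here we reuse the model's own embeddings.
inputs_embeds = model.get_input_embeddings()(input_ids)

with torch.no_grad():
    encoder_outputs = model.get_encoder()(inputs_embeds=inputs_embeds)
    gen_ids = model.generate(input_ids=None, encoder_outputs=encoder_outputs)

print(tokenizer.decode(gen_ids[0], skip_special_tokens=True))
```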
transformers
11,529
closed
Deberta v2 Fast Tokenizer
Fast tokenizers for deberta models were requested in #10498. For the deberta (v1) model, they were implemented in #11387. Deberta v2 fast tokenizers are yet to be implemented.
04-30-2021 15:39:18
04-30-2021 15:39:18
Could some kind soul please add a fast tokenizer for Deberta V2? It would be really helpful, thanks in advance!<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread. Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored.
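In the meantime, `AutoTokenizer` should simply fall back to the slow implementation; a quick way to check which one you got (the checkpoint name is just an example):
```python
from transformers import AutoTokenizer

tok = AutoTokenizer.from_pretrained("microsoft/deberta-v2-xlarge", use_fast=True)
print(tok.is_fast)  # expected to stay False until a DebertaV2 fast tokenizer is implemented
```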
transformers
11,528
closed
Adds Flax BERT finetuning example on GLUE
# What does this PR do? Adds a Flax BERT fine-tuning example which fine-tunes on one of the GLUE tasks. I evaluated all tasks 5 times and added the average over runs, the best run, and the stdev in a table in the README. I used the seed of the best run as the default. I also ran all experiments on three devices: 8 Cloud TPU-v3, 1 Cloud TPU-v3, 1 P100 GPU. I compared the runtimes and put them in another table in the README. This PR was discussed over Slack with @patrickvonplaten and @sgugger. ## Before submitting - [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case). - [X] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section? - [X] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link to it if that's the case. - [ ] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/master/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/master/docs#writing-source-documentation). - [ ] Did you write any new necessary tests?
04-30-2021 13:03:17
04-30-2021 13:03:17
transformers
11,527
closed
Run model templates on master
It currently runs on branches, but not on `master`.
04-30-2021 12:41:23
04-30-2021 12:41:23
transformers
11,526
closed
Add Stas and Suraj as authors
# What does this PR do? In recognition of all your hard work and the amazing stuff you've added to the lib, adding @stas00 and @patil-suraj to the authors of the lib 🤗
04-30-2021 12:31:56
04-30-2021 12:31:56
Thank you, Sylvain ❤️ <|||||>Thank you, guys! That feels good!
transformers
11,525
closed
Adding support for `pipeline("automatic-speech-recognition")`.
# What does this PR do? Implements the default load logic to make `AutomaticSpeechRecognitionPipeline` work like other pipelines with `pipeline(task="automatic-speech-recognition", ...)`. The main issue with the current implementation is the `"config"` choice for AutoModel. It would be great to have something like `AutoModelFor` that implements the same logic (load the config, check `architectures`, and load the first one). Alternatives: - Implement `AutoModelForCTC`, `AutoModelForConditionalGeneration`, allow the `ALLOWED_TASKS` mapping to accept iterables, and try to load models accordingly. This might enable better switch handling here: https://github.com/huggingface/transformers/blob/master/src/transformers/pipelines/automatic_speech_recognition.py#L141 with an actual `isinstance` check instead of the dummy string check. This would change the `ALLOWED_TASKS` logic but might be closer to existing code. Other discussion could include the `Mixin`, which wasn't used here. The main reason is that the mixin assumes TF is enabled, but the ASR models do not have a TF alternative right now. Still imported the main tests. Better handling of errors raised for a missing or incorrect `feature_extractor` could be added. ## Before submitting - [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case). - [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section? - [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link to it if that's the case. - [ ] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/master/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/master/docs#writing-source-documentation). - [ ] Did you write any new necessary tests? ## Who can review? Anyone in the community is free to review the PR once the tests have passed. Feel free to tag members/contributors who may be interested in your PR.
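For context, once this is wired up the end-user call should look roughly like the following (the checkpoint and audio path are placeholders):
```python
from transformers import pipeline

# A CTC checkpoint is used here purely as an illustrative example.
asr = pipeline(task="automatic-speech-recognition", model="facebook/wav2vec2-base-960h")

# The pipeline expects raw audio: a filename (decoded via ffmpeg) or a numpy waveform array.
print(asr("path/to/audio.flac"))
```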
04-30-2021 11:04:21
04-30-2021 11:04:21
This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread. Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored.<|||||>Ping <|||||>@patrickvonplaten could you make another review? I think the previous PR that reads `config.architectures` alleviated any issue for this PR. What do you think?<|||||>@sgugger Maybe a small sanity check if you don't mind (the code has changed significantly since the last review)? Should be for the better.
transformers
11,524
closed
[examples, translation/summarization] resize token embeds
# What does this PR do? Resize the token embeddings in the summarization and translation examples. Fixes #11518
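The change boils down to something like the following after the tokenizer and model are loaded in the example scripts (a sketch, not the exact diff):
```python
from transformers import AutoModelForSeq2SeqLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("t5-small")
model = AutoModelForSeq2SeqLM.from_pretrained("t5-small")

# Make sure the embedding matrix covers every id the tokenizer can produce,
# e.g. after new special tokens have been added to the tokenizer.
model.resize_token_embeddings(len(tokenizer))
```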
04-30-2021 10:00:30
04-30-2021 10:00:30
transformers
11,523
closed
Distributed multi-node support for CPU cluster
# 🚀 Feature request ## Motivation The current distributed run supports only multi-GPU/TPU setups. This feature request asks for support for distributed CPU runs using MPI/GLOO or a recently added custom backend, e.g. the Intel oneCCL backend via the [torch-ccl](https://github.com/intel/torch-ccl) plugin. ## Your contribution An example use of the Intel oneCCL backend (using Intel torch-ccl) can be found at https://github.com/ddkalamk/transformers/blob/pcl-v4.0.0/examples/question-answering/run_squad.py#L746 The assumption is that we launch the application using MPI (similar to Horovod) and initialize the distributed backend based on environment variables. Here is another, more comprehensive use case from the Facebook DLRM workload: https://github.com/facebookresearch/dlrm/blob/master/extend_distributed.py#L59
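A minimal sketch of the kind of initialization this request is about, assuming the processes were launched with `mpirun` and that rank/world-size environment variables are available (the variable names below are those set by Open MPI and are only an example; they depend on the launcher):
```python
import os

import torch.distributed as dist

rank = int(os.environ.get("OMPI_COMM_WORLD_RANK", 0))
world_size = int(os.environ.get("OMPI_COMM_WORLD_SIZE", 1))

os.environ.setdefault("MASTER_ADDR", "127.0.0.1")
os.environ.setdefault("MASTER_PORT", "29500")

# "gloo" is PyTorch's built-in CPU backend; a custom backend such as oneCCL could be
# selected instead once its plugin (e.g. torch-ccl) has been imported.
dist.init_process_group(backend="gloo", rank=rank, world_size=world_size)
```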
04-30-2021 09:14:26
04-30-2021 09:14:26
This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread. Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored.
transformers
11,522
closed
Compute probability of target sentences given an input
I need to test the probabilities that the model would assign to certain outputs. Let me give you an example: I have a source sentence X, and several possible target sentences Y1, Y2, Y3, Y4, ... I want to know if I can compute the probability that the model would give to each of the translations Y, given X. Is there a function to compute these values?
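There is no single built-in helper for this, but one common approach is to run a forward pass with each candidate as `labels` and sum the per-token log-probabilities. A rough sketch (t5-small and the sentences are only placeholders; the scores are log-probabilities, which can be exponentiated if actual probabilities are needed):
```python
import torch
from transformers import T5ForConditionalGeneration, T5Tokenizer

tokenizer = T5Tokenizer.from_pretrained("t5-small")
model = T5ForConditionalGeneration.from_pretrained("t5-small").eval()

source = "translate English to German: I like apples."
candidates = ["Ich mag Äpfel.", "Ich esse Brot."]

src = tokenizer([source] * len(candidates), return_tensors="pt", padding=True)
tgt = tokenizer(candidates, return_tensors="pt", padding=True)

with torch.no_grad():
    logits = model(**src, labels=tgt.input_ids).logits

# Gather the log-probability of each target token, ignore padding, and sum per sentence.
log_probs = torch.log_softmax(logits, dim=-1)
token_scores = log_probs.gather(-1, tgt.input_ids.unsqueeze(-1)).squeeze(-1)
sentence_log_probs = (token_scores * tgt.attention_mask).sum(dim=-1)
print(sentence_log_probs)  # higher (less negative) = more likely according to the model
```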
04-30-2021 08:48:15
04-30-2021 08:48:15
Hello, thanks for opening an issue! We try to keep the github issues for bugs/feature requests. Could you ask your question on the [forum](https://discuss.huggingface.co) instead? Thanks!<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread. Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored.
transformers
11,521
closed
How to set up a custom tokenizer for distilbart
Hi, I am currently using distilbart. I pretrained a bart-large model on my own Chinese corpus and simply mapped each character to an id during pre-training. In distillation.py, the path or name of the tokenizer has to be provided. My question is: how can I build a custom tokenizer class that can be used in this code? Assume we have a word dictionary that maps each character to its input id from pre-training. Thank you.
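One possible way to do this (a sketch, assuming the character-to-id dictionary is available as a Python dict whose ids match the ones used during pre-training, and that the special tokens below are the ones your model expects) is to wrap the mapping in a word-level `tokenizers` model and expose it through `PreTrainedTokenizerFast`, which can then be saved and its path passed to the distillation script:
```python
from tokenizers import Tokenizer
from tokenizers.models import WordLevel
from tokenizers.pre_tokenizers import WhitespaceSplit
from transformers import PreTrainedTokenizerFast

# Hypothetical character-to-id mapping from pre-training; the special tokens are assumptions.
char2id = {"<pad>": 0, "<s>": 1, "</s>": 2, "<unk>": 3, "我": 4, "你": 5}

tok = Tokenizer(WordLevel(vocab=char2id, unk_token="<unk>"))
# This assumes the text is pre-split into space-separated characters before tokenization;
# otherwise a different pre-tokenizer (or an explicit splitting step) is needed.
tok.pre_tokenizer = WhitespaceSplit()
tok.save("char_tokenizer.json")

fast_tokenizer = PreTrainedTokenizerFast(
    tokenizer_file="char_tokenizer.json",
    bos_token="<s>",
    eos_token="</s>",
    unk_token="<unk>",
    pad_token="<pad>",
)
fast_tokenizer.save_pretrained("./my_char_tokenizer")  # pass this directory to the script
```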
04-30-2021 08:44:31
04-30-2021 08:44:31
Hello, thanks for opening an issue! We try to keep the github issues for bugs/feature requests. Could you ask your question on the [forum](https://discuss.huggingface.co) instead? Thanks!
transformers
11,520
closed
[Master] Make style
# What does this PR do? Fix copies on master ## Before submitting - [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case). - [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section? - [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link to it if that's the case. - [ ] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/master/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/master/docs#writing-source-documentation). - [ ] Did you write any new necessary tests? ## Who can review? Anyone in the community is free to review the PR once the tests have passed. Feel free to tag members/contributors who may be interested in your PR.
04-30-2021 07:54:42
04-30-2021 07:54:42