| Column     | Type          | Values / lengths |
|------------|---------------|------------------|
| repo       | stringclasses | 1 value          |
| number     | int64         | 1 to 25.3k       |
| state      | stringclasses | 2 values         |
| title      | stringlengths | 1 to 487         |
| body       | stringlengths | 0 to 234k        |
| created_at | stringlengths | 19 to 19         |
| closed_at  | stringlengths | 19 to 19         |
| comments   | stringlengths | 0 to 293k        |
transformers
15,143
closed
[Fix doc example] - OpenAIGPTDoubleHeadsModel
# What does this PR do? This line fails: https://github.com/huggingface/transformers/blob/7b83feb50a8965e9d8f13b6c4042239710b97c76/src/transformers/models/openai/modeling_openai.py#L690 It should be changed from `lm_logits` to `logits` (see `OpenAIGPTDoubleHeadsModelOutput`). ## Who can review @patrickvonplaten, @LysandreJik
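For context, the doc example in question looks roughly like the snippet below (reproduced from memory, so treat the details as approximate); the last two lines show the corrected attribute access on `OpenAIGPTDoubleHeadsModelOutput`:

```python
import torch
from transformers import OpenAIGPTDoubleHeadsModel, OpenAIGPTTokenizer

tokenizer = OpenAIGPTTokenizer.from_pretrained("openai-gpt")
model = OpenAIGPTDoubleHeadsModel.from_pretrained("openai-gpt")
tokenizer.add_special_tokens({"cls_token": "[CLS]"})  # the double-heads example uses an extra [CLS] token
model.resize_token_embeddings(len(tokenizer))

choices = ["Hello, my dog is cute [CLS]", "Hello, my cat is cute [CLS]"]
input_ids = torch.tensor([tokenizer.encode(s) for s in choices]).unsqueeze(0)  # batch size 1, 2 choices
mc_token_ids = torch.tensor([input_ids.size(-1) - 1] * 2).unsqueeze(0)         # position of [CLS] in each choice

outputs = model(input_ids, mc_token_ids=mc_token_ids)
lm_logits = outputs.logits      # the example previously used `outputs.lm_logits`, which does not exist
mc_logits = outputs.mc_logits
```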
01-13-2022 17:24:48
01-13-2022 17:24:48
transformers
15,142
closed
Update model_sharing.mdx
Fix typo # What does this PR do? Fixes # (issue) ## Before submitting - [x] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case). - [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section? - [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link to it if that's the case. - [ ] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/master/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/master/docs#writing-source-documentation). - [ ] Did you write any new necessary tests? ## Who can review? Anyone in the community is free to review the PR once the tests have passed. Feel free to tag members/contributors who may be interested in your PR.
01-13-2022 17:20:50
01-13-2022 17:20:50
@sgugger simple typo fix
transformers
15,141
closed
Check the repo consistency in model templates test
# What does this PR do? Since the quality job was split in two, the repo consistency was no longer checked in the template tests. This PR fixes that. Some failures coming from the template docs are then caught and fixed.
01-13-2022 16:11:24
01-13-2022 16:11:24
transformers
15,140
closed
AutoTokenizer | TypeError: an integer is required (got type NoneType)
Wasn't sure if this was for `Transformers` or `Tokenizers` libraries. Based on [SO post](https://stackoverflow.com/q/70699247/17840900). Goal: Amend this [Notebook][1] to work with **distilbert-base-uncased** model Error occurs in **Section 1.3**. Kernel: `conda_pytorch_p36`. I did Restart & Run All, and refreshed file view in working directory. --- Section 1.3: ```python # define the tokenizer tokenizer = AutoTokenizer.from_pretrained( configs.output_dir, do_lower_case=configs.do_lower_case) ``` Traceback: ``` Evaluating PyTorch full precision accuracy and performance: /home/ec2-user/anaconda3/envs/pytorch_p36/lib/python3.6/site-packages/transformers/data/processors/glue.py:67: FutureWarning: This function will be removed from the library soon, preprocessing should be handled with the 🤗 Datasets library. You can have a look at this example script for pointers: https://github.com/huggingface/transformers/blob/master/examples/pytorch/text-classification/run_glue.py warnings.warn(DEPRECATION_WARNING.format("function"), FutureWarning) --------------------------------------------------------------------------- TypeError Traceback (most recent call last) <ipython-input-31-1f864e3046eb> in <module> 144 # Evaluate the original FP32 BERT model 145 print('Evaluating PyTorch full precision accuracy and performance:') --> 146 time_model_evaluation(model, configs, tokenizer) 147 148 # Evaluate the INT8 BERT model after the dynamic quantization <ipython-input-31-1f864e3046eb> in time_model_evaluation(model, configs, tokenizer) 132 def time_model_evaluation(model, configs, tokenizer): 133 eval_start_time = time.time() --> 134 result = evaluate(configs, model, tokenizer, prefix="") 135 eval_end_time = time.time() 136 eval_duration_time = eval_end_time - eval_start_time <ipython-input-31-1f864e3046eb> in evaluate(args, model, tokenizer, prefix) 22 results = {} 23 for eval_task, eval_output_dir in zip(eval_task_names, eval_outputs_dirs): ---> 24 eval_dataset = load_and_cache_examples(args, eval_task, tokenizer, evaluate=True) 25 26 if not os.path.exists(eval_output_dir) and args.local_rank in [-1, 0]: <ipython-input-31-1f864e3046eb> in load_and_cache_examples(args, task, tokenizer, evaluate) 121 all_input_ids = torch.tensor([f.input_ids for f in features], dtype=torch.long) 122 all_attention_mask = torch.tensor([f.attention_mask for f in features], dtype=torch.long) --> 123 all_token_type_ids = torch.tensor([f.token_type_ids for f in features], dtype=torch.long) 124 if output_mode == "classification": 125 all_labels = torch.tensor([f.label for f in features], dtype=torch.long) TypeError: an integer is required (got type NoneType) ``` Please let me know if there's anything else I can add to post. [1]: https://github.com/microsoft/onnxruntime-inference-examples/blob/main/quantization/notebooks/bert/Bert-GLUE_OnnxRuntime_quantization.ipynb
01-13-2022 15:40:30
01-13-2022 15:40:30
Hey @danielbellhv, you're switching from BERT, which has the concept of token type IDs, to DistilBERT, which doesn't. In most cases that isn't important as our API should be able to handle that behind the scenes. In this notebook however, there's a very explicit conversion of all token IDs here: ``` all_token_type_ids = torch.tensor([f.token_type_ids for f in features], dtype=torch.long) ``` This prevents the switch from models which do leverage token type IDs to others which do not. I recommend you take a look at the [following example](https://github.com/huggingface/transformers/blob/master/examples/pytorch/text-classification/run_glue_no_trainer.py) to see how we use our tools to be agnostic to all models. In your current situation here, I would remove all explicit mentions to `token_type_ids`. DistilBERT wasn't trained with them, so it's fine to remove the concept of this variable when switching to this model.<|||||>Thank you, @LysandreJik. I'll try implementing suggested changes in a new copy of the Notebook. --- Note for anyone else. This isn't as simple as removing the aforementioned line: > ``` > all_token_type_ids = torch.tensor([f.token_type_ids for f in features], dtype=torch.long) > ``` That yields: ``` Evaluating PyTorch full precision accuracy and performance: Evaluating PyTorch full precision accuracy and performance: --------------------------------------------------------------------------- UnboundLocalError Traceback (most recent call last) <ipython-input-19-2d6264ca04e9> in <module> 146 # Evaluate the original FP32 BERT model 147 print('Evaluating PyTorch full precision accuracy and performance:') --> 148 time_model_evaluation(model, configs, tokenizer) 149 150 # Evaluate the INT8 BERT model after the dynamic quantization <ipython-input-19-2d6264ca04e9> in time_model_evaluation(model, configs, tokenizer) 134 def time_model_evaluation(model, configs, tokenizer): 135 eval_start_time = time.time() --> 136 result = evaluate(configs, model, tokenizer, prefix="") 137 eval_end_time = time.time() 138 eval_duration_time = eval_end_time - eval_start_time <ipython-input-19-2d6264ca04e9> in evaluate(args, model, tokenizer, prefix) 22 results = {} 23 for eval_task, eval_output_dir in zip(eval_task_names, eval_outputs_dirs): ---> 24 eval_dataset = load_and_cache_examples(args, eval_task, tokenizer, evaluate=True) 25 26 if not os.path.exists(eval_output_dir) and args.local_rank in [-1, 0]: <ipython-input-19-2d6264ca04e9> in load_and_cache_examples(args, task, tokenizer, evaluate) 129 all_labels = torch.tensor([f.label for f in features], dtype=torch.float) 130 --> 131 dataset = TensorDataset(all_input_ids, all_attention_mask, all_token_type_ids, all_labels) 132 return dataset 133 UnboundLocalError: local variable 'all_token_type_ids' referenced before assignment Evaluating PyTorch full precision accuracy and performance: --------------------------------------------------------------------------- UnboundLocalError Traceback (most recent call last) <ipython-input-19-2d6264ca04e9> in <module> 146 # Evaluate the original FP32 BERT model 147 print('Evaluating PyTorch full precision accuracy and performance:') --> 148 time_model_evaluation(model, configs, tokenizer) 149 150 # Evaluate the INT8 BERT model after the dynamic quantization <ipython-input-19-2d6264ca04e9> in time_model_evaluation(model, configs, tokenizer) 134 def time_model_evaluation(model, configs, tokenizer): 135 eval_start_time = time.time() --> 136 result = evaluate(configs, model, tokenizer, prefix="") 137 eval_end_time = 
time.time() 138 eval_duration_time = eval_end_time - eval_start_time <ipython-input-19-2d6264ca04e9> in evaluate(args, model, tokenizer, prefix) 22 results = {} 23 for eval_task, eval_output_dir in zip(eval_task_names, eval_outputs_dirs): ---> 24 eval_dataset = load_and_cache_examples(args, eval_task, tokenizer, evaluate=True) 25 26 if not os.path.exists(eval_output_dir) and args.local_rank in [-1, 0]: <ipython-input-19-2d6264ca04e9> in load_and_cache_examples(args, task, tokenizer, evaluate) 129 all_labels = torch.tensor([f.label for f in features], dtype=torch.float) 130 --> 131 dataset = TensorDataset(all_input_ids, all_attention_mask, all_token_type_ids, all_labels) 132 return dataset 133 UnboundLocalError: local variable 'all_token_type_ids' referenced before assignment ``` <|||||>There's about 4-5 lines which should need to be removed AFAICT from the notebook, searching for `token_type_ids` should show all of them
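Putting the advice above together, the tail of the notebook's `load_and_cache_examples` could end up looking like the sketch below once every mention of `token_type_ids` is removed (the helper name is hypothetical and the variable names are taken from the traceback above); the evaluation loop that unpacks each batch needs the matching change:

```python
import torch
from torch.utils.data import TensorDataset

def build_dataset(features, output_mode):
    # Same tensors as in the notebook, minus the token_type_ids handling
    # that DistilBERT does not provide.
    all_input_ids = torch.tensor([f.input_ids for f in features], dtype=torch.long)
    all_attention_mask = torch.tensor([f.attention_mask for f in features], dtype=torch.long)
    if output_mode == "classification":
        all_labels = torch.tensor([f.label for f in features], dtype=torch.long)
    else:
        all_labels = torch.tensor([f.label for f in features], dtype=torch.float)
    return TensorDataset(all_input_ids, all_attention_mask, all_labels)
```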
transformers
15,139
closed
Error when running TFT5ForConditionalGeneration with tensorflow-cpu==2.8.0-rc0
### Environment info - `transformers` version: 4.15.0 - Platform: Linux-5.4.0-88-generic-x86_64-with-glibc2.29 - Python version: 3.8.10 - PyTorch version (GPU?): not installed (NA) - Tensorflow version (GPU?): 2.8.0-rc0 (False) - Flax version (CPU?/GPU?/TPU?): not installed (NA) - Jax version: not installed - JaxLib version: not installed - Using GPU in script?: No - Using distributed or parallel set-up in script?: No ### Who can help - T5: @patrickvonplaten or - TensorFlow: @Rocketknight1 ## Information Model I am using T5: The problem arises when using: * [ ] the official example scripts: (give details below) * [x] my own modified scripts: (give details below) ## To reproduce Steps to reproduce the behaviour: 1. upgrade tensorflow to 2.8.0 > pip install -U tensorflow-cpu==2.8.0rc0 2. follow example, like in https://github.com/snapthat/TF-T5-text-to-text/blob/master/snapthatT5/notebooks/TF-T5-%20Training.ipynb 3. error raised during fit call: ``` ValueError: Found unexpected losses or metrics that do not correspond to any Model output: dict_keys(['loss']). Valid mode output names: ['output_1']. Received struct is: {'loss': <function dummy_loss at 0x7f7a2d430e50>}. ``` ## Expected behavior No error
01-13-2022 15:28:09
01-13-2022 15:28:09
I'm seeing the same issue on official example: examples/tensorflow/question-answering<|||||>Can you confirm if this issue still occurs when using TF 2.6 or 2.7?<|||||>In the case of 2.7.0, the issue does not reproduce. The following steps allow for an easy reproduction using transformers/examples/tensorflow/question-answering: ``` python -m virtualenv -p python3 venv source venv/bin/activate pip install tensorflow==2.8.0-rc0 git clone https://github.com/huggingface/transformers cd transformers pip install . cd examples/tensorflow/question-answering pip install -r requirements.txt pip install torch python run_qa.py --model_name_or_path distilbert-base-cased --output_dir output --dataset_name squad --do_train --do_eval ``` <|||||>Hmm, okay, this seems like a compatibility issue with TF 2.8 that we'll have to resolve soon. I'd guess it's likely caused by us overriding `train_step` in our code, while TF 2.8 changes the default `train_step` to handle the new `Model.compute_losses` and `Model.compute_metrics`. Will discuss with the team and see if we can figure out a solution before the final 2.8 release. Thank you for the bug report! <|||||>@tdomagal @atom00 We have identified the issue - the Keras method `compute_loss` which was newly added in 2.8 is clobbering the `compute_loss` method that all HuggingFace models already had. We're going to rename our method and see if that resolves things. Thank you again for this report - it alerted us to an important issue before 2.8 went live!
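Until the renamed method ships in a release, a workaround consistent with the report above (the issue does not reproduce on 2.7) is to pin TensorFlow below 2.8:

```bash
pip install "tensorflow-cpu>=2.7,<2.8"  # or "tensorflow>=2.7,<2.8" for the GPU build
```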
transformers
15,138
closed
[TBD] discrepancy regarding the `tokenize` method behavior - should the unknown token be included or not
It seems that not all tokenizers have the same behavior regarding the `tokenize` method: in particular, some tokenizers will show the unknown token if a token does not belong to the vocabulary, while other tokenizers will show the snippet of the initial text corresponding to that unknown token. I've created a [google colab](https://colab.research.google.com/drive/1lptT0LXsO1B9QCrOL439SHxdeFJMFe8V?usp=sharing) showing the output of the `tokenizer.tokenize(text)` method and `tokenizer.convert_ids_to_tokens(tokenizer.encode(text, add_special_tokens=False))` on several tokenizers. I have the impression that this difference is related to the tokenization algorithm used (and therefore probably to the behavior in the `tokenizers` library). This raises several questions: - for what reasons do users use this `tokenize` method? - is it upsetting to have multiple behaviors? - what is the right behavior? - if we choose to standardize this behavior, will it cause BC problems?
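A minimal way to observe the discrepancy described here (the checkpoints and the sample string below are illustrative assumptions; the exact outputs depend on each vocabulary):

```python
from transformers import AutoTokenizer

text = "Alligators ᛭ swim"  # contains a character unlikely to be in most vocabularies

for checkpoint in ("bert-base-uncased", "xlnet-base-cased"):
    tokenizer = AutoTokenizer.from_pretrained(checkpoint)
    print(checkpoint)
    print("  tokenize:             ", tokenizer.tokenize(text))
    print("  convert_ids_to_tokens:", tokenizer.convert_ids_to_tokens(
        tokenizer.encode(text, add_special_tokens=False)))
```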
01-13-2022 15:20:11
01-13-2022 15:20:11
Hi @SaulLu, thank you for opening an issue! IMO the `tokenize` methods should behave somewhat consistently across tokenizers - but from what I understood this may be impossible given the different tokenization methods. My opinion on the matter is: - We should try to have all `tokenize` methods conform - If some cannot, then clearly document them - If we do some changes, these will have to remain backward-compatible until v5, but we can have a v5 major version that *slightly* changes the behavior if we document each of them clearly. <|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread. Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored.
transformers
15,137
closed
Add test suite for flaubert tokenizer
This is tragic: the flaubert tokenizer has no tests. :crying_cat_face: Details to accompany this issue are coming soon, including a template for adding these missing tests. This will probably be a good first issue.
01-13-2022 15:10:15
01-13-2022 15:10:15
This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread. Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored.<|||||>included into a more global issue: #16627
transformers
15,136
closed
AutoTokenizer | ValueError: Couldn't instantiate the backend tokenizer from one of:
Wasn't sure if this was for `Transformers` or `Tokenizers` libraries. Based on [SO post](https://stackoverflow.com/q/70698407/17840900). Goal: Amend this [Notebook][1] to work with **albert-base-v2** model. Error occurs in **Section 1.3**. Kernel: `conda_pytorch_p36`. I did Restart & Run All, and refreshed file view in working directory. --- There are 3 listed ways this error can be caused. I'm not sure which my case falls under. Section 1.3: ```python # define the tokenizer tokenizer = AutoTokenizer.from_pretrained( configs.output_dir, do_lower_case=configs.do_lower_case) ``` Traceback: ``` --------------------------------------------------------------------------- ValueError Traceback (most recent call last) <ipython-input-25-1f864e3046eb> in <module> 140 # define the tokenizer 141 tokenizer = AutoTokenizer.from_pretrained( --> 142 configs.output_dir, do_lower_case=configs.do_lower_case) 143 144 # Evaluate the original FP32 BERT model ~/anaconda3/envs/pytorch_p36/lib/python3.6/site-packages/transformers/models/auto/tokenization_auto.py in from_pretrained(cls, pretrained_model_name_or_path, *inputs, **kwargs) 548 tokenizer_class_py, tokenizer_class_fast = TOKENIZER_MAPPING[type(config)] 549 if tokenizer_class_fast and (use_fast or tokenizer_class_py is None): --> 550 return tokenizer_class_fast.from_pretrained(pretrained_model_name_or_path, *inputs, **kwargs) 551 else: 552 if tokenizer_class_py is not None: ~/anaconda3/envs/pytorch_p36/lib/python3.6/site-packages/transformers/tokenization_utils_base.py in from_pretrained(cls, pretrained_model_name_or_path, *init_inputs, **kwargs) 1752 use_auth_token=use_auth_token, 1753 cache_dir=cache_dir, -> 1754 **kwargs, 1755 ) 1756 ~/anaconda3/envs/pytorch_p36/lib/python3.6/site-packages/transformers/tokenization_utils_base.py in _from_pretrained(cls, resolved_vocab_files, pretrained_model_name_or_path, init_configuration, use_auth_token, cache_dir, *init_inputs, **kwargs) 1880 # Instantiate tokenizer. 1881 try: -> 1882 tokenizer = cls(*init_inputs, **init_kwargs) 1883 except OSError: 1884 raise OSError( ~/anaconda3/envs/pytorch_p36/lib/python3.6/site-packages/transformers/models/albert/tokenization_albert_fast.py in __init__(self, vocab_file, tokenizer_file, do_lower_case, remove_space, keep_accents, bos_token, eos_token, unk_token, sep_token, pad_token, cls_token, mask_token, **kwargs) 159 cls_token=cls_token, 160 mask_token=mask_token, --> 161 **kwargs, 162 ) 163 ~/anaconda3/envs/pytorch_p36/lib/python3.6/site-packages/transformers/tokenization_utils_fast.py in __init__(self, *args, **kwargs) 116 else: 117 raise ValueError( --> 118 "Couldn't instantiate the backend tokenizer from one of: \n" 119 "(1) a `tokenizers` library serialization file, \n" 120 "(2) a slow tokenizer instance to convert or \n" ValueError: Couldn't instantiate the backend tokenizer from one of: (1) a `tokenizers` library serialization file, (2) a slow tokenizer instance to convert or (3) an equivalent slow tokenizer class to instantiate and convert. You need to have sentencepiece installed to convert a slow tokenizer to a fast one. ``` Please let me know if there's anything else I can add to post. [1]: https://github.com/microsoft/onnxruntime-inference-examples/blob/main/quantization/notebooks/bert/Bert-GLUE_OnnxRuntime_quantization.ipynb
01-13-2022 14:42:33
01-13-2022 14:42:33
Do you have `sentencepiece` installed? If you do, have you restarted the notebook kernel after installing it?<|||||>Thanks for getting back, @LysandreJik I ran `pip install sentincepiece`. Progress has been made, that error is not there anymore. However, in the same code line, I get an error with `sentencepiece`. Wrapping `str()` around both parameters yields the same Traceback. ``` --------------------------------------------------------------------------- TypeError Traceback (most recent call last) <ipython-input-12-1f864e3046eb> in <module> 140 # define the tokenizer 141 tokenizer = AutoTokenizer.from_pretrained( --> 142 configs.output_dir, do_lower_case=configs.do_lower_case) 143 144 # Evaluate the original FP32 BERT model ~/anaconda3/envs/pytorch_p36/lib/python3.6/site-packages/transformers/models/auto/tokenization_auto.py in from_pretrained(cls, pretrained_model_name_or_path, *inputs, **kwargs) 548 tokenizer_class_py, tokenizer_class_fast = TOKENIZER_MAPPING[type(config)] 549 if tokenizer_class_fast and (use_fast or tokenizer_class_py is None): --> 550 return tokenizer_class_fast.from_pretrained(pretrained_model_name_or_path, *inputs, **kwargs) 551 else: 552 if tokenizer_class_py is not None: ~/anaconda3/envs/pytorch_p36/lib/python3.6/site-packages/transformers/tokenization_utils_base.py in from_pretrained(cls, pretrained_model_name_or_path, *init_inputs, **kwargs) 1752 use_auth_token=use_auth_token, 1753 cache_dir=cache_dir, -> 1754 **kwargs, 1755 ) 1756 ~/anaconda3/envs/pytorch_p36/lib/python3.6/site-packages/transformers/tokenization_utils_base.py in _from_pretrained(cls, resolved_vocab_files, pretrained_model_name_or_path, init_configuration, use_auth_token, cache_dir, *init_inputs, **kwargs) 1776 copy.deepcopy(init_configuration), 1777 *init_inputs, -> 1778 **(copy.deepcopy(kwargs)), 1779 ) 1780 else: ~/anaconda3/envs/pytorch_p36/lib/python3.6/site-packages/transformers/tokenization_utils_base.py in _from_pretrained(cls, resolved_vocab_files, pretrained_model_name_or_path, init_configuration, use_auth_token, cache_dir, *init_inputs, **kwargs) 1880 # Instantiate tokenizer. 
1881 try: -> 1882 tokenizer = cls(*init_inputs, **init_kwargs) 1883 except OSError: 1884 raise OSError( ~/anaconda3/envs/pytorch_p36/lib/python3.6/site-packages/transformers/models/albert/tokenization_albert.py in __init__(self, vocab_file, do_lower_case, remove_space, keep_accents, bos_token, eos_token, unk_token, sep_token, pad_token, cls_token, mask_token, sp_model_kwargs, **kwargs) 179 180 self.sp_model = spm.SentencePieceProcessor(**self.sp_model_kwargs) --> 181 self.sp_model.Load(vocab_file) 182 183 @property ~/anaconda3/envs/pytorch_p36/lib/python3.6/site-packages/sentencepiece/__init__.py in Load(self, model_file, model_proto) 365 if model_proto: 366 return self.LoadFromSerializedProto(model_proto) --> 367 return self.LoadFromFile(model_file) 368 369 ~/anaconda3/envs/pytorch_p36/lib/python3.6/site-packages/sentencepiece/__init__.py in LoadFromFile(self, arg) 169 170 def LoadFromFile(self, arg): --> 171 return _sentencepiece.SentencePieceProcessor_LoadFromFile(self, arg) 172 173 def DecodeIdsWithCheck(self, ids): TypeError: not a string --------------------------------------------------------------------------- TypeError Traceback (most recent call last) <ipython-input-12-1f864e3046eb> in <module> 140 # define the tokenizer 141 tokenizer = AutoTokenizer.from_pretrained( --> 142 configs.output_dir, do_lower_case=configs.do_lower_case) 143 144 # Evaluate the original FP32 BERT model ~/anaconda3/envs/pytorch_p36/lib/python3.6/site-packages/transformers/models/auto/tokenization_auto.py in from_pretrained(cls, pretrained_model_name_or_path, *inputs, **kwargs) 548 tokenizer_class_py, tokenizer_class_fast = TOKENIZER_MAPPING[type(config)] 549 if tokenizer_class_fast and (use_fast or tokenizer_class_py is None): --> 550 return tokenizer_class_fast.from_pretrained(pretrained_model_name_or_path, *inputs, **kwargs) 551 else: 552 if tokenizer_class_py is not None: ~/anaconda3/envs/pytorch_p36/lib/python3.6/site-packages/transformers/tokenization_utils_base.py in from_pretrained(cls, pretrained_model_name_or_path, *init_inputs, **kwargs) 1752 use_auth_token=use_auth_token, 1753 cache_dir=cache_dir, -> 1754 **kwargs, 1755 ) 1756 ~/anaconda3/envs/pytorch_p36/lib/python3.6/site-packages/transformers/tokenization_utils_base.py in _from_pretrained(cls, resolved_vocab_files, pretrained_model_name_or_path, init_configuration, use_auth_token, cache_dir, *init_inputs, **kwargs) 1776 copy.deepcopy(init_configuration), 1777 *init_inputs, -> 1778 **(copy.deepcopy(kwargs)), 1779 ) 1780 else: ~/anaconda3/envs/pytorch_p36/lib/python3.6/site-packages/transformers/tokenization_utils_base.py in _from_pretrained(cls, resolved_vocab_files, pretrained_model_name_or_path, init_configuration, use_auth_token, cache_dir, *init_inputs, **kwargs) 1880 # Instantiate tokenizer. 
1881 try: -> 1882 tokenizer = cls(*init_inputs, **init_kwargs) 1883 except OSError: 1884 raise OSError( ~/anaconda3/envs/pytorch_p36/lib/python3.6/site-packages/transformers/models/albert/tokenization_albert.py in __init__(self, vocab_file, do_lower_case, remove_space, keep_accents, bos_token, eos_token, unk_token, sep_token, pad_token, cls_token, mask_token, sp_model_kwargs, **kwargs) 179 180 self.sp_model = spm.SentencePieceProcessor(**self.sp_model_kwargs) --> 181 self.sp_model.Load(vocab_file) 182 183 @property ~/anaconda3/envs/pytorch_p36/lib/python3.6/site-packages/sentencepiece/__init__.py in Load(self, model_file, model_proto) 365 if model_proto: 366 return self.LoadFromSerializedProto(model_proto) --> 367 return self.LoadFromFile(model_file) 368 369 ~/anaconda3/envs/pytorch_p36/lib/python3.6/site-packages/sentencepiece/__init__.py in LoadFromFile(self, arg) 169 170 def LoadFromFile(self, arg): --> 171 return _sentencepiece.SentencePieceProcessor_LoadFromFile(self, arg) 172 173 def DecodeIdsWithCheck(self, ids): TypeError: not a string ```<|||||>[Solution](https://github.com/huggingface/tokenizers/issues/878)
transformers
15,135
closed
Error when running a wandb sweeps on run_summarization.py
## Environment info - `transformers` version: 4.16.0.dev0 - Platform: Linux-5.11.0-37-generic-x86_64-with-Ubuntu-18.04-bionic - Python version: 3.6.9 - PyTorch version (GPU?): 1.8.0 (True) - Tensorflow version (GPU?): not installed (NA) - Flax version (CPU?/GPU?/TPU?): not installed (NA) - Jax version: not installed - JaxLib version: not installed - Using GPU in script?: Yes - Using distributed or parallel set-up in script?: Yes ### Who can help Examples: - maintained examples (not research project or legacy): @sgugger, @patil-suraj ## Information Model I am using T5-Base or Pegasus The problem arises when using: * [x] the official example scripts: (give details below) * [x] my own modified scripts: (give details below) The tasks I am working on is: * [ ] an official GLUE/SQUaD task: (give the name) * [x] my own task or dataset: (give details below) ## To reproduce Steps to reproduce the behavior: 1. Replace the if __name__ == "__main__" function in the run_summarization.py example script with: ``` if __name__ == "__main__": # main() wandb.login() config_defaults = { 'num_train_epochs': 3, 'learning_rate': 0.00003, 'weight_decay': 0.1 } wandb.init(project="kaizan-sum", entity="kmfoda_kaizan", config=config_defaults) sweep_config = { "name": "lr-epoch-weight-decay-sweep-batch-", "method": "bayes", "metric": {"name": "bert_rogue", "goal": "maximize"}, "parameters": { "weight_decay": {"min": 0.0, "max": 1.0}, "num_train_epochs": {"min": 1, "max": 40}, "learning_rate": {"min": 0.0, "max": 4e-4}, }, "early_terminate": {"type": "hyperband", "min_iter": 6,}, } sweep_id = wandb.sweep(sweep_config) wandb.agent(sweep_id, function=main) ``` 2. Run the following: ``` python3 transformers/examples/pytorch/summarization/run_summarization.py \ --model_name_or_path t5-base \ --per_device_train_batch_size 2 \ --output_dir output_dir \ --overwrite_output_dir \ --fp16 \ --do_train \ --predict_with_generate \ --report_to wandb \ --load_best_model_at_end True \ --greater_is_better True \ --evaluation_strategy steps \ --save_steps 1200 \ --eval_steps 50 \ --logging_steps 400 \ --max_train_samples 100 \ --max_eval_samples 10 \ --dataset_name samsum ``` 3. 
After the 1st run finished I get the following error: ``` wandb: ERROR Problem finishing run Traceback (most recent call last): File "/usr/local/lib/python3.6/dist-packages/wandb/sdk/wandb_run.py", line 1788, in _atexit_cleanup self._on_finish() File "/usr/local/lib/python3.6/dist-packages/wandb/sdk/wandb_run.py", line 1936, in _on_finish self._console_stop() # TODO: there's a race here with jupyter console logging File "/usr/local/lib/python3.6/dist-packages/wandb/sdk/wandb_run.py", line 1828, in _console_stop self._restore() File "/usr/local/lib/python3.6/dist-packages/wandb/sdk/wandb_run.py", line 1758, in _restore self._err_redir.uninstall() File "/usr/local/lib/python3.6/dist-packages/wandb/sdk/lib/redirect.py", line 754, in uninstall _WSCH.remove_fd(self._pipe_read_fd) File "/usr/local/lib/python3.6/dist-packages/wandb/sdk/lib/redirect.py", line 667, in remove_fd self._unregister() File "/usr/local/lib/python3.6/dist-packages/wandb/sdk/lib/redirect.py", line 655, in _unregister signal.signal(signal.SIGWINCH, self._old_handler) File "/usr/lib/python3.6/signal.py", line 47, in signal handler = _signal.signal(_enum_to_int(signalnum), _enum_to_int(handler)) ValueError: signal only works in main thread /usr/lib/python3.6/multiprocessing/semaphore_tracker.py:143: UserWarning: semaphore_tracker: There appear to be 6 leaked semaphores to clean up at shutdown len(cache))]([url](url)) ``` ## Expected behavior Wandb sweeps should save the run and kickstart a new run without this Value Error
01-13-2022 12:35:06
01-13-2022 12:35:06
The stack trace only shows errors coming from WandB, not the example, so I'm not sure what you want us to fix.<|||||>Hi @sgugger, you're right it doesn't. The reason I raised it with HF rather than WandB is that I can run a sweep on a function with a trainer in it just fine but whenever I try and run a sweep on the main function of run_summarizer.py I get this error.<|||||>@KMFODA I'm from W&B, let me check if this way of calling Sweeps is supported, possibly not. Have you tried just calling sweeps from the `wandb` command line? - use the unmodified run_summarization.py script - Put the location of the script and the sweep config in a yaml file - set up the sweep config via the wandb cli - Kick off the training via the wandb cli The [sweeps quickstart here](https://docs.wandb.ai/guides/sweeps/quickstart) should get you going hopefully.<|||||>Amazing thanks @morganmcg1 calling sweeps from the `wandb` command line fixes this!
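For reference, the CLI-driven flow suggested above looks roughly like this (the yaml filename and the entity/project placeholders are assumptions; the yaml file points at the unmodified `run_summarization.py` and lists the parameters to sweep):

```bash
wandb login
wandb sweep sweep.yaml                      # registers the sweep and prints its ID
wandb agent <entity>/<project>/<sweep_id>   # launches runs of run_summarization.py with sampled hyperparameters
```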
transformers
15,134
closed
doc-builder -> doc-build
Changes the repository to which the documentation builds will be pushed.
01-13-2022 10:24:39
01-13-2022 10:24:39
Just want to make sure [this feature](https://huggingface.slack.com/archives/C02GLJ5S0E9/p1639171159299300) is unrelated to this PR<|||||>> Just want to make sure this feature is unrelated to this PR I have not touched the notebook part of the PR, the notebooks will still get automatically updated AFAICT
transformers
15,133
closed
Add from_encoder_decoder_pretrained to some dummy obj
# What does this PR do? Fix https://discuss.huggingface.co/t/flaxvisionencoderdecodermodel-decoder-start-token-id/13635 ## Who can review? @NielsRogge
01-13-2022 09:42:58
01-13-2022 09:42:58
`check_dummies.py` needs to be adjusted to make it work https://github.com/huggingface/transformers/blob/9a94bb8e218033cffa1ef380010b528410ba3ca7/utils/check_dummies.py#L36-L52<|||||>Info: these dummy files should not be edited manually ``` # This file is autogenerated by the command `make fix-copies`, do not edit. ```
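For context, once `check_dummies.py` knows about the extra classmethod, the regenerated entry in `utils/dummy_flax_objects.py` would look roughly like the sketch below (written from memory; the exact import path and surrounding classes are assumptions, and the file itself should only ever be regenerated with `make fix-copies`):

```python
from ..file_utils import requires_backends


class FlaxVisionEncoderDecoderModel:
    def __init__(self, *args, **kwargs):
        requires_backends(self, ["flax"])

    @classmethod
    def from_pretrained(cls, *args, **kwargs):
        requires_backends(cls, ["flax"])

    @classmethod
    def from_encoder_decoder_pretrained(cls, *args, **kwargs):
        requires_backends(cls, ["flax"])
```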
transformers
15,132
closed
Faster model templates
Changes the container for the model templates to significantly speed up the environment setup.
01-13-2022 09:18:38
01-13-2022 09:18:38
This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread. Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored.
transformers
15,131
closed
Electra model class support loading weights from other types of BERTs
# 🚀 Feature request ## Motivation The Electra model class should support loading weights from other types of BERT models (RoBERTa, BERT, etc.). Most of them share the same backbone, so is it possible to support this feature? Thanks. ## Your contribution
01-13-2022 08:27:02
01-13-2022 08:27:02
Hello, thanks for opening an issue and feature request! Could you enlighten me as to why you'd like that feature? I haven't thought deeply about it, but you're right that these models do share a very similar architecture! As we try to have our tools be as model agnostic as possible, I'm wondering what purpose loading BERT's weights in ELECTRA would serve, vs. simply loading BERT's weights in BERT?<|||||>Hi, my purpose is that I need a "post-train" phase between "pre-train" and "fine-tune" for downstream domain adaptation. Due to limited pre-training resources, I have to adapt a pre-trained model with the faster ELECTRA training method to transfer an excellent pre-trained model to the downstream domain, so that I can fine-tune it on different tasks or projects. <|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread. Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored.
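For anyone with the same need in the meantime, a rough, unofficial way to warm-start an ELECTRA encoder from a BERT checkpoint is to copy the parameters whose names and shapes line up, since the encoders share most of their module names (the config values below are assumptions chosen to match `bert-base-uncased`):

```python
from transformers import BertModel, ElectraConfig, ElectraModel

bert = BertModel.from_pretrained("bert-base-uncased")
config = ElectraConfig(
    vocab_size=30522, embedding_size=768, hidden_size=768,
    num_hidden_layers=12, num_attention_heads=12, intermediate_size=3072,
)
electra = ElectraModel(config)

bert_state = bert.state_dict()
electra_state = electra.state_dict()
copied = []
for name, tensor in electra_state.items():
    # Copy every parameter whose name and shape match the BERT checkpoint.
    if name in bert_state and bert_state[name].shape == tensor.shape:
        electra_state[name] = bert_state[name]
        copied.append(name)
electra.load_state_dict(electra_state)
print(f"copied {len(copied)}/{len(electra_state)} tensors from BERT")
```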
transformers
15,130
closed
run_summarization.py download datasets error
## Environment info - `transformers` version: 4.15.0 - Platform: Linux-5.4.144+-x86_64-with-Ubuntu-18.04-bionic - Python version: 3.7.12 - PyTorch version (GPU?): 1.10.0+cu111 (False) - Tensorflow version (GPU?): 2.7.0 (False) - Flax version (CPU?/GPU?/TPU?): not installed (NA) - Jax version: not installed - JaxLib version: not installed - Using GPU in script?: Y - Using distributed or parallel set-up in script?: N ### Who can help @patrickvonplaten ## Information Model I am using (Bert, XLNet ...): T5-small. The problem arises when using: * [x] the official example script: The summarization example [run_summarization_no_trainer.py](https://github.com/huggingface/transformers/blob/master/examples/pytorch/summarization/run_summarization_no_trainer.py) throws an error when loading datasets. So does [run_summarization.py](https://github.com/huggingface/transformers/blob/master/examples/pytorch/summarization/run_summarization.py). The tasks I am working on is: * [x] an official summarization task: cnn_dailymail ## To reproduce Steps to reproduce the behavior: ```bash python run_summarization_no_trainer.py \ --model_name_or_path t5-small \ --dataset_name cnn_dailymail \ --dataset_config "3.0.0" \ --source_prefix "summarize: " \ --output_dir ./tst-summarization ``` <!-- If you have code snippets, error messages, stack traces please provide them here as well. Important! Use code tags to correctly format your code.
See https://help.github.com/en/github/writing-on-github/creating-and-highlighting-code-blocks#syntax-highlighting Do not use screenshots, as they are hard to read and (more importantly) don't allow others to copy-and-paste your code.--> ## Expected behavior ```bash --------------------------------------------------------------------------- NotADirectoryError Traceback (most recent call last) <ipython-input-16-b3897e385817> in <module>() 617 618 if __name__ == "__main__": --> 619 main() <ipython-input-16-b3897e385817> in main() 343 if args.dataset_name is not None: 344 # Downloading and loading a dataset from the hub. --> 345 raw_datasets = load_dataset(args.dataset_name, args.dataset_config_name) 346 else: 347 data_files = {} /usr/local/lib/python3.7/dist-packages/datasets/load.py in load_dataset(path, name, data_dir, data_files, split, cache_dir, features, download_config, download_mode, ignore_verifications, keep_in_memory, save_infos, revision, use_auth_token, task, streaming, script_version, **config_kwargs) 1697 ignore_verifications=ignore_verifications, 1698 try_from_hf_gcs=try_from_hf_gcs, -> 1699 use_auth_token=use_auth_token, 1700 ) 1701 /usr/local/lib/python3.7/dist-packages/datasets/builder.py in download_and_prepare(self, download_config, download_mode, ignore_verifications, try_from_hf_gcs, dl_manager, base_path, use_auth_token, **download_and_prepare_kwargs) 594 if not downloaded_from_gcs: 595 self._download_and_prepare( --> 596 dl_manager=dl_manager, verify_infos=verify_infos, **download_and_prepare_kwargs 597 ) 598 # Sync info /usr/local/lib/python3.7/dist-packages/datasets/builder.py in _download_and_prepare(self, dl_manager, verify_infos, **prepare_split_kwargs) 659 split_dict = SplitDict(dataset_name=self.name) 660 split_generators_kwargs = self._make_split_generators_kwargs(prepare_split_kwargs) --> 661 split_generators = self._split_generators(dl_manager, **split_generators_kwargs) 662 663 # Checksums verification /root/.cache/huggingface/modules/datasets_modules/datasets/cnn_dailymail/3cb851bf7cf5826e45d49db2863f627cba583cbc32342df7349dfe6c38060234/cnn_dailymail.py in _split_generators(self, dl_manager) 253 def _split_generators(self, dl_manager): 254 dl_paths = dl_manager.download_and_extract(_DL_URLS) --> 255 train_files = _subset_filenames(dl_paths, datasets.Split.TRAIN) 256 # Generate shared vocabulary 257 /root/.cache/huggingface/modules/datasets_modules/datasets/cnn_dailymail/3cb851bf7cf5826e45d49db2863f627cba583cbc32342df7349dfe6c38060234/cnn_dailymail.py in _subset_filenames(dl_paths, split) 154 else: 155 logger.fatal("Unsupported split: %s", split) --> 156 cnn = _find_files(dl_paths, "cnn", urls) 157 dm = _find_files(dl_paths, "dm", urls) 158 return cnn + dm /root/.cache/huggingface/modules/datasets_modules/datasets/cnn_dailymail/3cb851bf7cf5826e45d49db2863f627cba583cbc32342df7349dfe6c38060234/cnn_dailymail.py in _find_files(dl_paths, publisher, url_dict) 133 else: 134 logger.fatal("Unsupported publisher: %s", publisher) --> 135 files = sorted(os.listdir(top_dir)) 136 137 ret_files = [] NotADirectoryError: [Errno 20] Not a directory: '/root/.cache/huggingface/datasets/downloads/1bc05d24fa6dda2468e83a73cf6dc207226e01e3c48a507ea716dc0421da583b/cnn/stories' ``` Check the cache path: ```bash # the file exists, but not a directory $ ls /root/.cache/huggingface/datasets/downloads/1bc05d24fa6dda2468e83a73cf6dc207226e01e3c48a507ea716dc0421da583b /root/.cache/huggingface/datasets/downloads/1bc05d24fa6dda2468e83a73cf6dc207226e01e3c48a507ea716dc0421da583b $ ls 
/root/.cache/huggingface/datasets/downloads/1bc05d24fa6dda2468e83a73cf6dc207226e01e3c48a507ea716dc0421da583b/ ls: cannot access '/root/.cache/huggingface/datasets/downloads/1bc05d24fa6dda2468e83a73cf6dc207226e01e3c48a507ea716dc0421da583b/': Not a directory $ cat /root/.cache/huggingface/datasets/downloads/1bc05d24fa6dda2468e83a73cf6dc207226e01e3c48a507ea716dc0421da583b <!DOCTYPE html><html><head><title>Google Drive - Quota exceeded</title><meta http-equiv="content-type" content="text/html; charset=utf-8"/><link href=&#47;static&#47;doclist&#47;client&#47;css&#47;2674426593&#45;untrustedcontent.css rel="stylesheet" nonce="pCgnLp5yR1/ZpQj0eTvLkg"><link rel="icon" href="//ssl.gstatic.com/images/branding/product/1x/drive_2020q4_32dp.png"/><style nonce="pCgnLp5yR1/ZpQj0eTvLkg">#gbar,#guser{font-size:13px;padding-top:0px !important;}#gbar{height:22px}#guser{padding-bottom:7px !important;text-align:right}.gbh,.gbd{border-top:1px solid #c9d7f1;font-size:1px}.gbh{height:0;position:absolute;top:24px;width:100%}@media all{.gb1{height:22px;margin-right:.5em;vertical-align:top}#gbar{float:left}}a.gb1,a.gb4{text-decoration:underline !important}a.gb1,a.gb4{color:#00c !important}.gbi .gb4{color:#dd8e27 !important}.gbf .gb4{color:#900 !important} </style><script nonce="IYVNn9QbZ2QHAY59Th92mg"></script></head><body><div id=gbar><nobr><a target=_blank class=gb1 href="https://www.google.com/webhp?tab=ow">Search</a> <a target=_blank class=gb1 href="http://www.google.com/imghp?hl=en&tab=oi">Images</a> <a target=_blank class=gb1 href="https://maps.google.com/maps?hl=en&tab=ol">Maps</a> <a target=_blank class=gb1 href="https://play.google.com/?hl=en&tab=o8">Play</a> <a target=_blank class=gb1 href="https://www.youtube.com/?gl=US&tab=o1">YouTube</a> <a target=_blank class=gb1 href="https://news.google.com/?tab=on">News</a> <a target=_blank class=gb1 href="https://mail.google.com/mail/?tab=om">Gmail</a> <b class=gb1>Drive</b> <a target=_blank class=gb1 style="text-decoration:none" href="https://www.google.com/intl/en/about/products?tab=oh"><u>More</u> &raquo;</a></nobr></div><div id=guser width=100%><nobr><span id=gbn class=gbi></span><span id=gbf class=gbf></span><span id=gbe></span><a target="_self" href="/settings?hl=en_US" class=gb4>Settings</a> | <a target=_blank href="//support.google.com/drive/?p=web_home&hl=en_US" class=gb4>Help</a> | <a target=_top id=gb_70 href="https://accounts.google.com/ServiceLogin?hl=en&passive=true&continue=https://drive.google.com/uc%3Fexport%3Ddownload%26id%3D0BwmD_VLjROrfTHk4NFg2SndKcjQ&service=writely&ec=GAZAMQ" class=gb4>Sign in</a></nobr></div><div class=gbh style=left:0></div><div class=gbh style=right:0></div><div class="uc-main"><div id="uc-text"><p class="uc-error-caption">Sorry, you can&#39;t view or download this file at this time.</p><p class="uc-error-subcaption">Too many users have viewed or downloaded this file recently. Please try accessing the file again later. If the file you are trying to access is particularly large or is shared with many people, it may take up to 24 hours to be able to view or download the file. 
If you still can't access a file after 24 hours, contact your domain administrator.</p></div></div><div class="uc-footer"><hr class="uc-footer-divider">&copy; 2022 Google - <a class="goog-link" href="//support.google.com/drive/?p=web_home">Help</a> - <a class="goog-link" href="//support.google.com/drive/bin/answer.py?hl=en_US&amp;answer=2450387">Privacy & Terms</a></div></body></html> ``` It seems that the script downloaded the web pages rather than the data file.
01-13-2022 04:29:48
01-13-2022 04:29:48
Hey @cyk1337, I cannot reproduce this error sadly. The code you provided: ```bash python run_summarization_no_trainer.py \ --model_name_or_path t5-small \ --dataset_name cnn_dailymail \ --dataset_config "3.0.0" \ --source_prefix "summarize: " \ --output_dir ./tst-summarization ``` works fine for me. Could you maybe try to empty your cache completely `rm -r ~/.cache/huggingface/datasets` and try again?<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread. Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored.<|||||>> Hey @cyk1337, I cannot reproduce this error sadly. The code you provided: > > ```shell > python run_summarization_no_trainer.py \ > --model_name_or_path t5-small \ > --dataset_name cnn_dailymail \ > --dataset_config "3.0.0" \ > --source_prefix "summarize: " \ > --output_dir ./tst-summarization > ``` > > works fine for me. Could you maybe try to empty your cache completely `rm -r ~/.cache/huggingface/datasets` and try again? I have met the same problem `"NotADirectoryError: [Errno 20] Not a directory: '/root/.cache/huggingface/datasets/downloads/1bc05d24fa6dda2468e83a73cf6dc207226e01e3c48a507ea716dc0421da583b/cnn/stories'"` I have tried to empty my cache using `rm -r ~/.cache/huggingface/datasets`, but it did not work. When I replace dataset_name cnn_dailymail with xsum, it goes back to normal.
transformers
15,129
closed
[examples/flax/language-modeling] set loglevel
This PR sets the log level to `info` in the flax examples where it wasn't set yet, in line with the rest of the flax examples. Fixes a report in https://github.com/huggingface/transformers/pull/14909#issuecomment-1011467609 @patil-suraj, @sgugger
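For reference, the logging setup used across the examples looks roughly like this (a sketch; the exact format string differs per script):

```python
import logging

logger = logging.getLogger(__name__)

logging.basicConfig(
    format="%(asctime)s - %(levelname)s - %(name)s - %(message)s",
    datefmt="%m/%d/%Y %H:%M:%S",
    level=logging.INFO,
)
```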
01-12-2022 22:38:07
01-12-2022 22:38:07
Thanks so much @stas00, problem is solved now! Level is now correctly `INFO` during training!<|||||>Thank you for confirming that, @stefan-it.<|||||>Not the main person on the Flax examples so will let @patil-suraj or @patrickvonplaten chime in on the PR :-)
transformers
15,128
closed
Example script to edit kenlm arpa file does not work correctly in kaggle notebook
## Environment info <!-- You can run the command `transformers-cli env` and copy-and-paste its output below. Don't forget to fill out the missing fields in that output! --> Kaggle Python environment with GPU acceleration - `transformers` version: 4.12.5 - Platform: Linux-5.10.68+-x86_64-with-debian-bullseye-sid - Python version: 3.7.12 - PyTorch version (GPU?): 1.9.1 (True) - Tensorflow version (GPU?): 2.6.2 (True) - Flax version (CPU?/GPU?/TPU?): 0.3.6 (gpu) - Jax version: 0.2.25 - JaxLib version: 0.1.70 - Using GPU in script?: No - Using distributed or parallel set-up in script?: No ### Who can help @patrickvonplaten The problem arises when using: * [ ] the official example scripts: (give details below) https://huggingface.co/blog/wav2vec2-with-ngram The tasks I am working on is: Training KenLM and using KenLM with xlsr ## To reproduce Steps to reproduce the behavior: At step: ![image](https://user-images.githubusercontent.com/25264037/149226114-7f52e449-f319-4893-97b0-1a1b333b4f99.png) if I run it like that my output in kenlm.arpa is (count is updated correctly but < /s > line is not added ) ![image](https://user-images.githubusercontent.com/25264037/149225404-074f09b7-8ad0-42c2-9a71-44057dcabdae.png) Once I change that code above to (Inserting that change as a code block did not make the problem visible, sublime code and kaggle notebook images below) ![image](https://user-images.githubusercontent.com/25264037/149228896-ef8a0893-a16e-47f1-9eb8-9a67ece01aea.png) ![image](https://user-images.githubusercontent.com/25264037/149227673-de309990-3b1f-4d68-9df7-78823095e6c9.png) I get correct output ![image](https://user-images.githubusercontent.com/25264037/149225552-0b657b14-5ae2-4885-9bac-20261f2e262d.png) ## Expected behavior <!-- A clear and concise description of what you would expect to happen. -->
01-12-2022 21:54:51
01-12-2022 21:54:51
I see, thanks a lot for reporting the problem here! Indeed, the regex I use in the blog is not general enough. I'll change it! Thanks a lot for reporting it here, and it seems like you solved your problem already :-)<|||||>Should be fixed here: https://github.com/huggingface/blog/pull/206<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread. Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored.<|||||>Fixed
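For readers landing here, the corrected post-processing boils down to something like the snippet below: bump the unigram count and duplicate the `<s>` entry as `</s>` (filenames are placeholders):

```python
# Add the missing "</s>" entry to the kenlm arpa file and bump the 1-gram count.
with open("5gram.arpa", "r") as read_file, open("5gram_correct.arpa", "w") as write_file:
    has_added_eos = False
    for line in read_file:
        if not has_added_eos and "ngram 1=" in line:
            count = line.strip().split("=")[-1]
            write_file.write(line.replace(f"{count}", f"{int(count) + 1}"))
        elif not has_added_eos and "<s>" in line:
            write_file.write(line)
            write_file.write(line.replace("<s>", "</s>"))
            has_added_eos = True
        else:
            write_file.write(line)
```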
transformers
15,127
closed
[Community Event] Robust Speech Challenge
Hey everybody, We have organized a new speech recognition community event and would love to have you there! During the event we want to teach you how to build robust speech recognition systems in your favorite language 🌍🌎🌏. The event will focus on [XLS-R](https://huggingface.co/facebook/wav2vec2-xls-r-1b), [N-gram boosted Decoding](https://huggingface.co/blog/wav2vec2-with-ngram), and [Common Voice 7,8](https://commonvoice.mozilla.org/en/datasets). For more information and to join the event, please take a look at this [**forum post**](https://discuss.huggingface.co/t/open-to-the-community-robust-speech-recognition-challenge/13614). Open Sourcely, the HuggingFace team! 🤗
01-12-2022 17:51:49
01-12-2022 17:51:49
This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread. Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored.
transformers
15,126
closed
Text generation with TAPAS as encoder
Hi @NielsRogge, Thanks for the TAPAS implementation! I'm trying to train the model to perform text generation conditioned on tables. Since TAPAS can encode the semi-structured meaning in tables, I guessed it was a good choice to use it as an encoder and, say, GPT2 as a decoder. However, I encountered a problem when trying to generate from that EncoderDecoder model; this: ![image](https://user-images.githubusercontent.com/4630195/149194029-d7a664b6-374d-4dc9-a2dd-fb3bf007cd93.png) results in the following error: ![image](https://user-images.githubusercontent.com/4630195/149193959-601a3129-7bc8-49ad-aa72-2463158f7202.png) I guess this is because model.generate() for EncoderDecoder does not expect the extra token_type_ids that TAPAS has. Can you think of a way I can make this work? Thanks!
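For reference, a minimal sketch of the setup being described (the checkpoint names and the toy table are examples only; whether `generate()` forwards TAPAS's 7-dimensional `token_type_ids` to the encoder is precisely the open question here):

```python
import pandas as pd
from transformers import EncoderDecoderModel, TapasTokenizer

# Table encoder + autoregressive decoder, as described above.
model = EncoderDecoderModel.from_encoder_decoder_pretrained("google/tapas-base", "gpt2")
tokenizer = TapasTokenizer.from_pretrained("google/tapas-base")

table = pd.DataFrame({"City": ["Paris", "Rome"], "Population": ["2.1M", "2.8M"]})
inputs = tokenizer(table=table, queries=["Describe this table"], return_tensors="pt")
# `inputs` contains input_ids, attention_mask and the 7-dimensional token_type_ids TAPAS expects;
# the forward pass accepts them, but they also need to reach the encoder during generate().
```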
01-12-2022 17:50:03
01-12-2022 17:50:03
This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread. Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored.<|||||>This question has been answered on the [forum](https://discuss.huggingface.co/t/using-generate-with-tapas-as-encoder-in-encoderdecoder/13655).
transformers
15,125
closed
mBART support for run_summarization.py
Added support for the multilingual tokenizer and mBART to run_summarization.py, which was previously not working with mBART.
01-12-2022 14:30:19
01-12-2022 14:30:19
So, it appears that this PR introduced a new issue by forcing all models to include `--lang`, @banda-larga would you like to fix it in a new PR and assert on lack of `--lang` only with mbart model type? see: https://github.com/huggingface/transformers/pull/15150#issuecomment-1012553262 and while at it to integrate a better error message as proposed here https://github.com/huggingface/transformers/pull/15150 will then also need to revert this https://github.com/huggingface/transformers/pull/15149 as part of the new PR. and note to self: make sure ``` RUN_SLOW=1 pytest tests/deepspeed/test_model_zoo.py::TestDeepSpeedModelZoo::test_zero_to_fp32_zero2_sum_pegasus ``` doesn't fail as this PR broke it. please tag me to the new PR and I will do the checking - you don't need to figure this part out. Bonus points: adding a new examples test that should have failed with this PR - are we not testing `run_summarization.py` in torch_examples CI? Thanks.<|||||>This is a tiny bit urgent, so not waiting to remove the two lines that break every existing command using `run_summarization`. I fixed that in [this commit](https://github.com/huggingface/transformers/commit/96881729ce83cfc8e5fa04c903ee4296ad17cfbb).<|||||>ok, so nothing else needs to be done. @banda-larga please ignore my comments above.
transformers
15,124
closed
[Fix doc example] - ProphetNetDecoder
# What does this PR do? The doc example in `ProphetNetDecoder` will fail: https://github.com/huggingface/transformers/blob/68cc4ccde25b1ed8a9b77fdf6f78c833bdff0e9c/src/transformers/models/prophetnet/modeling_prophetnet.py#L1473-L1474 ProphetNetDecoder never sets `is_decoder = True`; it is set only at the level of `ProphetNetModel` and `ProphetNetForCausalLM`. This PR removes that assertion.
01-12-2022 13:56:11
01-12-2022 13:56:11
transformers
15,123
closed
Error while converting distilbart-mnli-12-1 model to ONNX
After converting `distilbart-mnli-12-1` to ONNX, while testing the onnx model, I get this issue: ``` onnxruntime.capi.onnxruntime_pybind11_state.InvalidArgument: \[ONNXRuntimeError\] : 2 : INVALID_ARGUMENT : Non-zero status code returned while running Expand node. Name:'Expand_74' Status Message: invalid expand shape ``` After lots of investigation, I understand that the problem lies in the `shift_tokens_right` function in the `modeling_bart.py` code. I edited the function to this: ``` def shift_tokens_right(input_ids: torch.Tensor, pad_token_id: int, decoder_start_token_id: int): """ Shift input ids one token to the right. """ shifted_input_ids = input_ids.new_zeros(input_ids.shape) shifted_input_ids[:, 1:] = input_ids[:, :-1].clone() shifted_input_ids[:, 0] = torch.full((input_ids.shape[0],),decoder_start_token_id) assert pad_token_id is not None, "self.model.config.pad_token_id has to be defined." # # replace possible -100 values in labels by `pad_token_id` shifted_input_ids.masked_fill_(shifted_input_ids == -100, pad_token_id) return shifted_input_ids ``` The problem is totally solved. The issue lies with the ONNX conversion, which does not behave correctly when there is broadcasting. Is it possible to edit the repository and merge these changes into yours?
01-12-2022 13:34:13
01-12-2022 13:34:13
cc @lewtun @michaelbenayoun <|||||>Hi @farzanehnakhaee70 and thank you for raising this issue! FYI we recently merged a major overhaul of the ONNX export for BART in #14700 which we've tested for various topologies / tasks, e.g. this works: ```bash # Install from source with extra ONNX dependencies pip install 'git+https://github.com/huggingface/transformers#egg=transformers[onnx]' # Export model with default features (i.e. just `BartModel`) python -m transformers.onnx --model=valhalla/distilbart-mnli-12-1 onnx/ ``` Does installing from `master` solve your problem? If not, can you please provide the explicit command you are using to export the model?<|||||>Welcome and thanks a lot for your consideration! I see your major changes in your configuration which largely improves usability for other tasks. But this error will not be solved without changing the code as I mentioned. The major issue is that although we add `dynamic_axis` in conversion script, but due to the error of broadcasting, the output of this function became fixed with regard to the batch_size of the dummy input. Therefore, when running the model after conversion with batch size different from the batch size of the dummy input, this error will raise.<|||||>Thank you for the extra context about the batch size :) However, I am not able to reproduce the problem you reported. For example, suppose we export the model using the command I used in my previous comment: ```bash # Export model with default features (i.e. just `BartModel`) python -m transformers.onnx --model=valhalla/distilbart-mnli-12-1 onnx/ ``` We can then load this model into an ONNX Runtime `InferenceSession` as follows: ```python from transformers import AutoTokenizer, AutoModel model_ckpt = "valhalla/distilbart-mnli-12-1" tokenizer = AutoTokenizer.from_pretrained(model_ckpt) bs = 16 # batch size ort_session = ort.InferenceSession("onnx/model.onnx") onnx_named_outputs = ["last_hidden_state"] inputs = tokenizer(["Hello, my name is Lewis"] * bs, return_tensors="np") decoder_inputs = tokenizer(["Hello"] * bs, return_tensors="np") all_inputs = { "input_ids": inputs["input_ids"], "attention_mask": inputs["attention_mask"], "decoder_input_ids": decoder_inputs["input_ids"], "decoder_attention_mask": decoder_inputs["attention_mask"], } onnx_outputs = ort_session.run(onnx_named_outputs, all_inputs) ``` This runs without error using the source install of `transformers`. For comparison, we can find the batch size used in the dummy inputs during the conversion as follows: ```python from transformers.models.bart import BartConfig, BartOnnxConfig config = BartConfig.from_pretrained(model_ckpt) onnx_config = BartOnnxConfig(config) dummy_inputs = onnx_config.generate_dummy_inputs(tokenizer, framework=TensorType.NUMPY) # Returns (batch_size, seq_len) = (2,8) dummy_inputs["input_ids"].shape ``` So you can see that the dummy inputs have a batch size of 2, while the inference example I created uses a batch size of 16. Could you please share a minimal reproducible example with the problem you're facing (e.g. a Colab notebook)? <|||||>Thanks a lot for your complete consideration. I convert one model for `sentence classification` task and it doesn't have any `decoder_input_ids` and `decoder_attention_mask` as input. The only inputs are `input_ids` and `attentio_mask` which is shown by netron. If these inputs are availabe for the model, then we do not have any problem because the `shift_tokens_right` function will no be used any more. 
Would you please tell me how I can convert my model that these two inputs are also defined as the input (the same as what you have done)?<|||||>Ah, now I am able to reproduce the problem - the missing step was to specify explicitly that we should use the `sequence-classification` feature 😄 For example, the following fails: ```python import onnxruntime as ort from transformers import AutoTokenizer, AutoModel # Export the model with the `sequence-classification` topology model_ckpt = "valhalla/distilbart-mnli-12-1" onnx_path = f"onnx/bart-large-clf/" !python -m transformers.onnx --model={model_ckpt} --feature="sequence-classification" {onnx_path} # Run with ONNX Runtime ort_session = ort.InferenceSession(f"{onnx_path}model.onnx") # Note we have `logits` for sequence classification heads onnx_named_outputs = ["logits"] # This works because the dummy inputs have batch_size=2 inputs = tokenizer(["I loved this movie!"] * 2, return_tensors="np") onnx_outputs = ort_session.run(onnx_named_outputs, dict(inputs)) # This fails - stack trace below inputs = tokenizer(["I loved this movie!"] * 3, return_tensors="np") onnx_outputs = ort_session.run(onnx_named_outputs, dict(inputs)) ``` <details open> <summary>Stack trace</summary> <br> ``` --------------------------------------------------------------------------- InvalidArgument Traceback (most recent call last) /var/folders/28/k4cy5q7s2hs92xq7_h89_vgm0000gn/T/ipykernel_8196/508920182.py in <module> 5 6 inputs = tokenizer(["I loved this movie!"] * 3, return_tensors="np") ----> 7 onnx_outputs = ort_session.run(onnx_named_outputs, dict(inputs)) ~/miniconda3/envs/transformers/lib/python3.8/site-packages/onnxruntime/capi/onnxruntime_inference_collection.py in run(self, output_names, input_feed, run_options) 186 output_names = [output.name for output in self._outputs_meta] 187 try: --> 188 return self._sess.run(output_names, input_feed, run_options) 189 except C.EPFail as err: 190 if self._enable_fallback: InvalidArgument: [ONNXRuntimeError] : 2 : INVALID_ARGUMENT : Non-zero status code returned while running Expand node. Name:'Expand_74' Status Message: invalid expand shape ``` </details> And great detective work in figuring out that `shift_tokens_right()` was the source of the problem! I think your proposal makes sense and I was able to verify that including your change fixes the problem with the export. What do you think @michaelbenayoun? If there are no negative consequences with changing `shift_tokens_right()`, my suggestion is to ask @farzanehnakhaee70 to open a PR to fix the issue. <|||||>Great. If there is anything I can help from my side, I would be happy to do it.<|||||>Hi @farzanehnakhaee70, @lewtun, Great catch @farzanehnakhaee70 !! I would say that if you have a working solution you can definitely open a PR!<|||||>Hi @farzanehnakhaee70 before we open a PR, can you please share your environment details by running the command `transformers-cli env` and copy-and-pasting its output here? I'd like to know which version of transformers this affects, the type of OS etc<|||||>Hi @lewtun Sorry for the delay. 
Here it is: ``` - `transformers` version: 4.15.0 - Platform: Linux-4.15.0-154-generic-x86_64-with-glibc2.29 - Python version: 3.8.7 - PyTorch version (GPU?): 1.10.1+cu102 (True) - Tensorflow version (GPU?): not installed (NA) - Flax version (CPU?/GPU?/TPU?): not installed (NA) - Jax version: not installed - JaxLib version: not installed - Using GPU in script?: <fill in> - Using distributed or parallel set-up in script?: <fill in> ```<|||||>Thanks for sharing your environment @farzanehnakhaee70! I did a fresh install with ``` pip install transformers[onnxruntime]==4.15 ``` and find I am no longer able to reproduce the error (here's a [Colab notebook](https://colab.research.google.com/drive/1DZov6u3KOhoi2WE7K9quDKAjuYVRFDdn?usp=sharing) if you want to verify). This suggests that the error I saw (and possibly in your case too) is a symptom of a problematic environment. Would you mind doing a fresh install or providing a Colab notebook that reproduces the error? I'd like to be certain that the error is reproducible before we make any changes to the `transformers` codebase. Thank you!<|||||>Sure.<|||||>Hi, Really sorry for the late response. Today I was going to test this model. However, during the test this error occurs! ``` Traceback (most recent call last): File "/usr/lib/python3.8/runpy.py", line 194, in _run_module_as_main return _run_code(code, main_globals, None, File "/usr/lib/python3.8/runpy.py", line 87, in _run_code exec(code, run_globals) File "/usr/lib/python3.8/site-packages/transformers/onnx/__main__.py", line 22, in <module> from .features import FeaturesManager File "/usr/lib/python3.8/site-packages/transformers/onnx/features.py", line 71, in <module> class FeaturesManager: File "/usr/lib/python3.8/site-packages/transformers/onnx/features.py", line 273, in FeaturesManager def get_model_from_feature(feature: str, model: str) -> PreTrainedModel: NameError: name 'PreTrainedModel' is not defined ``` Do you also face this issue?<|||||>Hi @farzanehnakhaee70 I am unfortunately not able to reproduce your error - by the looks of it, it could be a problem with your environment. Did you run a fresh install in a clean virtual env with the command I shared above?<|||||>Thanks for your reply @lewtun I install it inside a fresh container and also with the command you provided. I will test it once more and inform you about the incidence.<|||||>Hi @lewtun I test it once more with a fresh install. As you said, there isn't any problem. Thanks a lot for your consideration.<|||||>Thanks for double-checking @farzanehnakhaee70 ! Does this mean we can close this issue?<|||||>Hi @lewtun Thanks a lot. For sure.
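As a closing note on the dynamic-batch discussion above: once a model has been exported, a quick way to confirm whether the batch axis is really dynamic is to inspect the input shapes recorded in the ONNX graph. This is a minimal sketch (the `onnx/model.onnx` path is assumed from the export commands earlier in the thread); symbolic dimension names indicate dynamic axes, while a hard-coded integer (e.g. the dummy batch size of 2) points to the problem described above.

```python
import onnx

model = onnx.load("onnx/model.onnx")  # path assumed from the export commands above
for graph_input in model.graph.input:
    dims = [d.dim_param or d.dim_value for d in graph_input.type.tensor_type.shape.dim]
    print(graph_input.name, dims)  # symbolic names such as "batch" mean the axis is dynamic
```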
transformers
15,122
closed
fix: switch from slow to generic tokenizer class
# What does this PR do? This PR fixes #15077, where the slow tokenizer class (which does not have a `train_new_from_iterator` method) was loaded, by replacing it with `AutoTokenizer`, which loads the fast version by default.
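A minimal sketch of the behaviour this fix relies on (`gpt2` is only a stand-in checkpoint, not the one from the issue): `AutoTokenizer` returns the fast tokenizer by default, and it is the fast class that exposes `train_new_from_iterator`.

```python
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("gpt2")
assert tokenizer.is_fast  # fast tokenizers are returned by default

corpus = ["some training text", "more training text"]  # any iterable of strings
new_tokenizer = tokenizer.train_new_from_iterator(corpus, vocab_size=1000)
```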
01-12-2022 10:22:17
01-12-2022 10:22:17
transformers
15,121
closed
Add ONNX configuration classes to docs
# What does this PR do? This PR adds the ONNX configuration classes to the main doc. I've also removed the permalinks in the ONNX guide that we had included in #14904 to link to the configuration source code.
01-12-2022 10:16:44
01-12-2022 10:16:44
transformers
15,120
closed
Add MAE
# What does this PR do? This PR implements [MAE](https://github.com/facebookresearch/mae) (Masked Autoencoders are Scalable Vision Learners) by Facebook AI. I've implemented it as a separate model in the library, as the model is quite specific (encoder only operates on visual patches, next a mask token is added and the decoder reconstructs pixel values based on encoded visual patches + mask tokens). The model without decoder on top is called `ViTMAEModel`, the model with decoder on top is called `ViTMAEForPreTraining` (inspired by `BertForPreTraining`). After pre-training, one can load the weights directly into a `ViTForImageClassification`. * Models are on the hub: https://huggingface.co/models?other=vit_mae * Demo notebook: https://colab.research.google.com/drive/1edgG0ne4VNQrc11uAS0wzogg5HblzK2s?usp=sharing To do: - [x] fix some tests. However, these fail due to the non-deterministic behaviour of the model (a random mask is generated in each forward pass). I can make the integration test deterministic by adding `torch.manual_seed(2)`, however adding this on top of the test file doesn't make it deterministic for other tests.
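A short usage sketch of the pre-training head described above; the checkpoint name is an assumption and the random image is only a placeholder, so treat this as illustrative rather than the official example (see the demo notebook linked above for that).

```python
import numpy as np
from PIL import Image
from transformers import AutoFeatureExtractor, ViTMAEForPreTraining

checkpoint = "facebook/vit-mae-base"  # assumed hub name
feature_extractor = AutoFeatureExtractor.from_pretrained(checkpoint)
model = ViTMAEForPreTraining.from_pretrained(checkpoint)

image = Image.fromarray(np.uint8(np.random.rand(224, 224, 3) * 255))  # placeholder image
inputs = feature_extractor(images=image, return_tensors="pt")
outputs = model(**inputs)
print(outputs.loss)  # pixel reconstruction loss over the masked patches
```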
01-12-2022 09:34:03
01-12-2022 09:34:03
transformers
15,119
closed
add model scaling section
# What does this PR do? This PR adds a practical scaling guide to the documentation. When starting to work with large transformer models one of the most common issues is running out of GPU memory. There are several easy to implement strategies to counteract this. This guide shows how to use gradient accumulation, gradient checkpointing, mixed precision training, and choice of optimizer to decrease the memory footprint. It shows how to do this with both the `Trainer` and `accelerate` in a hands-on fashion. ## What does this PR not do? This PR does **not** replace the in-depth explanations at ["Model Parallelism"](https://huggingface.co/docs/transformers/parallelism) and ["Performance and Scalability: How To Fit a Bigger Model and Train It Faster"](https://huggingface.co/docs/transformers/performance). Rather the proposed guide should be an easy entry point for new users whereas these other sections are for more advanced users. ## TensorFlow? If this guide resonates well it would make sense to extend it with a TensorFlow/Keras section (similar to the `accelerate` section). Maybe TensorFlow boy (aka @Rocketknight1) has some ideas?
01-12-2022 09:24:58
01-12-2022 09:24:58
We could definitely make a TF/Keras section! I believe Keras doesn't support gradient accumulation or gradient checkpointing out of the box, though, but mixed precision, Adafactor, etc. are all very doable.<|||||>Hi @sgugger I integrated your feedback, let me know if this is what you had in mind. <|||||>In the case of documentation, I don't think repetition is a bad thing since users rarely look at every page in a row. Making sure the docs are all linked to each other for further reading is important. The two doc pages reach different target audience IMO and present the techniques in different ways, so I think having both of them is good. <|||||>Repetition is a bad thing in this situation, IMHO: 1. as it'd make it more difficult to point users to the right doc. For example @LysandreJik has been pointing users to the performance doc when they have difficulties with getting their model to run memory-wise. Now you will have 2 competing docs, which one should he point users to? It shouldn't take more than a few secs for a maintainer to find where to point the user to and not needing to reread each doc and contemplate is it A or B? 2. how do you maintain 2 overlapping documents? 3. some users do read docs and now they will need to read 2 docs with confusing overlaps But the big question is this: why make things more complicated and messy when the whole performance/scalability documentation is due for a revamp anyway. It's far from great at the moment. I propose to discuss how we best serve both users and ourselves by improving the performance/scalability doc structure/layout. I'm not proposing to drop anything from the current PR but to integrate rather than creating a diverging doc. and I'd be happy to work together with @lvwerra to make it happen.<|||||>update: @lvwerra and I will discuss a revamp/integration on Fri.<|||||>Hi there, I updated the doc and this is ready for another review: - use @stas00's suggestion to initialize the measurement by loading a small tensor to the GPU - added a note about the embedding issue with BnB optimizer I disusssed with @stas00 that we could start the revamp by merging this PR and then move things around as we start adding the other documents in #15213. <|||||>Good point, I merged `scaling.mdx` and `performance.mdx`. The document starts with the new practical guide and I added the existing material to a section called `Further discussions`. There is still a bit of redundancy between the two parts but I think we can refactor it when we flesh out the other parts. @sgugger are you happy with the current state to be merged?<|||||>Thanks @sgugger, I integrated your feedback! @stas00 do you want to have a last look before merging?<|||||>Thanks, @lvwerra 1. Please revert all the header level changes in the existing doc - the menu on the right only handles ## and ### and your proposed change will practically eliminate most of the menu entries as you added an additional level 2. We discussed adding this PR's content at the end of the existing document as at the moment it's the extra and not the main document as it was laid out. 95% of this PR's material will be spread out over the new documents, and the current performance doc will remain here as a reference that the more specific docs will point to for indepth information. (e.g. mixed precision section we aren't going to repeat the full discussion about why and how in each sub-document and will point to the performance reference doc instead). 
We may rename the main reference doc and make performance.mdx the entry point instead. Does it make sense? If not, and we need to sync on the big picture - let's discuss it then first. <|||||>Hi @stas00 Sure, I'll revert the headers, I was not aware of that limitation. Regarding the order of the content: I know that we will change that document radically in later PRs but I thought since this is gonna be public and maybe here for a few weeks it would be nicer to have the practical guide at the beginning and the theoretical part at the end. If you feel strongly about this I can change it back. <|||||>> Hi @stas00 > > Sure, I'll revert the headers, I was not aware of that limitation. In fact I asked when the new design was added to support more than 3-levels but nothing came out of that. i.e. the menu is missing already a bunch of entries where there is a deeper nesting like the deepspeed doc. :( > Regarding the order of the content: I know that we will change that document radically in later PRs but I thought since this is gonna be public and maybe here for a few weeks it would be nicer to have the practical guide at the beginning and the theoretical part at the end. If you feel strongly about this I can change it back. Sure, that works. I think what I will do instead in the new PR is to move this original document out to a new document. I'm just concerned with broken links for any pre-existing links to this url. <|||||>Thanks a lot for working on this important part of the docs!
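For readers wondering what "initialize the measurement by loading a small tensor to the GPU" refers to in the thread above: the CUDA context itself consumes memory the first time the GPU is touched, so pushing a tiny tensor first makes later measurements comparable. A minimal sketch (assumes a CUDA device is available; not the exact code from the guide):

```python
import torch

torch.ones((1, 1)).to("cuda")  # forces CUDA context initialization before measuring
print(f"{torch.cuda.memory_allocated() / 1024**2:.1f} MB of tensors currently allocated")
```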
transformers
15,118
closed
ImportError: cannot import name 'CpmTokenizer' from 'transformers'
``` nvidia-smi +-----------------------------------------------------------------------------+ | NVIDIA-SMI 440.118.02 Driver Version: 440.118.02 CUDA Version: 10.2 | |-------------------------------+----------------------+----------------------+ | GPU Name Persistence-M| Bus-Id Disp.A | Volatile Uncorr. ECC | | Fan Temp Perf Pwr:Usage/Cap| Memory-Usage | GPU-Util Compute M. | |===============================+======================+======================| | 0 Tesla T4 Off | 00000000:21:01.0 Off | 0 | | N/A 84C P0 61W / 70W | 8162MiB / 15109MiB | 99% Default | +-------------------------------+----------------------+----------------------+ | 1 Tesla T4 Off | 00000000:21:02.0 Off | 0 | | N/A 27C P8 9W / 70W | 11MiB / 15109MiB | 0% Default | +-------------------------------+----------------------+----------------------+ | 2 Tesla T4 Off | 00000000:21:03.0 Off | 0 | | N/A 28C P8 9W / 70W | 11MiB / 15109MiB | 0% Default | +-------------------------------+----------------------+----------------------+ | 3 Tesla T4 Off | 00000000:21:04.0 Off | 0 | | N/A 27C P8 11W / 70W | 11MiB / 15109MiB | 0% Default | +-------------------------------+----------------------+----------------------+ +-----------------------------------------------------------------------------+ | Processes: GPU Memory | | GPU PID Type Process name Usage | |=============================================================================| | 0 14274 C python 8151MiB | +-----------------------------------------------------------------------------+ ``` ``` nvcc -V nvcc: NVIDIA (R) Cuda compiler driver Copyright (c) 2005-2019 NVIDIA Corporation Built on Wed_Oct_23_19:24:38_PDT_2019 Cuda compilation tools, release 10.2, V10.2.89 ``` ``` torch == 1.9.0 transformers == 4.15.0 ``` ``` Python 3.9.7 (default, Sep 16 2021, 13:09:58) [GCC 7.5.0] :: Anaconda, Inc. on linux Type "help", "copyright", "credits" or "license" for more information. >>> from transformers import CpmTokenizer Traceback (most recent call last): File "<stdin>", line 1, in <module> ImportError: cannot import name 'CpmTokenizer' from 'transformers' (/root/anaconda3/lib/python3.9/site-packages/transformers/__init__.py) >>> ```
01-12-2022 08:42:25
01-12-2022 08:42:25
Probably `sentencepiece` is not installed. `CpmTokenizer` depends on `sentencepiece`, so make sure you have `sentencepiece` installed. You could install it using: `pip install sentencepiece`<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread. Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored.
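To make the suggested fix concrete, here is a small illustrative check, assuming the failing import is indeed caused only by the missing `sentencepiece` dependency:

```python
import importlib.util

if importlib.util.find_spec("sentencepiece") is None:
    raise RuntimeError("sentencepiece is missing - install it with `pip install sentencepiece`")

from transformers import CpmTokenizer  # should now resolve instead of raising ImportError
```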
transformers
15,117
closed
Reason for returning dequantized (fp.32) value at every layer of I-BERT.
In I-BERT (`src/transformers/models/ibert/quant_modules.py#L114`), what is the reason for returning the dequantized (fp32) value (`quantized_input * scaling_factor`) from every quantized layer, as seen below?

```python
class QuantAct(nn.Module):
    ...
    return quant_act_int * correct_output_scale, self.act_scaling_factor

class QuantLinear(nn.Module):
    ...
    return (
        nn.functional.linear(x_int, weight=self.weight_integer, bias=self.bias_integer) * bias_scaling_factor,
        bias_scaling_factor,
    )
```

Why can't we simply return `quant_act_int` or `nn.functional.linear(x_int, weight=self.weight_integer, bias=self.bias_integer)` for the next layers?

https://github.com/huggingface/transformers/blob/1a00863e95655c6914202c2dbf3b091dfb3f04c1/src/transformers/models/ibert/quant_modules.py#L114
01-12-2022 08:05:19
01-12-2022 08:05:19
This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread. Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored.
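Since the question above went stale without an answer, a brief note on the general pattern it describes: returning `int * scale` is how simulated ("fake") quantization is usually implemented - the graph stays in fp32, but every value lies on an integer grid, and passing the scale along lets the next layer (or an integer-only deployment kernel) recover `x_int = x_deq / scale` exactly. The sketch below is a generic illustration of that pattern, not I-BERT's actual code.

```python
import torch

def fake_quantize(x: torch.Tensor, num_bits: int = 8):
    # Symmetric simulated quantization: compute the integer grid and the scale,
    # then return the fp32 "dequantized" tensor together with its scale.
    qmax = 2 ** (num_bits - 1) - 1
    scale = x.abs().max().clamp(min=1e-8) / qmax
    x_int = torch.round(x / scale).clamp(-qmax - 1, qmax)
    return x_int * scale, scale  # the next layer can recover x_int by dividing by scale

x_deq, scale = fake_quantize(torch.randn(4, 8))
print(torch.allclose(x_deq / scale, torch.round(x_deq / scale)))  # True: values sit on an integer grid
```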
transformers
15,116
closed
Fix QA trainer to properly log eval metrics
# What does this PR do? This PR updates the custom `QuestionAnsweringTrainer` in the examples. Previously, the trainer used `self.log()`, which does not add a prefix `"eval"`. This was a problem especially when reporting metrics to WanDB. Without a prefix, WanDB assumes the metrics come from training data, not evaluation data. This problem can be solved by simply changing `self.log(metrics)` to `self.log_metrics("eval", metrics)` in both `evaluate()` methods. <!-- Congratulations! You've made it this far! You're not quite done yet though. Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution. Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change. Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost. -->
01-12-2022 07:48:08
01-12-2022 07:48:08
The prefix is added in the lines above.<|||||>Oh, I guess I used the older version of the QA trainer... I feel sorry for bothering you and not fully checking the code. Actually, it was my first public PR... Anyway, I really appreciate your reply and your efforts to create and maintain this library. <|||||>No worries! Let us know if the bug persists with the last version.
transformers
15,115
closed
OOM error on Pretraining Albert with batch size 8
## Environment info <!-- You can run the command `transformers-cli env` and copy-and-paste its output below. Don't forget to fill out the missing fields in that output! --> - `transformers` version: 4.15.0 - Platform: Ubuntu 18.1 - Python version: 3.7 - PyTorch version (GPU?): 1.6+cuda 10.1 - Tensorflow version (GPU?): None - Using GPU in script?: yes - Using distributed or parallel set-up in script?: yes ### Who can help Models: - ALBERT @LysandreJik Library: The problem arises when using: The tasks I am working on is: Masked Language Modelling (using AlbertForMaskedLM) Training Albert model from scratch python run_mlm.py --model_type albert --num_train_epochs 300 --train_file /home/kushwanth/write_chunks/sample.txt --validation_file /home/kushwanth/write_chunks/sample.txt --tokenizer_name albert --do_train=yes --output_dir=/home/kushwanth/model --per_device_train_batch_size 8 --per_device_eval_batch_size 8 --save_steps 3000 --logging_steps 500 --report_to tensorboard --preprocessing_num_workers 10 2>&1 | tee /home/kushwanth/log_1.txt GPU spects: V-100 Number of GPUs 8 with 16GB RAM each Trying to allocate batch size of 8 and we are getting Out of memory error and GPU utilisation is 50% on avg Model Config: "attention_probs_dropout_prob": 0, "bos_token_id": 2, "classifier_dropout_prob": 0.1, "embedding_size": 128, "eos_token_id": 3, "hidden_act": "gelu_new", "hidden_dropout_prob": 0, "hidden_size": 768, "initializer_range": 0.02, "inner_group_num": 1, "intermediate_size": 3072, "layer_norm_eps": 1e-12, "max_position_embeddings": 512, "model_type": "albert", "num_attention_heads": 12, "num_hidden_groups": 1, "num_hidden_layers": 12, "pad_token_id": 0, "position_embedding_type": "absolute", "torch_dtype": "float32", "transformers_version": "4.15.0", "type_vocab_size": 1, "vocab_size": 40000 ``` File "run_mlm.py", line 442, in main train_result = trainer.train(resume_from_checkpoint=checkpoint) File "/home/kushwanth/anaconda3/envs/py37/lib/python3.7/site-packages/transformers/trainer.py", line 1332, in train tr_loss_step = self.training_step(model, inputs) File "/home/kushwanth/anaconda3/envs/py37/lib/python3.7/site-packages/transformers/trainer.py", line 1891, in training_step loss = self.compute_loss(model, inputs) File "/home/kushwanth/anaconda3/envs/py37/lib/python3.7/site-packages/transformers/trainer.py", line 1923, in compute_loss outputs = model(**inputs) File "/home/kushwanth/anaconda3/envs/py37/lib/python3.7/site-packages/torch/nn/modules/module.py", line 722, in _call_impl result = self.forward(*input, **kwargs) File "/home/kushwanth/anaconda3/envs/py37/lib/python3.7/site-packages/torch/nn/parallel/data_parallel.py", line 156, in forward return self.gather(outputs, self.output_device) File "/home/kushwanth/anaconda3/envs/py37/lib/python3.7/site-packages/torch/nn/parallel/data_parallel.py", line 168, in gather return gather(outputs, output_device, dim=self.dim) File "/home/kushwanth/anaconda3/envs/py37/lib/python3.7/site-packages/torch/nn/parallel/scatter_gather.py", line 68, in gather res = gather_map(outputs) File "/home/kushwanth/anaconda3/envs/py37/lib/python3.7/site-packages/torch/nn/parallel/scatter_gather.py", line 62, in gather_m ap for k in out)) File "<string>", line 7, in __init__ File "/home/kushwanth/anaconda3/envs/py37/lib/python3.7/site-packages/transformers/file_utils.py", line 2294, in __post_init__ for element in iterator: File "/home/kushwanth/anaconda3/envs/py37/lib/python3.7/site-packages/torch/nn/parallel/scatter_gather.py", line 62, in <genexpr> for k in 
out)) File "/home/kushwanth/anaconda3/envs/py37/lib/python3.7/site-packages/torch/nn/parallel/scatter_gather.py", line 55, in gather_m ap return Gather.apply(target_device, dim, *outputs) File "/home/kushwanth/anaconda3/envs/py37/lib/python3.7/site-packages/torch/nn/parallel/_functions.py", line 68, in forward return comm.gather(inputs, ctx.dim, ctx.target_device) File "/home/kushwanth/anaconda3/envs/py37/lib/python3.7/site-packages/torch/cuda/comm.py", line 166, in gather return torch._C._gather(tensors, dim, destination) RuntimeError: CUDA out of memory. Tried to allocate 4.88 GiB (GPU 0; 15.78 GiB total capacity; 7.63 GiB already allocated; 2.08 Gi B free; 12.50 GiB reserved in total by PyTorch) ```
01-12-2022 07:13:55
01-12-2022 07:13:55
Have you read the following guide on performance? https://huggingface.co/docs/transformers/master/performance cc @sgugger @stas00 <|||||>since you have multiple GPUs most likely you want to use a proper scalability solution like [Deepspeed](https://huggingface.co/docs/transformers/master/main_classes/deepspeed#deepspeed-trainer-integration) and your OOM will be no more. Hope this helps to overcome your difficulty In general most OOM questions ideally belong to https://discuss.huggingface.co/ where you can ask your fellow users to help you with the task of tuning up your configuration to fit your hardware, since usually this task has nothing to do with the `transformers`' support, as OOM is not a bug in `transformers` (most of the time). <|||||>Running through the example for roberta-base I also see OOM when using an older 12GB Titan card. 12GB is sort of an average sized card so you might want to at least add a note to the example docs to add `--per_device_train_batch_size 4` when using smaller GPUs. That param is not part of the example script. You have to find it in the base TrainingArguments and that might not be obvious for a new user checking out the posted example.<|||||>That's an excellent suggestion, @bjascob - do you mean to the README.md files under https://github.com/huggingface/transformers/tree/master/examples/pytorch or elsewhere? And of course if it resonates with you we love receiving PRs that improve user's experience - be it code or docs. But if not please tell me where it's missing and I will add it there.<|||||>Specifically I was looking at the README at https://github.com/huggingface/transformers/tree/master/examples/pytorch/language-modeling though I suppose many of these examples may give OOM with the default batch size of 8 and smaller GPU. I don't have a specific recommendation, although in the README for each example is probably the easiest for people to find it. This is probably not worth a PR on its own but if someone is updating examples, it would be good to consider adding this.<|||||>Thank you for clarifying. a PR would be perfect for that. If we wait for someone to update the examples it'll be just forgotten so best to act while it's hot.<|||||>FYI, @patrickvonplaten addressed this issue here https://github.com/huggingface/transformers/pull/15596 <|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread. Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored.
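As a compact illustration of the memory-saving options pointed to in this thread (smaller per-device batch, gradient accumulation, gradient checkpointing, mixed precision), here is a hedged sketch of the relevant `TrainingArguments`; the values are illustrative, not a tested recipe for this particular ALBERT setup.

```python
from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir="albert-mlm-out",
    per_device_train_batch_size=4,   # smaller per-GPU batch than the failing 8
    gradient_accumulation_steps=4,   # keeps the effective batch size at 16 per device
    gradient_checkpointing=True,     # trades extra compute for activation memory
    fp16=True,                       # mixed precision reduces activation memory
)
```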
transformers
15,114
closed
Trying to train the TFWav2Vec2ForCTC model
## Environment info <!-- You can run the command `transformers-cli env` and copy-and-paste its output below. Don't forget to fill out the missing fields in that output! --> - `transformers` version: 4.15.0 - Platform: Colab ### Who can help: @patrickvonplaten @anton-l <!-- Your issue will be replied to more quickly if you can figure out the right person to tag with @patrickvonplaten @anton-l If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**. Please tag fewer than 3 people. Models: - ALBERT, BERT, XLM, DeBERTa, DeBERTa-v2, ELECTRA, MobileBert, SqueezeBert: @LysandreJik - T5, BART, Marian, Pegasus, EncoderDecoder: @patrickvonplaten - Blenderbot, MBART: @patil-suraj - Longformer, Reformer, TransfoXL, XLNet, FNet, BigBird: @patrickvonplaten - FSMT: @stas00 - Funnel: @sgugger - GPT-2, GPT: @patrickvonplaten, @LysandreJik - RAG, DPR: @patrickvonplaten, @lhoestq - TensorFlow: @Rocketknight1 - JAX/Flax: @patil-suraj - TAPAS, LayoutLM, LayoutLMv2, LUKE, ViT, BEiT, DEiT, DETR, CANINE: @NielsRogge - GPT-Neo, GPT-J, CLIP: @patil-suraj - Wav2Vec2, HuBERT, SpeechEncoderDecoder, UniSpeech, UniSpeechSAT, SEW, SEW-D, Speech2Text: @patrickvonplaten, @anton-l If the model isn't in the list, ping @LysandreJik who will redirect you to the correct contributor. Library: - Benchmarks: @patrickvonplaten - Deepspeed: @stas00 - Ray/raytune: @richardliaw, @amogkam - Text generation: @patrickvonplaten @narsil - Tokenizers: @SaulLu - Trainer: @sgugger - Pipelines: @Narsil - Speech: @patrickvonplaten, @anton-l - Vision: @NielsRogge, @sgugger Documentation: @sgugger Model hub: - for issues with a model, report at https://discuss.huggingface.co/ and tag the model's creator. HF projects: - datasets: [different repo](https://github.com/huggingface/datasets) - rust tokenizers: [different repo](https://github.com/huggingface/tokenizers) Examples: - maintained examples (not research project or legacy): @sgugger, @patil-suraj For research projetcs, please ping the contributor directly. For example, on the following projects: - research_projects/bert-loses-patience: @JetRunner - research_projects/distillation: @VictorSanh --> ## Information Model I am using Wav2Vec2 on TensorFlow: We are trying to use the TFWav2Vec2ForCTC. We can make the prediction but can not train the model. For this we create a random dataset just to text the fitting and give an error. 
This is the following code: ``` import tensorflow as tf from transformers import Wav2Vec2Processor, TFWav2Vec2ForCTC processor = Wav2Vec2Processor.from_pretrained("facebook/wav2vec2-base-960h") model = TFWav2Vec2ForCTC.from_pretrained("facebook/wav2vec2-base-960h") # parameters AUDIO_MAXLEN = 246000 LABEL_MAXLEN = 256 BATCH_SIZE = 1 VOCAB_SIZE = 32 LEARNING_RATE = 5e-5 def CTCLoss(y_true, y_pred): # Compute the training-time loss value batch_len = tf.cast(tf.shape(y_true)[0], dtype="int64") input_length = tf.cast(tf.shape(y_pred)[1], dtype="int64") label_length = tf.cast(tf.shape(y_true)[1], dtype="int64") input_length = input_length * tf.ones(shape=(batch_len, 1), dtype="int64") label_length = label_length * tf.ones(shape=(batch_len, 1), dtype="int64") loss = tf.keras.backend.ctc_batch_cost(y_true, y_pred, input_length, label_length) return loss loss_fn = CTCLoss optimizer = tf.keras.optimizers.Adam(LEARNING_RATE) model.compile(optimizer, loss=loss_fn) def create_random_dataset(): def gen(): yield ( np.random.random((1, AUDIO_MAXLEN)), np.random.randint(0, VOCAB_SIZE, LABEL_MAXLEN) ) dataset = tf.data.Dataset.from_generator( gen, output_types=(tf.float32, tf.int32), output_shapes=((1, AUDIO_MAXLEN), (LABEL_MAXLEN, )) ) return dataset train_dataset = create_random_dataset() valid_dataset = create_random_dataset() ``` The error arises when we try to ```.fit``` the model in the TF architecture: command: ``` model.fit(train_dataset, validation_data=valid_dataset, epochs=1, verbose=2, batch_size=BATCH_SIZE) ``` ``` OperatorNotAllowedInGraphError Traceback (most recent call last) <ipython-input-17-2c9e72985c37> in <module> ----> 1 model.fit(train_dataset, validation_data=valid_dataset, epochs=1, verbose=2, batch_size=BATCH_SIZE) OperatorNotAllowedInGraphError: in user code: File "/home/jovyan/.local/lib/python3.8/site-packages/keras/engine/training.py", line 878, in train_function * return step_function(self, iterator) File "/home/jovyan/.local/lib/python3.8/site-packages/keras/engine/training.py", line 867, in step_function ** outputs = model.distribute_strategy.run(run_step, args=(data,)) File "/home/jovyan/.local/lib/python3.8/site-packages/keras/engine/training.py", line 860, in run_step ** outputs = model.train_step(data) File "/home/jovyan/.local/lib/python3.8/site-packages/transformers/modeling_tf_utils.py", line 889, in train_step y_pred = self(x, training=True) File "/home/jovyan/.local/lib/python3.8/site-packages/keras/utils/traceback_utils.py", line 67, in error_handler raise e.with_traceback(filtered_tb) from None OperatorNotAllowedInGraphError: Exception encountered when calling layer "tf_wav2_vec2_for_ctc" (type TFWav2Vec2ForCTC). in user code: File "/home/jovyan/.local/lib/python3.8/site-packages/transformers/models/wav2vec2/modeling_tf_wav2vec2.py", line 1557, in call * outputs = self.wav2vec2( File "/home/jovyan/.local/lib/python3.8/site-packages/keras/utils/traceback_utils.py", line 67, in error_handler ** raise e.with_traceback(filtered_tb) from None OperatorNotAllowedInGraphError: Exception encountered when calling layer "wav2vec2" (type TFWav2Vec2MainLayer). 
in user code: File "/home/jovyan/.local/lib/python3.8/site-packages/transformers/models/wav2vec2/modeling_tf_wav2vec2.py", line 1228, in call * hidden_states = self._mask_hidden_states(hidden_states, mask_time_indices=mask_time_indices) File "/home/jovyan/.local/lib/python3.8/site-packages/transformers/models/wav2vec2/modeling_tf_wav2vec2.py", line 1159, in _mask_hidden_states * mask_time_indices = _compute_mask_indices( File "/home/jovyan/.local/lib/python3.8/site-packages/transformers/models/wav2vec2/modeling_tf_wav2vec2.py", line 231, in _compute_mask_indices * num_masked_spans = max(num_masked_spans, min_masks) OperatorNotAllowedInGraphError: using a `tf.Tensor` as a Python `bool` is not allowed: AutoGraph did convert this function. This might indicate you are trying to use an unsupported feature. Call arguments received: • input_values=tf.Tensor(shape=(1, 246000), dtype=float32) • attention_mask=None • token_type_ids=None • position_ids=None • head_mask=None • inputs_embeds=None • output_attentions=False • output_hidden_states=False • return_dict=True • training=True • kwargs=<class 'inspect._empty'> ``` ## To reproduce the error: https://colab.research.google.com/drive/10locy1XqKF4hlkJ2uCchAtxQ4oAjz4nH?usp=sharing <!-- If you have code snippets, error messages, stack traces please provide them here as well. Important! Use code tags to correctly format your code. See https://help.github.com/en/github/writing-on-github/creating-and-highlighting-code-blocks#syntax-highlighting Do not use screenshots, as they are hard to read and (more importantly) don't allow others to copy-and-paste your code.--> ## Expected behavior I would like to find a way asap to finetune a TensorFlow model of wav2vec2, any recommendation would be great. <!-- A clear and concise description of what you would expect to happen. -->
01-11-2022 21:11:43
01-11-2022 21:11:43
Interesting! cc @anton-l and TF squad {@Rocketknight1, @gante} <|||||>Taking a look<|||||>After some exploration, here are some findings: @lucasagrizzi, thank you for the provided script, it was very helpful to debug 🙏 There were a few minor issues in the original script -- the generated random labels were not batched and with `CTCLoss` the maximum label has to be `VOCAB_SIZE-2` (see [this](https://github.com/MaybeShewill-CV/CRNN_Tensorflow/issues/69#issuecomment-383992527) answer). After making the changes above, I run into the (unresolved) issue originally found [here](https://github.com/huggingface/transformers/issues/15059), with a slightly modified error: ``` Traceback (most recent call last): File "test.py", line 57, in <module> model.fit(train_dataset, validation_data=valid_dataset, epochs=1, verbose=2, batch_size=BATCH_SIZE) File "/home/joao_huggingface_co/hf/lib/python3.8/site-packages/keras/utils/traceback_utils.py", line 67, in error_handler raise e.with_traceback(filtered_tb) from None File "/home/joao_huggingface_co/transformers/src/transformers/modeling_tf_utils.py", line 892, in train_step self.optimizer.minimize(loss, self.trainable_variables, tape=tape) tensorflow.python.framework.errors_impl.InvalidArgumentError: Computed input depth 768 doesn't match filter input depth 48 [Op:Conv2DBackpropInput] ``` I will take a look at this issue, and will let you know of updates. For reference, here is the modified test script: <details> <summary>Modified test script</summary> <pre> ```python import numpy as np import tensorflow as tf from transformers import Wav2Vec2Processor, TFWav2Vec2ForCTC processor = Wav2Vec2Processor.from_pretrained("facebook/wav2vec2-base-960h") model = TFWav2Vec2ForCTC.from_pretrained("facebook/wav2vec2-base-960h") # parameters AUDIO_MAXLEN = 246000 LABEL_MAXLEN = 256 BATCH_SIZE = 1 VOCAB_SIZE = 32 LEARNING_RATE = 5e-5 def CTCLoss(y_true, y_pred): # Compute the training-time loss value batch_len = tf.cast(tf.shape(y_true)[0], dtype="int64") input_length = tf.cast(tf.shape(y_pred)[1], dtype="int64") label_length = tf.cast(tf.shape(y_true)[1], dtype="int64") input_length = input_length * tf.ones(shape=(batch_len, 1), dtype="int64") label_length = label_length * tf.ones(shape=(batch_len, 1), dtype="int64") loss = tf.keras.backend.ctc_batch_cost(y_true, y_pred, input_length, label_length) return loss loss_fn = CTCLoss optimizer = tf.keras.optimizers.Adam(LEARNING_RATE) model.compile(optimizer, loss=loss_fn, run_eagerly=True) def create_random_dataset(): def gen(): yield ( np.random.random((1, AUDIO_MAXLEN)), np.random.randint(0, VOCAB_SIZE-1, (1, LABEL_MAXLEN)) ) dataset = tf.data.Dataset.from_generator( gen, output_types=(tf.float32, tf.int32), output_shapes=((1, AUDIO_MAXLEN), (1, LABEL_MAXLEN)) ) return dataset train_dataset = create_random_dataset() valid_dataset = create_random_dataset() model.fit(train_dataset, validation_data=valid_dataset, epochs=1, verbose=2, batch_size=BATCH_SIZE) ``` </pre> </details> <|||||>Hi everyone I've been having similar issues trying to fine tune the TFWav2Vec2ForCTC. I'm also working on Colab, on Transformers version 4.15.0 I'm currently able to run the modified test script supplied by @gante without problems. However, I want to use the model's internal loss computation, in order to correctly handle padded inputs and labels in the CTC calculation. Further modifying the previous script as a dummy example, I get the following error when trying that approach. Is there something I'm missing? 
Sorry for the troubles and thank you in advance. ``` All model checkpoint layers were used when initializing TFWav2Vec2ForCTC. All the layers of TFWav2Vec2ForCTC were initialized from the model checkpoint at facebook/wav2vec2-base-960h. If your task is similar to the task the model of the checkpoint was trained on, you can already use TFWav2Vec2ForCTC for predictions without further training. No loss specified in compile() - the model's internal loss computation will be used as the loss. Don't panic - this is a common way to train TensorFlow models in Transformers! Please ensure your labels are passed as the 'labels' key of the input dict so that they are accessible to the model during the forward pass. To disable this behaviour, please pass a loss argument, or explicitly pass loss=None if you do not want your model to compute a loss. --------------------------------------------------------------------------- ValueError Traceback (most recent call last) <ipython-input-1-03dc269f6ab7> in <module>() 51 valid_dataset = create_random_dataset() 52 ---> 53 model.fit(train_dataset, validation_data=valid_dataset, epochs=1, verbose=2, batch_size=BATCH_SIZE) 1 frames /usr/local/lib/python3.7/dist-packages/keras/utils/traceback_utils.py in error_handler(*args, **kwargs) 65 except Exception as e: # pylint: disable=broad-except 66 filtered_tb = _process_traceback_frames(e.__traceback__) ---> 67 raise e.with_traceback(filtered_tb) from None 68 finally: 69 del filtered_tb /usr/local/lib/python3.7/dist-packages/transformers/modeling_tf_utils.py in train_step(self, data) 890 loss = self.compiled_loss(y, y_pred, sample_weight, regularization_losses=self.losses) 891 # Run backwards pass. --> 892 self.optimizer.minimize(loss, self.trainable_variables, tape=tape) 893 # When y_pred is a ModelOutput and y is a tf.Tensor the metrics update 894 # should be done only with the relevant ModelOutput param that is ValueError: No gradients provided for any variable: (['tf_wav2_vec2_for_ctc/wav2vec2/masked_spec_embed:0', 'tf_wav2_vec2_for_ctc/wav2vec2/feature_extractor/conv_layers.0/conv/kernel:0', 'tf_wav2_vec2_for_ctc/wav2vec2/feature_extractor/conv_layers.0/layer_norm/gamma:0', 'tf_wav2_vec2_for_ctc/wav2vec2/feature_extractor/conv_layers.0/layer_norm/beta:0', 'tf_wav2_vec2_for_ctc/wav2vec2/feature_extractor/conv_layers.1/conv/kernel:0', 'tf_wav2_vec2_for_ctc/wav2vec2/feature_extractor/conv_layers.2/conv/kernel:0', 'tf_wav2_vec2_for_ctc/wav2vec2/feature_extractor/conv_layers.3/conv/kernel:0', 'tf_wav2_vec2_for_ctc/wav2vec2/feature_extractor/conv_layers.4/conv/kernel:0', 'tf_wav2_vec2_for_ctc/wav2vec2/feature_extractor/conv_layers.5/conv/kernel:0', 'tf_wav2_vec2_for_ctc/wav2vec2/feature_extractor/conv_layers.6/conv/kernel:0', 'tf_wav2_vec2_for_ctc/wav2vec2/feature_projection/layer_norm/gamma:0', 'tf_wav2_vec2_for_ctc/wav2vec2/feature_projection/layer_norm/beta:0', 'tf_wav2_vec2_for_ctc/wav2vec2/feature_projection/projection/kernel:0', 'tf_wav2_vec2_for_ctc/wav2vec2/feature_projection/projection/bias:0', 'tf_wav2_vec2_for_ctc/wav2vec2/encoder/pos_conv_embed/conv/weight_v:0', 'tf_wav2_vec2_for_ctc/wav2vec2/encoder/pos_conv_embed/conv/weight_g:0', 'tf_wav2_vec2_for_ctc/wav2vec2/encoder/pos_conv_embed/conv/bias:0', 'tf_wav2_vec2_for_ctc/wav2vec2/encoder/layer_norm/gamma:0', 'tf_wav2_vec2_for_ctc/wav2vec2/encoder/layer_norm/beta:0', 'tf_wav2_vec2_for_ctc/wav2vec2/encoder/layers... 
``` This is the script I used: ```python import numpy as np import tensorflow as tf from transformers import Wav2Vec2Processor, TFWav2Vec2ForCTC processor = Wav2Vec2Processor.from_pretrained("facebook/wav2vec2-base-960h") model = TFWav2Vec2ForCTC.from_pretrained("facebook/wav2vec2-base-960h") # parameters AUDIO_MAXLEN = 246000 LABEL_MAXLEN = 256 BATCH_SIZE = 1 VOCAB_SIZE = 32 LEARNING_RATE = 5e-5 def CTCLoss(y_true, y_pred): # Compute the training-time loss value batch_len = tf.cast(tf.shape(y_true)[0], dtype="int64") input_length = tf.cast(tf.shape(y_pred)[1], dtype="int64") label_length = tf.cast(tf.shape(y_true)[1], dtype="int64") input_length = input_length * tf.ones(shape=(batch_len, 1), dtype="int64") label_length = label_length * tf.ones(shape=(batch_len, 1), dtype="int64") loss = tf.keras.backend.ctc_batch_cost(y_true, y_pred, input_length, label_length) return loss loss_fn = CTCLoss optimizer = tf.keras.optimizers.Adam(LEARNING_RATE) # model.compile(optimizer, loss=loss_fn, run_eagerly=True) model.compile(optimizer, run_eagerly=True) # No loss is passed according for using internal loss, according to docs # Modified as to have input dicts with a 'labels' key for intenal loss calculation def create_random_dataset(): def gen(): yield { 'input_values': np.random.random((1, AUDIO_MAXLEN)), 'labels': np.random.randint(0, VOCAB_SIZE-1, (1, LABEL_MAXLEN)) } dataset = tf.data.Dataset.from_generator( gen, output_types={'input_values': tf.int32, 'labels': tf.int32}, output_shapes={'input_values': (1, AUDIO_MAXLEN), 'labels': (1, LABEL_MAXLEN)} ) return dataset train_dataset = create_random_dataset() valid_dataset = create_random_dataset() model.fit(train_dataset, validation_data=valid_dataset, epochs=1, verbose=2, batch_size=BATCH_SIZE) ```<|||||>Update: the script I shared above works on GPU, but fails on CPU due to upstream problems in Keras -- TF does not support backpropagation of grouped convolutions on CPU. (@lucasagrizzi) The PR mentioned above adds an informative error message for these situations.<|||||>@dmurillo976s the script I shared (and yours) is missing a `tf.nn.softmax()` in the logits. I.e. the loss function should return ` loss = tf.keras.backend.ctc_batch_cost(y_true, tf.nn.softmax(y_pred), input_length, label_length)`<|||||>@dmurillo976s I am still facing this problem when I try to use fit() with the internal loss. Any leads or do you know how to fix it? Any help would be much appreciated! @gante any help on how to use the internal CTC loss with the fit() method? Thank You!<|||||>@Sreyan88 sorry, it's been a while since I tested this. Unfortunately I couldn't get it working at the time, and continued using the pytorch implementation. I believe the main problem was that a dummy loss function was being used at the training step, instead of the internal loss of the model. But again, that was just my assessment at the time. The best thing to do now should be to open a new issue. <|||||>Hey @Sreyan88 👋 We have very recently pushed a bug change about how we process loss functions, can you try it out with `tranformers==4.20.0.dev0` (i.e. from the `main` branch)?<|||||>Hi @gante , Thank You for your reply. On tranformers==4.20.0.dev0 I get: ``` UnknownError: Exception encountered when calling layer "conv" (type Conv1D). Failed to get convolution algorithm. This is probably because cuDNN failed to initialize, so try looking to see if a warning log message was printed above. 
[Op:Conv2D] Call arguments received: • inputs=tf.Tensor(shape=(1, 16000, 1), dtype=float32) ``` just by calling `model = TFWav2Vec2ForCTC.from_pretrained(model_checkpoint,apply_spec_augment=False, from_pt = True)` I use` tensorflow==2.7.0`, the same as I was using for `transformers==4.19.2`.<|||||>@Sreyan88 that's because TFWav2Vec2 does not work without a GPU, due to an upstream TF limitation -- see https://github.com/huggingface/transformers/pull/15612 With a GPU, I was able to run the command you shared without errors :)<|||||>Hi @gante , Thank you so much for your reply, it was not a GPU but a cudnn problem, I am now trying to train the model with internal loss but I get a negative ctc loss: ``` Epoch 1/5 28/3859 [..............................] - ETA: 47:03 - loss: -0.5141 ``` ​by `next(iter(train))` looks like this: ``` (<tf.Tensor: shape=(2, 88480), dtype=float32, numpy= array([[ 0.06280634, 0.07749736, 0.06458706, ..., -0.17759225, -0.18961217, -0.19361882], [-0.01641779, 0.01576709, 0.03577391, ..., 0. , 0. , 0. ]], dtype=float32)>, <tf.Tensor: shape=(2, 78), dtype=int64, numpy= array([[ 27, 11, 10, 23, 5, 23, 27, 20, 13, 5, 0, 20, 1, 10, 25, 24, 2, 19, 2, 24, 5, 12, 15, 5, 10, 2, 19, 14, 11, 1, 23, 5, 4, 20, 8, 5, 23, 27, 11, 23, 5, 23, 27, 2, 19, 2, 5, 25, 10, 5, 1, 20, 1, 2, 5, 17, 25, 9, 2, 5, 27, 25, 12, 5, 13, 22, 20, 1, 5, 23, 27, 2, 5, 2, 11, 19, 23, 27], [ 11, 1, 24, 5, 3, 27, 25, 1, 5, 25, 5, 11, 10, 9, 2, 24, 5, 27, 25, 12, 5, 3, 27, 11, 23, 5, 27, 2, 5, 3, 11, 10, 5, 11, 8, 20, 13, 23, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100]])>) ```<|||||>@Sreyan88 could you please share a script (or a colab) with your issue? :)<|||||>@gante , here is a bit dirty version of the notebook: https://colab.research.google.com/drive/1HXOdDhaIWcLF_4xF-zKZ_gYRf-sMfHkL?usp=sharing My main issue is I think I am passing valid inputs but the loss output is negative which is not possible in case of CTC.<|||||>I changed my dataset into a dictionary and the negative CTC losses persist: ``` {'input_values': <tf.Tensor: shape=(2, 154480), dtype=float32, numpy= array([[ 0.01052614, 0.02730851, 0.02783296, ..., 0.01052614, 0.0089528 , 0.00056161], [-0.00101706, -0.00101706, -0.00101706, ..., 0. , 0. , 0. 
]], dtype=float32)>, 'labels': <tf.Tensor: shape=(2, 122), dtype=int64, numpy= array([[ 10, 25, 19, 5, 24, 13, 24, 17, 2, 15, 5, 24, 25, 26, 26, 2, 10, 5, 10, 25, 19, 5, 4, 20, 27, 1, 5, 2, 17, 17, 25, 20, 23, 5, 10, 25, 19, 5, 23, 27, 20, 12, 11, 10, 5, 3, 2, 1, 23, 3, 20, 19, 23, 27, 5, 12, 25, 10, 23, 2, 19, 5, 10, 2, 17, 24, 2, 1, 5, 11, 1, 24, 5, 12, 25, 10, 23, 2, 19, 5, 22, 15, 12, 5, 11, 1, 25, 12, 11, 23, 2, 24, 5, 3, 25, 23, 27, 5, 11, 5, 3, 11, 19, 12, 5, 19, 2, 26, 11, 19, 24, 5, 23, 20, 5, 17, 25, 8, 2, 19, 23, 15], [ 23, 27, 2, 5, 25, 24, 2, 11, 5, 20, 7, 5, 2, 14, 2, 19, 15, 5, 12, 20, 24, 2, 5, 25, 1, 5, 3, 27, 25, 0, 27, 5, 23, 27, 2, 5, 27, 13, 12, 11, 1, 5, 8, 20, 24, 15, 5, 25, 10, 5, 11, 7, 7, 2, 0, 23, 2, 24, 5, 8, 15, 5, 2, 6, 23, 2, 19, 1, 11, 17, 5, 8, 20, 24, 25, 2, 10, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100]])>} ```<|||||>@gante , When I change the above dictionary key from `labels` to `label` and it throws me this error: ``` Epoch 1/5 --------------------------------------------------------------------------- TypeError Traceback (most recent call last) /tmp/ipykernel_33163/3396866883.py in <module> 3 tf.compat.v1.logging.set_verbosity(tf.compat.v1.logging.ERROR) 4 ----> 5 model.fit(train, validation_data = validation, epochs=5) ~/anaconda3/envs/gsoc-2/lib/python3.7/site-packages/keras/utils/traceback_utils.py in error_handler(*args, **kwargs) 65 except Exception as e: # pylint: disable=broad-except 66 filtered_tb = _process_traceback_frames(e.__traceback__) ---> 67 raise e.with_traceback(filtered_tb) from None 68 finally: 69 del filtered_tb ~/anaconda3/envs/gsoc-2/lib/python3.7/site-packages/transformers/modeling_tf_utils.py in train_step(self, data) 1058 1059 # Run backwards pass. -> 1060 self.optimizer.minimize(loss, self.trainable_variables, tape=tape) 1061 1062 self.compiled_metrics.update_state(y, y_pred, sample_weight) TypeError: Argument `target` should be a list or nested structure of Tensors, Variables or CompositeTensors to be differentiated, but recieved None. ``` Can you please help me with where I am going wrong?<|||||>Hi @gante , I think this is a bug. When my input dataset looks like the above, no labels are passing through the model forward pass, it is `None` when I printed it. Can you please look into it?<|||||>Thank you for sharing a script @Sreyan88 :D I'm reopening the issue to track it<|||||>Hi @gante , Can adding `@unpack_inputs` solve the problem? 
Solved the problem for me, which lead me to another error: ``` InvalidArgumentError Traceback (most recent call last) /tmp/ipykernel_33658/3396866883.py in <module> 3 tf.compat.v1.logging.set_verbosity(tf.compat.v1.logging.ERROR) 4 ----> 5 model.fit(train, validation_data = validation, epochs=5) ~/anaconda3/envs/gsoc-2/lib/python3.7/site-packages/keras/utils/traceback_utils.py in error_handler(*args, **kwargs) 65 except Exception as e: # pylint: disable=broad-except 66 filtered_tb = _process_traceback_frames(e.__traceback__) ---> 67 raise e.with_traceback(filtered_tb) from None 68 finally: 69 del filtered_tb ~/anaconda3/envs/gsoc-2/lib/python3.7/site-packages/transformers/modeling_tf_utils.py in train_step(self, data) 1024 1025 if self._using_dummy_loss: -> 1026 loss = self.compiled_loss(y_pred.loss, y_pred.loss, sample_weight, regularization_losses=self.losses) 1027 else: 1028 loss = None InvalidArgumentError: slice index 0 of dimension 0 out of bounds. [Op:StridedSlice] name: strided_slice/ ``` which I figure can be fixed with `loss = tf.reshape(loss, (1,))` after CTC loss calculation. Please let me know if these sound good.<|||||>Hi @gante , Hoping you are doing good! Do we have an update on this fix? Do you want me to create a pull request since I have tested with the above fixes and it works perfectly for me!<|||||>Hi @Sreyan88 👋 I do not have an update for this -- it's on my queue, but the queue is large. If you have a fix then yes, please open a PR 🙏 It would be greatly appreciated!<|||||>@gante you can close this as it was fixed in PR 18014.<|||||>Awesome! Thank you very much for your contribution @Sreyan88 <3
transformers
15,113
closed
Add TFSpeech2Text
# What does this PR do? This PR adds a TF port of Speech2Text. A summary of the changes: - This model borrows a lot of code from TFBart, just like Speech2Text borrowed from Bart; - Tried to follow the changes in other PRs to enable smooth interoperation with other parts of transformers (e.g. auto classes), might be missing a few things 👼 ; - This seems to be the first TF model with speech as input, so I had to touch common TF code to enable correct data piping and misc operations (e.g. enable loading Conv1D PT weights into TF); - Likewise, there were a few tests in `test_modelling_tf_common.py` that didn't quite fit this new kind of model. TODO: - [x] create TF version of the weights, so we can load a model without `from_pt=True`
01-11-2022 20:57:12
01-11-2022 20:57:12
Tagging @sgugger as a core dev, @patil-suraj as a core dev + original creator of our `Speech2Text`, and @Rocketknight1 as the TensorFlow boy. Feel free to redirect the reviews if you know of better people to review. Some pipeline tests are failing almost surely due to the changes in `generation_tf_utils.py`, as some models expect `(encoder_outputs, past)` in the `past` variable and others don't -- having a look, but open to suggestions.<|||||>Cloned your branch and did some experimentation and LGTM! I noticed one issue - `model.save()` seems to encounter problems, but I'm not totally sure of why, or whether it's limited to this model or not. `save_pretrained` and `save_weights` worked correctly, and saving in `SavedModel` format has always been a bit shaky for us, so this isn't critical.<|||||>Pending the results of automated tests, this should be the last planned commit. We already have `facebook/s2t-small-librispeech-asr` as a TF model, and I will upload the TF version of the others today. @patrickvonplaten -- One important final change that I'd like to ask for a double-check is the removal of the positional embeddings weights from the `nn.Parameter()` wrapper, in the PT model. It is a constant that was not being saved nor loaded, and was causing issues in the `pt_tf` tests (the TF model had no such variable, and technically it is not a parameter). To check the changes, I've run: ``` RUN_SLOW=1 pytest tests/test_modeling_tf_vision_encoder_decoder.py RUN_SLOW=1 pytest tests/test_modeling_tf_bart.py RUN_SLOW=1 pytest tests/test_modeling_tf_t5.py RUN_SLOW=1 pytest tests/test_modeling_tf_rag.py RUN_SLOW=1 pytest tests/test_modeling_speech_to_text_2.py RUN_PT_TF_CROSS_TESTS=1 RUN_SLOW=1 pytest tests/test_modeling_speech_to_text.py RUN_PT_TF_CROSS_TESTS=1 RUN_SLOW=1 pytest tests/test_modeling_tf_speech_to_text.py ``` EDIT: TF models uploaded.<|||||>@patrickvonplaten reverted the previous change and added the embedding weights as named variables in TF
transformers
15,112
closed
How to efficiently tokenize unknown tokens in GPT2
I am trying to train a dialog system using GPT2. For tokenization, I am using the following configuration to add the special tokens.

```python
from transformers import (
    AdamW,
    AutoConfig,
    AutoTokenizer,
    PreTrainedModel,
    PreTrainedTokenizer,
    get_linear_schedule_with_warmup,
)

SPECIAL_TOKENS = {
    "bos_token": "<|endoftext|>",
    "eos_token": "<|endoftext|>",
    "pad_token": "[PAD]",
    "additional_special_tokens": ["[SYS]", "[USR]", "[KG]", "[SUB]", "[PRED]", "[OBJ]", "[TRIPLE]", "[SEP]", "[Q]", "[DOM]"],
}

tokenizer = AutoTokenizer.from_pretrained(args.model_name_or_path)
tokenizer.add_special_tokens(SPECIAL_TOKENS)
```

Next, when I tokenize a sequence (a dialog utterance) and later convert it into ids, some of the special tokens in my sequence are mapped to unknown tokens: their ids become the same as bos and eos, since they all map to <|endoftext|> as in GPT2's [source code][1]. Here is a working example:

```
tokenized_sequence = ['[PRED]', 'name', '[SUB]', 'frankie_and_bennys', '[PRED]', 'address', '[SUB]', 'cambridge_leisure_park_clifton_way_cherry_hinton', '[PRED]', 'area', '[SUB]', 'south', '[PRED]', 'food', '[SUB]', 'italian', '[PRED]', 'phone', '[SUB]', '01223_412430', '[PRED]', 'pricerange', '[SUB]', 'expensive', '[PRED]', 'postcode', '[SUB]', 'cb17dy']

special_tokens = ['frankie_and_bennys', 'cambridge_leisure_park_clifton_way_cherry_hinton', 'italian', 'postcode', 'cb17dy']

tokens_to_ids = [50262, 3672, 50261, 50256, 50262, 21975, 50261, 50256, 50262, 20337, 50261, 35782, 50262, 19425, 50261, 50256, 50262, 4862, 50261, 50256, 50262, 50256, 50261, 22031, 50262, 50256, 50261, 50256]

ids_to_tokens = [PRED]name[SUB]<|endoftext|>[PRED]address[SUB]<|endoftext|>[PRED]area[SUB]south[PRED]food[SUB]<|endoftext|>[PRED]phone[SUB]<|endoftext|>[PRED]<|endoftext|>[SUB]expensive[PRED]<|endoftext|>[SUB]<|endoftext|>
```

As you can see, the special_tokens are mapped to the id 50256 (that is, to <|endoftext|>), so the model never sees or learns these important tokens and hence generates very poor and often hallucinated responses. What could be a quick and efficient fix for this issue?

Note - I have a large set of such special tokens in my corpus.

[1]: https://github.com/huggingface/transformers/blob/v4.15.0/src/transformers/models/gpt2/tokenization_gpt2.py#L104
01-11-2022 19:56:45
01-11-2022 19:56:45
Hi, @soumya-ranjan-sahoo This is a more general question, so would be awesome if you could ask this on the [forum ](https://discuss.huggingface.co/). We use issues for bug reports or feature requests. Thanks!<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread. Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored.
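Since the thread was redirected to the forum, a possible remedy for the behaviour described in the question (hedged sketch, not a confirmed fix from this issue): register the domain strings as added tokens and resize the embedding matrix, so that `convert_tokens_to_ids` no longer falls back to the unk id.

```python
from transformers import AutoTokenizer, AutoModelForCausalLM

tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")

domain_tokens = ["frankie_and_bennys", "cb17dy"]  # values taken from the example in the issue
tokenizer.add_tokens(domain_tokens)
model.resize_token_embeddings(len(tokenizer))

print(tokenizer.convert_tokens_to_ids(domain_tokens))  # distinct ids instead of 50256
```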
transformers
15,111
closed
Update TF test_step to match train_step
null
01-11-2022 17:36:43
01-11-2022 17:36:43
transformers
15,110
closed
Build dev doc
null
01-11-2022 17:33:03
01-11-2022 17:33:03
This should be nearly good to go, only need to check that on close it doesn't actually regenerate the modifications of the PR. Will take a stab at that tomorrow morning cc @sgugger @mishig25 <|||||>Will only need to change the target of the `git add` directive and it should be good to go<|||||>As said in private, this won't work for branches that have a different origin than `transformers`, since those won't get access to the secrets for security reasons.
transformers
15,109
closed
Why is Marian to Torch converter hardcoded for tied vocab ?
I see the following condition: https://github.com/huggingface/transformers/blob/16f0b7d72c6d4e122957392c342b074aa2c5c519/src/transformers/models/marian/convert_marian_to_pytorch.py#L462 While training my Marian model, I do not want to tie my source and target embeddings. How do I convert such a model? (This is a very common thing in NMT) I see that in `MarianConfig` itself, this is not supported: https://github.com/huggingface/transformers/blob/16f0b7d72c6d4e122957392c342b074aa2c5c519/src/transformers/models/marian/configuration_marian.py#L46-L49 Can this be considered a **feature request** to make it generic? --- Also, why is the `hidden-dim` required to be `512` in the converter? https://github.com/huggingface/transformers/blob/16f0b7d72c6d4e122957392c342b074aa2c5c519/src/transformers/models/marian/convert_marian_to_pytorch.py#L478 What if I train transformer-big models?
01-11-2022 17:25:22
01-11-2022 17:25:22
I understand that this was created only to add support for [baseline models released from Tatoeba Challenge](https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models). But it would be great if we can generalize it. Thanks!<|||||>cc @patil-suraj <|||||>Hi @sshleifer Just saw your comment on this thread: https://github.com/marian-nmt/marian-dev/issues/756#issuecomment-724738421 , so probably felt you can help. Can you please let me know if any thoughts on the above issue? Thanks!<|||||>Hi @jorgtied Is there anyway we can convert Marian models (to HF) that are trained with `--tied-embeddings-all=false` and `--tied-embeddings-src=false` ? For Tatoeba challenge models, I see that you are first creating SPMs specific to src and tgt langs, tokenizing the datasets, and finally concatenating the vocabs using `marian-vocab` so that the model can be trained using a shared vocab. Have you tried with different src and tgt vocabs to convert to PyTorch? Thanks!<|||||>No, I haven't tried that yet and I agree that it would be great to also support separate vocabs in conversion. Why hidden-size and dim_emb is hard-coded to 512 I also don't really understand. Let's see if people at HF can help to answer those questions ...<|||||>hi @GokulNC , @jorgtied > why is the hidden-dim required to be 512 in the converter? Not sure why it was done this way, but yes we can generalize it. > I agree that it would be great to also support separate vocabs in conversion. It should be possible to add this. Are there any officially released checkpoints with separate vocabs? <|||||>OK - nice. Can the condition about dimensionality simply be taken away? Or does that impact anything else? About a release with 2 separate vocabs: We could use this one as a test case (English-Korean): https://object.pouta.csc.fi/Tatoeba-MT-models/eng-kor/opusTCv20210807+bt-2021-11-10.zip It has 2 separate vocab files for source and target. One minor complication, the vocabs here are stored as plain text lists of vocab items instead of using a yaml file. But it would be straightforward to yamlify it and I could add those as well if needed. The items are simply numbered in the same order they appear.<|||||>> Can the condition about dimensionality simply be taken away? Or does that impact anything else? We can simply remove it. > It has 2 separate vocab files for source and target. So the model does share the embeddings between encoder and decoder?<|||||>I thought that they were not but now looking at the model they are actually tied. I didn't know that this is possible with two vocabs and then I don't really know what happens internally. I need to check that again and, in that case, maybe this is just another test case of a model to be converted (but not really the one I was thinking of ...)<|||||>I have uploaded another model that has separate vocabs and no tied source/target embeddings: https://object.pouta.csc.fi/Tatoeba-MT-models/fin-eng/opusTCv20210807+nopar+ft95-sepvoc_transformer-align_2022-01-28.zip<|||||>> I thought that they were not but now looking at the model they are actually tied. if they are tied that means they use shared vocab, right? > I have uploaded another model that has separate vocabs and no tied source/target embeddings: Awesome! I will use this for the tests. One more question: For this model, are the decoder(target) embeddings tied with the `lm_head` or not? 
<|||||>The eng-kor model was trained with marian parameters ``` [2021-11-03 16:34:05] [config] tied-embeddings: false [2021-11-03 16:34:05] [config] tied-embeddings-all: true [2021-11-03 16:34:05] [config] tied-embeddings-src: false ``` and the fin-eng model is trained with ``` [2022-01-23 02:10:50] [config] tied-embeddings: true [2022-01-23 02:10:50] [config] tied-embeddings-all: false [2022-01-23 02:10:50] [config] tied-embeddings-src: false ``` Both of them are provided with separate vocab files but it could be that the vocabs are concatenated in the eng-kor case as the embeddings are tied (but I don't know). What it says about the optons in marian (sorry, it's a bit black-box for me): ``` --tied-embeddings Tie target embeddings and output embeddings in output layer --tied-embeddings-src Tie source and target embeddings --tied-embeddings-all Tie all embedding layers and output layer ```<|||||>Another unrelated question: I happen to have models that have different activation functions in ffn (relu) and aan (swish). The conversion script now checks that they are equal. Could that also be relaxed? ... and also different dimensions in aan and ffn ....<|||||>>Both of them are provided with separate vocab files but it could be that the vocabs are concatenated in the eng-kor case as the embeddings are tied (but I don't know) My guess is also that for eng-kor, vocabs are concatenated since `tied-embeddings-all` is `True` which ties src, target and output embeddings. > I happen to have models that have different activation functions in ffn (relu) and aan (swish). The conversion script now checks that they are equal. Could that also be relaxed? ... and also different dimensions in aan and ffn Yes! Could you share the checkpoint? I will use that for test and make the necessary changes in the modeling file to support this :) <|||||>Here you go: https://object.pouta.csc.fi/Tatoeba-MT-models/fin-eng/opusTCv20210807+bt-2021-12-08.zip<|||||>Thank you!<|||||>One more issue when converting the HF-Marian model to the corresponding HF Tensorflow class (not sure if it is relevant here). After [converting a Marian model to HF (Torch)](https://github.com/huggingface/transformers/blob/16f0b7d72c6d4e122957392c342b074aa2c5c519/src/transformers/models/marian/convert_marian_to_pytorch.py), this works fine: ```py model = MarianMTModel.from_pretrained(MODEL_DIR) ``` But this does not work: ```py model = TFMarianMTModel.from_pretrained(MODEL_DIR, from_pt=True) ``` It says: ``` Some weights of the PyTorch model were not used when initializing the TF 2.0 model TFMarianMTModel: ['lm_head.weight'] - This IS expected if you are initializing TFMarianMTModel from a PyTorch model trained on another task or with another architecture (e.g. initializing a TFBertForSequenceClassification model from a BertForPreTraining model). - This IS NOT expected if you are initializing TFMarianMTModel from a PyTorch model that you expect to be exactly identical (e.g. initializing a TFBertForSequenceClassification model from a BertForSequenceClassification model). Some weights or buffers of the TF 2.0 model TFMarianMTModel were not initialized from the PyTorch model and are newly initialized: ['model.encoder.embed_positions.weight', 'model.decoder.embed_positions.weight'] You should probably TRAIN this model on a down-stream task to be able to use it for predictions and inference. ``` Can you please check if the conversion works for you? 
---
However, I don't face this issue for the already available models on HF, like:

```py
model = TFMarianMTModel.from_pretrained('Helsinki-NLP/opus-mt-en-zh', from_pt=True)
```

---
OK, it is probably downloading an already uploaded old TF checkpoint by HF (even though I am passing `from_pt=True`). This throws the same logs as reported above, hence the issue is reproducible:

```py
model = MarianMTModel.from_pretrained('Helsinki-NLP/opus-mt-en-zh')
model.save_pretrained("tmp")
del model
model = TFMarianMTModel.from_pretrained("tmp", from_pt=True)  # Same errors
```

---
**NEVERMIND, WE CAN JUST IGNORE THOSE WARNINGS.** It works using TF.<|||||>Also, conversion of the HF Marian model to TorchScript does not work. Sample code:

```py
class MarianMTGenerator(torch.nn.Module):
    def __init__(self, model):
        super().__init__()
        self.model = model.eval()

    def forward(self, input_ids, attention_mask):
        return self.model.generate(input_ids=input_ids, attention_mask=attention_mask)

model = MarianMTModel.from_pretrained(MODEL_DIR, torchscript=True)
generator = MarianMTGenerator(model)
torchscript_model = torch.jit.script(generator)
```

The errors were because of type-checking issues encountered by the TorchScript compiler in [`modeling_marian.py`](https://github.com/huggingface/transformers/blob/7732d0f/src/transformers/models/marian/modeling_marian.py). I tried fixing a few things, but I was unable to get past a certain point. Can you please check this too? Thanks!

---
BTW, although converting to TorchScript in tracing mode works, it unrolls the decoding loop for a fixed number of iterations (conditioned on the example input passed), and hence does not work for larger input sizes at runtime. Sample code:

```py
inputs = tokenizer(["Testing"], return_tensors="pt", padding=True)

# Max pad
batch_size, seq_length = inputs['input_ids'].shape
input_ids_padding = torch.full((batch_size, model.config.max_length-seq_length), tokenizer.pad_token_id, dtype=torch.int64)
inputs['input_ids'] = torch.cat([inputs['input_ids'], input_ids_padding], dim=1)
attention_mask_padding = torch.zeros((batch_size, model.config.max_length-seq_length), dtype=torch.int64)
inputs['attention_mask'] = torch.cat([inputs['attention_mask'], attention_mask_padding], dim=1)

torchscript_model = torch.jit.trace(generator, [inputs['input_ids'], inputs['attention_mask']])
```

Although one can pass a very long text covering the maximum encoder sequence length and ensure that the decoder loop is unrolled for a very large number of iterations, this is very inefficient at inference time. Hence, for auto-regressive models, I think it might be best to use `jit.script` mode. Please let me know if you have any other thoughts. Thanks!<|||||>Hey @jorgtied @GokulNC We just merged https://github.com/huggingface/transformers/pull/15752, which now allows converting models that don't share embeddings between encoder and decoder and use separate vocabs. The conversion script is also updated. For models with two vocab files it looks for two yaml files which should be named like `*src.vocab.yml`, `*trg.vocab.yml`. The conditions for the dimensionality check are also relaxed. Let me know if you try it and notice any issues. Thanks!<|||||>@GokulNC does your torchscript example using

```
torchscript_model = torch.jit.script(generator)
```

work for you now?
I tried it and it gave me some errors.<|||||>No Matthew, this is still unresolved: https://github.com/huggingface/transformers/issues/15109#issuecomment-1040233897 Perhaps you can raise a separate issue for that. This issue was more about getting different src & tgt vocabs supported in HF's MarianModel.<|||||>@GokulNC thank you.
transformers
15,108
closed
Support custom StoppingCriteria in model.generate
# 🚀 Feature request

<!-- A clear and concise description of the feature proposal. Please provide a link to the paper and code in case they exist. -->

It could be helpful to support custom StoppingCriteria in `model.generate`. Currently, `model.generate` as implemented in generation_utils.py ([permalink](https://github.com/huggingface/transformers/blob/68d925195ede14826dd0e83258b64c2222133988/src/transformers/generation_utils.py#L747)) does not support supplying a custom StoppingCriteria in its function arguments. We could add a `stopping_criteria` argument to the function and use the supplied criteria to control the generation progress.

## Motivation

<!-- Please outline the motivation for the proposal. Is your feature request related to a problem? e.g., I'm always frustrated when [...]. If this is related to another GitHub issue, please link here too. -->

I am an NLP researcher and recently I have been using GPT-J provided by transformers to follow the Codex research conducted by OpenAI (Codex [paper](https://arxiv.org/abs/2107.03374)), where I found it useful to supply a custom StoppingCriteria when generating source code (stop when encountering a new function/class definition).

## Your contribution

<!-- Is there any way that you could help, e.g. by submitting a PR? Make sure to read the CONTRIBUTING.MD readme: https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md -->

This may be as simple as appending the user-supplied criteria [here](https://github.com/huggingface/transformers/blob/68d925195ede14826dd0e83258b64c2222133988/src/transformers/generation_utils.py#L1109).
01-11-2022 16:41:21
01-11-2022 16:41:21
Hi @Linyxus , passing custom criteria is now supported, see #14779<|||||>Thank you! :D
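For illustration, a minimal sketch of a custom criterion passed through the `stopping_criteria` argument of `generate`. The stop substrings and the use of `gpt2` as a small stand-in for GPT-J are assumptions made for this example; the criterion itself is a toy one that re-decodes the sequence at every step.

```python
from transformers import (AutoModelForCausalLM, AutoTokenizer,
                          StoppingCriteria, StoppingCriteriaList)

class StopOnSubstring(StoppingCriteria):
    """Toy criterion: stop once the decoded sequence contains one of the substrings."""

    def __init__(self, tokenizer, substrings):
        self.tokenizer = tokenizer
        self.substrings = substrings

    def __call__(self, input_ids, scores, **kwargs) -> bool:
        text = self.tokenizer.decode(input_ids[0], skip_special_tokens=True)
        return any(s in text for s in self.substrings)

tokenizer = AutoTokenizer.from_pretrained("gpt2")          # small stand-in for GPT-J
model = AutoModelForCausalLM.from_pretrained("gpt2")

# stop before generating a new function or class definition
criteria = StoppingCriteriaList([StopOnSubstring(tokenizer, ["\ndef ", "\nclass "])])

inputs = tokenizer("def add(a, b):", return_tensors="pt")
out = model.generate(**inputs, max_new_tokens=128, stopping_criteria=criteria)
print(tokenizer.decode(out[0]))
```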
transformers
15,107
closed
Weird evaluation result when using distributed training
## Environment info <!-- You can run the command `transformers-cli env` and copy-and-paste its output below. Don't forget to fill out the missing fields in that output! --> - `transformers` version: 4.15.0 - Platform: Linux-4.15.0-20-generic-x86_64-with-debian-buster-sid - Python version: 3.7.11 - PyTorch version (GPU?): 1.10.0+cu113 (True) - Tensorflow version (GPU?): not installed (NA) - Flax version (CPU?/GPU?/TPU?): not installed (NA) - Jax version: not installed - JaxLib version: not installed - Using GPU in script?: - Using distributed or parallel set-up in script?: ### Who can help <!-- Your issue will be replied to more quickly if you can figure out the right person to tag with @ If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**. Models: - ALBERT, BERT, XLM, DeBERTa, DeBERTa-v2, ELECTRA, MobileBert, SqueezeBert: @LysandreJik - T5, BART, Marian, Pegasus, EncoderDecoder: @patrickvonplaten - Blenderbot, MBART: @patil-suraj - Longformer, Reformer, TransfoXL, XLNet, FNet, BigBird: @patrickvonplaten - FSMT: @stas00 - Funnel: @sgugger - GPT-2, GPT: @patrickvonplaten, @LysandreJik - RAG, DPR: @patrickvonplaten, @lhoestq - TensorFlow: @Rocketknight1 - JAX/Flax: @patil-suraj - TAPAS, LayoutLM, LayoutLMv2, LUKE, ViT, BEiT, DEiT, DETR, CANINE: @NielsRogge - GPT-Neo, GPT-J, CLIP: @patil-suraj - Wav2Vec2, HuBERT, SpeechEncoderDecoder, UniSpeech, UniSpeechSAT, SEW, SEW-D, Speech2Text: @patrickvonplaten, @anton-l If the model isn't in the list, ping @LysandreJik who will redirect you to the correct contributor. Library: - Benchmarks: @patrickvonplaten - Deepspeed: @stas00 - Ray/raytune: @richardliaw, @amogkam - Text generation: @patrickvonplaten @narsil - Tokenizers: @SaulLu - Trainer: @sgugger - Pipelines: @Narsil - Speech: @patrickvonplaten, @anton-l - Vision: @NielsRogge, @sgugger Documentation: @sgugger Model hub: - for issues with a model, report at https://discuss.huggingface.co/ and tag the model's creator. HF projects: - datasets: [different repo](https://github.com/huggingface/datasets) - rust tokenizers: [different repo](https://github.com/huggingface/tokenizers) Examples: - maintained examples (not research project or legacy): @sgugger, @patil-suraj For research projetcs, please ping the contributor directly. For example, on the following projects: - research_projects/bert-loses-patience: @JetRunner - research_projects/distillation: @VictorSanh --> @patrickvonplaten, @anton-l ## Information Model I am using (Bert, XLNet ...): The problem arises when using: * [x] the official example scripts: (give details below) * [ ] my own modified scripts: (give details below) The tasks I am working on is: * [ ] an official GLUE/SQUaD task: (give the name) * [x] my own task or dataset: (give details below) ## To reproduce Steps to reproduce the behavior: I use the script [run_speech_recognition_ctc.py](https://github.com/huggingface/transformers/blob/68d925195ede14826dd0e83258b64c2222133988/examples/pytorch/speech-recognition/run_speech_recognition_ctc.py) with my custom dataset but have the same format as the Commonvoice official. In two cases single GPU and multi GPUs, the training loss is fine but the evaluation result is very weird. After training the model with multi GPUs, I take the last checkpoint to evaluate. The result WER is totally good. So I thought when combining evaluation results from multi GPUs had something wrong. 1. Trained with multi GPUs. 
<img width="1282" alt="Screen Shot 2022-01-11 at 17 21 03" src="https://user-images.githubusercontent.com/8703196/148981486-5306a582-8c40-4d3e-b1c1-aa3e31f52a19.png"> 2. Trained with single GPU (in progress) <img width="1252" alt="Screen Shot 2022-01-11 at 17 32 07" src="https://user-images.githubusercontent.com/8703196/148982838-1b8b1568-d475-43c3-ae65-8a6667dddb58.png"> <!-- If you have code snippets, error messages, stack traces please provide them here as well. Important! Use code tags to correctly format your code. See https://help.github.com/en/github/writing-on-github/creating-and-highlighting-code-blocks#syntax-highlighting Do not use screenshots, as they are hard to read and (more importantly) don't allow others to copy-and-paste your code.--> ## Expected behavior <!-- A clear and concise description of what you would expect to happen. -->
01-11-2022 16:40:13
01-11-2022 16:40:13
Hey @nguyenvulebinh, Could you please share the bash command that you used to start your training? Also how many gpus do you use? 2?<|||||>BTW, we'll announce an event tomorrow where we'll teach you how to train Wav2Vec2 - see: https://github.com/huggingface/transformers/tree/master/examples/research_projects/xls_r. Watch out for the sign-up form if you would like to train wav2vec2 models with us :-)<|||||>Here is the command I used to train the model on 2 GPUs. The arguments of the python script I put into the python file. ```bash CUDA_VISIBLE_DEVICES=2,4 python -m torch.distributed.launch --nproc_per_node 2 run_speech_recognition_ctc.py ``` ```python # "--dataset_name", "common_voice", # "--dataset_config_name", "vi", "--data_processing_cache_folder", "./data-bin/processed/cache", "--preprocessing_num_workers", "30", "--model_name_or_path", "./model-bin/wav2vec_pretrained/large/", "--output_dir", "./wav2vec2-large-vlsp2020", "--logging_dir", "./wav2vec2-large-vlsp2020/log", "--logging_steps", "100", "--overwrite_output_dir", "--num_train_epochs", "50", "--per_device_train_batch_size", "48", "--gradient_accumulation_steps", "1", "--learning_rate", "1e-4", "--warmup_ratio", "{}".format(1/20), "--evaluation_strategy", "steps", "--text_column_name", "sentence", "--save_steps", "5000", "--eval_steps", "2500", "--warmup_steps", "5000", "--layerdrop", "0.1", "--hidden_dropout", "0.3", "--save_total_limit", "3", "--freeze_feature_encoder", "--delay_epoch_finetune_wav2vec", "1", "--gradient_checkpointing", "--fp16", #"--preprocessing_only", "--metric_for_best_model", "wer", "--greater_is_better", "False", "--group_by_length", "--length_column_name", "input_length", "--dataloader_num_workers", "10", "--do_train", "--do_eval", "--ignore_data_skip" ```<|||||>I've never seen `delay_epoch_finetune_wav2vec` before - are you using a custom loop?<|||||>Yes, it is a little custom for freezing the wav2vec layer for one epoch before fine-tuning all layers. I do that thing by using a Callback. I don't think it is the problem.<|||||>Hmm, this makes it very difficult to guess possible errors here though if it's a costum loop. Could you maybe try to ask help on the forum: https://discuss.huggingface.co/ instead?<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread. Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored.
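For context, a callback along the lines described above could look roughly like the sketch below. This is a hypothetical reconstruction, not the actual `delay_epoch_finetune_wav2vec` implementation, and it assumes a `Wav2Vec2ForCTC` model whose encoder lives under `model.wav2vec2`.

```python
from transformers import TrainerCallback

class DelayedFinetuneCallback(TrainerCallback):
    """Keep the wav2vec2 encoder frozen during the first epoch, then unfreeze it."""

    def on_epoch_begin(self, args, state, control, model=None, **kwargs):
        freeze = state.epoch is not None and state.epoch < 1
        for param in model.wav2vec2.parameters():
            param.requires_grad = not freeze

# trainer = Trainer(..., callbacks=[DelayedFinetuneCallback()])
```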
transformers
15,106
closed
Add "open in hf spaces" gradio button issue #73
# What does this PR do? Closes https://github.com/huggingface/doc-builder/issues/73 <!-- Congratulations! You've made it this far! You're not quite done yet though. Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution. Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change. Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost. --> <!-- Remove if not applicable --> adding Huggingface Spaces links + badges to model summary docs ## Before submitting - [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case). - [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section? - [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link to it if that's the case. - [ ] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/master/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/master/docs#writing-source-documentation). - [ ] Did you write any new necessary tests? ## Who can review? Anyone in the community is free to review the PR once the tests have passed. Feel free to tag members/contributors who may be interested in your PR. <!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @ If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**. Please tag fewer than 3 people. Models: - albert, bert, xlm: @LysandreJik - blenderbot, bart, marian, pegasus, encoderdecoder, t5: @patrickvonplaten, @patil-suraj - longformer, reformer, transfoxl, xlnet: @patrickvonplaten - fsmt: @stas00 - funnel: @sgugger - gpt2: @patrickvonplaten, @LysandreJik - rag: @patrickvonplaten, @lhoestq - tensorflow: @LysandreJik Library: - benchmarks: @patrickvonplaten - deepspeed: @stas00 - ray/raytune: @richardliaw, @amogkam - text generation: @patrickvonplaten - tokenizers: @n1t0, @LysandreJik - trainer: @sgugger - pipelines: @LysandreJik Documentation: @sgugger HF projects: - datasets: [different repo](https://github.com/huggingface/datasets) - rust tokenizers: [different repo](https://github.com/huggingface/tokenizers) Examples: - maintained examples (not research project or legacy): @sgugger, @patil-suraj - research_projects/bert-loses-patience: @JetRunner - research_projects/distillation: @VictorSanh -->
01-11-2022 16:14:45
01-11-2022 16:14:45
@AK391 feel free to merge the PR<|||||>@mishig25 thanks merged
transformers
15,105
closed
Doc styler tip
# What does this PR do? This PR makes sure the doc styler considers the start and end of Tip blocks as paragraph-breakers. For instance ```py """ <Tip> Short tip with no new line </Tip> """ ``` is currently restyled on one line (which may or may not work with the frontend). With this PR, it's restyled like this: ``` """ <Tip> Short tip with no new line </Tip> """ ``` I can remove the part that adds new lines if you think it's too much.
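For illustration only (this is not the doc-styler's actual code), the idea of treating the Tip delimiters as paragraph breaks can be sketched as follows:

```python
import re

def split_paragraphs(docstring: str):
    """Split on blank lines, and additionally treat <Tip> / </Tip> markers as
    paragraph boundaries so the tip content is re-wrapped on its own."""
    parts = []
    for block in re.split(r"\n\s*\n", docstring):
        # a capturing group in re.split keeps the delimiters themselves
        parts.extend(p.strip() for p in re.split(r"(</?Tip>)", block) if p.strip())
    return parts

doc = '"""\n<Tip>\n\nShort tip with no new line\n\n</Tip>\n"""'
print(split_paragraphs(doc))
```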
01-11-2022 15:35:10
01-11-2022 15:35:10
transformers
15,104
closed
Fix failing W2V2 test
This PR fixes a failing test due to the two lists not being ordered. We're checking the content of the list, not the order.
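As a tiny illustration of that kind of order-insensitive check (not the actual test code):

```python
import unittest

class ListContentTest(unittest.TestCase):
    def test_same_content_any_order(self):
        expected = ["feature_extractor", "tokenizer"]
        actual = ["tokenizer", "feature_extractor"]
        # compares contents regardless of ordering
        self.assertCountEqual(expected, actual)

if __name__ == "__main__":
    unittest.main()
```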
01-11-2022 13:24:17
01-11-2022 13:24:17
Thank you!
transformers
15,103
closed
Add PerceiverForTokenClassification
# What does this PR do? You can also use the Perceiver to do named-entity recognition! Training for 3 epochs on CoNLL 2003 gives me the following result (using the official example script): ``` ***** eval metrics ***** epoch = 3.0 eval_LOC_f1 = 0.8895 eval_LOC_number = 11884 eval_LOC_precision = 0.8934 eval_LOC_recall = 0.8856 eval_MISC_f1 = 0.8058 eval_MISC_number = 6078 eval_MISC_precision = 0.8182 eval_MISC_recall = 0.7938 eval_ORG_f1 = 0.8215 eval_ORG_number = 8869 eval_ORG_precision = 0.8187 eval_ORG_recall = 0.8243 eval_PER_f1 = 0.8477 eval_PER_number = 10479 eval_PER_precision = 0.8598 eval_PER_recall = 0.8359 eval_loss = 0.1479 eval_overall_accuracy = 0.9594 eval_overall_f1 = 0.848 eval_overall_precision = 0.8539 eval_overall_recall = 0.8421 eval_runtime = 0:01:25.54 eval_samples = 3250 eval_samples_per_second = 37.99 eval_steps_per_second = 4.757 ``` This PR includes the tweaked example script to make it work for the Perceiver. I can remove it if required, or move it to research-projects directory. Fixes #14971
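For reference, inference with the class added in this PR would look roughly like the sketch below. Note that `PerceiverForTokenClassification` only exists on this branch (not on `master`), and `"my-perceiver-conll2003"` is a hypothetical name for a checkpoint fine-tuned with the tweaked script.

```python
import torch
from transformers import PerceiverTokenizer, PerceiverForTokenClassification

tokenizer = PerceiverTokenizer.from_pretrained("deepmind/language-perceiver")
model = PerceiverForTokenClassification.from_pretrained("my-perceiver-conll2003")

# Perceiver language models take raw byte ids through the `inputs` argument
input_ids = tokenizer("HuggingFace is based in New York City", return_tensors="pt").input_ids

with torch.no_grad():
    logits = model(inputs=input_ids).logits  # (batch_size, num_bytes, num_labels)

predicted_ids = logits.argmax(-1)
```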
01-11-2022 13:18:43
01-11-2022 13:18:43
cc'ing @Narsil as the token classification pipeline currently doesn't work with the Perceiver. Should I remove it from the Auto API?<|||||>@NielsRogge What happens when it doesn't work? Is it taking text only? Then it should be fixable, no?<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread. Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored.
transformers
15,102
closed
Print out durations of all scheduled tests
Adds an entry in all scheduled tests to print out the durations, so that we may have better visibility over the slow tests.
01-11-2022 13:06:47
01-11-2022 13:06:47
transformers
15,101
closed
Cleanup load_weight_prefix in TFEncoderDecoderModel
# What does this PR do?

## Short

In `TFEncoderDecoderModel`, there is `tf_encoder_decoder_model_1` (with the strange ending `_1`)

https://github.com/huggingface/transformers/blob/68810aa26c083fd97d976cef7ac65fdd9cc9b520/src/transformers/models/encoder_decoder/modeling_tf_encoder_decoder.py#L158

This PR changes it to `tf_encoder_decoder_model` to avoid confusion.

## Long

I originally looked at the `TFRagModel` code during a (long) process to make checkpoint loading work. While it works, it gave me the wrong impression that `_1` is necessary. As @LysandreJik and @NielsRogge both asked this question

https://github.com/huggingface/transformers/pull/13222#discussion_r725279027
https://github.com/huggingface/transformers/pull/14148#discussion_r780361966

I gave it a try and also reviewed the code

https://github.com/ydshieh/transformers/blob/e68c3756fea7c811d02b8470539ae17ec3ec0e71/src/transformers/modeling_tf_utils.py#L496

It turns out that the `_1` is not necessary, and therefore the recent `TFVisionEncoderDecoderModel` doesn't use `_1`. To avoid further confusion (and to avoid this confusion being copied into future TF composite models), I think it's better to clean it up here.

## Who can review?
01-11-2022 10:10:47
01-11-2022 10:10:47
Thank you @ydshieh for your contribution! From recent conversations with @patrickvonplaten and @Rocketknight1, I think this is actually important and cannot be removed, right? I'll let them chime in :)<|||||>I'm not aware of any reason to keep the `_1`, but then again I don't know why it was there in the first place! <|||||>> I'm not aware of any reason to keep the `_1`, but then again I don't know why it was there in the first place! Before these TF encoder-decoder models, the only place that has `load_weight_prefix` with ending `_1` is in `TFRagModel`. And when I worked on TF enc-dec models, I copied the idea from it in order to make the weight loading work correctly - and it somehow gave me wrong impression of the necessity of `_1` - and this is why it was there in the first place. <|||||>In short we can safely remove the `_1` - it won't change anything. It actually doesn't matter at how we call the first prefix of the weight names, as we always remove them later. For more context and to maybe understand a bit better how to load TF models you can do the following: 1. Download a raw `h5` file from the hub, *e.g.*: ```bash wget https://huggingface.co/ydshieh/vit-gpt2-coco-en/resolve/main/tf_model.h5 ``` 2. Now run the following code. This is essentially what we are doing in TF's `from_pretrained(...)` to retrieve the weight names: ```python import h5py from tensorflow.python.keras.saving import hdf5_format with h5py.File("./tf_model.h5", "r") as f: layer_names = set(hdf5_format.load_attributes_from_hdf5_group(f, "layer_names")) weight_names= set(hdf5_format.load_attributes_from_hdf5_group(f[layer_names.pop()], "weight_names")) print(weight_names) ``` The output looks something like this: ```bash { ..., 'tf_vision_encoder_decoder_model/encoder/vit/encoder/layer_._9/layernorm_after/beta:0', 'tf_vision_encoder_decoder_model/encoder/vit/encoder/layer_._8/attention/attention/query/bias:0'} ``` 3. Note that the very first prefix was never actually written by us. We don't call any layer or weight `tf_vision_encoder_decoder_model`, *i.e.* there is no `self.tf_vision_encoder_decoder_model = ...` statement anywhere. Those weight names are generated automatically by TF's saving method. I believe that in previous TF versions (I'm using 2.7.0 now) - TF also automatically appended a `_0` or `_1` there. 4. Since this very first prefix is not written by us, but since it'll always automatically be created by TF, we've decided to **always** remove it when loading the model - see: https://github.com/huggingface/transformers/blob/b8810847d0576e3c142854ad3b8a607ecd3df291/src/transformers/modeling_tf_utils.py#L550 5. Now composite models like `TFEncoderDecoder` are very special because we need to give them some prefix weight name in order to correctly load. It doesn't matter though really what kind of name we give it as long as there is something that can be correctly cut away by the above line of code. => As a conclusion, I think it's a good idea to remove `_1` as it can be a bit confusing for the reader. <|||||>Ok, thanks for the clarification @patrickvonplaten! <|||||>@Rocketknight1 please do merge if you're ok with the changes.<|||||>LGTM, merging now!
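As a rough illustration of the prefix handling described above (not the exact library code), the automatically generated top-level prefix is simply cut away when loading, which is why its exact spelling, with or without `_1`, does not matter:

```python
weight_name = "tf_encoder_decoder_model/encoder/bert/embeddings/word_embeddings/weight:0"

# drop the first path component, i.e. the auto-generated model prefix
stripped = "/".join(weight_name.split("/")[1:])
print(stripped)  # encoder/bert/embeddings/word_embeddings/weight:0
```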
transformers
15,100
closed
Fix cookiecutter
# What does this PR do? Super tiny fix, XLNetConfig shouldn't be in the CookieCutter template.
01-11-2022 09:43:39
01-11-2022 09:43:39
transformers
15,099
closed
change metric_key_prefix in seq2seq_trainer.py
# What does this PR do?

In the `Trainer`, the `metric_key_prefix` used by `predict` is set to "test", while in the `Seq2SeqTrainer` it is set to "eval", so the two do not report metrics under the same keys. To solve this problem, the `metric_key_prefix` for the `Seq2SeqTrainer` was changed to "test".

## Who can review?

trainer: @sgugger
01-11-2022 09:26:09
01-11-2022 09:26:09
transformers
15,098
closed
Get started docs
🧼 A clean commit of changes of the Get Started documentation (with most of the feedback applied) after I messed up PR #14807.
01-10-2022 17:56:17
01-10-2022 17:56:17
transformers
15,097
closed
GPT-J Tokenizer model_max_length=1024 despite n_positions=2048
On the hub, the `tokenizer_config.json` states that GPT-J has `"model_max_length": 1024`, despite the fact that the model can handle up to `2048` tokens, as per `"n_positions": 2048` in the config. Is this intended? Tagging due to knowledge of the original issue: @StellaAthena @EricHallahan @patrickvonplaten
01-10-2022 17:50:16
01-10-2022 17:50:16
This sounds like it's a typo. If you change it to 2048, does the model run without issue?<|||||>Yes, that's why I raised the issue 👍🏻<|||||>Hi, good catch! This is because GPT-J uses `GPT2Tokenizer`, which has `model_max_length` set to 1024. Here we could directly update the `tokenizer_config.json` file on the hub. @StellaAthena would it be okay if I update the file in the `tokenizer_config` repo with this change?<|||||>@patil-suraj I made the change :)<|||||>Thanks a lot!
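For anyone pinned to the old config, the value can also be overridden locally at load time; a quick sketch:

```python
from transformers import AutoTokenizer

# kwargs passed to from_pretrained override what tokenizer_config.json specifies
tokenizer = AutoTokenizer.from_pretrained("EleutherAI/gpt-j-6B", model_max_length=2048)
print(tokenizer.model_max_length)  # 2048
```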
transformers
15,096
closed
Add test to check reported training loss
# What does this PR do? This PR adds a test to check the reported training loss is correct, with various logging_steps values.
01-10-2022 17:34:59
01-10-2022 17:34:59
transformers
15,095
closed
Take gradient accumulation into account when defining samplers
# What does this PR do? This PR takes the gradient accumulation steps into account when defining samplers that use the batch size (like the `LengthGroupedSampler`) so that training with large batch size or training with smaller batch size and gradient accumulation (e.g. batch size 64 or batch size 8 and gradient accumulation steps of 8) yield the same results. Fixes #14638
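For illustration, these are the two setups from the issue that should now behave the same when grouping by length; a minimal sketch with only the relevant arguments shown:

```python
from transformers import TrainingArguments

# effective batch size of 64 in both cases
args_large_batch = TrainingArguments(
    output_dir="out", per_device_train_batch_size=64, group_by_length=True
)
args_accumulated = TrainingArguments(
    output_dir="out", per_device_train_batch_size=8, gradient_accumulation_steps=8, group_by_length=True
)
```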
01-10-2022 17:06:28
01-10-2022 17:06:28
transformers
15,094
closed
Happy New Year!
# What does this PR do? This PR updates the Copyright years to 2022 in the templates.
01-10-2022 16:52:42
01-10-2022 16:52:42
transformers
15,093
closed
[DOC] fix doc examples for bart-like models
# What does this PR do? Fixes example docs in Bart-like models.
01-10-2022 16:40:47
01-10-2022 16:40:47
transformers
15,092
closed
[Fix doc example] Speech2TextForConditionalGeneration
# What does this PR do? This fails for `Speech2TextForConditionalGeneration` ``` >>> generated_ids = model.generate(input_ids=input_features) ``` I changed it to `model.generate(inputs=input_features)`. ## Who can review? @patrickvonplaten
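Put together, the corrected docstring example looks roughly like this (the dummy LibriSpeech split is the one commonly used in the docs; exact preprocessing details may differ):

```python
from datasets import load_dataset
from transformers import Speech2TextProcessor, Speech2TextForConditionalGeneration

model = Speech2TextForConditionalGeneration.from_pretrained("facebook/s2t-small-librispeech-asr")
processor = Speech2TextProcessor.from_pretrained("facebook/s2t-small-librispeech-asr")

ds = load_dataset("hf-internal-testing/librispeech_asr_dummy", "clean", split="validation")
input_features = processor(ds[0]["audio"]["array"], sampling_rate=16_000, return_tensors="pt").input_features

generated_ids = model.generate(inputs=input_features)  # `inputs`, not `input_ids`
transcription = processor.batch_decode(generated_ids, skip_special_tokens=True)
```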
01-10-2022 16:27:39
01-10-2022 16:27:39
transformers
15,091
closed
Add YOSO
# What does this PR do?

This PR adds the YOSO Transformer model to the repository.

Paper: [You Only Sample (Almost) Once: Linear Cost Self-Attention Via Bernoulli Sampling](http://proceedings.mlr.press/v139/zeng21a/zeng21a.pdf)
Code: [Official code](https://github.com/mlpen/YOSO)
Checkpoints: [Uploaded to hub](https://huggingface.co/uw-madison/yoso-4096)

In this paper, the authors introduce an efficient self-attention mechanism based on Locality Sensitive Hashing. They also implemented custom CUDA kernels in their [code](https://github.com/mlpen/YOSO/blob/6000487d9cd8e34519aa47650011f458ca0db64c/encoders/backbones/efficient_attentions/yoso/yoso_v1/kernel.py#L15). I've incorporated this in the modeling file, but some code quality checks fail.

## Who can review?

@NielsRogge @patrickvonplaten
01-10-2022 16:08:51
01-10-2022 16:08:51
Hey @novice03, Thanks a lot for adding this complicated model! Looks super cool :-) Also great to see a revival of the LSH mechanism for long-range sequence modeling after Reformer. Left a couple of suggestions above. It would be great if: - We could remove the whole `use_cache` logic as the model is not able to use it - Add one integration test with long inputs so that we can be sure that the model can process long inputs on a single GPU<|||||>Thank you for your feedback! I've pushed a few commits with the appropriate changes. <|||||>Oh, I ran `doc-builder convert` on the file and assumed that it fixed everything. Turns out there were still some parts of the code that weren't converted. Should be fixed now.<|||||>Looks good for merge to me! <|||||>Thanks for all your work on this! Merging.
transformers
15,090
closed
Adding Tensorflow Perceiver Model
# What does this PR do? <!-- Congratulations! You've made it this far! You're not quite done yet though. Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution. Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change. Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost. --> <!-- Remove if not applicable --> Add tensorflow Perceiver model. There are still lots of classes and tests that should be implemented :D (**I'm working on it**) ## Before submitting - [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case). - [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section? - [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link to it if that's the case. - [ ] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/master/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/master/docs#writing-source-documentation). - [x] Did you write any new necessary tests? ## Who can review? Anyone in the community is free to review the PR once the tests have passed. Feel free to tag members/contributors who may be interested in your PR. <!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @ If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**. Please tag fewer than 3 people. Models: - tensorflow: @LysandreJik, @NielsRogge Library: - benchmarks: @patrickvonplaten - deepspeed: @stas00 - ray/raytune: @richardliaw, @amogkam - text generation: @patrickvonplaten - tokenizers: @n1t0, @LysandreJik - trainer: @sgugger - pipelines: @LysandreJik Documentation: @sgugger HF projects: - datasets: [different repo](https://github.com/huggingface/datasets) - rust tokenizers: [different repo](https://github.com/huggingface/tokenizers) Examples: - maintained examples (not research project or legacy): @sgugger, @patil-suraj - research_projects/bert-loses-patience: @JetRunner - research_projects/distillation: @VictorSanh -->
01-10-2022 15:11:15
01-10-2022 15:11:15
This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread. Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored.<|||||>Hey @elk-cloner! Can we help with anything? :)<|||||>> Hey @elk-cloner! Can we help with anything? :) Absolutely 🙏, I implemented most of the parts we needed but have some problems with tests. I'll close this PR and open a new one so you can see the changes.<|||||>@LysandreJik can you check #15778?
transformers
15,089
closed
Trainer not keeping best model checkpoint with save_total_limit=1
## Environment info <!-- You can run the command `transformers-cli env` and copy-and-paste its output below. Don't forget to fill out the missing fields in that output! --> - `transformers` version: 4.13.0 - Platform: Linux-5.4.109+-x86_64-with-glibc2.10 - Python version: 3.8.8 - PyTorch version (GPU?): 1.10.0+cu113 (True) - Tensorflow version (GPU?): not installed (NA) - Flax version (CPU?/GPU?/TPU?): not installed (NA) - Jax version: not installed - JaxLib version: not installed - Using GPU in script?: yes - Using distributed or parallel set-up in script?: no ### Who can help <!-- Your issue will be replied to more quickly if you can figure out the right person to tag with @ If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**. Please tag fewer than 3 people. Models: - ALBERT, BERT, XLM, DeBERTa, DeBERTa-v2, ELECTRA, MobileBert, SqueezeBert: @LysandreJik - T5, BART, Marian, Pegasus, EncoderDecoder: @patrickvonplaten - Blenderbot, MBART: @patil-suraj - Longformer, Reformer, TransfoXL, XLNet, FNet, BigBird: @patrickvonplaten - FSMT: @stas00 - Funnel: @sgugger - GPT-2, GPT: @patrickvonplaten, @LysandreJik - RAG, DPR: @patrickvonplaten, @lhoestq - TensorFlow: @Rocketknight1 - JAX/Flax: @patil-suraj - TAPAS, LayoutLM, LayoutLMv2, LUKE, ViT, BEiT, DEiT, DETR, CANINE: @NielsRogge - GPT-Neo, GPT-J, CLIP: @patil-suraj - Wav2Vec2, HuBERT, SpeechEncoderDecoder, UniSpeech, UniSpeechSAT, SEW, SEW-D, Speech2Text: @patrickvonplaten, @anton-l If the model isn't in the list, ping @LysandreJik who will redirect you to the correct contributor. Library: - Benchmarks: @patrickvonplaten - Deepspeed: @stas00 - Ray/raytune: @richardliaw, @amogkam - Text generation: @patrickvonplaten @narsil - Tokenizers: @SaulLu - Trainer: @sgugger - Pipelines: @Narsil - Speech: @patrickvonplaten, @anton-l - Vision: @NielsRogge, @sgugger Documentation: @sgugger Model hub: - for issues with a model, report at https://discuss.huggingface.co/ and tag the model's creator. HF projects: - datasets: [different repo](https://github.com/huggingface/datasets) - rust tokenizers: [different repo](https://github.com/huggingface/tokenizers) Examples: - maintained examples (not research project or legacy): @sgugger, @patil-suraj For research projetcs, please ping the contributor directly. For example, on the following projects: - research_projects/bert-loses-patience: @JetRunner - research_projects/distillation: @VictorSanh --> @sgugger ## Information Model I am using (Bert, XLNet ...): BERT The problem arises when using: * [ ] the official example scripts: (give details below) * [x] my own modified scripts: (give details below) The tasks I am working on is: * [ ] an official GLUE/SQUaD task: (give the name) * [x] my own task or dataset: (give details below) ## To reproduce Steps to reproduce the behavior: 1. Create a trainer with `save_total_limit=2` and `load_best_model_at_end=True` 2. Train the model After each evaluation, the trainer will save the most recent checkpoint and delete the previous one to keep the save total limit, even if the previous one was better. That is not what I expected, considering [this comment](https://discuss.huggingface.co/t/save-only-best-model-in-trainer/8442/5?u=erickrf). <!-- If you have code snippets, error messages, stack traces please provide them here as well. Important! Use code tags to correctly format your code. 
See https://help.github.com/en/github/writing-on-github/creating-and-highlighting-code-blocks#syntax-highlighting Do not use screenshots, as they are hard to read and (more importantly) don't allow others to copy-and-paste your code.--> ## Expected behavior <!-- A clear and concise description of what you would expect to happen. --> I'd expect the best model to be always kept on disk.
01-10-2022 14:42:30
01-10-2022 14:42:30
I am unsure what behavior you are seeing, but `load_best_model_at_end=True` makes sure the best model checkpoint is always kept. That means the absolute best model checkpoint, so if at step 500 you get a model worse than at step 450, and the best model checkpoint was at step 350, the Trainer will indeed delete the checkpoint at step 450 and only keep the checkpoint at step 350 for the best model.<|||||>I see it now, it was actually my fault. I forgot to provide the `metric_for_best_model` and the trainer was only considering the loss. Sorry for the misunderstanding!<|||||>No problem!<|||||>I'm still confused by this. I'm not able to use your example of `save_total_limit=2` and `load_best_model_at_end=True`, because it fails with:

```
ValueError: --load_best_model_at_end requires the save and eval strategy to match, but found
- Evaluation strategy: IntervalStrategy.NO
- Save strategy: IntervalStrategy.STEPS
```

Ideally, I'd like to save the N best checkpoints, but I can't find a way to do that. I'm on transformers 4.18.0.

UPDATE: Also tried on 4.19.0.dev0<|||||>What don't you understand in the error message? As stipulated, you need to have the same evaluation and save strategy when activating `load_best_model_at_end`.<|||||>Of course, the error message is clear. What I was confused about is how to find a combination of settings that would do what I wanted to do. Ideally, I wanted to save the N best checkpoints at a given frequency of steps (e.g., 100 steps). It's helpful, particularly as my model gets closer to converging, to be able to `scp` checkpoints out to my laptop to try in the context of my actual application. And I often do that while the model's still training on my server, so it's nice to have frequent saves where I can quickly find a recent minimal loss. Anyway, I'm now using the settings suggested here: https://discuss.huggingface.co/t/save-only-best-model-in-trainer/8442/8?u=jbmaxwell, which is fine:

```
save_total_limit = 2
save_strategy = “no”
load_best_model_at_end=False
```

Though honestly it's still quite counterintuitive, as from the settings alone it looks like this will save only the 2 most recent checkpoints, while apparently it will save the most recent and the best.<|||||>@sgugger Is it possible to update the documentation of the `save_total_limit` parameter of `Trainer`? What I think would make the documentation better is to document the following behaviour:

- What happens when `load_best_model_at_end` is True, but `save_total_limit=1`?
- In the documentation this is confusing, as `save_total_limit` is described as keeping models based on recency. Perhaps the documentation there should be updated to reflect that its behaviour changes depending on other parameters.
- What happens if you select `save_total_limit>=2` and `load_best_model_at_end` is set to True?

Maybe this will help others understand the behaviour straight from the docs. Side note: the docs are generally awesome!<|||||>@gitjoop Feel free to open a PR, it would indeed be awesome to have all of this in the doc :-)<|||||>I tried running run_mlm.py in examples with the MLflow callback, but apparently no checkpoint is logged in MLflow. I used the following config:

```
save_total_limit = 2
save_strategy = “no”
load_best_model_at_end=False
```

and set HF_MLFLOW_LOG_ARTIFACTS to 1. I can confirm it does log checkpoints, but not with the above configuration.
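For completeness, here is a configuration sketch that keeps the best checkpoint around. The step counts are placeholders, and `metric_for_best_model="wer"` assumes `compute_metrics` returns a metric with that name; for loss-based selection, `"eval_loss"` with `greater_is_better=False` would be used instead.

```python
from transformers import TrainingArguments

args = TrainingArguments(
    output_dir="out",
    evaluation_strategy="steps",   # must match the save strategy for load_best_model_at_end
    eval_steps=500,
    save_strategy="steps",
    save_steps=500,
    save_total_limit=2,            # keeps the most recent checkpoint plus the best one
    load_best_model_at_end=True,
    metric_for_best_model="wer",
    greater_is_better=False,
)
```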
transformers
15,088
closed
Change assignee for tokenizers
Changes the assignee for tokenizers bug reports.
01-10-2022 14:21:23
01-10-2022 14:21:23
transformers
15,087
closed
optimum | ModuleNotFoundError: No module named 'optimum.intel.lpot'
Kernel: `conda_pytorch_p36` . --- Installations: ``` pip install optimum ``` OR ``` ! pip install datasets transformers optimum[intel] ``` Both provide same Traceback: ``` Requirement already satisfied: optimum in /home/ec2-user/anaconda3/envs/pytorch_p36/lib/python3.6/site-packages (0.1.3) Requirement already satisfied: transformers>=4.12.0 in /home/ec2-user/anaconda3/envs/pytorch_p36/lib/python3.6/site-packages (from optimum) (4.15.0) Requirement already satisfied: coloredlogs in /home/ec2-user/anaconda3/envs/pytorch_p36/lib/python3.6/site-packages (from optimum) (15.0.1) Requirement already satisfied: torch>=1.9 in /home/ec2-user/anaconda3/envs/pytorch_p36/lib/python3.6/site-packages (from optimum) (1.10.1) Requirement already satisfied: sympy in /home/ec2-user/anaconda3/envs/pytorch_p36/lib/python3.6/site-packages (from optimum) (1.8) Requirement already satisfied: typing-extensions in /home/ec2-user/anaconda3/envs/pytorch_p36/lib/python3.6/site-packages (from torch>=1.9->optimum) (3.10.0.0) Requirement already satisfied: dataclasses in /home/ec2-user/anaconda3/envs/pytorch_p36/lib/python3.6/site-packages (from torch>=1.9->optimum) (0.8) Requirement already satisfied: numpy>=1.17 in /home/ec2-user/anaconda3/envs/pytorch_p36/lib/python3.6/site-packages (from transformers>=4.12.0->optimum) (1.19.5) Requirement already satisfied: packaging>=20.0 in /home/ec2-user/anaconda3/envs/pytorch_p36/lib/python3.6/site-packages (from transformers>=4.12.0->optimum) (21.3) Requirement already satisfied: pyyaml>=5.1 in /home/ec2-user/anaconda3/envs/pytorch_p36/lib/python3.6/site-packages (from transformers>=4.12.0->optimum) (5.4.1) Requirement already satisfied: sacremoses in /home/ec2-user/anaconda3/envs/pytorch_p36/lib/python3.6/site-packages (from transformers>=4.12.0->optimum) (0.0.46) Requirement already satisfied: tqdm>=4.27 in /home/ec2-user/anaconda3/envs/pytorch_p36/lib/python3.6/site-packages (from transformers>=4.12.0->optimum) (4.62.3) Requirement already satisfied: regex!=2019.12.17 in /home/ec2-user/anaconda3/envs/pytorch_p36/lib/python3.6/site-packages (from transformers>=4.12.0->optimum) (2021.4.4) Requirement already satisfied: requests in /home/ec2-user/anaconda3/envs/pytorch_p36/lib/python3.6/site-packages (from transformers>=4.12.0->optimum) (2.25.1) Requirement already satisfied: huggingface-hub<1.0,>=0.1.0 in /home/ec2-user/anaconda3/envs/pytorch_p36/lib/python3.6/site-packages (from transformers>=4.12.0->optimum) (0.2.1) Requirement already satisfied: tokenizers<0.11,>=0.10.1 in /home/ec2-user/anaconda3/envs/pytorch_p36/lib/python3.6/site-packages (from transformers>=4.12.0->optimum) (0.10.3) Requirement already satisfied: importlib-metadata in /home/ec2-user/anaconda3/envs/pytorch_p36/lib/python3.6/site-packages (from transformers>=4.12.0->optimum) (4.5.0) Requirement already satisfied: filelock in /home/ec2-user/anaconda3/envs/pytorch_p36/lib/python3.6/site-packages (from transformers>=4.12.0->optimum) (3.0.12) Requirement already satisfied: humanfriendly>=9.1 in /home/ec2-user/anaconda3/envs/pytorch_p36/lib/python3.6/site-packages (from coloredlogs->optimum) (10.0) Requirement already satisfied: mpmath>=0.19 in /home/ec2-user/anaconda3/envs/pytorch_p36/lib/python3.6/site-packages (from sympy->optimum) (1.2.1) Requirement already satisfied: pyparsing!=3.0.5,>=2.0.2 in /home/ec2-user/anaconda3/envs/pytorch_p36/lib/python3.6/site-packages (from packaging>=20.0->transformers>=4.12.0->optimum) (2.4.7) Requirement already satisfied: zipp>=0.5 in 
/home/ec2-user/anaconda3/envs/pytorch_p36/lib/python3.6/site-packages (from importlib-metadata->transformers>=4.12.0->optimum) (3.4.1) Requirement already satisfied: idna<3,>=2.5 in /home/ec2-user/anaconda3/envs/pytorch_p36/lib/python3.6/site-packages (from requests->transformers>=4.12.0->optimum) (2.10) Requirement already satisfied: certifi>=2017.4.17 in /home/ec2-user/anaconda3/envs/pytorch_p36/lib/python3.6/site-packages (from requests->transformers>=4.12.0->optimum) (2021.5.30) Requirement already satisfied: chardet<5,>=3.0.2 in /home/ec2-user/anaconda3/envs/pytorch_p36/lib/python3.6/site-packages (from requests->transformers>=4.12.0->optimum) (4.0.0) Requirement already satisfied: urllib3<1.27,>=1.21.1 in /home/ec2-user/anaconda3/envs/pytorch_p36/lib/python3.6/site-packages (from requests->transformers>=4.12.0->optimum) (1.26.5) Requirement already satisfied: joblib in /home/ec2-user/anaconda3/envs/pytorch_p36/lib/python3.6/site-packages (from sacremoses->transformers>=4.12.0->optimum) (1.0.1) Requirement already satisfied: click in /home/ec2-user/anaconda3/envs/pytorch_p36/lib/python3.6/site-packages (from sacremoses->transformers>=4.12.0->optimum) (8.0.1) Requirement already satisfied: six in /home/ec2-user/anaconda3/envs/pytorch_p36/lib/python3.6/site-packages (from sacremoses->transformers>=4.12.0->optimum) (1.16.0) Note: you may need to restart the kernel to use updated packages. ``` --- ```python from optimum.intel.lpot.quantization import LpotQuantizerForSequenceClassification # Create quantizer from config quantizer = LpotQuantizerForSequenceClassification.from_config( "echarlaix/quantize-dynamic-test", "quantization.yml", model_name_or_path="textattack/bert-base-uncased-SST-2", ) model = quantizer.fit_dynamic() ``` Traceback: ``` --------------------------------------------------------------------------- ModuleNotFoundError Traceback (most recent call last) <ipython-input-6-9dcf25f181ea> in <module> ----> 1 from optimum.intel.lpot.quantization import LpotQuantizerForSequenceClassification 2 3 # Create quantizer from config 4 quantizer = LpotQuantizerForSequenceClassification.from_config( 5 "echarlaix/quantize-dynamic-test", ModuleNotFoundError: No module named 'optimum.intel.lpot' ``` ```python from optimum.intel.lpot.pruning import LpotPrunerForSequenceClassification # Create pruner from config pruner = LpotPrunerForSequenceClassification.from_config( "echarlaix/magnitude-pruning-test", "prune.yml", model_name_or_path="textattack/bert-base-uncased-SST-2", ) model = pruner.fit() ``` Traceback: ``` --------------------------------------------------------------------------- ModuleNotFoundError Traceback (most recent call last) <ipython-input-7-e9872c164aee> in <module> ----> 1 from optimum.intel.lpot.pruning import LpotPrunerForSequenceClassification 2 3 # Create pruner from config 4 pruner = LpotPrunerForSequenceClassification.from_config( 5 "echarlaix/magnitude-pruning-test", ModuleNotFoundError: No module named 'optimum.intel.lpot' ``` ```python from optimum.graphcore import IPUTrainer from optimum.graphcore.bert import BertIPUConfig from transformers import BertForMaskedLM, BertTokenizer from poptorch.optim import AdamW # Allocate model and tokenizer as usual tokenizer = BertTokenizer.from_pretrained("bert-base-cased") model = BertForMaskedLM.from_pretrained("bert-base-cased") # Trainer + poptorch custom configuration optional ipu_config = BertIPUConfig() trainer = IPUTrainer(model, trainings_args, config=ipu_config) optimizer = AdamW(model.parameters) # This is hidden 
from the user, it will be handled by the Trainer with trainer.compile(some_data_loader) as model_f: for steps in range(10): # ! outputs = trainer.step(optimizer) # Save the model and/or push to hub model.save_pretrained("...") model.push_to_hub("...") ``` Traceback: ``` --------------------------------------------------------------------------- ModuleNotFoundError Traceback (most recent call last) <ipython-input-8-921e03245390> in <module> ----> 1 from optimum.graphcore import IPUTrainer 2 from optimum.graphcore.bert import BertIPUConfig 3 from transformers import BertForMaskedLM, BertTokenizer 4 from poptorch.optim import AdamW 5 ModuleNotFoundError: No module named 'optimum.graphcore' ``` Please let me know if there's anything else I can add to post. [1]: https://huggingface.co/hardware
01-10-2022 10:27:57
01-10-2022 10:27:57
cc @mfuntowicz @michaelbenayoun @lewtun @echarlaix <|||||>Hi @danielbellhv, I think you are making reference to our hardware [page](https://huggingface.co/hardware), which needs to updated, thanks for pointing that out. The library previously named LPOT has been renamed to Intel Neural Compressor (INC), which resulted in a change in the name of our subpackage from lpot to neural_compressor. The correct way to import would now be `from optimum.intel.neural_compressor.quantization import IncQuantizerForSequenceClassification` Concerning the graphcore subpackage, you need to install it first with `pip install optimum[graphcore]` Furthermore you'll need to have access to an IPU in order to use it.<|||||>Thanks for getting back, @echarlaix ! To clarify, instead of: ```python from optimum.intel.lpot.quantization import LpotQuantizerForSequenceClassification from optimum.intel.lpot.pruning import LpotPrunerForSequenceClassification ``` ```python from optimum.intel.neural_compressor.quantization import IncQuantizerForSequenceClassification from optimum.intel.neural_compressor.pruning import IncPrunerForSequenceClassification ```<|||||>Yes exactly, also you can find more usage examples [here](https://github.com/huggingface/optimum/tree/main/examples/inc/pytorch).<|||||>I am getting parameter errors on respective `from_config()` methods. Would you be so kind as to link the relevant library code for, so as I can debug, please? I get lost when trying to find it ;( Cell: ```python from optimum.intel.neural_compressor.quantization import IncQuantizerForSequenceClassification # Create quantizer from config quantizer = IncQuantizerForSequenceClassification.from_config( "echarlaix/quantize-dynamic-test", "quantization.yml", model_name_or_path="textattack/bert-base-uncased-SST-2", ) model = quantizer.fit_dynamic() ``` Traceback: ``` --------------------------------------------------------------------------- TypeError Traceback (most recent call last) <ipython-input-6-c897387f4918> in <module> 5 "echarlaix/quantize-dynamic-test", 6 "quantization.yml", ----> 7 model_name_or_path="textattack/bert-base-uncased-SST-2", 8 ) 9 TypeError: from_config() got multiple values for argument 'model_name_or_path' ``` Cell: ```python from optimum.intel.neural_compressor.pruning import IncPrunerForSequenceClassification # Create pruner from config pruner = IncPrunerForSequenceClassification.from_config( "echarlaix/magnitude-pruning-test", "prune.yml", model_name_or_path="textattack/bert-base-uncased-SST-2", ) model = pruner.fit() ``` Traceback: ``` --------------------------------------------------------------------------- TypeError Traceback (most recent call last) <ipython-input-7-b86a035cd2e0> in <module> 5 "echarlaix/magnitude-pruning-test", 6 "prune.yml", ----> 7 model_name_or_path="textattack/bert-base-uncased-SST-2", 8 ) 9 TypeError: from_config() got multiple values for argument 'model_name_or_path' ```<|||||>You can find more information concerning IncQuantizer [here](https://github.com/huggingface/optimum/blob/main/optimum/intel/neural_compressor/quantization.py#L56). And some example usages [here](https://github.com/huggingface/optimum/blob/main/examples/inc/pytorch/text-classification/run_glue.py#L634) for text classification tasks. The code snippet of the hardware page should be updated soon to: ```python quantizer = IncQuantizerForSequenceClassification.from_config( "echarlaix/bert-base-dynamic-quant-test", config_name="quantization.yml", eval_func=eval_func, ) model = quantizer.fit_dynamic() ```
transformers
15,086
closed
UserWarning: __floordiv__ is deprecated
Dear all, I am trying to fine-tune the "facebook/wav2vec2-base" model (XLSR for Speech Recognition) for the Urdu language. However, when I run the trainer, I get the following warning: ``` /home/e-team/anaconda3/lib/python3.9/site-packages/transformers/models/wav2vec2/modeling_wav2vec2.py:704: UserWarning: __floordiv__ is deprecated, and its behavior will change in a future version of pytorch. It currently rounds toward 0 (like the 'trunc' function NOT 'floor'). This results in incorrect rounding for negative values. To keep the current behavior, use torch.div(a, b, rounding_mode='trunc'), or for actual floor division, use torch.div(a, b, rounding_mode='floor'). return (input_length - kernel_size) // stride + 1 ``` Consequently, my WER is 1.00, which means training is not being done correctly. Can anyone offer guidance on what changes to make and where? Torch version: 1.10.1+cu102 Python version: 3.9.7 CUDA version: 11.2 TensorFlow version: 2.7.0 ![image](https://user-images.githubusercontent.com/42676982/148747322-aee21356-a4ee-4b11-a81d-8176ccf214ae.png) Best regards, Yasir
01-10-2022 09:56:25
01-10-2022 09:56:25
The same warning is shown for LayoutLMv2, see #14577 <|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread. Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored.<|||||>Fixed by https://github.com/huggingface/transformers/pull/15180<|||||>Running into the same issue for the `DetrModel` model: https://github.com/huggingface/transformers/blob/c962c2adbff678ae6d2e98378bed5b8d1a9831d9/src/transformers/models/detr/modeling_detr.py#L422 Happy to submit a quick PR using the `torch_int_div` function added in https://github.com/huggingface/transformers/pull/15180: ```python dim_t = self.temperature ** (2 * torch_int_div(dim_t, 2) / self.embedding_dim) ``` EDIT: I've submitted the PR (#15702) since it was a quick fix.
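For anyone silencing this warning in their own code, the replacement suggested by the warning itself is a one-liner. A minimal sketch, independent of any particular model:

```python
import torch

lengths = torch.tensor([16000, 15744])
kernel_size, stride = 10, 5

# Deprecated form: (lengths - kernel_size) // stride + 1
out_trunc = torch.div(lengths - kernel_size, stride, rounding_mode="trunc") + 1  # old // behaviour
out_floor = torch.div(lengths - kernel_size, stride, rounding_mode="floor") + 1  # true floor division
# For the non-negative lengths used here both variants agree, which is why the fix is safe.
```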
transformers
15,085
closed
Add Swin Transformer
# What does this PR do? This PR adds the Swin Transformer model to the repository. Paper: [https://arxiv.org/abs/2103.14030](https://arxiv.org/abs/2103.14030) Code: [Official code](https://github.com/microsoft/Swin-Transformer) and [timm implementation](https://github.com/rwightman/pytorch-image-models/blob/master/timm/models/swin_transformer.py) Checkpoints: [Image classification](https://github.com/microsoft/Swin-Transformer) Fixes #14760 My code is modeled after the ViT implementation. There are a few tasks to do: - [x] Fix `TypeError: Object of type type is not JSON serializable` when saving `SwinConfig` - [x] Fix tests in test_modeling_swin.py ## Who can review? @NielsRogge
01-10-2022 08:38:01
01-10-2022 08:38:01
Thanks for your review. I overlooked the fact that the reviewer is made a co-author by GitHub. I'll keep that in mind going forward.<|||||>> I overlooked the fact that the reviewer is made a co-author by GitHub. I'll keep that in mind going forward. I'm pretty sure you didn't know, which is why I'm telling you :-)<|||||>Hi, could you please also add Swin Transformer for semantic segmentation? Right now, I see it's only available for image classification. Thank you.<|||||>Hi @vaneshieh, I am not sure if Swin for semantic segmentation would be feasible. This would require incorporating other models such as UPerNet into huggingface, which may require a lot of extra work.<|||||>Hi, `SwinForSemanticSegmentation` should be relatively easy to add since it uses the same head (UperNet) as `BeitForSemanticSegmentation`, which I already added. Hence, it would mostly be a copy-paste; the only thing to write is the conversion script.<|||||>Oh, I wasn't aware of that. I can take this up in ~ 2 weeks if no one else does.<|||||>> Oh, I wasn't aware of that. I can take this up in ~ 2 weeks if no one else does. When and where will you post an update once it's done? Thank you so much!<|||||>I can leave a message here after it's done.
transformers
15,084
closed
electra is added to onnx supported model
# What does this PR do? <!-- Congratulations! You've made it this far! You're not quite done yet though. Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution. Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change. Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost. --> <!-- Remove if not applicable --> Fixes # (issue) ## Before submitting - [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case). - [X] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section? - [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link to it if that's the case. - [ ] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/master/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/master/docs#writing-source-documentation). - [x] Did you write any new necessary tests? ## Who can review? Anyone in the community is free to review the PR once the tests have passed. Feel free to tag members/contributors who may be interested in your PR. <!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @ If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**. Please tag fewer than 3 people. Models: - albert, bert, xlm: @LysandreJik - blenderbot, bart, marian, pegasus, encoderdecoder, t5: @patrickvonplaten, @patil-suraj - longformer, reformer, transfoxl, xlnet: @patrickvonplaten - fsmt: @stas00 - funnel: @sgugger - gpt2: @patrickvonplaten, @LysandreJik - rag: @patrickvonplaten, @lhoestq - tensorflow: @LysandreJik Library: - benchmarks: @patrickvonplaten - deepspeed: @stas00 - ray/raytune: @richardliaw, @amogkam - text generation: @patrickvonplaten - tokenizers: @n1t0, @LysandreJik - trainer: @sgugger - pipelines: @LysandreJik Documentation: @sgugger HF projects: - datasets: [different repo](https://github.com/huggingface/datasets) - rust tokenizers: [different repo](https://github.com/huggingface/tokenizers) Examples: - maintained examples (not research project or legacy): @sgugger, @patil-suraj - research_projects/bert-loses-patience: @JetRunner - research_projects/distillation: @VictorSanh -->
01-10-2022 04:44:01
01-10-2022 04:44:01
> Thank you for this very clean PR @arron1227 🔥 ! > > Overall it looks great 😃. Could you please: > > * add an Electra checkpoint to `test_onnx_v2.py` [here](https://github.com/huggingface/transformers/blob/a54961c5f70ff01ca3d62a56ece083096b7c1a7d/tests/test_onnx_v2.py#L171). I would try something like [`google/electra-base-generator`](https://huggingface.co/google/electra-base-generator) > * check the "slow" tests pass by running > ``` > RUN_SLOW=1 pytest tests/test_onnx_v2.py -k "electra" -rp > ``` @lewtun Please check if it works now; I've addressed your comments. Thank you!<|||||>@lewtun I've rebased onto master. Thank you for your help in reviewing this PR.<|||||>Hey @arron1227 thanks for the rebase! It seems like we've picked up a lot of extra changes - do you mind if I fix this by making a commit on your branch?<|||||>@lewtun It's okay, feel free to fix and commit. Thank you so much!
transformers
15,083
closed
[Wav2Vec2 Speech Event] Add speech event v2
# What does this PR do? <!-- Congratulations! You've made it this far! You're not quite done yet though. Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution. Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change. Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost. --> <!-- Remove if not applicable --> Adds docs for the new upcoming speech event. ## Before submitting - [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case). - [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section? - [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link to it if that's the case. - [ ] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/master/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/master/docs#writing-source-documentation). - [ ] Did you write any new necessary tests? ## Who can review? Anyone in the community is free to review the PR once the tests have passed. Feel free to tag members/contributors who may be interested in your PR. <!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @ If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**. Please tag fewer than 3 people. Models: - albert, bert, xlm: @LysandreJik - blenderbot, bart, marian, pegasus, encoderdecoder, t5: @patrickvonplaten, @patil-suraj - longformer, reformer, transfoxl, xlnet: @patrickvonplaten - fsmt: @stas00 - funnel: @sgugger - gpt2: @patrickvonplaten, @LysandreJik - rag: @patrickvonplaten, @lhoestq - tensorflow: @LysandreJik Library: - benchmarks: @patrickvonplaten - deepspeed: @stas00 - ray/raytune: @richardliaw, @amogkam - text generation: @patrickvonplaten - tokenizers: @n1t0, @LysandreJik - trainer: @sgugger - pipelines: @LysandreJik Documentation: @sgugger HF projects: - datasets: [different repo](https://github.com/huggingface/datasets) - rust tokenizers: [different repo](https://github.com/huggingface/tokenizers) Examples: - maintained examples (not research project or legacy): @sgugger, @patil-suraj - research_projects/bert-loses-patience: @JetRunner - research_projects/distillation: @VictorSanh -->
01-09-2022 22:38:35
01-09-2022 22:38:35
transformers
15,082
closed
Pytorch T5 pre-training script request
# 🚀 Feature request Any plan to release a pytorch pre-training script for T5 mlm? @patrickvonplaten
01-09-2022 13:18:56
01-09-2022 13:18:56
Hi! For T5 MLM training the most important thing is the Data collator used for T5 span-masked language modeling, which is available here, https://github.com/huggingface/transformers/blob/master/examples/flax/language-modeling/run_t5_mlm_flax.py#L279 It's written with numpy so should be easy to port it to PyTorch. Then the rest of the script would be pretty similar to the translation or summarization example. Would you be interested in opening a PR to add this example? Happy to help with it :) <|||||>Sure I am glad to add it. Thank you for the clarification ;)<|||||>Hi @patil-suraj , the summarization example [run_summarization_no_trainer.py](https://github.com/huggingface/transformers/blob/master/examples/pytorch/summarization/run_summarization_no_trainer.py) throws an error when loading datasets. The running command was ```bash python run_summarization_no_trainer.py \ --model_name_or_path t5-small \ --dataset_name cnn_dailymail \ --dataset_config "3.0.0" \ --source_prefix "summarize: " \ --output_dir ./tst-summarization ``` The error output was: ```bash --------------------------------------------------------------------------- NotADirectoryError Traceback (most recent call last) <ipython-input-16-b3897e385817> in <module>() 617 618 if __name__ == "__main__": --> 619 main() <ipython-input-16-b3897e385817> in main() 343 if args.dataset_name is not None: 344 # Downloading and loading a dataset from the hub. --> 345 raw_datasets = load_dataset(args.dataset_name, args.dataset_config_name) 346 else: 347 data_files = {} /usr/local/lib/python3.7/dist-packages/datasets/load.py in load_dataset(path, name, data_dir, data_files, split, cache_dir, features, download_config, download_mode, ignore_verifications, keep_in_memory, save_infos, revision, use_auth_token, task, streaming, script_version, **config_kwargs) 1697 ignore_verifications=ignore_verifications, 1698 try_from_hf_gcs=try_from_hf_gcs, -> 1699 use_auth_token=use_auth_token, 1700 ) 1701 /usr/local/lib/python3.7/dist-packages/datasets/builder.py in download_and_prepare(self, download_config, download_mode, ignore_verifications, try_from_hf_gcs, dl_manager, base_path, use_auth_token, **download_and_prepare_kwargs) 594 if not downloaded_from_gcs: 595 self._download_and_prepare( --> 596 dl_manager=dl_manager, verify_infos=verify_infos, **download_and_prepare_kwargs 597 ) 598 # Sync info /usr/local/lib/python3.7/dist-packages/datasets/builder.py in _download_and_prepare(self, dl_manager, verify_infos, **prepare_split_kwargs) 659 split_dict = SplitDict(dataset_name=self.name) 660 split_generators_kwargs = self._make_split_generators_kwargs(prepare_split_kwargs) --> 661 split_generators = self._split_generators(dl_manager, **split_generators_kwargs) 662 663 # Checksums verification /root/.cache/huggingface/modules/datasets_modules/datasets/cnn_dailymail/3cb851bf7cf5826e45d49db2863f627cba583cbc32342df7349dfe6c38060234/cnn_dailymail.py in _split_generators(self, dl_manager) 253 def _split_generators(self, dl_manager): 254 dl_paths = dl_manager.download_and_extract(_DL_URLS) --> 255 train_files = _subset_filenames(dl_paths, datasets.Split.TRAIN) 256 # Generate shared vocabulary 257 /root/.cache/huggingface/modules/datasets_modules/datasets/cnn_dailymail/3cb851bf7cf5826e45d49db2863f627cba583cbc32342df7349dfe6c38060234/cnn_dailymail.py in _subset_filenames(dl_paths, split) 154 else: 155 logger.fatal("Unsupported split: %s", split) --> 156 cnn = _find_files(dl_paths, "cnn", urls) 157 dm = _find_files(dl_paths, "dm", urls) 158 return cnn + dm 
/root/.cache/huggingface/modules/datasets_modules/datasets/cnn_dailymail/3cb851bf7cf5826e45d49db2863f627cba583cbc32342df7349dfe6c38060234/cnn_dailymail.py in _find_files(dl_paths, publisher, url_dict) 133 else: 134 logger.fatal("Unsupported publisher: %s", publisher) --> 135 files = sorted(os.listdir(top_dir)) 136 137 ret_files = [] NotADirectoryError: [Errno 20] Not a directory: '/root/.cache/huggingface/datasets/downloads/1bc05d24fa6dda2468e83a73cf6dc207226e01e3c48a507ea716dc0421da583b/cnn/stories' ``` Any solution to it? <|||||>This seems to be some issue with the cache directory. Could you try inspecting the content of that directory to see if the dataset is downloaded correctly?<|||||>Also, if the error persists feel free to create a new issue.<|||||>> This seems to be some issue with the cache directory. Could you try inspecting the content of that directory to see if the dataset is downloaded correctly? The downloaded dataset file was not a directory but a web page file. Please refer to [#15130](https://github.com/huggingface/transformers/issues/15130) for the details.<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread. Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored.<|||||>Gently pinging @cyk1337 :) <|||||>Hi @patil-suraj, I've implemented a brief version. And it may take some time to update due to the time schedule ;)<|||||>Hey @cyk1337 Thanks a lot! And no worries, feel free to take your time.<|||||>Is there any recommended library devised for loading large-scale data for multi-node multi-gpu training in huggingface/PyTorch? Conditioned on that the pre-trained data can have trouble in entirely loading into the memory, like 1TB data. I've met this problem and have not found suitable solutions yet.<|||||>The `datasets` [library](https://github.com/huggingface/datasets)! You could look at the docs to know more, but this enables loading very large datasets without blowing up the memory by providing memory mapping. <|||||>Is there an example of this? Have checked the docs, and not found a distributed solution. Maybe we need to implement one for assigning different data blocks to multi-node multi-gpu settings by [loading a specific subset of the files](https://huggingface.co/docs/datasets/loading.html?highlight=distributed#:~:text=load%20a%20specific%20subset%20of%20the%20files) ?<|||||>Could you post this question on the [forum](https://discuss.huggingface.co/) someone must have shared something there already. Forum is the best place for general discussion. Also for the transformers example, a simple script should be enough. <|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread. Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored.
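On the large-corpus question above, a minimal sketch of what the `datasets` library offers (the dataset name is just an example; sharding across nodes can then be done on top of either variant):

```python
from datasets import load_dataset

# Regular loading is backed by memory-mapped Arrow files on disk, so the corpus
# does not have to fit into RAM:
ds = load_dataset("wikitext", "wikitext-2-raw-v1", split="train")

# Streaming never materializes the full dataset locally, which helps at the ~1 TB scale:
streamed = load_dataset("wikitext", "wikitext-2-raw-v1", split="train", streaming=True)
for example in streamed.take(3):
    print(example["text"][:80])
```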
transformers
15,081
closed
Script run_mlm_no_trainer.py error
## Environment info - `transformers` version: 4.14.0.dev0 - Platform: Linux-3.10.0_3-0-0-12-x86_64-with-centos-6.3-Final - Python version: 3.7.11 - PyTorch version (GPU?): 1.7.1 (True) - Tensorflow version (GPU?): 2.7.0 (False) - Flax version (CPU?/GPU?/TPU?): 0.3.6 (cpu) - Jax version: 0.2.26 - JaxLib version: 0.1.75 - Using GPU in script?: Y - Using distributed or parallel set-up in script?: Y ### Who can help @patrickvonplaten @LysandreJik <!-- Your issue will be replied to more quickly if you can figure out the right person to tag with @ If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**. Please tag fewer than 3 people. Models: - ALBERT, BERT, XLM, DeBERTa, DeBERTa-v2, ELECTRA, MobileBert, SqueezeBert: @LysandreJik - T5, BART, Marian, Pegasus, EncoderDecoder: @patrickvonplaten - Blenderbot, MBART: @patil-suraj - Longformer, Reformer, TransfoXL, XLNet, FNet, BigBird: @patrickvonplaten - FSMT: @stas00 - Funnel: @sgugger - GPT-2, GPT: @patrickvonplaten, @LysandreJik - RAG, DPR: @patrickvonplaten, @lhoestq - TensorFlow: @Rocketknight1 - JAX/Flax: @patil-suraj - TAPAS, LayoutLM, LayoutLMv2, LUKE, ViT, BEiT, DEiT, DETR, CANINE: @NielsRogge - GPT-Neo, GPT-J, CLIP: @patil-suraj - Wav2Vec2, HuBERT, SpeechEncoderDecoder, UniSpeech, UniSpeechSAT, SEW, SEW-D, Speech2Text: @patrickvonplaten, @anton-l If the model isn't in the list, ping @LysandreJik who will redirect you to the correct contributor. Examples: - maintained examples (not research project or legacy): @sgugger, @patil-suraj For research projetcs, please ping the contributor directly. For example, on the following projects: - research_projects/bert-loses-patience: @JetRunner - research_projects/distillation: @VictorSanh --> ## Information Model I am using: roberta-base The problem arises when using: * [x] the official example scripts: [examples/pytorch/language-modeling/run_mlm_no_trainer.py](https://github.com/huggingface/transformers/blob/master/examples/pytorch/language-modeling/run_mlm_no_trainer.py) The tasks I am working on is: * [x] an official pre-training task: run the mlm pre-training script. ## To reproduce Steps to reproduce the behavior: Following the official instruction at [python run_mlm_no_trainer.py](https://github.com/huggingface/transformers/tree/master/examples/pytorch/language-modeling#:~:text=python%20run_mlm_no_trainer.py) ```bash python run_mlm_no_trainer.py \ --dataset_name wikitext \ --dataset_config_name wikitext-2-raw-v1 \ --model_name_or_path roberta-base \ --output_dir /tmp/test-mlm ``` ## Expected behavior ```bash [INFO|trainer.py:1204] 2022-01-09 20:51:14,185 >> ***** Running training ***** [INFO|trainer.py:1205] 2022-01-09 20:51:14,185 >> Num examples = 4650 [INFO|trainer.py:1206] 2022-01-09 20:51:14,185 >> Num Epochs = 3 [INFO|trainer.py:1207] 2022-01-09 20:51:14,185 >> Instantaneous batch size per device = 8 [INFO|trainer.py:1208] 2022-01-09 20:51:14,186 >> Total train batch size (w. 
parallel, distributed & accumulation) = 64 [INFO|trainer.py:1209] 2022-01-09 20:51:14,186 >> Gradient Accumulation steps = 1 [INFO|trainer.py:1210] 2022-01-09 20:51:14,186 >> Total optimization steps = 219 0%| | 0/219 [00:00<?, ?it/s]Traceback (most recent call last): File "/home/xxx/anaconda3/envs/torch1.7/lib/python3.7/runpy.py", line 193, in _run_module_as_main "__main__", mod_spec) File "/home/xxx/anaconda3/envs/torch1.7/lib/python3.7/runpy.py", line 85, in _run_code exec(code, run_globals) File "/home/xxx/.vscode-server/extensions/ms-python.python-2021.1.502429796/pythonFiles/lib/python/debugpy/__main__.py", line 45, in <module> cli.main() File "/home/xxx/.vscode-server/extensions/ms-python.python-2021.1.502429796/pythonFiles/lib/python/debugpy/../debugpy/server/cli.py", line 444, in main run() File "/home/xxx/.vscode-server/extensions/ms-python.python-2021.1.502429796/pythonFiles/lib/python/debugpy/../debugpy/server/cli.py", line 285, in run_file runpy.run_path(target_as_str, run_name=compat.force_str("__main__")) File "/home/xxx/anaconda3/envs/torch1.7/lib/python3.7/runpy.py", line 263, in run_path pkg_name=pkg_name, script_name=fname) File "/home/xxx/anaconda3/envs/torch1.7/lib/python3.7/runpy.py", line 96, in _run_module_code mod_name, mod_spec, pkg_name, script_name) File "/home/xxx/anaconda3/envs/torch1.7/lib/python3.7/runpy.py", line 85, in _run_code exec(code, run_globals) File "/home/xxx/transformers/examples/pytorch/demo/run_mlm.py", line 556, in <module> main() File "/home/xxx/transformers/examples/pytorch/demo/run_mlm.py", line 505, in main train_result = trainer.train(resume_from_checkpoint=checkpoint) File "/home/xxx/transformers/src/transformers/trainer.py", line 1325, in train tr_loss_step = self.training_step(model, inputs) File "/home/xxx/transformers/src/transformers/trainer.py", line 1884, in training_step loss = self.compute_loss(model, inputs) File "/home/xxx/transformers/src/transformers/trainer.py", line 1916, in compute_loss outputs = model(**inputs) File "/home/xxx/anaconda3/envs/torch1.7/lib/python3.7/site-packages/torch/nn/modules/module.py", line 727, in _call_impl result = self.forward(*input, **kwargs) File "/home/xxx/anaconda3/envs/torch1.7/lib/python3.7/site-packages/torch/nn/parallel/data_parallel.py", line 161, in forward outputs = self.parallel_apply(replicas, inputs, kwargs) File "/home/xxx/anaconda3/envs/torch1.7/lib/python3.7/site-packages/torch/nn/parallel/data_parallel.py", line 171, in parallel_apply return parallel_apply(replicas, inputs, kwargs, self.device_ids[:len(replicas)]) File "/home/xxx/anaconda3/envs/torch1.7/lib/python3.7/site-packages/torch/nn/parallel/parallel_apply.py", line 86, in parallel_apply output.reraise() File "/home/xxx/anaconda3/envs/torch1.7/lib/python3.7/site-packages/torch/_utils.py", line 428, in reraise raise self.exc_type(msg) RuntimeError: Caught RuntimeError in replica 0 on device 0. 
Original Traceback (most recent call last): File "/home/xxx/anaconda3/envs/torch1.7/lib/python3.7/site-packages/torch/nn/parallel/parallel_apply.py", line 61, in _worker output = module(*input, **kwargs) File "/home/xxx/anaconda3/envs/torch1.7/lib/python3.7/site-packages/torch/nn/modules/module.py", line 727, in _call_impl result = self.forward(*input, **kwargs) File "/home/xxx/transformers/src/transformers/models/roberta/modeling_roberta.py", line 1108, in forward return_dict=return_dict, File "/home/xxx/anaconda3/envs/torch1.7/lib/python3.7/site-packages/torch/nn/modules/module.py", line 727, in _call_impl result = self.forward(*input, **kwargs) File "/home/xxx/transformers/src/transformers/models/roberta/modeling_roberta.py", line 819, in forward buffered_token_type_ids_expanded = buffered_token_type_ids.expand(batch_size, seq_length) RuntimeError: The expanded size of the tensor (1024) must match the existing size (514) at non-singleton dimension 1. Target sizes: [8, 1024]. Tensor sizes: [1, 514] ```
01-09-2022 13:16:04
01-09-2022 13:16:04
cc @sgugger <|||||>Which command are you running exactly? The logs you produce use distributed training whereas the command you told us (which runs successfully on my side) launches the script with python.<|||||>I just reran it on another machine but got the same issue. The exact command is: ```bash $ python run_mlm_no_trainer.py --model_name_or_path=./roberta-base --dataset_name=wikitext --dataset_config_name=wikitext-2-raw-v1 --output_dir=./test_mlm_out ``` where the `./roberta-base` directory contains: ``` $ ls roberta-base/ config.json merges.txt pytorch_model.bin vocab.json ``` The output was: ```bash 01/11/2022 11:59:36 - INFO - __main__ - ***** Running training ***** 01/11/2022 11:59:36 - INFO - __main__ - Num examples = 2390 01/11/2022 11:59:36 - INFO - __main__ - Num Epochs = 3 01/11/2022 11:59:36 - INFO - __main__ - Instantaneous batch size per device = 8 01/11/2022 11:59:36 - INFO - __main__ - Total train batch size (w. parallel, distributed & accumulation) = 8 01/11/2022 11:59:36 - INFO - __main__ - Gradient Accumulation steps = 1 01/11/2022 11:59:36 - INFO - __main__ - Total optimization steps = 897 0%| | 0/897 [00:00<?, ?it/s]Traceback (most recent call last): File "run_mlm_no_trainer.py", line 566, in <module> main() File "run_mlm_no_trainer.py", line 513, in main outputs = model(**batch) File "/root/xx/workspace/env_run/accelerate_test/torch1.7/lib/python3.7/site-packages/torch/nn/modules/module.py", line 889, in _call_impl result = self.forward(*input, **kwargs) File "/root/xx/workspace/env_run/accelerate_test/torch1.7/lib/python3.7/site-packages/transformers/models/roberta/modeling_roberta.py", line 1106, in forward return_dict=return_dict, File "/root/xx/workspace/env_run/accelerate_test/torch1.7/lib/python3.7/site-packages/torch/nn/modules/module.py", line 889, in _call_impl result = self.forward(*input, **kwargs) File "/root/xx/workspace/env_run/accelerate_test/torch1.7/lib/python3.7/site-packages/transformers/models/roberta/modeling_roberta.py", line 817, in forward buffered_token_type_ids_expanded = buffered_token_type_ids.expand(batch_size, seq_length) RuntimeError: The expanded size of the tensor (1024) must match the existing size (514) at non-singleton dimension 1. Target sizes: [8, 1024]. Tensor sizes: [1, 514] 0%| | 0/897 [00:00<?, ?it/s] ``` **Possible Solution** The issue reported was due to the last-dimension mismatch between the target size (1024) and tensor size (514) of `token_type_ids`. I suspect this is caused by not specifying `--max_seq_length`. With the additional argument `--max_seq_length=512`, it works. Is that correct?<|||||>I have no idea what the content of your roberta-base folder is, but your addition is probably correct. It works with the official checkpoint, where the model specifies a max length the script then uses; maybe that's the part missing in your local checkpoint.<|||||>Yeah, you are correct. The checkpoint that the official script downloaded works. There might be something mismatched in my cached roberta-base folder (just manually downloaded from AWS, probably not the newest files). Thank you for pointing this out.
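A quick way to see why the sequence length ended up at 1024 is to inspect what the local checkpoint actually declares; without `--max_seq_length` the example script falls back to `tokenizer.model_max_length` (capped at 1024). A sketch, assuming the folder layout shown above:

```python
from transformers import AutoConfig, AutoTokenizer

config = AutoConfig.from_pretrained("./roberta-base")        # the local folder
tokenizer = AutoTokenizer.from_pretrained("./roberta-base")

print(config.max_position_embeddings)  # 514 for roberta-base
print(tokenizer.model_max_length)      # without tokenizer_config.json this can be a huge
                                       # sentinel value, so pass --max_seq_length=512 explicitly
```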
transformers
15,080
closed
explicitly load local file
The from_pretrained() pipeline is very confusing to me. Sometimes it requires a model_id, sometimes it wants a local path, and the path can't look like an id. Why must I know whether the directory I put the downloaded files in conflicts with any model id? What's the point here? Why not go for the simplest pipeline: just download the files and specify the file paths of the weights, config.json and so on! And if a URL is needed, just specify the URLs! Besides, there is a cache_dir parameter that also interferes with model_name_or_path. I don't like making everything cloud-based; it's not making things better, just more complicated. From time to time, when I try to initialize a model with from_pretrained, the program seems to bounce between three places: the huggingface website, the local cache path, and the path where I put the downloaded files. That often causes trouble if we want to use a specific version or just need one of the three ways to work. We should not be forced to use three places through one parameter, which is just not good for most users.
01-09-2022 11:59:48
01-09-2022 11:59:48
This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread. Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored.
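For what it's worth, the three sources (hub id, local cache, local folder) can be disentangled explicitly; a minimal sketch:

```python
from transformers import AutoModel, AutoTokenizer

# Load strictly from a local directory (no hub lookup), assuming it contains
# config.json, the weights file and the tokenizer files:
model = AutoModel.from_pretrained("./my-local-model", local_files_only=True)
tokenizer = AutoTokenizer.from_pretrained("./my-local-model", local_files_only=True)

# Or pin an exact hub revision and choose where it gets cached:
model = AutoModel.from_pretrained("bert-base-uncased", revision="main", cache_dir="./hf-cache")
```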
transformers
15,079
closed
[Fix doc example] Wrong checkpoint name
# What does this PR do? In a few model scripts, we have something like (MBart/Blenderbot/Marian) ``` >>> from transformers import BlenderbotTokenizer, BlenderbotForCausalLM >>> tokenizer = BlenderbotTokenizer.from_pretrained("facebook/bart-large") >>> model = BlenderbotForCausalLM.from_pretrained("facebook/bart-large", add_cross_attention=False) ``` where `bart-large` was due to `Copied from ...`, and should be changed to the correct checkpoint identifier. (otherwise, in these cases, an error will be thrown if one runs these examples) ## Who can review? Models: - blenderbot, bart, marian, pegasus, encoderdecoder, t5: @patrickvonplaten, @patil-suraj
01-09-2022 09:27:46
01-09-2022 09:27:46
The failure is coming from the fact there is conflict between style and fix-copies `style` will change ``` >>> model = BlenderbotSmallForCausalLM.from_pretrained("facebook/blenderbot_small-90M", add_cross_attention=False) ``` to ``` >>> model = BlenderbotSmallForCausalLM.from_pretrained( ... "facebook/blenderbot_small-90M", add_cross_attention=False ... ) ``` but `fix-copies` won't be happy with this change. cc @sgugger <|||||>> The failure is coming from the fact there is conflict between style and fix-copies > > `style` will change > > ``` > >>> model = BlenderbotSmallForCausalLM.from_pretrained("facebook/blenderbot_small-90M", add_cross_attention=False) > ``` > > to > > ``` > >>> model = BlenderbotSmallForCausalLM.from_pretrained( > ... "facebook/blenderbot_small-90M", add_cross_attention=False > ... ) > ``` > > but `fix-copies` won't be happy with this change. cc @sgugger Maybe store the docstring in a variable `CAUSAL_LM_EXAMPLE` and then use it with `add_end_docstrings` like we do for generation example in `BartForConditionalGeneration`<|||||>The # Copied from should be removed from the two models that need to have their code examples adapted: the change of checkpoint makes them go over the 119 char limit so `style_doc` wants to style them (using black).<|||||>> The # Copied from should be removed from the two models that need to have their code examples adapted: the change of checkpoint makes them go over the 119 char limit so `style_doc` wants to style them (using black). I am OK to remove `# Copied` - but let's see what @patrickvonplaten says<|||||>IMO we should store the example `docstr` in a variable and add it to doc using the `add_end_docstrings` annotation for all these classes. Since these are all the same models having `copied from...` would be better.<|||||>I think it is good if we can keep `copy from` as many as possible - if there is a way to do it. Unless there is other opinions or workarounds, I will try @patil-suraj 's suggestion.<|||||>+1 on @patil-suraj's suggestion<|||||>If you wait a bit, I'll have a fix on the copies script today or tomorrow (they'll apply black to the examples as well so that this kind of problem disappears).<|||||>> If you wait a bit, I'll have a fix on the copies script today or tomorrow (they'll apply black to the examples as well so that this kind of problem disappears). Sure, thanks for the information.<|||||>If you rebase to include the PR mentioned above, run `make fix-copies`, the error should disappear.<|||||>@sgugger Thanks, it works well! @patil-suraj Fix for Pegasus done as you pointed out.<|||||>Thanks a lot!
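For reference, a rough sketch of the `add_end_docstrings` idea discussed above (the variable name, checkpoint and import location are illustrative, not the final code):

```python
from transformers.file_utils import add_end_docstrings  # exact location may differ by version

# The long, checkpoint-specific example lives in a module-level string, so the method body
# (and its `# Copied from` marker) can stay identical across models.
CAUSAL_LM_EXAMPLE = r"""
    Example:

        >>> from transformers import BlenderbotTokenizer, BlenderbotForCausalLM
        >>> tokenizer = BlenderbotTokenizer.from_pretrained("facebook/blenderbot-400M-distill")
        >>> model = BlenderbotForCausalLM.from_pretrained(
        ...     "facebook/blenderbot-400M-distill", add_cross_attention=False
        ... )
"""


@add_end_docstrings(CAUSAL_LM_EXAMPLE)
def forward(self, input_ids=None, **kwargs):
    """Shared docstring body; the example above gets appended to it."""
    ...
```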
transformers
15,078
closed
pegasus
## Environment info <!-- You can run the command `transformers-cli env` and copy-and-paste its output below. Don't forget to fill out the missing fields in that output! --> - `transformers` version: - Platform: - Python version: - PyTorch version (GPU?): - Tensorflow version (GPU?): - Using GPU in script?: - Using distributed or parallel set-up in script?: ### Who can help <!-- Your issue will be replied to more quickly if you can figure out the right person to tag with @ If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**. Please tag fewer than 3 people. Models: - ALBERT, BERT, XLM, DeBERTa, DeBERTa-v2, ELECTRA, MobileBert, SqueezeBert: @LysandreJik - T5, BART, Marian, Pegasus, EncoderDecoder: @patrickvonplaten - Blenderbot, MBART: @patil-suraj - Longformer, Reformer, TransfoXL, XLNet, FNet, BigBird: @patrickvonplaten - FSMT: @stas00 - Funnel: @sgugger - GPT-2, GPT: @patrickvonplaten, @LysandreJik - RAG, DPR: @patrickvonplaten, @lhoestq - TensorFlow: @Rocketknight1 - JAX/Flax: @patil-suraj - TAPAS, LayoutLM, LayoutLMv2, LUKE, ViT, BEiT, DEiT, DETR, CANINE: @NielsRogge - GPT-Neo, GPT-J, CLIP: @patil-suraj - Wav2Vec2, HuBERT, SpeechEncoderDecoder, UniSpeech, UniSpeechSAT, SEW, SEW-D, Speech2Text: @patrickvonplaten, @anton-l If the model isn't in the list, ping @LysandreJik who will redirect you to the correct contributor. Library: - Benchmarks: @patrickvonplaten - Deepspeed: @stas00 - Ray/raytune: @richardliaw, @amogkam - Text generation: @patrickvonplaten @narsil - Tokenizers: @LysandreJik - Trainer: @sgugger - Pipelines: @Narsil - Speech: @patrickvonplaten, @anton-l - Vision: @NielsRogge, @sgugger Documentation: @sgugger Model hub: - for issues with a model, report at https://discuss.huggingface.co/ and tag the model's creator. HF projects: - datasets: [different repo](https://github.com/huggingface/datasets) - rust tokenizers: [different repo](https://github.com/huggingface/tokenizers) Examples: - maintained examples (not research project or legacy): @sgugger, @patil-suraj For research projetcs, please ping the contributor directly. For example, on the following projects: - research_projects/bert-loses-patience: @JetRunner - research_projects/distillation: @VictorSanh --> ## Information Model I am using (Bert, XLNet ...): The problem arises when using: * [ ] the official example scripts: (give details below) * [ ] my own modified scripts: (give details below) The tasks I am working on is: * [ ] an official GLUE/SQUaD task: (give the name) * [ ] my own task or dataset: (give details below) ## To reproduce Steps to reproduce the behavior: 1. 2. 3. <!-- If you have code snippets, error messages, stack traces please provide them here as well. Important! Use code tags to correctly format your code. See https://help.github.com/en/github/writing-on-github/creating-and-highlighting-code-blocks#syntax-highlighting Do not use screenshots, as they are hard to read and (more importantly) don't allow others to copy-and-paste your code.--> ## Expected behavior <!-- A clear and concise description of what you would expect to happen. -->
01-08-2022 23:19:56
01-08-2022 23:19:56
Hi, This issue does not include any info, therefore closing. Feel free to re-open.
transformers
15,077
closed
train_new_from_iterator missing from GPT2Tokenizer
## Environment info - `transformers` version: 4.15.0 - Platform: Linux-5.4.144+-x86_64-with-Ubuntu-18.04-bionic - Python version: 3.7.12 - PyTorch version (GPU?): 1.10.0+cu111 (False) - Tensorflow version (GPU?): 2.7.0 (False) - Flax version (CPU?/GPU?/TPU?): not installed (NA) - Jax version: not installed - JaxLib version: not installed - Using GPU in script?: NA - Using distributed or parallel set-up in script?: NA ### Who can help Library: - Tokenizers: @LysandreJik Examples: - research_projects/codeparrot: @lvwerra ## Information Model I am using (Bert, XLNet ...): GPT2Tokenizer The problem arises when using: * [x] the official example scripts: (give details below) * [ ] my own modified scripts: (give details below) I was using the official research project script for codeparrot to train a new BPE tokenizer with the GPT2Tokenizer as the base: https://github.com/huggingface/transformers/blob/master/examples/research_projects/codeparrot/scripts/bpe_training.py ## To reproduce Steps to reproduce the behavior: run the following Colab: https://colab.research.google.com/drive/1G34QEjP9oNfcP_w_JO6DEnJuUTOsZe_D?usp=sharing ## Expected behavior To be able to train a new GPT2Tokenizer from an iterator.
01-08-2022 22:00:00
01-08-2022 22:00:00
Just found that if I use `AutoTokenizer` rather than the `GPT2Tokenizer` class, it works. Is this expected? If so, the codeparrot example will need to change.<|||||>The `train_new_from_iterator` method works with fast tokenizers only. `GPT2Tokenizer` is a slow tokenizer, with `GPT2TokenizerFast` being its fast counterpart.<|||||>Thanks @ncoop57 for flagging and @LysandreJik for pointing out the issue. That is a bug I introduced when switching from `AutoTokenizer` to `GPT2Tokenizer` - I'll fix this shortly.<|||||>Hi @lvwerra, I hate to say that it's still giving issues. I am on kaggle (ubuntu) with transformers 4.5.1 and tokenizers 0.10.3, and so even with fast tokenizers it still says `has no attribute train_new_from_iterator...`<|||||>Hi @maulberto3, could you share a minimal reproducible example? Does it work with the latest transformers version? **Update**: if I read the commit history correctly that feature was added in `v4.9.0`, so you need to update the transformers library.<|||||>Hi @lvwerra You are right, I fixed the version and it works. Thanks.<|||||>What are the `parameters` inside `train_new_from_iterator`? Can I pass `frequency` inside it? Also, which `tokenizer` does it train? Is it a `BPE` tokenizer?<|||||>Hi @pratikchhapolika, such questions are best asked in the forum [discuss.huggingface.co](discuss.huggingface.co) or you can have a look at the documentation [here](https://huggingface.co/docs/transformers/v4.20.1/en/main_classes/tokenizer#transformers.PreTrainedTokenizerFast.train_new_from_iterator).
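To summarize the thread, a minimal sketch of the working pattern (fast tokenizer class, transformers >= 4.9; the corpus iterator is a placeholder):

```python
from transformers import GPT2TokenizerFast  # or AutoTokenizer, which returns the fast class

base_tokenizer = GPT2TokenizerFast.from_pretrained("gpt2")

def corpus_iterator():
    # placeholder: yield raw text strings (or batches of strings) from your dataset
    yield "def hello_world():\n    print('hello')"

new_tokenizer = base_tokenizer.train_new_from_iterator(corpus_iterator(), vocab_size=32768)
new_tokenizer.save_pretrained("my-new-tokenizer")
```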
transformers
15,076
closed
[Fix doc example] RagModel
# What does this PR do? This part fails https://github.com/huggingface/transformers/blob/623b4f7c63f60cce917677ee704d6c93ee960b4b/src/transformers/models/rag/modeling_rag.py#L308-L310 Change to `from_pretrained_question_encoder_generator` instead in this PR. ## Who can review? @patrickvonplaten
01-08-2022 12:18:27
01-08-2022 12:18:27
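For context, the corrected call from the PR description looks like this (checkpoints follow the RAG documentation; swap in your own as needed):

```python
from transformers import RagModel

# Compose a RAG model from a question encoder checkpoint and a generator checkpoint:
model = RagModel.from_pretrained_question_encoder_generator(
    "facebook/dpr-question_encoder-single-nq-base", "t5-small"
)
model.save_pretrained("./rag-from-parts")
```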
transformers
15,075
closed
[Benchmark]
# 🖥 Benchmarking `transformers` ## Benchmark Which part of `transformers` did you benchmark? ## Set-up What did you run your benchmarks on? Please include details, such as: CPU, GPU? If using multiple GPUs, which parallelization did you use? ## Results Put your results here!
01-08-2022 05:05:17
01-08-2022 05:05:17
This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread. Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored.
transformers
15,074
closed
TF Bert inference - support `np.ndarray` optional arguments
# What does this PR do? This PR allows running inference with TF Bert when the optional arguments (such as `attention_mask`) are a `np.ndarray`. Although these changes were made in response to particular issues, which were related to TF Bert, other models have the same pattern (e.g. [here](https://github.com/huggingface/transformers/blob/master/src/transformers/models/gpt2/modeling_tf_gpt2.py#L413)), so these changes may positively impact other models too. EDIT: see root cause [here](https://github.com/huggingface/transformers/pull/15074#issuecomment-1009133852) Fixes #14346 and #14404
01-07-2022 18:38:26
01-07-2022 18:38:26
Thanks for your changes, @gante! I think here we should add a TensorFlow common test to ensure that all models pass it, rather than adding a test for BERT only Any test defined in this class will be run by all models: https://github.com/huggingface/transformers/blob/9fbf7c87c37e32152428f91038bd46e04a48868e/tests/test_modeling_tf_common.py#L103<|||||>Agree with @LysandreJik - good PR that fixes an important issue, but a common TF test is better than a BERT-specific test!<|||||>That was a great suggestion, as it led me to the root cause of the issue! We already had a general test for numpy inputs (as you can see in the new diff), which was passing, but it failed if we passed the same inputs as keyword arguments. After some digging, I found out it was because Keras converts whatever is in the first argument of a layer to a `tf.Tensor` ([here](https://github.com/keras-team/keras/blob/master/keras/engine/base_layer.py#L1041)), so the packed numpy input in the test was converted to a tensor -- in other words, for Bert, `attention_mask` was a tensor in this setting. With an unpacked input, only the first input was converted, and thus the error in the issues surfaced. The updated PR tests the issue for all TF models (we might discover new issues 😬 ) and fixes the error related to `shape_list`, but we might want to reconsider how we handle inputs to HF Keras models in the future :) <|||||>This seems to be another situation where there was a difference depending on whether the input was packed in the first argument or not (see [this issue](https://github.com/huggingface/huggingface_hub/issues/582#issuecomment-1008960808))<|||||>@Rocketknight1 / @LysandreJik -- can I get a review (or better yet, an approval)? :)<|||||>I just reviewed this - it looks great to me. Catching the issue with the conversion of the first argument was well done too.
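A small end-to-end check of the behaviour this PR fixes (a sketch; any TF checkpoint that takes an `attention_mask` should behave the same way):

```python
from transformers import AutoTokenizer, TFAutoModel

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
model = TFAutoModel.from_pretrained("bert-base-uncased")

enc = tokenizer(["hello world"], return_tensors="np")  # numpy arrays, not tf.Tensors

# Packed call: Keras converts the whole first argument, so this case already worked.
out_packed = model(dict(enc))

# Unpacked keyword call: previously only the first input was converted, so `shape_list`
# tripped over the numpy `attention_mask`; after the fix both forms give the same result.
out_unpacked = model(input_ids=enc["input_ids"], attention_mask=enc["attention_mask"])
```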
transformers
15,073
closed
[VisionTextDualEncoder] Add token_type_ids param
# What does this PR do? This line will fail since there is `token_type_ids` in `inputs` but `get_text_features` has no such param: https://github.com/huggingface/transformers/blob/ac224bb0797c1ee6522d814139f3eb0a8947267b/src/transformers/models/vision_text_dual_encoder/modeling_vision_text_dual_encoder.py#L235 Since `token_type_ids` appears in other places in `modeling_vision_text_dual_encoder.py`, I think it's reasonable to add it to `get_text_features`. ## Who can review? @patil-suraj
01-07-2022 17:52:49
01-07-2022 17:52:49
transformers
15,072
open
[JAX/FLAX]: CLM Tokenizer Training confusion
Hi, after looking at the current readme of the [CLM tokenizer training example](https://github.com/huggingface/transformers/tree/master/examples/flax/language-modeling#train-tokenizer-1), there's something strange in the model configuration: The `config.json` file looks like this: ```json GPT2Config { "_name_or_path": "./", "activation_function": "gelu_new", "architectures": [ "GPT2LMHeadModel" ], "attn_pdrop": 0.0, "bos_token_id": 50256, "embd_pdrop": 0.0, "eos_token_id": 50256, "initializer_range": 0.02, "layer_norm_epsilon": 1e-05, "model_type": "gpt2", "n_ctx": 1024, "n_embd": 768, "n_head": 12, "n_inner": null, "n_layer": 12, "n_positions": 1024, "reorder_and_upcast_attn": false, "resid_pdrop": 0.0, "scale_attn_by_inverse_layer_idx": false, "scale_attn_weights": true, "summary_activation": null, "summary_first_dropout": 0.1, "summary_proj_to_labels": true, "summary_type": "cls_index", "summary_use_proj": true, "task_specific_params": { "text-generation": { "do_sample": true, "max_length": 50 } }, "transformers_version": "4.16.0.dev0", "use_cache": true, "vocab_size": 50257 } ``` Vocab size is 50257, and `eos_token_id` is set to 50256. I think that setting `eos_token_id` is wrong, because of the following example: ```bash In [10]: tokenizer.convert_ids_to_tokens([1797, 705, 225, 50256]) Out[10]: ['hal', 'lo', 'Ġ', 'Ġgeestigheid'] ``` Id *50256* is originally set to `'Ġgeestigheid'`. I'm not 100% sure, but it should be set to 50257 (and thus outside the vocabulary), because of: ```bash In [7]: tokenizer.encode("hallo <|endoftext|>") Out[7]: [1797, 705, 225, 50257] ``` It shows that `eos_token` is set to `<|endoftext|>` and from the tokenizer part, `eos_token_id` then should be set to `50257`?! Now I'm using the official GPT-2 model as reference: It uses `"eos_token_id": 50256` in the `config.json` file, some tokenizer tests: ```bash In [6]: tokenizer.eos_token Out[6]: '<|endoftext|>' In [7]: tokenizer.eos_token_id Out[7]: 50256 In [8]: tokenizer.encode("Hello <|endoftext|>") Out[8]: [15496, 220, 50256] ``` Which is correct. And there's another issue: after looking at the `tokenizer.json` file for GPT-2, the following entry exists: ```bash "<|endoftext|>":50256} ``` which is perfect, but: for the own trained vocab this entry does not exist! I'm not sure if this is a bug in the Tokenizers library or intended :thinking:
01-07-2022 16:46:04
01-07-2022 16:46:04
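If the goal is simply to keep `<|endoftext|>` inside the trained vocabulary so that the config's `eos_token_id` matches the tokenizer, reserving it as a special token at training time does that. A minimal sketch, assuming a ByteLevel BPE setup similar to the Flax example (`batch_iterator` is a placeholder):

```python
from tokenizers import ByteLevelBPETokenizer

def batch_iterator():
    # placeholder: yield batches of raw text from the training corpus
    yield ["hallo wereld", "nog een zin"]

tokenizer = ByteLevelBPETokenizer()
tokenizer.train_from_iterator(
    batch_iterator(),
    vocab_size=50257,
    special_tokens=["<|endoftext|>"],
)

# Special tokens are assigned ids first, so the id is likely 0 here rather than 50256;
# whatever it is, config.eos_token_id / bos_token_id should be set to this value.
print(tokenizer.token_to_id("<|endoftext|>"))
```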
transformers
15,071
closed
Pipeline ASR with LM.
# What does this PR do? <!-- Congratulations! You've made it this far! You're not quite done yet though. Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution. Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change. Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost. --> <!-- Remove if not applicable --> Fixes # (issue) ## Before submitting - [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case). - [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section? - [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link to it if that's the case. - [ ] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/master/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/master/docs#writing-source-documentation). - [ ] Did you write any new necessary tests? ## Who can review? Anyone in the community is free to review the PR once the tests have passed. Feel free to tag members/contributors who may be interested in your PR. <!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @ If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**. Please tag fewer than 3 people. Models: - albert, bert, xlm: @LysandreJik - blenderbot, bart, marian, pegasus, encoderdecoder, t5: @patrickvonplaten, @patil-suraj - longformer, reformer, transfoxl, xlnet: @patrickvonplaten - fsmt: @stas00 - funnel: @sgugger - gpt2: @patrickvonplaten, @LysandreJik - rag: @patrickvonplaten, @lhoestq - tensorflow: @LysandreJik Library: - benchmarks: @patrickvonplaten - deepspeed: @stas00 - ray/raytune: @richardliaw, @amogkam - text generation: @patrickvonplaten - tokenizers: @n1t0, @LysandreJik - trainer: @sgugger - pipelines: @LysandreJik Documentation: @sgugger HF projects: - datasets: [different repo](https://github.com/huggingface/datasets) - rust tokenizers: [different repo](https://github.com/huggingface/tokenizers) Examples: - maintained examples (not research project or legacy): @sgugger, @patil-suraj - research_projects/bert-loses-patience: @JetRunner - research_projects/distillation: @VictorSanh -->
01-07-2022 15:51:19
01-07-2022 15:51:19
I think it's ready for final review @patrickvonplaten Not a lot of changes overall since last time: - `pipeline(model="...")` works by default, and fallbacks to regular CTC if `pyctcdecode` missing, or any other error. - the `decoder` is now a real `decoder` (easier for the fallback actually).
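Usage after this PR then boils down to the snippet below (the checkpoint name is illustrative, and `pyctcdecode` plus `kenlm` need to be installed for the LM-boosted path):

```python
from transformers import pipeline

# If the checkpoint ships a decoder (language model) in its repo it is picked up
# automatically; otherwise the pipeline falls back to regular CTC decoding as described above.
asr = pipeline("automatic-speech-recognition", model="patrickvonplaten/wav2vec2-base-100h-with-lm")
print(asr("sample.flac")["text"])
```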
transformers
15,070
closed
Update Trainer code example
# What does this PR do? This PR updates the code example used in the Trainer docs. Previously, it showed how to override the Trainer to do multi-label classification. However, this is not required anymore, as users can now pass the `problem_type` argument to the model's configuration (to use the appropriate loss function). Instead, I show how to override the Trainer to use a weighted loss, which is useful when you have an imbalanced dataset.
01-07-2022 15:35:51
01-07-2022 15:35:51
Great job merging this PR! The documentation will now be removed from the staging environment.
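For reference, the kind of override the updated docs describe looks roughly like this (the two-label setup and the class weights are made-up numbers):

```python
import torch
from torch import nn
from transformers import Trainer


class WeightedLossTrainer(Trainer):
    def compute_loss(self, model, inputs, return_outputs=False):
        labels = inputs.pop("labels")
        outputs = model(**inputs)
        logits = outputs.logits
        # Up-weight the rare class; the weights are illustrative only.
        loss_fct = nn.CrossEntropyLoss(weight=torch.tensor([1.0, 3.0], device=logits.device))
        loss = loss_fct(logits.view(-1, self.model.config.num_labels), labels.view(-1))
        return (loss, outputs) if return_outputs else loss
```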
transformers
15,069
closed
How to use an unpretrained model?
By using the following code, I can load a **pretrained** model for fine-tuning on QA tasks. I'm wondering how to load the same model (same architecture as roberta-base) but **unpretrained**, and train it on my own data from scratch? ``` tokenizer = AutoTokenizer.from_pretrained("roberta-base") model = AutoModelForQuestionAnswering.from_pretrained("roberta-base") ```
01-07-2022 14:11:19
01-07-2022 14:11:19
You can initialize a model using a configuration object, like so: ``` from transformers import RobertaConfig, RobertaModel config = RobertaConfig() model = RobertaModel(config) ``` This for instance will load the default configuration of RoBERTa, as documented [here](https://huggingface.co/docs/transformers/model_doc/roberta#transformers.RobertaConfig) - which means, a `hidden_size` of 768, 12 hidden layers, etc. All weights of the model will be randomly initialized, as we're not instantiating it using the `from_pretrained()` method. To customize the architecture, you can load it as follows: ``` from transformers import RobertaConfig, RobertaModel config = RobertaConfig(num_hidden_layers=2, hidden_size=20) model = RobertaModel(config) ``` You can also load a model with randomly initialized weights, but with the architecture of a particular model on the hub, for instance, like so: ``` from transformers import AutoConfig config = AutoConfig.from_pretrained("klue/roberta-small") # note, this is equivalent to instantiating a RobertaConfig model = RobertaModel(config) ``` <|||||>Thanks a lot :)
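Building on the answer above, the question-answering variant with randomly initialized weights would look like this (a sketch; training from scratch then proceeds exactly like fine-tuning, e.g. with the `Trainer` or the `run_qa.py` example script):

```python
from transformers import AutoConfig, AutoModelForQuestionAnswering, AutoTokenizer

config = AutoConfig.from_pretrained("roberta-base")          # architecture only
tokenizer = AutoTokenizer.from_pretrained("roberta-base")    # the tokenizer can be reused as-is
model = AutoModelForQuestionAnswering.from_config(config)    # random weights, no pretraining
```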
transformers
15,068
closed
fix: #14486 do not use BertPooler in DPR
# What does this PR do? <!-- Congratulations! You've made it this far! You're not quite done yet though. Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution. Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change. Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost. --> <!-- Remove if not applicable --> Fixes #14486 ## Before submitting - [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case). - [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section? - [x] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link to it if that's the case. - [ ] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/master/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/master/docs#writing-source-documentation). - [ ] Did you write any new necessary tests? ## Who can review? @patrickvonplaten <!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @ If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**. Please tag fewer than 3 people. Models: - albert, bert, xlm: @LysandreJik - blenderbot, bart, marian, pegasus, encoderdecoder, t5: @patrickvonplaten, @patil-suraj - longformer, reformer, transfoxl, xlnet: @patrickvonplaten - fsmt: @stas00 - funnel: @sgugger - gpt2: @patrickvonplaten, @LysandreJik - rag: @patrickvonplaten, @lhoestq - tensorflow: @LysandreJik Library: - benchmarks: @patrickvonplaten - deepspeed: @stas00 - ray/raytune: @richardliaw, @amogkam - text generation: @patrickvonplaten - tokenizers: @n1t0, @LysandreJik - trainer: @sgugger - pipelines: @LysandreJik Documentation: @sgugger HF projects: - datasets: [different repo](https://github.com/huggingface/datasets) - rust tokenizers: [different repo](https://github.com/huggingface/tokenizers) Examples: - maintained examples (not research project or legacy): @sgugger, @patil-suraj - research_projects/bert-loses-patience: @JetRunner - research_projects/distillation: @VictorSanh --> ## What did I do? Fixed the lines we discussed about. `pytest tests/test_modeling_dpr.py` gives the same output before and after: ``` ============================= test session starts ============================== platform linux -- Python 3.7.11, pytest-6.2.5, py-1.11.0, pluggy-1.0.0 rootdir: /people/lerner/code/transformers, configfile: setup.cfg plugins: forked-1.4.0, xdist-2.5.0, dash-2.0.0, timeout-2.0.2, hypothesis-6.34.2 collected 52 items tests/test_modeling_dpr.py .....ss...............s..ssss.s.............. 
[ 86%] sss..ss [100%] =============================== warnings summary =============================== ../../.local/lib/python3.7/site-packages/pandas/util/testing.py:20 /people/lerner/.local/lib/python3.7/site-packages/pandas/util/testing.py:20: DeprecationWarning: `np.bool` is a deprecated alias for the builtin `bool`. To silence this warning, use `bool` by itself. Doing this will not modify any behavior and is safe. If you specifically wanted the numpy scalar type, use `np.bool_` here. Deprecated in NumPy 1.20; for more details and guidance: https://numpy.org/devdocs/release/1.20.0-notes.html#deprecations from pandas._libs import testing as _testing ../../../../vol/work/lerner/anaconda3/envs/transformers/lib/python3.7/site-packages/flatbuffers/compat.py:19 /vol/work/lerner/anaconda3/envs/transformers/lib/python3.7/site-packages/flatbuffers/compat.py:19: DeprecationWarning: the imp module is deprecated in favour of importlib; see the module's documentation for alternative uses import imp -- Docs: https://docs.pytest.org/en/stable/warnings.html ================== 39 passed, 13 skipped, 2 warnings in 4.10s ================== ``` Ran all the other stuff as instructed in CONTRIBUTING: ``` $ make fixup Checking/fixing src/transformers/models/dpr/modeling_dpr.py All done! ✨ 🍰 ✨ 1 file left unchanged. python utils/custom_init_isort.py python utils/style_doc.py src/transformers docs/source --max_len 119 running deps_table_update updating src/transformers/dependency_versions_table.py python utils/check_copies.py python utils/check_table.py python utils/check_dummies.py python utils/check_repo.py Checking all models are included. Checking all models are public. 2022-01-07 13:53:00.515988: W tensorflow/stream_executor/platform/default/dso_loader.cc:64] Could not load dynamic library 'libcudart.so.11.0'; dlerror: libcudart.so.11.0: cannot open shared object file: No such file or directory; LD_LIBRARY_PATH: /usr/local/cuda/lib64 Checking all models are properly tested. Checking all objects are properly documented. Checking all models are in at least one auto class. python utils/check_inits.py python utils/tests_fetcher.py --sanity_check $ make quality black --check examples tests src utils All done! ✨ 🍰 ✨ 1231 files would be left unchanged. isort --check-only examples tests src utils Skipped 1 files python utils/custom_init_isort.py --check_only flake8 examples tests src utils python utils/style_doc.py src/transformers docs/source --max_len 119 --check_only $ make repo-consistency python utils/check_copies.py python utils/check_table.py python utils/check_dummies.py python utils/check_repo.py Checking all models are included. Checking all models are public. 2022-01-07 13:54:31.057343: W tensorflow/stream_executor/platform/default/dso_loader.cc:64] Could not load dynamic library 'libcudart.so.11.0'; dlerror: libcudart.so.11.0: cannot open shared object file: No such file or directory; LD_LIBRARY_PATH: /usr/local/cuda/lib64 Checking all models are properly tested. Checking all objects are properly documented. Checking all models are in at least one auto class. python utils/check_inits.py python utils/tests_fetcher.py --sanity_check ```
01-07-2022 13:01:51
01-07-2022 13:01:51
@PaulLerner - thanks a lot for your PR - I took the liberty of fixing the final failing tests (hope that was ok)<|||||>Hi, you're welcome, sorry I didn't notice that some tests had failed!
transformers
15,067
closed
fix CLIP fast tokenizer and change some properties of the slow version
# What does this PR do? As discussed in Issue #12648, the fast version of the CLIP tokenizer does not give at all the same tokenization as the slow version. From my point of view, the tokenization difference is really important and I think it would be important to fix this difference quickly (and as proposed in the issue, the **best is maybe to just remove - for the moment - the fast version of this tokenizer**). ## The PR in details This PR proposes several changes to reduce this difference as much as possible. There are however several subtleties that I need to highlight and would love to hear your feedback on: 1. From my point of view, the difference between the slow and the fast version of the CLIP tokenizer comes from the fact that the original tokenizer applies the BPE algorithm on an already very pre-tokenized text ([line](https://github.com/openai/CLIP/blob/main/clip/simple_tokenizer.py#L124) of the original code where this pre-tokenization happen). The original tokenizer uses a lot of pre-tokenization rules and especially split on spaces and leave them out. In the current version of the CLIP fast tokenizer there is only a ByteLevel pre-tokenization which keeps the spaces which are transformed into Ġ by the ByteLevel pre-tokenizer ([code](https://github.com/huggingface/transformers/blob/master/src/transformers/convert_slow_tokenizer.py#L838)). I propose to discuss in more detail the solution I propose in [this comment](https://github.com/huggingface/transformers/pull/15067#discussion_r780993421). 2. Then, the original tokenizer, uses quite powerful tools to [clean the text](https://github.com/openai/CLIP/blob/main/clip/simple_tokenizer.py#L50-L53): 1) the [`ftfy.fix_text`](https://ftfy.readthedocs.io/en/latest/explain.html#ftfy.fix_text) method and the 2) the [`html.unescape` method](https://docs.python.org/3/library/html.html#html.unescape) . You will find below a copy of the docstrings of these 2 methods. These methods are therefore very similar to normalization because they will greatly modify the text that will be given as input to the model. I think that here we come across the same type of "problem" that we could have with vision for example: **at what point do we consider that it is pre-processing and that it goes beyond the scope of transformers?** Personally, I think that replicating exactly what these 2 methods do is out of the scope of transformers and that we just need to identify if there are normalization steps in these methods that would be considered as very frequent and that should therefore be applied in the normalizer. I identified 3 types of normalization and wrote a test that tests the equality of tokenization of complicated / specious texts between the slow and fast version of the tokenizer . I refer you to comments [2](https://github.com/huggingface/transformers/pull/15067/files#r780360824) and [3](https://github.com/huggingface/transformers/pull/15067/files#r780361274) where I discuss respectively these points in more details. 3. Then, I have to talk to you about the fact that the components available today in the tokenizers library do not allow to make a tokenizer fast perfectly adapted to the CLIP one. More specifically, there are two components of the backend_tokenizer that would ideally need to chain 2 sub-operations and today we can't do that (we can only apply one). 
It is possible that in a future version of tokenizers these problems will be solved, but it will require in any case to wait for a next version (see the 2 open issues respectively for the 2 components: [issue 1](https://github.com/huggingface/tokenizers/issues/872#issuecomment-1007239999) and [issue 2](https://github.com/huggingface/tokenizers/issues/873#issuecomment-1007236641)). This issue concerns: - **the `decoder`**. For the decoder, the CLIP tokenizer would need to chain `ByteLevel` decoder and `BPEDecoder`. Nevertheless, even if it is not possible to do it in rust today, **I have a solution that results in a 100% identical behavior**. There may be a small loss of efficiency (but it should still be more efficient than going through the slow tokenizer) and it will only concern the decode function. My hack consists in including a `ByteLevel` component in the `backend_tokenizer` then monkey patch the `self.backend_tokenizer.decode` method to perform the operation that should have been done by the `BPEDecoder` decoder. See this [comment 4](https://github.com/huggingface/transformers/pull/15067/files#r780369218) for the hack. - **the `post_processor`**. For the post_processor, the CLIP tokenizer would need to chain `ByteLevel` post-processor (for the offsets) and the `TemplateProcessing` to add special tokens to the output (one template for a single sentence and another template for a pair of sentences). Today, it is not possible to chain these 2 processors but there is a processor `RobertaProcessing` that contains the `ByteLevel` processing and that adds special tokens as desired for the template with a sentence. The difficulty here posed by the post-processor is due to the post-processing of a pair of sentences: something that the original CLIP model does not define. What the slow tokenizer proposed to do if you gave it a pair of sentences as input is to simply join the 2 sentences with the template `"bos_token" tokens_for_sentence_1 tokens_for_sentence_2 "eos_token" `. Unfortunately, roberta's template is `"bos_token" tokens_for_sentence_1 "eos_token" "eos_token" tokens_for_sentence_2 "eos_token"` (and the `token_type_ids` don't match either). So, **I made some modifications so that the slow and fast tokenizer follow the same template as Roberta for tokenizing a pair of sentences. This is the change that annoys me the most in this PR because it modifies the behavior of the slow tokenizer: I wanted to propose it anyway because I'm not sure to see the use case for tokenization of sentence pairs for CLIP.** 4. I also have other modifications in the slow version of the tokenizer because I found that some arguments in the init could not really be used. These are the following arguments: - **do_lower_case**: [This line](https://github.com/huggingface/transformers/blob/v4.15.0/src/transformers/models/clip/tokenization_clip.py#L179) and [this line](https://github.com/huggingface/transformers/blob/v4.15.0/src/transformers/models/clip/tokenization_clip.py#L316) hardcode the lowercasing of the text. - **add_prefix_space**: I don't think this tokenizer "has been trained to treat spaces as parts of tokens". Currently adding `add_prefix_space=True` to the slow tokenizer does not change the tokenization. This is because the space is added at the beginning of the sentence before applying the pre-tokenization using the regex `r"""<\|startoftext\|>|<\|endoftext\|>|'s|'t|'re|'ve|'m|'ll|'d|[\p{L}]+|[\p{N}]|[^\s\p{L}\p{N}]+"""`. 
So I removed all references to `add_prefix_space` and forced this argument to be `False` in the tokenizer fast components. 5. Regarding the tests, I think it is important to run all the CLIP tokenization tests with the `ftfy` dependency installed. Without the `ftfy` dependency installed, the slow tokenizer uses the `BasicTokenizer` of BERT which gives a different tokenization. So I tagged some tests as requiring with the ftfy dependency and added a new r`un_tests_tokenization_CLIP` in CircleCI. **Do you agree with this addition? Would you have a better way to do it?** 6. Without the `ftfy` dependency installed, the slow tokenizer uses the `BasicTokenizer` of BERT which gives a different tokenization. This leads to another question: **shouldn't we add the same normalizations to this default tokenizer as the normalizations chosen for the fast tokenizer?** (if yes, I think that we can do it in a next PR) 7. Then, I made 3 other modification: - I removed the `pad_token_id` property hardcoded to 0. I removed the custom test for CLIP `test_pretokenized_inputs` (it was not executed before) after this change. **Was there a reason for hardcoding it to token 0? Isn't it better to leave it equal to the value of the pad_token id set in the init?** If I have missed something and it is necessary to hardcode this value, I can absolutely revert the changes - I modified the vocabulary and the merges in the tests because before the merge list was using tokens that were not in the vocabulary (and that prevented to initialize a tokenizer fast) - I have remove `errors` from the `CLIPTokenizerFast` docstring because I don't think that it's an argument that can be used for this object. ## Documentation extracts ### ftfy.fix_text method > Given Unicode text as input, fix inconsistencies and glitches in it, such as mojibake (text that was decoded in the wrong encoding). ### html.unescape > html.unescape(s) > Convert all named and numeric character references (e.g. `&gt;`, `&#62;`, `&#x3e;`) in the string s to the corresponding Unicode characters. This function uses the rules defined by the HTML 5 standard for both valid and invalid character references, and the list of HTML 5 named character references. ## Before submitting - [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case). - [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section? - [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link to it if that's the case. - [x] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/master/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/master/docs#writing-source-documentation). - [x] Did you write any new necessary tests? ## Who can review? Anyone in the community is free to review the PR once the tests have passed. 
It would be really great if @patil-suraj , @LysandreJik and @sgugger you could share your opinion :hugs: : - @patil-suraj in particular because I propose to modify the slow version of the tokenizer - @LysandreJik and @sgugger in particular : - to know your opinion about the post-processor template with sentence pairs (as for me this is my least favorite point in my PR) - to know your opinion about the way to test with `ftfy` dependency - and if you have time, I'd be really happy to know your opinion about all the other points Thanks a lot in advance :hugs:
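For readers following along, here is a minimal sketch (not the exact code of this PR) of how the normalization and pre-tokenization chain described in points 1 and 2 could be expressed with the `tokenizers` API. The regex is the one quoted above; the exact list of normalizers is illustrative:

```python
from tokenizers import Regex, normalizers, pre_tokenizers

# Illustrative chain only: mimic the slow tokenizer's whitespace cleaning + lowercasing,
# then split with the original CLIP regex (dropping the whitespace) before the byte-level mapping.
clip_regex = Regex(
    r"""<\|startoftext\|>|<\|endoftext\|>|'s|'t|'re|'ve|'m|'ll|'d|[\p{L}]+|[\p{N}]|[^\s\p{L}\p{N}]+"""
)

normalizer = normalizers.Sequence([
    normalizers.NFC(),
    normalizers.Replace(Regex(r"\s+"), " "),  # collapse runs of whitespace, as the slow tokenizer does
    normalizers.Lowercase(),                  # lowercasing is hardcoded in the slow tokenizer
])

pre_tokenizer = pre_tokenizers.Sequence([
    # keep only the matches of the CLIP pattern (whitespace is left out)
    pre_tokenizers.Split(clip_regex, behavior="removed", invert=True),
    # then map each piece to the byte-level alphabet without adding a prefix space
    pre_tokenizers.ByteLevel(add_prefix_space=False),
])

# these would then be assigned to a fast tokenizer's backend, e.g.:
# backend_tokenizer.normalizer = normalizer
# backend_tokenizer.pre_tokenizer = pre_tokenizer
```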
01-07-2022 12:24:00
01-07-2022 12:24:00
@sgugger , your comment makes a lot of sense! Indeed, now that you share the idea of the `tokenizer_vxxx.json` file, Lysandre showed me this a while ago. Have you already implemented this kind of solution for another part of the library (to try to propose something similar)? Also, have you ever had a similar case of an object not behaving as expected? Do you have any ideas how to warn users if we keep the old `CLIPTokenizerFast` object (if we create several objects, maybe it's just to leave a warning log in the first object)?<|||||>@n1t0 will know if it was already used for some tokenizers or not.<|||||>> - to know your opinion about the post-processor template with sentence pairs (as for me this is my least favorite point in my PR) It's fine for me as I'm not convinced it's used for sentence pairs either, but maybe @patil-suraj or @NielsRogge can confirm > - to know your opinion about the way to test with ftfy dependency For the tests we could add it to the custom tokenizer run which is created for that purpose: https://github.com/huggingface/transformers/blob/4df69506a8250d4bd298d457090b321b26b0c77f/.circleci/config.yml#L538-L565 > - and if you have time, I'd be really happy to know your opinion about all the other points I think what you propose is sensible. This will result in a breaking change, which is fine if it doesn't affect previous versions as it's really a bugfix, imo. Sylvain's proposal of using the versioning system could work, indeed. From what I'm seeing there aren't many repos with a fast tokenizer script, but it's very possible that users have fast tokenizer files stored locally, would these users benefit from the fix? Given how broken it is I'm actually wondering if removing the fast tokenizer isn't the best way forward, actually. It's the first time we would ever do something like that, but it would not break previous versions and it would ensure that all future versions will continue working correctly. If we decide to go this way, we could actually have an automatic way of returning a slow CLIP tokenizer from the fast tokenizer instantiation, since they're able to save themselves with slow files which can then be re-used by the slow tokenizer we would be returning. Very inelegant and unoptimized as extra serialization + deserialization + promise of returning a fast tokenizer while returning a slow one without many feautres of the fast tokenizer (are they really useful for a CLIP tokenizer though? thinking of offsets, for example).<|||||>@LysandreJik , Thanks a lot for your feedback! > For the tests we could add it to the custom tokenizer run which is created for that purpose: Well noted, I moved it (corresponding [commit](https://github.com/huggingface/transformers/pull/15067/commits/c9eebb7882fbee9c3cb1cae3a28bb90274a7ea03)). > Given how broken it is I'm actually wondering if removing the fast tokenizer isn't the best way forward, actually. It's the first time we would ever do something like that, but it would not break previous versions and it would ensure that all future versions will continue working correctly. If we decide to go this way, we could actually have an automatic way of returning a slow CLIP tokenizer from the fast tokenizer instantiation, since they're able to save themselves with slow files which can then be re-used by the slow tokenizer we would be returning. 
Very inelegant and unoptimized as extra serialization + deserialization + promise of returning a fast tokenizer while returning a slow one without many features of the fast tokenizer (are they really useful for a CLIP tokenizer though? thinking of offsets, for example). I prefer to make sure I don't miss anything here: you think we should replace `CLIPTokenizerFast` with a bridge that would build a `CLIPTokenizer` instead - not the tokenizer proposed in this PR. Do I have this right?
transformers
15,066
closed
How do I change the classification head of a model from multi-label to multi-class?
I am downloading the model https://huggingface.co/unitary/unbiased-toxic-roberta/tree/main and then using it.

**Transformers version: '4.11.3'**

**unbiased-toxic-roberta**: the model is trained on Jigsaw data, which has **16 classes (multi-label)**, and uses a **BCE** loss function.

`Labels are like: sentence1 ---> label [0,0,..1,0..,0]`

**I am using this model to fine-tune on a binary classification problem (0 and 1 as my labels).**

`Labels are like: sentence1 ---> label 1`
`Labels are like: sentence2 ---> label 0`

My evaluation metric is simply accuracy, defined below. I have written the code below:

```
def compute_metrics(eval_pred):
    logits, labels = eval_pred
    predictions = np.argmax(logits, axis=-1)
    acc = np.sum(predictions == labels) / predictions.shape[0]
    return {"accuracy": acc}
```

```
model = tr.RobertaForSequenceClassification.from_pretrained("/home/pc/unbiased_toxic_roberta", num_labels=2)  # ignore_mismatched_sizes=True,
model.to(device)

training_args = tr.TrainingArguments(
    output_dir='/home/pc/1_Proj_hate_speech/results_roberta',  # output directory
    overwrite_output_dir=True,
    num_train_epochs=20,               # total number of training epochs
    per_device_train_batch_size=16,    # batch size per device during training
    per_device_eval_batch_size=32,     # batch size for evaluation
    learning_rate=2e-5,
    warmup_steps=1000,                 # number of warmup steps for learning rate scheduler
    weight_decay=0.01,                 # strength of weight decay
    logging_dir='./logs3',             # directory for storing logs
    logging_steps=1000,
    evaluation_strategy="epoch",
    save_strategy="epoch",
    load_best_model_at_end=True
)

trainer = tr.Trainer(
    model=model,                       # the instantiated 🤗 Transformers model to be trained
    args=training_args,                # training arguments, defined above
    train_dataset=train_data,          # training dataset
    eval_dataset=val_data,             # evaluation dataset
    compute_metrics=compute_metrics
)
```

Error:

```
- classifier.out_proj.weight: found shape torch.Size([16, 768]) in the checkpoint and torch.Size([2, 768]) in the model instantiated
- classifier.out_proj.bias: found shape torch.Size([16]) in the checkpoint and torch.Size([2]) in the model instantiated
```

**How can I change the code to accommodate the new number of classes and train as a standard (single-label) classification problem with my accuracy metric instead of BCE?**
01-07-2022 12:02:04
01-07-2022 12:02:04
One can replace the head by setting the `ignore_mismatched_sizes` argument to `True` in the `from_pretrained` method, like so: ``` from transformers import AutoModelForSequenceClassification model = AutoModelForSequenceClassification.from_pretrained("unitary/unbiased-toxic-roberta", num_labels=2, ignore_mismatched_sizes=True) ``` This will print a warning, indicating which parameters are randomly initialized. <|||||>> One can replace the head by setting the `ignore_mismatched_sizes` argument to `True` in the `from_pretrained` method, like so: > > ``` > from transformers import AutoModelForSequenceClassification > > model = AutoModelForSequenceClassification.from_pretrained("unitary/unbiased-toxic-roberta", num_labels=2, ignore_mismatched_sizes=True) > ``` > > This will print a warning, indicating which parameters are randomly initialized. **This introduces another error:** ``` Some weights of RobertaForSequenceClassification were not initialized from the model checkpoint at /home/pc/unbiased_toxic_roberta and are newly initialized because the shapes did not match: - classifier.out_proj.weight: found shape torch.Size([16, 768]) in the checkpoint and torch.Size([2, 768]) in the model instantiated - classifier.out_proj.bias: found shape torch.Size([16]) in the checkpoint and torch.Size([2]) in the model instantiated You should probably TRAIN this model on a down-stream task to be able to use it for predictions and inference. ``` ``` loading configuration file /home/pc/unbiased_toxic_roberta/config.json Model config RobertaConfig { "architectures": [ "RobertaForSequenceClassification" ], "attention_probs_dropout_prob": 0.1, "bos_token_id": 0, "classifier_dropout": null, "eos_token_id": 2, "function_to_apply": "sigmoid", "gradient_checkpointing": false, "hidden_act": "gelu", "hidden_dropout_prob": 0.1, "hidden_size": 768, "initializer_range": 0.02, "intermediate_size": 3072, "layer_norm_eps": 1e-05, "max_position_embeddings": 514, "model_type": "roberta", "num_attention_heads": 12, "num_hidden_layers": 12, "pad_token_id": 1, "position_embedding_type": "absolute", "problem_type": "multi_label_classification", "transformers_version": "4.10.3", "type_vocab_size": 1, "use_cache": true, "vocab_size": 50265 } ``` ``` File "/home/pc/.local/lib/python3.6/site-packages/torch/nn/functional.py", line 2958, in binary_cross_entropy_with_logits raise ValueError("Target size ({}) must be the same as input size ({})".format(target.size(), input.size())) ValueError: Target size (torch.Size([2])) must be the same as input size (torch.Size([2, 2])) ``` **Here is the model structure:** ``` (11): RobertaLayer( (attention): RobertaAttention( (self): RobertaSelfAttention( (query): Linear(in_features=768, out_features=768, bias=True) (key): Linear(in_features=768, out_features=768, bias=True) (value): Linear(in_features=768, out_features=768, bias=True) (dropout): Dropout(p=0.1, inplace=False) ) (output): RobertaSelfOutput( (dense): Linear(in_features=768, out_features=768, bias=True) (LayerNorm): LayerNorm((768,), eps=1e-05, elementwise_affine=True) (dropout): Dropout(p=0.1, inplace=False) ) ) (intermediate): RobertaIntermediate( (dense): Linear(in_features=768, out_features=3072, bias=True) ) (output): RobertaOutput( (dense): Linear(in_features=3072, out_features=768, bias=True) (LayerNorm): LayerNorm((768,), eps=1e-05, elementwise_affine=True) (dropout): Dropout(p=0.1, inplace=False) ) ) ) ) ) (classifier): RobertaClassificationHead( (dense): Linear(in_features=768, out_features=768, bias=True) (dropout): 
Dropout(p=0.1, inplace=False) (out_proj): Linear(in_features=768, out_features=2, bias=True) ) ) ```<|||||>Apologies, you also have to set the problem_type to "single_label_classification": ``` from transformers import AutoModelForSequenceClassification model = AutoModelForSequenceClassification.from_pretrained("unitary/unbiased-toxic-roberta", problem_type="single_label_classification", num_labels=2, ignore_mismatched_sizes=True) ``` This ensures the cross-entropy loss is used instead of the binary cross-entropy with logits (BCE) loss.
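To put the two answers above together, here is a minimal end-to-end sketch; only the `from_pretrained` arguments come from the answer above, while the example sentences and label values are made up:

```python
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

# Re-initialize the 16-class multi-label head as a 2-class single-label head.
model = AutoModelForSequenceClassification.from_pretrained(
    "unitary/unbiased-toxic-roberta",
    num_labels=2,
    problem_type="single_label_classification",  # use cross-entropy instead of BCE
    ignore_mismatched_sizes=True,                # allow the [16, 768] -> [2, 768] head swap
)
tokenizer = AutoTokenizer.from_pretrained("unitary/unbiased-toxic-roberta")

# Single-label targets are plain class indices of shape (batch_size,), not multi-hot vectors.
enc = tokenizer(["a first example sentence", "a second example sentence"],
                padding=True, return_tensors="pt")
labels = torch.tensor([1, 0])

outputs = model(**enc, labels=labels)
print(outputs.loss, outputs.logits.shape)  # scalar cross-entropy loss, logits of shape (2, 2)
```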
transformers
15,065
closed
compile error when installing transformers[flax]
## Environment info Running on windows 10. - `transformers` version: - Platform: Windows 10 - Python version: 3.9.1 - PyTorch version (GPU?): 1.9.1+cu102 ### Who can help @patil-suraj ## Information I am trying to use `FlaxVisionEncoderDecoderModel` and ran into the same issue as [this dev.](https://github.com/huggingface/transformers/issues/14831) However attempting to run ` pip install transformers[flax]` results in an error: ```bash PS G:\Projects\> pip install transformers[flax] Requirement already satisfied: transformers[flax] in c:\python39\lib\site-packages (4.15.0) Requirement already satisfied: sacremoses in c:\python39\lib\site-packages (from transformers[flax]) (0.0.46) Requirement already satisfied: packaging>=20.0 in c:\python39\lib\site-packages (from transformers[flax]) (20.9) Requirement already satisfied: filelock in c:\python39\lib\site-packages (from transformers[flax]) (3.0.12) Requirement already satisfied: pyyaml>=5.1 in c:\users\bobak\appdata\roaming\python\python39\site-packages (from transformers[flax]) (5.4.1) Requirement already satisfied: regex!=2019.12.17 in c:\python39\lib\site-packages (from transformers[flax]) (2021.9.24) Requirement already satisfied: tokenizers<0.11,>=0.10.1 in c:\python39\lib\site-packages (from transformers[flax]) (0.10.3) Requirement already satisfied: huggingface-hub<1.0,>=0.1.0 in c:\python39\lib\site-packages (from transformers[flax]) (0.2.1) Requirement already satisfied: requests in c:\python39\lib\site-packages (from transformers[flax]) (2.25.1) Requirement already satisfied: numpy>=1.17 in c:\users\bobak\appdata\roaming\python\python39\site-packages (from transformers[flax]) (1.19.5) Requirement already satisfied: tqdm>=4.27 in c:\python39\lib\site-packages (from transformers[flax]) (4.62.1) Requirement already satisfied: jax>=0.2.8 in c:\python39\lib\site-packages (from transformers[flax]) (0.2.26) Collecting optax>=0.0.8 Using cached optax-0.1.0-py3-none-any.whl (126 kB) Collecting flax>=0.3.5 Using cached flax-0.3.6-py3-none-any.whl (207 kB) Collecting transformers[flax] Using cached transformers-4.14.1-py3-none-any.whl (3.4 MB) ... Using cached transformers-4.9.2-py3-none-any.whl (2.6 MB) Collecting huggingface-hub==0.0.12 Using cached huggingface_hub-0.0.12-py3-none-any.whl (37 kB) Collecting transformers[flax] Using cached transformers-4.9.1-py3-none-any.whl (2.6 MB) ... Using cached transformers-4.7.0-py3-none-any.whl (2.5 MB) Collecting huggingface-hub==0.0.8 Using cached huggingface_hub-0.0.8-py3-none-any.whl (34 kB) Collecting transformers[flax] Using cached transformers-4.6.1-py3-none-any.whl (2.2 MB) ... Using cached transformers-4.2.2-py3-none-any.whl (1.8 MB) Collecting tokenizers==0.9.4 Using cached tokenizers-0.9.4-cp39-cp39-win_amd64.whl (1.9 MB) Collecting transformers[flax] Using cached transformers-4.2.1-py3-none-any.whl (1.8 MB) ... Using cached transformers-3.5.1-py3-none-any.whl (1.3 MB) Collecting sentencepiece==0.1.91 Using cached sentencepiece-0.1.91.tar.gz (500 kB) Preparing metadata (setup.py) ... done Collecting tokenizers==0.9.3 Using cached tokenizers-0.9.3.tar.gz (172 kB) Installing build dependencies ... done Getting requirements to build wheel ... done Preparing metadata (pyproject.toml) ... 
done Requirement already satisfied: protobuf in c:\python39\lib\site-packages (from transformers[flax]) (3.18.0) Collecting transformers[flax] Using cached transformers-3.5.0-py3-none-any.whl (1.3 MB) Using cached transformers-3.4.0-py3-none-any.whl (1.3 MB) Collecting sentencepiece!=0.1.92 Using cached sentencepiece-0.1.96-cp39-cp39-win_amd64.whl (1.1 MB) Collecting tokenizers==0.9.2 Using cached tokenizers-0.9.2.tar.gz (170 kB) Installing build dependencies ... done Getting requirements to build wheel ... done Preparing metadata (pyproject.toml) ... done Collecting transformers[flax] Using cached transformers-3.3.1-py3-none-any.whl (1.1 MB) WARNING: transformers 3.3.1 does not provide the extra 'flax' Collecting tokenizers==0.8.1.rc2 Using cached tokenizers-0.8.1rc2.tar.gz (97 kB) Installing build dependencies ... done Getting requirements to build wheel ... done Preparing metadata (pyproject.toml) ... done Requirement already satisfied: colorama in c:\python39\lib\site-packages (from tqdm>=4.27->transformers[flax]) (0.4.4) Requirement already satisfied: pyparsing>=2.0.2 in c:\python39\lib\site-packages (from packaging>=20.0->transformers[flax]) (2.4.7) Requirement already satisfied: idna<3,>=2.5 in c:\python39\lib\site-packages (from requests->transformers[flax]) (2.10) Requirement already satisfied: chardet<5,>=3.0.2 in c:\python39\lib\site-packages (from requests->transformers[flax]) (4.0.0) Requirement already satisfied: urllib3<1.27,>=1.21.1 in c:\python39\lib\site-packages (from requests->transformers[flax]) (1.26.3) Requirement already satisfied: certifi>=2017.4.17 in c:\python39\lib\site-packages (from requests->transformers[flax]) (2020.12.5) Requirement already satisfied: joblib in c:\python39\lib\site-packages (from sacremoses->transformers[flax]) (1.0.1) Requirement already satisfied: six in c:\python39\lib\site-packages (from sacremoses->transformers[flax]) (1.15.0) Requirement already satisfied: click in c:\python39\lib\site-packages (from sacremoses->transformers[flax]) (7.1.2) Building wheels for collected packages: tokenizers Building wheel for tokenizers (pyproject.toml) ... error ERROR: Command errored out with exit status 1: command: 'c:\python39\python.exe' 'c:\python39\lib\site-packages\pip\_vendor\pep517\in_process\_in_process.py' build_wheel 'C:\Users\Bobak\AppData\Local\Temp\tmpmeoat5a5' cwd: C:\Users\Bobak\AppData\Local\Temp\pip-install-h_ao1nbk\tokenizers_631ba1cc987c45708c975abb68e99947 Complete output (251 lines): C:\Users\Bobak\AppData\Local\Temp\pip-build-env-ojm6weqz\overlay\Lib\site-packages\setuptools\dist.py:493: UserWarning: Normalizing '0.8.1.rc2' to '0.8.1rc2' warnings.warn(tmpl.format(**locals())) running bdist_wheel running build running build_py creating build creating build\lib.win-amd64-3.9 ... 
copying tokenizers\pre_tokenizers\__init__.pyi -> build\lib.win-amd64-3.9\tokenizers\pre_tokenizers copying tokenizers\processors\__init__.pyi -> build\lib.win-amd64-3.9\tokenizers\processors copying tokenizers\trainers\__init__.pyi -> build\lib.win-amd64-3.9\tokenizers\trainers running build_ext Updating crates.io index Updating git repository `https://github.com/n1t0/rayon-cond` cargo rustc --lib --manifest-path Cargo.toml --target x86_64-pc-windows-msvc --release -v --features pyo3/extension-module -- --crate-type cdylib warning: unused manifest key: target.x86_64-apple-darwin.rustflags Compiling proc-macro2 v1.0.36 Compiling unicode-xid v0.2.2 Compiling syn v1.0.85 Compiling autocfg v1.0.1 Compiling memchr v2.4.1 Compiling serde v1.0.133 Compiling cfg-if v1.0.0 Compiling serde_derive v1.0.133 Running `rustc --crate-name build_script_build --edition=2018 C:\Users\Bobak\.cargo\registry\src\github.com-1ecc6299db9ec823\proc-macro2-1.0.36\build.rs --error-format=json --json=diagnostic-rendered-ansi --crate-type bin --emit=dep-info,link -C opt-level=3 -Cembed-bitcode=no --cfg "feature=\"default\"" --cfg "feature=\"proc-macro\"" -C metadata=57eed64b2791e1b0 -C extra-filename=-57eed64b2791e1b0 --out-dir C:\Users\Bobak\AppData\Local\Temp\pip-install-h_ao1nbk\tokenizers_631ba1cc987c45708c975abb68e99947\target\release\build\proc-macro2-57eed64b2791e1b0 -L dependency=C:\Users\Bobak\AppData\Local\Temp\pip-install-h_ao1nbk\tokenizers_631ba1cc987c45708c975abb68e99947\target\release\deps --cap-lints allow` Running `rustc --crate-name unicode_xid C:\Users\Bobak\.cargo\registry\src\github.com-1ecc6299db9ec823\unicode-xid-0.2.2\src\lib.rs --error-format=json --json=diagnostic-rendered-ansi,artifacts --crate-type lib --emit=dep-info,metadata,link -C opt-level=3 -Cembed-bitcode=no --cfg "feature=\"default\"" -C metadata=6f559fb7bfe8c6bc -C extra-filename=-6f559fb7bfe8c6bc --out-dir C:\Users\Bobak\AppData\Local\Temp\pip-install-h_ao1nbk\tokenizers_631ba1cc987c45708c975abb68e99947\target\release\deps -L dependency=C:\Users\Bobak\AppData\Local\Temp\pip-install-h_ao1nbk\tokenizers_631ba1cc987c45708c975abb68e99947\target\release\deps --cap-lints allow` Running `rustc --crate-name build_script_build --edition=2018 C:\Users\Bobak\.cargo\registry\src\github.com-1ecc6299db9ec823\syn-1.0.85\build.rs --error-format=json --json=diagnostic-rendered-ansi --crate-type bin --emit=dep-info,link -C opt-level=3 -Cembed-bitcode=no --cfg "feature=\"clone-impls\"" --cfg "feature=\"default\"" --cfg "feature=\"derive\"" --cfg "feature=\"extra-traits\"" --cfg "feature=\"full\"" --cfg "feature=\"parsing\"" --cfg "feature=\"printing\"" --cfg "feature=\"proc-macro\"" --cfg "feature=\"quote\"" -C metadata=1244fbe766a9b584 -C extra-filename=-1244fbe766a9b584 --out-dir C:\Users\Bobak\AppData\Local\Temp\pip-install-h_ao1nbk\tokenizers_631ba1cc987c45708c975abb68e99947\target\release\build\syn-1244fbe766a9b584 -L dependency=C:\Users\Bobak\AppData\Local\Temp\pip-install-h_ao1nbk\tokenizers_631ba1cc987c45708c975abb68e99947\target\release\deps --cap-lints allow` ... 
Compiling console v0.15.0 Running `rustc --crate-name console --edition=2018 C:\Users\Bobak\.cargo\registry\src\github.com-1ecc6299db9ec823\console-0.15.0\src\lib.rs --error-format=json --json=diagnostic-rendered-ansi,artifacts --crate-type lib --emit=dep-info,metadata,link -C opt-level=3 -Cembed-bitcode=no --cfg "feature=\"ansi-parsing\"" --cfg "feature=\"default\"" --cfg "feature=\"regex\"" --cfg "feature=\"unicode-width\"" -C metadata=7f04b326976a8469 -C extra-filename=-7f04b326976a8469 --out-dir C:\Users\Bobak\AppData\Local\Temp\pip-install-h_ao1nbk\tokenizers_631ba1cc987c45708c975abb68e99947\target\x86_64-pc-windows-msvc\release\deps --target x86_64-pc-windows-msvc -L dependency=C:\Users\Bobak\AppData\Local\Temp\pip-install-h_ao1nbk\tokenizers_631ba1cc987c45708c975abb68e99947\target\x86_64-pc-windows-msvc\release\deps -L dependency=C:\Users\Bobak\AppData\Local\Temp\pip-install-h_ao1nbk\tokenizers_631ba1cc987c45708c975abb68e99947\target\release\deps --extern encode_unicode=C:\Users\Bobak\AppData\Local\Temp\pip-install-h_ao1nbk\tokenizers_631ba1cc987c45708c975abb68e99947\target\x86_64-pc-windows-msvc\release\deps\libencode_unicode-f5082f2d76afde34.rmeta --extern libc=C:\Users\Bobak\AppData\Local\Temp\pip-install-h_ao1nbk\tokenizers_631ba1cc987c45708c975abb68e99947\target\x86_64-pc-windows-msvc\release\deps\liblibc-3099232219a69ddb.rmeta --extern once_cell=C:\Users\Bobak\AppData\Local\Temp\pip-install-h_ao1nbk\tokenizers_631ba1cc987c45708c975abb68e99947\target\x86_64-pc-windows-msvc\release\deps\libonce_cell-e26956b93b1a289e.rmeta --extern regex=C:\Users\Bobak\AppData\Local\Temp\pip-install-h_ao1nbk\tokenizers_631ba1cc987c45708c975abb68e99947\target\x86_64-pc-windows-msvc\release\deps\libregex-209ee3cef779bb9d.rmeta --extern terminal_size=C:\Users\Bobak\AppData\Local\Temp\pip-install-h_ao1nbk\tokenizers_631ba1cc987c45708c975abb68e99947\target\x86_64-pc-windows-msvc\release\deps\libterminal_size-caf7104c0642fe58.rmeta --extern unicode_width=C:\Users\Bobak\AppData\Local\Temp\pip-install-h_ao1nbk\tokenizers_631ba1cc987c45708c975abb68e99947\target\x86_64-pc-windows-msvc\release\deps\libunicode_width-ce66de2e14cc3ba9.rmeta --extern winapi=C:\Users\Bobak\AppData\Local\Temp\pip-install-h_ao1nbk\tokenizers_631ba1cc987c45708c975abb68e99947\target\x86_64-pc-windows-msvc\release\deps\libwinapi-3c77d9c468bf0363.rmeta --cap-lints allow` Compiling clap v2.34.0 Running `rustc --crate-name clap --edition=2018 C:\Users\Bobak\.cargo\registry\src\github.com-1ecc6299db9ec823\clap-2.34.0\src\lib.rs --error-format=json --json=diagnostic-rendered-ansi,artifacts --crate-type lib --emit=dep-info,metadata,link -C opt-level=3 -Cembed-bitcode=no --cfg "feature=\"ansi_term\"" --cfg "feature=\"atty\"" --cfg "feature=\"color\"" --cfg "feature=\"default\"" --cfg "feature=\"strsim\"" --cfg "feature=\"suggestions\"" --cfg "feature=\"vec_map\"" -C metadata=7b7912e462d4241a -C extra-filename=-7b7912e462d4241a --out-dir C:\Users\Bobak\AppData\Local\Temp\pip-install-h_ao1nbk\tokenizers_631ba1cc987c45708c975abb68e99947\target\x86_64-pc-windows-msvc\release\deps --target x86_64-pc-windows-msvc -L dependency=C:\Users\Bobak\AppData\Local\Temp\pip-install-h_ao1nbk\tokenizers_631ba1cc987c45708c975abb68e99947\target\x86_64-pc-windows-msvc\release\deps -L dependency=C:\Users\Bobak\AppData\Local\Temp\pip-install-h_ao1nbk\tokenizers_631ba1cc987c45708c975abb68e99947\target\release\deps --extern 
atty=C:\Users\Bobak\AppData\Local\Temp\pip-install-h_ao1nbk\tokenizers_631ba1cc987c45708c975abb68e99947\target\x86_64-pc-windows-msvc\release\deps\libatty-0152d3558cda1980.rmeta --extern bitflags=C:\Users\Bobak\AppData\Local\Temp\pip-install-h_ao1nbk\tokenizers_631ba1cc987c45708c975abb68e99947\target\x86_64-pc-windows-msvc\release\deps\libbitflags-1d434c93db9ceb5c.rmeta --extern strsim=C:\Users\Bobak\AppData\Local\Temp\pip-install-h_ao1nbk\tokenizers_631ba1cc987c45708c975abb68e99947\target\x86_64-pc-windows-msvc\release\deps\libstrsim-563e93c49ab77cd6.rmeta --extern textwrap=C:\Users\Bobak\AppData\Local\Temp\pip-install-h_ao1nbk\tokenizers_631ba1cc987c45708c975abb68e99947\target\x86_64-pc-windows-msvc\release\deps\libtextwrap-d44db99661ee5bc3.rmeta --extern unicode_width=C:\Users\Bobak\AppData\Local\Temp\pip-install-h_ao1nbk\tokenizers_631ba1cc987c45708c975abb68e99947\target\x86_64-pc-windows-msvc\release\deps\libunicode_width-ce66de2e14cc3ba9.rmeta --extern vec_map=C:\Users\Bobak\AppData\Local\Temp\pip-install-h_ao1nbk\tokenizers_631ba1cc987c45708c975abb68e99947\target\x86_64-pc-windows-msvc\release\deps\libvec_map-f82cb06a00d96a3c.rmeta --cap-lints allow` Compiling parking_lot v0.10.2 Running `rustc --crate-name parking_lot --edition=2018 C:\Users\Bobak\.cargo\registry\src\github.com-1ecc6299db9ec823\parking_lot-0.10.2\src\lib.rs --error-format=json --json=diagnostic-rendered-ansi,artifacts --crate-type lib --emit=dep-info,metadata,link -C opt-level=3 -Cembed-bitcode=no --cfg "feature=\"default\"" --cfg "feature=\"nightly\"" -C metadata=f2e6ca23d2714e74 -C extra-filename=-f2e6ca23d2714e74 --out-dir C:\Users\Bobak\AppData\Local\Temp\pip-install-h_ao1nbk\tokenizers_631ba1cc987c45708c975abb68e99947\target\x86_64-pc-windows-msvc\release\deps --target x86_64-pc-windows-msvc -L dependency=C:\Users\Bobak\AppData\Local\Temp\pip-install-h_ao1nbk\tokenizers_631ba1cc987c45708c975abb68e99947\target\x86_64-pc-windows-msvc\release\deps -L dependency=C:\Users\Bobak\AppData\Local\Temp\pip-install-h_ao1nbk\tokenizers_631ba1cc987c45708c975abb68e99947\target\release\deps --extern lock_api=C:\Users\Bobak\AppData\Local\Temp\pip-install-h_ao1nbk\tokenizers_631ba1cc987c45708c975abb68e99947\target\x86_64-pc-windows-msvc\release\deps\liblock_api-feeea9924ab45186.rmeta --extern parking_lot_core=C:\Users\Bobak\AppData\Local\Temp\pip-install-h_ao1nbk\tokenizers_631ba1cc987c45708c975abb68e99947\target\x86_64-pc-windows-msvc\release\deps\libparking_lot_core-85a7641040e9f7f3.rmeta --cap-lints allow` error[E0658]: `if` is not allowed in a `const fn` --> C:\Users\Bobak\.cargo\registry\src\github.com-1ecc6299db9ec823\clap-2.34.0\src\app\settings.rs:7:1 | 7 | / bitflags! { 8 | | struct Flags: u64 { 9 | | const SC_NEGATE_REQS = 1; 10 | | const SC_REQUIRED = 1 << 1; ... | 51 | | } 52 | | } | |_^ | = note: see issue #49146 <https://github.com/rust-lang/rust/issues/49146> for more information = help: add `#![feature(const_if_match)]` to the crate attributes to enable = note: this error originates in a macro (in Nightly builds, run with -Z macro-backtrace for more info) error[E0658]: `if` is not allowed in a `const fn` --> C:\Users\Bobak\.cargo\registry\src\github.com-1ecc6299db9ec823\clap-2.34.0\src\args\settings.rs:6:1 | 6 | / bitflags! { 7 | | struct Flags: u32 { 8 | | const REQUIRED = 1; ... 
| 28 | | } 29 | | } | |_^ | = note: see issue #49146 <https://github.com/rust-lang/rust/issues/49146> for more information = help: add `#![feature(const_if_match)]` to the crate attributes to enable = note: this error originates in a macro (in Nightly builds, run with -Z macro-backtrace for more info) Running `rustc --crate-name rayon_core --edition=2018 C:\Users\Bobak\.cargo\registry\src\github.com-1ecc6299db9ec823\rayon-core-1.9.1\src\lib.rs --error-format=json --json=diagnostic-rendered-ansi,artifacts --crate-type lib --emit=dep-info,metadata,link -C opt-level=3 -Cembed-bitcode=no -C metadata=7c4c87a2d6db8bd0 -C extra-filename=-7c4c87a2d6db8bd0 --out-dir C:\Users\Bobak\AppData\Local\Temp\pip-install-h_ao1nbk\tokenizers_631ba1cc987c45708c975abb68e99947\target\x86_64-pc-windows-msvc\release\deps --target x86_64-pc-windows-msvc -L dependency=C:\Users\Bobak\AppData\Local\Temp\pip-install-h_ao1nbk\tokenizers_631ba1cc987c45708c975abb68e99947\target\x86_64-pc-windows-msvc\release\deps -L dependency=C:\Users\Bobak\AppData\Local\Temp\pip-install-h_ao1nbk\tokenizers_631ba1cc987c45708c975abb68e99947\target\release\deps --extern crossbeam_channel=C:\Users\Bobak\AppData\Local\Temp\pip-install-h_ao1nbk\tokenizers_631ba1cc987c45708c975abb68e99947\target\x86_64-pc-windows-msvc\release\deps\libcrossbeam_channel-a47a697c8edb26dd.rmeta --extern crossbeam_deque=C:\Users\Bobak\AppData\Local\Temp\pip-install-h_ao1nbk\tokenizers_631ba1cc987c45708c975abb68e99947\target\x86_64-pc-windows-msvc\release\deps\libcrossbeam_deque-f5042eb50925de89.rmeta --extern crossbeam_utils=C:\Users\Bobak\AppData\Local\Temp\pip-install-h_ao1nbk\tokenizers_631ba1cc987c45708c975abb68e99947\target\x86_64-pc-windows-msvc\release\deps\libcrossbeam_utils-01e26c75846d7d46.rmeta --extern lazy_static=C:\Users\Bobak\AppData\Local\Temp\pip-install-h_ao1nbk\tokenizers_631ba1cc987c45708c975abb68e99947\target\x86_64-pc-windows-msvc\release\deps\liblazy_static-38c6c694dbdce5e7.rmeta --extern num_cpus=C:\Users\Bobak\AppData\Local\Temp\pip-install-h_ao1nbk\tokenizers_631ba1cc987c45708c975abb68e99947\target\x86_64-pc-windows-msvc\release\deps\libnum_cpus-3087e410382f51cc.rmeta --cap-lints allow` error: aborting due to 2 previous errors For more information about this error, try `rustc --explain E0658`. error: could not compile `clap`. 
Caused by: process didn't exit successfully: `rustc --crate-name clap --edition=2018 C:\Users\Bobak\.cargo\registry\src\github.com-1ecc6299db9ec823\clap-2.34.0\src\lib.rs --error-format=json --json=diagnostic-rendered-ansi,artifacts --crate-type lib --emit=dep-info,metadata,link -C opt-level=3 -Cembed-bitcode=no --cfg "feature=\"ansi_term\"" --cfg "feature=\"atty\"" --cfg "feature=\"color\"" --cfg "feature=\"default\"" --cfg "feature=\"strsim\"" --cfg "feature=\"suggestions\"" --cfg "feature=\"vec_map\"" -C metadata=7b7912e462d4241a -C extra-filename=-7b7912e462d4241a --out-dir C:\Users\Bobak\AppData\Local\Temp\pip-install-h_ao1nbk\tokenizers_631ba1cc987c45708c975abb68e99947\target\x86_64-pc-windows-msvc\release\deps --target x86_64-pc-windows-msvc -L dependency=C:\Users\Bobak\AppData\Local\Temp\pip-install-h_ao1nbk\tokenizers_631ba1cc987c45708c975abb68e99947\target\x86_64-pc-windows-msvc\release\deps -L dependency=C:\Users\Bobak\AppData\Local\Temp\pip-install-h_ao1nbk\tokenizers_631ba1cc987c45708c975abb68e99947\target\release\deps --extern atty=C:\Users\Bobak\AppData\Local\Temp\pip-install-h_ao1nbk\tokenizers_631ba1cc987c45708c975abb68e99947\target\x86_64-pc-windows-msvc\release\deps\libatty-0152d3558cda1980.rmeta --extern bitflags=C:\Users\Bobak\AppData\Local\Temp\pip-install-h_ao1nbk\tokenizers_631ba1cc987c45708c975abb68e99947\target\x86_64-pc-windows-msvc\release\deps\libbitflags-1d434c93db9ceb5c.rmeta --extern strsim=C:\Users\Bobak\AppData\Local\Temp\pip-install-h_ao1nbk\tokenizers_631ba1cc987c45708c975abb68e99947\target\x86_64-pc-windows-msvc\release\deps\libstrsim-563e93c49ab77cd6.rmeta --extern textwrap=C:\Users\Bobak\AppData\Local\Temp\pip-install-h_ao1nbk\tokenizers_631ba1cc987c45708c975abb68e99947\target\x86_64-pc-windows-msvc\release\deps\libtextwrap-d44db99661ee5bc3.rmeta --extern unicode_width=C:\Users\Bobak\AppData\Local\Temp\pip-install-h_ao1nbk\tokenizers_631ba1cc987c45708c975abb68e99947\target\x86_64-pc-windows-msvc\release\deps\libunicode_width-ce66de2e14cc3ba9.rmeta --extern vec_map=C:\Users\Bobak\AppData\Local\Temp\pip-install-h_ao1nbk\tokenizers_631ba1cc987c45708c975abb68e99947\target\x86_64-pc-windows-msvc\release\deps\libvec_map-f82cb06a00d96a3c.rmeta --cap-lints allow` (exit code: 1) warning: build failed, waiting for other jobs to finish... error: build failed error: cargo failed with code: 101 ---------------------------------------- ERROR: Failed building wheel for tokenizers Failed to build tokenizers ERROR: Could not build wheels for tokenizers, which is required to install pyproject.toml-based projects PS G:\Projects\> tokenizers==0.9.2tokenizers==0.9.2 tokenizers==0.9.2tokenizers==0.9.2 : The term 'tokenizers==0.9.2tokenizers==0.9.2' is not recognized as the name of a cmdlet, function, script file, or operable prsaogram. Check the spelddling of the name, or if a path was included, verify that the path is correct and try again. At line:1 char:1 + tokenizers==0.9.2tokenizers==0.9.2 + ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ + CategoryInfo : ObjectNotFound: (tokenizers==0.9.2tokenizers==0.9.2:String) [], CommandNotFoundException + FullyQualifiedErrorId : CommandNotFoundException ``` There seems to be some deep error in the tokenizers install during Compiling parking_lot v0.10.2 which seems to be part of tokenizers==0.9.2 or 0.8.1.rc2 I have new versions of flax, transformers (without flax), and tokenizers installed, so I'm not sure what's going on with this particular script.
01-07-2022 11:16:22
01-07-2022 11:16:22
This seems to be related to `tokenizers`. Were you able to install the same version of `tokenizers` independently?<|||||>Thanks for the response. I have the latest `tokenizers`, but I get the same error if I force the version.<|||||>I see. Could you then open an issue on the `tokenizers` repo, since it's not related to `transformers`? Thank you!<|||||>[moved](https://github.com/huggingface/tokenizers/issues/874)<|||||>Also, note that the `tokenizers` version pin is now updated on master and will be included in the next release, cf. https://github.com/huggingface/transformers/blob/master/setup.py#L152
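For anyone hitting the same resolver backtracking, one possible workaround sketch (it assumes a prebuilt `tokenizers` wheel exists for your Python version and OS) is to install the Flax dependencies directly rather than through the `transformers[flax]` extra:

```bash
python -m pip install --upgrade pip
# a recent prebuilt tokenizers wheel avoids the Rust build that fails above
pip install "tokenizers>=0.10.1,<0.11"
# install the extra's dependencies directly so pip does not backtrack to old transformers releases
pip install flax optax
```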
transformers
15,064
closed
Question: CANINE with (pre-trained) LM head
# 🚀 Feature request

This is more a question than a request. Would it be possible to have the [CANINE](https://huggingface.co/docs/transformers/model_doc/canine) model with a (pre-trained) LM head to perform word/character prediction, just like `BertForMaskedLM` and other similar models?

## Motivation

The character-based paradigm of the CANINE model is very promising, and it is currently possible to perform TokenClassification, SequenceClassification, etc. with it. But I don't see a way of using it to emit LM probabilities over the characters of a masked input. In fact, unless I am mistaken, no pre-trained LM-head weights are included in the available [pretrained checkpoints](https://huggingface.co/models?sort=downloads&search=canine). I also see that the source code contains [LM-related CANINE heads](https://github.com/huggingface/transformers/blob/v4.15.0/src/transformers/models/canine/modeling_canine.py#L878), but they do not seem to be in use. So, in summary, CANINE only outputs character embeddings.

I have tried to add a fresh LM head on top of it and fine-tune it to predict characters, but that lacks all the knowledge such a head would have acquired during large-scale self-supervised pre-training.

My question is: am I missing or misunderstanding something? Is CANINE suitable for masked word/character prediction? Will there eventually be a pre-trained LM head available for CANINE? (Or maybe this is a question for the original authors of the model?)

## Your contribution

N/A. I am afraid I cannot contribute anything in this regard. Thank you so much.
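To make the "fresh LM head" attempt above concrete, here is a minimal sketch. This is not an official class (`CanineForMaskedLM` does not exist in the library), the codepoint vocabulary size is an assumption made for the sketch, and the head is randomly initialized, which is exactly the limitation described above:

```python
import torch
from torch import nn
from transformers import CanineModel, CanineTokenizer

class CanineWithCharLMHead(nn.Module):
    """Illustrative only: a randomly initialized character-prediction head on CanineModel."""

    def __init__(self, vocab_size=2**16):  # vocab_size is an assumption, not a library constant
        super().__init__()
        self.encoder = CanineModel.from_pretrained("google/canine-c")
        self.lm_head = nn.Linear(self.encoder.config.hidden_size, vocab_size)

    def forward(self, **inputs):
        hidden = self.encoder(**inputs).last_hidden_state  # (batch, characters, hidden_size)
        return self.lm_head(hidden)                        # (batch, characters, vocab_size)

tokenizer = CanineTokenizer.from_pretrained("google/canine-c")
enc = tokenizer(["hello world"], return_tensors="pt")
logits = CanineWithCharLMHead()(**enc)
print(logits.shape)
```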
01-07-2022 11:14:56
01-07-2022 11:14:56
Hi, CANINE can do masked language modeling (that's how it was pre-trained). However, the authors did not release their pre-training code (yet) as seen [here](https://github.com/google-research/language/tree/master/language/canine#pre-training-code-coming-later). See also #12892.<|||||>Ok, that was my understanding of the situation so far. I just wanted some clarification or confirmation from a more knowledgeable person :-) Since it has more to do with the authors of the model and not with the Transformers library itself, I close this question for now. Thank you very much for the quick answer!
transformers
15,063
closed
Not able to log training and validation loss to visualise in tensor-board as tfevents?
I am downloading the model https://huggingface.co/microsoft/Multilingual-MiniLM-L12-H384/tree/main (microsoft/Multilingual-MiniLM-L12-H384) and then using it.

Transformers version: '4.11.3'

I have written the code below:

```
def compute_metrics(eval_pred):
    logits, labels = eval_pred
    predictions = np.argmax(logits, axis=-1)
    acc = np.sum(predictions == labels) / predictions.shape[0]
    return {"accuracy": acc}
```

```
model = tr.BertForSequenceClassification.from_pretrained("/home/pc/minilm_model", num_labels=2)
model.to(device)
print("hello")

training_args = tr.TrainingArguments(
    output_dir='/home/pc/proj/results2',   # output directory
    num_train_epochs=10,                   # total number of training epochs
    per_device_train_batch_size=16,        # batch size per device during training
    per_device_eval_batch_size=32,         # batch size for evaluation
    learning_rate=2e-5,
    warmup_steps=1000,                     # number of warmup steps for learning rate scheduler
    weight_decay=0.01,                     # strength of weight decay
    logging_dir='./logs',                  # directory for storing logs
    logging_steps=1000,
    evaluation_strategy="epoch",
    save_strategy="no"
)

trainer = tr.Trainer(
    model=model,                           # the instantiated 🤗 Transformers model to be trained
    args=training_args,                    # training arguments, defined above
    train_dataset=train_data,              # training dataset
    eval_dataset=val_data,                 # evaluation dataset
    compute_metrics=compute_metrics
)
```

Is there a way to retrieve (**I want to use TensorBoard**):

1. the **training loss** for every epoch
2. the **validation loss** for every epoch

I do not see anything in my log directory apart from the model arguments file, which is empty. **How can I save my training and validation loss so that the TensorBoard event files (tfevents) capture them?**
01-07-2022 06:18:59
01-07-2022 06:18:59
I am not so sure, but try running your code **without** `logging_steps=1000`; it might solve your problem. You are training your model for 10 epochs, but `logging_steps=1000` tells the `Trainer` to log the training loss only every 1000th optimization step, which a small training run may never reach.<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread. Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored.
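Building on the comment above, here is a minimal sketch of the logging-related arguments; the paths and batch sizes are placeholders taken from the question, and it assumes `tensorboard` is installed so the `Trainer` writes tfevents files:

```python
import transformers as tr

# Sketch: log the training loss and the evaluation loss once per epoch into ./logs.
training_args = tr.TrainingArguments(
    output_dir="/home/pc/proj/results2",
    num_train_epochs=10,
    per_device_train_batch_size=16,
    per_device_eval_batch_size=32,
    logging_dir="./logs",            # tfevents files end up here
    logging_strategy="epoch",        # log training loss every epoch instead of every N steps
    evaluation_strategy="epoch",     # evaluate (and log eval loss) every epoch
    save_strategy="no",
    report_to=["tensorboard"],       # make sure the TensorBoard callback is enabled
)
# after training: tensorboard --logdir ./logs
```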
transformers
15,062
closed
Import Error : cannot import name 'create_repo' from 'huggingface_hub'
While working on BERT text classification, I found this error.

```
import torch
from tqdm.notebook import tqdm
from transformers import BertTokenizer
from torch.utils.data import TensorDataset
from transformers import BertForSequenceClassification
```

**ImportError:** cannot import name 'create_repo' from 'huggingface_hub' (C:\ProgramData\Anaconda3\lib\site-packages\huggingface_hub\__init__.py)

How can I solve this?
01-07-2022 06:05:00
01-07-2022 06:05:00
@santanumitra22 Which transformers version are you using? <|||||>Version: 4.15.0 @frankhart2018 <|||||>And is your huggingface_hub version 0.2.1?<|||||>No, huggingface_hub 0.0.8<|||||>That's probably the issue. Can you try upgrading to 0.2.1? It's working fine for me when I am using 0.2.1.<|||||>Same issue here, my transformers version is 4.15.0 and the huggingface_hub version is 0.2.1. However, it does not work. <|||||>> Same issue here, my transformers version is 4.15.0 and the huggingface_hub version is 0.2.1. However, it does not work.

I was able to correct this by restarting the python kernel (working in Jupyter Lab) after logging in to the CLI.<|||||>Error fixed, thanks! :)<|||||>This happened for me from a fresh `conda install transformers` on an Ubuntu Deep Learning AMI. Is there a `huggingface_hub` dependency that should be updated?<|||||>> That's probably the issue. Can you try upgrading to 0.2.1? It's working fine for me when I am using 0.2.1.

Faced the same issue; updating to 0.2.1 helped, thanks!<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread. Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored.<|||||>In my case, downgrading **python 3.8** to **3.7** is what solved the problem. <|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread. Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored.<|||||>Same issue here, my transformers version is 4.15.0 and the huggingface_hub version is 0.2.1, both updated to the latest version. However, it does not work. I'm still getting the following import error: `Import Error : cannot import name 'create_repo' from 'huggingface_hub' `<|||||>The latest `huggingface_hub` version is v0.5.1; how did you identify that v0.2.1 was the latest? This is definitely an error with the wrong version being installed.<|||||>Hi @LysandreJik, thanks! Yes, I tried `conda update huggingface_hub` and `conda update transformers` but it doesn't work. I finally solved this by doing a fresh installation of both of these in a new environment. <|||||>Glad you got it to work! Very weird that conda refused to update your packages, let me know if it happens again!<|||||>I experienced the same problem today:

```
> conda create -n test_env python=3.7
> conda activate test_env
> conda install huggingface_hub
> conda list | grep hugging
huggingface_hub           0.2.1              pyhd3eb1b0_0
```

Could the Python version maybe be the problem?
When using conda-forge I get the most recent version: ``` > conda create -n test_env python=3.7 > conda activate test_env > conda install -c conda-forge huggingface_hub > conda list | grep hugging huggingface_hub 0.5.1 pyhd8ed1ab_0 conda-forge ``` Also, installing subsequent dependencies not from `conda-forge` actually downgraded huggingface_hub to 0.1.7, so people that run into the same problem might try to install their remaining packages from `conda-forge` as well an keep an eye on conda whether it attempts to downgrade huggingface_hub.<|||||>For me ```conda update huggingface_hub``` and ```conda update transformers``` worked nicely.<|||||>Here's my two cents: I had installed `transformers` 4.10.3 and `huggingface-hub` 0.4.0 and was throwing ``` ImportError: cannot import name 'RepositoryNotFoundError' from 'huggingface_hub.utils' ``` Then I ran ```bash pip install --upgrade huggingface-hub ``` After that, `huggingface-hub` was updated to version 0.9.1, didn't modify any other version of my environment, and solved the problem.<|||||>> in case this doesn't work for you upgrade pip e.g. ``` /Users/brandomiranda/opt/anaconda3/envs/meta_learning/bin/python -m pip install --upgrade pip pip install --upgrade huggingface-hub ```<|||||>``` huggingface-hub 0.10.0 transformers 4.13.0 ``` my issue happens and above is true <|||||>what about upgrading all 3? ``` /Users/brandomiranda/opt/anaconda3/envs/meta_learning/bin/python -m pip install --upgrade pip pip install --upgrade transformers pip install --upgrade huggingface-hub pip install --upgrade datasets ``` ok failed but now the error is: ``` Traceback (most recent call last): File "/Users/brandomiranda/opt/anaconda3/envs/meta_learning/lib/python3.9/code.py", line 90, in runcode exec(code, self.locals) File "<input>", line 11, in <module> File "/Applications/PyCharm.app/Contents/plugins/python/helpers/pydev/_pydev_bundle/pydev_import_hook.py", line 21, in do_import module = self._system_import(name, *args, **kwargs) File "/Users/brandomiranda/opt/anaconda3/envs/meta_learning/lib/python3.9/site-packages/transformers/__init__.py", line 30, in <module> from . import dependency_versions_check File "/Applications/PyCharm.app/Contents/plugins/python/helpers/pydev/_pydev_bundle/pydev_import_hook.py", line 21, in do_import module = self._system_import(name, *args, **kwargs) File "/Users/brandomiranda/opt/anaconda3/envs/meta_learning/lib/python3.9/site-packages/transformers/dependency_versions_check.py", line 36, in <module> from .utils import is_tokenizers_available ImportError: cannot import name 'is_tokenizers_available' from 'transformers.utils' (/Users/brandomiranda/opt/anaconda3/envs/meta_learning/lib/python3.9/site-packages/transformers/utils/__init__.py) ```<|||||>``` /Users/brandomiranda/opt/anaconda3/envs/meta_learning/bin/python -m pip install --upgrade pip pip install --upgrade tokenizers ``` upgraded correctly trom 0.12.1 to 0.13.0 still fails. 
``` (meta_learning) ❯ pip list Package Version Editable project location ------------------------------------------------- ---------- ------------------------------------------------------------------------------ absl-py 1.0.0 aiohttp 3.8.1 aiosignal 1.2.0 antlr4-python3-runtime 4.8 argcomplete 2.0.0 async-timeout 4.0.1 attrs 21.4.0 automl-meta-learning 0.1.0 /Users/brandomiranda/automl-meta-learning/automl-proj-src bcj-cffi 0.5.1 boto 2.49.0 Bottleneck 1.3.4 Brotli 1.0.9 brotlicffi 1.0.9.2 brotlipy 0.7.0 cachetools 4.2.4 certifi 2022.9.14 cffi 1.15.1 charset-normalizer 2.0.9 cherry-rl 0.1.4 click 8.0.3 cloudpickle 2.0.0 colorama 0.4.4 configparser 5.2.0 conllu 4.4.1 crcmod 1.7 cryptography 37.0.1 cycler 0.11.0 Cython 0.29.25 dataclasses 0.6 datasets 2.5.1 dill 0.3.4 diversity-for-predictive-success-of-meta-learning 0.0.1 /Users/brandomiranda/diversity-for-predictive-success-of-meta-learning/div_src docker-pycreds 0.4.0 editdistance 0.6.0 et-xmlfile 1.1.0 fairseq 0.10.0 fastcluster 1.2.4 fasteners 0.17.3 filelock 3.6.0 fonttools 4.28.3 frozenlist 1.2.0 fsspec 2022.7.1 gcs-oauth2-boto-plugin 3.0 gitdb 4.0.9 GitPython 3.1.24 google-apitools 0.5.32 google-auth 2.3.3 google-auth-oauthlib 0.4.6 google-reauth 0.1.1 grpcio 1.42.0 gsutil 5.6 gym 0.21.0 h5py 3.6.0 higher 0.2.1 httplib2 0.20.4 huggingface-hub 0.10.0 hydra-core 1.1.1 idna 3.3 importlib-metadata 4.11.3 joblib 1.1.0 kiwisolver 1.3.2 lark-parser 0.12.0 learn2learn 0.1.7 lxml 4.8.0 Markdown 3.3.6 matplotlib 3.5.1 mkl-fft 1.3.1 mkl-random 1.2.2 mkl-service 2.4.0 monotonic 1.6 multidict 5.2.0 multiprocess 0.70.12.2 multivolumefile 0.2.3 munkres 1.1.4 networkx 2.6.3 numexpr 2.8.1 numpy 1.21.5 oauth2client 4.1.3 oauthlib 3.1.1 omegaconf 2.1.1 openpyxl 3.0.10 ordered-set 4.0.2 packaging 21.3 pandas 1.4.2 pathtools 0.1.2 Pillow 9.0.1 pip 22.2.2 plotly 5.4.0 portalocker 2.3.2 progressbar2 3.55.0 promise 2.3 protobuf 3.19.1 psutil 5.8.0 py7zr 0.16.1 pyarrow 9.0.0 pyasn1 0.4.8 pyasn1-modules 0.2.8 pycparser 2.21 pycryptodomex 3.15.0 pyOpenSSL 22.0.0 pyparsing 3.0.6 pyppmd 0.16.1 PySocks 1.7.1 python-dateutil 2.8.2 python-utils 2.5.6 pytz 2021.3 pyu2f 0.1.5 PyYAML 6.0 pyzstd 0.14.4 qpth 0.0.15 regex 2021.11.10 requests 2.28.1 requests-oauthlib 1.3.0 responses 0.18.0 retry-decorator 1.1.1 rsa 4.7.2 sacrebleu 2.0.0 sacremoses 0.0.46 scikit-learn 1.0.1 scipy 1.7.3 seaborn 0.11.2 sentry-sdk 1.5.1 setproctitle 1.2.2 setuptools 58.0.4 shortuuid 1.0.8 six 1.16.0 sklearn 0.0 smmap 5.0.0 subprocess32 3.5.4 tabulate 0.8.9 tenacity 8.0.1 tensorboard 2.7.0 tensorboard-data-server 0.6.1 tensorboard-plugin-wit 1.8.0 termcolor 1.1.0 texttable 1.6.4 threadpoolctl 3.0.0 tokenizers 0.12.1 torch 1.9.1 torchaudio 0.9.1 torchmeta 1.8.0 torchtext 0.10.1 torchvision 0.10.1 tornado 6.1 tqdm 4.62.3 transformers 4.22.2 typing_extensions 4.3.0 ultimate-anatome 0.1.1 /Users/brandomiranda/ultimate-anatome ultimate-aws-cv-task2vec 0.0.1 /Users/brandomiranda/ultimate-aws-cv-task2vec ultimate-utils 0.6.1 /Users/brandomiranda/ultimate-utils/ultimate-utils-proj-src urllib3 1.26.11 wandb 0.13.3 Werkzeug 2.0.2 wheel 0.37.0 xxhash 2.0.2 yarl 1.8.1 yaspin 2.1.0 zipp 3.8.0 ```<|||||>did: ``` pip install pytorch-transformers ``` didn't work same eroor about tokenizer.<|||||>trying: ``` /Users/brandomiranda/opt/anaconda3/envs/meta_learning/bin/python -m pip install --upgrade pip pip install --upgrade torch pip install --upgrade torchvision pip install --upgrade torchtext pip install --upgrade torchaudio # pip install --upgrade torchmeta pip uninstall torchmeta ``` still fails with: ``` 
Traceback (most recent call last): File "/Users/brandomiranda/opt/anaconda3/envs/meta_learning/lib/python3.9/code.py", line 90, in runcode exec(code, self.locals) File "<input>", line 11, in <module> File "/Applications/PyCharm.app/Contents/plugins/python/helpers/pydev/_pydev_bundle/pydev_import_hook.py", line 21, in do_import module = self._system_import(name, *args, **kwargs) File "/Users/brandomiranda/opt/anaconda3/envs/meta_learning/lib/python3.9/site-packages/transformers/__init__.py", line 30, in <module> from . import dependency_versions_check File "/Applications/PyCharm.app/Contents/plugins/python/helpers/pydev/_pydev_bundle/pydev_import_hook.py", line 21, in do_import module = self._system_import(name, *args, **kwargs) File "/Users/brandomiranda/opt/anaconda3/envs/meta_learning/lib/python3.9/site-packages/transformers/dependency_versions_check.py", line 36, in <module> from .utils import is_tokenizers_available ImportError: cannot import name 'is_tokenizers_available' from 'transformers.utils' (/Users/brandomiranda/opt/anaconda3/envs/meta_learning/lib/python3.9/site-packages/transformers/utils/__init__.py) ``` SO post: https://stackoverflow.com/questions/73939929/how-to-resolve-the-hugging-face-error-importerror-cannot-import-name-is-tokeni<|||||>> I was able to correct this by restarting the python kernel (working in Jupyter Lab) after logging in to the CLI solved by @rsm5909 's, thanks!<|||||>restarting Jupyter notebook has solved the problem
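For anyone landing here with a similar `cannot import name ...` error: a minimal sketch (not from the thread, assumes Python 3.8+) that prints which interpreter and package versions are actually active, since mismatched pip/conda installs of `transformers`, `tokenizers` and `huggingface_hub` are the usual cause.

```python
# Sanity-check the active environment; version names are the PyPI distribution names.
import importlib.metadata as md  # stdlib in Python 3.8+
import sys

print("python executable:", sys.executable)  # confirm you are in the env you think you are
for pkg in ("transformers", "tokenizers", "huggingface-hub"):
    try:
        print(f"{pkg}: {md.version(pkg)}")
    except md.PackageNotFoundError:
        print(f"{pkg}: not installed in this environment")
```

If the executable or versions are not what `pip list` / `conda list` report, the imports are coming from a different environment than the one being upgraded.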
transformers
15,061
closed
Add CharacterBERT model [WIP]
# What does this PR do? Moves work on #10053 over to a new PR <!-- Congratulations! You've made it this far! You're not quite done yet though. Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution. Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change. Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost. --> <!-- Remove if not applicable --> ## Before submitting - [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case). - [X] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section? - [X] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link to it if that's the case. Please see - [X] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/master/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/master/docs#writing-source-documentation). - [ ] Did you write any new necessary tests? ## Who can review? Anyone in the community is free to review the PR once the tests have passed. Feel free to tag members/contributors who may be interested in your PR. @LysandreJik and @helboukkouri please take a look. <!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @ If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**. Please tag fewer than 3 people. Models: - albert, bert, xlm: @LysandreJik - blenderbot, bart, marian, pegasus, encoderdecoder, t5: @patrickvonplaten, @patil-suraj - longformer, reformer, transfoxl, xlnet: @patrickvonplaten - fsmt: @stas00 - funnel: @sgugger - gpt2: @patrickvonplaten, @LysandreJik - rag: @patrickvonplaten, @lhoestq - tensorflow: @LysandreJik Library: - benchmarks: @patrickvonplaten - deepspeed: @stas00 - ray/raytune: @richardliaw, @amogkam - text generation: @patrickvonplaten - tokenizers: @n1t0, @LysandreJik - trainer: @sgugger - pipelines: @LysandreJik Documentation: @sgugger HF projects: - datasets: [different repo](https://github.com/huggingface/datasets) - rust tokenizers: [different repo](https://github.com/huggingface/tokenizers) Examples: - maintained examples (not research project or legacy): @sgugger, @patil-suraj - research_projects/bert-loses-patience: @JetRunner - research_projects/distillation: @VictorSanh -->
01-07-2022 05:33:05
01-07-2022 05:33:05
This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread. Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored.<|||||>Well, it was a fun ride 😄<|||||>Hey folks! @helboukkouri @LysandreJik what are the remaining steps to get CharacterBERT into transformers? Can I help in some way?
transformers
15,060
closed
Feature request: m2m100_418M support on onnx
# 🚀 Feature request Hi, it would be great if ONNX could support the Facebook M2M100 418M multilingual translation model! ## Motivation I would like to run M2M100 with reduced translation time on CPU. For this I discovered ONNX, which reduces the translation time by up to 3x (perfect). So I ran this command from the Hugging Face docs, adapted for M2M100: `python -m transformers.onnx --model=facebook/m2m100_418M m2m100.oonx` However, it returned this error: `task='sequence-classification')}} are supported. If you want to support (m2m_100) please propose a PR or open up an issue.` Unfortunately, I don't have the skills to open a PR myself (I'm a total beginner in this field, not a professional; I've been looking into offline translation to avoid Google Translate). So if someone could add ONNX support for M2M100 418M, that would be great! Finally, I think this would not only benefit me: the model is really great (the performance-to-translation-quality ratio is really not bad), and being able to run it faster would be extremely convenient for the 50,000 people who download this model every month. ## Your contribution I would like to try to contribute, but I really don't have the skills to do so.
01-06-2022 20:58:26
01-06-2022 20:58:26
Gently pinging @michaelbenayoun @lewtun :)<|||||>Hi, should I ping them? or did you. <|||||>Hey @Jourdelune thanks for the feature request - I agree that M2M100 is a great model to support for ONNX :) Just so you know, exporting the model will only be the first step. In order to generate the translations you'll have to implement your own `generate()` function with e.g. beam search. We have plans in our `optimum` [library](https://github.com/huggingface/optimum) to support this, but you might have to wait a bit until that feature is available. In the meantime, I'm happy to work on the ONNX export of M2M100!<|||||>Thank you for accepting the request features! I can wait as long as it takes :D (I have to close the issue?).<|||||>Hey @Jourdelune no need to close the issue - we close them when the pull request is merged (and you'll see a link back to this issue when it's being reviewed) :)<|||||>okay, thanks!<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread. Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored.<|||||>Re-opening since @michaelbenayoun is currently working on this :)<|||||>ah ah yes, thanks for doing it :D<|||||>Hey @lewtun and @michaelbenayoun, I am also interested in exporting M2M100 to onnx. Is there anything I can do to support you? I am also working on exporting `BartForConditionalGeneration`, and the task here is quite similar as well. Once done, I would love to write a blog post about it (you can host it in your blog if you feel like it), as this is a recurring and hot problem. Thank you for your consideration!<|||||>Hey @jbesomi thanks for the offer! @michaelbenayoun has a PR open [here](https://github.com/huggingface/transformers/pull/15193) for the export, but ran into an issue with exporting the large variant of M2M100. If Michael agrees, one idea would be to create a new branch off [`michaelbenayoun:m2m_100_onnx`](https://github.com/michaelbenayoun/transformers/tree/m2m_100_onnx) and see if you resolve it. Then you could open a PR from your new branch into his, and we could proceed from there.<|||||>I think the remaining issue is very very minor, I just need to take the time to solve it. <|||||>Thanks to all those who work for the support of onnx! [jbesomi](https://github.com/jbesomi) I would love to be able to read the blog post which I think would be very useful for many people.<|||||>Yes @jbesomi we would be very happy to host your blog post on https://huggingface.co/blog 😍 ! I also agree that text generation with ONNX models is a popular topic that would get a lot of traction in the community<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread. Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored.<|||||>Closed by #15193
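As a follow-up to the comments above, here is a hedged sketch of loading an exported M2M100 model with `onnxruntime` and inspecting its inputs. The output directory name is an assumption, and the input/output names are read from the session rather than hard-coded because they depend on the chosen export feature; generation (greedy/beam search) is not part of the exported graph and still has to be implemented on top, as noted above.

```python
# Assumes the export has already been run, e.g.:
#   python -m transformers.onnx --model=facebook/m2m100_418M onnx_m2m100/
# "onnx_m2m100/" is a placeholder output directory, not an official path.
import onnxruntime as ort

session = ort.InferenceSession("onnx_m2m100/model.onnx", providers=["CPUExecutionProvider"])
print([inp.name for inp in session.get_inputs()])    # e.g. input_ids, attention_mask, decoder inputs, ...
print([out.name for out in session.get_outputs()])   # e.g. logits, ...
```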
transformers
15,059
closed
Adding additional layers to TFHubertModel throws OperatorNotAllowedInGraphError
## Environment info <!-- You can run the command `transformers-cli env` and copy-and-paste its output below. Don't forget to fill out the missing fields in that output! --> - `transformers` version: 4.15.0 - Platform: Linux-5.4.144+-x86_64-with-Ubuntu-18.04-bionic - Python version: 3.7.12 - PyTorch version (GPU?): 1.10.0+cu111 (True) - Tensorflow version (GPU?): 2.7.0 (True) - Flax version (CPU?/GPU?/TPU?): not installed (NA) - Jax version: not installed - JaxLib version: not installed - Using GPU in script?: No - Using distributed or parallel set-up in script?: No ### Who can help @patrickvonplaten, @anton-l <!-- Your issue will be replied to more quickly if you can figure out the right person to tag with @ If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**. Please tag fewer than 3 people. Models: - ALBERT, BERT, XLM, DeBERTa, DeBERTa-v2, ELECTRA, MobileBert, SqueezeBert: @LysandreJik - T5, BART, Marian, Pegasus, EncoderDecoder: @patrickvonplaten - Blenderbot, MBART: @patil-suraj - Longformer, Reformer, TransfoXL, XLNet, FNet, BigBird: @patrickvonplaten - FSMT: @stas00 - Funnel: @sgugger - GPT-2, GPT: @patrickvonplaten, @LysandreJik - RAG, DPR: @patrickvonplaten, @lhoestq - TensorFlow: @Rocketknight1 - JAX/Flax: @patil-suraj - TAPAS, LayoutLM, LayoutLMv2, LUKE, ViT, BEiT, DEiT, DETR, CANINE: @NielsRogge - GPT-Neo, GPT-J, CLIP: @patil-suraj - Wav2Vec2, HuBERT, SpeechEncoderDecoder, UniSpeech, UniSpeechSAT, SEW, SEW-D, Speech2Text: @patrickvonplaten, @anton-l If the model isn't in the list, ping @LysandreJik who will redirect you to the correct contributor. Library: - Benchmarks: @patrickvonplaten - Deepspeed: @stas00 - Ray/raytune: @richardliaw, @amogkam - Text generation: @patrickvonplaten @narsil - Tokenizers: @LysandreJik - Trainer: @sgugger - Pipelines: @Narsil - Speech: @patrickvonplaten, @anton-l - Vision: @NielsRogge, @sgugger Documentation: @sgugger Model hub: - for issues with a model, report at https://discuss.huggingface.co/ and tag the model's creator. HF projects: - datasets: [different repo](https://github.com/huggingface/datasets) - rust tokenizers: [different repo](https://github.com/huggingface/tokenizers) Examples: - maintained examples (not research project or legacy): @sgugger, @patil-suraj For research projetcs, please ping the contributor directly. 
For example, on the following projects: - research_projects/bert-loses-patience: @JetRunner - research_projects/distillation: @VictorSanh --> ## Information Model I am using (Bert, XLNet ...): [TFHubertModel](https://huggingface.co/docs/transformers/model_doc/hubert#transformers.TFHubertModel) ## To reproduce Steps to reproduce the behavior: Just a simple code snippet that loads a pretrained TFHubertModel and adds - (1) a lambda layer to sum over the hidden units obtained from the Hubert model (for example: from `(N, 120,1024)` -> `(N,1024)` and (2) dense & dropout layers ```import librosa import tensorflow as tf import torch import numpy as np from tensorflow.keras.optimizers import Adam from transformers import TFHubertModel def create_model(bert_model, dim): input_ids = tf.keras.Input(shape=(dim,),dtype='int32') attention_masks = tf.keras.Input(shape=(dim,),dtype='int32') output = bert_model([input_ids,attention_masks]) output = output[0] output = tf.keras.layers.Lambda(lambda x: tf.keras.backend.sum(x, axis=1), name = "Pooling_Embs")(output) output = tf.keras.layers.Dense(32,activation='relu')(output) output = tf.keras.layers.Dropout(0.2)(output) output = tf.keras.layers.Dense(1,activation='sigmoid')(output) model = tf.keras.models.Model(inputs = [input_ids,attention_masks],outputs = output) model.compile(Adam(learning_rate=1e-6), loss='binary_crossentropy', metrics=['accuracy']) return model # custom model creation hubert_model = TFHubertModel.from_pretrained('facebook/hubert-large-ls960-ft') model = create_model(hubert_model, dim=38744) model.summary() ``` The model compiles just fine without any error: ``` Model: "model_2" __________________________________________________________________________________________________ Layer (type) Output Shape Param # Connected to ================================================================================================== input_5 (InputLayer) [(None, 38744)] 0 [] input_6 (InputLayer) [(None, 38744)] 0 [] tf_hubert_model (TFHubertModel TFBaseModelOutput(l 315438720 ['input_5[0][0]', ) ast_hidden_state=(N 'input_6[0][0]'] one, 120, 1024), hidden_states=None , attentions=None) Pooling_Embs (Lambda) (None, 1024) 0 ['tf_hubert_model[1][0]'] dense_4 (Dense) (None, 32) 32800 ['Pooling_Embs[0][0]'] dropout_173 (Dropout) (None, 32) 0 ['dense_4[0][0]'] dense_5 (Dense) (None, 1) 33 ['dropout_173[0][0]'] ================================================================================================== Total params: 315,471,553 Trainable params: 315,471,553 Non-trainable params: 0 ``` When I try to fit the model, the error is thrown (dummy inputs/attention masks used here are for demonstration purposes only, ideally they will come from passing audio through a feature extractor like `Wav2Vec2FeatureExtractor`): ``` # fit model input_values = np.random.rand(5,38744) attention_masks = np.random.randint(0,2, size=(5,38744)) labels = np.asarray([0, 1, 0, 0, 1]) model.fit([input_values,attention_masks], labels, epochs=2, batch_size=2) ``` The error thrown: ```__________________________________________________________________________________________________ Epoch 1/2 --------------------------------------------------------------------------- OperatorNotAllowedInGraphError Traceback (most recent call last) <ipython-input-24-3a4ee842c0f2> in <module>() 4 input_values = np.random.rand(5,38744) 5 attention_masks = np.random.randint(0,2, size=(5,38744)) ----> 6 history = model.fit([input_values,attention_masks],np.asarray(label),epochs=2,batch_size=2) 1 frames 
/usr/local/lib/python3.7/dist-packages/tensorflow/python/framework/func_graph.py in autograph_handler(*args, **kwargs) 1127 except Exception as e: # pylint:disable=broad-except 1128 if hasattr(e, "ag_error_metadata"): -> 1129 raise e.ag_error_metadata.to_exception(e) 1130 else: 1131 raise OperatorNotAllowedInGraphError: in user code: File "/usr/local/lib/python3.7/dist-packages/keras/engine/training.py", line 878, in train_function * return step_function(self, iterator) File "/usr/local/lib/python3.7/dist-packages/keras/engine/training.py", line 867, in step_function ** outputs = model.distribute_strategy.run(run_step, args=(data,)) File "/usr/local/lib/python3.7/dist-packages/keras/engine/training.py", line 860, in run_step ** outputs = model.train_step(data) File "/usr/local/lib/python3.7/dist-packages/keras/engine/training.py", line 808, in train_step y_pred = self(x, training=True) File "/usr/local/lib/python3.7/dist-packages/keras/utils/traceback_utils.py", line 67, in error_handler raise e.with_traceback(filtered_tb) from None OperatorNotAllowedInGraphError: Exception encountered when calling layer "tf_hubert_model" (type TFHubertModel). in user code: File "/usr/local/lib/python3.7/dist-packages/transformers/models/hubert/modeling_tf_hubert.py", line 1453, in call * outputs = self.hubert( File "/usr/local/lib/python3.7/dist-packages/keras/utils/traceback_utils.py", line 67, in error_handler ** raise e.with_traceback(filtered_tb) from None OperatorNotAllowedInGraphError: Exception encountered when calling layer "hubert" (type TFHubertMainLayer). in user code: File "/usr/local/lib/python3.7/dist-packages/transformers/models/hubert/modeling_tf_hubert.py", line 1237, in call * hidden_states = self._mask_hidden_states(hidden_states, mask_time_indices=mask_time_indices) File "/usr/local/lib/python3.7/dist-packages/transformers/models/hubert/modeling_tf_hubert.py", line 1168, in _mask_hidden_states * mask_time_indices = _compute_mask_indices( File "/usr/local/lib/python3.7/dist-packages/transformers/models/hubert/modeling_tf_hubert.py", line 229, in _compute_mask_indices * num_masked_spans = max(num_masked_spans, min_masks) OperatorNotAllowedInGraphError: using a `tf.Tensor` as a Python `bool` is not allowed: AutoGraph did convert this function. This might indicate you are trying to use an unsupported feature. Call arguments received: • input_values=tf.Tensor(shape=(None, 38744), dtype=int32) • attention_mask=tf.Tensor(shape=(None, 38744), dtype=int32) • token_type_ids=None • position_ids=None • head_mask=None • inputs_embeds=None • output_attentions=False • output_hidden_states=False • return_dict=True • training=True • kwargs=<class 'inspect._empty'> Call arguments received: • input_values=['tf.Tensor(shape=(None, 38744), dtype=int32)', 'tf.Tensor(shape=(None, 38744), dtype=int32)'] • attention_mask=None • token_type_ids=None • position_ids=None • head_mask=None • inputs_embeds=None • output_attentions=None • output_hidden_states=None • return_dict=None • training=True ``` <!-- If you have code snippets, error messages, stack traces please provide them here as well. Important! Use code tags to correctly format your code. See https://help.github.com/en/github/writing-on-github/creating-and-highlighting-code-blocks#syntax-highlighting Do not use screenshots, as they are hard to read and (more importantly) don't allow others to copy-and-paste your code.--> The final aim is to get the model up and running so that a voice liveliness detection system (i.e. 
whether the audio is live or a replayed one) can be trained. ## Expected behavior <!-- A clear and concise description of what you would expect to happen. --> The model should fit without any error, similar to the one in [this notebook](https://www.kaggle.com/dhruv1234/huggingface-tfbertmodel) where they did the same as above but for the TFBertModel. ## Additional Information I have already tried removing the lambda layer just to see if that helps but the error persists.
01-06-2022 17:53:58
01-06-2022 17:53:58
It seems that we have to change https://github.com/huggingface/transformers/blob/cc406da4debea3c2e7e93f42585339e79e18fd6b/src/transformers/models/hubert/modeling_tf_hubert.py#L231 to use a tensorflow max function instead https://www.tensorflow.org/api_docs/python/tf/math/maximum @gante , @Rocketknight1 - would you guys be interested in taking a look here?<|||||>@patrickvonplaten I can take a look, assigning to me<|||||>@patrickvonplaten replacing `max` by `tf.math.maximum` then exposes an error on `_scatter_values_on_batch_indices` due to an unknown batch size in TF's graph mode. However, forcing eager execution at train time (i.e. compiling the model with `run_eagerly=True`) bypasses all these errors, but then the model hits a shape-related issue in a Conv layer -- see below. Running a modified version of the script kindly provided by @V-Sher ```python import tensorflow as tf import numpy as np from tensorflow.keras.optimizers import Adam from transformers import TFHubertModel def create_model(bert_model, dim): input_ids = tf.keras.Input(shape=(dim,),dtype='int32') attention_masks = tf.keras.Input(shape=(dim,),dtype='int32') output = bert_model([input_ids,attention_masks]) output = output[0] output = tf.keras.layers.Lambda(lambda x: tf.keras.backend.sum(x, axis=1), name = "Pooling_Embs")(output) output = tf.keras.layers.Dense(32,activation='relu')(output) output = tf.keras.layers.Dropout(0.2)(output) output = tf.keras.layers.Dense(1, activation='sigmoid')(output) model = tf.keras.models.Model(inputs = [input_ids,attention_masks],outputs = output) model.compile(Adam(learning_rate=1e-6), loss='binary_crossentropy', metrics=['accuracy'], run_eagerly=True) return model # custom model creation hubert_model = TFHubertModel.from_pretrained('facebook/hubert-large-ls960-ft') model = create_model(hubert_model, dim=38744) model.summary() # fit model input_values = np.random.rand(5,38744) attention_masks = np.random.randint(0,2, size=(5,38744)) labels = np.asarray([0, 1, 0, 0, 1]) print(model([input_values,attention_masks])) model.fit([input_values,attention_masks], labels, epochs=2, batch_size=2) ``` We get ``` Epoch 1/2 Traceback (most recent call last): File "test.py", line 30, in <module> model.fit([input_values,attention_masks], File "/home/joao_huggingface_co/hf/lib/python3.8/site-packages/keras/utils/traceback_utils.py", line 67, in error_handler raise e.with_traceback(filtered_tb) from None File "/home/joao_huggingface_co/hf/lib/python3.8/site-packages/tensorflow/python/framework/ops.py", line 7107, in raise_from_not_ok_status raise core._status_to_exception(e) from None # pylint: disable=protected-access tensorflow.python.framework.errors_impl.InvalidArgumentError: Computed input depth 1024 doesn't match filter input depth 64 [Op:Conv2DBackpropInput] ``` Before I dig deeper @patrickvonplaten @Rocketknight1, any clues?<|||||>@gante I don't have any immediate intuition about what the problem is here, sorry! If you get stuck or don't have time to keep working on it though, feel free to ping me and I'll dig in deeper.<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread. 
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored.<|||||>(not stale, next on my todo list)<|||||>Update: the script I shared above works on GPU, but fails on CPU due to upstream problems in Keras -- TF does not support backpropagation of grouped convolutions on CPU. (@V-Sher) The PR mentioned above adds an informative error message for these situations.
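To make the `max` vs `tf.math.maximum` point above concrete, a small illustrative sketch (not the actual `modeling_tf_hubert.py` code): Python's built-in `max()` has to convert a symbolic tensor to a Python bool, which is exactly what raises `OperatorNotAllowedInGraphError` in graph mode, while the TF op traces cleanly.

```python
import tensorflow as tf

@tf.function
def clamp_num_masked_spans(num_masked_spans, min_masks):
    # return max(num_masked_spans, min_masks)   # fails in graph mode: tensor used as a Python bool
    return tf.math.maximum(num_masked_spans, min_masks)  # graph-friendly equivalent

print(clamp_num_masked_spans(tf.constant(3), tf.constant(5)))  # tf.Tensor(5, shape=(), dtype=int32)
```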
transformers
15,058
closed
Model summary doc page horizontal banners
# What does this PR do? Horizontally align model summary page banners To achieve this results, there are 2 options: 1. Write html directly (as done in this PR) OR 2. Create a specific svelte component & use it (something like `import SomeComponent.svelte` & `<SomeComponent banners={...}>`) I think option2 is an overkill for this use case since option1 is a very simple html. @sgugger wdyt <!-- Congratulations! You've made it this far! You're not quite done yet though. Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution. Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change. Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost. --> <!-- Remove if not applicable --> Fixes # (issue) As mentioned in https://github.com/huggingface/doc-builder/issues/73#issuecomment-1006423852 compare "Original GPT" banners vs "GPT-2" banners <img width="785" alt="Screenshot 2022-01-06 at 17 46 00" src="https://user-images.githubusercontent.com/11827707/148421166-f613b7d9-b4ed-4ef7-925b-35b9f647bba5.png"> ## Before submitting - [x] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case). - [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section? - [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link to it if that's the case. - [ ] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/master/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/master/docs#writing-source-documentation). - [ ] Did you write any new necessary tests? ## Who can review? Anyone in the community is free to review the PR once the tests have passed. Feel free to tag members/contributors who may be interested in your PR. <!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @ If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**. Please tag fewer than 3 people. 
Models: - albert, bert, xlm: @LysandreJik - blenderbot, bart, marian, pegasus, encoderdecoder, t5: @patrickvonplaten, @patil-suraj - longformer, reformer, transfoxl, xlnet: @patrickvonplaten - fsmt: @stas00 - funnel: @sgugger - gpt2: @patrickvonplaten, @LysandreJik - rag: @patrickvonplaten, @lhoestq - tensorflow: @LysandreJik Library: - benchmarks: @patrickvonplaten - deepspeed: @stas00 - ray/raytune: @richardliaw, @amogkam - text generation: @patrickvonplaten - tokenizers: @n1t0, @LysandreJik - trainer: @sgugger - pipelines: @LysandreJik Documentation: @sgugger HF projects: - datasets: [different repo](https://github.com/huggingface/datasets) - rust tokenizers: [different repo](https://github.com/huggingface/tokenizers) Examples: - maintained examples (not research project or legacy): @sgugger, @patil-suraj - research_projects/bert-loses-patience: @JetRunner - research_projects/distillation: @VictorSanh -->
01-06-2022 17:01:36
01-06-2022 17:01:36
cc: @AK391
transformers
15,057
closed
[VisionTextDualEncoder] Fix doc example
# What does this PR do? In `modeling_vision_text_dual_encoder.py`, this line (doc example) will fail ``` >>> loss, logits_per_image = outputs.loss, outputs.logits_per_imag ``` It should be `outputs.logits_per_image` (`CLIPOutput`) ## Who can review? @patil-suraj
01-06-2022 16:34:04
01-06-2022 16:34:04
transformers
15,056
closed
Fix usage of additional kwargs in `from_encoder_decoder_pretrained` in encoder-decoder models
# What does this PR do? Somewhat continuing https://github.com/huggingface/transformers/pull/15043 but concerning all of the encoder-decoder models: If, like in this [example](https://github.com/huggingface/transformers/blob/f71fb5c36e739d8224419bb091b4c16531df829f/examples/pytorch/speech-recognition/README.md?plain=1#L218), extra keyword arguments are used with `from_encoder_decoder_pretrained`, it raises an error: ```python TypeError: __init__() got an unexpected keyword argument 'add_adapter' ``` This is because the arguments [should be part of the config](https://github.com/huggingface/transformers/blob/f71fb5c36e739d8224419bb091b4c16531df829f/src/transformers/models/auto/auto_factory.py#L136). This PR tries to fix this issue. ## Before submitting - [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case). - [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section? - [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link to it if that's the case. - [ ] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/master/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/master/docs#writing-source-documentation). - [ ] Did you write any new necessary tests? ## Who can review? Anyone in the community is free to review the PR once the tests have passed. Feel free to tag members/contributors who may be interested in your PR. @patrickvonplaten
01-06-2022 16:20:39
01-06-2022 16:20:39
Hey @jsnfly, sorry for answering so late. Great job on identifying the problem! I left some improvements as comments in the PR. The problem is that if we remove `**kwargs_encoder` from `AutoModel.from_pretrained(...)` we lose functionality - *e.g.* if someone were to pass `local_files_only=True`. Could you maybe adapt all examples as shown for `EncoderDecoderModel` above? :-) I think we can then merge this one!<|||||>Hey @patrickvonplaten, thank you for the suggestions, they do indeed make the code a lot cleaner and more functional! Let me know if there are additional improvements.<|||||>Indeed! Great job @jsnfly! Checked it locally and everything works perfectly fine :-) <|||||>Great job merging this PR! The documentation will now be removed from the staging environment.
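For readers hitting the original `add_adapter` error on an older release, a hedged sketch of the config-based workaround discussed in this PR; the checkpoint names below are placeholders, not recommendations.

```python
from transformers import AutoConfig, SpeechEncoderDecoderModel

encoder_id = "facebook/wav2vec2-base"   # placeholder encoder checkpoint
decoder_id = "gpt2"                     # placeholder decoder checkpoint

# Put model options such as add_adapter on the config instead of passing them
# as loose keyword arguments to the underlying __init__.
encoder_config = AutoConfig.from_pretrained(encoder_id)
encoder_config.add_adapter = True

model = SpeechEncoderDecoderModel.from_encoder_decoder_pretrained(
    encoder_id,
    decoder_id,
    encoder_config=encoder_config,  # routed to the encoder's from_pretrained call
)
# With the fix from this PR, the prefixed-kwarg form used in the speech-recognition README
# (e.g. encoder_add_adapter=True) is applied to the config as well.
```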
transformers
15,055
closed
Return type of `ViTFeatureExtractor` does not match `return_tensors` parameter when input is `torch.Tensor` or `PIL.Image.Image`
## Environment info <!-- You can run the command `transformers-cli env` and copy-and-paste its output below. Don't forget to fill out the missing fields in that output! --> - `transformers` version: 4.15.0 - Platform: Ubuntu 16.04 - Python version: 3.9.7 - PyTorch version (GPU?): 1.10.1+cu113 - Tensorflow version (GPU?): not installed - Using GPU in script?: yes - Using distributed or parallel set-up in script?: no ### Who can help <!-- Your issue will be replied to more quickly if you can figure out the right person to tag with @ If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**. Please tag fewer than 3 people. Models: - ALBERT, BERT, XLM, DeBERTa, DeBERTa-v2, ELECTRA, MobileBert, SqueezeBert: @LysandreJik - T5, BART, Marian, Pegasus, EncoderDecoder: @patrickvonplaten - Blenderbot, MBART: @patil-suraj - Longformer, Reformer, TransfoXL, XLNet, FNet, BigBird: @patrickvonplaten - FSMT: @stas00 - Funnel: @sgugger - GPT-2, GPT: @patrickvonplaten, @LysandreJik - RAG, DPR: @patrickvonplaten, @lhoestq - TensorFlow: @Rocketknight1 - JAX/Flax: @patil-suraj - TAPAS, LayoutLM, LayoutLMv2, LUKE, ViT, BEiT, DEiT, DETR, CANINE: @NielsRogge - GPT-Neo, GPT-J, CLIP: @patil-suraj - Wav2Vec2, HuBERT, SpeechEncoderDecoder, UniSpeech, UniSpeechSAT, SEW, SEW-D, Speech2Text: @patrickvonplaten, @anton-l If the model isn't in the list, ping @LysandreJik who will redirect you to the correct contributor. Library: - Benchmarks: @patrickvonplaten - Deepspeed: @stas00 - Ray/raytune: @richardliaw, @amogkam - Text generation: @patrickvonplaten @narsil - Tokenizers: @LysandreJik - Trainer: @sgugger - Pipelines: @Narsil - Speech: @patrickvonplaten, @anton-l - Vision: @NielsRogge, @sgugger Documentation: @sgugger Model hub: - for issues with a model, report at https://discuss.huggingface.co/ and tag the model's creator. HF projects: - datasets: [different repo](https://github.com/huggingface/datasets) - rust tokenizers: [different repo](https://github.com/huggingface/tokenizers) Examples: - maintained examples (not research project or legacy): @sgugger, @patil-suraj For research projetcs, please ping the contributor directly. For example, on the following projects: - research_projects/bert-loses-patience: @JetRunner - research_projects/distillation: @VictorSanh --> ## Information Model I am using (Bert, XLNet ...): ViT The problem arises when using: * [x] the official example scripts: (give details below) * [x] my own modified scripts: (give details below) The tasks I am working on is: * [ ] an official GLUE/SQUaD task: (give the name) * [x] my own task or dataset: (give details below) Classification on ImageNet ## To reproduce **Note**: Since `typing.List` is deprecated since python 3.9, I am using `builtins.list` in the following contents. Steps to reproduce the behavior: 1. Set `do_normalize` and `do_resize` parameter of a `ViTFeatureExtractor` 2. Try different combinations 3. We call like this ```python from transformers import ViTFeatureExtractor extractor = ViTFeatureExtractor.from_pretrained("google/vit-base-patch16-224") img = torch.randn(3, 256, 256) input = extractor(img, return_tensors="pt") # or input = extractor([img, img], return_tensors="pt") ``` As the `__call__` function of `ViTFeatureExtractor` accepts `(PIL.Image.Image, np.ndarray, torch.Tensor, list[PIL.Image.Image], list[np.ndarray], list[torch.Tensor])` as its first parameter, it does't matter whether to call `extractor(img)` or `extractor([img])`. 
I also tried this: ```jupyter >>> ViTFeatureExtractor() ViTFeatureExtractor { "do_normalize": true, "do_resize": true, "feature_extractor_type": "ViTFeatureExtractor", "image_mean": [ 0.5, 0.5, 0.5 ], "image_std": [ 0.5, 0.5, 0.5 ], "resample": 2, "size": 224 } >>> ViTFeatureExtractor.from_pretrained("google/vit-base-patch16-224") ViTFeatureExtractor { "do_normalize": true, "do_resize": true, "feature_extractor_type": "ViTFeatureExtractor", "image_mean": [ 0.5, 0.5, 0.5 ], "image_std": [ 0.5, 0.5, 0.5 ], "resample": 2, "size": 224 } ``` which indicates the two extractors are exactly same. And the results are very weird, shown in table below: | `do_resize` | `do_normalize` | `return_tensors` | actual return type | | :----------: | :---------------: | :----------------: | :-----------: | | ✅ | ✅ | not specified | `list[np.ndarray]` | | ✅ | ✅ | "pt" | 4-D `torch.Tensor` <br> with shape `(B, C, H, W)` | | ❌ | ✅ | not specified | `list[torch.Tensor]` | | ❌ | ✅ | "pt" | ValueError | | ✅ | ❌ | not specified | `list[PIL.Image.Image]` | | ✅ | ❌ | "pt" | ValueError | | ❌ | ❌ | not specified | `list[torch.Tensor]` | | ❌ | ❌ | "pt" | ValueError | When return type is `list[torch.Tensor]`, each element in the list is a 3-D `torch.Tensor` with shape `(C, H, W)`. The `ValueError` in table refers to >ValueError: Unable to create tensor, you should probably activate padding with 'padding=True' to have batched tensors with the same length. I cannot understand why `__call__` returns a list of `PIL.Image.Image` when `do_normalize = False` and `do_resize = True`, which is the most weird thing. It seems that tensors are only converted to PIL images, resized but no more operations. According to the doc, the default value of `return_tensors` is `"np"` when not specified. But it does not correspond to the real type of return value when `do_normalize` or `do_resize` is changed. The doc also says >NumPy arrays and PyTorch tensors are converted to PIL images when resizing, so the most efficient is to pass PIL images. As we can do resize and normalization using `torchvision.transforms`, there are 3 solutions: ### Solution 1 Do feature extract before using `torch.utils.data.DataLoader` (pass PIL images to `__call__`). ```python from torchvision.datasets import ImageFolder dataset = ImageFolder(root) extractor = ViTFeatureExtractor.from_pretrained("google/vit-base-patch16-224") extractor.do_normalize = True extractor.do_resize = True # X is list of PIL.Image.Image X = [x for x, _ in dataset] # This will consume all of your memory X = extractor(X, return_tensors="pt")["pixel_values"] # or for x, _ in dataset: x = extractor(x, return_tensors="pt")["pixel_values"].squeeze() # This is very slow and inefficient ``` ### Solution 2 Do feature extract after `torch.utils.data.DataLoader` and `torchvision.datasets.ImageFolder` in a small batch (pass PIL images to `__call__`). ```python from torchvision.datasets import ImageFolder from torch.utils.data import DataLoader dataset = ImageFolder(root) extractor = ViTFeatureExtractor.from_pretrained("google/vit-base-patch16-224") extractor.do_normalize = True extractor.do_resize = True loader = DataLoader(dataset, batch_size=64) X, y = next(iter(loader)) X = extractor(X, return_tensors="pt")["pixel_values"] ``` This will raise an error: >TypeError: default_collate: batch must contain tensors, numpy arrays, numbers, dicts or lists; found <class 'PIL.Image.Image'> ### Solution 3 #### Warning It is not recommended to use `ViTFeatureExtractor` along with `torchvision.transforms`. 
Inappropriate combinations can lead to a decrease in accuracy. Generally, never do resize after normalization. Examples: * Use `torchvision.transforms` only ```python import torch from torchvision.datasets import ImageFolder from torch.utils.data import DataLoader from torchvision import transforms from transformers.image_utils import ( IMAGENET_STANDARD_MEAN, IMAGENET_STANDARD_STD ) from transformers import ViTForImageClassification img_size = 224 normalize = transforms.Normalize(mean=IMAGENET_STANDARD_MEAN, std=IMAGENET_STANDARD_STD) tf = transforms.Compose([ transforms.Resize((img_size, img_size)), transforms.ToTensor(), normalize]) dataset = ImageFolder(root=root, transform=tf) loader = DataLoader(dataset, batch_size=64) model = ViTForImageClassification.from_pretrained("google/vit-base-patch16-224") outputs = model(dataset[0][0].unsqueeze(0)) # or outputs = model(next(iter(loader))[0]) ``` * Use `ViTFeatureExtractor` only (not recommended, images are converted to tensors then to PIL images again) ```python import torch from torchvision.datasets import ImageFolder from torch.utils.data import DataLoader from torchvision import transforms from transformers import ViTFeatureExtractor, ViTForImageClassification tf = transforms.ToTensor() dataset = ImageFolder(root=root, transform=tf) loader = DataLoader(dataset, batch_size=64) extractor = ViTFeatureExtractor.from_pretrained("google/vit-base-patch16-224") model = ViTForImageClassification.from_pretrained("google/vit-base-patch16-224") inputs = extractor(images=dataset[0][0], return_tensors="pt") outputs = model(**inputs) ``` `DataLoader` cannot be used here as it requires that each tensor in the mini-batch has the same shape. * Do resize using `torchvision.transforms`, do normalization using `ViTFeatureExtractor` ```python import torch from torchvision.datasets import ImageFolder from torch.utils.data import DataLoader from torchvision import transforms from transformers import ViTFeatureExtractor, ViTForImageClassification img_size = 224 tf = transforms.Compose([ transforms.Resize((img_size, img_size)), transforms.ToTensor()]) dataset = ImageFolder(root=root, transform=tf) loader = DataLoader(dataset, batch_size=64) extractor = ViTFeatureExtractor.from_pretrained("google/vit-base-patch16-224") model = ViTForImageClassification.from_pretrained("google/vit-base-patch16-224") extractor.do_resize = False inputs = extractor(images=dataset[0][0]) outputs = model(torch.stack(inputs["pixel_values"])) # or images = list(next(iter(loader))[0].unbind()) inputs = extractor(images=images) outputs = model(torch.stack(inputs["pixel_values"])) ``` And I proposed a flexible workaround for those who want to use `ViTModel` or `ViTForImageClassification` with `torch.utils.data.DataLoader`. 
```python import torch from torch import Tensor from torchvision.datasets import ImageFolder from torch.utils.data import DataLoader from torchvision import transforms from transformers.image_utils import ( IMAGENET_STANDARD_MEAN, IMAGENET_STANDARD_STD ) img_size = 224 normalize = transforms.Normalize(mean=IMAGENET_STANDARD_MEAN, std=IMAGENET_STANDARD_STD) tf = transforms.Compose([ transforms.Resize((img_size, img_size)), transforms.RandomHorizontalFlip(), transforms.ToTensor(), normalize]) dataset = ImageFolder(root=root, transform=tf) loader = DataLoader(dataset, batch_size=64) extractor = ViTFeatureExtractor.from_pretrained("google/vit-base-patch16-224") # X must be a 4-D tensor with shape (B, C, H, W) def feature_extract(X: Tensor, extractor: ViTFeatureExtractor, do_resize=False, do_normalize=False) -> Tensor: X = list(X.unbind()) extractor.do_resize = do_resize extractor.do_normalize = do_normalize if do_resize: if do_normalize: batch_feature = extractor(images=X, return_tensors="pt") return batch_feature["pixel_values"] else: batch_feature = extractor(images=X) imgs = [transforms.ToTensor()(img) for img in batch_feature["pixel_values"]] return torch.stack(imgs) else: batch_feature = extractor(images=X) return torch.stack(batch_feature["pixel_values"]) ``` Usage: ```python from torch import nn def model_fn(batch: list[Tensor], extractor: ViTFeatureExtractor, model: nn.Module, device: str, criterion: nn.Module) -> tuple[Tensor, Tensor]: X, y = batch X = feature_extract(X, extractor) X, y = X.to(device), y.to(device) o = model(X) if hasattr(o, "logits"): # Use for ViTForImageClassification outs: Tensor = o.logits else: # Use for other model containing ViTModel outs: Tensor = o loss = criterion(outs, y) preds = outs.argmax(-1) accuracy = torch.mean((preds == y).float()) return loss, accuracy loss, accuracy = model_fn(next(iter(loader)), extractor, model, "cuda", nn.CrossEntropyLoss()) ``` ### Update Tested on official docs, the return type is not functioning either, see table below. I modified on code provided by offical docs: ```python from transformers import ViTFeatureExtractor, ViTForImageClassification from PIL import Image import requests url = "http://images.cocodataset.org/val2017/000000039769.jpg" image = Image.open(requests.get(url, stream=True).raw) feature_extractor = ViTFeatureExtractor.from_pretrained("google/vit-base-patch16-224") model = ViTForImageClassification.from_pretrained("google/vit-base-patch16-224") feature_extractor.do_resize = False feature_extractor.do_normalize = True inputs = feature_extractor(images=image, return_tensors="pt") # or inputs = feature_extractor(images=[image, image], return_tensors="pt") ``` type of `image` is ``` >>> type(image) PIL.JpegImagePlugin.JpegImageFile ``` and results are | `do_resize` | `do_normalize` | `return_tensors` | actual return type | | :----------: | :---------------: | :----------------: | :-----------: | | ✅ | ✅ | not specified | `list[np.ndarray]` | | ✅ | ✅ | "pt" | 4-D `torch.Tensor` <br> with shape `(B, C, H, W)` | | ❌ | ✅ | not specified | `list[np.ndarray]` | | ❌ | ✅ | "pt" | 4-D `torch.Tensor` <br> with shape `(B, C, H, W)` | | ✅ | ❌ | not specified | `list[PIL.Image.Image]` | | ✅ | ❌ | "pt" | ValueError | | ❌ | ❌ | not specified | `list[PIL.JpegImagePlugin.JpegImageFile]` | | ❌ | ❌ | "pt" | ValueError | I have only tested when input is `torch.Tensor | list[torch.Tensor]` or `PIL.Image.Image | list[PIL.Image.Image]`. Not sure if other conditions work properly. 
<!-- If you have code snippets, error messages, stack traces please provide them here as well. Important! Use code tags to correctly format your code. See https://help.github.com/en/github/writing-on-github/creating-and-highlighting-code-blocks#syntax-highlighting Do not use screenshots, as they are hard to read and (more importantly) don't allow others to copy-and-paste your code.--> ## Expected behavior Return type of `ViTFeatureExtractor.__call__` should match `return_tensors`, and there should be no error in table above. <!-- A clear and concise description of what you would expect to happen. -->
01-06-2022 15:21:08
01-06-2022 15:21:08
Thanks for this table, it beautifully summarizes all possible cases and it should indeed work in any case. Here are my thoughts: ## Single image If you are providing a single image (or a list containing a single image) and not specifying the `return_tensors` argument/setting it to 'np', it should in any case return a 4D Numpy array of shape `(1, num_channels, height, width)` - feature extractors should always include a batch dimension similar to the tokenizers. Whether or not to place the channel dimension last is a debate, but as all models use channels=first, we should probably default to this. ## Batch of images In case a batch of images is provided to the feature extractor, it should work (i.e., return a 4D tensor) in any case _unless_ one provides a list of images which don't have the same resolution and one sets `do_resize=False`. In that case, the following error should be returned: > ValueError: Unable to create tensor, you should probably activate padding with 'padding=True' to have batched tensors with the same length. Note that the feature extractors use PIL behind the scenes, rather than torchvision (image transformations are defined in [image_utils.py](https://github.com/huggingface/transformers/blob/master/src/transformers/image_utils.py)) We will work on a PR that fixes this. Edit: updated to make sure channels are first.<|||||>@NielsRogge Updated results when input are PIL images.<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread. Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored.<|||||>@liyufan Thanks for reporting these issues. They should now be resolved with the recent refactoring of the feature extractors (now called image processors). 
Following from your snippet, I ran the below example ```python from transformers import ViTFeatureExtractor, ViTForImageClassification from PIL import Image import requests url = "http://images.cocodataset.org/val2017/000000039769.jpg" image = Image.open(requests.get(url, stream=True).raw) feature_extractor = ViTFeatureExtractor.from_pretrained("google/vit-base-patch16-224") settings = [ {"do_resize": True, "do_normalize": True, "return_tensors": None}, {"do_resize": True, "do_normalize": True, "return_tensors": "pt"}, {"do_resize": False, "do_normalize": True, "return_tensors": None}, {"do_resize": False, "do_normalize": True, "return_tensors": "pt"}, {"do_resize": True, "do_normalize": False, "return_tensors": None}, {"do_resize": True, "do_normalize": False, "return_tensors": "pt"}, {"do_resize": False, "do_normalize": False, "return_tensors": None}, {"do_resize": False, "do_normalize": False, "return_tensors": "pt"}, ] for kwargs in settings: input_single = feature_extractor(images=image, **kwargs)['pixel_values'] input_batch = feature_extractor(images=[image, image], **kwargs)['pixel_values'] print("\n" + str(kwargs)) print(f"Single image - type: {type(input_single)}, shape: {[x.shape for x in input_single] if isinstance(input_single, list) else input_single.shape}") print(f"Batch of images - type: {type(input_batch)}, shape: {[x.shape for x in input_batch] if isinstance(input_batch, list) else input_batch.shape}") ``` And got the following output: ``` {'do_resize': True, 'do_normalize': True, 'return_tensors': None} Single image - type: <class 'list'>, shape: [(3, 224, 224)] Batch of images - type: <class 'list'>, shape: [(3, 224, 224), (3, 224, 224)] {'do_resize': True, 'do_normalize': True, 'return_tensors': 'pt'} Single image - type: <class 'torch.Tensor'>, shape: torch.Size([1, 3, 224, 224]) Batch of images - type: <class 'torch.Tensor'>, shape: torch.Size([2, 3, 224, 224]) {'do_resize': False, 'do_normalize': True, 'return_tensors': None} Single image - type: <class 'list'>, shape: [(3, 480, 640)] Batch of images - type: <class 'list'>, shape: [(3, 480, 640), (3, 480, 640)] {'do_resize': False, 'do_normalize': True, 'return_tensors': 'pt'} Single image - type: <class 'torch.Tensor'>, shape: torch.Size([1, 3, 480, 640]) Batch of images - type: <class 'torch.Tensor'>, shape: torch.Size([2, 3, 480, 640]) {'do_resize': True, 'do_normalize': False, 'return_tensors': None} Single image - type: <class 'list'>, shape: [(3, 224, 224)] Batch of images - type: <class 'list'>, shape: [(3, 224, 224), (3, 224, 224)] {'do_resize': True, 'do_normalize': False, 'return_tensors': 'pt'} Single image - type: <class 'torch.Tensor'>, shape: torch.Size([1, 3, 224, 224]) Batch of images - type: <class 'torch.Tensor'>, shape: torch.Size([2, 3, 224, 224]) {'do_resize': False, 'do_normalize': False, 'return_tensors': None} Single image - type: <class 'list'>, shape: [(3, 480, 640)] Batch of images - type: <class 'list'>, shape: [(3, 480, 640), (3, 480, 640)] {'do_resize': False, 'do_normalize': False, 'return_tensors': 'pt'} Single image - type: <class 'torch.Tensor'>, shape: torch.Size([1, 3, 480, 640]) Batch of images - type: <class 'torch.Tensor'>, shape: torch.Size([2, 3, 480, 640]) ``` Disabling either `do_resize` or `do_normalize` should no longer have an effect on the output type and `return_tensors=None` will return a list of numpy arrays. However, if `do_resize=False` and `return_tensors="pt"` an error may be raised if the images are of different shapes as the images can't be batched together. 
Note that `do_resize` and `do_normalize` can now also be passed to the image processor call directly, rather than modifying the instance properties. <|||||>Thanks a lot @amyeroberts! Closing this issue as it's resolved.
transformers
15,054
closed
How can I update special token ids?
I am combining a tokenizer from model A with a pretrained model B. I want to align the special token ids. I am able to change a special token's name, such as `<s>`, but I cannot change its token id. How can I approach this problem?
01-06-2022 14:18:00
01-06-2022 14:18:00
Hello, thanks for opening an issue! We try to keep the github issues for bugs/feature requests. Could you ask your question on the [forum](https://discuss.huggingface.co) instead? Thanks!<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread. Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored.
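For completeness, one common pattern for this situation (a sketch, not an official recipe; checkpoint names are placeholders) is to keep the tokenizer's ids as they are and point the model's config at them, resizing the embeddings if the vocabularies differ.

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("tokenizer-from-model-A")   # placeholder
model = AutoModelForCausalLM.from_pretrained("model-B")               # placeholder

# Align the model's special-token ids with the tokenizer instead of renumbering the tokenizer.
model.config.bos_token_id = tokenizer.bos_token_id
model.config.eos_token_id = tokenizer.eos_token_id
model.config.pad_token_id = tokenizer.pad_token_id

# If the tokenizer's vocabulary size differs from model B's, resize the embeddings as well.
model.resize_token_embeddings(len(tokenizer))
```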
transformers
15,053
closed
Add Detectron2 to Github actions
# What does this PR do? This PR makes sure the torch GPU + multi GPU LayoutLMv2/LayoutXLM tests are run on Github actions, by installing Detectron2 from source. It also fixes a device issue.
01-06-2022 13:34:55
01-06-2022 13:34:55
transformers
15,052
closed
Multilabel Token Classification in trainer
# 🚀 Feature request We need to be able to use the Trainer for multilabel token classification problems. ## Motivation Right now we create a custom model and a custom trainer class, where the model has an additional layer for the second set of labels. This makes the codebase complicated and harder to maintain. This feature may help to identify entity groups or roles, as implemented in [RASA](https://rasa.com/blog/introducing-entity-roles-and-groups/). ## Your contribution I can try to help if you point me to a solution.
01-06-2022 12:03:28
01-06-2022 12:03:28
I am unsure why the model would need an additional layer. Can't it all be treated by a proper loss function?<|||||>Thanks for the suggestion. After brainstorming with my team, it seems to be possible. I am closing this thread.
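To illustrate the loss-function suggestion above, a minimal sketch (shapes and label count are made up) of treating multilabel token classification as per-token binary targets with `BCEWithLogitsLoss`; the same computation could be dropped into a `Trainer.compute_loss` override.

```python
import torch
from torch import nn

batch_size, seq_len, num_labels = 2, 6, 4
logits = torch.randn(batch_size, seq_len, num_labels)                     # one score per label per token
labels = torch.randint(0, 2, (batch_size, seq_len, num_labels)).float()   # multi-hot targets
attention_mask = torch.ones(batch_size, seq_len)

loss_fct = nn.BCEWithLogitsLoss(reduction="none")
per_token_loss = loss_fct(logits, labels).mean(-1)                        # (batch, seq_len)
loss = (per_token_loss * attention_mask).sum() / attention_mask.sum()     # ignore padded tokens
print(loss)
```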
transformers
15,051
closed
Trainer model __init__() got an unexpected keyword argument 'prediction_loss_only'
## Environment info - `transformers` version: 4.15.0 - Platform: Linux-4.18.0-193.19.1.el8_2.x86_64-x86_64-with-centos-8.2.2004-Core - Python version: 3.6.13 - PyTorch version (GPU?): 1.10.1+cu102 (True) - Tensorflow version (GPU?): not installed (NA) - Flax version (CPU?/GPU?/TPU?): not installed (NA) - Jax version: not installed - JaxLib version: not installed - Using GPU in script?: <fill in> - Using distributed or parallel set-up in script?: <fill in> ### Who can help @sgugger ## Information Model I am using is BERT. The problem arises when using: * [ ] the official example scripts: (give details below) cd examples/legacy/ export CUDA_VISIBLE_DEVICES=0 export TRAIN_FILE=data/source export TEST_FILE=data/source python run_language_modeling.py \ --output_dir=chinese_finetuned_lm \ --model_type=bert \ --model_name_or_path=bert-base-chinese \ --do_train \ --train_data_file=$TRAIN_FILE \ --do_eval \ --eval_data_file=$TEST_FILE \ --mlm The tasks I am working on is: * [ ] Finetuning a language model based on bert-base-chinese model. ## To reproduce Steps to reproduce the behavior: 1.prepare bert-base-chinese model. 2.prepare data/source file(A pure Chinese text). 3.run run_language_modeling.py Traceback (most recent call last): File "run_language_modeling.py", line 364, in <module> main() File "run_language_modeling.py", line 318, in main prediction_loss_only=True, TypeError: __init__() got an unexpected keyword argument 'prediction_loss_only' ## Expected behavior No error.
01-06-2022 09:35:41
01-06-2022 09:35:41
You are using a legacy script with a recent version of Transformers; this is not going to work. You should use a version of Transformers compatible with that script, or use one of our maintained examples.<|||||>I had the same issue and solved it by using an older release: `pip install transformers==v3.4.0` @atptour2017, just in case that works for you too. <|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread. Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored.
transformers
15,050
closed
Cannot fine-tune google/mobilebert-uncased using native Pytorch
## Environment info - `transformers` version: 4.12.5 - Platform: Linux-4.4.0-210-generic-x86_64-with-glibc2.23 - Python version: 3.9.7 - PyTorch version (GPU?): 1.10.0 (False) - Tensorflow version (GPU?): 2.4.1 (False) - Flax version (CPU?/GPU?/TPU?): not installed (NA) - Jax version: not installed - JaxLib version: not installed - Using GPU in script?: no - Using distributed or parallel set-up in script?: no ### Who can help @LysandreJik @sgugger ## To reproduce Steps to reproduce the behavior: 1. Follow the "fine-tuning in native PyTorch" section in "Fine-tuning a pretrained model" on the official documentation 2. Replace "bert-base-cased" with "google/mobilebert-uncased" 3. Add a `max_length=171` argument to the tokenize function ## Expected behavior The model fine-tuned with native PyTorch should reach the same accuracy as the model fine-tuned with a Trainer. Instead, the accuracy is stuck around 50% and even drops after one epoch. I am wondering if the Hugging Face Trainer class uses some training techniques that I must add in native PyTorch in order for the model to be properly fine-tuned. I looked into the Trainer class but didn't find anything special. Any help is appreciated.
01-06-2022 07:24:27
01-06-2022 07:24:27
Hi, For training-related questions, please refer to the forum. See [here](https://discuss.huggingface.co/search?q=mobilebert) for all MobileBERT-related questions for instance. We like to keep Github issues for bugs/feature requests. Thank you!<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread. Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored.
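For readers comparing a native loop against `Trainer`: by default `Trainer` uses AdamW, a linear learning-rate decay schedule, and gradient clipping (`max_grad_norm=1.0`). The sketch below shows these pieces explicitly; the hyper-parameter values are illustrative, not `Trainer`'s exact defaults.

```python
import torch
from transformers import get_linear_schedule_with_warmup

def build_optimizer_and_scheduler(model, num_training_steps, lr=2e-5, warmup_ratio=0.1):
    # lr, warmup_ratio and weight_decay here are illustrative choices
    optimizer = torch.optim.AdamW(model.parameters(), lr=lr, weight_decay=0.01)
    scheduler = get_linear_schedule_with_warmup(
        optimizer,
        num_warmup_steps=int(warmup_ratio * num_training_steps),
        num_training_steps=num_training_steps,
    )
    return optimizer, scheduler

# Inside the training loop, after loss.backward():
#   torch.nn.utils.clip_grad_norm_(model.parameters(), 1.0)  # Trainer clips to 1.0 by default
#   optimizer.step(); scheduler.step(); optimizer.zero_grad()
```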
transformers
15,049
closed
Inference API: Error with GPU inference
### Who can help @LysandreJik @patil-suraj ## To reproduce ``` import requests import json headers = {"Authorization": f"Bearer {MY_BEARER_TOKEN}"} API_URL = "https://api-inference.huggingface.co/models/gpt2" data = json.dumps({"inputs": INPUT_TEXT, "parameters":{"num_return_sequences":NUM_SEQUENCES, "max_length":MAX_LENGTH},"options": {"wait_for_model": True, "use_cache": False, "use_gpu":True}}) response = requests.request("POST", API_URL, headers=headers, data=data) print(json.loads(response.content.decode("utf-8"))) ``` ## Expected behavior For more than a month, I have used the above code snippet to retrieve text completions from gpt2 using huggingface's inference API. However, when the same snippet ran again today, the inference API gave the following response: Error Message 1: ``` CUDA error: all CUDA-capable devices are busy or unavailable\nCUDA kernel errors might be asynchronously reported at some other API call,so the stacktrace below might be incorrect.\nFor debugging consider passing CUDA_LAUNCH_BLOCKING=1."} ``` This persisted for roughly half an hour, and after that time, the API would only allow me to make API requests with very few tokens of text in the ```INPUT_TEXT``` variable. All normal-sized requests gave the following error: Error Message 2: ``` {'error': 'CUDA out of memory, try a smaller payload', 'warnings': ['Setting `pad_token_id` to `eos_token_id`:50256 for open-end generation.']} ``` Keep in mind that when I get the above error, it is with the same arguments (the value for INPUT_TEXT, NUM_SEQUENCES, and MAX_LENGTH) that I have been using this API with for more than a month. I have checked my account, and the inference API dashboard shows that I am still within the free quota provided by huggingface (my subscription plan is the "Lab: pay as you go" option). Can you please help me resolve this? Sample argument that causes an error: INPUT_TEXT = 'Hippocrates, another ancient Greek, established a medical school, wrote many medical treatises, and is— because of Hippocrates, another ancient Greek,' NUM_SEQUENCES = 7 MAX_LENGTH = 105 ## Edit It appears that the API response message is varying between Error Messages 1 and 2 (originally it was 1, then 2, and now 1 again).
01-06-2022 06:58:05
01-06-2022 06:58:05
Hi! Do you have an update on this issue? Thank you for all the support. @LysandreJik @patil-suraj<|||||>cc @Narsil <|||||>Hi @nbravulapalli, there indeed seemed to have been an issue with that model. It should be back up again. We are actively tracking those issues to reduce them to a minimum, but sometimes there is indeed a memory error depending on what other models are being used at the same time. Sorry about the issue you were seeing.<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread. Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored.<|||||>Hi, I'm facing this same error on facebook/bart-large-mnli when trying to use GPU-accelerated inference. I am using this model for text classification and passing 10 candidate labels. When using GPU-Accelerated Inference I am getting error 400 Bad request { "error": "CUDA error: out of memory\nCUDA kernel errors might be asynchronously reported at some other API call,so the stacktrace below might be incorrect.\nFor debugging consider passing CUDA_LAUNCH_BLOCKING=1." } Could anyone point me to why this is the case? Thanks. @Narsil <|||||>I'm also facing this issue for `facebook/bart-large-mnli` on the Lab plan. Is there any advice on workarounds here? Also, just as user feedback, I would expect this error to result in a 500 or 503 error, instead of 400.<|||||>I still have the same issue on facebook/bart-large-cnn with GPU inference. Any solutions?<|||||>For GPU inference, you should check out our premium plans: Spaces (https://huggingface.co/docs/hub/spaces-overview) or Inference Endpoints (https://huggingface.co/inference-endpoints). The API is public and free, so GPU access is limited.
transformers
15,048
closed
word_ids method is not available on fast tokenizers when using "prepare_for_model"
Tokenizer: RoBERTa ``` hf_tokenizer = AutoTokenizer.from_pretrained("roberta-base") hf_tokenizer.is_fast # = True ``` ``` inputs = hf_tokenizer("This is a test") print(inputs.word_ids()) input_ids = inputs["input_ids"] ``` Returns: `[None, 0, 1, 2, 3, None]` ``` inputs2 = hf_tokenizer.prepare_for_model(input_ids) print(inputs2.word_ids()) ``` Returns this error: ~/miniconda3/envs/blurr/lib/python3.9/site-packages/transformers/tokenization_utils_base.py in word_ids(self, batch_index) 351 """ 352 if not self._encodings: --> 353 raise ValueError("word_ids() is not available when using Python-based tokenizers") 354 return self._encodings[batch_index].word_ids 355 ValueError: word_ids() is not available when using Python-based tokenizers **Expected result** Both approaches should return `[None, 0, 1, 2, 3, None]`
01-05-2022 22:10:00
01-05-2022 22:10:00
Hi, `prepare_for_model` is not a user-facing method; it's recommended to just call the tokenizer (as in your first code snippet). The `prepare_for_model` method does not set the `_encodings` field of `BatchEncoding`; this is only done in the `_encode_plus` and `_batch_encode_plus` methods of `tokenization_utils_fast.py` (as seen [here](https://github.com/huggingface/transformers/blob/2e9af294940083915ccb2740a7c8d5b154194f15/src/transformers/tokenization_utils_fast.py#L520) and [here](https://github.com/huggingface/transformers/blob/2e9af294940083915ccb2740a7c8d5b154194f15/src/transformers/tokenization_utils_fast.py#L472), respectively).<|||||>`prepare_for_model` is the only method that works like calling the tokenizer, but with input_ids as input (which is what several of my pre-processed datasets have). Is there another method call I could use that takes input_ids and includes the ability to add padding, truncation, etc.?<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread. Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored.
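As a possible workaround for the follow-up question above, here is a sketch (an assumption-laden workaround, not an official recommendation): fast tokenizers expose `tokenizer.pad` for already-encoded inputs, which covers padding but not truncation, and it will not give you `word_ids()`; truncation is done here by plain slicing, which can drop the trailing special token:

```python
# Sketch: padding/truncating pre-encoded input_ids without re-tokenizing.
# Note: word_ids() is still unavailable this way; re-tokenizing from text
# (or from words with is_split_into_words=True) remains the way to get it.
from transformers import AutoTokenizer

hf_tokenizer = AutoTokenizer.from_pretrained("roberta-base")
input_ids = hf_tokenizer("This is a test")["input_ids"]

max_length = 8
truncated = input_ids[:max_length]  # naive truncation; may cut the final </s>
padded = hf_tokenizer.pad(
    {"input_ids": truncated},
    padding="max_length",
    max_length=max_length,
)
print(padded["input_ids"], padded["attention_mask"])
```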
transformers
15,047
closed
[Trainer] Very different loss reported during training vs. at the end.
I have trained GPT from scratch. During training, I got logs outputting evaluation and 'loss', as shown below: `{'loss': 6.5349, 'learning_rate': 0.00014698625308424393, 'epoch': 42.65}` But after the last step I get this log, different from the previous ones: `{'train_runtime': 751.251, 'train_samples_per_second': 580108.554, 'train_steps_per_second': 566.455, 'train_loss': 0.008420203693772214, 'epoch': 50.0}` Besides, I get these metrics: ``` ***** train metrics ***** epoch = 50.0 train_loss = 0.0084 train_runtime = 0:12:31.27 train_samples = 8716143 train_samples_per_second = 580087.323 train_steps_per_second = 566.434 ``` The values of `train_loss` and `loss` are remarkably different, so what is each of them referring to? And what is the real training loss? If the real one is `train_loss`, what is the point of plotting the training curves using the variable `loss`?
01-05-2022 20:01:34
01-05-2022 20:01:34
cc @sgugger <|||||>It looks like the reported `"train_loss"` is averaged twice, so it's not accurate. Will fix this week.<|||||>I was actually wrong, and the training loss is properly computed (see #15096 that adds a test which is passing). Are you sure you are using the most recent version of Transformers?<|||||>I trained this model using transformers==4.14.1. Was it fixed after that? If so, would I need to train again? Training takes several weeks, so updating to every new version is not possible in my case.<|||||>I think the value of 'train_loss' is meant to be different from 'loss'. If you follow the code: https://github.com/huggingface/transformers/blob/master/src/transformers/trainer.py logs["loss"] = round(tr_loss_scalar / (self.state.global_step - self._globalstep_last_logged), 4) 'loss': the average loss over the steps since the last logging event (the most recent logging window only) train_loss = self._total_loss_scalar / self.state.global_step 'train_loss': the average loss over all training steps <|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread. Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored.
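To make the distinction in the last comment concrete, here is a small standalone sketch (simplified toy code, not taken from Trainer) of the two averages, assuming a list of per-step losses and a logging interval:

```python
# Toy illustration of the logged "loss" vs. the final "train_loss".
step_losses = [6.9, 6.5, 6.1, 5.8, 0.9, 0.4, 0.2, 0.1]  # made-up per-step losses
logging_steps = 2

# "loss" as logged during training: mean over the steps since the last log
for step in range(logging_steps, len(step_losses) + 1, logging_steps):
    window = step_losses[step - logging_steps:step]
    print(f"step {step}: loss = {sum(window) / len(window):.4f}")

# "train_loss" reported at the end: mean over *all* training steps
print(f"train_loss = {sum(step_losses) / len(step_losses):.4f}")
```

The two numbers are computed over different sets of steps, so they generally do not match; which one to plot depends on whether you want the recent-window loss or the overall average.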
transformers
15,046
closed
Adding support for `microphone` streaming within pipeline.
- Uses `ffmpeg` to get microphone data. - Makes sure alignment is made to `size_of_sample`. - Works by sending `{"raw": ..data.., "stride": (n, left, right), "partial": bool, "sampling_rate": sampling_rate}` directly to the pipeline enabling to stream partial results and still get inference. - Let's `partial` information flow through the pipeline to enable caller to get it back and choose to display text or not. - ~~The striding reconstitution is bound to have errors since CTC does not keep previous state. Currently most of the errors are we don't know if there's a space or not between two chunks. Since we have some left striding info, we could use that during decoding to choose what to do with those spaces and even extra letters maybe (if the stride is long enough, it's bound to cover at least a few symbols)~~ Fixed by using intelligent replacement on the dropped `tokens`. ```python import datetime import sys from transformers import pipeline from transformers.pipelines.audio_utils import ffmpeg_microphone_live pipe = pipeline("automatic-speech-recognition", device=0) sampling_rate = pipe.feature_extractor.sampling_rate start = datetime.datetime.now() chunk_length_s = 5 stream_chunk_s = 0.1 mic = ffmpeg_microphone_live( sampling_rate=sampling_rate, chunk_length_s=chunk_length_s, stream_chunk_s=stream_chunk_s, ) print("Start talking...") for item in pipe(mic): sys.stdout.write("\033[K") print(item["text"], end="\r") if not item["partial"][0]: print("") ``` 2nd Better IMO, but low-level demo (requires curses on UNIX like, does not work on windows variants): ```python import sys import numpy as np from transformers import pipeline from transformers.pipelines.audio_utils import ffmpeg_microphone_live from curses import wrapper import curses def main(): pipe = pipeline("automatic-speech-recognition", device=0) sampling_rate = pipe.feature_extractor.sampling_rate chunk_length_s = 5 stream_chunk_s = 0.1 mic = ffmpeg_microphone_live( sampling_rate=sampling_rate, chunk_length_s=chunk_length_s, stream_chunk_s=stream_chunk_s, # , stride_length_s=(1, 0.1) ) print("Start talking...") stdscr = curses.initscr() curses.noecho() curses.cbreak() text = "" for item in pipe(mic): displayed = text + item["text"] if not item["partial"][0]: text += item["text"] stdscr.addstr(0, 0, displayed) stdscr.clrtoeol() stdscr.refresh() if __name__ == "__main__": wrapper(main()) ``` # What does this PR do? <!-- Congratulations! You've made it this far! You're not quite done yet though. Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution. Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change. Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost. --> <!-- Remove if not applicable --> Fixes # (issue) ## Before submitting - [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case). 
- [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section? - [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link to it if that's the case. - [ ] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/master/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/master/docs#writing-source-documentation). - [ ] Did you write any new necessary tests? ## Who can review? Anyone in the community is free to review the PR once the tests have passed. Feel free to tag members/contributors who may be interested in your PR. <!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @ If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**. Please tag fewer than 3 people. Models: - albert, bert, xlm: @LysandreJik - blenderbot, bart, marian, pegasus, encoderdecoder, t5: @patrickvonplaten, @patil-suraj - longformer, reformer, transfoxl, xlnet: @patrickvonplaten - fsmt: @stas00 - funnel: @sgugger - gpt2: @patrickvonplaten, @LysandreJik - rag: @patrickvonplaten, @lhoestq - tensorflow: @LysandreJik Library: - benchmarks: @patrickvonplaten - deepspeed: @stas00 - ray/raytune: @richardliaw, @amogkam - text generation: @patrickvonplaten - tokenizers: @n1t0, @LysandreJik - trainer: @sgugger - pipelines: @LysandreJik Documentation: @sgugger HF projects: - datasets: [different repo](https://github.com/huggingface/datasets) - rust tokenizers: [different repo](https://github.com/huggingface/tokenizers) Examples: - maintained examples (not research project or legacy): @sgugger, @patil-suraj - research_projects/bert-loses-patience: @JetRunner - research_projects/distillation: @VictorSanh -->
01-05-2022 18:02:11
01-05-2022 18:02:11
Hey @Narsil, I think the PR broke some slow tests: ``` FAILED tests/test_pipelines_automatic_speech_recognition.py::AutomaticSpeechRecognitionPipelineTests::test_speech_to_text_leveraged FAILED tests/test_pipelines_automatic_speech_recognition.py::AutomaticSpeechRecognitionPipelineTests::test_torch_speech_encoder_decoder FAILED tests/test_pipelines_automatic_speech_recognition.py::AutomaticSpeechRecognitionPipelineTests::test_xls_r_from_en FAILED tests/test_pipelines_automatic_speech_recognition.py::AutomaticSpeechRecognitionPipelineTests::test_xls_r_to_en ``` Could you take a look maybe? :-)<|||||>I can't reproduce. I did have an issue with the old `1.18.0` version, gone in `1.18.3`; was that it?<|||||>Fixed it :-)
transformers
15,045
closed
[FX] `symbolic_trace` yields a TraceError for `BertModel`
## Environment info <!-- You can run the command `transformers-cli env` and copy-and-paste its output below. Don't forget to fill out the missing fields in that output! --> - `transformers` version: 4.16.0.dev0 - Platform: Linux - Python version: 3.9.2 - PyTorch version (GPU?): 1.10.1+cpu - Tensorflow version (GPU?): / - Using GPU in script?: No - Using distributed or parallel set-up in script?: No ### Who can help @michaelbenayoun Models: - BERT: @LysandreJik ## Information Model I am using : Bert The problem arises when using: * [ x ] my own modified scripts The tasks I am working on is: * Convert a `BertModel` to `torch.fx.GraphModule` ## To reproduce Steps to reproduce the behavior: ``` from transformers import BertModel import torch from transformers.utils.fx import symbolic_trace import transformers # see https://github.com/huggingface/transformers/issues/14632 transformers.utils.fx.is_torch_fx_available = lambda: True model = BertModel.from_pretrained("bert-base-uncased") model = model.eval() g = symbolic_trace(model, sequence_length=20) ``` Traceback message: ``` Traceback (most recent call last): File "<tmp 1>", line 20, in <module> g = symbolic_trace(model, File "/home/felix/.local/lib/python3.9/site-packages/transformers-4.16.0.dev0-py3.9.egg/transformers/utils/fx.py", line 581, in symbolic_trace traced_graph = tracer.trace(model, concrete_args=concrete_args) File "/home/felix/.local/lib/python3.9/site-packages/transformers-4.16.0.dev0-py3.9.egg/transformers/utils/fx.py", line 372, in trace graph = super().trace(root, concrete_args=concrete_args) File "/home/felix/.local/lib/python3.9/site-packages/torch/fx/_symbolic_trace.py", line 615, in trace self.create_node('output', 'output', (self.create_arg(fn(*args)),), {}, File "/home/felix/.local/lib/python3.9/site-packages/transformers-4.16.0.dev0-py3.9.egg/transformers/models/albert/modeling_albert.py", line 720, in forward encoder_outputs = self.encoder( File "/home/felix/.local/lib/python3.9/site-packages/torch/fx/_symbolic_trace.py", line 604, in module_call_wrapper return self.call_module(mod, forward, args, kwargs) File "/home/felix/.local/lib/python3.9/site-packages/torch/fx/_symbolic_trace.py", line 422, in call_module return forward(*args, **kwargs) File "/home/felix/.local/lib/python3.9/site-packages/torch/fx/_symbolic_trace.py", line 600, in forward return _orig_module_call(mod, *args, **kwargs) File "/home/felix/.local/lib/python3.9/site-packages/torch/nn/modules/module.py", line 1102, in _call_impl return forward_call(*input, **kwargs) File "/home/felix/.local/lib/python3.9/site-packages/transformers-4.16.0.dev0-py3.9.egg/transformers/models/albert/modeling_albert.py", line 484, in forward return BaseModelOutput( File "<string>", line 6, in __init__ File "/home/felix/.local/lib/python3.9/site-packages/transformers-4.16.0.dev0-py3.9.egg/transformers/file_utils.py", line 2296, in __post_init__ iterator = iter(first_field) File "/home/felix/.local/lib/python3.9/site-packages/torch/fx/proxy.py", line 248, in __iter__ return self.tracer.iter(self) File "/home/felix/.local/lib/python3.9/site-packages/torch/fx/proxy.py", line 161, in iter raise TraceError('Proxy object cannot be iterated. This can be ' torch.fx.proxy.TraceError: Proxy object cannot be iterated. This can be attempted when the Proxy is used in a loop or as a *args or **kwargs function argument. 
See the torch.fx docs on pytorch.org for a more detailed explanation of what types of control flow can be traced, and check out the Proxy docstring for help troubleshooting Proxy iteration errors ``` Tweaking the parameters here https://github.com/huggingface/transformers/blob/master/src/transformers/utils/fx.py#L497 does not seem to help. Maybe this is related, but I am not sure, I am not familiar with fx: https://github.com/pytorch/pytorch/issues/44665 Maybe Bert and other models are not yet fx-friendly? ## Expected behavior No error.
01-05-2022 16:08:56
01-05-2022 16:08:56
Hi, In order to symbolic trace a model, you also need to provide the input_names: ``` from transformers import BertModel from transformers.utils.fx import symbolic_trace model = BertModel.from_pretrained("bert-base-uncased") traced_model = symbolic_trace( model, input_names=["input_ids", "attention_mask", "token_type_ids"], batch_size=1, sequence_length=128, ) ``` This works for me on Transformers v4.15.0 and PyTorch 1.9.0 in Google Colab. Running `print(traced_model.graph)` prints the following for me: ``` graph(): %input_ids : [#users=1] = placeholder[target=input_ids] %embeddings_token_type_ids : [#users=1] = get_attr[target=embeddings.token_type_ids] %getitem : [#users=1] = call_function[target=operator.getitem](args = (%embeddings_token_type_ids, (slice(None, None, None), slice(None, 20, None))), kwargs = {}) %expand : [#users=1] = call_method[target=expand](args = (%getitem, 1, 20), kwargs = {}) %embeddings_position_ids : [#users=1] = get_attr[target=embeddings.position_ids] %getitem_1 : [#users=1] = call_function[target=operator.getitem](args = (%embeddings_position_ids, (slice(None, None, None), slice(0, 20, None))), kwargs = {}) %embeddings_word_embeddings : [#users=1] = call_module[target=embeddings.word_embeddings](args = (%input_ids,), kwargs = {}) %embeddings_token_type_embeddings : [#users=1] = call_module[target=embeddings.token_type_embeddings](args = (%expand,), kwargs = {}) %add : [#users=1] = call_function[target=operator.add](args = (%embeddings_word_embeddings, %embeddings_token_type_embeddings), kwargs = {}) %embeddings_position_embeddings : [#users=1] = call_module[target=embeddings.position_embeddings](args = (%getitem_1,), kwargs = {}) %add_1 : [#users=1] = call_function[target=operator.add](args = (%add, %embeddings_position_embeddings), kwargs = {}) %embeddings_layer_norm : [#users=1] = call_module[target=embeddings.LayerNorm](args = (%add_1,), kwargs = {}) %embeddings_dropout : [#users=4] = call_module[target=embeddings.dropout](args = (%embeddings_layer_norm,), kwargs = {}) %encoder_layer_0_attention_self_query : [#users=1] = call_module[target=encoder.layer.0.attention.self.query](args = (%embeddings_dropout,), kwargs = {}) %encoder_layer_0_attention_self_key : [#users=1] = call_module[target=encoder.layer.0.attention.self.key](args = (%embeddings_dropout,), kwargs = {}) %view : [#users=1] = call_method[target=view](args = (%encoder_layer_0_attention_self_key, 1, 20, 12, 64), kwargs = {}) %permute : [#users=1] = call_method[target=permute](args = (%view, 0, 2, 1, 3), kwargs = {}) %encoder_layer_0_attention_self_value : [#users=1] = call_module[target=encoder.layer.0.attention.self.value](args = (%embeddings_dropout,), kwargs = {}) %view_1 : [#users=1] = call_method[target=view](args = (%encoder_layer_0_attention_self_value, 1, 20, 12, 64), kwargs = {}) %permute_1 : [#users=1] = call_method[target=permute](args = (%view_1, 0, 2, 1, 3), kwargs = {}) %view_2 : [#users=1] = call_method[target=view](args = (%encoder_layer_0_attention_self_query, 1, 20, 12, 64), kwargs = {}) %permute_2 : [#users=1] = call_method[target=permute](args = (%view_2, 0, 2, 1, 3), kwargs = {}) %transpose : [#users=1] = call_method[target=transpose](args = (%permute, -1, -2), kwargs = {}) %matmul : [#users=1] = call_function[target=torch.matmul](args = (%permute_2, %transpose), kwargs = {}) %truediv : [#users=1] = call_function[target=operator.truediv](args = (%matmul, 8.0), kwargs = {}) %_tensor_constant0 : [#users=1] = get_attr[target=_tensor_constant0] %add_2 : [#users=1] = 
call_function[target=operator.add](args = (%truediv, %_tensor_constant0), kwargs = {}) %softmax : [#users=1] = call_function[target=torch.nn.functional.softmax](args = (%add_2,), kwargs = {dim: -1, _stacklevel: 3, dtype: None}) %encoder_layer_0_attention_self_dropout : [#users=1] = call_module[target=encoder.layer.0.attention.self.dropout](args = (%softmax,), kwargs = {}) %matmul_1 : [#users=1] = call_function[target=torch.matmul](args = (%encoder_layer_0_attention_self_dropout, %permute_1), kwargs = {}) %permute_3 : [#users=1] = call_method[target=permute](args = (%matmul_1, 0, 2, 1, 3), kwargs = {}) %contiguous : [#users=1] = call_method[target=contiguous](args = (%permute_3,), kwargs = {}) %view_3 : [#users=1] = call_method[target=view](args = (%contiguous, 1, 20, 768), kwargs = {}) %encoder_layer_0_attention_output_dense : [#users=1] = call_module[target=encoder.layer.0.attention.output.dense](args = (%view_3,), kwargs = {}) %encoder_layer_0_attention_output_dropout : [#users=1] = call_module[target=encoder.layer.0.attention.output.dropout](args = (%encoder_layer_0_attention_output_dense,), kwargs = {}) %add_3 : [#users=1] = call_function[target=operator.add](args = (%encoder_layer_0_attention_output_dropout, %embeddings_dropout), kwargs = {}) %encoder_layer_0_attention_output_layer_norm : [#users=2] = call_module[target=encoder.layer.0.attention.output.LayerNorm](args = (%add_3,), kwargs = {}) %encoder_layer_0_intermediate_dense : [#users=1] = call_module[target=encoder.layer.0.intermediate.dense](args = (%encoder_layer_0_attention_output_layer_norm,), kwargs = {}) %gelu : [#users=1] = call_function[target=torch.nn.functional.gelu](args = (%encoder_layer_0_intermediate_dense,), kwargs = {}) %encoder_layer_0_output_dense : [#users=1] = call_module[target=encoder.layer.0.output.dense](args = (%gelu,), kwargs = {}) %encoder_layer_0_output_dropout : [#users=1] = call_module[target=encoder.layer.0.output.dropout](args = (%encoder_layer_0_output_dense,), kwargs = {}) %add_4 : [#users=1] = call_function[target=operator.add](args = (%encoder_layer_0_output_dropout, %encoder_layer_0_attention_output_layer_norm), kwargs = {}) %encoder_layer_0_output_layer_norm : [#users=4] = call_module[target=encoder.layer.0.output.LayerNorm](args = (%add_4,), kwargs = {}) %encoder_layer_1_attention_self_query : [#users=1] = call_module[target=encoder.layer.1.attention.self.query](args = (%encoder_layer_0_output_layer_norm,), kwargs = {}) %encoder_layer_1_attention_self_key : [#users=1] = call_module[target=encoder.layer.1.attention.self.key](args = (%encoder_layer_0_output_layer_norm,), kwargs = {}) %view_4 : [#users=1] = call_method[target=view](args = (%encoder_layer_1_attention_self_key, 1, 20, 12, 64), kwargs = {}) %permute_4 : [#users=1] = call_method[target=permute](args = (%view_4, 0, 2, 1, 3), kwargs = {}) %encoder_layer_1_attention_self_value : [#users=1] = call_module[target=encoder.layer.1.attention.self.value](args = (%encoder_layer_0_output_layer_norm,), kwargs = {}) %view_5 : [#users=1] = call_method[target=view](args = (%encoder_layer_1_attention_self_value, 1, 20, 12, 64), kwargs = {}) %permute_5 : [#users=1] = call_method[target=permute](args = (%view_5, 0, 2, 1, 3), kwargs = {}) %view_6 : [#users=1] = call_method[target=view](args = (%encoder_layer_1_attention_self_query, 1, 20, 12, 64), kwargs = {}) %permute_6 : [#users=1] = call_method[target=permute](args = (%view_6, 0, 2, 1, 3), kwargs = {}) %transpose_1 : [#users=1] = call_method[target=transpose](args = (%permute_4, -1, -2), kwargs = 
{}) %matmul_2 : [#users=1] = call_function[target=torch.matmul](args = (%permute_6, %transpose_1), kwargs = {}) %truediv_1 : [#users=1] = call_function[target=operator.truediv](args = (%matmul_2, 8.0), kwargs = {}) %_tensor_constant1 : [#users=1] = get_attr[target=_tensor_constant1] %add_5 : [#users=1] = call_function[target=operator.add](args = (%truediv_1, %_tensor_constant1), kwargs = {}) %softmax_1 : [#users=1] = call_function[target=torch.nn.functional.softmax](args = (%add_5,), kwargs = {dim: -1, _stacklevel: 3, dtype: None}) %encoder_layer_1_attention_self_dropout : [#users=1] = call_module[target=encoder.layer.1.attention.self.dropout](args = (%softmax_1,), kwargs = {}) %matmul_3 : [#users=1] = call_function[target=torch.matmul](args = (%encoder_layer_1_attention_self_dropout, %permute_5), kwargs = {}) %permute_7 : [#users=1] = call_method[target=permute](args = (%matmul_3, 0, 2, 1, 3), kwargs = {}) %contiguous_1 : [#users=1] = call_method[target=contiguous](args = (%permute_7,), kwargs = {}) %view_7 : [#users=1] = call_method[target=view](args = (%contiguous_1, 1, 20, 768), kwargs = {}) %encoder_layer_1_attention_output_dense : [#users=1] = call_module[target=encoder.layer.1.attention.output.dense](args = (%view_7,), kwargs = {}) %encoder_layer_1_attention_output_dropout : [#users=1] = call_module[target=encoder.layer.1.attention.output.dropout](args = (%encoder_layer_1_attention_output_dense,), kwargs = {}) %add_6 : [#users=1] = call_function[target=operator.add](args = (%encoder_layer_1_attention_output_dropout, %encoder_layer_0_output_layer_norm), kwargs = {}) %encoder_layer_1_attention_output_layer_norm : [#users=2] = call_module[target=encoder.layer.1.attention.output.LayerNorm](args = (%add_6,), kwargs = {}) %encoder_layer_1_intermediate_dense : [#users=1] = call_module[target=encoder.layer.1.intermediate.dense](args = (%encoder_layer_1_attention_output_layer_norm,), kwargs = {}) %gelu_1 : [#users=1] = call_function[target=torch.nn.functional.gelu](args = (%encoder_layer_1_intermediate_dense,), kwargs = {}) %encoder_layer_1_output_dense : [#users=1] = call_module[target=encoder.layer.1.output.dense](args = (%gelu_1,), kwargs = {}) %encoder_layer_1_output_dropout : [#users=1] = call_module[target=encoder.layer.1.output.dropout](args = (%encoder_layer_1_output_dense,), kwargs = {}) %add_7 : [#users=1] = call_function[target=operator.add](args = (%encoder_layer_1_output_dropout, %encoder_layer_1_attention_output_layer_norm), kwargs = {}) %encoder_layer_1_output_layer_norm : [#users=4] = call_module[target=encoder.layer.1.output.LayerNorm](args = (%add_7,), kwargs = {}) %encoder_layer_2_attention_self_query : [#users=1] = call_module[target=encoder.layer.2.attention.self.query](args = (%encoder_layer_1_output_layer_norm,), kwargs = {}) %encoder_layer_2_attention_self_key : [#users=1] = call_module[target=encoder.layer.2.attention.self.key](args = (%encoder_layer_1_output_layer_norm,), kwargs = {}) %view_8 : [#users=1] = call_method[target=view](args = (%encoder_layer_2_attention_self_key, 1, 20, 12, 64), kwargs = {}) %permute_8 : [#users=1] = call_method[target=permute](args = (%view_8, 0, 2, 1, 3), kwargs = {}) %encoder_layer_2_attention_self_value : [#users=1] = call_module[target=encoder.layer.2.attention.self.value](args = (%encoder_layer_1_output_layer_norm,), kwargs = {}) %view_9 : [#users=1] = call_method[target=view](args = (%encoder_layer_2_attention_self_value, 1, 20, 12, 64), kwargs = {}) %permute_9 : [#users=1] = call_method[target=permute](args = (%view_9, 0, 2, 1, 
3), kwargs = {}) %view_10 : [#users=1] = call_method[target=view](args = (%encoder_layer_2_attention_self_query, 1, 20, 12, 64), kwargs = {}) %permute_10 : [#users=1] = call_method[target=permute](args = (%view_10, 0, 2, 1, 3), kwargs = {}) %transpose_2 : [#users=1] = call_method[target=transpose](args = (%permute_8, -1, -2), kwargs = {}) %matmul_4 : [#users=1] = call_function[target=torch.matmul](args = (%permute_10, %transpose_2), kwargs = {}) %truediv_2 : [#users=1] = call_function[target=operator.truediv](args = (%matmul_4, 8.0), kwargs = {}) %_tensor_constant2 : [#users=1] = get_attr[target=_tensor_constant2] %add_8 : [#users=1] = call_function[target=operator.add](args = (%truediv_2, %_tensor_constant2), kwargs = {}) %softmax_2 : [#users=1] = call_function[target=torch.nn.functional.softmax](args = (%add_8,), kwargs = {dim: -1, _stacklevel: 3, dtype: None}) %encoder_layer_2_attention_self_dropout : [#users=1] = call_module[target=encoder.layer.2.attention.self.dropout](args = (%softmax_2,), kwargs = {}) %matmul_5 : [#users=1] = call_function[target=torch.matmul](args = (%encoder_layer_2_attention_self_dropout, %permute_9), kwargs = {}) %permute_11 : [#users=1] = call_method[target=permute](args = (%matmul_5, 0, 2, 1, 3), kwargs = {}) %contiguous_2 : [#users=1] = call_method[target=contiguous](args = (%permute_11,), kwargs = {}) %view_11 : [#users=1] = call_method[target=view](args = (%contiguous_2, 1, 20, 768), kwargs = {}) %encoder_layer_2_attention_output_dense : [#users=1] = call_module[target=encoder.layer.2.attention.output.dense](args = (%view_11,), kwargs = {}) %encoder_layer_2_attention_output_dropout : [#users=1] = call_module[target=encoder.layer.2.attention.output.dropout](args = (%encoder_layer_2_attention_output_dense,), kwargs = {}) %add_9 : [#users=1] = call_function[target=operator.add](args = (%encoder_layer_2_attention_output_dropout, %encoder_layer_1_output_layer_norm), kwargs = {}) %encoder_layer_2_attention_output_layer_norm : [#users=2] = call_module[target=encoder.layer.2.attention.output.LayerNorm](args = (%add_9,), kwargs = {}) %encoder_layer_2_intermediate_dense : [#users=1] = call_module[target=encoder.layer.2.intermediate.dense](args = (%encoder_layer_2_attention_output_layer_norm,), kwargs = {}) %gelu_2 : [#users=1] = call_function[target=torch.nn.functional.gelu](args = (%encoder_layer_2_intermediate_dense,), kwargs = {}) %encoder_layer_2_output_dense : [#users=1] = call_module[target=encoder.layer.2.output.dense](args = (%gelu_2,), kwargs = {}) %encoder_layer_2_output_dropout : [#users=1] = call_module[target=encoder.layer.2.output.dropout](args = (%encoder_layer_2_output_dense,), kwargs = {}) %add_10 : [#users=1] = call_function[target=operator.add](args = (%encoder_layer_2_output_dropout, %encoder_layer_2_attention_output_layer_norm), kwargs = {}) %encoder_layer_2_output_layer_norm : [#users=4] = call_module[target=encoder.layer.2.output.LayerNorm](args = (%add_10,), kwargs = {}) %encoder_layer_3_attention_self_query : [#users=1] = call_module[target=encoder.layer.3.attention.self.query](args = (%encoder_layer_2_output_layer_norm,), kwargs = {}) %encoder_layer_3_attention_self_key : [#users=1] = call_module[target=encoder.layer.3.attention.self.key](args = (%encoder_layer_2_output_layer_norm,), kwargs = {}) %view_12 : [#users=1] = call_method[target=view](args = (%encoder_layer_3_attention_self_key, 1, 20, 12, 64), kwargs = {}) %permute_12 : [#users=1] = call_method[target=permute](args = (%view_12, 0, 2, 1, 3), kwargs = {}) 
%encoder_layer_3_attention_self_value : [#users=1] = call_module[target=encoder.layer.3.attention.self.value](args = (%encoder_layer_2_output_layer_norm,), kwargs = {}) %view_13 : [#users=1] = call_method[target=view](args = (%encoder_layer_3_attention_self_value, 1, 20, 12, 64), kwargs = {}) %permute_13 : [#users=1] = call_method[target=permute](args = (%view_13, 0, 2, 1, 3), kwargs = {}) %view_14 : [#users=1] = call_method[target=view](args = (%encoder_layer_3_attention_self_query, 1, 20, 12, 64), kwargs = {}) %permute_14 : [#users=1] = call_method[target=permute](args = (%view_14, 0, 2, 1, 3), kwargs = {}) %transpose_3 : [#users=1] = call_method[target=transpose](args = (%permute_12, -1, -2), kwargs = {}) %matmul_6 : [#users=1] = call_function[target=torch.matmul](args = (%permute_14, %transpose_3), kwargs = {}) %truediv_3 : [#users=1] = call_function[target=operator.truediv](args = (%matmul_6, 8.0), kwargs = {}) %_tensor_constant3 : [#users=1] = get_attr[target=_tensor_constant3] %add_11 : [#users=1] = call_function[target=operator.add](args = (%truediv_3, %_tensor_constant3), kwargs = {}) %softmax_3 : [#users=1] = call_function[target=torch.nn.functional.softmax](args = (%add_11,), kwargs = {dim: -1, _stacklevel: 3, dtype: None}) %encoder_layer_3_attention_self_dropout : [#users=1] = call_module[target=encoder.layer.3.attention.self.dropout](args = (%softmax_3,), kwargs = {}) %matmul_7 : [#users=1] = call_function[target=torch.matmul](args = (%encoder_layer_3_attention_self_dropout, %permute_13), kwargs = {}) %permute_15 : [#users=1] = call_method[target=permute](args = (%matmul_7, 0, 2, 1, 3), kwargs = {}) %contiguous_3 : [#users=1] = call_method[target=contiguous](args = (%permute_15,), kwargs = {}) %view_15 : [#users=1] = call_method[target=view](args = (%contiguous_3, 1, 20, 768), kwargs = {}) %encoder_layer_3_attention_output_dense : [#users=1] = call_module[target=encoder.layer.3.attention.output.dense](args = (%view_15,), kwargs = {}) %encoder_layer_3_attention_output_dropout : [#users=1] = call_module[target=encoder.layer.3.attention.output.dropout](args = (%encoder_layer_3_attention_output_dense,), kwargs = {}) %add_12 : [#users=1] = call_function[target=operator.add](args = (%encoder_layer_3_attention_output_dropout, %encoder_layer_2_output_layer_norm), kwargs = {}) %encoder_layer_3_attention_output_layer_norm : [#users=2] = call_module[target=encoder.layer.3.attention.output.LayerNorm](args = (%add_12,), kwargs = {}) %encoder_layer_3_intermediate_dense : [#users=1] = call_module[target=encoder.layer.3.intermediate.dense](args = (%encoder_layer_3_attention_output_layer_norm,), kwargs = {}) %gelu_3 : [#users=1] = call_function[target=torch.nn.functional.gelu](args = (%encoder_layer_3_intermediate_dense,), kwargs = {}) %encoder_layer_3_output_dense : [#users=1] = call_module[target=encoder.layer.3.output.dense](args = (%gelu_3,), kwargs = {}) %encoder_layer_3_output_dropout : [#users=1] = call_module[target=encoder.layer.3.output.dropout](args = (%encoder_layer_3_output_dense,), kwargs = {}) %add_13 : [#users=1] = call_function[target=operator.add](args = (%encoder_layer_3_output_dropout, %encoder_layer_3_attention_output_layer_norm), kwargs = {}) %encoder_layer_3_output_layer_norm : [#users=4] = call_module[target=encoder.layer.3.output.LayerNorm](args = (%add_13,), kwargs = {}) %encoder_layer_4_attention_self_query : [#users=1] = call_module[target=encoder.layer.4.attention.self.query](args = (%encoder_layer_3_output_layer_norm,), kwargs = {}) 
%encoder_layer_4_attention_self_key : [#users=1] = call_module[target=encoder.layer.4.attention.self.key](args = (%encoder_layer_3_output_layer_norm,), kwargs = {}) %view_16 : [#users=1] = call_method[target=view](args = (%encoder_layer_4_attention_self_key, 1, 20, 12, 64), kwargs = {}) %permute_16 : [#users=1] = call_method[target=permute](args = (%view_16, 0, 2, 1, 3), kwargs = {}) %encoder_layer_4_attention_self_value : [#users=1] = call_module[target=encoder.layer.4.attention.self.value](args = (%encoder_layer_3_output_layer_norm,), kwargs = {}) %view_17 : [#users=1] = call_method[target=view](args = (%encoder_layer_4_attention_self_value, 1, 20, 12, 64), kwargs = {}) %permute_17 : [#users=1] = call_method[target=permute](args = (%view_17, 0, 2, 1, 3), kwargs = {}) %view_18 : [#users=1] = call_method[target=view](args = (%encoder_layer_4_attention_self_query, 1, 20, 12, 64), kwargs = {}) %permute_18 : [#users=1] = call_method[target=permute](args = (%view_18, 0, 2, 1, 3), kwargs = {}) %transpose_4 : [#users=1] = call_method[target=transpose](args = (%permute_16, -1, -2), kwargs = {}) %matmul_8 : [#users=1] = call_function[target=torch.matmul](args = (%permute_18, %transpose_4), kwargs = {}) %truediv_4 : [#users=1] = call_function[target=operator.truediv](args = (%matmul_8, 8.0), kwargs = {}) %_tensor_constant4 : [#users=1] = get_attr[target=_tensor_constant4] %add_14 : [#users=1] = call_function[target=operator.add](args = (%truediv_4, %_tensor_constant4), kwargs = {}) %softmax_4 : [#users=1] = call_function[target=torch.nn.functional.softmax](args = (%add_14,), kwargs = {dim: -1, _stacklevel: 3, dtype: None}) %encoder_layer_4_attention_self_dropout : [#users=1] = call_module[target=encoder.layer.4.attention.self.dropout](args = (%softmax_4,), kwargs = {}) %matmul_9 : [#users=1] = call_function[target=torch.matmul](args = (%encoder_layer_4_attention_self_dropout, %permute_17), kwargs = {}) %permute_19 : [#users=1] = call_method[target=permute](args = (%matmul_9, 0, 2, 1, 3), kwargs = {}) %contiguous_4 : [#users=1] = call_method[target=contiguous](args = (%permute_19,), kwargs = {}) %view_19 : [#users=1] = call_method[target=view](args = (%contiguous_4, 1, 20, 768), kwargs = {}) %encoder_layer_4_attention_output_dense : [#users=1] = call_module[target=encoder.layer.4.attention.output.dense](args = (%view_19,), kwargs = {}) %encoder_layer_4_attention_output_dropout : [#users=1] = call_module[target=encoder.layer.4.attention.output.dropout](args = (%encoder_layer_4_attention_output_dense,), kwargs = {}) %add_15 : [#users=1] = call_function[target=operator.add](args = (%encoder_layer_4_attention_output_dropout, %encoder_layer_3_output_layer_norm), kwargs = {}) %encoder_layer_4_attention_output_layer_norm : [#users=2] = call_module[target=encoder.layer.4.attention.output.LayerNorm](args = (%add_15,), kwargs = {}) %encoder_layer_4_intermediate_dense : [#users=1] = call_module[target=encoder.layer.4.intermediate.dense](args = (%encoder_layer_4_attention_output_layer_norm,), kwargs = {}) %gelu_4 : [#users=1] = call_function[target=torch.nn.functional.gelu](args = (%encoder_layer_4_intermediate_dense,), kwargs = {}) %encoder_layer_4_output_dense : [#users=1] = call_module[target=encoder.layer.4.output.dense](args = (%gelu_4,), kwargs = {}) %encoder_layer_4_output_dropout : [#users=1] = call_module[target=encoder.layer.4.output.dropout](args = (%encoder_layer_4_output_dense,), kwargs = {}) %add_16 : [#users=1] = call_function[target=operator.add](args = (%encoder_layer_4_output_dropout, 
%encoder_layer_4_attention_output_layer_norm), kwargs = {}) %encoder_layer_4_output_layer_norm : [#users=4] = call_module[target=encoder.layer.4.output.LayerNorm](args = (%add_16,), kwargs = {}) %encoder_layer_5_attention_self_query : [#users=1] = call_module[target=encoder.layer.5.attention.self.query](args = (%encoder_layer_4_output_layer_norm,), kwargs = {}) %encoder_layer_5_attention_self_key : [#users=1] = call_module[target=encoder.layer.5.attention.self.key](args = (%encoder_layer_4_output_layer_norm,), kwargs = {}) %view_20 : [#users=1] = call_method[target=view](args = (%encoder_layer_5_attention_self_key, 1, 20, 12, 64), kwargs = {}) %permute_20 : [#users=1] = call_method[target=permute](args = (%view_20, 0, 2, 1, 3), kwargs = {}) %encoder_layer_5_attention_self_value : [#users=1] = call_module[target=encoder.layer.5.attention.self.value](args = (%encoder_layer_4_output_layer_norm,), kwargs = {}) %view_21 : [#users=1] = call_method[target=view](args = (%encoder_layer_5_attention_self_value, 1, 20, 12, 64), kwargs = {}) %permute_21 : [#users=1] = call_method[target=permute](args = (%view_21, 0, 2, 1, 3), kwargs = {}) %view_22 : [#users=1] = call_method[target=view](args = (%encoder_layer_5_attention_self_query, 1, 20, 12, 64), kwargs = {}) %permute_22 : [#users=1] = call_method[target=permute](args = (%view_22, 0, 2, 1, 3), kwargs = {}) %transpose_5 : [#users=1] = call_method[target=transpose](args = (%permute_20, -1, -2), kwargs = {}) %matmul_10 : [#users=1] = call_function[target=torch.matmul](args = (%permute_22, %transpose_5), kwargs = {}) %truediv_5 : [#users=1] = call_function[target=operator.truediv](args = (%matmul_10, 8.0), kwargs = {}) %_tensor_constant5 : [#users=1] = get_attr[target=_tensor_constant5] %add_17 : [#users=1] = call_function[target=operator.add](args = (%truediv_5, %_tensor_constant5), kwargs = {}) %softmax_5 : [#users=1] = call_function[target=torch.nn.functional.softmax](args = (%add_17,), kwargs = {dim: -1, _stacklevel: 3, dtype: None}) %encoder_layer_5_attention_self_dropout : [#users=1] = call_module[target=encoder.layer.5.attention.self.dropout](args = (%softmax_5,), kwargs = {}) %matmul_11 : [#users=1] = call_function[target=torch.matmul](args = (%encoder_layer_5_attention_self_dropout, %permute_21), kwargs = {}) %permute_23 : [#users=1] = call_method[target=permute](args = (%matmul_11, 0, 2, 1, 3), kwargs = {}) %contiguous_5 : [#users=1] = call_method[target=contiguous](args = (%permute_23,), kwargs = {}) %view_23 : [#users=1] = call_method[target=view](args = (%contiguous_5, 1, 20, 768), kwargs = {}) %encoder_layer_5_attention_output_dense : [#users=1] = call_module[target=encoder.layer.5.attention.output.dense](args = (%view_23,), kwargs = {}) %encoder_layer_5_attention_output_dropout : [#users=1] = call_module[target=encoder.layer.5.attention.output.dropout](args = (%encoder_layer_5_attention_output_dense,), kwargs = {}) %add_18 : [#users=1] = call_function[target=operator.add](args = (%encoder_layer_5_attention_output_dropout, %encoder_layer_4_output_layer_norm), kwargs = {}) %encoder_layer_5_attention_output_layer_norm : [#users=2] = call_module[target=encoder.layer.5.attention.output.LayerNorm](args = (%add_18,), kwargs = {}) %encoder_layer_5_intermediate_dense : [#users=1] = call_module[target=encoder.layer.5.intermediate.dense](args = (%encoder_layer_5_attention_output_layer_norm,), kwargs = {}) %gelu_5 : [#users=1] = call_function[target=torch.nn.functional.gelu](args = (%encoder_layer_5_intermediate_dense,), kwargs = {}) 
%encoder_layer_5_output_dense : [#users=1] = call_module[target=encoder.layer.5.output.dense](args = (%gelu_5,), kwargs = {}) %encoder_layer_5_output_dropout : [#users=1] = call_module[target=encoder.layer.5.output.dropout](args = (%encoder_layer_5_output_dense,), kwargs = {}) %add_19 : [#users=1] = call_function[target=operator.add](args = (%encoder_layer_5_output_dropout, %encoder_layer_5_attention_output_layer_norm), kwargs = {}) %encoder_layer_5_output_layer_norm : [#users=4] = call_module[target=encoder.layer.5.output.LayerNorm](args = (%add_19,), kwargs = {}) %encoder_layer_6_attention_self_query : [#users=1] = call_module[target=encoder.layer.6.attention.self.query](args = (%encoder_layer_5_output_layer_norm,), kwargs = {}) %encoder_layer_6_attention_self_key : [#users=1] = call_module[target=encoder.layer.6.attention.self.key](args = (%encoder_layer_5_output_layer_norm,), kwargs = {}) %view_24 : [#users=1] = call_method[target=view](args = (%encoder_layer_6_attention_self_key, 1, 20, 12, 64), kwargs = {}) %permute_24 : [#users=1] = call_method[target=permute](args = (%view_24, 0, 2, 1, 3), kwargs = {}) %encoder_layer_6_attention_self_value : [#users=1] = call_module[target=encoder.layer.6.attention.self.value](args = (%encoder_layer_5_output_layer_norm,), kwargs = {}) %view_25 : [#users=1] = call_method[target=view](args = (%encoder_layer_6_attention_self_value, 1, 20, 12, 64), kwargs = {}) %permute_25 : [#users=1] = call_method[target=permute](args = (%view_25, 0, 2, 1, 3), kwargs = {}) %view_26 : [#users=1] = call_method[target=view](args = (%encoder_layer_6_attention_self_query, 1, 20, 12, 64), kwargs = {}) %permute_26 : [#users=1] = call_method[target=permute](args = (%view_26, 0, 2, 1, 3), kwargs = {}) %transpose_6 : [#users=1] = call_method[target=transpose](args = (%permute_24, -1, -2), kwargs = {}) %matmul_12 : [#users=1] = call_function[target=torch.matmul](args = (%permute_26, %transpose_6), kwargs = {}) %truediv_6 : [#users=1] = call_function[target=operator.truediv](args = (%matmul_12, 8.0), kwargs = {}) %_tensor_constant6 : [#users=1] = get_attr[target=_tensor_constant6] %add_20 : [#users=1] = call_function[target=operator.add](args = (%truediv_6, %_tensor_constant6), kwargs = {}) %softmax_6 : [#users=1] = call_function[target=torch.nn.functional.softmax](args = (%add_20,), kwargs = {dim: -1, _stacklevel: 3, dtype: None}) %encoder_layer_6_attention_self_dropout : [#users=1] = call_module[target=encoder.layer.6.attention.self.dropout](args = (%softmax_6,), kwargs = {}) %matmul_13 : [#users=1] = call_function[target=torch.matmul](args = (%encoder_layer_6_attention_self_dropout, %permute_25), kwargs = {}) %permute_27 : [#users=1] = call_method[target=permute](args = (%matmul_13, 0, 2, 1, 3), kwargs = {}) %contiguous_6 : [#users=1] = call_method[target=contiguous](args = (%permute_27,), kwargs = {}) %view_27 : [#users=1] = call_method[target=view](args = (%contiguous_6, 1, 20, 768), kwargs = {}) %encoder_layer_6_attention_output_dense : [#users=1] = call_module[target=encoder.layer.6.attention.output.dense](args = (%view_27,), kwargs = {}) %encoder_layer_6_attention_output_dropout : [#users=1] = call_module[target=encoder.layer.6.attention.output.dropout](args = (%encoder_layer_6_attention_output_dense,), kwargs = {}) %add_21 : [#users=1] = call_function[target=operator.add](args = (%encoder_layer_6_attention_output_dropout, %encoder_layer_5_output_layer_norm), kwargs = {}) %encoder_layer_6_attention_output_layer_norm : [#users=2] = 
call_module[target=encoder.layer.6.attention.output.LayerNorm](args = (%add_21,), kwargs = {}) %encoder_layer_6_intermediate_dense : [#users=1] = call_module[target=encoder.layer.6.intermediate.dense](args = (%encoder_layer_6_attention_output_layer_norm,), kwargs = {}) %gelu_6 : [#users=1] = call_function[target=torch.nn.functional.gelu](args = (%encoder_layer_6_intermediate_dense,), kwargs = {}) %encoder_layer_6_output_dense : [#users=1] = call_module[target=encoder.layer.6.output.dense](args = (%gelu_6,), kwargs = {}) %encoder_layer_6_output_dropout : [#users=1] = call_module[target=encoder.layer.6.output.dropout](args = (%encoder_layer_6_output_dense,), kwargs = {}) %add_22 : [#users=1] = call_function[target=operator.add](args = (%encoder_layer_6_output_dropout, %encoder_layer_6_attention_output_layer_norm), kwargs = {}) %encoder_layer_6_output_layer_norm : [#users=4] = call_module[target=encoder.layer.6.output.LayerNorm](args = (%add_22,), kwargs = {}) %encoder_layer_7_attention_self_query : [#users=1] = call_module[target=encoder.layer.7.attention.self.query](args = (%encoder_layer_6_output_layer_norm,), kwargs = {}) %encoder_layer_7_attention_self_key : [#users=1] = call_module[target=encoder.layer.7.attention.self.key](args = (%encoder_layer_6_output_layer_norm,), kwargs = {}) %view_28 : [#users=1] = call_method[target=view](args = (%encoder_layer_7_attention_self_key, 1, 20, 12, 64), kwargs = {}) %permute_28 : [#users=1] = call_method[target=permute](args = (%view_28, 0, 2, 1, 3), kwargs = {}) %encoder_layer_7_attention_self_value : [#users=1] = call_module[target=encoder.layer.7.attention.self.value](args = (%encoder_layer_6_output_layer_norm,), kwargs = {}) %view_29 : [#users=1] = call_method[target=view](args = (%encoder_layer_7_attention_self_value, 1, 20, 12, 64), kwargs = {}) %permute_29 : [#users=1] = call_method[target=permute](args = (%view_29, 0, 2, 1, 3), kwargs = {}) %view_30 : [#users=1] = call_method[target=view](args = (%encoder_layer_7_attention_self_query, 1, 20, 12, 64), kwargs = {}) %permute_30 : [#users=1] = call_method[target=permute](args = (%view_30, 0, 2, 1, 3), kwargs = {}) %transpose_7 : [#users=1] = call_method[target=transpose](args = (%permute_28, -1, -2), kwargs = {}) %matmul_14 : [#users=1] = call_function[target=torch.matmul](args = (%permute_30, %transpose_7), kwargs = {}) %truediv_7 : [#users=1] = call_function[target=operator.truediv](args = (%matmul_14, 8.0), kwargs = {}) %_tensor_constant7 : [#users=1] = get_attr[target=_tensor_constant7] %add_23 : [#users=1] = call_function[target=operator.add](args = (%truediv_7, %_tensor_constant7), kwargs = {}) %softmax_7 : [#users=1] = call_function[target=torch.nn.functional.softmax](args = (%add_23,), kwargs = {dim: -1, _stacklevel: 3, dtype: None}) %encoder_layer_7_attention_self_dropout : [#users=1] = call_module[target=encoder.layer.7.attention.self.dropout](args = (%softmax_7,), kwargs = {}) %matmul_15 : [#users=1] = call_function[target=torch.matmul](args = (%encoder_layer_7_attention_self_dropout, %permute_29), kwargs = {}) %permute_31 : [#users=1] = call_method[target=permute](args = (%matmul_15, 0, 2, 1, 3), kwargs = {}) %contiguous_7 : [#users=1] = call_method[target=contiguous](args = (%permute_31,), kwargs = {}) %view_31 : [#users=1] = call_method[target=view](args = (%contiguous_7, 1, 20, 768), kwargs = {}) %encoder_layer_7_attention_output_dense : [#users=1] = call_module[target=encoder.layer.7.attention.output.dense](args = (%view_31,), kwargs = {}) %encoder_layer_7_attention_output_dropout 
: [#users=1] = call_module[target=encoder.layer.7.attention.output.dropout](args = (%encoder_layer_7_attention_output_dense,), kwargs = {}) %add_24 : [#users=1] = call_function[target=operator.add](args = (%encoder_layer_7_attention_output_dropout, %encoder_layer_6_output_layer_norm), kwargs = {}) %encoder_layer_7_attention_output_layer_norm : [#users=2] = call_module[target=encoder.layer.7.attention.output.LayerNorm](args = (%add_24,), kwargs = {}) %encoder_layer_7_intermediate_dense : [#users=1] = call_module[target=encoder.layer.7.intermediate.dense](args = (%encoder_layer_7_attention_output_layer_norm,), kwargs = {}) %gelu_7 : [#users=1] = call_function[target=torch.nn.functional.gelu](args = (%encoder_layer_7_intermediate_dense,), kwargs = {}) %encoder_layer_7_output_dense : [#users=1] = call_module[target=encoder.layer.7.output.dense](args = (%gelu_7,), kwargs = {}) %encoder_layer_7_output_dropout : [#users=1] = call_module[target=encoder.layer.7.output.dropout](args = (%encoder_layer_7_output_dense,), kwargs = {}) %add_25 : [#users=1] = call_function[target=operator.add](args = (%encoder_layer_7_output_dropout, %encoder_layer_7_attention_output_layer_norm), kwargs = {}) %encoder_layer_7_output_layer_norm : [#users=4] = call_module[target=encoder.layer.7.output.LayerNorm](args = (%add_25,), kwargs = {}) %encoder_layer_8_attention_self_query : [#users=1] = call_module[target=encoder.layer.8.attention.self.query](args = (%encoder_layer_7_output_layer_norm,), kwargs = {}) %encoder_layer_8_attention_self_key : [#users=1] = call_module[target=encoder.layer.8.attention.self.key](args = (%encoder_layer_7_output_layer_norm,), kwargs = {}) %view_32 : [#users=1] = call_method[target=view](args = (%encoder_layer_8_attention_self_key, 1, 20, 12, 64), kwargs = {}) %permute_32 : [#users=1] = call_method[target=permute](args = (%view_32, 0, 2, 1, 3), kwargs = {}) %encoder_layer_8_attention_self_value : [#users=1] = call_module[target=encoder.layer.8.attention.self.value](args = (%encoder_layer_7_output_layer_norm,), kwargs = {}) %view_33 : [#users=1] = call_method[target=view](args = (%encoder_layer_8_attention_self_value, 1, 20, 12, 64), kwargs = {}) %permute_33 : [#users=1] = call_method[target=permute](args = (%view_33, 0, 2, 1, 3), kwargs = {}) %view_34 : [#users=1] = call_method[target=view](args = (%encoder_layer_8_attention_self_query, 1, 20, 12, 64), kwargs = {}) %permute_34 : [#users=1] = call_method[target=permute](args = (%view_34, 0, 2, 1, 3), kwargs = {}) %transpose_8 : [#users=1] = call_method[target=transpose](args = (%permute_32, -1, -2), kwargs = {}) %matmul_16 : [#users=1] = call_function[target=torch.matmul](args = (%permute_34, %transpose_8), kwargs = {}) %truediv_8 : [#users=1] = call_function[target=operator.truediv](args = (%matmul_16, 8.0), kwargs = {}) %_tensor_constant8 : [#users=1] = get_attr[target=_tensor_constant8] %add_26 : [#users=1] = call_function[target=operator.add](args = (%truediv_8, %_tensor_constant8), kwargs = {}) %softmax_8 : [#users=1] = call_function[target=torch.nn.functional.softmax](args = (%add_26,), kwargs = {dim: -1, _stacklevel: 3, dtype: None}) %encoder_layer_8_attention_self_dropout : [#users=1] = call_module[target=encoder.layer.8.attention.self.dropout](args = (%softmax_8,), kwargs = {}) %matmul_17 : [#users=1] = call_function[target=torch.matmul](args = (%encoder_layer_8_attention_self_dropout, %permute_33), kwargs = {}) %permute_35 : [#users=1] = call_method[target=permute](args = (%matmul_17, 0, 2, 1, 3), kwargs = {}) %contiguous_8 : 
[#users=1] = call_method[target=contiguous](args = (%permute_35,), kwargs = {}) %view_35 : [#users=1] = call_method[target=view](args = (%contiguous_8, 1, 20, 768), kwargs = {}) %encoder_layer_8_attention_output_dense : [#users=1] = call_module[target=encoder.layer.8.attention.output.dense](args = (%view_35,), kwargs = {}) %encoder_layer_8_attention_output_dropout : [#users=1] = call_module[target=encoder.layer.8.attention.output.dropout](args = (%encoder_layer_8_attention_output_dense,), kwargs = {}) %add_27 : [#users=1] = call_function[target=operator.add](args = (%encoder_layer_8_attention_output_dropout, %encoder_layer_7_output_layer_norm), kwargs = {}) %encoder_layer_8_attention_output_layer_norm : [#users=2] = call_module[target=encoder.layer.8.attention.output.LayerNorm](args = (%add_27,), kwargs = {}) %encoder_layer_8_intermediate_dense : [#users=1] = call_module[target=encoder.layer.8.intermediate.dense](args = (%encoder_layer_8_attention_output_layer_norm,), kwargs = {}) %gelu_8 : [#users=1] = call_function[target=torch.nn.functional.gelu](args = (%encoder_layer_8_intermediate_dense,), kwargs = {}) %encoder_layer_8_output_dense : [#users=1] = call_module[target=encoder.layer.8.output.dense](args = (%gelu_8,), kwargs = {}) %encoder_layer_8_output_dropout : [#users=1] = call_module[target=encoder.layer.8.output.dropout](args = (%encoder_layer_8_output_dense,), kwargs = {}) %add_28 : [#users=1] = call_function[target=operator.add](args = (%encoder_layer_8_output_dropout, %encoder_layer_8_attention_output_layer_norm), kwargs = {}) %encoder_layer_8_output_layer_norm : [#users=4] = call_module[target=encoder.layer.8.output.LayerNorm](args = (%add_28,), kwargs = {}) %encoder_layer_9_attention_self_query : [#users=1] = call_module[target=encoder.layer.9.attention.self.query](args = (%encoder_layer_8_output_layer_norm,), kwargs = {}) %encoder_layer_9_attention_self_key : [#users=1] = call_module[target=encoder.layer.9.attention.self.key](args = (%encoder_layer_8_output_layer_norm,), kwargs = {}) %view_36 : [#users=1] = call_method[target=view](args = (%encoder_layer_9_attention_self_key, 1, 20, 12, 64), kwargs = {}) %permute_36 : [#users=1] = call_method[target=permute](args = (%view_36, 0, 2, 1, 3), kwargs = {}) %encoder_layer_9_attention_self_value : [#users=1] = call_module[target=encoder.layer.9.attention.self.value](args = (%encoder_layer_8_output_layer_norm,), kwargs = {}) %view_37 : [#users=1] = call_method[target=view](args = (%encoder_layer_9_attention_self_value, 1, 20, 12, 64), kwargs = {}) %permute_37 : [#users=1] = call_method[target=permute](args = (%view_37, 0, 2, 1, 3), kwargs = {}) %view_38 : [#users=1] = call_method[target=view](args = (%encoder_layer_9_attention_self_query, 1, 20, 12, 64), kwargs = {}) %permute_38 : [#users=1] = call_method[target=permute](args = (%view_38, 0, 2, 1, 3), kwargs = {}) %transpose_9 : [#users=1] = call_method[target=transpose](args = (%permute_36, -1, -2), kwargs = {}) %matmul_18 : [#users=1] = call_function[target=torch.matmul](args = (%permute_38, %transpose_9), kwargs = {}) %truediv_9 : [#users=1] = call_function[target=operator.truediv](args = (%matmul_18, 8.0), kwargs = {}) %_tensor_constant9 : [#users=1] = get_attr[target=_tensor_constant9] %add_29 : [#users=1] = call_function[target=operator.add](args = (%truediv_9, %_tensor_constant9), kwargs = {}) %softmax_9 : [#users=1] = call_function[target=torch.nn.functional.softmax](args = (%add_29,), kwargs = {dim: -1, _stacklevel: 3, dtype: None}) %encoder_layer_9_attention_self_dropout : 
[#users=1] = call_module[target=encoder.layer.9.attention.self.dropout](args = (%softmax_9,), kwargs = {}) %matmul_19 : [#users=1] = call_function[target=torch.matmul](args = (%encoder_layer_9_attention_self_dropout, %permute_37), kwargs = {}) %permute_39 : [#users=1] = call_method[target=permute](args = (%matmul_19, 0, 2, 1, 3), kwargs = {}) %contiguous_9 : [#users=1] = call_method[target=contiguous](args = (%permute_39,), kwargs = {}) %view_39 : [#users=1] = call_method[target=view](args = (%contiguous_9, 1, 20, 768), kwargs = {}) %encoder_layer_9_attention_output_dense : [#users=1] = call_module[target=encoder.layer.9.attention.output.dense](args = (%view_39,), kwargs = {}) %encoder_layer_9_attention_output_dropout : [#users=1] = call_module[target=encoder.layer.9.attention.output.dropout](args = (%encoder_layer_9_attention_output_dense,), kwargs = {}) %add_30 : [#users=1] = call_function[target=operator.add](args = (%encoder_layer_9_attention_output_dropout, %encoder_layer_8_output_layer_norm), kwargs = {}) %encoder_layer_9_attention_output_layer_norm : [#users=2] = call_module[target=encoder.layer.9.attention.output.LayerNorm](args = (%add_30,), kwargs = {}) %encoder_layer_9_intermediate_dense : [#users=1] = call_module[target=encoder.layer.9.intermediate.dense](args = (%encoder_layer_9_attention_output_layer_norm,), kwargs = {}) %gelu_9 : [#users=1] = call_function[target=torch.nn.functional.gelu](args = (%encoder_layer_9_intermediate_dense,), kwargs = {}) %encoder_layer_9_output_dense : [#users=1] = call_module[target=encoder.layer.9.output.dense](args = (%gelu_9,), kwargs = {}) %encoder_layer_9_output_dropout : [#users=1] = call_module[target=encoder.layer.9.output.dropout](args = (%encoder_layer_9_output_dense,), kwargs = {}) %add_31 : [#users=1] = call_function[target=operator.add](args = (%encoder_layer_9_output_dropout, %encoder_layer_9_attention_output_layer_norm), kwargs = {}) %encoder_layer_9_output_layer_norm : [#users=4] = call_module[target=encoder.layer.9.output.LayerNorm](args = (%add_31,), kwargs = {}) %encoder_layer_10_attention_self_query : [#users=1] = call_module[target=encoder.layer.10.attention.self.query](args = (%encoder_layer_9_output_layer_norm,), kwargs = {}) %encoder_layer_10_attention_self_key : [#users=1] = call_module[target=encoder.layer.10.attention.self.key](args = (%encoder_layer_9_output_layer_norm,), kwargs = {}) %view_40 : [#users=1] = call_method[target=view](args = (%encoder_layer_10_attention_self_key, 1, 20, 12, 64), kwargs = {}) %permute_40 : [#users=1] = call_method[target=permute](args = (%view_40, 0, 2, 1, 3), kwargs = {}) %encoder_layer_10_attention_self_value : [#users=1] = call_module[target=encoder.layer.10.attention.self.value](args = (%encoder_layer_9_output_layer_norm,), kwargs = {}) %view_41 : [#users=1] = call_method[target=view](args = (%encoder_layer_10_attention_self_value, 1, 20, 12, 64), kwargs = {}) %permute_41 : [#users=1] = call_method[target=permute](args = (%view_41, 0, 2, 1, 3), kwargs = {}) %view_42 : [#users=1] = call_method[target=view](args = (%encoder_layer_10_attention_self_query, 1, 20, 12, 64), kwargs = {}) %permute_42 : [#users=1] = call_method[target=permute](args = (%view_42, 0, 2, 1, 3), kwargs = {}) %transpose_10 : [#users=1] = call_method[target=transpose](args = (%permute_40, -1, -2), kwargs = {}) %matmul_20 : [#users=1] = call_function[target=torch.matmul](args = (%permute_42, %transpose_10), kwargs = {}) %truediv_10 : [#users=1] = call_function[target=operator.truediv](args = (%matmul_20, 8.0), kwargs = 
{}) %_tensor_constant10 : [#users=1] = get_attr[target=_tensor_constant10] %add_32 : [#users=1] = call_function[target=operator.add](args = (%truediv_10, %_tensor_constant10), kwargs = {}) %softmax_10 : [#users=1] = call_function[target=torch.nn.functional.softmax](args = (%add_32,), kwargs = {dim: -1, _stacklevel: 3, dtype: None}) %encoder_layer_10_attention_self_dropout : [#users=1] = call_module[target=encoder.layer.10.attention.self.dropout](args = (%softmax_10,), kwargs = {}) %matmul_21 : [#users=1] = call_function[target=torch.matmul](args = (%encoder_layer_10_attention_self_dropout, %permute_41), kwargs = {}) %permute_43 : [#users=1] = call_method[target=permute](args = (%matmul_21, 0, 2, 1, 3), kwargs = {}) %contiguous_10 : [#users=1] = call_method[target=contiguous](args = (%permute_43,), kwargs = {}) %view_43 : [#users=1] = call_method[target=view](args = (%contiguous_10, 1, 20, 768), kwargs = {}) %encoder_layer_10_attention_output_dense : [#users=1] = call_module[target=encoder.layer.10.attention.output.dense](args = (%view_43,), kwargs = {}) %encoder_layer_10_attention_output_dropout : [#users=1] = call_module[target=encoder.layer.10.attention.output.dropout](args = (%encoder_layer_10_attention_output_dense,), kwargs = {}) %add_33 : [#users=1] = call_function[target=operator.add](args = (%encoder_layer_10_attention_output_dropout, %encoder_layer_9_output_layer_norm), kwargs = {}) %encoder_layer_10_attention_output_layer_norm : [#users=2] = call_module[target=encoder.layer.10.attention.output.LayerNorm](args = (%add_33,), kwargs = {}) %encoder_layer_10_intermediate_dense : [#users=1] = call_module[target=encoder.layer.10.intermediate.dense](args = (%encoder_layer_10_attention_output_layer_norm,), kwargs = {}) %gelu_10 : [#users=1] = call_function[target=torch.nn.functional.gelu](args = (%encoder_layer_10_intermediate_dense,), kwargs = {}) %encoder_layer_10_output_dense : [#users=1] = call_module[target=encoder.layer.10.output.dense](args = (%gelu_10,), kwargs = {}) %encoder_layer_10_output_dropout : [#users=1] = call_module[target=encoder.layer.10.output.dropout](args = (%encoder_layer_10_output_dense,), kwargs = {}) %add_34 : [#users=1] = call_function[target=operator.add](args = (%encoder_layer_10_output_dropout, %encoder_layer_10_attention_output_layer_norm), kwargs = {}) %encoder_layer_10_output_layer_norm : [#users=4] = call_module[target=encoder.layer.10.output.LayerNorm](args = (%add_34,), kwargs = {}) %encoder_layer_11_attention_self_query : [#users=1] = call_module[target=encoder.layer.11.attention.self.query](args = (%encoder_layer_10_output_layer_norm,), kwargs = {}) %encoder_layer_11_attention_self_key : [#users=1] = call_module[target=encoder.layer.11.attention.self.key](args = (%encoder_layer_10_output_layer_norm,), kwargs = {}) %view_44 : [#users=1] = call_method[target=view](args = (%encoder_layer_11_attention_self_key, 1, 20, 12, 64), kwargs = {}) %permute_44 : [#users=1] = call_method[target=permute](args = (%view_44, 0, 2, 1, 3), kwargs = {}) %encoder_layer_11_attention_self_value : [#users=1] = call_module[target=encoder.layer.11.attention.self.value](args = (%encoder_layer_10_output_layer_norm,), kwargs = {}) %view_45 : [#users=1] = call_method[target=view](args = (%encoder_layer_11_attention_self_value, 1, 20, 12, 64), kwargs = {}) %permute_45 : [#users=1] = call_method[target=permute](args = (%view_45, 0, 2, 1, 3), kwargs = {}) %view_46 : [#users=1] = call_method[target=view](args = (%encoder_layer_11_attention_self_query, 1, 20, 12, 64), kwargs = {}) 
%permute_46 : [#users=1] = call_method[target=permute](args = (%view_46, 0, 2, 1, 3), kwargs = {}) %transpose_11 : [#users=1] = call_method[target=transpose](args = (%permute_44, -1, -2), kwargs = {}) %matmul_22 : [#users=1] = call_function[target=torch.matmul](args = (%permute_46, %transpose_11), kwargs = {}) %truediv_11 : [#users=1] = call_function[target=operator.truediv](args = (%matmul_22, 8.0), kwargs = {}) %_tensor_constant11 : [#users=1] = get_attr[target=_tensor_constant11] %add_35 : [#users=1] = call_function[target=operator.add](args = (%truediv_11, %_tensor_constant11), kwargs = {}) %softmax_11 : [#users=1] = call_function[target=torch.nn.functional.softmax](args = (%add_35,), kwargs = {dim: -1, _stacklevel: 3, dtype: None}) %encoder_layer_11_attention_self_dropout : [#users=1] = call_module[target=encoder.layer.11.attention.self.dropout](args = (%softmax_11,), kwargs = {}) %matmul_23 : [#users=1] = call_function[target=torch.matmul](args = (%encoder_layer_11_attention_self_dropout, %permute_45), kwargs = {}) %permute_47 : [#users=1] = call_method[target=permute](args = (%matmul_23, 0, 2, 1, 3), kwargs = {}) %contiguous_11 : [#users=1] = call_method[target=contiguous](args = (%permute_47,), kwargs = {}) %view_47 : [#users=1] = call_method[target=view](args = (%contiguous_11, 1, 20, 768), kwargs = {}) %encoder_layer_11_attention_output_dense : [#users=1] = call_module[target=encoder.layer.11.attention.output.dense](args = (%view_47,), kwargs = {}) %encoder_layer_11_attention_output_dropout : [#users=1] = call_module[target=encoder.layer.11.attention.output.dropout](args = (%encoder_layer_11_attention_output_dense,), kwargs = {}) %add_36 : [#users=1] = call_function[target=operator.add](args = (%encoder_layer_11_attention_output_dropout, %encoder_layer_10_output_layer_norm), kwargs = {}) %encoder_layer_11_attention_output_layer_norm : [#users=2] = call_module[target=encoder.layer.11.attention.output.LayerNorm](args = (%add_36,), kwargs = {}) %encoder_layer_11_intermediate_dense : [#users=1] = call_module[target=encoder.layer.11.intermediate.dense](args = (%encoder_layer_11_attention_output_layer_norm,), kwargs = {}) %gelu_11 : [#users=1] = call_function[target=torch.nn.functional.gelu](args = (%encoder_layer_11_intermediate_dense,), kwargs = {}) %encoder_layer_11_output_dense : [#users=1] = call_module[target=encoder.layer.11.output.dense](args = (%gelu_11,), kwargs = {}) %encoder_layer_11_output_dropout : [#users=1] = call_module[target=encoder.layer.11.output.dropout](args = (%encoder_layer_11_output_dense,), kwargs = {}) %add_37 : [#users=1] = call_function[target=operator.add](args = (%encoder_layer_11_output_dropout, %encoder_layer_11_attention_output_layer_norm), kwargs = {}) %encoder_layer_11_output_layer_norm : [#users=2] = call_module[target=encoder.layer.11.output.LayerNorm](args = (%add_37,), kwargs = {}) %getitem_2 : [#users=1] = call_function[target=operator.getitem](args = (%encoder_layer_11_output_layer_norm, (slice(None, None, None), 0)), kwargs = {}) %pooler_dense : [#users=1] = call_module[target=pooler.dense](args = (%getitem_2,), kwargs = {}) %pooler_activation : [#users=1] = call_module[target=pooler.activation](args = (%pooler_dense,), kwargs = {}) return {'last_hidden_state': encoder_layer_11_output_layer_norm, 'pooler_output': pooler_activation} ```<|||||>Thank you! So it is a version problem. 
For the record, with the following environment:

- transformers version: 4.16.0.dev0
- Python version: 3.9.2
- PyTorch version (GPU?): 1.10.1+cpu

the code below does yield the error I reported:

```
from transformers import BertModel
from transformers.utils.fx import symbolic_trace
import transformers

model = BertModel.from_pretrained("bert-base-uncased")

transformers.utils.fx.is_torch_fx_available = lambda: True

traced_model = symbolic_trace(
    model,
    input_names=["input_ids", "attention_mask", "token_type_ids"],
    batch_size=1,
    sequence_length=128,
)
```

I will downgrade PyTorch and transformers, thank you.<|||||>Hi @fxmarty, I am working on a PR (#14321) that should allow you to use both the PyTorch 1.9 and 1.10 versions if everything goes well. It should be merged by the end of the week.<|||||>It's all good, thanks a lot!
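Once `symbolic_trace` succeeds (i.e. with a PyTorch release supported by the installed transformers version), the resulting `GraphModule` can be called like the original model. The snippet below is only an illustrative sketch, not output from this issue: it reuses the static-shape call from the thread, and the dummy tensors and printed shape are assumptions.

```python
import torch
from transformers import BertModel
from transformers.utils.fx import symbolic_trace

model = BertModel.from_pretrained("bert-base-uncased")

# Same static-shape call as in the thread above (pre-#14321 API).
traced_model = symbolic_trace(
    model,
    input_names=["input_ids", "attention_mask", "token_type_ids"],
    batch_size=1,
    sequence_length=128,
)

# Illustrative dummy inputs; shapes must match the tracing shapes.
dummy_inputs = {
    "input_ids": torch.zeros(1, 128, dtype=torch.long),
    "attention_mask": torch.ones(1, 128, dtype=torch.long),
    "token_type_ids": torch.zeros(1, 128, dtype=torch.long),
}

with torch.no_grad():
    outputs = traced_model(**dummy_inputs)

print(outputs["last_hidden_state"].shape)  # e.g. torch.Size([1, 128, 768])
print(traced_model.graph)  # prints the node listing, as in the dump above
```

Note that with this static-shape API, the runtime input shapes have to match the `batch_size` and `sequence_length` passed at trace time.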
transformers
15,044
closed
[Fix doc examples] missing from_pretrained
# What does this PR do?

In a docstring,

```
model = SegformerModel("nvidia/segformer-b0-finetuned-ade-512-512")
```

fails -> should use `from_pretrained`. This PR fixes it.

## Who can review?

@NielsRogge
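For illustration, a minimal sketch of the corrected usage, with the checkpoint name taken from the docstring snippet above:

```python
from transformers import SegformerModel

# Load the pretrained weights with from_pretrained rather than passing
# the checkpoint name to the constructor.
model = SegformerModel.from_pretrained("nvidia/segformer-b0-finetuned-ade-512-512")
```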
01-05-2022 15:51:26
01-05-2022 15:51:26