| Column | Type | Values / lengths |
|---|---|---|
| repo | string | 1 distinct value |
| number | int64 | 1 to 25.3k |
| state | string | 2 distinct values |
| title | string | length 1 to 487 |
| body | string | length 0 to 234k |
| created_at | string | length 19 |
| closed_at | string | length 19 |
| comments | string | length 0 to 293k |
transformers
14,438
closed
Update file_utils.py
# What does this PR do? Fixes # (issue) This PR is a simple suggestion to replace the ValueError with ConnectionError at https://github.com/huggingface/transformers/blob/1991da07f7fe1f2dca0bb49e964aa971beca5746/src/transformers/file_utils.py#L1715. The main reason for this modification is that when the downstream task tries to track or handle the exception, it gives an exception catch block improper signal to check the value error instead of connection error. e.g., if the connection error is detected correctly, we can retry in the exception handler. ## Before submitting - [x] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case). - [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section? - [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link to it if that's the case. - [ ] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/master/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/master/docs#writing-source-documentation). - [ ] Did you write any new necessary tests? ## Who can review? Anyone in the community is free to review the PR once the tests have passed. Feel free to tag members/contributors who may be interested in your PR. <!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @ If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**. Please tag fewer than 3 people. Models: - albert, bert, xlm: @LysandreJik - blenderbot, bart, marian, pegasus, encoderdecoder, t5: @patrickvonplaten, @patil-suraj - longformer, reformer, transfoxl, xlnet: @patrickvonplaten - fsmt: @stas00 - funnel: @sgugger - gpt2: @patrickvonplaten, @LysandreJik - rag: @patrickvonplaten, @lhoestq - tensorflow: @LysandreJik Library: - benchmarks: @patrickvonplaten - deepspeed: @stas00 - ray/raytune: @richardliaw, @amogkam - text generation: @patrickvonplaten - tokenizers: @n1t0, @LysandreJik - trainer: @sgugger - pipelines: @LysandreJik Documentation: @sgugger HF projects: - datasets: [different repo](https://github.com/huggingface/datasets) - rust tokenizers: [different repo](https://github.com/huggingface/tokenizers) Examples: - maintained examples (not research project or legacy): @sgugger, @patil-suraj - research_projects/bert-loses-patience: @JetRunner - research_projects/distillation: @VictorSanh -->
11-17-2021 23:37:28
11-17-2021 23:37:28
Hi, sorry for the late reply, I tried to fix the test, can you review or amend it (if necessary)?<|||||>@LysandreJik <|||||>This looks good, thank you @yangheng95! Pinging @sgugger for confirmation regarding whether this is too much of a breaking change or not.<|||||>I personally think this is too much of a breaking change: some users rely on specific exceptions being raised in their code (we recently saw this with a change of exception in `huggingface_hub` causing problems in the AllenNLP codebase) and this is one of the core functions of the library.<|||||>Hey @yangheng95, after discussing a bit with @sgugger we agree that this is, unfortunately, a breaking change that will be tough to merge (as he mentions above). You give a good example in your PR description: if you catch `ValueError` and would rather catch `ConnectionError`s, then others are bound to do the same and be surprised when their try/catch mechanisms fail. We could switch that in version 5, but how do you set up a deprecation cycle for an error?<|||||>Hi, thanks for your discussion about this PR @sgugger, @LysandreJik. I understand it is a breaking change and recognize that stability is important. As I said, it is only a suggestion; transformers helps me a lot. Cheers!<|||||>Thanks for your understanding, sorry not to have thought about this before asking you to fix the tests; really looking forward to your future contributions.<|||||>Just an example about exception handling: ``` Catch exception: Connection error, and we cannot find the requested files in the cached path. Please try again or make sure your Internet connection is on. in <function train4apc at 0x7f8bfbfd9790>, retry soon if you dont terminate the process... ``` Sometimes, for a fairly complex code structure, it is better to catch the ConnectionError so that the code can try to connect again (a ubiquitous situation in China, e.g., up to a 10% connection failure rate in some areas). But a ValueError usually means an internal error in modeling and is much more dangerous; it cannot simply be retried. Since the early versions of transformers, i.e., pytorch-pretrained-bert, it has been hard for me to distinguish `ConnectionError` from `ValueError` exceptions. Finally, I hope this great repo keeps getting better and better. Regards.
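As an aside on the retry scenario described in these comments, here is a minimal sketch of a download-with-retry helper. It is illustrative only: the model name, retry count, and wait time are placeholders, and it catches both `ConnectionError` and `ValueError` because releases around this discussion signalled connection failures with `ValueError`.

```python
import time

from transformers import AutoModel


def load_with_retry(model_name: str, retries: int = 5, wait_seconds: float = 10.0):
    """Retry loading a pretrained model when the hub is unreachable.

    Both exception types are caught because older transformers releases
    signalled connection problems with ValueError rather than ConnectionError.
    """
    last_err = None
    for attempt in range(retries):
        try:
            return AutoModel.from_pretrained(model_name)
        except (ConnectionError, ValueError) as err:
            last_err = err
            print(f"Attempt {attempt + 1}/{retries} failed: {err}; retrying...")
            time.sleep(wait_seconds)
    raise RuntimeError(f"Could not load {model_name} after {retries} attempts") from last_err


model = load_with_retry("bert-base-uncased")
```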
transformers
14,437
closed
Recover Deleted XNLI Instructions
# What does this PR do? This reintroduces the instructions and task description of XNLI to the examples documentation for text classification. These seem to have been inadvertently deleted during the examples re-org since they followed the instructions for Tensorflow instructions for `run_glue.py`. You can see the deletion [here](https://github.com/huggingface/transformers/pull/11350/files#L206). <!-- Congratulations! You've made it this far! You're not quite done yet though. Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution. Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change. Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost. --> ## Before submitting - [x] This PR fixes a typo or improves the docs. ## Who can review? The reorg was done by @sgugger, so he can best confirm that the deletion of these was inadvertent.
11-17-2021 22:28:15
11-17-2021 22:28:15
transformers
14,436
closed
BERT outperformed XLNet
Hi, I am doing a binary tweet sentiment classification project and I want to compare the F1 scores of XLNet and BERT. I expected XLNet to outperform BERT; however, BERT outperformed XLNet and other permutation language models. I know it shouldn't necessarily be the case, but I can't explain the reason. I used Huggingface transformers and pretrained models. Where should I look to understand the result? Dataset link: http://help.sentiment140.com/for-students The reason why I think XLNet would outperform BERT: https://towardsdatascience.com/what-is-xlnet-and-why-it-outperforms-bert-8d8fce710335
11-17-2021 20:42:46
11-17-2021 20:42:46
This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread. Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored.
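A minimal sketch of how the two fine-tuned models could be compared on the same held-out tweets. The checkpoint paths, example texts, gold labels, and the `LABEL_1` mapping are all placeholders that depend on how the models were fine-tuned.

```python
from sklearn.metrics import f1_score
from transformers import pipeline

# Hypothetical fine-tuned checkpoints; substitute your own model directories.
checkpoints = {"bert": "./bert-sentiment140", "xlnet": "./xlnet-sentiment140"}

texts = ["I love this phone!", "Worst customer service ever."]  # placeholder test tweets
gold = [1, 0]                                                    # placeholder gold labels

for name, ckpt in checkpoints.items():
    clf = pipeline("text-classification", model=ckpt)
    preds = [1 if out["label"] == "LABEL_1" else 0 for out in clf(texts)]
    print(name, "F1 =", f1_score(gold, preds))
```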
transformers
14,435
closed
Ecco package - integration with HuggingFace
Hi there! I'm writing this issue to reach the HuggingFace team and to ask whether it would make sense to merge the [Ecco package](https://github.com/jalammar/ecco) with huggingface or, for example, to integrate it into the transformers API. Note: I do not speak on behalf of the Ecco package; I'm just a code contributor to the repository and a heavy user of it + transformers!
11-17-2021 18:23:15
11-17-2021 18:23:15
Hello! We appreciate @jalammar's work on `ecco`, and we're happy to work with Jay and other members of ecco's community on making sure that `transformers` remains compatible with it - but I wonder if merging the two packages would improve users' workflows or if keeping the two as they are isn't the best for ecco's development. In general we're happy to work with other package maintainers to provide the best compatibility we can, and we're also happy to integrate tools in transformers but only if that's a wish from the package maintainers and if it comes with a significantly improved user experience.<|||||>Agreed! Would it make sense to make `ecco` a `transformers` extra dependency for an explainability API? `ecco` can be extended to deal with [other models](https://huggingface.co/transformers/model_doc/auto.html?highlight=auto). Now we are only dealing with CausalLM, MaskedLM and Seq2SeqLM. Any kind of model classification would be a fairly easy integration, for example :) <|||||>Thank you @JoaoLages for the initiative and @LysandreJik for your support! `transformers` is the main dependency for Ecco and insuring compatibility is indeed key. Certainly happy to brainstorm on how to improve the user experience. And would certainly love Ecco to be considered for interpretability/explainability initiatives around `transformers`. This can potentially fall under [Bertology ](https://huggingface.co/transformers/master/bertology.html) research. Another possibility is one or multiple language model interpretability notebooks as a part of community notebooks. @JoaoLages, the Integrated Gradients feature is likely a great candidate for something like this. <|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread. Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored.<|||||>> Thank you @JoaoLages for the initiative and @LysandreJik for your support! > > `transformers` is the main dependency for Ecco and insuring compatibility is indeed key. > > Certainly happy to brainstorm on how to improve the user experience. And would certainly love Ecco to be considered for interpretability/explainability initiatives around `transformers`. This can potentially fall under [Bertology ](https://huggingface.co/transformers/master/bertology.html) research. Another possibility is one or multiple language model interpretability notebooks as a part of community notebooks. @JoaoLages, the Integrated Gradients feature is likely a great candidate for something like this. @LysandreJik how do you see Ecco being integrated into the `transformers` package? Any of the possibilities pointed here? It would be easily generalizable to most NLP tasks but it is a component that is not present in this package, from what I see - that's why I believe it would be an awesome integration😄. An idea is to have a new transformers' explainability API with Ecco dependencies or simply import the Ecco code into the package. <|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread. Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored.
transformers
14,434
closed
[Bart] Fix docs
# What does this PR do? <!-- Congratulations! You've made it this far! You're not quite done yet though. Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution. Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change. Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost. --> <!-- Remove if not applicable --> Fix docs. ## Before submitting - [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case). - [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section? - [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link to it if that's the case. - [ ] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/master/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/master/docs#writing-source-documentation). - [ ] Did you write any new necessary tests? ## Who can review? Anyone in the community is free to review the PR once the tests have passed. Feel free to tag members/contributors who may be interested in your PR. <!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @ If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**. Please tag fewer than 3 people. Models: - albert, bert, xlm: @LysandreJik - blenderbot, bart, marian, pegasus, encoderdecoder, t5: @patrickvonplaten, @patil-suraj - longformer, reformer, transfoxl, xlnet: @patrickvonplaten - fsmt: @stas00 - funnel: @sgugger - gpt2: @patrickvonplaten, @LysandreJik - rag: @patrickvonplaten, @lhoestq - tensorflow: @LysandreJik Library: - benchmarks: @patrickvonplaten - deepspeed: @stas00 - ray/raytune: @richardliaw, @amogkam - text generation: @patrickvonplaten - tokenizers: @n1t0, @LysandreJik - trainer: @sgugger - pipelines: @LysandreJik Documentation: @sgugger HF projects: - datasets: [different repo](https://github.com/huggingface/datasets) - rust tokenizers: [different repo](https://github.com/huggingface/tokenizers) Examples: - maintained examples (not research project or legacy): @sgugger, @patil-suraj - research_projects/bert-loses-patience: @JetRunner - research_projects/distillation: @VictorSanh -->
11-17-2021 17:50:16
11-17-2021 17:50:16
@patil-suraj feel free to merge if ok for you<|||||>Can you also fix #14395 in this PR? Nvm: seems to be the same issue :)
transformers
14,433
closed
Issue doing multi gpu training with TrOCR Transformer
Hello. Thank you for your work on this lovely product. I have managed to train the model on my dataset using the awesome notebook provided by @NielsRogge [here](https://github.com/NielsRogge/Transformers-Tutorials/blob/master/TrOCR/Fine_tune_TrOCR_on_IAM_Handwriting_Database_using_native_PyTorch.ipynb) I am having some issues doing multi gpu training on a single node. i.e using DataParallel The way I set up my model is the following: ``` model = VisionEncoderDecoderModel.from_pretrained("microsoft/trocr-base-printed") device = torch.device("cuda:0" if torch.cuda.is_available() else "cpu") model = DataParallel(model) // do this because of previous issue with hugginface model with multi gpu model = model.module model.to(device) ``` and later on, in the for loop, i use the same device for the inputs: ``` for i, (images, labels) in enumerate(data_loader): images, labels = images.to(device), labels.to(device) ``` Nevertheless, I am still getting: ``` File "/opt/miniconda/lib/python3.7/site-packages/torch/nn/modules/module.py", line 1051, in _call_impl return forward_call(*input, **kwargs) File "/opt/miniconda/lib/python3.7/site-packages/transformers/models/vision_encoder_decoder/modeling_vision_encoder_decoder.py", line 463, in forward **kwargs_encoder, File "/opt/miniconda/lib/python3.7/site-packages/torch/nn/modules/module.py", line 1051, in _call_impl return forward_call(*input, **kwargs) File "/opt/miniconda/lib/python3.7/site-packages/transformers/models/vit/modeling_vit.py", line 557, in forward return_dict=return_dict, File "/opt/miniconda/lib/python3.7/site-packages/torch/nn/modules/module.py", line 1051, in _call_impl return forward_call(*input, **kwargs) File "/opt/miniconda/lib/python3.7/site-packages/transformers/models/vit/modeling_vit.py", line 388, in forward layer_outputs = layer_module(hidden_states, layer_head_mask, output_attentions) File "/opt/miniconda/lib/python3.7/site-packages/torch/nn/modules/module.py", line 1051, in _call_impl return forward_call(*input, **kwargs) File "/opt/miniconda/lib/python3.7/site-packages/transformers/models/vit/modeling_vit.py", line 319, in forward output_attentions=output_attentions, File "/opt/miniconda/lib/python3.7/site-packages/torch/nn/modules/module.py", line 1051, in _call_impl return forward_call(*input, **kwargs) File "/opt/miniconda/lib/python3.7/site-packages/transformers/models/vit/modeling_vit.py", line 262, in forward self_outputs = self.attention(hidden_states, head_mask, output_attentions) File "/opt/miniconda/lib/python3.7/site-packages/torch/nn/modules/module.py", line 1051, in _call_impl return forward_call(*input, **kwargs) File "/opt/miniconda/lib/python3.7/site-packages/transformers/models/vit/modeling_vit.py", line 191, in forward attention_scores = torch.matmul(query_layer, key_layer.transpose(-1, -2)) RuntimeError: CUDA out of memory. Tried to allocate 244.00 MiB (GPU 0; 11.17 GiB total capacity; 10.54 GiB already allocated; 85.44 MiB free; 10.63 GiB reserved in total by PyTorch) ``` Any help is greatly appreciated. Thank you!
11-17-2021 17:34:45
11-17-2021 17:34:45
Hi, Thanks for your interest in TrOCR! What's the batch size you are using? <|||||>Hi @NielsRogge Currently trying with batch size 16 on 4 Nvidia K80 gpu's on an Azure Compute. It previously worked with batch size 4 on 1 Nvidia k80 <|||||>Hello @NielsRogge Any idea about this? I have managed to add automatic mixed precision training from pytorch on V100 compute but I can only get up to batch size 8 this way. Any help would be greatly appreciated. Thank you<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread. Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored.
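On the `DataParallel` usage shown in this issue: calling `model.module` right after wrapping discards the parallel wrapper, so the whole batch still runs on a single GPU. A minimal sketch of keeping the wrapper in place follows; the checkpoint name comes from the issue, while the batch handling is illustrative and assumes the DataLoader yields pixel values and label ids.

```python
import torch
from torch.nn import DataParallel
from transformers import VisionEncoderDecoderModel

model = VisionEncoderDecoderModel.from_pretrained("microsoft/trocr-base-printed")
device = torch.device("cuda:0" if torch.cuda.is_available() else "cpu")

# Keep the DataParallel wrapper: unwrapping with `model.module` makes every
# forward pass run on a single device again, which is why GPU 0 runs out of memory.
model = DataParallel(model).to(device)

# Inside the training loop (images and labels come from your DataLoader):
# outputs = model(pixel_values=images.to(device), labels=labels.to(device))
# loss = outputs.loss.mean()  # DataParallel returns one loss value per replica
# loss.backward()
```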
transformers
14,432
closed
Longformer slower than Roberta
Hi, It seems I have a similar issue to https://github.com/huggingface/transformers/issues/8725. However, the time difference between Longformer and Roberta is nearly a factor of 10. Does that seem normal (sequences are in the 0-100 length range for this first test but will go beyond 512 in practice)? After checking, it is the longformer.encoder step that is quite slow. Here is the config I use (I see a similar factor whatever the config): LongFormer_TOY_MODEL_HPARAMS = { "vocab_size": len(LongFormer_VOCAB), "hidden_size": 64, "num_hidden_layers": 3, "num_attention_heads": 8, "intermediate_size": 32, "hidden_act": "gelu", "hidden_dropout_prob": 0.1, "attention_probs_dropout_prob": 0.1, "max_position_embeddings": 512 + 2, # tokenizer's model_max_length + 2 ( / tokens of sequence) "initializer_range": 0.02, "layer_norm_eps": 1e-12, "attention_window": 512 } Thanks!
11-17-2021 16:49:49
11-17-2021 16:49:49
cc @patrickvonplaten <|||||>Hey @duvi86, Could you please provide a reproducible code snippet that shows the time difference between RoBERTa and Longformer? Thank you<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread. Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored.
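A rough, CPU-only timing sketch of the comparison discussed in this issue; the toy sizes mirror the reporter's config but are otherwise illustrative, and no global attention is set. Longformer pads every input up to a multiple of `attention_window`, which is the usual explanation for short sequences looking much slower than RoBERTa.

```python
import time

import torch
from transformers import (LongformerConfig, LongformerModel,
                          RobertaConfig, RobertaModel)

# Toy sizes mirroring the report above; values are illustrative only.
common = dict(vocab_size=1000, hidden_size=64, num_hidden_layers=3,
              num_attention_heads=8, intermediate_size=32)
models = {
    "longformer": LongformerModel(LongformerConfig(attention_window=512, **common)).eval(),
    "roberta": RobertaModel(RobertaConfig(**common)).eval(),
}

input_ids = torch.randint(5, 1000, (8, 100))  # short sequences, as in the report

for name, model in models.items():
    start = time.perf_counter()
    with torch.no_grad():
        model(input_ids)
    print(f"{name}: {time.perf_counter() - start:.3f}s")
```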
transformers
14,431
closed
Add a post init method to all models
# What does this PR do? This PR implements the proper fix for #14388 by introducing a new `post_init` method on each model, which replaces the current `init_weights()` call. The method can execute any code that requires the model to be properly initialized, such as `init_weights()` or the gradient checkpointing BC fix (and more if needed in the future).
11-17-2021 14:26:28
11-17-2021 14:26:28
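To illustrate the pattern this PR describes (not the actual transformers implementation), here is a toy sketch of a `post_init` hook-point that subclasses call at the end of `__init__`, so weight initialization and any backward-compatibility fix-ups run only once the submodules exist. All names here are illustrative.

```python
from types import SimpleNamespace

import torch.nn as nn


class TinyPreTrainedModel(nn.Module):
    """Illustrative base class; not the real transformers code."""

    def post_init(self):
        # Anything that needs the fully built module goes here.
        self.init_weights()
        if getattr(self.config, "gradient_checkpointing", False):
            self.gradient_checkpointing = True  # example of a BC fix-up

    def init_weights(self):
        for module in self.modules():
            if isinstance(module, nn.Linear):
                module.weight.data.normal_(mean=0.0, std=0.02)
                module.bias.data.zero_()


class TinyModel(TinyPreTrainedModel):
    def __init__(self, config):
        super().__init__()
        self.config = config
        self.dense = nn.Linear(4, 4)
        self.post_init()  # replaces the former trailing init_weights() call


model = TinyModel(SimpleNamespace(gradient_checkpointing=True))
```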
transformers
14,430
closed
`AttributeError: 'BertConfig' object has no attribute 'items'` when saving a tf keras model with `transformers 4.12.4`
## Environment info <!-- You can run the command `transformers-cli env` and copy-and-paste its output below. Don't forget to fill out the missing fields in that output! --> - `transformers` version: 4.12.4 - Platform: Linux - Python version: 3.7.10 - PyTorch version (GPU?): - Tensorflow version (GPU?): 2.7.0 - Using GPU in script?: No - Using distributed or parallel set-up in script?: No ### Who can help <!-- Your issue will be replied to more quickly if you can figure out the right person to tag with @ If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**. Please tag fewer than 3 people. Models: - ALBERT, BERT, XLM, DeBERTa, DeBERTa-v2, ELECTRA, MobileBert, SqueezeBert: @LysandreJik - encoder-decoder models (For example, BlenderBot, BART, Marian, Pegasus, T5, ByT5): @patrickvonplaten, @patil-suraj - Longformer, Reformer, TransfoXL, XLNet, FNet: @patrickvonplaten - FSMT: @stas00 - Funnel: @sgugger - GPT-2, GPT: @patrickvonplaten, @LysandreJik - RAG, DPR: @patrickvonplaten, @lhoestq - TensorFlow: @Rocketknight1 - JAX/Flax: @patil-suraj @patrickvonplaten - TAPAS, LayoutLM, LayoutLMv2, LUKE, ViT, BEiT, DEiT, DETR, CANINE: @NielsRogge - GPT-Neo, GPT-J, CLIP: @patil-suraj - Wav2Vec2, HuBERT, SpeechEncoderDecoder: @patrickvonplaten, @anton-l If the model isn't in the list, ping @LysandreJik who will redirect you to the correct contributor. Library: - Benchmarks: @patrickvonplaten - Deepspeed: @stas00 - Ray/raytune: @richardliaw, @amogkam - Text generation: @patrickvonplaten - Tokenizers: @LysandreJik - Trainer: @sgugger - Pipelines: @Narsil - Speech: @patrickvonplaten, @anton-l - Vision: @NielsRogge, @sgugger Documentation: @sgugger Model hub: - for issues with a model, report at https://discuss.huggingface.co/ and tag the model's creator. HF projects: - datasets: [different repo](https://github.com/huggingface/datasets) - rust tokenizers: [different repo](https://github.com/huggingface/tokenizers) Examples: - maintained examples (not research project or legacy): @sgugger, @patil-suraj For research projetcs, please ping the contributor directly. For example, on the following projects: - research_projects/bert-loses-patience: @JetRunner - research_projects/distillation: @VictorSanh --> ## Information Model I am using (Bert, XLNet ...): `TFBert` The problem arises when using: * [ ] the official example scripts: (give details below) * [x] my own modified scripts: (give details below) The tasks I am working on is: * [ ] an official GLUE/SQUaD task: (give the name) * [ ] my own task or dataset: (give details below) ## To reproduce Steps to reproduce the behavior: [Google Colab notebook](https://colab.research.google.com/drive/1Awfs3hmfKn4OQl1tjmJbKVUu9qFZ0JIX#scrollTo=O7V5Ct-pcfVG) 1. Run this code: ```python import tensorflow as tf import transformers import sys print(sys.version) print(tf.__version__) print(transformers.__version__) bert = transformers.TFBertModel(transformers.BertConfig()) input_ids = tf.keras.layers.Input(shape=(512,), dtype=tf.int32) model = tf.keras.Model(inputs=[input_ids], outputs=[bert(input_ids).last_hidden_state]) model.compile() model.save("model") ``` <!-- If you have code snippets, error messages, stack traces please provide them here as well. Important! Use code tags to correctly format your code. 
See https://help.github.com/en/github/writing-on-github/creating-and-highlighting-code-blocks#syntax-highlighting Do not use screenshots, as they are hard to read and (more importantly) don't allow others to copy-and-paste your code.--> ``` 2021-11-17 19:16:12.651287: W tensorflow/stream_executor/platform/default/dso_loader.cc:64] Could not load dynamic library 'libcudart.so.11.0'; dlerror: libcudart.so.11.0: cannot open shared object file: No such file or directory 2021-11-17 19:16:12.651307: I tensorflow/stream_executor/cuda/cudart_stub.cc:29] Ignore above cudart dlerror if you do not have a GPU set up on your machine. 3.7.10 2.7.0 4.12.4 2021-11-17 19:16:13.813364: W tensorflow/stream_executor/platform/default/dso_loader.cc:64] Could not load dynamic library 'libcuda.so.1'; dlerror: libcuda.so.1: cannot open shared object file: No such file or directory 2021-11-17 19:16:13.813392: W tensorflow/stream_executor/cuda/cuda_driver.cc:269] failed call to cuInit: UNKNOWN ERROR (303) 2021-11-17 19:16:13.813402: I tensorflow/stream_executor/cuda/cuda_diagnostics.cc:156] kernel driver does not appear to be running on this host (haru-Z590-S01): /proc/driver/nvidia/version does not exist 2021-11-17 19:16:13.813550: I tensorflow/core/platform/cpu_feature_guard.cc:151] This TensorFlow binary is optimized with oneAPI Deep Neural Network Library (oneDNN) to use the following CPU instructions in performance-critical operations: AVX2 AVX512F FMA To enable them in other operations, rebuild TensorFlow with the appropriate compiler flags. 2021-11-17 19:16:20.635357: W tensorflow/python/util/util.cc:368] Sets are not currently considered sequences, but this may change in the future, so consider avoiding using them. WARNING:absl:Found untraced functions such as embeddings_layer_call_fn, embeddings_layer_call_and_return_conditional_losses, encoder_layer_call_fn, encoder_layer_call_and_return_conditional_losses, pooler_layer_call_fn while saving (showing 5 of 1055). These functions will not be directly callable after loading. Traceback (most recent call last): File "b.py", line 13, in <module> model.save("model") File "/home/haru/miniconda3/envs/mlflow-dev-env/lib/python3.7/site-packages/keras/utils/traceback_utils.py", line 67, in error_handler raise e.with_traceback(filtered_tb) from None File "/home/haru/miniconda3/envs/mlflow-dev-env/lib/python3.7/site-packages/transformers/configuration_utils.py", line 237, in __getattribute__ return super().__getattribute__(key) AttributeError: 'BertConfig' object has no attribute 'items' ``` ## Expected behavior <!-- A clear and concise description of what you would expect to happen. -->
11-17-2021 10:17:09
11-17-2021 10:17:09
In transformers 4.12.3, the attached code works.<|||||>After commenting out the changes made in https://github.com/huggingface/transformers/pull/14361, the attached code works in transformers 4.12.4. https://github.com/huggingface/transformers/blob/b567510cff606c9dd67cb7c56169bc596590e700/src/transformers/modeling_tf_utils.py#L695-L700 cc @sgugger and @LysandreJik who reviewed & approved #14361<|||||>@harupy The offending PR has been reverted and a patch release has been deployed. We're working on a fix that will resolve the original issues without causing new ones, which will hopefully be deployed soon. In the meantime, your code should work as before!<|||||>@Rocketknight1 Got it, thanks!
transformers
14,429
closed
`BertTokenizerFast.vocab.keys()` does not return a fixed order sequence
## Environment info - `transformers` version: 4.12.3 - Platform: Linux-5.4.0-1048-aws-x86_64-with-glibc2.27 - Python version: 3.9.7 - PyTorch version (GPU?): 1.10.0 (False) - Tensorflow version (GPU?): not installed (NA) - Flax version (CPU?/GPU?/TPU?): not installed (NA) - Jax version: not installed - JaxLib version: not installed - Using GPU in script?: no - Using distributed or parallel set-up in script?: no ### Who can help Tokenizers: @LysandreJik ## Information I'm using `BertTokenizer` and `BertTokenizerFast`, and I found that `BertTokenizerFast.vocab.keys()` does not return a fixed order sequence, while `BertTokenizer` does. The problem arises when using: * [ ] the official example scripts: (give details below) * [x] my own modified scripts: (give details below) The tasks I am working on is: * [ ] an official GLUE/SQUaD task: (give the name) * [x] my own task or dataset: (give details below) ## To reproduce Steps to reproduce the behavior: 1. Create a python script named `test.py` with the following code, which simply loads a pre-trained BERT tokenizer, acquires its vocabulary with `tokenizer.vocab.key()` and dumps the vocabulary. ```python import json import random from sys import stdout from transformers import BertTokenizerFast random.seed(2333) tokenizer = BertTokenizerFast.from_pretrained('bert-base-uncased') vocab_tokens = list(tokenizer.vocab.keys()) json.dump(vocab_tokens, stdout, indent=4, ensure_ascii=False) ``` 2. Run the script twice with stdout redirecting to two different files, and compare the two outputs, will find they are different. ```zsh ➜ python test.py > run1.json ➜ python test.py > run2.json ➜ diff -q run1.json run2.json Files run1.json and run2.json differ ``` 3. Change `BertTokenizerFast` to `BertTokenizer` in the script and execute commands of step 2 again, the two outputs are the same this time. Screenshot of my results for `BertTokenizerFast` and `BertTokenizer`: ![image](https://user-images.githubusercontent.com/38486514/142173100-b67cb38f-76ec-47e6-8cd0-d2cbd2196b69.png) ![image](https://user-images.githubusercontent.com/38486514/142173975-b917af77-3b3c-430c-a22b-22d29508d861.png) ## Expected behavior Like `BertTokenizer`, `BertTokenizerFast` should also output the same vocabulary across different runs.
11-17-2021 09:32:21
11-17-2021 09:32:21
Well, a dictionary-like object (in which the vocab is stored) is naturally unordered in most programming languages due to the data structure behind. The vocab order that `BertTokenizer` yields is fixed because it leverages `OrderedDict`, which is a special dict class making itself ordered. Additionally, the iterator order of Python's `dictview` object is guaranteed based on insertion order since Python 3.7 (See [this](https://docs.python.org/3.8/library/stdtypes.html#dictionary-view-objects)). Therefore, both mechanisms ensure you see a fixed vocab list. However, `BertTokenizerFast` is backed by Rust programming language, which uses `HashMap` to store the vocab, and this data structure is naturally unordered. That's why you see the difference between them.<|||||>> Well, a dictionary-like object (in which the vocab is stored) is naturally unordered in most programming languages due to the data structure behind. > > The vocab order that `BertTokenizer` yields is fixed because it leverages `OrderedDict`, which is a special dict class making itself ordered. Additionally, the iterator order of Python's `dictview` object is guaranteed based on insertion order since Python 3.7 (See [this](https://docs.python.org/3.8/library/stdtypes.html#dictionary-view-objects)). Therefore, both mechanisms ensure you see a fixed vocab list. > > However, `BertTokenizerFast` is backed by Rust programming language, which uses `HashMap` to store the vocab, and this data structure is naturally unordered. That's why you see the difference between them. Thanks for the explanation, and I agree that this is the reason. However when programme with Python3.7+, I would assume that all `dict` objects will be ordered, at least does not change across different runs. Unfortunately, this wrong assumption brings my specific implementation a bug and takes me plenty of time to locate it. I think many people may have similar thoughts like this, and people may not always keep in mind that the fast tokenizer has a Rust backend, which may lead to different behavior. I want to ask another more general question, is it necessary to keep consistent behavior between Python-backend tokenizer and Rust-backend tokenizer? Or shall we document this and give some warning?<|||||>In general, I think we should not make an assumption that a `dict`, or let's say a hash-backed object, is ordered since hash table is a naturally unordered data structure. Python is a special case of it and only with 3.7+ guarantees dict iterator order. Before Python 3.7, this is a CPython implementation detail of dict and can vary upon different implementations. Moreover, although `BertTokenizer` uses `OrderedDict` to guarantee the order of vocab, the [docs](https://huggingface.co/transformers/internal/tokenization_utils.html#transformers.tokenization_utils_base.PreTrainedTokenizerBase.get_vocab) only states `a dict is returned`, meaning that even `BertTokenizer` is using `OrderedDict`, other models maybe not. Additionally, `transformers` is Python version agnostic which currently supports Python 3.6+, it cannot ensure dict ordered characteristics among different Python versions are consistent. Therefore, I perceive that the different behavior of this is fairly normal and acceptable, and giving a note or warning in the docs is also welcomed, which can make it more unequivocal for people.<|||||>@qqaatw Thanks, this makes sense! 
Currently, I would suggest sorting the keys of the `dict` if an absolutely deterministic order is needed, especially when the dict comes from some unknown implementation (maybe this applies to Python programming as a whole).<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread. Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored.
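A small sketch of the sorting workaround suggested in this thread; the checkpoint is the one from the report, and either ordering works as long as it is applied consistently.

```python
from transformers import BertTokenizerFast

tokenizer = BertTokenizerFast.from_pretrained("bert-base-uncased")
vocab = tokenizer.get_vocab()  # Rust-backed hash map: iteration order is not stable

# Lexicographic order, stable across runs:
tokens_sorted = sorted(vocab.keys())

# Or keep tokens in token-id order instead:
tokens_by_id = [tok for tok, _ in sorted(vocab.items(), key=lambda kv: kv[1])]
```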
transformers
14,428
closed
[Benchmark] Tokenizers as Collate functions vs normal in loop tokenizing
Apologies if this is more of a bug than a benchmark; I just wanted to check whether it is normal for a tokenizer used as a collate function to run twice as slow. [Here's the colab notebook](https://colab.research.google.com/drive/1dqnM3pWIZgUMTrk42RJrvby6-ZZCHNRO?usp=sharing).
11-17-2021 06:22:29
11-17-2021 06:22:29
Maybe @sgugger has an idea.<|||||>Please explain to us what the problem is, the Colab notebook you link is not public.<|||||>Sorry @sgugger, should be public now but basically the problem is that using a collate function slows down dataloader: ```python sentence_dl = DataLoader( sentence_ds, BATCH_SIZE, num_workers=NUM_WORKERS, shuffle=False, drop_last=False, pin_memory=True, ) # fast for batch in tqdm(sentence_dl): x = collate_fn(batch) ``` ```python sentence_dl = DataLoader( sentence_ds, BATCH_SIZE, num_workers=NUM_WORKERS, shuffle=False, drop_last=False, pin_memory=True, collate_fn=collate_fn, ) # slow for batch in tqdm(sentence_dl): continue ``` Where sentence_ds is just a `Dataset` created from the sentences (strings) in "snli" dataset and collate function is: ```python tokenizer = AutoTokenizer.from_pretrained(LANGUAGE_MODEL) class CollateFn: def __init__(self, tokenizer): self.tokenizer = tokenizer def __call__(self, x): return self.tokenizer( x, max_length=MAX_TEXT_LENGTH, truncation=True, padding="max_length", return_tensors="pt" ) collate_fn = CollateFn(tokenizer) ```<|||||>Understood. Yes, it's very likely that using the multiprocessing on the dataloders side slows down the multiprocessing of the "fast" tokenizer. FYI, for best speed, I would recommend passing 1,000 samples at once to the tokenizer and preprocessing once and for all beforehand.
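A sketch of the preprocessing approach suggested at the end of this thread: tokenize the whole dataset once, in chunks of 1,000 examples, instead of tokenizing inside the DataLoader. The model name, column name, and sequence length are placeholders.

```python
from datasets import load_dataset
from torch.utils.data import DataLoader
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
snli = load_dataset("snli", split="train")


def tokenize(batch):
    return tokenizer(batch["premise"], max_length=128,
                     truncation=True, padding="max_length")


# Tokenize once, 1,000 examples per call, before training starts.
snli = snli.map(tokenize, batched=True, batch_size=1000)
snli.set_format(type="torch", columns=["input_ids", "attention_mask", "token_type_ids"])

loader = DataLoader(snli, batch_size=64, num_workers=2)  # default collate is enough now
```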
transformers
14,427
closed
fix hook removal issue
Here is a possible workaround for an issue triggered by https://github.com/huggingface/transformers/pull/14408 and reported at https://github.com/huggingface/transformers/pull/14408#issuecomment-971004220. I repeat all the relevant information below. This broke HF/deepspeed integration with pt-1.8 or pt-1.9 - works fine with pt-1.10. found with git bisecting and reported by @jeffra, as their CI broke with our master. ``` RUN_SLOW=1 pyt tests/deepspeed/test_deepspeed.py::TestDeepSpeedWithLauncher::test_clm_1_zero3 -sv ``` ``` E Traceback (most recent call last): E File "/mnt/nvme1/code/huggingface/transformers-master/examples/pytorch/language-modeling/run_clm.py", line 524, in <module> E main() E File "/mnt/nvme1/code/huggingface/transformers-master/examples/pytorch/language-modeling/run_clm.py", line 472, in main E train_result = trainer.train(resume_from_checkpoint=checkpoint) E File "/mnt/nvme1/code/huggingface/transformers-master/src/transformers/trainer.py", line 1316, in train E tr_loss_step = self.training_step(model, inputs) E File "/mnt/nvme1/code/huggingface/transformers-master/src/transformers/trainer.py", line 1849, in training_step E loss = self.compute_loss(model, inputs) E File "/mnt/nvme1/code/huggingface/transformers-master/src/transformers/trainer.py", line 1881, in compute_loss E outputs = model(**inputs) E File "/home/stas/anaconda3/envs/py38-pt19/lib/python3.8/site-packages/torch/nn/modules/module.py", line 1051, in _call_impl E return forward_call(*input, **kwargs) E File "/mnt/nvme1/code/github/00optimize/deepspeed/deepspeed/runtime/engine.py", line 1580, in forward E loss = self.module(*inputs, **kwargs) E File "/home/stas/anaconda3/envs/py38-pt19/lib/python3.8/site-packages/torch/nn/modules/module.py", line 1057, in _call_impl E for hook in itertools.chain( E RuntimeError: OrderedDict mutated during iteration ``` The issue is triggered by: https://github.com/huggingface/transformers/blob/b567510cff606c9dd67cb7c56169bc596590e700/src/transformers/modeling_utils.py#L423 so it looks like Deepspeed is just a harbinger here, and any other application that also uses hooks that get inserted after this hook will trigger this issue. It appears that what happens is that the hook is being removed from the dict while it being traversed one or more frames above. Perhaps if the hook is last python doesn't report this issue. But if there are more hooks registered after that one, that's when the dict mutation is detected. I looked at what others did to solve this and they had to move the hook removal outside of the hook itself and into the `forward` when it's safe to remove it. Except we don't have a `forward` for this super class. For some reason I can't reproduce this with pt-1.10, which means that pytorch has reworked the loop that traverses the hooks dict to allow hooks to self-remove - probably using a copy to traverse the dict. So this PR is an attempt to make things work, while rendering the hook a noop for subsequent calls. As it says this is a temporary hook and will be removed soon, perhaps it's OK? for pt-1.10 we can safely remove it. Obviously, this is just a suggestion. now that you understand the issue, perhaps you will come up with a more efficient solution. @sgugger
11-17-2021 05:02:13
11-17-2021 05:02:13
Thanks a lot for the investigation! The hook was only a temporary solution to make a quick fix for the backward compatibility issue. I will work on the "real fix" today (which won't depend on hooks) and if I finish it quickly enough, I propose we just merge the "real fix". If it starts taking too much time, we can merge your PR as a quicker fix.<|||||>#14431 should fix the issue in a better way, by removing the hook entirely :-)<|||||>I confirm that https://github.com/huggingface/transformers/pull/14431 resolved the problem. Thank you, Sylvain!
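As an illustration of the self-removal problem discussed in this PR (not the actual transformers code): instead of calling `handle.remove()` from inside the hook, which mutates the hooks dict while older PyTorch versions are still iterating over it, the hook can simply disarm itself and stay registered as a no-op.

```python
import torch
import torch.nn as nn


def make_one_shot_pre_hook():
    """Forward pre-hook that runs its body once, then becomes a no-op."""
    fired = False

    def hook(module, inputs):
        nonlocal fired
        if fired:
            return
        fired = True
        print("one-time setup for", module.__class__.__name__)

    return hook


layer = nn.Linear(2, 2)
layer.register_forward_pre_hook(make_one_shot_pre_hook())
layer(torch.randn(1, 2))  # prints the setup message
layer(torch.randn(1, 2))  # hook is still registered but does nothing
```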
transformers
14,426
open
[Deepspeed Inference] HF Integration
This PR is working on an integration of [Deepspeed Inference](https://www.deepspeed.ai/tutorials/inference-tutorial/) which implements Tensor Parallelism. This is different from Deepspeed ZeRO inference. This is a very early draft. To try: ``` cd transformers export BS=16; rm -r output_dir; PYTHONPATH=src USE_TF=0 \ deepspeed --num_gpus=2 examples/pytorch/translation/run_translation.py \ --model_name_or_path t5-small --output_dir output_dir --adam_eps 1e-06 \ --evaluation_strategy=steps --do_eval --label_smoothing 0.1 --learning_rate \ 3e-5 --logging_first_step --logging_steps 500 --max_source_length 128 \ --max_target_length 128 --overwrite_output_dir --per_device_eval_batch_size $BS \ --predict_with_generate --sortish_sampler --source_lang en --target_lang ro \ --dataset_name wmt16 --dataset_config ro-en --source_prefix \ 'translate English to Romanian: ' --val_max_target_length 128 --warmup_steps \ 50 --max_eval_samples 50 --deepspeed_inference --skip_memory_metrics 0 ``` and it currently hangs with `--num_gpus > 1`. One gpu finishes processing and the other is stuck in preparing inputs. So need to figure out the synchronization of the gpus.
11-17-2021 01:57:38
11-17-2021 01:57:38
The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_14426). All of your documentation changes will be reflected on that endpoint.
transformers
14,425
closed
Add documentation for exporting TorchScript model to accelerator
# 🚀 Feature request Add examples showing how to trace a transformer model to TorchScript and deploy it on an accelerator such as [AWS Inferentia](https://aws.amazon.com/machine-learning/inferentia/ "Inferentia accelerator"). ## Motivation Communicated with @philschmid. HuggingFace transformer models can be traced to TorchScript models and run on accelerators. We would like to help HF transformer users save the cost of deploying and running HF transformer models at scale. We wish to upstream additional documentation showing the minimal code updates needed to deploy a traced TorchScript model on an accelerator. ## Your contribution An example code snippet to trace and deploy a TorchScript model on an accelerator such as [AWS Inferentia](https://aws.amazon.com/machine-learning/inferentia/ "Inferentia accelerator")
11-16-2021 23:02:17
11-16-2021 23:02:17
That sounds good! We'd welcome a section about deploying a TorchScript model in AWS Inferentia! Ideally this would be under a new documentation page "deployment". You can take inspiration from either the RST [serialization page](https://github.com/huggingface/transformers/blob/master/docs/source/serialization.rst) or the MD [migration page](https://github.com/huggingface/transformers/blob/master/docs/source/migration.md). You should then add a link to it in the index [here](https://github.com/huggingface/transformers/blob/master/docs/source/index.rst). Thank you, and let @philschmid or I know if you need any help!<|||||>Hi @kct22aws , Please let me know if you need any help regarding this as we also have several huggingface and custom models running in AWS inferentia !!<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread. Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored.
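For reference, a minimal sketch of the device-agnostic tracing step such documentation would start from. The checkpoint name and sequence length are placeholders; the Inferentia-specific step (compiling the traced graph with the AWS Neuron SDK instead of loading it with `torch.jit.load`) is not shown here.

```python
import torch
from transformers import AutoModelForSequenceClassification, AutoTokenizer

checkpoint = "distilbert-base-uncased-finetuned-sst-2-english"  # placeholder model
tokenizer = AutoTokenizer.from_pretrained(checkpoint)
model = AutoModelForSequenceClassification.from_pretrained(checkpoint, torchscript=True)
model.eval()

# Tracing needs a concrete example input with a fixed shape.
inputs = tokenizer("TorchScript tracing needs an example input.",
                   padding="max_length", max_length=128, return_tensors="pt")
example = (inputs["input_ids"], inputs["attention_mask"])

traced = torch.jit.trace(model, example)
torch.jit.save(traced, "traced_model.pt")

# On CPU/GPU the artifact can be reloaded directly; on an accelerator it would
# instead be compiled with the vendor SDK before deployment.
restored = torch.jit.load("traced_model.pt")
logits = restored(*example)[0]
print(logits.shape)
```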
transformers
14,424
closed
Debug doc
# What does this PR do? This should fix the failing `deploy_doc` job. Apparently there is something in the latest release of the Python markdown package that Sphinx does not like.
11-16-2021 22:50:21
11-16-2021 22:50:21
transformers
14,423
closed
Initial install: No module named 'tensorflow.python.keras.engine.keras_tensor'
## Environment info Output of transformers-cli env is an error ending with: RuntimeError: Failed to import transformers.modeling_tf_utils because of the following error (look up to see its traceback): No module named 'tensorflow.python.keras.engine.keras_tensor' - `transformers` version: - Platform: Linux CentOS7 - Python version: 3.6.13 - PyTorch version (GPU?): 1.9.1.post3 - Tensorflow version (GPU?): 2.1.0 gpu ### Who can help Library: - Pipelines: @Narsil ## To reproduce Steps to reproduce the behavior: Installation with Mamba using conda recipe for transformers: `micromamba create -y -p <path> mamba python=3.6 cudatoolkit=10.0 cudnn=7.6.0 pytorch micromamba install -y -p <path> pandas seaborn plotly bokeh scikit-learn statsmodels scipy matplotlib simpleitk -c simpleitk micromamba install -y -p <path> transformers=4.12.3 source <path>/bin/activate base python -m pip install --upgrade pip python -m pip install tensorflow-gpu==2.1.0 ` Output of sample given in installation docs: ./python -c "from transformers import pipeline; print(pipeline('sentiment-analysis')('I hate you'))" `Traceback (most recent call last): File "/programs/x86_64-linux/transformers/4.12.3_cu10.0/lib/python3.6/site-packages/transformers/file_utils.py", line 2150, in _get_module return importlib.import_module("." + module_name, self.__name__) File "/programs/x86_64-linux/transformers/4.12.3_cu10.0/lib/python3.6/importlib/__init__.py", line 126, in import_module return _bootstrap._gcd_import(name[level:], package, level) File "<frozen importlib._bootstrap>", line 994, in _gcd_import File "<frozen importlib._bootstrap>", line 971, in _find_and_load File "<frozen importlib._bootstrap>", line 955, in _find_and_load_unlocked File "<frozen importlib._bootstrap>", line 665, in _load_unlocked File "<frozen importlib._bootstrap_external>", line 678, in exec_module File "<frozen importlib._bootstrap>", line 219, in _call_with_frames_removed File "/programs/x86_64-linux/transformers/4.12.3_cu10.0/lib/python3.6/site-packages/transformers/modeling_tf_utils.py", line 30, in <module> from tensorflow.python.keras.engine.keras_tensor import KerasTensor ModuleNotFoundError: No module named 'tensorflow.python.keras.engine.keras_tensor' The above exception was the direct cause of the following exception: Traceback (most recent call last): File "/programs/x86_64-linux/transformers/4.12.3_cu10.0/lib/python3.6/site-packages/transformers/file_utils.py", line 2150, in _get_module return importlib.import_module("." + module_name, self.__name__) File "/programs/x86_64-linux/transformers/4.12.3_cu10.0/lib/python3.6/importlib/__init__.py", line 126, in import_module return _bootstrap._gcd_import(name[level:], package, level) File "<frozen importlib._bootstrap>", line 994, in _gcd_import File "<frozen importlib._bootstrap>", line 971, in _find_and_load File "<frozen importlib._bootstrap>", line 955, in _find_and_load_unlocked File "<frozen importlib._bootstrap>", line 665, in _load_unlocked File "<frozen importlib._bootstrap_external>", line 678, in exec_module File "<frozen importlib._bootstrap>", line 219, in _call_with_frames_removed File "/programs/x86_64-linux/transformers/4.12.3_cu10.0/lib/python3.6/site-packages/transformers/pipelines/__init__.py", line 25, in <module> from ..models.auto.configuration_auto import AutoConfig File "/programs/x86_64-linux/transformers/4.12.3_cu10.0/lib/python3.6/site-packages/transformers/models/__init__.py", line 19, in <module> from . 
import ( File "/programs/x86_64-linux/transformers/4.12.3_cu10.0/lib/python3.6/site-packages/transformers/models/layoutlm/__init__.py", line 22, in <module> from .configuration_layoutlm import LAYOUTLM_PRETRAINED_CONFIG_ARCHIVE_MAP, LayoutLMConfig File "/programs/x86_64-linux/transformers/4.12.3_cu10.0/lib/python3.6/site-packages/transformers/models/layoutlm/configuration_layoutlm.py", line 22, in <module> from ...onnx import OnnxConfig, PatchingSpec File "/programs/x86_64-linux/transformers/4.12.3_cu10.0/lib/python3.6/site-packages/transformers/onnx/__init__.py", line 17, in <module> from .convert import export, validate_model_outputs File "/programs/x86_64-linux/transformers/4.12.3_cu10.0/lib/python3.6/site-packages/transformers/onnx/convert.py", line 23, in <module> from .. import PreTrainedModel, PreTrainedTokenizer, TensorType, TFPreTrainedModel, is_torch_available File "<frozen importlib._bootstrap>", line 1020, in _handle_fromlist File "/programs/x86_64-linux/transformers/4.12.3_cu10.0/lib/python3.6/site-packages/transformers/file_utils.py", line 2140, in __getattr__ module = self._get_module(self._class_to_module[name]) File "/programs/x86_64-linux/transformers/4.12.3_cu10.0/lib/python3.6/site-packages/transformers/file_utils.py", line 2154, in _get_module ) from e RuntimeError: Failed to import transformers.modeling_tf_utils because of the following error (look up to see its traceback): No module named 'tensorflow.python.keras.engine.keras_tensor' The above exception was the direct cause of the following exception: Traceback (most recent call last): File "<string>", line 1, in <module> File "<frozen importlib._bootstrap>", line 1020, in _handle_fromlist File "/programs/x86_64-linux/transformers/4.12.3_cu10.0/lib/python3.6/site-packages/transformers/file_utils.py", line 2140, in __getattr__ module = self._get_module(self._class_to_module[name]) File "/programs/x86_64-linux/transformers/4.12.3_cu10.0/lib/python3.6/site-packages/transformers/file_utils.py", line 2154, in _get_module ) from e RuntimeError: Failed to import transformers.pipelines because of the following error (look up to see its traceback): Failed to import transformers.modeling_tf_utils because of the following error (look up to see its traceback): No module named 'tensorflow.python.keras.engine.keras_tensor'` ## Expected behavior Expected model output ending with: [{'label': 'NEGATIVE', 'score': 0.9991129040718079}]
11-16-2021 20:25:02
11-16-2021 20:25:02
Hi @james-vincent , It seems the version of tensorflow you're using is not supported anymore (https://github.com/huggingface/transformers/blob/master/setup.py#L155) . You need at least TF 2.3 to use transformers. Are you able to upgrgade your dependency ? <|||||>Thanks for the quick reply. I can change versions for everything. This is a standalone conda installation. I have used tensorflow-gpu version 2.4.1 but now get a different error when running the test: python -c "from transformers import pipeline; print(pipeline('sentiment-analysis')('I hate you'))" […] Traceback (most recent call last):   File "<string>", line 1, in <module>   File "<frozen importlib._bootstrap>", line 1020, in _handle_fromlist   File "/programs/x86_64-linux/transformers/4.12.3_cu11.0.3/lib/python3.6/site-packages/transformers/file_utils.py", line 2140, in __getattr__     module = self._get_module(self._class_to_module[name])   File "/programs/x86_64-linux/transformers/4.12.3_cu11.0.3/lib/python3.6/site-packages/transformers/file_utils.py", line 2154, in _get_module     ) from e RuntimeError: Failed to import transformers.pipelines because of the following error (look up to see its traceback): Failed to import transformers.modeling_utils because of the following error (look up to see its traceback): module 'torch' has no attribute 'BoolTensor' Installed versions of pertinent packages(via mamba): # Name                    Version                   Build  Channel cudatoolkit               9.0                  h13b8566_0    anaconda cudnn                     7.6.5                 cuda9.0_0    anaconda huggingface_hub           0.1.2              pyhd8ed1ab_0    conda-forge keras-preprocessing       1.1.2                    pypi_0    pypi mamba                     0.17.0           py36h05d92e0_0    conda-forge nccl                      1.3.5                 cuda9.0_0    anaconda numpy                     1.19.5           py36hfc0c790_2    conda-forge pip                       21.3.1             pyhd8ed1ab_0    conda-forge python                    3.6.7           h357f687_1008_cpython    conda-forge pytorch                   0.4.0            py36hdf912b8_0    anaconda tensorflow-gpu            2.4.1                    pypi_0    pypi James Vincent, PhD Bioinformatics Software Curator Dept. of BCMP, Harvard Medical School — BioGrids.org -- On Nov 17, 2021, 3:48 AM -0500, Nicolas Patry ***@***.***>, wrote: > Hi @james-vincent , > It seems the version of tensorflow you're using is not supported anymore (https://github.com/huggingface/transformers/blob/master/setup.py#L155) . You need at least TF 2.3 to use transformers. > Are you able to upgrgade your dependency ? > — > You are receiving this because you were mentioned. > Reply to this email directly, view it on GitHub, or unsubscribe. > Triage notifications on the go with GitHub Mobile for iOS or Android. <|||||>Hi @james-vincent , seems like the error now lies in `torch` import. Do you mind sharing the version you're running ? Maybe simply updating those dependencies should work. Cheers,<|||||>Ah - thanks. I did not pay attention and install pytorch 0.4.0. I updated to pytorch 1.10.0 and now the test passes just fine. Thanks for the help. I found that the conda recipe, both from conda-forge and huggingface channels, did not install tensorflow. This is why I did it manually. It would be great if the conda recipe had tensorflow but maybe there are other conflicts or considerations. Thanks again, Jim James Vincent, PhD Bioinformatics Software Curator Dept. 
of BCMP, Harvard Medical School — BioGrids.org -- On Nov 18, 2021, 3:37 AM -0500, Nicolas Patry ***@***.***>, wrote: > Hi @james-vincent , seems like the error now lies in torch import. Do you mind sharing the version you're running ? Maybe simply updating those dependencies should work. > Cheers, > — > You are receiving this because you were mentioned. > Reply to this email directly, view it on GitHub, or unsubscribe. > Triage notifications on the go with GitHub Mobile for iOS or Android. <|||||>Hi @james-vincent I am not an expert in conda whatsoever, but yes, transformers being able to run EITHER `torch` or `tensorflow` or `jax` independently, there are no hard requirements for either so we don't depend on ANY single one (even though without any of those dependencies, the library use is going to be very limited). You can also use all of them at the same time if you so desire.<|||||>Closing this, feel free to reopen if something was missed.<|||||>> Hi @james-vincent , > > It seems the version of tensorflow you're using is not supported anymore (https://github.com/huggingface/transformers/blob/master/setup.py#L155) . You need at least TF 2.3 to use transformers. > > Are you able to upgrgade your dependency ? I met the same problem, and upgraded tf2.1 to tf2.3. It solved. Thank you. @Narsil
transformers
14,422
closed
Exporting `sentence-transformers/LaBSE` to ONNX leads to different output
## Environment info - `transformers` version: 4.6.1 - Platform: macOS-10.14.6-x86_64-i386-64bit - Python version: 3.9.7 - PyTorch version (GPU?): 1.10.0 (False) - Tensorflow version (GPU?): not installed (NA) - Using GPU in script?: no - Using distributed or parallel set-up in script?: no - onnx==1.10.2 - onnxruntime==1.9.0 ### Who can help @LysandreJik ## Information Model I am using `sentence-transformers/LaBSE` The problem arises when using: * [ ] the official example scripts: (give details below) * [x] my own modified scripts: I am trying to export a fine-tuned `sentence-transformers/LaBSE` to ONNX ## To reproduce Hi all, I have fine-tuned a `sentence-transformers/LaBSE` on a binary sentence classification task and I am now trying to export it to ONNX. However, when the output of the ONNX model do not match the ones from the PyTorch models. <details> <summary>Steps to reproduce the behavior:</summary> ```python import numpy as np import onnxruntime as rt import torch import torch.nn as nn import transformers from transformers import AutoTokenizer, AutoConfig, AutoModel from transformers import convert_graph_to_onnx class LabseForClassification(nn.Module): def __init__(self, config): super(LabseForClassification, self).__init__() self.config = config self.num_labels = config.num_labels self.labse = AutoModel.from_config(config) self.classifier = nn.Linear(768, config.num_labels) def forward(self, input_ids=None, attention_mask=None, token_type_ids=None): model_output = self.labse( input_ids=input_ids, token_type_ids=token_type_ids, attention_mask=attention_mask ) embeddings = model_output[1] embeddings = nn.functional.normalize(embeddings) logits = self.classifier(embeddings) return torch.softmax(logits, dim=-1) TEXT = """LOS ANGELES (KTLA) – In high school football, the talent gap between future pros and average Joes can be quite wide indeed, and lopsided scores are nothing new. But a 106-0 win? That will draw some attention. That was the score at the game between Inglewood High and Inglewood Morningside in Southern California over the weekend. Despite scoring 59 points in the first quarter alone, Inglewood High head coach Mil’Von James declined to play backups and was initially reticent to use a running clock to shorten the game, according to Inglewood Morningside football coach Brian Collins. Inglewood High even went for a two-point conversion pass, instead of the traditional one-point kick attempt, after scoring to take a triple-digit lead, which Collins told the Los Angeles Times was “a classless move.” “I told them, ‘Go play St. John Bosco and Mater Dei,'” Collins said in reference to two of the area’s powerhouse high schools that recently produced the starting quarterbacks at top-tier programs Clemson University and the University of Alabama. James has not responded to an email seeking comment on the game. In a statement provided to the Times’ Eric Sondheimer, the California Interscholastic Federation Southern Section, which governs most Southern California high school sports, said the 106-0 score “does not represent” the organization’s ideals of character. “The CIF-SS condemns, in the strongest terms, results such as these,” the statement read. Other high school coaches were similarly incensed. Matt Poston, head coach at Tesoro High School in Las Flores, said he hoped he was “reading this wrong” when he looked at the score. “We’re supposed to be teaching young men life lessons through the game. What message was this staff teaching last night? Sad,” Poston wrote on Twitter. 
Legendary basketball sportscaster Dick Vitale also weighed in on Twitter. Sportswriter Nick Harris highlighted some of the most eye-popping stats, calling the game “a beatdown for the ages.” While 106-0 is a score rarely seen at any level of football, it’s not the largest margin of victory. The most lopsided football score of all time is widely considered to be Georgia Tech’s 222-0 win over Cumberland in 1916, when Cumberland had discontinued its football program but was forced to play the game, putting together a squad of fraternity brothers and other students.""" ENTITY = "Inglewood" TEXT = TEXT.replace(ENTITY, "[MASK]") if __name__ == "__main__": tokenizer = AutoTokenizer.from_pretrained("sentence-transformers/LaBSE", use_fast=True) config = AutoConfig.from_pretrained("sentence-transformers/LaBSE", num_labels=2) model_raw = LabseForClassification(config) model_raw.load_state_dict(torch.load("outputs/SAL/pytorch_model.bin", map_location="cpu")) model_raw.eval() model_pipeline = transformers.Pipeline(model=model_raw, tokenizer=tokenizer) with torch.no_grad(): input_names, output_names, dynamic_axes, tokens = convert_graph_to_onnx.infer_shapes( model_pipeline, "pt" ) ordered_input_names, model_args = convert_graph_to_onnx.ensure_valid_input( model_pipeline.model, tokens, input_names ) del dynamic_axes["output_0"] # Delete unused output output_names = ["probs"] dynamic_axes["probs"] = {0: 'batch'} torch.onnx.export( model_raw, model_args, f="test.onnx", input_names=input_names, output_names=output_names, dynamic_axes=dynamic_axes, do_constant_folding=True, opset_version=12, ) sess = rt.InferenceSession("test.onnx") inputs_np = tokenizer(TEXT, return_tensors="np") probs_onnx = sess.run(None, { "input_ids": inputs_np["input_ids"], "attention_mask": inputs_np["attention_mask"], "token_type_ids": inputs_np["token_type_ids"] }) inputs = tokenizer(TEXT, return_tensors="pt") probs = model_raw(**inputs) assert np.allclose( probs_onnx[0].squeeze(), probs.squeeze().detach().numpy(), atol=1e-6, ) ``` </details> I'm getting this warning, which I suspect is the reason of the outputs divergence: ``` /Users/jules/Desktop/datanai/.venv/training/lib/python3.9/site-packages/transformers/modeling_utils.py:1967: TracerWarning: Converting a tensor to a Python boolean might cause the trace to be incorrect. We can't record the data flow of Python values, so this value will be treated as a constant in the future. This means that the trace might not generalize to other inputs! assert all( WARNING: The shape inference of prim::Constant type is missing, so it may result in wrong shape inference for the exported graph. Please consider adding it in symbolic function. WARNING: The shape inference of prim::Constant type is missing, so it may result in wrong shape inference for the exported graph. Please consider adding it in symbolic function. WARNING: The shape inference of prim::Constant type is missing, so it may result in wrong shape inference for the exported graph. Please consider adding it in symbolic function. ``` Please let me know if you need anything else from my side. Thanks for your help :)
11-16-2021 16:47:21
11-16-2021 16:47:21
cc @michaelbenayoun @mfuntowicz <|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread. Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored.<|||||>Hey guys any update on this one?
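One way to narrow the divergence down is to check whether it only appears for long inputs. Below is an illustrative diagnostic sketch, not a fix: it reuses `model_raw`, `tokenizer`, `TEXT` and the exported `test.onnx` from the report above and prints the maximum absolute difference between ONNX and PyTorch probabilities for a short sentence and for the long article. If only the long input diverges, the problem is more likely tied to sequence length or truncation than to the exported weights.
```python
import numpy as np
import onnxruntime as rt
import torch

# Diagnostic sketch reusing model_raw, tokenizer, TEXT and test.onnx from the
# report above: compare ONNX and PyTorch probabilities on a short and a long
# input instead of a hard allclose assert.
sess = rt.InferenceSession("test.onnx")

def max_abs_diff(text):
    np_inputs = tokenizer(text, return_tensors="np")
    onnx_probs = sess.run(None, {
        "input_ids": np_inputs["input_ids"],
        "attention_mask": np_inputs["attention_mask"],
        "token_type_ids": np_inputs["token_type_ids"],
    })[0]
    with torch.no_grad():
        pt_probs = model_raw(**tokenizer(text, return_tensors="pt")).numpy()
    return np.abs(onnx_probs - pt_probs).max()

print("short input:", max_abs_diff("A short test sentence."))
print("long input :", max_abs_diff(TEXT))
```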
transformers
14,421
closed
[Generation] Make `generate()` method compatible with speech and vision inputs while keeping 100% backward compatibility.
# 🚀 Feature request As mentioned by @NielsRogge and @LysandreJik [here](https://github.com/huggingface/transformers/pull/14139#discussion_r736788040), the `generate` function should be made more general by not assuming that the inputs are necessarily `input_ids`. We should also allow the inputs to be named `input_values`, `input_features`, or `pixel_values` to cope with speech and vision models while keeping 100% backward compatibility. @anton-l - if you want to dive a bit into `generate()`, feel free to give this a stab :-)
11-16-2021 15:41:59
11-16-2021 15:41:59
If you're too busy with other things - don't worry, I can take care of it in like 2 weeks.<|||||>Also as a note, the following hacks should then be removed: - [this hack](https://github.com/huggingface/transformers/blob/b567510cff606c9dd67cb7c56169bc596590e700/src/transformers/trainer_seq2seq.py#L169) in `Seq2SeqTrainer` - remove `attention_mask=None` inputs currently defined in the forward pass of `modeling_vit.py` and `modeling_deit.py`.<|||||>For the record, this ``` remove attention_mask=None inputs currently defined in the forward pass of modeling_vit.py and modeling_deit.py ``` is addressed in this PR #14148, which is still under review.<|||||>PR will be finished soon: https://github.com/huggingface/transformers/pull/14784
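As an illustration of the request (and not of the implementation that eventually landed), the core change amounts to keying the encoder inputs by whatever name the model declares as its main input rather than hard-coding `input_ids`. The `main_input_name` attribute used below is assumed for the sake of the sketch:
```python
# Minimal sketch of the idea behind the feature request: let generate() key
# its encoder inputs by the model's declared main input name, so that speech
# models (input_values / input_features) and vision models (pixel_values)
# can go through the same code path as text models.
def prepare_model_inputs(model, inputs, **model_kwargs):
    # Fall back to "input_ids" so existing text models keep working unchanged.
    input_name = getattr(model, "main_input_name", "input_ids")
    model_kwargs[input_name] = inputs
    return model_kwargs
```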
transformers
14,420
closed
Adding support for `hidden_states` and `attentions` in unbatching.
# What does this PR do? <!-- Congratulations! You've made it this far! You're not quite done yet though. Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution. Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change. Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost. --> <!-- Remove if not applicable --> Fixes #14414 ## Before submitting - [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case). - [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section? - [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link to it if that's the case. - [ ] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/master/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/master/docs#writing-source-documentation). - [ ] Did you write any new necessary tests? ## Who can review? Anyone in the community is free to review the PR once the tests have passed. Feel free to tag members/contributors who may be interested in your PR. <!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @ If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**. Please tag fewer than 3 people. Models: - albert, bert, xlm: @LysandreJik - blenderbot, bart, marian, pegasus, encoderdecoder, t5: @patrickvonplaten, @patil-suraj - longformer, reformer, transfoxl, xlnet: @patrickvonplaten - fsmt: @stas00 - funnel: @sgugger - gpt2: @patrickvonplaten, @LysandreJik - rag: @patrickvonplaten, @lhoestq - tensorflow: @LysandreJik Library: - benchmarks: @patrickvonplaten - deepspeed: @stas00 - ray/raytune: @richardliaw, @amogkam - text generation: @patrickvonplaten - tokenizers: @n1t0, @LysandreJik - trainer: @sgugger - pipelines: @LysandreJik Documentation: @sgugger HF projects: - datasets: [different repo](https://github.com/huggingface/datasets) - rust tokenizers: [different repo](https://github.com/huggingface/tokenizers) Examples: - maintained examples (not research project or legacy): @sgugger, @patil-suraj - research_projects/bert-loses-patience: @JetRunner - research_projects/distillation: @VictorSanh -->
11-16-2021 15:19:19
11-16-2021 15:19:19
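For context on what the PR title refers to, here is a minimal, self-contained sketch of un-batching nested outputs; it is illustrative only and not the pipelines code. Outputs such as `hidden_states` and `attentions` are tuples with one tensor per layer, each tensor keeping the batch dimension first, so splitting a batch means slicing every layer tensor per sample.
```python
import torch

# Illustrative sketch (not the actual pipelines implementation): split a
# batched output dict, where some values are per-layer tuples of tensors,
# into one dict per sample.
def unbatch(outputs: dict, batch_size: int):
    per_sample = [{} for _ in range(batch_size)]
    for key, value in outputs.items():
        if isinstance(value, torch.Tensor):
            for i in range(batch_size):
                per_sample[i][key] = value[i]
        elif isinstance(value, (tuple, list)):  # e.g. hidden_states, attentions
            for i in range(batch_size):
                per_sample[i][key] = tuple(layer[i] for layer in value)
        else:
            for i in range(batch_size):
                per_sample[i][key] = value
    return per_sample
```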
transformers
14,419
closed
Add forward method to dummy models
# What does this PR do? This PR adds a forward/call/__call__ method to the dummy models (depending on the framework) so that one can build the doc even if not all dependencies are present (torch-scatter in particular is annoying).
11-16-2021 14:00:56
11-16-2021 14:00:56
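The pattern described above can be illustrated with a small, made-up example (names and error messages are illustrative, not the actual transformers dummy objects): the dummy class is importable and introspectable by the documentation builder, but fails with a clear message the moment it is instantiated or called.
```python
# Illustrative sketch only: a placeholder model that lets the docs build
# without the optional backend, while still failing loudly at use time.
class DummyTapasModel:
    _required_backend = "torch-scatter"

    def __init__(self, *args, **kwargs):
        raise ImportError(
            f"{type(self).__name__} requires `{self._required_backend}` to be installed."
        )

    def __call__(self, *args, **kwargs):
        raise ImportError(
            f"{type(self).__name__} requires `{self._required_backend}` to be installed."
        )

    # Exposing forward/__call__ on the dummy means the doc build can inspect a
    # signature instead of hitting a missing attribute.
    forward = __call__
```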
transformers
14,418
closed
DebertaTokenizerFast from microsoft/deberta-base returns strange offset_mapping for Ġ-prefixed token
## Environment info - `transformers` version: 4.12.3 - Platform: Linux-5.14.17-301.fc35.x86_64-x86_64-with-debian-buster-sid - Python version: 3.7.10 - PyTorch version (GPU?): 1.9.0 (False) - Tensorflow version (GPU?): not installed (NA) - Flax version (CPU?/GPU?/TPU?): not installed (NA) - Jax version: not installed - JaxLib version: not installed - Using GPU in script?: No - Using distributed or parallel set-up in script?: No ### Who can help @sgugger @LysandreJik Model I am using: deberta-base (vs roberta-base) The problem arises when using: * [x] my own modified scripts: (see code in steps) The tasks I am working on is: * [x] my own task or dataset: (see code in steps) ## To reproduce Steps to reproduce the behavior: Default fast tokenizer for deberta returns supposedly incorrect ...(6, 10), (10, 13)... offsets mapping for Ġ prefixed token (thus making it "<whitespace>AG"), instead of expected ...(6, 10), (11, 13)... when compared with correct results (it's "AG") from fast roberta tokenizer. I assume that when initialized via AutoTokenizer with default parameters they are expected to return the same results without extra space. ` >>> import transformers >>> transformers.__version__ '4.12.3' >>> import tokenizers >>> tokenizers.__version__ '0.10.3' >>> from transformers import AutoTokenizer >>> text="EMPLOYMENT AGREEMENT" >>> dtsmall=AutoTokenizer.from_pretrained("microsoft/deberta-base", use_fast=True) Downloading: 100%|████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 52.0/52.0 [00:00<00:00, 88.8kB/s] Downloading: 100%|███████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 474/474 [00:00<00:00, 730kB/s] Downloading: 100%|█████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 878k/878k [00:01<00:00, 803kB/s] Downloading: 100%|█████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 446k/446k [00:00<00:00, 606kB/s] >>> print(dtsmall) PreTrainedTokenizerFast(name_or_path='microsoft/deberta-base', vocab_size=50265, model_max_len=512, is_fast=True, padding_side='right', special_tokens={'bos_token': AddedToken("[CLS]", rstrip=False, lstrip=False, single_word=False, normalized=True), 'eos_token': AddedToken("[SEP]", rstrip=False, lstrip=False, single_word=False, normalized=True), 'unk_token': AddedToken("[UNK]", rstrip=False, lstrip=False, single_word=False, normalized=True), 'sep_token': AddedToken("[SEP]", rstrip=False, lstrip=False, single_word=False, normalized=True), 'pad_token': AddedToken("[PAD]", rstrip=False, lstrip=False, single_word=False, normalized=True), 'cls_token': AddedToken("[CLS]", rstrip=False, lstrip=False, single_word=False, normalized=True), 'mask_token': AddedToken("[MASK]", rstrip=False, lstrip=True, single_word=False, normalized=True)}) >>> dtsmall(text, return_offsets_mapping=True) {'input_ids': [1, 5330, 7205, 20664, 12613, 5680, 4629, 20944, 2], 'token_type_ids': [0, 0, 0, 0, 0, 0, 0, 0, 0], 'attention_mask': [1, 1, 1, 1, 1, 1, 1, 1, 1], 'offset_mapping': [(0, 0), (0, 2), (2, 4), (4, 6), (6, 
10), (10, 13), (13, 15), (15, 20), (0, 0)]} >>> dtsmall.convert_ids_to_tokens(at(text)["input_ids"]) ['[CLS]', 'EM', 'PL', 'OY', 'MENT', 'ĠAG', 'RE', 'EMENT', '[SEP]'] >>> dtroberta=AutoTokenizer.from_pretrained("roberta-base", use_fast=True) Downloading: 100%|███████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 481/481 [00:00<00:00, 188kB/s] Downloading: 100%|█████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 878k/878k [00:01<00:00, 714kB/s] Downloading: 100%|█████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 446k/446k [00:00<00:00, 668kB/s] Downloading: 100%|███████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 1.29M/1.29M [00:02<00:00, 662kB/s] >>> print(dtroberta) PreTrainedTokenizerFast(name_or_path='roberta-base', vocab_size=50265, model_max_len=512, is_fast=True, padding_side='right', special_tokens={'bos_token': '<s>', 'eos_token': '</s>', 'unk_token': '<unk>', 'sep_token': '</s>', 'pad_token': '<pad>', 'cls_token': '<s>', 'mask_token': AddedToken("<mask>", rstrip=False, lstrip=True, single_word=False, normalized=False)}) >>> dtroberta(text, return_offsets_mapping=True) {'input_ids': [0, 5330, 7205, 20664, 12613, 5680, 4629, 20944, 2], 'attention_mask': [1, 1, 1, 1, 1, 1, 1, 1, 1], 'offset_mapping': [(0, 0), (0, 2), (2, 4), (4, 6), (6, 10), (11, 13), (13, 15), (15, 20), (0, 0)]} >>> dtroberta.convert_ids_to_tokens(at(text)["input_ids"]) ['<pad>', 'EM', 'PL', 'OY', 'MENT', 'ĠAG', 'RE', 'EMENT', '</s>'] `
11-16-2021 13:38:05
11-16-2021 13:38:05
This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread. Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored.
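A possible user-side workaround, assuming downstream code only needs whitespace-free character spans (this is a sketch, not a fix in transformers itself): trim leading whitespace from each offset pair against the original text, so the Ġ-prefixed token's `(10, 13)` span becomes `(11, 13)`, matching the RoBERTa tokenizer's output.
```python
# Sketch of a user-side workaround: strip leading whitespace from each
# (start, end) span in offset_mapping, so a token like "ĠAG" maps to "AG"
# rather than " AG". Special-token spans of (0, 0) are left untouched.
def trim_offsets(text, offset_mapping):
    trimmed = []
    for start, end in offset_mapping:
        while start < end and text[start].isspace():
            start += 1
        trimmed.append((start, end))
    return trimmed

text = "EMPLOYMENT AGREEMENT"
encoded = dtsmall(text, return_offsets_mapping=True)  # dtsmall as defined above
print(trim_offsets(text, encoded["offset_mapping"]))
# expected to include (6, 10), (11, 13), (13, 15) instead of (10, 13)
```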
transformers
14,417
closed
Errors while importing FlaxHybridCLIP checkpoints to FlaxCLIPModel or CLIPModel
## Environment info <!-- You can run the command `transformers-cli env` and copy-and-paste its output below. Don't forget to fill out the missing fields in that output! --> - `transformers` version: 4.12.2 - Platform: Linux-5.4.0-80-generic-x86_64-with-debian-buster-sid - Python version: 3.7.9 - PyTorch version (GPU?): 1.10.0+cu113 (True) - Tensorflow version (GPU?): 2.4.1 (True) - Flax version (CPU?/GPU?/TPU?): 0.3.6 (cpu) - Jax version: 0.2.25 - JaxLib version: 0.1.73 - Using GPU in script?: no - Using distributed or parallel set-up in script?: no ### Who can help <!-- Your issue will be replied to more quickly if you can figure out the right person to tag with @ If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**. Please tag fewer than 3 people. Models: - ALBERT, BERT, XLM, DeBERTa, DeBERTa-v2, ELECTRA, MobileBert, SqueezeBert: @LysandreJik - encoder-decoder models (For example, BlenderBot, BART, Marian, Pegasus, T5, ByT5): @patrickvonplaten, @patil-suraj - Longformer, Reformer, TransfoXL, XLNet, FNet: @patrickvonplaten - FSMT: @stas00 - Funnel: @sgugger - GPT-2, GPT: @patrickvonplaten, @LysandreJik - RAG, DPR: @patrickvonplaten, @lhoestq - TensorFlow: @Rocketknight1 - JAX/Flax: @patil-suraj @patrickvonplaten - TAPAS, LayoutLM, LayoutLMv2, LUKE, ViT, BEiT, DEiT, DETR, CANINE: @NielsRogge - GPT-Neo, GPT-J, CLIP: @patil-suraj - Wav2Vec2, HuBERT, SpeechEncoderDecoder: @patrickvonplaten, @anton-l If the model isn't in the list, ping @LysandreJik who will redirect you to the correct contributor. Library: - Benchmarks: @patrickvonplaten - Deepspeed: @stas00 - Ray/raytune: @richardliaw, @amogkam - Text generation: @patrickvonplaten - Tokenizers: @LysandreJik - Trainer: @sgugger - Pipelines: @Narsil - Speech: @patrickvonplaten, @anton-l - Vision: @NielsRogge, @sgugger Documentation: @sgugger Model hub: - for issues with a model, report at https://discuss.huggingface.co/ and tag the model's creator. HF projects: - datasets: [different repo](https://github.com/huggingface/datasets) - rust tokenizers: [different repo](https://github.com/huggingface/tokenizers) Examples: - maintained examples (not research project or legacy): @sgugger, @patil-suraj For research projetcs, please ping the contributor directly. For example, on the following projects: - research_projects/bert-loses-patience: @JetRunner - research_projects/distillation: @VictorSanh --> @patil-suraj @patrickvonplaten ## Information During the last Flax/JAX Community Week we trained a fine-tuned version of [CLIP for the Italian language](https://github.com/clip-italian/clip-italian). We used the [provided script](https://github.com/clip-italian/clip-italian/tree/master/hybrid_clip), so we trained a *FlaxHybridCLIP* model with Open AI's ViT and `"dbmdz/bert-base-italian-xxl-uncased"` BERT as encoders. Now, I'm trying to use that model with the transformers' official API classes, either FlaxCLIPModel or CLIPModel (my final goal would be to port it to pytorch and publish it to the hub). However, I am having a hard time loading our weights into any of the two. I tried different workarounds (see below) but none of them seems to be working. ## To reproduce I assume these imports ```python from modeling_hybrid_clip import FlaxHybridCLIP from configuration_hybrid_clip import HybridCLIPConfig from transformers import CLIPModel, CLIPConfig, FlaxCLIPModel, CLIPVisionConfig, CLIPTextConfig import jax import jax.numpy as jnp ``` Steps to reproduce the behavior: 1. 
My first tests were: ```python model = FlaxCLIPModel.from_pretrained("clip-italian/clip-italian") # or model = CLIPModel.from_pretrained("clip-italian/clip-italian", from_flax=True) # Output You are using a model of type hybrid-clip to instantiate a model of type clip. This is not supported for all configurations of models and can yield errors. INFO:absl:Unable to initialize backend 'tpu_driver': NOT_FOUND: Unable to find driver in registry given worker: INFO:absl:Unable to initialize backend 'gpu': NOT_FOUND: Could not find registered platform with name: "cuda". Available platform names are: Interpreter Host INFO:absl:Unable to initialize backend 'tpu': INVALID_ARGUMENT: TpuPlatform is not available. WARNING:absl:No GPU/TPU found, falling back to CPU. (Set TF_CPP_MIN_LOG_LEVEL=0 and rerun for more info.) --------------------------------------------------------------------------- ValueError Traceback (most recent call last) <ipython-input-3-52c9dab549d0> in <module> ----> 1 model = FlaxCLIPModel.from_pretrained("clip-italian/clip-italian") ~/venvs/unbias_venv/lib/python3.7/site-packages/transformers/modeling_flax_utils.py in from_pretrained(cls, pretrained_model_name_or_path, dtype, *model_args, **kwargs) 402 else: 403 raise ValueError( --> 404 f"Trying to load the pretrained weight for {key} failed: checkpoint has shape " 405 f"{state[key].shape} which is incompatible with the model shape {random_state[key].shape}. " 406 "Using `ignore_mismatched_sizes=True` if you really want to load this checkpoint inside this " ValueError: Trying to load the pretrained weight for ('text_projection', 'kernel') failed: checkpoint has shape (768, 512) which is incompatible with the model shape (512, 512). Using `ignore_mismatched_sizes=True` if you really want to load this checkpoint inside this model. ``` but for both of them, I got inconsistent shapes for the text_projection dense layer (it is expected to be (512,512) but BERT has hidden size 768, so in our checkpoints it is (768,512)). If I try to ignore the mismatched shapes it seems to be working, but I think that many of the weights from the checkpoint are not used: ```python model = FlaxCLIPModel.from_pretrained("clip-italian/clip-italian", ignore_mismatched_sizes=True) # Output You are using a model of type hybrid-clip to instantiate a model of type clip. This is not supported for all configurations of models and can yield errors. 
Some weights of the model checkpoint at clip-italian/clip-italian were not used when initializing FlaxCLIPModel: {('vision_model', 'vision_model', 'encoder', 'layers', '4', 'self_attn', 'out_proj', 'bias'), ('vision_model', 'vision_model', 'encoder', 'layers', '9', 'self_attn', 'out_proj', 'bias'), ('text_model', 'encoder', 'layer', '7', 'attention', 'output', 'LayerNorm', 'scale'), ('text_model', 'embeddings', 'token_type_embeddings', 'embedding'), ('vision_model', 'vision_model', 'encoder', 'layers', '2', 'layer_norm2', 'scale'), ('vision_model', 'vision_model', 'embeddings', 'patch_embedding', 'kernel'), ('text_model', 'encoder', 'layer', '3', 'attention', 'output', 'dense', 'kernel'), ('text_model', 'encoder', 'layer', '10', 'attention', 'self', 'key', 'bias'), ('vision_model', 'vision_model', 'post_layernorm', 'bias'), ('text_model', 'encoder', 'layer', '0', 'intermediate', 'dense', 'bias'), ('vision_model', 'vision_model', 'encoder', 'layers', '3', 'layer_norm2', 'scale'), ('text_model', 'encoder', 'layer', '2', 'intermediate', 'dense', 'bias'), ('vision_model', 'vision_model', 'encoder', 'layers', '3', 'self_attn', 'v_proj', 'kernel'), ('text_model', 'encoder', 'layer', '8', 'attention', 'self', 'value', 'kernel'), ('text_model', 'encoder', 'layer', '5', 'attention', 'output', 'LayerNorm', 'scale'), ('text_model', 'encoder', 'layer', '4', 'attention', 'output', 'LayerNorm', 'bias'), ('text_model', 'encoder', 'layer', '8', 'output', 'dense', 'bias'), ('vision_model', 'vision_model', 'pre_layrnorm', 'scale'), ('text_model', 'encoder', 'layer', '4', 'output', 'LayerNorm', 'bias'), ('vision_model', 'vision_model', 'encoder', 'layers', '8', 'mlp', 'fc2', 'kernel'), ('vision_model', 'vision_model', 'encoder', 'layers', '1', 'self_attn', 'k_proj', 'kernel'), ('vision_model', 'vision_model', 'encoder', 'layers', '10', 'self_attn', 'out_proj', 'bias'), ('text_model', 'encoder', 'layer', '7', 'attention', 'output', 'dense', 'kernel'), ('vision_model', 'vision_model', 'embeddings', 'position_embedding', 'embedding'), ('text_model', 'encoder', 'layer', '2', 'intermediate', 'dense', 'kernel'), ('vision_model', 'vision_model', 'encoder', 'layers', '1', 'self_attn', 'k_proj', 'bias'), ('vision_model', 'vision_model', 'encoder', 'layers', '9', 'self_attn', 'k_proj', 'kernel'), ('vision_model', 'vision_model', 'encoder', 'layers', '5', 'self_attn', 'v_proj', 'kernel'), ('vision_model', 'vision_model', 'encoder', 'layers', '9', 'layer_norm1', 'scale'), ('text_model', 'encoder', 'layer', '2', 'attention', 'self', 'query', 'bias'), ('vision_model', 'vision_model', 'encoder', 'layers', '0', 'mlp', 'fc1', 'kernel'), ('text_model', 'embeddings', 'position_embeddings', 'embedding'), ('text_model', 'embeddings', 'LayerNorm', 'scale'), ('vision_model', 'vision_model', 'encoder', 'layers', '9', 'mlp', 'fc2', 'kernel'), ('text_model', 'encoder', 'layer', '9', 'attention', 'output', 'LayerNorm', 'scale'), ('vision_model', 'vision_model', 'encoder', 'layers', '9', 'self_attn', 'q_proj', 'bias'), ('text_model', 'encoder', 'layer', '4', 'attention', 'output', 'dense', 'kernel'), ('vision_model', 'vision_model', 'encoder', 'layers', '10', 'self_attn', 'out_proj', 'kernel'), ('text_model', 'encoder', 'layer', '10', 'intermediate', 'dense', 'bias'), ('vision_model', 'vision_model', 'encoder', 'layers', '9', 'self_attn', 'v_proj', 'bias'), ('text_model', 'encoder', 'layer', '6', 'output', 'LayerNorm', 'scale'), ('text_model', 'encoder', 'layer', '6', 'attention', 'output', 'LayerNorm', 'scale'), ('text_model', 
'encoder', 'layer', '5', 'attention', 'output', 'LayerNorm', 'bias'), ('text_model', 'encoder', 'layer', '1', 'output', 'dense', 'bias'), ('vision_model', 'vision_model', 'encoder', 'layers', '0', 'layer_norm1', 'bias'), ('vision_model', 'vision_model', 'encoder', 'layers', '1', 'layer_norm1', 'scale'), ('vision_model', 'vision_model', 'encoder', 'layers', '3', 'self_attn', 'v_proj', 'bias'), ('text_model', 'encoder', 'layer', '8', 'attention', 'output', 'LayerNorm', 'bias'), ('vision_model', 'vision_model', 'encoder', 'layers', '3', 'mlp', 'fc2', 'bias'), ('vision_model', 'vision_model', 'encoder', 'layers', '6', 'mlp', 'fc1', 'bias'), ('text_model', 'encoder', 'layer', '0', 'output', 'LayerNorm', 'bias'), ('vision_model', 'vision_model', 'encoder', 'layers', '3', 'self_attn', 'k_proj', 'kernel'), ('text_model', 'encoder', 'layer', '7', 'attention', 'self', 'key', 'kernel'), ('text_model', 'encoder', 'layer', '9', 'attention', 'self', 'query', 'bias'), ('text_model', 'encoder', 'layer', '10', 'attention', 'self', 'query', 'kernel'), ('vision_model', 'vision_model', 'encoder', 'layers', '5', 'self_attn', 'q_proj', 'kernel'), ('vision_model', 'vision_model', 'encoder', 'layers', '8', 'mlp', 'fc1', 'bias'), ('vision_model', 'vision_model', 'encoder', 'layers', '7', 'mlp', 'fc2', 'bias'), ('text_model', 'encoder', 'layer', '10', 'attention', 'output', 'LayerNorm', 'scale'), ('vision_model', 'vision_model', 'encoder', 'layers', '8', 'self_attn', 'out_proj', 'kernel'), ('text_model', 'encoder', 'layer', '5', 'attention', 'self', 'value', 'kernel'), ('text_model', 'encoder', 'layer', '3', 'output', 'dense', 'bias'), ('vision_model', 'vision_model', 'encoder', 'layers', '2', 'self_attn', 'v_proj', 'kernel'), ('text_model', 'encoder', 'layer', '0', 'attention', 'self', 'query', 'kernel'), ('vision_model', 'vision_model', 'encoder', 'layers', '4', 'layer_norm1', 'scale'), ('vision_model', 'vision_model', 'encoder', 'layers', '11', 'layer_norm2', 'scale'), ('vision_model', 'vision_model', 'encoder', 'layers', '3', 'self_attn', 'q_proj', 'kernel'), ('text_model', 'encoder', 'layer', '7', 'attention', 'self', 'key', 'bias'), ('text_model', 'encoder', 'layer', '4', 'intermediate', 'dense', 'kernel'), ('vision_model', 'vision_model', 'encoder', 'layers', '11', 'self_attn', 'k_proj', 'bias'), ('text_model', 'encoder', 'layer', '7', 'attention', 'self', 'query', 'bias'), ('text_model', 'encoder', 'layer', '8', 'output', 'LayerNorm', 'bias'), ('vision_model', 'vision_model', 'encoder', 'layers', '1', 'self_attn', 'q_proj', 'kernel'), ('text_model', 'encoder', 'layer', '4', 'attention', 'self', 'value', 'kernel'), ('text_model', 'encoder', 'layer', '10', 'attention', 'self', 'value', 'bias'), ('text_model', 'encoder', 'layer', '9', 'intermediate', 'dense', 'bias'), ('vision_model', 'vision_model', 'encoder', 'layers', '6', 'self_attn', 'out_proj', 'bias'), ('vision_model', 'vision_model', 'encoder', 'layers', '7', 'mlp', 'fc1', 'kernel'), ('text_model', 'encoder', 'layer', '2', 'attention', 'self', 'value', 'bias'), ('vision_model', 'vision_model', 'encoder', 'layers', '11', 'self_attn', 'v_proj', 'kernel'), ('vision_model', 'vision_model', 'encoder', 'layers', '6', 'mlp', 'fc2', 'bias'), ('text_model', 'encoder', 'layer', '0', 'output', 'dense', 'kernel'), ('vision_model', 'vision_model', 'encoder', 'layers', '8', 'layer_norm1', 'bias'), ('vision_model', 'vision_model', 'pre_layrnorm', 'bias'), ('vision_model', 'vision_model', 'encoder', 'layers', '7', 'layer_norm1', 'bias'), ('vision_model', 
'vision_model', 'encoder', 'layers', '11', 'self_attn', 'q_proj', 'bias'), ('vision_model', 'vision_model', 'encoder', 'layers', '4', 'self_attn', 'k_proj', 'kernel'), ('text_model', 'encoder', 'layer', '8', 'attention', 'output', 'dense', 'kernel'), ('text_model', 'encoder', 'layer', '6', 'output', 'dense', 'bias'), ('text_model', 'encoder', 'layer', '0', 'attention', 'output', 'dense', 'kernel'), ('vision_model', 'vision_model', 'encoder', 'layers', '4', 'mlp', 'fc2', 'kernel'), ('vision_model', 'vision_model', 'encoder', 'layers', '9', 'mlp', 'fc1', 'bias'), ('text_model', 'encoder', 'layer', '6', 'attention', 'self', 'key', 'bias'), ('vision_model', 'vision_model', 'encoder', 'layers', '8', 'self_attn', 'k_proj', 'bias'), ('text_model', 'encoder', 'layer', '8', 'output', 'dense', 'kernel'), ('text_model', 'encoder', 'layer', '2', 'attention', 'output', 'dense', 'kernel'), ('text_model', 'encoder', 'layer', '0', 'attention', 'self', 'key', 'kernel'), ('text_model', 'encoder', 'layer', '11', 'attention', 'output', 'dense', 'bias'), ('text_model', 'encoder', 'layer', '0', 'output', 'dense', 'bias'), ('vision_model', 'vision_model', 'encoder', 'layers', '2', 'mlp', 'fc2', 'bias'), ('text_model', 'encoder', 'layer', '7', 'output', 'dense', 'bias'), ('text_model', 'encoder', 'layer', '9', 'intermediate', 'dense', 'kernel'), ('text_model', 'encoder', 'layer', '5', 'output', 'LayerNorm', 'scale'), ('vision_model', 'vision_model', 'encoder', 'layers', '4', 'layer_norm2', 'bias'), ('vision_model', 'vision_model', 'encoder', 'layers', '8', 'layer_norm2', 'bias'), ('text_model', 'encoder', 'layer', '10', 'attention', 'output', 'dense', 'kernel'), ('text_model', 'encoder', 'layer', '6', 'attention', 'output', 'LayerNorm', 'bias'), ('text_model', 'encoder', 'layer', '1', 'attention', 'output', 'LayerNorm', 'scale'), ('vision_model', 'vision_model', 'encoder', 'layers', '10', 'self_attn', 'v_proj', 'bias'), ('vision_model', 'vision_model', 'encoder', 'layers', '5', 'self_attn', 'out_proj', 'kernel'), ('vision_model', 'vision_model', 'encoder', 'layers', '1', 'mlp', 'fc2', 'kernel'), ('vision_model', 'vision_model', 'encoder', 'layers', '4', 'self_attn', 'out_proj', 'kernel'), ('vision_model', 'vision_model', 'encoder', 'layers', '7', 'self_attn', 'out_proj', 'kernel'), ('vision_model', 'vision_model', 'encoder', 'layers', '11', 'self_attn', 'out_proj', 'bias'), ('vision_model', 'vision_model', 'encoder', 'layers', '3', 'mlp', 'fc1', 'bias'), ('vision_model', 'vision_model', 'encoder', 'layers', '11', 'layer_norm2', 'bias'), ('vision_model', 'vision_model', 'encoder', 'layers', '6', 'layer_norm1', 'bias'), ('vision_model', 'vision_model', 'encoder', 'layers', '7', 'self_attn', 'k_proj', 'bias'), ('vision_model', 'vision_model', 'encoder', 'layers', '4', 'mlp', 'fc1', 'bias'), ('text_model', 'encoder', 'layer', '6', 'attention', 'self', 'query', 'kernel'), ('text_model', 'encoder', 'layer', '0', 'attention', 'output', 'LayerNorm', 'scale'), ('text_model', 'encoder', 'layer', '5', 'attention', 'self', 'value', 'bias'), ('text_model', 'encoder', 'layer', '1', 'output', 'dense', 'kernel'), ('vision_model', 'vision_model', 'encoder', 'layers', '9', 'layer_norm2', 'scale'), ('vision_model', 'vision_model', 'encoder', 'layers', '9', 'self_attn', 'q_proj', 'kernel'), ('vision_model', 'vision_model', 'encoder', 'layers', '9', 'mlp', 'fc2', 'bias'), ('text_model', 'encoder', 'layer', '7', 'output', 'LayerNorm', 'scale'), ('text_model', 'encoder', 'layer', '11', 'attention', 'self', 'query', 'bias'), 
('text_model', 'encoder', 'layer', '5', 'output', 'LayerNorm', 'bias'), ('vision_model', 'vision_model', 'encoder', 'layers', '4', 'layer_norm2', 'scale'), ('vision_model', 'vision_model', 'encoder', 'layers', '0', 'mlp', 'fc2', 'kernel'), ('vision_model', 'vision_model', 'encoder', 'layers', '10', 'layer_norm2', 'bias'), ('vision_model', 'vision_model', 'encoder', 'layers', '0', 'self_attn', 'out_proj', 'kernel'), ('vision_model', 'vision_model', 'encoder', 'layers', '2', 'mlp', 'fc2', 'kernel'), ('vision_model', 'vision_model', 'encoder', 'layers', '2', 'self_attn', 'out_proj', 'kernel'), ('text_model', 'encoder', 'layer', '4', 'output', 'dense', 'bias'), ('text_model', 'encoder', 'layer', '9', 'attention', 'output', 'dense', 'kernel'), ('text_model', 'encoder', 'layer', '7', 'attention', 'output', 'dense', 'bias'), ('text_model', 'encoder', 'layer', '1', 'attention', 'self', 'value', 'bias'), ('vision_model', 'vision_model', 'encoder', 'layers', '5', 'mlp', 'fc2', 'kernel'), ('text_model', 'encoder', 'layer', '7', 'intermediate', 'dense', 'bias'), ('text_model', 'encoder', 'layer', '1', 'attention', 'output', 'LayerNorm', 'bias'), ('vision_model', 'vision_model', 'encoder', 'layers', '6', 'layer_norm1', 'scale'), ('text_model', 'encoder', 'layer', '9', 'attention', 'output', 'dense', 'bias'), ('text_model', 'encoder', 'layer', '4', 'attention', 'self', 'key', 'bias'), ('text_model', 'encoder', 'layer', '7', 'attention', 'output', 'LayerNorm', 'bias'), ('vision_model', 'vision_model', 'encoder', 'layers', '0', 'self_attn', 'out_proj', 'bias'), ('vision_model', 'vision_model', 'embeddings', 'class_embedding'), ('text_model', 'encoder', 'layer', '4', 'intermediate', 'dense', 'bias'), ('vision_model', 'vision_model', 'encoder', 'layers', '5', 'mlp', 'fc2', 'bias'), ('text_model', 'encoder', 'layer', '5', 'attention', 'output', 'dense', 'kernel'), ('vision_model', 'vision_model', 'encoder', 'layers', '10', 'self_attn', 'q_proj', 'kernel'), ('text_model', 'encoder', 'layer', '11', 'attention', 'self', 'value', 'kernel'), ('text_model', 'encoder', 'layer', '4', 'attention', 'self', 'query', 'bias'), ('vision_model', 'vision_model', 'encoder', 'layers', '8', 'layer_norm1', 'scale'), ('text_model', 'encoder', 'layer', '3', 'attention', 'self', 'query', 'bias'), ('vision_model', 'vision_model', 'encoder', 'layers', '2', 'mlp', 'fc1', 'bias'), ('vision_model', 'vision_model', 'encoder', 'layers', '3', 'self_attn', 'out_proj', 'kernel'), ('text_model', 'encoder', 'layer', '5', 'attention', 'self', 'key', 'bias'), ('vision_model', 'vision_model', 'encoder', 'layers', '4', 'mlp', 'fc1', 'kernel'), ('text_model', 'encoder', 'layer', '2', 'attention', 'self', 'value', 'kernel'), ('vision_model', 'vision_model', 'encoder', 'layers', '6', 'mlp', 'fc2', 'kernel'), ('text_model', 'encoder', 'layer', '0', 'attention', 'output', 'LayerNorm', 'bias'), ('text_model', 'encoder', 'layer', '4', 'attention', 'self', 'value', 'bias'), ('vision_model', 'vision_model', 'encoder', 'layers', '7', 'mlp', 'fc1', 'bias'), ('vision_model', 'vision_model', 'encoder', 'layers', '10', 'self_attn', 'k_proj', 'kernel'), ('vision_model', 'vision_model', 'encoder', 'layers', '7', 'self_attn', 'q_proj', 'kernel'), ('text_model', 'encoder', 'layer', '1', 'attention', 'self', 'query', 'bias'), ('text_model', 'encoder', 'layer', '2', 'output', 'dense', 'kernel'), ('vision_model', 'vision_model', 'encoder', 'layers', '10', 'mlp', 'fc2', 'kernel'), ('vision_model', 'vision_model', 'encoder', 'layers', '1', 'self_attn', 'v_proj', 
'kernel'), ('vision_model', 'vision_model', 'encoder', 'layers', '7', 'self_attn', 'k_proj', 'kernel'), ('text_model', 'encoder', 'layer', '6', 'intermediate', 'dense', 'bias'), ('vision_model', 'vision_model', 'encoder', 'layers', '3', 'self_attn', 'out_proj', 'bias'), ('vision_model', 'vision_model', 'encoder', 'layers', '11', 'layer_norm1', 'scale'), ('vision_model', 'vision_model', 'encoder', 'layers', '1', 'self_attn', 'v_proj', 'bias'), ('vision_model', 'vision_model', 'encoder', 'layers', '9', 'self_attn', 'out_proj', 'kernel'), ('text_model', 'encoder', 'layer', '1', 'intermediate', 'dense', 'kernel'), ('text_model', 'encoder', 'layer', '8', 'attention', 'self', 'query', 'kernel'), ('text_model', 'encoder', 'layer', '7', 'intermediate', 'dense', 'kernel'), ('text_model', 'encoder', 'layer', '5', 'attention', 'self', 'key', 'kernel'), ('text_model', 'encoder', 'layer', '5', 'attention', 'output', 'dense', 'bias'), ('vision_model', 'vision_model', 'encoder', 'layers', '10', 'layer_norm1', 'bias'), ('vision_model', 'vision_model', 'encoder', 'layers', '8', 'self_attn', 'v_proj', 'bias'), ('text_model', 'encoder', 'layer', '4', 'attention', 'self', 'key', 'kernel'), ('vision_model', 'vision_model', 'encoder', 'layers', '8', 'self_attn', 'k_proj', 'kernel'), ('text_model', 'encoder', 'layer', '9', 'attention', 'self', 'value', 'bias'), ('text_model', 'encoder', 'layer', '0', 'intermediate', 'dense', 'kernel'), ('vision_model', 'vision_model', 'encoder', 'layers', '2', 'self_attn', 'q_proj', 'kernel'), ('vision_model', 'vision_model', 'encoder', 'layers', '6', 'self_attn', 'k_proj', 'kernel'), ('text_model', 'encoder', 'layer', '3', 'intermediate', 'dense', 'bias'), ('vision_model', 'vision_model', 'encoder', 'layers', '0', 'layer_norm2', 'scale'), ('vision_model', 'vision_model', 'encoder', 'layers', '0', 'self_attn', 'k_proj', 'kernel'), ('vision_model', 'vision_model', 'encoder', 'layers', '3', 'layer_norm1', 'scale'), ('text_model', 'encoder', 'layer', '6', 'output', 'dense', 'kernel'), ('vision_model', 'vision_model', 'encoder', 'layers', '6', 'layer_norm2', 'bias'), ('vision_model', 'vision_model', 'encoder', 'layers', '8', 'self_attn', 'q_proj', 'kernel'), ('vision_model', 'vision_model', 'encoder', 'layers', '8', 'self_attn', 'v_proj', 'kernel'), ('text_model', 'encoder', 'layer', '9', 'attention', 'self', 'key', 'bias'), ('vision_model', 'vision_model', 'encoder', 'layers', '2', 'layer_norm1', 'scale'), ('text_model', 'encoder', 'layer', '2', 'output', 'LayerNorm', 'scale'), ('text_model', 'encoder', 'layer', '3', 'attention', 'self', 'value', 'bias'), ('text_model', 'encoder', 'layer', '9', 'output', 'dense', 'bias'), ('text_model', 'encoder', 'layer', '8', 'attention', 'self', 'value', 'bias'), ('text_model', 'encoder', 'layer', '1', 'output', 'LayerNorm', 'scale'), ('vision_model', 'vision_model', 'encoder', 'layers', '7', 'self_attn', 'v_proj', 'bias'), ('text_model', 'encoder', 'layer', '3', 'output', 'LayerNorm', 'bias'), ('text_model', 'embeddings', 'LayerNorm', 'bias'), ('text_model', 'encoder', 'layer', '6', 'intermediate', 'dense', 'kernel'), ('vision_model', 'vision_model', 'encoder', 'layers', '5', 'layer_norm1', 'scale'), ('vision_model', 'vision_model', 'encoder', 'layers', '4', 'self_attn', 'q_proj', 'kernel'), ('text_model', 'encoder', 'layer', '3', 'attention', 'output', 'dense', 'bias'), ('text_model', 'encoder', 'layer', '6', 'attention', 'self', 'value', 'kernel'), ('vision_model', 'vision_model', 'encoder', 'layers', '10', 'self_attn', 'v_proj', 'kernel'), 
('text_model', 'encoder', 'layer', '5', 'intermediate', 'dense', 'kernel'), ('text_model', 'encoder', 'layer', '2', 'output', 'LayerNorm', 'bias'), ('vision_model', 'vision_model', 'encoder', 'layers', '8', 'layer_norm2', 'scale'), ('vision_model', 'vision_model', 'encoder', 'layers', '3', 'mlp', 'fc1', 'kernel'), ('vision_model', 'vision_model', 'encoder', 'layers', '7', 'mlp', 'fc2', 'kernel'), ('vision_model', 'vision_model', 'encoder', 'layers', '7', 'self_attn', 'out_proj', 'bias'), ('text_model', 'encoder', 'layer', '1', 'attention', 'self', 'key', 'kernel'), ('vision_model', 'vision_model', 'encoder', 'layers', '0', 'self_attn', 'q_proj', 'bias'), ('vision_model', 'vision_model', 'encoder', 'layers', '6', 'layer_norm2', 'scale'), ('vision_model', 'vision_model', 'encoder', 'layers', '7', 'self_attn', 'q_proj', 'bias'), ('vision_model', 'vision_model', 'encoder', 'layers', '6', 'self_attn', 'q_proj', 'kernel'), ('vision_model', 'vision_model', 'encoder', 'layers', '3', 'self_attn', 'q_proj', 'bias'), ('text_model', 'encoder', 'layer', '8', 'attention', 'output', 'LayerNorm', 'scale'), ('text_model', 'encoder', 'layer', '11', 'intermediate', 'dense', 'kernel'), ('vision_model', 'vision_model', 'encoder', 'layers', '2', 'self_attn', 'v_proj', 'bias'), ('vision_model', 'vision_model', 'encoder', 'layers', '3', 'mlp', 'fc2', 'kernel'), ('vision_model', 'vision_model', 'encoder', 'layers', '1', 'mlp', 'fc1', 'kernel'), ('vision_model', 'vision_model', 'encoder', 'layers', '11', 'mlp', 'fc2', 'kernel'), ('text_model', 'encoder', 'layer', '7', 'output', 'dense', 'kernel'), ('text_model', 'encoder', 'layer', '10', 'attention', 'self', 'query', 'bias'), ('vision_model', 'vision_model', 'encoder', 'layers', '5', 'self_attn', 'q_proj', 'bias'), ('text_model', 'encoder', 'layer', '3', 'output', 'LayerNorm', 'scale'), ('text_model', 'encoder', 'layer', '3', 'intermediate', 'dense', 'kernel'), ('vision_model', 'vision_model', 'encoder', 'layers', '2', 'self_attn', 'k_proj', 'kernel'), ('text_model', 'encoder', 'layer', '3', 'attention', 'output', 'LayerNorm', 'bias'), ('vision_model', 'vision_model', 'encoder', 'layers', '1', 'layer_norm2', 'scale'), ('vision_model', 'vision_model', 'encoder', 'layers', '9', 'layer_norm2', 'bias'), ('text_model', 'pooler', 'dense', 'bias'), ('text_model', 'encoder', 'layer', '2', 'attention', 'output', 'LayerNorm', 'bias'), ('text_model', 'encoder', 'layer', '4', 'attention', 'output', 'dense', 'bias'), ('vision_model', 'vision_model', 'encoder', 'layers', '0', 'mlp', 'fc1', 'bias'), ('vision_model', 'vision_model', 'encoder', 'layers', '2', 'self_attn', 'k_proj', 'bias'), ('vision_model', 'vision_model', 'encoder', 'layers', '8', 'mlp', 'fc2', 'bias'), ('vision_model', 'vision_model', 'encoder', 'layers', '6', 'self_attn', 'v_proj', 'kernel'), ('text_model', 'encoder', 'layer', '2', 'attention', 'output', 'dense', 'bias'), ('vision_model', 'vision_model', 'encoder', 'layers', '9', 'mlp', 'fc1', 'kernel'), ('text_model', 'encoder', 'layer', '4', 'attention', 'output', 'LayerNorm', 'scale'), ('text_model', 'encoder', 'layer', '10', 'attention', 'self', 'key', 'kernel'), ('vision_model', 'vision_model', 'encoder', 'layers', '11', 'mlp', 'fc1', 'bias'), ('vision_model', 'vision_model', 'encoder', 'layers', '3', 'layer_norm2', 'bias'), ('text_model', 'encoder', 'layer', '7', 'output', 'LayerNorm', 'bias'), ('vision_model', 'vision_model', 'encoder', 'layers', '5', 'layer_norm2', 'bias'), ('text_model', 'encoder', 'layer', '0', 'attention', 'self', 'query', 'bias'), 
('vision_model', 'vision_model', 'encoder', 'layers', '11', 'mlp', 'fc2', 'bias'), ('text_model', 'encoder', 'layer', '7', 'attention', 'self', 'value', 'kernel'), ('vision_model', 'vision_model', 'encoder', 'layers', '6', 'self_attn', 'q_proj', 'bias'), ('text_model', 'encoder', 'layer', '1', 'attention', 'self', 'query', 'kernel'), ('vision_model', 'vision_model', 'encoder', 'layers', '10', 'mlp', 'fc2', 'bias'), ('text_model', 'pooler', 'dense', 'kernel'), ('text_model', 'encoder', 'layer', '7', 'attention', 'self', 'query', 'kernel'), ('text_model', 'encoder', 'layer', '1', 'attention', 'self', 'key', 'bias'), ('vision_model', 'vision_model', 'encoder', 'layers', '0', 'self_attn', 'q_proj', 'kernel'), ('text_model', 'encoder', 'layer', '1', 'output', 'LayerNorm', 'bias'), ('text_model', 'encoder', 'layer', '7', 'attention', 'self', 'value', 'bias'), ('vision_model', 'vision_model', 'encoder', 'layers', '10', 'mlp', 'fc1', 'kernel'), ('text_model', 'encoder', 'layer', '9', 'output', 'dense', 'kernel'), ('vision_model', 'vision_model', 'encoder', 'layers', '7', 'self_attn', 'v_proj', 'kernel'), ('vision_model', 'vision_model', 'encoder', 'layers', '8', 'mlp', 'fc1', 'kernel'), ('text_model', 'encoder', 'layer', '11', 'attention', 'self', 'value', 'bias'), ('text_model', 'encoder', 'layer', '4', 'attention', 'self', 'query', 'kernel'), ('text_model', 'encoder', 'layer', '6', 'attention', 'self', 'value', 'bias'), ('text_model', 'encoder', 'layer', '9', 'attention', 'self', 'query', 'kernel'), ('vision_model', 'vision_model', 'encoder', 'layers', '4', 'self_attn', 'q_proj', 'bias'), ('vision_model', 'vision_model', 'encoder', 'layers', '2', 'mlp', 'fc1', 'kernel'), ('text_model', 'encoder', 'layer', '3', 'attention', 'self', 'query', 'kernel'), ('vision_model', 'vision_model', 'encoder', 'layers', '11', 'mlp', 'fc1', 'kernel'), ('text_model', 'encoder', 'layer', '9', 'attention', 'self', 'key', 'kernel'), ('text_model', 'encoder', 'layer', '2', 'attention', 'self', 'query', 'kernel'), ('vision_model', 'vision_model', 'encoder', 'layers', '10', 'self_attn', 'k_proj', 'bias'), ('vision_model', 'vision_model', 'encoder', 'layers', '5', 'layer_norm1', 'bias'), ('text_model', 'encoder', 'layer', '8', 'intermediate', 'dense', 'kernel'), ('text_model', 'encoder', 'layer', '0', 'output', 'LayerNorm', 'scale'), ('vision_model', 'vision_model', 'encoder', 'layers', '4', 'self_attn', 'v_proj', 'kernel'), ('vision_model', 'vision_model', 'encoder', 'layers', '10', 'layer_norm1', 'scale'), ('text_model', 'encoder', 'layer', '9', 'attention', 'self', 'value', 'kernel'), ('text_model', 'encoder', 'layer', '0', 'attention', 'self', 'key', 'bias'), ('text_model', 'encoder', 'layer', '8', 'output', 'LayerNorm', 'scale'), ('text_model', 'encoder', 'layer', '11', 'attention', 'output', 'LayerNorm', 'scale'), ('text_model', 'encoder', 'layer', '5', 'output', 'dense', 'kernel'), ('text_model', 'encoder', 'layer', '9', 'output', 'LayerNorm', 'bias'), ('vision_model', 'vision_model', 'encoder', 'layers', '0', 'layer_norm2', 'bias'), ('text_model', 'encoder', 'layer', '2', 'attention', 'output', 'LayerNorm', 'scale'), ('vision_model', 'vision_model', 'encoder', 'layers', '2', 'self_attn', 'q_proj', 'bias'), ('vision_model', 'vision_model', 'encoder', 'layers', '3', 'layer_norm1', 'bias'), ('vision_model', 'vision_model', 'encoder', 'layers', '4', 'mlp', 'fc2', 'bias'), ('vision_model', 'vision_model', 'encoder', 'layers', '0', 'self_attn', 'v_proj', 'kernel'), ('text_model', 'encoder', 'layer', '8', 'attention', 
'self', 'query', 'bias'), ('text_model', 'encoder', 'layer', '10', 'output', 'dense', 'kernel'), ('vision_model', 'vision_model', 'encoder', 'layers', '11', 'self_attn', 'v_proj', 'bias'), ('text_model', 'encoder', 'layer', '9', 'attention', 'output', 'LayerNorm', 'bias'), ('text_model', 'encoder', 'layer', '11', 'attention', 'output', 'LayerNorm', 'bias'), ('text_model', 'encoder', 'layer', '11', 'output', 'dense', 'kernel'), ('vision_model', 'vision_model', 'encoder', 'layers', '1', 'mlp', 'fc2', 'bias'), ('text_model', 'encoder', 'layer', '11', 'output', 'dense', 'bias'), ('text_model', 'encoder', 'layer', '11', 'attention', 'self', 'key', 'bias'), ('text_model', 'encoder', 'layer', '2', 'output', 'dense', 'bias'), ('text_model', 'encoder', 'layer', '10', 'attention', 'output', 'dense', 'bias'), ('text_model', 'encoder', 'layer', '3', 'output', 'dense', 'kernel'), ('vision_model', 'vision_model', 'encoder', 'layers', '3', 'self_attn', 'k_proj', 'bias'), ('vision_model', 'vision_model', 'encoder', 'layers', '11', 'self_attn', 'out_proj', 'kernel'), ('text_model', 'encoder', 'layer', '1', 'attention', 'output', 'dense', 'kernel'), ('text_model', 'encoder', 'layer', '2', 'attention', 'self', 'key', 'bias'), ('text_model', 'encoder', 'layer', '5', 'output', 'dense', 'bias'), ('vision_model', 'vision_model', 'encoder', 'layers', '9', 'self_attn', 'v_proj', 'kernel'), ('text_model', 'encoder', 'layer', '3', 'attention', 'self', 'key', 'kernel'), ('text_model', 'encoder', 'layer', '11', 'attention', 'output', 'dense', 'kernel'), ('vision_model', 'vision_model', 'encoder', 'layers', '1', 'layer_norm2', 'bias'), ('text_model', 'encoder', 'layer', '10', 'intermediate', 'dense', 'kernel'), ('vision_model', 'vision_model', 'encoder', 'layers', '2', 'layer_norm1', 'bias'), ('text_model', 'encoder', 'layer', '6', 'attention', 'output', 'dense', 'kernel'), ('vision_model', 'vision_model', 'encoder', 'layers', '7', 'layer_norm2', 'bias'), ('vision_model', 'vision_model', 'encoder', 'layers', '5', 'mlp', 'fc1', 'kernel'), ('vision_model', 'vision_model', 'encoder', 'layers', '5', 'self_attn', 'v_proj', 'bias'), ('vision_model', 'vision_model', 'encoder', 'layers', '9', 'self_attn', 'k_proj', 'bias'), ('vision_model', 'vision_model', 'encoder', 'layers', '6', 'self_attn', 'v_proj', 'bias'), ('vision_model', 'vision_model', 'encoder', 'layers', '0', 'mlp', 'fc2', 'bias'), ('text_model', 'encoder', 'layer', '1', 'attention', 'self', 'value', 'kernel'), ('text_model', 'encoder', 'layer', '4', 'output', 'dense', 'kernel'), ('text_model', 'embeddings', 'word_embeddings', 'embedding'), ('text_model', 'encoder', 'layer', '0', 'attention', 'output', 'dense', 'bias'), ('text_model', 'encoder', 'layer', '6', 'attention', 'self', 'query', 'bias'), ('text_model', 'encoder', 'layer', '6', 'attention', 'output', 'dense', 'bias'), ('text_model', 'encoder', 'layer', '8', 'attention', 'output', 'dense', 'bias'), ('text_model', 'encoder', 'layer', '10', 'attention', 'self', 'value', 'kernel'), ('vision_model', 'vision_model', 'encoder', 'layers', '6', 'self_attn', 'out_proj', 'kernel'), ('vision_model', 'vision_model', 'encoder', 'layers', '10', 'layer_norm2', 'scale'), ('text_model', 'encoder', 'layer', '11', 'attention', 'self', 'query', 'kernel'), ('text_model', 'encoder', 'layer', '3', 'attention', 'self', 'key', 'bias'), ('text_model', 'encoder', 'layer', '11', 'output', 'LayerNorm', 'bias'), ('text_model', 'encoder', 'layer', '8', 'attention', 'self', 'key', 'bias'), ('vision_model', 'vision_model', 'encoder', 
'layers', '2', 'layer_norm2', 'bias'), ('vision_model', 'vision_model', 'encoder', 'layers', '5', 'self_attn', 'out_proj', 'bias'), ('vision_model', 'vision_model', 'encoder', 'layers', '8', 'self_attn', 'out_proj', 'bias'), ('vision_model', 'vision_model', 'post_layernorm', 'scale'), ('vision_model', 'vision_model', 'encoder', 'layers', '0', 'layer_norm1', 'scale'), ('vision_model', 'vision_model', 'encoder', 'layers', '1', 'mlp', 'fc1', 'bias'), ('text_model', 'encoder', 'layer', '1', 'attention', 'output', 'dense', 'bias'), ('vision_model', 'vision_model', 'encoder', 'layers', '10', 'mlp', 'fc1', 'bias'), ('vision_model', 'vision_model', 'encoder', 'layers', '4', 'layer_norm1', 'bias'), ('vision_model', 'vision_model', 'encoder', 'layers', '1', 'self_attn', 'out_proj', 'bias'), ('vision_model', 'vision_model', 'encoder', 'layers', '6', 'mlp', 'fc1', 'kernel'), ('text_model', 'encoder', 'layer', '8', 'attention', 'self', 'key', 'kernel'), ('text_model', 'encoder', 'layer', '10', 'output', 'dense', 'bias'), ('text_model', 'encoder', 'layer', '5', 'attention', 'self', 'query', 'kernel'), ('text_model', 'encoder', 'layer', '10', 'output', 'LayerNorm', 'scale'), ('vision_model', 'vision_model', 'encoder', 'layers', '10', 'self_attn', 'q_proj', 'bias'), ('text_model', 'encoder', 'layer', '0', 'attention', 'self', 'value', 'bias'), ('vision_model', 'vision_model', 'encoder', 'layers', '0', 'self_attn', 'k_proj', 'bias'), ('vision_model', 'vision_model', 'encoder', 'layers', '1', 'layer_norm1', 'bias'), ('text_model', 'encoder', 'layer', '4', 'output', 'LayerNorm', 'scale'), ('vision_model', 'vision_model', 'encoder', 'layers', '6', 'self_attn', 'k_proj', 'bias'), ('vision_model', 'vision_model', 'encoder', 'layers', '5', 'layer_norm2', 'scale'), ('vision_model', 'vision_model', 'encoder', 'layers', '7', 'layer_norm1', 'scale'), ('vision_model', 'vision_model', 'encoder', 'layers', '5', 'self_attn', 'k_proj', 'bias'), ('vision_model', 'vision_model', 'encoder', 'layers', '5', 'mlp', 'fc1', 'bias'), ('text_model', 'encoder', 'layer', '6', 'output', 'LayerNorm', 'bias'), ('text_model', 'encoder', 'layer', '3', 'attention', 'self', 'value', 'kernel'), ('vision_model', 'vision_model', 'encoder', 'layers', '4', 'self_attn', 'k_proj', 'bias'), ('vision_model', 'vision_model', 'encoder', 'layers', '11', 'self_attn', 'q_proj', 'kernel'), ('vision_model', 'vision_model', 'encoder', 'layers', '9', 'layer_norm1', 'bias'), ('text_model', 'encoder', 'layer', '10', 'attention', 'output', 'LayerNorm', 'bias'), ('vision_model', 'vision_model', 'encoder', 'layers', '8', 'self_attn', 'q_proj', 'bias'), ('text_model', 'encoder', 'layer', '10', 'output', 'LayerNorm', 'bias'), ('text_model', 'encoder', 'layer', '9', 'output', 'LayerNorm', 'scale'), ('text_model', 'encoder', 'layer', '6', 'attention', 'self', 'key', 'kernel'), ('vision_model', 'vision_model', 'encoder', 'layers', '0', 'self_attn', 'v_proj', 'bias'), ('vision_model', 'vision_model', 'encoder', 'layers', '5', 'self_attn', 'k_proj', 'kernel'), ('text_model', 'encoder', 'layer', '1', 'intermediate', 'dense', 'bias'), ('vision_model', 'vision_model', 'encoder', 'layers', '11', 'layer_norm1', 'bias'), ('vision_model', 'vision_model', 'encoder', 'layers', '2', 'self_attn', 'out_proj', 'bias'), ('vision_model', 'vision_model', 'encoder', 'layers', '7', 'layer_norm2', 'scale'), ('text_model', 'encoder', 'layer', '3', 'attention', 'output', 'LayerNorm', 'scale'), ('text_model', 'encoder', 'layer', '8', 'intermediate', 'dense', 'bias'), ('text_model', 
'encoder', 'layer', '5', 'attention', 'self', 'query', 'bias'), ('text_model', 'encoder', 'layer', '0', 'attention', 'self', 'value', 'kernel'), ('vision_model', 'vision_model', 'encoder', 'layers', '4', 'self_attn', 'v_proj', 'bias'), ('text_model', 'encoder', 'layer', '11', 'intermediate', 'dense', 'bias'), ('text_model', 'encoder', 'layer', '2', 'attention', 'self', 'key', 'kernel'), ('vision_model', 'vision_model', 'encoder', 'layers', '1', 'self_attn', 'q_proj', 'bias'), ('vision_model', 'vision_model', 'encoder', 'layers', '1', 'self_attn', 'out_proj', 'kernel'), ('text_model', 'encoder', 'layer', '11', 'attention', 'self', 'key', 'kernel'), ('text_model', 'encoder', 'layer', '11', 'output', 'LayerNorm', 'scale'), ('vision_model', 'vision_model', 'encoder', 'layers', '11', 'self_attn', 'k_proj', 'kernel'), ('text_model', 'encoder', 'layer', '5', 'intermediate', 'dense', 'bias')} - This IS expected if you are initializing FlaxCLIPModel from the checkpoint of a model trained on another task or with another architecture (e.g. initializing a BertForSequenceClassification model from a BertForPreTraining model). - This IS NOT expected if you are initializing FlaxCLIPModel from the checkpoint of a model that you expect to be exactly identical (initializing a BertForSequenceClassification model from a BertForSequenceClassification model). Some weights of FlaxCLIPModel were not initialized from the model checkpoint at clip-italian/clip-italian and are newly initialized: {('text_model', 'final_layer_norm', 'scale'), ('vision_model', 'encoder', 'layers', '7', 'self_attn', 'v_proj', 'bias'), ('vision_model', 'encoder', 'layers', '6', 'layer_norm1', 'bias'), ('vision_model', 'encoder', 'layers', '1', 'self_attn', 'out_proj', 'bias'), ('text_model', 'encoder', 'layers', '10', 'self_attn', 'q_proj', 'bias'), ('vision_model', 'encoder', 'layers', '11', 'self_attn', 'v_proj', 'kernel'), ('text_model', 'encoder', 'layers', '7', 'layer_norm1', 'bias'), ('text_model', 'encoder', 'layers', '9', 'self_attn', 'out_proj', 'kernel'), ('vision_model', 'encoder', 'layers', '7', 'self_attn', 'out_proj', 'kernel'), ('text_model', 'encoder', 'layers', '5', 'mlp', 'fc1', 'bias'), ('vision_model', 'encoder', 'layers', '9', 'layer_norm1', 'scale'), ('vision_model', 'encoder', 'layers', '8', 'self_attn', 'k_proj', 'bias'), ('vision_model', 'encoder', 'layers', '3', 'self_attn', 'q_proj', 'bias'), ('vision_model', 'encoder', 'layers', '6', 'self_attn', 'q_proj', 'bias'), ('text_model', 'encoder', 'layers', '8', 'layer_norm2', 'bias'), ('vision_model', 'encoder', 'layers', '8', 'self_attn', 'out_proj', 'kernel'), ('text_model', 'encoder', 'layers', '7', 'mlp', 'fc1', 'bias'), ('text_model', 'encoder', 'layers', '9', 'layer_norm2', 'bias'), ('text_model', 'encoder', 'layers', '5', 'self_attn', 'v_proj', 'bias'), ('text_model', 'encoder', 'layers', '11', 'layer_norm2', 'scale'), ('text_model', 'encoder', 'layers', '2', 'mlp', 'fc1', 'bias'), ('vision_model', 'encoder', 'layers', '11', 'layer_norm2', 'bias'), ('vision_model', 'encoder', 'layers', '2', 'layer_norm2', 'bias'), ('text_model', 'encoder', 'layers', '11', 'self_attn', 'out_proj', 'kernel'), ('vision_model', 'encoder', 'layers', '9', 'self_attn', 'v_proj', 'kernel'), ('text_model', 'encoder', 'layers', '8', 'layer_norm2', 'scale'), ('text_model', 'encoder', 'layers', '2', 'self_attn', 'v_proj', 'kernel'), ('vision_model', 'encoder', 'layers', '0', 'self_attn', 'v_proj', 'bias'), ('text_model', 'encoder', 'layers', '11', 'self_attn', 'v_proj', 'kernel'), 
('vision_model', 'encoder', 'layers', '8', 'self_attn', 'k_proj', 'kernel'), ('text_model', 'encoder', 'layers', '10', 'self_attn', 'out_proj', 'bias'), ('text_model', 'encoder', 'layers', '9', 'layer_norm1', 'scale'), ('text_model', 'encoder', 'layers', '2', 'mlp', 'fc1', 'kernel'), ('vision_model', 'encoder', 'layers', '0', 'layer_norm1', 'scale'), ('vision_model', 'encoder', 'layers', '0', 'self_attn', 'out_proj', 'kernel'), ('text_model', 'encoder', 'layers', '3', 'layer_norm2', 'bias'), ('text_model', 'encoder', 'layers', '8', 'self_attn', 'v_proj', 'bias'), ('vision_model', 'encoder', 'layers', '1', 'self_attn', 'out_proj', 'kernel'), ('vision_model', 'encoder', 'layers', '7', 'self_attn', 'q_proj', 'bias'), ('text_model', 'encoder', 'layers', '6', 'self_attn', 'q_proj', 'kernel'), ('text_model', 'encoder', 'layers', '9', 'layer_norm2', 'scale'), ('text_model', 'encoder', 'layers', '5', 'self_attn', 'q_proj', 'kernel'), ('text_model', 'encoder', 'layers', '4', 'self_attn', 'q_proj', 'kernel'), ('text_model', 'encoder', 'layers', '9', 'layer_norm1', 'bias'), ('vision_model', 'encoder', 'layers', '2', 'mlp', 'fc1', 'kernel'), ('vision_model', 'encoder', 'layers', '6', 'layer_norm2', 'scale'), ('text_model', 'encoder', 'layers', '3', 'mlp', 'fc2', 'kernel'), ('text_model', 'encoder', 'layers', '7', 'self_attn', 'q_proj', 'bias'), ('text_model', 'encoder', 'layers', '2', 'layer_norm2', 'scale'), ('text_model', 'encoder', 'layers', '1', 'self_attn', 'v_proj', 'kernel'), ('vision_model', 'encoder', 'layers', '10', 'self_attn', 'v_proj', 'bias'), ('vision_model', 'encoder', 'layers', '5', 'layer_norm2', 'bias'), ('vision_model', 'encoder', 'layers', '3', 'self_attn', 'k_proj', 'bias'), ('vision_model', 'encoder', 'layers', '4', 'self_attn', 'q_proj', 'bias'), ('text_model', 'encoder', 'layers', '0', 'self_attn', 'out_proj', 'kernel'), ('text_model', 'embeddings', 'token_embedding', 'embedding'), ('text_model', 'encoder', 'layers', '5', 'self_attn', 'k_proj', 'bias'), ('text_model', 'encoder', 'layers', '5', 'layer_norm1', 'bias'), ('text_model', 'encoder', 'layers', '2', 'layer_norm2', 'bias'), ('vision_model', 'post_layernorm', 'bias'), ('text_model', 'encoder', 'layers', '8', 'self_attn', 'k_proj', 'bias'), ('vision_model', 'encoder', 'layers', '5', 'self_attn', 'q_proj', 'bias'), ('vision_model', 'encoder', 'layers', '10', 'self_attn', 'out_proj', 'bias'), ('text_model', 'encoder', 'layers', '5', 'layer_norm2', 'bias'), ('vision_model', 'encoder', 'layers', '4', 'mlp', 'fc2', 'bias'), ('vision_model', 'encoder', 'layers', '2', 'self_attn', 'q_proj', 'bias'), ('text_model', 'encoder', 'layers', '10', 'self_attn', 'k_proj', 'kernel'), ('vision_model', 'encoder', 'layers', '7', 'mlp', 'fc1', 'kernel'), ('text_model', 'encoder', 'layers', '0', 'layer_norm1', 'scale'), ('vision_model', 'encoder', 'layers', '4', 'layer_norm2', 'scale'), ('vision_model', 'encoder', 'layers', '0', 'self_attn', 'v_proj', 'kernel'), ('vision_model', 'encoder', 'layers', '6', 'self_attn', 'k_proj', 'bias'), ('vision_model', 'encoder', 'layers', '8', 'mlp', 'fc1', 'kernel'), ('text_model', 'encoder', 'layers', '10', 'self_attn', 'v_proj', 'bias'), ('vision_model', 'encoder', 'layers', '4', 'mlp', 'fc1', 'bias'), ('vision_model', 'encoder', 'layers', '9', 'self_attn', 'out_proj', 'kernel'), ('text_model', 'encoder', 'layers', '1', 'layer_norm2', 'bias'), ('text_model', 'encoder', 'layers', '7', 'mlp', 'fc2', 'kernel'), ('vision_model', 'encoder', 'layers', '10', 'self_attn', 'k_proj', 'bias'), ('text_model', 
'encoder', 'layers', '0', 'mlp', 'fc2', 'bias'), ('vision_model', 'encoder', 'layers', '10', 'layer_norm2', 'scale'), ('text_model', 'encoder', 'layers', '9', 'mlp', 'fc1', 'kernel'), ('text_model', 'encoder', 'layers', '3', 'mlp', 'fc1', 'kernel'), ('text_model', 'encoder', 'layers', '9', 'mlp', 'fc1', 'bias'), ('vision_model', 'encoder', 'layers', '1', 'layer_norm1', 'bias'), ('text_model', 'encoder', 'layers', '7', 'layer_norm2', 'scale'), ('text_model', 'encoder', 'layers', '11', 'mlp', 'fc1', 'bias'), ('text_model', 'encoder', 'layers', '6', 'self_attn', 'v_proj', 'bias'), ('text_model', 'encoder', 'layers', '0', 'self_attn', 'k_proj', 'kernel'), ('text_model', 'encoder', 'layers', '0', 'mlp', 'fc2', 'kernel'), ('vision_model', 'encoder', 'layers', '11', 'self_attn', 'v_proj', 'bias'), ('text_model', 'encoder', 'layers', '6', 'self_attn', 'v_proj', 'kernel'), ('vision_model', 'encoder', 'layers', '8', 'layer_norm1', 'bias'), ('vision_model', 'encoder', 'layers', '5', 'mlp', 'fc2', 'kernel'), ('text_model', 'encoder', 'layers', '7', 'self_attn', 'q_proj', 'kernel'), ('vision_model', 'encoder', 'layers', '10', 'self_attn', 'v_proj', 'kernel'), ('text_model', 'encoder', 'layers', '9', 'self_attn', 'q_proj', 'kernel'), ('vision_model', 'encoder', 'layers', '7', 'layer_norm2', 'bias'), ('text_model', 'encoder', 'layers', '3', 'self_attn', 'q_proj', 'kernel'), ('text_model', 'encoder', 'layers', '11', 'self_attn', 'q_proj', 'bias'), ('vision_model', 'encoder', 'layers', '1', 'self_attn', 'q_proj', 'bias'), ('vision_model', 'encoder', 'layers', '3', 'mlp', 'fc2', 'kernel'), ('text_model', 'encoder', 'layers', '4', 'self_attn', 'q_proj', 'bias'), ('text_model', 'encoder', 'layers', '3', 'layer_norm1', 'scale'), ('vision_model', 'encoder', 'layers', '2', 'self_attn', 'v_proj', 'bias'), ('vision_model', 'encoder', 'layers', '6', 'self_attn', 'v_proj', 'kernel'), ('text_model', 'encoder', 'layers', '9', 'self_attn', 'v_proj', 'bias'), ('vision_model', 'encoder', 'layers', '4', 'self_attn', 'v_proj', 'bias'), ('text_model', 'encoder', 'layers', '3', 'self_attn', 'out_proj', 'kernel'), ('vision_model', 'encoder', 'layers', '11', 'layer_norm2', 'scale'), ('text_model', 'encoder', 'layers', '4', 'self_attn', 'v_proj', 'bias'), ('text_model', 'encoder', 'layers', '0', 'self_attn', 'q_proj', 'kernel'), ('text_model', 'encoder', 'layers', '2', 'self_attn', 'out_proj', 'bias'), ('vision_model', 'encoder', 'layers', '1', 'self_attn', 'k_proj', 'bias'), ('text_model', 'encoder', 'layers', '2', 'self_attn', 'k_proj', 'bias'), ('text_model', 'encoder', 'layers', '6', 'mlp', 'fc1', 'bias'), ('vision_model', 'encoder', 'layers', '5', 'layer_norm1', 'bias'), ('text_model', 'encoder', 'layers', '0', 'layer_norm2', 'bias'), ('vision_model', 'encoder', 'layers', '4', 'layer_norm1', 'bias'), ('vision_model', 'encoder', 'layers', '4', 'mlp', 'fc1', 'kernel'), ('text_model', 'encoder', 'layers', '2', 'layer_norm1', 'scale'), ('text_model', 'encoder', 'layers', '0', 'mlp', 'fc1', 'bias'), ('vision_model', 'encoder', 'layers', '0', 'self_attn', 'q_proj', 'kernel'), ('text_model', 'encoder', 'layers', '10', 'mlp', 'fc1', 'kernel'), ('text_model', 'encoder', 'layers', '10', 'mlp', 'fc1', 'bias'), ('vision_model', 'post_layernorm', 'scale'), ('text_model', 'encoder', 'layers', '7', 'self_attn', 'v_proj', 'bias'), ('vision_model', 'encoder', 'layers', '10', 'self_attn', 'q_proj', 'bias'), ('text_model', 'encoder', 'layers', '7', 'layer_norm2', 'bias'), ('text_model', 'encoder', 'layers', '1', 'self_attn', 'q_proj', 
'bias'), ('vision_model', 'encoder', 'layers', '9', 'self_attn', 'k_proj', 'kernel'), ('vision_model', 'encoder', 'layers', '8', 'layer_norm2', 'bias'), ('vision_model', 'encoder', 'layers', '6', 'self_attn', 'v_proj', 'bias'), ('text_model', 'encoder', 'layers', '9', 'mlp', 'fc2', 'kernel'), ('vision_model', 'encoder', 'layers', '2', 'self_attn', 'v_proj', 'kernel'), ('text_model', 'encoder', 'layers', '11', 'mlp', 'fc1', 'kernel'), ('vision_model', 'encoder', 'layers', '3', 'mlp', 'fc2', 'bias'), ('vision_model', 'encoder', 'layers', '0', 'mlp', 'fc1', 'kernel'), ('text_model', 'encoder', 'layers', '5', 'self_attn', 'q_proj', 'bias'), ('vision_model', 'encoder', 'layers', '7', 'layer_norm1', 'bias'), ('text_model', 'encoder', 'layers', '3', 'self_attn', 'v_proj', 'bias'), ('text_model', 'encoder', 'layers', '8', 'self_attn', 'v_proj', 'kernel'), ('vision_model', 'encoder', 'layers', '7', 'mlp', 'fc2', 'kernel'), ('text_model', 'encoder', 'layers', '1', 'mlp', 'fc2', 'kernel'), ('vision_model', 'encoder', 'layers', '2', 'layer_norm1', 'bias'), ('vision_model', 'encoder', 'layers', '2', 'layer_norm1', 'scale'), ('vision_model', 'encoder', 'layers', '5', 'self_attn', 'k_proj', 'kernel'), ('vision_model', 'encoder', 'layers', '9', 'self_attn', 'k_proj', 'bias'), ('vision_model', 'encoder', 'layers', '2', 'mlp', 'fc1', 'bias'), ('text_model', 'encoder', 'layers', '8', 'mlp', 'fc2', 'kernel'), ('logit_scale',), ('vision_model', 'encoder', 'layers', '5', 'layer_norm2', 'scale'), ('vision_model', 'encoder', 'layers', '7', 'self_attn', 'q_proj', 'kernel'), ('vision_model', 'embeddings', 'patch_embedding', 'kernel'), ('vision_model', 'encoder', 'layers', '3', 'layer_norm2', 'scale'), ('text_model', 'encoder', 'layers', '3', 'self_attn', 'k_proj', 'kernel'), ('text_model', 'encoder', 'layers', '8', 'self_attn', 'q_proj', 'bias'), ('text_model', 'encoder', 'layers', '2', 'mlp', 'fc2', 'kernel'), ('vision_model', 'encoder', 'layers', '11', 'self_attn', 'q_proj', 'kernel'), ('text_model', 'encoder', 'layers', '7', 'mlp', 'fc2', 'bias'), ('vision_model', 'encoder', 'layers', '10', 'mlp', 'fc1', 'kernel'), ('text_model', 'encoder', 'layers', '10', 'layer_norm2', 'scale'), ('text_model', 'encoder', 'layers', '11', 'self_attn', 'v_proj', 'bias'), ('text_model', 'encoder', 'layers', '3', 'self_attn', 'out_proj', 'bias'), ('text_model', 'encoder', 'layers', '2', 'self_attn', 'v_proj', 'bias'), ('vision_model', 'encoder', 'layers', '0', 'self_attn', 'k_proj', 'kernel'), ('text_model', 'encoder', 'layers', '4', 'self_attn', 'k_proj', 'bias'), ('vision_model', 'encoder', 'layers', '1', 'self_attn', 'v_proj', 'bias'), ('vision_model', 'encoder', 'layers', '11', 'self_attn', 'q_proj', 'bias'), ('vision_model', 'encoder', 'layers', '8', 'layer_norm1', 'scale'), ('text_model', 'encoder', 'layers', '10', 'self_attn', 'v_proj', 'kernel'), ('text_model', 'encoder', 'layers', '5', 'mlp', 'fc2', 'bias'), ('text_model', 'encoder', 'layers', '5', 'mlp', 'fc1', 'kernel'), ('text_model', 'encoder', 'layers', '9', 'self_attn', 'k_proj', 'bias'), ('text_model', 'encoder', 'layers', '7', 'mlp', 'fc1', 'kernel'), ('text_model', 'final_layer_norm', 'bias'), ('text_model', 'encoder', 'layers', '1', 'self_attn', 'out_proj', 'bias'), ('text_model', 'encoder', 'layers', '2', 'mlp', 'fc2', 'bias'), ('text_model', 'encoder', 'layers', '9', 'self_attn', 'q_proj', 'bias'), ('vision_model', 'encoder', 'layers', '2', 'self_attn', 'k_proj', 'bias'), ('vision_model', 'encoder', 'layers', '7', 'self_attn', 'k_proj', 'kernel'), 
('vision_model', 'encoder', 'layers', '9', 'mlp', 'fc2', 'bias'), ('text_model', 'encoder', 'layers', '4', 'layer_norm2', 'scale'), ('text_model', 'encoder', 'layers', '6', 'self_attn', 'out_proj', 'bias'), ('text_model', 'encoder', 'layers', '11', 'mlp', 'fc2', 'kernel'), ('text_model', 'encoder', 'layers', '3', 'self_attn', 'q_proj', 'bias'), ('vision_model', 'encoder', 'layers', '11', 'self_attn', 'k_proj', 'bias'), ('vision_model', 'encoder', 'layers', '6', 'mlp', 'fc1', 'bias'), ('text_model', 'encoder', 'layers', '3', 'mlp', 'fc2', 'bias'), ('vision_model', 'encoder', 'layers', '0', 'layer_norm2', 'scale'), ('text_model', 'encoder', 'layers', '10', 'layer_norm1', 'bias'), ('vision_model', 'encoder', 'layers', '2', 'mlp', 'fc2', 'kernel'), ('vision_model', 'encoder', 'layers', '9', 'layer_norm1', 'bias'), ('vision_model', 'encoder', 'layers', '1', 'mlp', 'fc2', 'kernel'), ('vision_model', 'encoder', 'layers', '11', 'mlp', 'fc1', 'kernel'), ('vision_model', 'encoder', 'layers', '11', 'mlp', 'fc1', 'bias'), ('text_model', 'encoder', 'layers', '3', 'self_attn', 'v_proj', 'kernel'), ('text_model', 'encoder', 'layers', '7', 'layer_norm1', 'scale'), ('text_model', 'encoder', 'layers', '4', 'layer_norm1', 'scale'), ('vision_model', 'encoder', 'layers', '11', 'self_attn', 'out_proj', 'bias'), ('vision_model', 'encoder', 'layers', '3', 'mlp', 'fc1', 'bias'), ('vision_model', 'encoder', 'layers', '3', 'layer_norm1', 'scale'), ('vision_model', 'encoder', 'layers', '10', 'layer_norm1', 'scale'), ('text_model', 'encoder', 'layers', '6', 'mlp', 'fc2', 'kernel'), ('vision_model', 'encoder', 'layers', '6', 'mlp', 'fc2', 'bias'), ('text_model', 'encoder', 'layers', '10', 'mlp', 'fc2', 'kernel'), ('text_model', 'encoder', 'layers', '3', 'layer_norm2', 'scale'), ('vision_model', 'encoder', 'layers', '11', 'layer_norm1', 'scale'), ('vision_model', 'encoder', 'layers', '8', 'mlp', 'fc1', 'bias'), ('vision_model', 'embeddings', 'position_embedding', 'embedding'), ('text_model', 'encoder', 'layers', '5', 'self_attn', 'out_proj', 'kernel'), ('text_model', 'encoder', 'layers', '6', 'layer_norm2', 'scale'), ('text_model', 'encoder', 'layers', '3', 'layer_norm1', 'bias'), ('text_model', 'encoder', 'layers', '10', 'self_attn', 'k_proj', 'bias'), ('vision_model', 'encoder', 'layers', '9', 'self_attn', 'v_proj', 'bias'), ('vision_model', 'encoder', 'layers', '8', 'layer_norm2', 'scale'), ('text_model', 'encoder', 'layers', '2', 'self_attn', 'k_proj', 'kernel'), ('vision_model', 'encoder', 'layers', '6', 'self_attn', 'k_proj', 'kernel'), ('vision_model', 'encoder', 'layers', '9', 'mlp', 'fc1', 'kernel'), ('text_model', 'encoder', 'layers', '0', 'mlp', 'fc1', 'kernel'), ('vision_model', 'encoder', 'layers', '0', 'self_attn', 'q_proj', 'bias'), ('vision_model', 'encoder', 'layers', '7', 'mlp', 'fc1', 'bias'), ('vision_model', 'encoder', 'layers', '2', 'self_attn', 'q_proj', 'kernel'), ('text_model', 'encoder', 'layers', '7', 'self_attn', 'out_proj', 'kernel'), ('text_model', 'encoder', 'layers', '0', 'self_attn', 'v_proj', 'bias'), ('vision_model', 'encoder', 'layers', '8', 'mlp', 'fc2', 'kernel'), ('text_model', 'encoder', 'layers', '0', 'layer_norm2', 'scale'), ('vision_model', 'encoder', 'layers', '4', 'layer_norm1', 'scale'), ('text_model', 'encoder', 'layers', '5', 'self_attn', 'k_proj', 'kernel'), ('vision_model', 'encoder', 'layers', '1', 'mlp', 'fc2', 'bias'), ('vision_model', 'encoder', 'layers', '5', 'self_attn', 'k_proj', 'bias'), ('vision_model', 'encoder', 'layers', '8', 'self_attn', 'out_proj', 'bias'), 
('text_model', 'encoder', 'layers', '11', 'self_attn', 'k_proj', 'kernel'), ('vision_model', 'encoder', 'layers', '9', 'self_attn', 'q_proj', 'kernel'), ('vision_model', 'encoder', 'layers', '11', 'self_attn', 'out_proj', 'kernel'), ('vision_model', 'encoder', 'layers', '3', 'mlp', 'fc1', 'kernel'), ('text_model', 'encoder', 'layers', '4', 'mlp', 'fc2', 'bias'), ('vision_model', 'encoder', 'layers', '11', 'self_attn', 'k_proj', 'kernel'), ('text_model', 'encoder', 'layers', '1', 'layer_norm1', 'bias'), ('text_model', 'encoder', 'layers', '0', 'self_attn', 'k_proj', 'bias'), ('vision_model', 'encoder', 'layers', '5', 'layer_norm1', 'scale'), ('text_model', 'encoder', 'layers', '9', 'self_attn', 'out_proj', 'bias'), ('vision_model', 'encoder', 'layers', '3', 'self_attn', 'out_proj', 'kernel'), ('text_model', 'encoder', 'layers', '4', 'mlp', 'fc1', 'bias'), ('vision_model', 'encoder', 'layers', '3', 'self_attn', 'v_proj', 'bias'), ('vision_model', 'encoder', 'layers', '3', 'self_attn', 'out_proj', 'bias'), ('text_model', 'encoder', 'layers', '11', 'self_attn', 'q_proj', 'kernel'), ('vision_model', 'encoder', 'layers', '1', 'mlp', 'fc1', 'kernel'), ('vision_model', 'encoder', 'layers', '4', 'self_attn', 'q_proj', 'kernel'), ('text_model', 'encoder', 'layers', '9', 'mlp', 'fc2', 'bias'), ('vision_model', 'encoder', 'layers', '2', 'layer_norm2', 'scale'), ('vision_model', 'encoder', 'layers', '2', 'self_attn', 'out_proj', 'bias'), ('text_model', 'encoder', 'layers', '11', 'mlp', 'fc2', 'bias'), ('text_model', 'encoder', 'layers', '6', 'layer_norm2', 'bias'), ('vision_model', 'encoder', 'layers', '0', 'self_attn', 'out_proj', 'bias'), ('vision_model', 'encoder', 'layers', '5', 'self_attn', 'v_proj', 'bias'), ('vision_model', 'encoder', 'layers', '5', 'mlp', 'fc1', 'kernel'), ('vision_model', 'encoder', 'layers', '6', 'self_attn', 'out_proj', 'bias'), ('text_model', 'encoder', 'layers', '1', 'self_attn', 'q_proj', 'kernel'), ('text_model', 'encoder', 'layers', '4', 'mlp', 'fc2', 'kernel'), ('vision_model', 'encoder', 'layers', '3', 'layer_norm2', 'bias'), ('vision_model', 'encoder', 'layers', '2', 'self_attn', 'k_proj', 'kernel'), ('text_model', 'encoder', 'layers', '7', 'self_attn', 'out_proj', 'bias'), ('vision_model', 'encoder', 'layers', '8', 'mlp', 'fc2', 'bias'), ('text_model', 'encoder', 'layers', '11', 'self_attn', 'k_proj', 'bias'), ('vision_model', 'encoder', 'layers', '1', 'layer_norm1', 'scale'), ('vision_model', 'encoder', 'layers', '11', 'mlp', 'fc2', 'bias'), ('vision_model', 'encoder', 'layers', '5', 'self_attn', 'out_proj', 'kernel'), ('vision_model', 'encoder', 'layers', '10', 'self_attn', 'k_proj', 'kernel'), ('vision_model', 'encoder', 'layers', '8', 'self_attn', 'q_proj', 'bias'), ('text_model', 'encoder', 'layers', '10', 'self_attn', 'out_proj', 'kernel'), ('vision_model', 'encoder', 'layers', '10', 'mlp', 'fc2', 'kernel'), ('vision_model', 'pre_layrnorm', 'bias'), ('vision_model', 'encoder', 'layers', '4', 'self_attn', 'out_proj', 'bias'), ('vision_model', 'encoder', 'layers', '0', 'self_attn', 'k_proj', 'bias'), ('text_model', 'encoder', 'layers', '6', 'mlp', 'fc1', 'kernel'), ('text_model', 'encoder', 'layers', '7', 'self_attn', 'v_proj', 'kernel'), ('vision_model', 'encoder', 'layers', '10', 'self_attn', 'q_proj', 'kernel'), ('text_model', 'encoder', 'layers', '8', 'mlp', 'fc1', 'bias'), ('text_model', 'encoder', 'layers', '4', 'self_attn', 'out_proj', 'bias'), ('text_model', 'encoder', 'layers', '6', 'layer_norm1', 'scale'), ('text_model', 'encoder', 'layers', '6', 
'layer_norm1', 'bias'), ('text_model', 'encoder', 'layers', '11', 'layer_norm1', 'bias'), ('text_model', 'encoder', 'layers', '1', 'layer_norm1', 'scale'), ('text_model', 'encoder', 'layers', '8', 'self_attn', 'out_proj', 'bias'), ('vision_model', 'encoder', 'layers', '4', 'self_attn', 'k_proj', 'bias'), ('vision_model', 'encoder', 'layers', '7', 'self_attn', 'out_proj', 'bias'), ('text_model', 'encoder', 'layers', '1', 'layer_norm2', 'scale'), ('text_model', 'encoder', 'layers', '5', 'self_attn', 'out_proj', 'bias'), ('text_model', 'encoder', 'layers', '1', 'self_attn', 'k_proj', 'bias'), ('vision_model', 'encoder', 'layers', '8', 'self_attn', 'v_proj', 'kernel'), ('text_model', 'encoder', 'layers', '10', 'layer_norm1', 'scale'), ('vision_model', 'encoder', 'layers', '0', 'layer_norm2', 'bias'), ('vision_model', 'encoder', 'layers', '0', 'mlp', 'fc2', 'bias'), ('text_model', 'encoder', 'layers', '1', 'mlp', 'fc2', 'bias'), ('text_model', 'encoder', 'layers', '1', 'mlp', 'fc1', 'kernel'), ('vision_model', 'encoder', 'layers', '3', 'self_attn', 'q_proj', 'kernel'), ('vision_model', 'encoder', 'layers', '10', 'layer_norm2', 'bias'), ('text_model', 'encoder', 'layers', '11', 'layer_norm2', 'bias'), ('text_model', 'encoder', 'layers', '10', 'layer_norm2', 'bias'), ('vision_model', 'encoder', 'layers', '2', 'mlp', 'fc2', 'bias'), ('vision_model', 'encoder', 'layers', '10', 'layer_norm1', 'bias'), ('vision_model', 'encoder', 'layers', '3', 'layer_norm1', 'bias'), ('vision_model', 'encoder', 'layers', '3', 'self_attn', 'k_proj', 'kernel'), ('vision_model', 'encoder', 'layers', '1', 'self_attn', 'q_proj', 'kernel'), ('vision_model', 'encoder', 'layers', '9', 'self_attn', 'q_proj', 'bias'), ('vision_model', 'encoder', 'layers', '7', 'self_attn', 'v_proj', 'kernel'), ('text_model', 'encoder', 'layers', '2', 'layer_norm1', 'bias'), ('vision_model', 'encoder', 'layers', '11', 'layer_norm1', 'bias'), ('text_model', 'encoder', 'layers', '4', 'mlp', 'fc1', 'kernel'), ('vision_model', 'encoder', 'layers', '0', 'mlp', 'fc2', 'kernel'), ('vision_model', 'encoder', 'layers', '6', 'mlp', 'fc1', 'kernel'), ('text_model', 'encoder', 'layers', '0', 'self_attn', 'v_proj', 'kernel'), ('text_model', 'encoder', 'layers', '1', 'self_attn', 'out_proj', 'kernel'), ('vision_model', 'encoder', 'layers', '7', 'layer_norm2', 'scale'), ('text_model', 'encoder', 'layers', '4', 'layer_norm2', 'bias'), ('vision_model', 'encoder', 'layers', '7', 'self_attn', 'k_proj', 'bias'), ('vision_model', 'encoder', 'layers', '9', 'mlp', 'fc2', 'kernel'), ('text_model', 'encoder', 'layers', '5', 'mlp', 'fc2', 'kernel'), ('text_model', 'encoder', 'layers', '8', 'mlp', 'fc1', 'kernel'), ('vision_model', 'encoder', 'layers', '4', 'self_attn', 'k_proj', 'kernel'), ('text_model', 'encoder', 'layers', '4', 'self_attn', 'k_proj', 'kernel'), ('vision_model', 'encoder', 'layers', '1', 'self_attn', 'v_proj', 'kernel'), ('text_model', 'encoder', 'layers', '4', 'layer_norm1', 'bias'), ('vision_model', 'encoder', 'layers', '4', 'self_attn', 'out_proj', 'kernel'), ('text_model', 'encoder', 'layers', '8', 'self_attn', 'out_proj', 'kernel'), ('vision_model', 'encoder', 'layers', '8', 'self_attn', 'q_proj', 'kernel'), ('text_model', 'encoder', 'layers', '9', 'self_attn', 'k_proj', 'kernel'), ('vision_model', 'encoder', 'layers', '10', 'mlp', 'fc1', 'bias'), ('vision_model', 'encoder', 'layers', '7', 'layer_norm1', 'scale'), ('text_model', 'encoder', 'layers', '9', 'self_attn', 'v_proj', 'kernel'), ('text_model', 'encoder', 'layers', '6', 'self_attn', 
'k_proj', 'bias'), ('text_model', 'encoder', 'layers', '6', 'self_attn', 'k_proj', 'kernel'), ('vision_model', 'encoder', 'layers', '10', 'mlp', 'fc2', 'bias'), ('text_model', 'encoder', 'layers', '4', 'self_attn', 'v_proj', 'kernel'), ('vision_model', 'encoder', 'layers', '1', 'self_attn', 'k_proj', 'kernel'), ('vision_model', 'encoder', 'layers', '4', 'mlp', 'fc2', 'kernel'), ('text_model', 'encoder', 'layers', '6', 'mlp', 'fc2', 'bias'), ('vision_model', 'encoder', 'layers', '0', 'layer_norm1', 'bias'), ('vision_model', 'encoder', 'layers', '5', 'mlp', 'fc1', 'bias'), ('vision_model', 'encoder', 'layers', '6', 'self_attn', 'q_proj', 'kernel'), ('vision_model', 'encoder', 'layers', '5', 'self_attn', 'v_proj', 'kernel'), ('text_model', 'encoder', 'layers', '11', 'self_attn', 'out_proj', 'bias'), ('text_model', 'encoder', 'layers', '2', 'self_attn', 'q_proj', 'kernel'), ('text_model', 'encoder', 'layers', '8', 'self_attn', 'k_proj', 'kernel'), ('vision_model', 'encoder', 'layers', '11', 'mlp', 'fc2', 'kernel'), ('vision_model', 'encoder', 'layers', '5', 'self_attn', 'out_proj', 'bias'), ('text_model', 'encoder', 'layers', '8', 'mlp', 'fc2', 'bias'), ('vision_model', 'encoder', 'layers', '2', 'self_attn', 'out_proj', 'kernel'), ('text_model', 'encoder', 'layers', '10', 'mlp', 'fc2', 'bias'), ('vision_model', 'encoder', 'layers', '1', 'layer_norm2', 'scale'), ('vision_model', 'encoder', 'layers', '1', 'layer_norm2', 'bias'), ('text_model', 'encoder', 'layers', '7', 'self_attn', 'k_proj', 'kernel'), ('text_model', 'encoder', 'layers', '8', 'layer_norm1', 'bias'), ('vision_model', 'encoder', 'layers', '10', 'self_attn', 'out_proj', 'kernel'), ('text_model', 'encoder', 'layers', '0', 'self_attn', 'out_proj', 'bias'), ('text_model', 'encoder', 'layers', '6', 'self_attn', 'out_proj', 'kernel'), ('vision_model', 'encoder', 'layers', '4', 'layer_norm2', 'bias'), ('text_model', 'encoder', 'layers', '0', 'layer_norm1', 'bias'), ('vision_model', 'encoder', 'layers', '0', 'mlp', 'fc1', 'bias'), ('vision_model', 'encoder', 'layers', '7', 'mlp', 'fc2', 'bias'), ('vision_model', 'encoder', 'layers', '3', 'self_attn', 'v_proj', 'kernel'), ('text_model', 'encoder', 'layers', '8', 'layer_norm1', 'scale'), ('text_model', 'encoder', 'layers', '3', 'mlp', 'fc1', 'bias'), ('vision_model', 'encoder', 'layers', '6', 'layer_norm2', 'bias'), ('text_model', 'encoder', 'layers', '7', 'self_attn', 'k_proj', 'bias'), ('text_model', 'encoder', 'layers', '1', 'self_attn', 'k_proj', 'kernel'), ('text_model', 'encoder', 'layers', '5', 'layer_norm1', 'scale'), ('vision_model', 'encoder', 'layers', '8', 'self_attn', 'v_proj', 'bias'), ('vision_model', 'encoder', 'layers', '9', 'mlp', 'fc1', 'bias'), ('text_model', 'encoder', 'layers', '2', 'self_attn', 'q_proj', 'bias'), ('text_model', 'encoder', 'layers', '6', 'self_attn', 'q_proj', 'bias'), ('text_model', 'encoder', 'layers', '1', 'mlp', 'fc1', 'bias'), ('text_model', 'encoder', 'layers', '4', 'self_attn', 'out_proj', 'kernel'), ('vision_model', 'encoder', 'layers', '9', 'layer_norm2', 'scale'), ('vision_model', 'encoder', 'layers', '9', 'self_attn', 'out_proj', 'bias'), ('text_model', 'encoder', 'layers', '5', 'layer_norm2', 'scale'), ('text_model', 'encoder', 'layers', '0', 'self_attn', 'q_proj', 'bias'), ('text_model', 'encoder', 'layers', '2', 'self_attn', 'out_proj', 'kernel'), ('text_model', 'encoder', 'layers', '8', 'self_attn', 'q_proj', 'kernel'), ('vision_model', 'encoder', 'layers', '6', 'layer_norm1', 'scale'), ('vision_model', 'pre_layrnorm', 'scale'), 
('text_model', 'encoder', 'layers', '5', 'self_attn', 'v_proj', 'kernel'), ('vision_model', 'encoder', 'layers', '5', 'self_attn', 'q_proj', 'kernel'), ('text_model', 'encoder', 'layers', '3', 'self_attn', 'k_proj', 'bias'), ('vision_model', 'encoder', 'layers', '6', 'self_attn', 'out_proj', 'kernel'), ('text_model', 'embeddings', 'position_embedding', 'embedding'), ('vision_model', 'embeddings', 'class_embedding'), ('vision_model', 'encoder', 'layers', '4', 'self_attn', 'v_proj', 'kernel'), ('text_model', 'encoder', 'layers', '11', 'layer_norm1', 'scale'), ('vision_model', 'encoder', 'layers', '6', 'mlp', 'fc2', 'kernel'), ('text_model', 'encoder', 'layers', '10', 'self_attn', 'q_proj', 'kernel'), ('text_model', 'encoder', 'layers', '1', 'self_attn', 'v_proj', 'bias'), ('vision_model', 'encoder', 'layers', '5', 'mlp', 'fc2', 'bias'), ('vision_model', 'encoder', 'layers', '1', 'mlp', 'fc1', 'bias'), ('vision_model', 'encoder', 'layers', '9', 'layer_norm2', 'bias')}
You should probably TRAIN this model on a down-stream task to be able to use it for predictions and inference.
Some weights of FlaxCLIPModel were not initialized from the model checkpoint at clip-italian/clip-italian and are newly initialized because the shapes did not match:
- ('text_projection', 'kernel'): found shape (768, 512) in the checkpoint and (512, 512) in the model instantiated
You should probably TRAIN this model on a down-stream task to be able to use it for predictions and inference.
```

2. Next, I tried forcing our configuration so that `config.text_config.hidden_size == 768` and the shapes would match at loading time:

```python
config = HybridCLIPConfig.from_pretrained("clip-italian/clip-italian")
config.logit_scale_init_value = 20 # required by FlaxCLIPModel
config.text_config.attention_dropout = 0.0 # required by FlaxCLIPModel
model = FlaxCLIPModel.from_pretrained("clip-italian/clip-italian", config=config)

# Output
Some weights of the model checkpoint at clip-italian/clip-italian were not used when initializing FlaxCLIPModel: {('vision_model', 'vision_model', 'encoder', 'layers', '4', 'self_attn', 'out_proj', 'bias'), ('vision_model', 'vision_model', 'encoder', 'layers', '9', 'self_attn', 'out_proj', 'bias'), ('text_model', 'encoder', 'layer', '7', 'attention', 'output', 'LayerNorm', 'scale'), ('text_model', 'embeddings', 'token_type_embeddings', 'embedding'), ('vision_model', 'vision_model', 'encoder', 'layers', '2', 'layer_norm2', 'scale'), ('vision_model', 'vision_model', 'embeddings', 'patch_embedding', 'kernel'), ('text_model', 'encoder', 'layer', '3', 'attention', 'output', 'dense', 'kernel'), ('text_model', 'encoder', 'layer', '10', 'attention', 'self', 'key', 'bias'), ('vision_model', 'vision_model', 'post_layernorm', 'bias'), ('text_model', 'encoder', 'layer', '0', 'intermediate', 'dense', 'bias'), ('vision_model', 'vision_model', 'encoder', 'layers', '3', 'layer_norm2', 'scale'), ('text_model', 'encoder', 'layer', '2', 'intermediate', 'dense', 'bias'), ('vision_model', 'vision_model', 'encoder', 'layers', '3', 'self_attn', 'v_proj', 'kernel'), ('text_model', 'encoder', 'layer', '8', 'attention', 'self', 'value', 'kernel'), ('text_model', 'encoder', 'layer', '5', 'attention', 'output', 'LayerNorm', 'scale'), ('text_model', 'encoder', 'layer', '4', 'attention', 'output', 'LayerNorm', 'bias'), ('text_model', 'encoder', 'layer', '8', 'output', 'dense', 'bias'), ('vision_model', 'vision_model', 'pre_layrnorm', 'scale'), ('text_model', 'encoder', 'layer', '4', 'output', 'LayerNorm', 'bias'),
('vision_model', 'vision_model', 'encoder', 'layers', '8', 'mlp', 'fc2', 'kernel'), ('vision_model', 'vision_model', 'encoder', 'layers', '1', 'self_attn', 'k_proj', 'kernel'), ('vision_model', 'vision_model', 'encoder', 'layers', '10', 'self_attn', 'out_proj', 'bias'), ('text_model', 'encoder', 'layer', '7', 'attention', 'output', 'dense', 'kernel'), ('vision_model', 'vision_model', 'embeddings', 'position_embedding', 'embedding'), ('text_model', 'encoder', 'layer', '2', 'intermediate', 'dense', 'kernel'), ('vision_model', 'vision_model', 'encoder', 'layers', '1', 'self_attn', 'k_proj', 'bias'), ('vision_model', 'vision_model', 'encoder', 'layers', '9', 'self_attn', 'k_proj', 'kernel'), ('vision_model', 'vision_model', 'encoder', 'layers', '5', 'self_attn', 'v_proj', 'kernel'), ('vision_model', 'vision_model', 'encoder', 'layers', '9', 'layer_norm1', 'scale'), ('text_model', 'encoder', 'layer', '2', 'attention', 'self', 'query', 'bias'), ('vision_model', 'vision_model', 'encoder', 'layers', '0', 'mlp', 'fc1', 'kernel'), ('text_model', 'embeddings', 'position_embeddings', 'embedding'), ('text_model', 'embeddings', 'LayerNorm', 'scale'), ('vision_model', 'vision_model', 'encoder', 'layers', '9', 'mlp', 'fc2', 'kernel'), ('text_model', 'encoder', 'layer', '9', 'attention', 'output', 'LayerNorm', 'scale'), ('vision_model', 'vision_model', 'encoder', 'layers', '9', 'self_attn', 'q_proj', 'bias'), ('text_model', 'encoder', 'layer', '4', 'attention', 'output', 'dense', 'kernel'), ('vision_model', 'vision_model', 'encoder', 'layers', '10', 'self_attn', 'out_proj', 'kernel'), ('text_model', 'encoder', 'layer', '10', 'intermediate', 'dense', 'bias'), ('vision_model', 'vision_model', 'encoder', 'layers', '9', 'self_attn', 'v_proj', 'bias'), ('text_model', 'encoder', 'layer', '6', 'output', 'LayerNorm', 'scale'), ('text_model', 'encoder', 'layer', '6', 'attention', 'output', 'LayerNorm', 'scale'), ('text_model', 'encoder', 'layer', '5', 'attention', 'output', 'LayerNorm', 'bias'), ('text_model', 'encoder', 'layer', '1', 'output', 'dense', 'bias'), ('vision_model', 'vision_model', 'encoder', 'layers', '0', 'layer_norm1', 'bias'), ('vision_model', 'vision_model', 'encoder', 'layers', '1', 'layer_norm1', 'scale'), ('vision_model', 'vision_model', 'encoder', 'layers', '3', 'self_attn', 'v_proj', 'bias'), ('text_model', 'encoder', 'layer', '8', 'attention', 'output', 'LayerNorm', 'bias'), ('vision_model', 'vision_model', 'encoder', 'layers', '3', 'mlp', 'fc2', 'bias'), ('vision_model', 'vision_model', 'encoder', 'layers', '6', 'mlp', 'fc1', 'bias'), ('text_model', 'encoder', 'layer', '0', 'output', 'LayerNorm', 'bias'), ('vision_model', 'vision_model', 'encoder', 'layers', '3', 'self_attn', 'k_proj', 'kernel'), ('text_model', 'encoder', 'layer', '7', 'attention', 'self', 'key', 'kernel'), ('text_model', 'encoder', 'layer', '9', 'attention', 'self', 'query', 'bias'), ('text_model', 'encoder', 'layer', '10', 'attention', 'self', 'query', 'kernel'), ('vision_model', 'vision_model', 'encoder', 'layers', '5', 'self_attn', 'q_proj', 'kernel'), ('vision_model', 'vision_model', 'encoder', 'layers', '8', 'mlp', 'fc1', 'bias'), ('vision_model', 'vision_model', 'encoder', 'layers', '7', 'mlp', 'fc2', 'bias'), ('text_model', 'encoder', 'layer', '10', 'attention', 'output', 'LayerNorm', 'scale'), ('vision_model', 'vision_model', 'encoder', 'layers', '8', 'self_attn', 'out_proj', 'kernel'), ('text_model', 'encoder', 'layer', '5', 'attention', 'self', 'value', 'kernel'), ('text_model', 'encoder', 'layer', '3', 'output', 
'dense', 'bias'), ('vision_model', 'vision_model', 'encoder', 'layers', '2', 'self_attn', 'v_proj', 'kernel'), ('text_model', 'encoder', 'layer', '0', 'attention', 'self', 'query', 'kernel'), ('vision_model', 'vision_model', 'encoder', 'layers', '4', 'layer_norm1', 'scale'), ('vision_model', 'vision_model', 'encoder', 'layers', '11', 'layer_norm2', 'scale'), ('vision_model', 'vision_model', 'encoder', 'layers', '3', 'self_attn', 'q_proj', 'kernel'), ('text_model', 'encoder', 'layer', '7', 'attention', 'self', 'key', 'bias'), ('text_model', 'encoder', 'layer', '4', 'intermediate', 'dense', 'kernel'), ('vision_model', 'vision_model', 'encoder', 'layers', '11', 'self_attn', 'k_proj', 'bias'), ('text_model', 'encoder', 'layer', '7', 'attention', 'self', 'query', 'bias'), ('text_model', 'encoder', 'layer', '8', 'output', 'LayerNorm', 'bias'), ('vision_model', 'vision_model', 'encoder', 'layers', '1', 'self_attn', 'q_proj', 'kernel'), ('text_model', 'encoder', 'layer', '4', 'attention', 'self', 'value', 'kernel'), ('text_model', 'encoder', 'layer', '10', 'attention', 'self', 'value', 'bias'), ('text_model', 'encoder', 'layer', '9', 'intermediate', 'dense', 'bias'), ('vision_model', 'vision_model', 'encoder', 'layers', '6', 'self_attn', 'out_proj', 'bias'), ('vision_model', 'vision_model', 'encoder', 'layers', '7', 'mlp', 'fc1', 'kernel'), ('text_model', 'encoder', 'layer', '2', 'attention', 'self', 'value', 'bias'), ('vision_model', 'vision_model', 'encoder', 'layers', '11', 'self_attn', 'v_proj', 'kernel'), ('vision_model', 'vision_model', 'encoder', 'layers', '6', 'mlp', 'fc2', 'bias'), ('text_model', 'encoder', 'layer', '0', 'output', 'dense', 'kernel'), ('vision_model', 'vision_model', 'encoder', 'layers', '8', 'layer_norm1', 'bias'), ('vision_model', 'vision_model', 'pre_layrnorm', 'bias'), ('vision_model', 'vision_model', 'encoder', 'layers', '7', 'layer_norm1', 'bias'), ('vision_model', 'vision_model', 'encoder', 'layers', '11', 'self_attn', 'q_proj', 'bias'), ('vision_model', 'vision_model', 'encoder', 'layers', '4', 'self_attn', 'k_proj', 'kernel'), ('text_model', 'encoder', 'layer', '8', 'attention', 'output', 'dense', 'kernel'), ('text_model', 'encoder', 'layer', '6', 'output', 'dense', 'bias'), ('text_model', 'encoder', 'layer', '0', 'attention', 'output', 'dense', 'kernel'), ('vision_model', 'vision_model', 'encoder', 'layers', '4', 'mlp', 'fc2', 'kernel'), ('vision_model', 'vision_model', 'encoder', 'layers', '9', 'mlp', 'fc1', 'bias'), ('text_model', 'encoder', 'layer', '6', 'attention', 'self', 'key', 'bias'), ('vision_model', 'vision_model', 'encoder', 'layers', '8', 'self_attn', 'k_proj', 'bias'), ('text_model', 'encoder', 'layer', '8', 'output', 'dense', 'kernel'), ('text_model', 'encoder', 'layer', '2', 'attention', 'output', 'dense', 'kernel'), ('text_model', 'encoder', 'layer', '0', 'attention', 'self', 'key', 'kernel'), ('text_model', 'encoder', 'layer', '11', 'attention', 'output', 'dense', 'bias'), ('text_model', 'encoder', 'layer', '0', 'output', 'dense', 'bias'), ('vision_model', 'vision_model', 'encoder', 'layers', '2', 'mlp', 'fc2', 'bias'), ('text_model', 'encoder', 'layer', '7', 'output', 'dense', 'bias'), ('text_model', 'encoder', 'layer', '9', 'intermediate', 'dense', 'kernel'), ('text_model', 'encoder', 'layer', '5', 'output', 'LayerNorm', 'scale'), ('vision_model', 'vision_model', 'encoder', 'layers', '4', 'layer_norm2', 'bias'), ('vision_model', 'vision_model', 'encoder', 'layers', '8', 'layer_norm2', 'bias'), ('text_model', 'encoder', 'layer', '10', 
'attention', 'output', 'dense', 'kernel'), ('text_model', 'encoder', 'layer', '6', 'attention', 'output', 'LayerNorm', 'bias'), ('text_model', 'encoder', 'layer', '1', 'attention', 'output', 'LayerNorm', 'scale'), ('vision_model', 'vision_model', 'encoder', 'layers', '10', 'self_attn', 'v_proj', 'bias'), ('vision_model', 'vision_model', 'encoder', 'layers', '5', 'self_attn', 'out_proj', 'kernel'), ('vision_model', 'vision_model', 'encoder', 'layers', '1', 'mlp', 'fc2', 'kernel'), ('vision_model', 'vision_model', 'encoder', 'layers', '4', 'self_attn', 'out_proj', 'kernel'), ('vision_model', 'vision_model', 'encoder', 'layers', '7', 'self_attn', 'out_proj', 'kernel'), ('vision_model', 'vision_model', 'encoder', 'layers', '11', 'self_attn', 'out_proj', 'bias'), ('vision_model', 'vision_model', 'encoder', 'layers', '3', 'mlp', 'fc1', 'bias'), ('vision_model', 'vision_model', 'encoder', 'layers', '11', 'layer_norm2', 'bias'), ('vision_model', 'vision_model', 'encoder', 'layers', '6', 'layer_norm1', 'bias'), ('vision_model', 'vision_model', 'encoder', 'layers', '7', 'self_attn', 'k_proj', 'bias'), ('vision_model', 'vision_model', 'encoder', 'layers', '4', 'mlp', 'fc1', 'bias'), ('text_model', 'encoder', 'layer', '6', 'attention', 'self', 'query', 'kernel'), ('text_model', 'encoder', 'layer', '0', 'attention', 'output', 'LayerNorm', 'scale'), ('text_model', 'encoder', 'layer', '5', 'attention', 'self', 'value', 'bias'), ('text_model', 'encoder', 'layer', '1', 'output', 'dense', 'kernel'), ('vision_model', 'vision_model', 'encoder', 'layers', '9', 'layer_norm2', 'scale'), ('vision_model', 'vision_model', 'encoder', 'layers', '9', 'self_attn', 'q_proj', 'kernel'), ('vision_model', 'vision_model', 'encoder', 'layers', '9', 'mlp', 'fc2', 'bias'), ('text_model', 'encoder', 'layer', '7', 'output', 'LayerNorm', 'scale'), ('text_model', 'encoder', 'layer', '11', 'attention', 'self', 'query', 'bias'), ('text_model', 'encoder', 'layer', '5', 'output', 'LayerNorm', 'bias'), ('vision_model', 'vision_model', 'encoder', 'layers', '4', 'layer_norm2', 'scale'), ('vision_model', 'vision_model', 'encoder', 'layers', '0', 'mlp', 'fc2', 'kernel'), ('vision_model', 'vision_model', 'encoder', 'layers', '10', 'layer_norm2', 'bias'), ('vision_model', 'vision_model', 'encoder', 'layers', '0', 'self_attn', 'out_proj', 'kernel'), ('vision_model', 'vision_model', 'encoder', 'layers', '2', 'mlp', 'fc2', 'kernel'), ('vision_model', 'vision_model', 'encoder', 'layers', '2', 'self_attn', 'out_proj', 'kernel'), ('text_model', 'encoder', 'layer', '4', 'output', 'dense', 'bias'), ('text_model', 'encoder', 'layer', '9', 'attention', 'output', 'dense', 'kernel'), ('text_model', 'encoder', 'layer', '7', 'attention', 'output', 'dense', 'bias'), ('text_model', 'encoder', 'layer', '1', 'attention', 'self', 'value', 'bias'), ('vision_model', 'vision_model', 'encoder', 'layers', '5', 'mlp', 'fc2', 'kernel'), ('text_model', 'encoder', 'layer', '7', 'intermediate', 'dense', 'bias'), ('text_model', 'encoder', 'layer', '1', 'attention', 'output', 'LayerNorm', 'bias'), ('vision_model', 'vision_model', 'encoder', 'layers', '6', 'layer_norm1', 'scale'), ('text_model', 'encoder', 'layer', '9', 'attention', 'output', 'dense', 'bias'), ('text_model', 'encoder', 'layer', '4', 'attention', 'self', 'key', 'bias'), ('text_model', 'encoder', 'layer', '7', 'attention', 'output', 'LayerNorm', 'bias'), ('vision_model', 'vision_model', 'encoder', 'layers', '0', 'self_attn', 'out_proj', 'bias'), ('vision_model', 'vision_model', 'embeddings', 
'class_embedding'), ('text_model', 'encoder', 'layer', '4', 'intermediate', 'dense', 'bias'), ('vision_model', 'vision_model', 'encoder', 'layers', '5', 'mlp', 'fc2', 'bias'), ('text_model', 'encoder', 'layer', '5', 'attention', 'output', 'dense', 'kernel'), ('vision_model', 'vision_model', 'encoder', 'layers', '10', 'self_attn', 'q_proj', 'kernel'), ('text_model', 'encoder', 'layer', '11', 'attention', 'self', 'value', 'kernel'), ('text_model', 'encoder', 'layer', '4', 'attention', 'self', 'query', 'bias'), ('vision_model', 'vision_model', 'encoder', 'layers', '8', 'layer_norm1', 'scale'), ('text_model', 'encoder', 'layer', '3', 'attention', 'self', 'query', 'bias'), ('vision_model', 'vision_model', 'encoder', 'layers', '2', 'mlp', 'fc1', 'bias'), ('vision_model', 'vision_model', 'encoder', 'layers', '3', 'self_attn', 'out_proj', 'kernel'), ('text_model', 'encoder', 'layer', '5', 'attention', 'self', 'key', 'bias'), ('vision_model', 'vision_model', 'encoder', 'layers', '4', 'mlp', 'fc1', 'kernel'), ('text_model', 'encoder', 'layer', '2', 'attention', 'self', 'value', 'kernel'), ('vision_model', 'vision_model', 'encoder', 'layers', '6', 'mlp', 'fc2', 'kernel'), ('text_model', 'encoder', 'layer', '0', 'attention', 'output', 'LayerNorm', 'bias'), ('text_model', 'encoder', 'layer', '4', 'attention', 'self', 'value', 'bias'), ('vision_model', 'vision_model', 'encoder', 'layers', '7', 'mlp', 'fc1', 'bias'), ('vision_model', 'vision_model', 'encoder', 'layers', '10', 'self_attn', 'k_proj', 'kernel'), ('vision_model', 'vision_model', 'encoder', 'layers', '7', 'self_attn', 'q_proj', 'kernel'), ('text_model', 'encoder', 'layer', '1', 'attention', 'self', 'query', 'bias'), ('text_model', 'encoder', 'layer', '2', 'output', 'dense', 'kernel'), ('vision_model', 'vision_model', 'encoder', 'layers', '10', 'mlp', 'fc2', 'kernel'), ('vision_model', 'vision_model', 'encoder', 'layers', '1', 'self_attn', 'v_proj', 'kernel'), ('vision_model', 'vision_model', 'encoder', 'layers', '7', 'self_attn', 'k_proj', 'kernel'), ('text_model', 'encoder', 'layer', '6', 'intermediate', 'dense', 'bias'), ('vision_model', 'vision_model', 'encoder', 'layers', '3', 'self_attn', 'out_proj', 'bias'), ('vision_model', 'vision_model', 'encoder', 'layers', '11', 'layer_norm1', 'scale'), ('vision_model', 'vision_model', 'encoder', 'layers', '1', 'self_attn', 'v_proj', 'bias'), ('vision_model', 'vision_model', 'encoder', 'layers', '9', 'self_attn', 'out_proj', 'kernel'), ('text_model', 'encoder', 'layer', '1', 'intermediate', 'dense', 'kernel'), ('text_model', 'encoder', 'layer', '8', 'attention', 'self', 'query', 'kernel'), ('text_model', 'encoder', 'layer', '7', 'intermediate', 'dense', 'kernel'), ('text_model', 'encoder', 'layer', '5', 'attention', 'self', 'key', 'kernel'), ('text_model', 'encoder', 'layer', '5', 'attention', 'output', 'dense', 'bias'), ('vision_model', 'vision_model', 'encoder', 'layers', '10', 'layer_norm1', 'bias'), ('vision_model', 'vision_model', 'encoder', 'layers', '8', 'self_attn', 'v_proj', 'bias'), ('text_model', 'encoder', 'layer', '4', 'attention', 'self', 'key', 'kernel'), ('vision_model', 'vision_model', 'encoder', 'layers', '8', 'self_attn', 'k_proj', 'kernel'), ('text_model', 'encoder', 'layer', '9', 'attention', 'self', 'value', 'bias'), ('text_model', 'encoder', 'layer', '0', 'intermediate', 'dense', 'kernel'), ('vision_model', 'vision_model', 'encoder', 'layers', '2', 'self_attn', 'q_proj', 'kernel'), ('vision_model', 'vision_model', 'encoder', 'layers', '6', 'self_attn', 'k_proj', 'kernel'), 
('text_model', 'encoder', 'layer', '3', 'intermediate', 'dense', 'bias'), ('vision_model', 'vision_model', 'encoder', 'layers', '0', 'layer_norm2', 'scale'), ('vision_model', 'vision_model', 'encoder', 'layers', '0', 'self_attn', 'k_proj', 'kernel'), ('vision_model', 'vision_model', 'encoder', 'layers', '3', 'layer_norm1', 'scale'), ('text_model', 'encoder', 'layer', '6', 'output', 'dense', 'kernel'), ('vision_model', 'vision_model', 'encoder', 'layers', '6', 'layer_norm2', 'bias'), ('vision_model', 'vision_model', 'encoder', 'layers', '8', 'self_attn', 'q_proj', 'kernel'), ('vision_model', 'vision_model', 'encoder', 'layers', '8', 'self_attn', 'v_proj', 'kernel'), ('text_model', 'encoder', 'layer', '9', 'attention', 'self', 'key', 'bias'), ('vision_model', 'vision_model', 'encoder', 'layers', '2', 'layer_norm1', 'scale'), ('text_model', 'encoder', 'layer', '2', 'output', 'LayerNorm', 'scale'), ('text_model', 'encoder', 'layer', '3', 'attention', 'self', 'value', 'bias'), ('text_model', 'encoder', 'layer', '9', 'output', 'dense', 'bias'), ('text_model', 'encoder', 'layer', '8', 'attention', 'self', 'value', 'bias'), ('text_model', 'encoder', 'layer', '1', 'output', 'LayerNorm', 'scale'), ('vision_model', 'vision_model', 'encoder', 'layers', '7', 'self_attn', 'v_proj', 'bias'), ('text_model', 'encoder', 'layer', '3', 'output', 'LayerNorm', 'bias'), ('text_model', 'embeddings', 'LayerNorm', 'bias'), ('text_model', 'encoder', 'layer', '6', 'intermediate', 'dense', 'kernel'), ('vision_model', 'vision_model', 'encoder', 'layers', '5', 'layer_norm1', 'scale'), ('vision_model', 'vision_model', 'encoder', 'layers', '4', 'self_attn', 'q_proj', 'kernel'), ('text_model', 'encoder', 'layer', '3', 'attention', 'output', 'dense', 'bias'), ('text_model', 'encoder', 'layer', '6', 'attention', 'self', 'value', 'kernel'), ('vision_model', 'vision_model', 'encoder', 'layers', '10', 'self_attn', 'v_proj', 'kernel'), ('text_model', 'encoder', 'layer', '5', 'intermediate', 'dense', 'kernel'), ('text_model', 'encoder', 'layer', '2', 'output', 'LayerNorm', 'bias'), ('vision_model', 'vision_model', 'encoder', 'layers', '8', 'layer_norm2', 'scale'), ('vision_model', 'vision_model', 'encoder', 'layers', '3', 'mlp', 'fc1', 'kernel'), ('vision_model', 'vision_model', 'encoder', 'layers', '7', 'mlp', 'fc2', 'kernel'), ('vision_model', 'vision_model', 'encoder', 'layers', '7', 'self_attn', 'out_proj', 'bias'), ('text_model', 'encoder', 'layer', '1', 'attention', 'self', 'key', 'kernel'), ('vision_model', 'vision_model', 'encoder', 'layers', '0', 'self_attn', 'q_proj', 'bias'), ('vision_model', 'vision_model', 'encoder', 'layers', '6', 'layer_norm2', 'scale'), ('vision_model', 'vision_model', 'encoder', 'layers', '7', 'self_attn', 'q_proj', 'bias'), ('vision_model', 'vision_model', 'encoder', 'layers', '6', 'self_attn', 'q_proj', 'kernel'), ('vision_model', 'vision_model', 'encoder', 'layers', '3', 'self_attn', 'q_proj', 'bias'), ('text_model', 'encoder', 'layer', '8', 'attention', 'output', 'LayerNorm', 'scale'), ('text_model', 'encoder', 'layer', '11', 'intermediate', 'dense', 'kernel'), ('vision_model', 'vision_model', 'encoder', 'layers', '2', 'self_attn', 'v_proj', 'bias'), ('vision_model', 'vision_model', 'encoder', 'layers', '3', 'mlp', 'fc2', 'kernel'), ('vision_model', 'vision_model', 'encoder', 'layers', '1', 'mlp', 'fc1', 'kernel'), ('vision_model', 'vision_model', 'encoder', 'layers', '11', 'mlp', 'fc2', 'kernel'), ('text_model', 'encoder', 'layer', '7', 'output', 'dense', 'kernel'), ('text_model', 'encoder', 
'layer', '10', 'attention', 'self', 'query', 'bias'), ('vision_model', 'vision_model', 'encoder', 'layers', '5', 'self_attn', 'q_proj', 'bias'), ('text_model', 'encoder', 'layer', '3', 'output', 'LayerNorm', 'scale'), ('text_model', 'encoder', 'layer', '3', 'intermediate', 'dense', 'kernel'), ('vision_model', 'vision_model', 'encoder', 'layers', '2', 'self_attn', 'k_proj', 'kernel'), ('text_model', 'encoder', 'layer', '3', 'attention', 'output', 'LayerNorm', 'bias'), ('vision_model', 'vision_model', 'encoder', 'layers', '1', 'layer_norm2', 'scale'), ('vision_model', 'vision_model', 'encoder', 'layers', '9', 'layer_norm2', 'bias'), ('text_model', 'pooler', 'dense', 'bias'), ('text_model', 'encoder', 'layer', '2', 'attention', 'output', 'LayerNorm', 'bias'), ('text_model', 'encoder', 'layer', '4', 'attention', 'output', 'dense', 'bias'), ('vision_model', 'vision_model', 'encoder', 'layers', '0', 'mlp', 'fc1', 'bias'), ('vision_model', 'vision_model', 'encoder', 'layers', '2', 'self_attn', 'k_proj', 'bias'), ('vision_model', 'vision_model', 'encoder', 'layers', '8', 'mlp', 'fc2', 'bias'), ('vision_model', 'vision_model', 'encoder', 'layers', '6', 'self_attn', 'v_proj', 'kernel'), ('text_model', 'encoder', 'layer', '2', 'attention', 'output', 'dense', 'bias'), ('vision_model', 'vision_model', 'encoder', 'layers', '9', 'mlp', 'fc1', 'kernel'), ('text_model', 'encoder', 'layer', '4', 'attention', 'output', 'LayerNorm', 'scale'), ('text_model', 'encoder', 'layer', '10', 'attention', 'self', 'key', 'kernel'), ('vision_model', 'vision_model', 'encoder', 'layers', '11', 'mlp', 'fc1', 'bias'), ('vision_model', 'vision_model', 'encoder', 'layers', '3', 'layer_norm2', 'bias'), ('text_model', 'encoder', 'layer', '7', 'output', 'LayerNorm', 'bias'), ('vision_model', 'vision_model', 'encoder', 'layers', '5', 'layer_norm2', 'bias'), ('text_model', 'encoder', 'layer', '0', 'attention', 'self', 'query', 'bias'), ('vision_model', 'vision_model', 'encoder', 'layers', '11', 'mlp', 'fc2', 'bias'), ('text_model', 'encoder', 'layer', '7', 'attention', 'self', 'value', 'kernel'), ('vision_model', 'vision_model', 'encoder', 'layers', '6', 'self_attn', 'q_proj', 'bias'), ('text_model', 'encoder', 'layer', '1', 'attention', 'self', 'query', 'kernel'), ('vision_model', 'vision_model', 'encoder', 'layers', '10', 'mlp', 'fc2', 'bias'), ('text_model', 'pooler', 'dense', 'kernel'), ('text_model', 'encoder', 'layer', '7', 'attention', 'self', 'query', 'kernel'), ('text_model', 'encoder', 'layer', '1', 'attention', 'self', 'key', 'bias'), ('vision_model', 'vision_model', 'encoder', 'layers', '0', 'self_attn', 'q_proj', 'kernel'), ('text_model', 'encoder', 'layer', '1', 'output', 'LayerNorm', 'bias'), ('text_model', 'encoder', 'layer', '7', 'attention', 'self', 'value', 'bias'), ('vision_model', 'vision_model', 'encoder', 'layers', '10', 'mlp', 'fc1', 'kernel'), ('text_model', 'encoder', 'layer', '9', 'output', 'dense', 'kernel'), ('vision_model', 'vision_model', 'encoder', 'layers', '7', 'self_attn', 'v_proj', 'kernel'), ('vision_model', 'vision_model', 'encoder', 'layers', '8', 'mlp', 'fc1', 'kernel'), ('text_model', 'encoder', 'layer', '11', 'attention', 'self', 'value', 'bias'), ('text_model', 'encoder', 'layer', '4', 'attention', 'self', 'query', 'kernel'), ('text_model', 'encoder', 'layer', '6', 'attention', 'self', 'value', 'bias'), ('text_model', 'encoder', 'layer', '9', 'attention', 'self', 'query', 'kernel'), ('vision_model', 'vision_model', 'encoder', 'layers', '4', 'self_attn', 'q_proj', 'bias'), ('vision_model', 
'vision_model', 'encoder', 'layers', '2', 'mlp', 'fc1', 'kernel'), ('text_model', 'encoder', 'layer', '3', 'attention', 'self', 'query', 'kernel'), ('vision_model', 'vision_model', 'encoder', 'layers', '11', 'mlp', 'fc1', 'kernel'), ('text_model', 'encoder', 'layer', '9', 'attention', 'self', 'key', 'kernel'), ('text_model', 'encoder', 'layer', '2', 'attention', 'self', 'query', 'kernel'), ('vision_model', 'vision_model', 'encoder', 'layers', '10', 'self_attn', 'k_proj', 'bias'), ('vision_model', 'vision_model', 'encoder', 'layers', '5', 'layer_norm1', 'bias'), ('text_model', 'encoder', 'layer', '8', 'intermediate', 'dense', 'kernel'), ('text_model', 'encoder', 'layer', '0', 'output', 'LayerNorm', 'scale'), ('vision_model', 'vision_model', 'encoder', 'layers', '4', 'self_attn', 'v_proj', 'kernel'), ('vision_model', 'vision_model', 'encoder', 'layers', '10', 'layer_norm1', 'scale'), ('text_model', 'encoder', 'layer', '9', 'attention', 'self', 'value', 'kernel'), ('text_model', 'encoder', 'layer', '0', 'attention', 'self', 'key', 'bias'), ('text_model', 'encoder', 'layer', '8', 'output', 'LayerNorm', 'scale'), ('text_model', 'encoder', 'layer', '11', 'attention', 'output', 'LayerNorm', 'scale'), ('text_model', 'encoder', 'layer', '5', 'output', 'dense', 'kernel'), ('text_model', 'encoder', 'layer', '9', 'output', 'LayerNorm', 'bias'), ('vision_model', 'vision_model', 'encoder', 'layers', '0', 'layer_norm2', 'bias'), ('text_model', 'encoder', 'layer', '2', 'attention', 'output', 'LayerNorm', 'scale'), ('vision_model', 'vision_model', 'encoder', 'layers', '2', 'self_attn', 'q_proj', 'bias'), ('vision_model', 'vision_model', 'encoder', 'layers', '3', 'layer_norm1', 'bias'), ('vision_model', 'vision_model', 'encoder', 'layers', '4', 'mlp', 'fc2', 'bias'), ('vision_model', 'vision_model', 'encoder', 'layers', '0', 'self_attn', 'v_proj', 'kernel'), ('text_model', 'encoder', 'layer', '8', 'attention', 'self', 'query', 'bias'), ('text_model', 'encoder', 'layer', '10', 'output', 'dense', 'kernel'), ('vision_model', 'vision_model', 'encoder', 'layers', '11', 'self_attn', 'v_proj', 'bias'), ('text_model', 'encoder', 'layer', '9', 'attention', 'output', 'LayerNorm', 'bias'), ('text_model', 'encoder', 'layer', '11', 'attention', 'output', 'LayerNorm', 'bias'), ('text_model', 'encoder', 'layer', '11', 'output', 'dense', 'kernel'), ('vision_model', 'vision_model', 'encoder', 'layers', '1', 'mlp', 'fc2', 'bias'), ('text_model', 'encoder', 'layer', '11', 'output', 'dense', 'bias'), ('text_model', 'encoder', 'layer', '11', 'attention', 'self', 'key', 'bias'), ('text_model', 'encoder', 'layer', '2', 'output', 'dense', 'bias'), ('text_model', 'encoder', 'layer', '10', 'attention', 'output', 'dense', 'bias'), ('text_model', 'encoder', 'layer', '3', 'output', 'dense', 'kernel'), ('vision_model', 'vision_model', 'encoder', 'layers', '3', 'self_attn', 'k_proj', 'bias'), ('vision_model', 'vision_model', 'encoder', 'layers', '11', 'self_attn', 'out_proj', 'kernel'), ('text_model', 'encoder', 'layer', '1', 'attention', 'output', 'dense', 'kernel'), ('text_model', 'encoder', 'layer', '2', 'attention', 'self', 'key', 'bias'), ('text_model', 'encoder', 'layer', '5', 'output', 'dense', 'bias'), ('vision_model', 'vision_model', 'encoder', 'layers', '9', 'self_attn', 'v_proj', 'kernel'), ('text_model', 'encoder', 'layer', '3', 'attention', 'self', 'key', 'kernel'), ('text_model', 'encoder', 'layer', '11', 'attention', 'output', 'dense', 'kernel'), ('vision_model', 'vision_model', 'encoder', 'layers', '1', 'layer_norm2', 
'bias'), ('text_model', 'encoder', 'layer', '10', 'intermediate', 'dense', 'kernel'), ('vision_model', 'vision_model', 'encoder', 'layers', '2', 'layer_norm1', 'bias'), ('text_model', 'encoder', 'layer', '6', 'attention', 'output', 'dense', 'kernel'), ('vision_model', 'vision_model', 'encoder', 'layers', '7', 'layer_norm2', 'bias'), ('vision_model', 'vision_model', 'encoder', 'layers', '5', 'mlp', 'fc1', 'kernel'), ('vision_model', 'vision_model', 'encoder', 'layers', '5', 'self_attn', 'v_proj', 'bias'), ('vision_model', 'vision_model', 'encoder', 'layers', '9', 'self_attn', 'k_proj', 'bias'), ('vision_model', 'vision_model', 'encoder', 'layers', '6', 'self_attn', 'v_proj', 'bias'), ('vision_model', 'vision_model', 'encoder', 'layers', '0', 'mlp', 'fc2', 'bias'), ('text_model', 'encoder', 'layer', '1', 'attention', 'self', 'value', 'kernel'), ('text_model', 'encoder', 'layer', '4', 'output', 'dense', 'kernel'), ('text_model', 'embeddings', 'word_embeddings', 'embedding'), ('text_model', 'encoder', 'layer', '0', 'attention', 'output', 'dense', 'bias'), ('text_model', 'encoder', 'layer', '6', 'attention', 'self', 'query', 'bias'), ('text_model', 'encoder', 'layer', '6', 'attention', 'output', 'dense', 'bias'), ('text_model', 'encoder', 'layer', '8', 'attention', 'output', 'dense', 'bias'), ('text_model', 'encoder', 'layer', '10', 'attention', 'self', 'value', 'kernel'), ('vision_model', 'vision_model', 'encoder', 'layers', '6', 'self_attn', 'out_proj', 'kernel'), ('vision_model', 'vision_model', 'encoder', 'layers', '10', 'layer_norm2', 'scale'), ('text_model', 'encoder', 'layer', '11', 'attention', 'self', 'query', 'kernel'), ('text_model', 'encoder', 'layer', '3', 'attention', 'self', 'key', 'bias'), ('text_model', 'encoder', 'layer', '11', 'output', 'LayerNorm', 'bias'), ('text_model', 'encoder', 'layer', '8', 'attention', 'self', 'key', 'bias'), ('vision_model', 'vision_model', 'encoder', 'layers', '2', 'layer_norm2', 'bias'), ('vision_model', 'vision_model', 'encoder', 'layers', '5', 'self_attn', 'out_proj', 'bias'), ('vision_model', 'vision_model', 'encoder', 'layers', '8', 'self_attn', 'out_proj', 'bias'), ('vision_model', 'vision_model', 'post_layernorm', 'scale'), ('vision_model', 'vision_model', 'encoder', 'layers', '0', 'layer_norm1', 'scale'), ('vision_model', 'vision_model', 'encoder', 'layers', '1', 'mlp', 'fc1', 'bias'), ('text_model', 'encoder', 'layer', '1', 'attention', 'output', 'dense', 'bias'), ('vision_model', 'vision_model', 'encoder', 'layers', '10', 'mlp', 'fc1', 'bias'), ('vision_model', 'vision_model', 'encoder', 'layers', '4', 'layer_norm1', 'bias'), ('vision_model', 'vision_model', 'encoder', 'layers', '1', 'self_attn', 'out_proj', 'bias'), ('vision_model', 'vision_model', 'encoder', 'layers', '6', 'mlp', 'fc1', 'kernel'), ('text_model', 'encoder', 'layer', '8', 'attention', 'self', 'key', 'kernel'), ('text_model', 'encoder', 'layer', '10', 'output', 'dense', 'bias'), ('text_model', 'encoder', 'layer', '5', 'attention', 'self', 'query', 'kernel'), ('text_model', 'encoder', 'layer', '10', 'output', 'LayerNorm', 'scale'), ('vision_model', 'vision_model', 'encoder', 'layers', '10', 'self_attn', 'q_proj', 'bias'), ('text_model', 'encoder', 'layer', '0', 'attention', 'self', 'value', 'bias'), ('vision_model', 'vision_model', 'encoder', 'layers', '0', 'self_attn', 'k_proj', 'bias'), ('vision_model', 'vision_model', 'encoder', 'layers', '1', 'layer_norm1', 'bias'), ('text_model', 'encoder', 'layer', '4', 'output', 'LayerNorm', 'scale'), ('vision_model', 'vision_model', 
'encoder', 'layers', '6', 'self_attn', 'k_proj', 'bias'), ('vision_model', 'vision_model', 'encoder', 'layers', '5', 'layer_norm2', 'scale'), ('vision_model', 'vision_model', 'encoder', 'layers', '7', 'layer_norm1', 'scale'), ('vision_model', 'vision_model', 'encoder', 'layers', '5', 'self_attn', 'k_proj', 'bias'), ('vision_model', 'vision_model', 'encoder', 'layers', '5', 'mlp', 'fc1', 'bias'), ('text_model', 'encoder', 'layer', '6', 'output', 'LayerNorm', 'bias'), ('text_model', 'encoder', 'layer', '3', 'attention', 'self', 'value', 'kernel'), ('vision_model', 'vision_model', 'encoder', 'layers', '4', 'self_attn', 'k_proj', 'bias'), ('vision_model', 'vision_model', 'encoder', 'layers', '11', 'self_attn', 'q_proj', 'kernel'), ('vision_model', 'vision_model', 'encoder', 'layers', '9', 'layer_norm1', 'bias'), ('text_model', 'encoder', 'layer', '10', 'attention', 'output', 'LayerNorm', 'bias'), ('vision_model', 'vision_model', 'encoder', 'layers', '8', 'self_attn', 'q_proj', 'bias'), ('text_model', 'encoder', 'layer', '10', 'output', 'LayerNorm', 'bias'), ('text_model', 'encoder', 'layer', '9', 'output', 'LayerNorm', 'scale'), ('text_model', 'encoder', 'layer', '6', 'attention', 'self', 'key', 'kernel'), ('vision_model', 'vision_model', 'encoder', 'layers', '0', 'self_attn', 'v_proj', 'bias'), ('vision_model', 'vision_model', 'encoder', 'layers', '5', 'self_attn', 'k_proj', 'kernel'), ('text_model', 'encoder', 'layer', '1', 'intermediate', 'dense', 'bias'), ('vision_model', 'vision_model', 'encoder', 'layers', '11', 'layer_norm1', 'bias'), ('vision_model', 'vision_model', 'encoder', 'layers', '2', 'self_attn', 'out_proj', 'bias'), ('vision_model', 'vision_model', 'encoder', 'layers', '7', 'layer_norm2', 'scale'), ('text_model', 'encoder', 'layer', '3', 'attention', 'output', 'LayerNorm', 'scale'), ('text_model', 'encoder', 'layer', '8', 'intermediate', 'dense', 'bias'), ('text_model', 'encoder', 'layer', '5', 'attention', 'self', 'query', 'bias'), ('text_model', 'encoder', 'layer', '0', 'attention', 'self', 'value', 'kernel'), ('vision_model', 'vision_model', 'encoder', 'layers', '4', 'self_attn', 'v_proj', 'bias'), ('text_model', 'encoder', 'layer', '11', 'intermediate', 'dense', 'bias'), ('text_model', 'encoder', 'layer', '2', 'attention', 'self', 'key', 'kernel'), ('vision_model', 'vision_model', 'encoder', 'layers', '1', 'self_attn', 'q_proj', 'bias'), ('vision_model', 'vision_model', 'encoder', 'layers', '1', 'self_attn', 'out_proj', 'kernel'), ('text_model', 'encoder', 'layer', '11', 'attention', 'self', 'key', 'kernel'), ('text_model', 'encoder', 'layer', '11', 'output', 'LayerNorm', 'scale'), ('vision_model', 'vision_model', 'encoder', 'layers', '11', 'self_attn', 'k_proj', 'kernel'), ('text_model', 'encoder', 'layer', '5', 'intermediate', 'dense', 'bias')} - This IS expected if you are initializing FlaxCLIPModel from the checkpoint of a model trained on another task or with another architecture (e.g. initializing a BertForSequenceClassification model from a BertForPreTraining model). - This IS NOT expected if you are initializing FlaxCLIPModel from the checkpoint of a model that you expect to be exactly identical (initializing a BertForSequenceClassification model from a BertForSequenceClassification model). 
Some weights of FlaxCLIPModel were not initialized from the model checkpoint at clip-italian/clip-italian and are newly initialized: {('text_model', 'final_layer_norm', 'scale'), ('vision_model', 'encoder', 'layers', '7', 'self_attn', 'v_proj', 'bias'), ('vision_model', 'encoder', 'layers', '6', 'layer_norm1', 'bias'), ('vision_model', 'encoder', 'layers', '1', 'self_attn', 'out_proj', 'bias'), ('text_model', 'encoder', 'layers', '10', 'self_attn', 'q_proj', 'bias'), ('vision_model', 'encoder', 'layers', '11', 'self_attn', 'v_proj', 'kernel'), ('text_model', 'encoder', 'layers', '7', 'layer_norm1', 'bias'), ('text_model', 'encoder', 'layers', '9', 'self_attn', 'out_proj', 'kernel'), ('vision_model', 'encoder', 'layers', '7', 'self_attn', 'out_proj', 'kernel'), ('text_model', 'encoder', 'layers', '5', 'mlp', 'fc1', 'bias'), ('vision_model', 'encoder', 'layers', '9', 'layer_norm1', 'scale'), ('vision_model', 'encoder', 'layers', '8', 'self_attn', 'k_proj', 'bias'), ('vision_model', 'encoder', 'layers', '3', 'self_attn', 'q_proj', 'bias'), ('vision_model', 'encoder', 'layers', '6', 'self_attn', 'q_proj', 'bias'), ('text_model', 'encoder', 'layers', '8', 'layer_norm2', 'bias'), ('vision_model', 'encoder', 'layers', '8', 'self_attn', 'out_proj', 'kernel'), ('text_model', 'encoder', 'layers', '7', 'mlp', 'fc1', 'bias'), ('text_model', 'encoder', 'layers', '9', 'layer_norm2', 'bias'), ('text_model', 'encoder', 'layers', '5', 'self_attn', 'v_proj', 'bias'), ('text_model', 'encoder', 'layers', '11', 'layer_norm2', 'scale'), ('text_model', 'encoder', 'layers', '2', 'mlp', 'fc1', 'bias'), ('vision_model', 'encoder', 'layers', '11', 'layer_norm2', 'bias'), ('vision_model', 'encoder', 'layers', '2', 'layer_norm2', 'bias'), ('text_model', 'encoder', 'layers', '11', 'self_attn', 'out_proj', 'kernel'), ('vision_model', 'encoder', 'layers', '9', 'self_attn', 'v_proj', 'kernel'), ('text_model', 'encoder', 'layers', '8', 'layer_norm2', 'scale'), ('text_model', 'encoder', 'layers', '2', 'self_attn', 'v_proj', 'kernel'), ('vision_model', 'encoder', 'layers', '0', 'self_attn', 'v_proj', 'bias'), ('text_model', 'encoder', 'layers', '11', 'self_attn', 'v_proj', 'kernel'), ('vision_model', 'encoder', 'layers', '8', 'self_attn', 'k_proj', 'kernel'), ('text_model', 'encoder', 'layers', '10', 'self_attn', 'out_proj', 'bias'), ('text_model', 'encoder', 'layers', '9', 'layer_norm1', 'scale'), ('text_model', 'encoder', 'layers', '2', 'mlp', 'fc1', 'kernel'), ('vision_model', 'encoder', 'layers', '0', 'layer_norm1', 'scale'), ('vision_model', 'encoder', 'layers', '0', 'self_attn', 'out_proj', 'kernel'), ('text_model', 'encoder', 'layers', '3', 'layer_norm2', 'bias'), ('text_model', 'encoder', 'layers', '8', 'self_attn', 'v_proj', 'bias'), ('vision_model', 'encoder', 'layers', '1', 'self_attn', 'out_proj', 'kernel'), ('vision_model', 'encoder', 'layers', '7', 'self_attn', 'q_proj', 'bias'), ('text_model', 'encoder', 'layers', '6', 'self_attn', 'q_proj', 'kernel'), ('text_model', 'encoder', 'layers', '9', 'layer_norm2', 'scale'), ('text_model', 'encoder', 'layers', '5', 'self_attn', 'q_proj', 'kernel'), ('text_model', 'encoder', 'layers', '4', 'self_attn', 'q_proj', 'kernel'), ('text_model', 'encoder', 'layers', '9', 'layer_norm1', 'bias'), ('vision_model', 'encoder', 'layers', '2', 'mlp', 'fc1', 'kernel'), ('vision_model', 'encoder', 'layers', '6', 'layer_norm2', 'scale'), ('text_model', 'encoder', 'layers', '3', 'mlp', 'fc2', 'kernel'), ('text_model', 'encoder', 'layers', '7', 'self_attn', 'q_proj', 'bias'), 
('text_model', 'encoder', 'layers', '2', 'layer_norm2', 'scale'), ('text_model', 'encoder', 'layers', '1', 'self_attn', 'v_proj', 'kernel'), ('vision_model', 'encoder', 'layers', '10', 'self_attn', 'v_proj', 'bias'), ('vision_model', 'encoder', 'layers', '5', 'layer_norm2', 'bias'), ('vision_model', 'encoder', 'layers', '3', 'self_attn', 'k_proj', 'bias'), ('vision_model', 'encoder', 'layers', '4', 'self_attn', 'q_proj', 'bias'), ('text_model', 'encoder', 'layers', '0', 'self_attn', 'out_proj', 'kernel'), ('text_model', 'embeddings', 'token_embedding', 'embedding'), ('text_model', 'encoder', 'layers', '5', 'self_attn', 'k_proj', 'bias'), ('text_model', 'encoder', 'layers', '5', 'layer_norm1', 'bias'), ('text_model', 'encoder', 'layers', '2', 'layer_norm2', 'bias'), ('vision_model', 'post_layernorm', 'bias'), ('text_model', 'encoder', 'layers', '8', 'self_attn', 'k_proj', 'bias'), ('vision_model', 'encoder', 'layers', '5', 'self_attn', 'q_proj', 'bias'), ('vision_model', 'encoder', 'layers', '10', 'self_attn', 'out_proj', 'bias'), ('text_model', 'encoder', 'layers', '5', 'layer_norm2', 'bias'), ('vision_model', 'encoder', 'layers', '4', 'mlp', 'fc2', 'bias'), ('vision_model', 'encoder', 'layers', '2', 'self_attn', 'q_proj', 'bias'), ('text_model', 'encoder', 'layers', '10', 'self_attn', 'k_proj', 'kernel'), ('vision_model', 'encoder', 'layers', '7', 'mlp', 'fc1', 'kernel'), ('text_model', 'encoder', 'layers', '0', 'layer_norm1', 'scale'), ('vision_model', 'encoder', 'layers', '4', 'layer_norm2', 'scale'), ('vision_model', 'encoder', 'layers', '0', 'self_attn', 'v_proj', 'kernel'), ('vision_model', 'encoder', 'layers', '6', 'self_attn', 'k_proj', 'bias'), ('vision_model', 'encoder', 'layers', '8', 'mlp', 'fc1', 'kernel'), ('text_model', 'encoder', 'layers', '10', 'self_attn', 'v_proj', 'bias'), ('vision_model', 'encoder', 'layers', '4', 'mlp', 'fc1', 'bias'), ('vision_model', 'encoder', 'layers', '9', 'self_attn', 'out_proj', 'kernel'), ('text_model', 'encoder', 'layers', '1', 'layer_norm2', 'bias'), ('text_model', 'encoder', 'layers', '7', 'mlp', 'fc2', 'kernel'), ('vision_model', 'encoder', 'layers', '10', 'self_attn', 'k_proj', 'bias'), ('text_model', 'encoder', 'layers', '0', 'mlp', 'fc2', 'bias'), ('vision_model', 'encoder', 'layers', '10', 'layer_norm2', 'scale'), ('text_model', 'encoder', 'layers', '9', 'mlp', 'fc1', 'kernel'), ('text_model', 'encoder', 'layers', '3', 'mlp', 'fc1', 'kernel'), ('text_model', 'encoder', 'layers', '9', 'mlp', 'fc1', 'bias'), ('vision_model', 'encoder', 'layers', '1', 'layer_norm1', 'bias'), ('text_model', 'encoder', 'layers', '7', 'layer_norm2', 'scale'), ('text_model', 'encoder', 'layers', '11', 'mlp', 'fc1', 'bias'), ('text_model', 'encoder', 'layers', '6', 'self_attn', 'v_proj', 'bias'), ('text_model', 'encoder', 'layers', '0', 'self_attn', 'k_proj', 'kernel'), ('text_model', 'encoder', 'layers', '0', 'mlp', 'fc2', 'kernel'), ('vision_model', 'encoder', 'layers', '11', 'self_attn', 'v_proj', 'bias'), ('text_model', 'encoder', 'layers', '6', 'self_attn', 'v_proj', 'kernel'), ('vision_model', 'encoder', 'layers', '8', 'layer_norm1', 'bias'), ('vision_model', 'encoder', 'layers', '5', 'mlp', 'fc2', 'kernel'), ('text_model', 'encoder', 'layers', '7', 'self_attn', 'q_proj', 'kernel'), ('vision_model', 'encoder', 'layers', '10', 'self_attn', 'v_proj', 'kernel'), ('text_model', 'encoder', 'layers', '9', 'self_attn', 'q_proj', 'kernel'), ('vision_model', 'encoder', 'layers', '7', 'layer_norm2', 'bias'), ('text_model', 'encoder', 'layers', '3', 'self_attn', 
'q_proj', 'kernel'), ('text_model', 'encoder', 'layers', '11', 'self_attn', 'q_proj', 'bias'), ('vision_model', 'encoder', 'layers', '1', 'self_attn', 'q_proj', 'bias'), ('vision_model', 'encoder', 'layers', '3', 'mlp', 'fc2', 'kernel'), ('text_model', 'encoder', 'layers', '4', 'self_attn', 'q_proj', 'bias'), ('text_model', 'encoder', 'layers', '3', 'layer_norm1', 'scale'), ('vision_model', 'encoder', 'layers', '2', 'self_attn', 'v_proj', 'bias'), ('vision_model', 'encoder', 'layers', '6', 'self_attn', 'v_proj', 'kernel'), ('text_model', 'encoder', 'layers', '9', 'self_attn', 'v_proj', 'bias'), ('vision_model', 'encoder', 'layers', '4', 'self_attn', 'v_proj', 'bias'), ('text_model', 'encoder', 'layers', '3', 'self_attn', 'out_proj', 'kernel'), ('vision_model', 'encoder', 'layers', '11', 'layer_norm2', 'scale'), ('text_model', 'encoder', 'layers', '4', 'self_attn', 'v_proj', 'bias'), ('text_model', 'encoder', 'layers', '0', 'self_attn', 'q_proj', 'kernel'), ('text_model', 'encoder', 'layers', '2', 'self_attn', 'out_proj', 'bias'), ('vision_model', 'encoder', 'layers', '1', 'self_attn', 'k_proj', 'bias'), ('text_model', 'encoder', 'layers', '2', 'self_attn', 'k_proj', 'bias'), ('text_model', 'encoder', 'layers', '6', 'mlp', 'fc1', 'bias'), ('vision_model', 'encoder', 'layers', '5', 'layer_norm1', 'bias'), ('text_model', 'encoder', 'layers', '0', 'layer_norm2', 'bias'), ('vision_model', 'encoder', 'layers', '4', 'layer_norm1', 'bias'), ('vision_model', 'encoder', 'layers', '4', 'mlp', 'fc1', 'kernel'), ('text_model', 'encoder', 'layers', '2', 'layer_norm1', 'scale'), ('text_model', 'encoder', 'layers', '0', 'mlp', 'fc1', 'bias'), ('vision_model', 'encoder', 'layers', '0', 'self_attn', 'q_proj', 'kernel'), ('text_model', 'encoder', 'layers', '10', 'mlp', 'fc1', 'kernel'), ('text_model', 'encoder', 'layers', '10', 'mlp', 'fc1', 'bias'), ('vision_model', 'post_layernorm', 'scale'), ('text_model', 'encoder', 'layers', '7', 'self_attn', 'v_proj', 'bias'), ('vision_model', 'encoder', 'layers', '10', 'self_attn', 'q_proj', 'bias'), ('text_model', 'encoder', 'layers', '7', 'layer_norm2', 'bias'), ('text_model', 'encoder', 'layers', '1', 'self_attn', 'q_proj', 'bias'), ('vision_model', 'encoder', 'layers', '9', 'self_attn', 'k_proj', 'kernel'), ('vision_model', 'encoder', 'layers', '8', 'layer_norm2', 'bias'), ('vision_model', 'encoder', 'layers', '6', 'self_attn', 'v_proj', 'bias'), ('text_model', 'encoder', 'layers', '9', 'mlp', 'fc2', 'kernel'), ('vision_model', 'encoder', 'layers', '2', 'self_attn', 'v_proj', 'kernel'), ('text_model', 'encoder', 'layers', '11', 'mlp', 'fc1', 'kernel'), ('vision_model', 'encoder', 'layers', '3', 'mlp', 'fc2', 'bias'), ('vision_model', 'encoder', 'layers', '0', 'mlp', 'fc1', 'kernel'), ('text_model', 'encoder', 'layers', '5', 'self_attn', 'q_proj', 'bias'), ('vision_model', 'encoder', 'layers', '7', 'layer_norm1', 'bias'), ('text_model', 'encoder', 'layers', '3', 'self_attn', 'v_proj', 'bias'), ('text_model', 'encoder', 'layers', '8', 'self_attn', 'v_proj', 'kernel'), ('vision_model', 'encoder', 'layers', '7', 'mlp', 'fc2', 'kernel'), ('text_model', 'encoder', 'layers', '1', 'mlp', 'fc2', 'kernel'), ('vision_model', 'encoder', 'layers', '2', 'layer_norm1', 'bias'), ('vision_model', 'encoder', 'layers', '2', 'layer_norm1', 'scale'), ('vision_model', 'encoder', 'layers', '5', 'self_attn', 'k_proj', 'kernel'), ('vision_model', 'encoder', 'layers', '9', 'self_attn', 'k_proj', 'bias'), ('vision_model', 'encoder', 'layers', '2', 'mlp', 'fc1', 'bias'), ('text_model', 
'encoder', 'layers', '8', 'mlp', 'fc2', 'kernel'), ('logit_scale',), ('vision_model', 'encoder', 'layers', '5', 'layer_norm2', 'scale'), ('vision_model', 'encoder', 'layers', '7', 'self_attn', 'q_proj', 'kernel'), ('vision_model', 'embeddings', 'patch_embedding', 'kernel'), ('vision_model', 'encoder', 'layers', '3', 'layer_norm2', 'scale'), ('text_model', 'encoder', 'layers', '3', 'self_attn', 'k_proj', 'kernel'), ('text_model', 'encoder', 'layers', '8', 'self_attn', 'q_proj', 'bias'), ('text_model', 'encoder', 'layers', '2', 'mlp', 'fc2', 'kernel'), ('vision_model', 'encoder', 'layers', '11', 'self_attn', 'q_proj', 'kernel'), ('text_model', 'encoder', 'layers', '7', 'mlp', 'fc2', 'bias'), ('vision_model', 'encoder', 'layers', '10', 'mlp', 'fc1', 'kernel'), ('text_model', 'encoder', 'layers', '10', 'layer_norm2', 'scale'), ('text_model', 'encoder', 'layers', '11', 'self_attn', 'v_proj', 'bias'), ('text_model', 'encoder', 'layers', '3', 'self_attn', 'out_proj', 'bias'), ('text_model', 'encoder', 'layers', '2', 'self_attn', 'v_proj', 'bias'), ('vision_model', 'encoder', 'layers', '0', 'self_attn', 'k_proj', 'kernel'), ('text_model', 'encoder', 'layers', '4', 'self_attn', 'k_proj', 'bias'), ('vision_model', 'encoder', 'layers', '1', 'self_attn', 'v_proj', 'bias'), ('vision_model', 'encoder', 'layers', '11', 'self_attn', 'q_proj', 'bias'), ('vision_model', 'encoder', 'layers', '8', 'layer_norm1', 'scale'), ('text_model', 'encoder', 'layers', '10', 'self_attn', 'v_proj', 'kernel'), ('text_model', 'encoder', 'layers', '5', 'mlp', 'fc2', 'bias'), ('text_model', 'encoder', 'layers', '5', 'mlp', 'fc1', 'kernel'), ('text_model', 'encoder', 'layers', '9', 'self_attn', 'k_proj', 'bias'), ('text_model', 'encoder', 'layers', '7', 'mlp', 'fc1', 'kernel'), ('text_model', 'final_layer_norm', 'bias'), ('text_model', 'encoder', 'layers', '1', 'self_attn', 'out_proj', 'bias'), ('text_model', 'encoder', 'layers', '2', 'mlp', 'fc2', 'bias'), ('text_model', 'encoder', 'layers', '9', 'self_attn', 'q_proj', 'bias'), ('vision_model', 'encoder', 'layers', '2', 'self_attn', 'k_proj', 'bias'), ('vision_model', 'encoder', 'layers', '7', 'self_attn', 'k_proj', 'kernel'), ('vision_model', 'encoder', 'layers', '9', 'mlp', 'fc2', 'bias'), ('text_model', 'encoder', 'layers', '4', 'layer_norm2', 'scale'), ('text_model', 'encoder', 'layers', '6', 'self_attn', 'out_proj', 'bias'), ('text_model', 'encoder', 'layers', '11', 'mlp', 'fc2', 'kernel'), ('text_model', 'encoder', 'layers', '3', 'self_attn', 'q_proj', 'bias'), ('vision_model', 'encoder', 'layers', '11', 'self_attn', 'k_proj', 'bias'), ('vision_model', 'encoder', 'layers', '6', 'mlp', 'fc1', 'bias'), ('text_model', 'encoder', 'layers', '3', 'mlp', 'fc2', 'bias'), ('vision_model', 'encoder', 'layers', '0', 'layer_norm2', 'scale'), ('text_model', 'encoder', 'layers', '10', 'layer_norm1', 'bias'), ('vision_model', 'encoder', 'layers', '2', 'mlp', 'fc2', 'kernel'), ('vision_model', 'encoder', 'layers', '9', 'layer_norm1', 'bias'), ('vision_model', 'encoder', 'layers', '1', 'mlp', 'fc2', 'kernel'), ('vision_model', 'encoder', 'layers', '11', 'mlp', 'fc1', 'kernel'), ('vision_model', 'encoder', 'layers', '11', 'mlp', 'fc1', 'bias'), ('text_model', 'encoder', 'layers', '3', 'self_attn', 'v_proj', 'kernel'), ('text_model', 'encoder', 'layers', '7', 'layer_norm1', 'scale'), ('text_model', 'encoder', 'layers', '4', 'layer_norm1', 'scale'), ('vision_model', 'encoder', 'layers', '11', 'self_attn', 'out_proj', 'bias'), ('vision_model', 'encoder', 'layers', '3', 'mlp', 'fc1', 
'bias'), ('vision_model', 'encoder', 'layers', '3', 'layer_norm1', 'scale'), ('vision_model', 'encoder', 'layers', '10', 'layer_norm1', 'scale'), ('text_model', 'encoder', 'layers', '6', 'mlp', 'fc2', 'kernel'), ('vision_model', 'encoder', 'layers', '6', 'mlp', 'fc2', 'bias'), ('text_model', 'encoder', 'layers', '10', 'mlp', 'fc2', 'kernel'), ('text_model', 'encoder', 'layers', '3', 'layer_norm2', 'scale'), ('vision_model', 'encoder', 'layers', '11', 'layer_norm1', 'scale'), ('vision_model', 'encoder', 'layers', '8', 'mlp', 'fc1', 'bias'), ('vision_model', 'embeddings', 'position_embedding', 'embedding'), ('text_model', 'encoder', 'layers', '5', 'self_attn', 'out_proj', 'kernel'), ('text_model', 'encoder', 'layers', '6', 'layer_norm2', 'scale'), ('text_model', 'encoder', 'layers', '3', 'layer_norm1', 'bias'), ('text_model', 'encoder', 'layers', '10', 'self_attn', 'k_proj', 'bias'), ('vision_model', 'encoder', 'layers', '9', 'self_attn', 'v_proj', 'bias'), ('vision_model', 'encoder', 'layers', '8', 'layer_norm2', 'scale'), ('text_model', 'encoder', 'layers', '2', 'self_attn', 'k_proj', 'kernel'), ('vision_model', 'encoder', 'layers', '6', 'self_attn', 'k_proj', 'kernel'), ('vision_model', 'encoder', 'layers', '9', 'mlp', 'fc1', 'kernel'), ('text_model', 'encoder', 'layers', '0', 'mlp', 'fc1', 'kernel'), ('vision_model', 'encoder', 'layers', '0', 'self_attn', 'q_proj', 'bias'), ('vision_model', 'encoder', 'layers', '7', 'mlp', 'fc1', 'bias'), ('vision_model', 'encoder', 'layers', '2', 'self_attn', 'q_proj', 'kernel'), ('text_model', 'encoder', 'layers', '7', 'self_attn', 'out_proj', 'kernel'), ('text_model', 'encoder', 'layers', '0', 'self_attn', 'v_proj', 'bias'), ('vision_model', 'encoder', 'layers', '8', 'mlp', 'fc2', 'kernel'), ('text_model', 'encoder', 'layers', '0', 'layer_norm2', 'scale'), ('vision_model', 'encoder', 'layers', '4', 'layer_norm1', 'scale'), ('text_model', 'encoder', 'layers', '5', 'self_attn', 'k_proj', 'kernel'), ('vision_model', 'encoder', 'layers', '1', 'mlp', 'fc2', 'bias'), ('vision_model', 'encoder', 'layers', '5', 'self_attn', 'k_proj', 'bias'), ('vision_model', 'encoder', 'layers', '8', 'self_attn', 'out_proj', 'bias'), ('text_model', 'encoder', 'layers', '11', 'self_attn', 'k_proj', 'kernel'), ('vision_model', 'encoder', 'layers', '9', 'self_attn', 'q_proj', 'kernel'), ('vision_model', 'encoder', 'layers', '11', 'self_attn', 'out_proj', 'kernel'), ('vision_model', 'encoder', 'layers', '3', 'mlp', 'fc1', 'kernel'), ('text_model', 'encoder', 'layers', '4', 'mlp', 'fc2', 'bias'), ('vision_model', 'encoder', 'layers', '11', 'self_attn', 'k_proj', 'kernel'), ('text_model', 'encoder', 'layers', '1', 'layer_norm1', 'bias'), ('text_model', 'encoder', 'layers', '0', 'self_attn', 'k_proj', 'bias'), ('vision_model', 'encoder', 'layers', '5', 'layer_norm1', 'scale'), ('text_model', 'encoder', 'layers', '9', 'self_attn', 'out_proj', 'bias'), ('vision_model', 'encoder', 'layers', '3', 'self_attn', 'out_proj', 'kernel'), ('text_model', 'encoder', 'layers', '4', 'mlp', 'fc1', 'bias'), ('vision_model', 'encoder', 'layers', '3', 'self_attn', 'v_proj', 'bias'), ('vision_model', 'encoder', 'layers', '3', 'self_attn', 'out_proj', 'bias'), ('text_model', 'encoder', 'layers', '11', 'self_attn', 'q_proj', 'kernel'), ('vision_model', 'encoder', 'layers', '1', 'mlp', 'fc1', 'kernel'), ('vision_model', 'encoder', 'layers', '4', 'self_attn', 'q_proj', 'kernel'), ('text_model', 'encoder', 'layers', '9', 'mlp', 'fc2', 'bias'), ('vision_model', 'encoder', 'layers', '2', 'layer_norm2', 
'scale'), ('vision_model', 'encoder', 'layers', '2', 'self_attn', 'out_proj', 'bias'), ('text_model', 'encoder', 'layers', '11', 'mlp', 'fc2', 'bias'), ('text_model', 'encoder', 'layers', '6', 'layer_norm2', 'bias'), ('vision_model', 'encoder', 'layers', '0', 'self_attn', 'out_proj', 'bias'), ('vision_model', 'encoder', 'layers', '5', 'self_attn', 'v_proj', 'bias'), ('vision_model', 'encoder', 'layers', '5', 'mlp', 'fc1', 'kernel'), ('vision_model', 'encoder', 'layers', '6', 'self_attn', 'out_proj', 'bias'), ('text_model', 'encoder', 'layers', '1', 'self_attn', 'q_proj', 'kernel'), ('text_model', 'encoder', 'layers', '4', 'mlp', 'fc2', 'kernel'), ('vision_model', 'encoder', 'layers', '3', 'layer_norm2', 'bias'), ('vision_model', 'encoder', 'layers', '2', 'self_attn', 'k_proj', 'kernel'), ('text_model', 'encoder', 'layers', '7', 'self_attn', 'out_proj', 'bias'), ('vision_model', 'encoder', 'layers', '8', 'mlp', 'fc2', 'bias'), ('text_model', 'encoder', 'layers', '11', 'self_attn', 'k_proj', 'bias'), ('vision_model', 'encoder', 'layers', '1', 'layer_norm1', 'scale'), ('vision_model', 'encoder', 'layers', '11', 'mlp', 'fc2', 'bias'), ('vision_model', 'encoder', 'layers', '5', 'self_attn', 'out_proj', 'kernel'), ('vision_model', 'encoder', 'layers', '10', 'self_attn', 'k_proj', 'kernel'), ('vision_model', 'encoder', 'layers', '8', 'self_attn', 'q_proj', 'bias'), ('text_model', 'encoder', 'layers', '10', 'self_attn', 'out_proj', 'kernel'), ('vision_model', 'encoder', 'layers', '10', 'mlp', 'fc2', 'kernel'), ('vision_model', 'pre_layrnorm', 'bias'), ('vision_model', 'encoder', 'layers', '4', 'self_attn', 'out_proj', 'bias'), ('vision_model', 'encoder', 'layers', '0', 'self_attn', 'k_proj', 'bias'), ('text_model', 'encoder', 'layers', '6', 'mlp', 'fc1', 'kernel'), ('text_model', 'encoder', 'layers', '7', 'self_attn', 'v_proj', 'kernel'), ('vision_model', 'encoder', 'layers', '10', 'self_attn', 'q_proj', 'kernel'), ('text_model', 'encoder', 'layers', '8', 'mlp', 'fc1', 'bias'), ('text_model', 'encoder', 'layers', '4', 'self_attn', 'out_proj', 'bias'), ('text_model', 'encoder', 'layers', '6', 'layer_norm1', 'scale'), ('text_model', 'encoder', 'layers', '6', 'layer_norm1', 'bias'), ('text_model', 'encoder', 'layers', '11', 'layer_norm1', 'bias'), ('text_model', 'encoder', 'layers', '1', 'layer_norm1', 'scale'), ('text_model', 'encoder', 'layers', '8', 'self_attn', 'out_proj', 'bias'), ('vision_model', 'encoder', 'layers', '4', 'self_attn', 'k_proj', 'bias'), ('vision_model', 'encoder', 'layers', '7', 'self_attn', 'out_proj', 'bias'), ('text_model', 'encoder', 'layers', '1', 'layer_norm2', 'scale'), ('text_model', 'encoder', 'layers', '5', 'self_attn', 'out_proj', 'bias'), ('text_model', 'encoder', 'layers', '1', 'self_attn', 'k_proj', 'bias'), ('vision_model', 'encoder', 'layers', '8', 'self_attn', 'v_proj', 'kernel'), ('text_model', 'encoder', 'layers', '10', 'layer_norm1', 'scale'), ('vision_model', 'encoder', 'layers', '0', 'layer_norm2', 'bias'), ('vision_model', 'encoder', 'layers', '0', 'mlp', 'fc2', 'bias'), ('text_model', 'encoder', 'layers', '1', 'mlp', 'fc2', 'bias'), ('text_model', 'encoder', 'layers', '1', 'mlp', 'fc1', 'kernel'), ('vision_model', 'encoder', 'layers', '3', 'self_attn', 'q_proj', 'kernel'), ('vision_model', 'encoder', 'layers', '10', 'layer_norm2', 'bias'), ('text_model', 'encoder', 'layers', '11', 'layer_norm2', 'bias'), ('text_model', 'encoder', 'layers', '10', 'layer_norm2', 'bias'), ('vision_model', 'encoder', 'layers', '2', 'mlp', 'fc2', 'bias'), ('vision_model', 
'encoder', 'layers', '10', 'layer_norm1', 'bias'), ('vision_model', 'encoder', 'layers', '3', 'layer_norm1', 'bias'), ('vision_model', 'encoder', 'layers', '3', 'self_attn', 'k_proj', 'kernel'), ('vision_model', 'encoder', 'layers', '1', 'self_attn', 'q_proj', 'kernel'), ('vision_model', 'encoder', 'layers', '9', 'self_attn', 'q_proj', 'bias'), ('vision_model', 'encoder', 'layers', '7', 'self_attn', 'v_proj', 'kernel'), ('text_model', 'encoder', 'layers', '2', 'layer_norm1', 'bias'), ('vision_model', 'encoder', 'layers', '11', 'layer_norm1', 'bias'), ('text_model', 'encoder', 'layers', '4', 'mlp', 'fc1', 'kernel'), ('vision_model', 'encoder', 'layers', '0', 'mlp', 'fc2', 'kernel'), ('vision_model', 'encoder', 'layers', '6', 'mlp', 'fc1', 'kernel'), ('text_model', 'encoder', 'layers', '0', 'self_attn', 'v_proj', 'kernel'), ('text_model', 'encoder', 'layers', '1', 'self_attn', 'out_proj', 'kernel'), ('vision_model', 'encoder', 'layers', '7', 'layer_norm2', 'scale'), ('text_model', 'encoder', 'layers', '4', 'layer_norm2', 'bias'), ('vision_model', 'encoder', 'layers', '7', 'self_attn', 'k_proj', 'bias'), ('vision_model', 'encoder', 'layers', '9', 'mlp', 'fc2', 'kernel'), ('text_model', 'encoder', 'layers', '5', 'mlp', 'fc2', 'kernel'), ('text_model', 'encoder', 'layers', '8', 'mlp', 'fc1', 'kernel'), ('vision_model', 'encoder', 'layers', '4', 'self_attn', 'k_proj', 'kernel'), ('text_model', 'encoder', 'layers', '4', 'self_attn', 'k_proj', 'kernel'), ('vision_model', 'encoder', 'layers', '1', 'self_attn', 'v_proj', 'kernel'), ('text_model', 'encoder', 'layers', '4', 'layer_norm1', 'bias'), ('vision_model', 'encoder', 'layers', '4', 'self_attn', 'out_proj', 'kernel'), ('text_model', 'encoder', 'layers', '8', 'self_attn', 'out_proj', 'kernel'), ('vision_model', 'encoder', 'layers', '8', 'self_attn', 'q_proj', 'kernel'), ('text_model', 'encoder', 'layers', '9', 'self_attn', 'k_proj', 'kernel'), ('vision_model', 'encoder', 'layers', '10', 'mlp', 'fc1', 'bias'), ('vision_model', 'encoder', 'layers', '7', 'layer_norm1', 'scale'), ('text_model', 'encoder', 'layers', '9', 'self_attn', 'v_proj', 'kernel'), ('text_model', 'encoder', 'layers', '6', 'self_attn', 'k_proj', 'bias'), ('text_model', 'encoder', 'layers', '6', 'self_attn', 'k_proj', 'kernel'), ('vision_model', 'encoder', 'layers', '10', 'mlp', 'fc2', 'bias'), ('text_model', 'encoder', 'layers', '4', 'self_attn', 'v_proj', 'kernel'), ('vision_model', 'encoder', 'layers', '1', 'self_attn', 'k_proj', 'kernel'), ('vision_model', 'encoder', 'layers', '4', 'mlp', 'fc2', 'kernel'), ('text_model', 'encoder', 'layers', '6', 'mlp', 'fc2', 'bias'), ('vision_model', 'encoder', 'layers', '0', 'layer_norm1', 'bias'), ('vision_model', 'encoder', 'layers', '5', 'mlp', 'fc1', 'bias'), ('vision_model', 'encoder', 'layers', '6', 'self_attn', 'q_proj', 'kernel'), ('vision_model', 'encoder', 'layers', '5', 'self_attn', 'v_proj', 'kernel'), ('text_model', 'encoder', 'layers', '11', 'self_attn', 'out_proj', 'bias'), ('text_model', 'encoder', 'layers', '2', 'self_attn', 'q_proj', 'kernel'), ('text_model', 'encoder', 'layers', '8', 'self_attn', 'k_proj', 'kernel'), ('vision_model', 'encoder', 'layers', '11', 'mlp', 'fc2', 'kernel'), ('vision_model', 'encoder', 'layers', '5', 'self_attn', 'out_proj', 'bias'), ('text_model', 'encoder', 'layers', '8', 'mlp', 'fc2', 'bias'), ('vision_model', 'encoder', 'layers', '2', 'self_attn', 'out_proj', 'kernel'), ('text_model', 'encoder', 'layers', '10', 'mlp', 'fc2', 'bias'), ('vision_model', 'encoder', 'layers', '1', 'layer_norm2', 
'scale'), ('vision_model', 'encoder', 'layers', '1', 'layer_norm2', 'bias'), ('text_model', 'encoder', 'layers', '7', 'self_attn', 'k_proj', 'kernel'), ('text_model', 'encoder', 'layers', '8', 'layer_norm1', 'bias'), ('vision_model', 'encoder', 'layers', '10', 'self_attn', 'out_proj', 'kernel'), ('text_model', 'encoder', 'layers', '0', 'self_attn', 'out_proj', 'bias'), ('text_model', 'encoder', 'layers', '6', 'self_attn', 'out_proj', 'kernel'), ('vision_model', 'encoder', 'layers', '4', 'layer_norm2', 'bias'), ('text_model', 'encoder', 'layers', '0', 'layer_norm1', 'bias'), ('vision_model', 'encoder', 'layers', '0', 'mlp', 'fc1', 'bias'), ('vision_model', 'encoder', 'layers', '7', 'mlp', 'fc2', 'bias'), ('vision_model', 'encoder', 'layers', '3', 'self_attn', 'v_proj', 'kernel'), ('text_model', 'encoder', 'layers', '8', 'layer_norm1', 'scale'), ('text_model', 'encoder', 'layers', '3', 'mlp', 'fc1', 'bias'), ('vision_model', 'encoder', 'layers', '6', 'layer_norm2', 'bias'), ('text_model', 'encoder', 'layers', '7', 'self_attn', 'k_proj', 'bias'), ('text_model', 'encoder', 'layers', '1', 'self_attn', 'k_proj', 'kernel'), ('text_model', 'encoder', 'layers', '5', 'layer_norm1', 'scale'), ('vision_model', 'encoder', 'layers', '8', 'self_attn', 'v_proj', 'bias'), ('vision_model', 'encoder', 'layers', '9', 'mlp', 'fc1', 'bias'), ('text_model', 'encoder', 'layers', '2', 'self_attn', 'q_proj', 'bias'), ('text_model', 'encoder', 'layers', '6', 'self_attn', 'q_proj', 'bias'), ('text_model', 'encoder', 'layers', '1', 'mlp', 'fc1', 'bias'), ('text_model', 'encoder', 'layers', '4', 'self_attn', 'out_proj', 'kernel'), ('vision_model', 'encoder', 'layers', '9', 'layer_norm2', 'scale'), ('vision_model', 'encoder', 'layers', '9', 'self_attn', 'out_proj', 'bias'), ('text_model', 'encoder', 'layers', '5', 'layer_norm2', 'scale'), ('text_model', 'encoder', 'layers', '0', 'self_attn', 'q_proj', 'bias'), ('text_model', 'encoder', 'layers', '2', 'self_attn', 'out_proj', 'kernel'), ('text_model', 'encoder', 'layers', '8', 'self_attn', 'q_proj', 'kernel'), ('vision_model', 'encoder', 'layers', '6', 'layer_norm1', 'scale'), ('vision_model', 'pre_layrnorm', 'scale'), ('text_model', 'encoder', 'layers', '5', 'self_attn', 'v_proj', 'kernel'), ('vision_model', 'encoder', 'layers', '5', 'self_attn', 'q_proj', 'kernel'), ('text_model', 'encoder', 'layers', '3', 'self_attn', 'k_proj', 'bias'), ('vision_model', 'encoder', 'layers', '6', 'self_attn', 'out_proj', 'kernel'), ('text_model', 'embeddings', 'position_embedding', 'embedding'), ('vision_model', 'embeddings', 'class_embedding'), ('vision_model', 'encoder', 'layers', '4', 'self_attn', 'v_proj', 'kernel'), ('text_model', 'encoder', 'layers', '11', 'layer_norm1', 'scale'), ('vision_model', 'encoder', 'layers', '6', 'mlp', 'fc2', 'kernel'), ('text_model', 'encoder', 'layers', '10', 'self_attn', 'q_proj', 'kernel'), ('text_model', 'encoder', 'layers', '1', 'self_attn', 'v_proj', 'bias'), ('vision_model', 'encoder', 'layers', '5', 'mlp', 'fc2', 'bias'), ('vision_model', 'encoder', 'layers', '1', 'mlp', 'fc1', 'bias'), ('vision_model', 'encoder', 'layers', '9', 'layer_norm2', 'bias')} You should probably TRAIN this model on a down-stream task to be able to use it for predictions and inference. ``` In this case, I don't have mismatching sizes but still many weights from our checkpoint are not used. <!-- If you have code snippets, error messages, stack traces please provide them here as well. Important! Use code tags to correctly format your code. 
See https://help.github.com/en/github/writing-on-github/creating-and-highlighting-code-blocks#syntax-highlighting Do not use screenshots, as they are hard to read and (more importantly) don't allow others to copy-and-paste your code.-->

3. My last resort was to load the checkpoint with the hybrid class, transform its weights into f32, save it locally, and load it as a PyTorch model, but I still got the same wrong initialization:

```python
model = FlaxHybridCLIP.from_pretrained("clip-italian/clip-italian")

def to_f32(t):
    return jax.tree_map(lambda x: x.astype(jnp.float32) if x.dtype == jnp.bfloat16 else x, t)

model.params = to_f32(model.params)
model.save_pretrained("./clip-italian-f32")

vision_config = CLIPVisionConfig.from_pretrained("openai/clip-vit-base-patch32")
text_config = CLIPTextConfig.from_pretrained("dbmdz/bert-base-italian-xxl-uncased")
config = CLIPConfig.from_text_vision_configs(text_config=text_config, vision_config=vision_config)

pt_model = CLIPModel.from_pretrained("./clip-italian-f32/", from_flax=True, config=config)
# Output: same as before (no errors, many layers initialized as new), I just didn't copy it here :)
```

## Expected behavior

I would expect this code to run flawlessly:

```python
model = FlaxCLIPModel.from_pretrained("clip-italian/clip-italian")
# or
model = CLIPModel.from_pretrained("clip-italian/clip-italian", from_flax=True)
```

Thank you in advance!
11-16-2021 12:53:53
11-16-2021 12:53:53
Hey @g8a9 It's not possible to load `FlaxHybridCLIP` into `FlaxCLIP` since the module structure is different as the hybrid can use any pre-trained text and vision models. The hybrid clip model will soon be officially supported in `transformers` (see #13511), we are now calling it `VisionTextDualEncoder`. This will be available in both torch and flax. Stay tuned!<|||||>Nice to know, thanks! Do you think that once the `VisionTextDualEncoder` is out, we will be able to load our checkpoint with it? (actually, my final goal is to have access to our two fine-tuned encoders, ViT and BERT, in pytorch)<|||||>>Do you think that once the VisionTextDualEncoder is out, we will be able to load our checkpoint with it? The module structure is pretty much similar, so yes! If not I'll share a script to convert the old hybrid clip weights to this new class.<|||||>Hey @g8a9 clip-italian (or any hybrid clip) model can now be loaded using the new `VisionTextDualEncoderModel`. Converting from flax to pt should also work. ```python3 from transformers import FlaxVisionTextDualEncoderModel, VisionTextDualEncoderModel # `logit_scale` can be initialized using `config.logit_scale_init_value` attribute model = FlaxVisionTextDualEncoderModel.from_pretrained("clip-italian/clip-italian", logit_scale_init_value=1) model.save_pretrained("clip-italin") model_pt = VisionTextDualEncoderModel.from_pretrained("clip-italian", from_flax=True) ``` Let me know if this works for you and if you see any discrepancies in the result. I would like to use clip-Italian to feature this new model class :) <|||||>Do you think you could push the PT checkpoint and the processor (tokenizer/feature-extractor) for `clip-italian`? Going to use this model in doc examples :) <|||||>> Hey @g8a9 > > clip-italian (or any hybrid clip) model can now be loaded using the new `VisionTextDualEncoderModel`. Converting from flax to pt should also work. > > ```python > from transformers import FlaxVisionTextDualEncoderModel, VisionTextDualEncoderModel > > # `logit_scale` can be initialized using `config.logit_scale_init_value` attribute > model = FlaxVisionTextDualEncoderModel.from_pretrained("clip-italian/clip-italian", logit_scale_init_value=1) > model.save_pretrained("clip-italin") > > model_pt = VisionTextDualEncoderModel.from_pretrained("clip-italian", from_flax=True) > ``` > > Let me know if this works for you and if you see any discrepancies in the result. I would like to use clip-Italian to feature this new model class :) this worked for me. Thanks for the solution.
transformers
14,416
closed
Finetune Hubert model : Adding new vocabulary
Environment info

```
transformers version: 4.12.2
Platform: Mac
Python version: 3.7
PyTorch version (GPU?): 1.9
Tensorflow version (GPU?): No
Using GPU in script?: No
Using distributed or parallel setup in script?: No
```

I just run this simple code to load the pretrained Hubert model:

```python
from transformers import Wav2Vec2Processor, HubertForCTC
import torch
import librosa

PROCESSOR = Wav2Vec2Processor.from_pretrained('facebook/hubert-large-ls960-ft')
model = HubertForCTC.from_pretrained('facebook/hubert-large-ls960-ft')
tokenizer = PROCESSOR.tokenizer
```

On a smaller dataset, I am able to get a good WER, around 0.0. But if I add new tokens/vocabulary with the code below:

```python
from transformers import Wav2Vec2Processor, HubertForCTC
import torch
import librosa

PROCESSOR = Wav2Vec2Processor.from_pretrained('facebook/hubert-large-ls960-ft')
model = HubertForCTC.from_pretrained('facebook/hubert-large-ls960-ft')
tokenizer = PROCESSOR.tokenizer
tokenizer.add_tokens(new_tokens=[' ','Ä','Ö','Ü'])
```

the loss and WER get worse and worse, and eventually the loss becomes NaN. Is this the correct way to add new characters? The dataset is the same in both trainings.
11-16-2021 11:15:22
11-16-2021 11:15:22
cc @anton-l @patrickvonplaten<|||||>Hi @harrypotter90! Here you're adding a space (`' '`) as a separate token, while the tokenizer already has a special separator token `'|'`, that is a replacement for all whitespaces. Try adding just the new letters and fine-tune again.<|||||>Also I would recommend resizing the `lm_head` of Hubert to include your newly added tokens. Otherwise the model will never predict those<|||||>Thanks for the prompt reply. Error on this line : `model.resize_token_embeddings(len(tokenizer))` ``` Traceback (most recent call last): File "train-main.py", line 47, in <module> model.resize_token_embeddings(len(tokenizer)) File "/opt/anaconda/envs/opence/lib/python3.8/site-packages/transformers/modeling_utils.py", line 724, in resize_token_embeddings model_embeds = self._resize_token_embeddings(new_num_tokens) File "/opt/anaconda/envs/opence/lib/python3.8/site-packages/transformers/modeling_utils.py", line 738, in _resize_token_embeddings old_embeddings = self.get_input_embeddings() File "/opt/anaconda/envs/opence/lib/python3.8/site-packages/transformers/modeling_utils.py", line 563, in get_input_embeddings return base_model.get_input_embeddings() File "/opt/anaconda/envs/opence/lib/python3.8/site-packages/transformers/modeling_utils.py", line 565, in get_input_embeddings raise NotImplementedError NotImplementedError ``` Using latest transformers: Name: transformers Version: 4.12.3<|||||>Yeah this won't work because there are no input embeddings. Can you try: ``` tokenizer = PROCESSOR.tokenizer tokenizer.add_tokens(new_tokens=['Ä','Ö','Ü']) model = HubertForCTC.from_pretrained('facebook/hubert-large-ls960-ft', vocab_size=len(tokenizer)) ```<|||||>After : `model = HubertForCTC.from_pretrained('facebook/hubert-large-ls960-ft', vocab_size=len(tokenizer)) ` Got error: ``` Traceback (most recent call last): File "train-main.py", line 49, in <module> model = HubertForCTC.from_pretrained(MODEL_NAME, gradient_checkpointing=True, ctc_loss_reduction="mean", vocab_size=len(tokenizer)) File "/opt/anaconda/envs/opence/lib/python3.8/site-packages/transformers/modeling_utils.py", line 1441, in from_pretrained model, missing_keys, unexpected_keys, mismatched_keys, error_msgs = cls._load_state_dict_into_model( File "/opt/anaconda/envs/opence/lib/python3.8/site-packages/transformers/modeling_utils.py", line 1595, in _load_state_dict_into_model raise RuntimeError(f"Error(s) in loading state_dict for {model.__class__.__name__}:\n\t{error_msg}") RuntimeError: Error(s) in loading state_dict for HubertForCTC: size mismatch for lm_head.weight: copying a param with shape torch.Size([32, 1024]) from checkpoint, the shape in current model is torch.Size([35, 1024]). ``` size mismatch for lm_head.bias: copying a param with shape torch.Size([32]) from checkpoint, the shape in current model is torch.Size([35]).<|||||>How about `ignore_mismatched_sizes=True,` , wild guess, if it is just a warning ? At least this started the training.. I will update if WER goes down or not .<|||||>Hey @harrypotter90, exactly sorry I forgot to mention this parameter. 
To summarize, I would recommend to add new tokens and load your model as follows: ```python from transformers import Wav2Vec2Processor, AutoModelForCTC # load tokenizer & feature extractor processor = Wav2Vec2Processor.from_pretrained('facebook/hubert-large-ls960-ft') # add new tokens tokenizer = processor.tokenizer tokenizer.add_tokens(new_tokens=['Ä','Ö','Ü']) # load pretrained model and replace fine-tuned head with resized randomly initialized head model = AutoModelForCTC.from_pretrained("facebook/hubert-large-ls960-ft", vocab_size=len(tokenizer), ignore_mismatched_sizes=True) # now use model for training ```<|||||>Cool, it worked. Thank you
transformers
14,415
closed
[WIP] Ensure TF model configs can be converted to proper JSON
# What does this PR do? This is an extension to https://github.com/huggingface/transformers/pull/14361/files, which hopefully will prevent errors such as https://github.com/huggingface/transformers/issues/14403 from going unnoticed. ## Before submitting - [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case). - [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section? - [x] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link to it if that's the case. - [ ] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/master/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/master/docs#writing-source-documentation). - [x] Did you write any new necessary tests? ## Who can review? @Rocketknight1 I assume this test (if run) will fail for quite some architectures, I will try to see if I can provide a fix on this PR, feel free to review/comment.
11-16-2021 10:29:57
11-16-2021 10:29:57
Thanks for this! I see a few failing tests, but I think something like this should work. One thing I'd suggest: I think the `from_config()` method should probably work whether a dict or a true config object is passed. Can we check the type of the input, and only do the conversion if it's actually a dict?<|||||>Yes, I think these here need to be fixed individually, as the config indeed is not JSONifiable. Regarding your last comment, I'll try to see if I can add that and a test case for it later. ``` FAILED tests/test_modeling_tf_distilbert.py::TFDistilBertModelTest::test_save_load_config FAILED tests/test_modeling_tf_funnel.py::TFFunnelModelTest::test_save_load_config FAILED tests/test_modeling_tf_gpt2.py::TFGPT2ModelTest::test_save_load_config FAILED tests/test_modeling_tf_flaubert.py::TFFlaubertModelTest::test_save_load_config FAILED tests/test_modeling_tf_funnel.py::TFFunnelBaseModelTest::test_save_load_config FAILED tests/test_modeling_tf_led.py::TFLEDModelTest::test_save_load_config FAILED tests/test_modeling_tf_xlm.py::TFXLMModelTest::test_save_load_config FAILED tests/test_modeling_tf_xlnet.py::TFXLNetModelTest::test_save_load_config ``` <|||||>That's strange, the config for at least some of them seems to convert fine for me. For example, this works (after installing from master): ``` from transformers import TFAutoModel import json model = TFAutoModel.from_pretrained('distilbert-base-cased') json.dumps(model.get_config().to_dict()) ```<|||||>It seems the problem rather is on loading the config: When running PretrainedConfig.from_dict(), the returned config will be of class PretrainedConfig, NOT of the specific models class; hence getattr calls to non-standard properties fail. Is there a way to get the correct config class when calling from_config()? Otherwise, we would need to save the class name as part of the get_config(), and when calling from_config() use this to map to the correct class. EDIT: I think we can use cls.config_class. Lets see if the tests go through. ``` src/transformers/models/distilbert/modeling_tf_distilbert.py:347: in __init__ self.num_hidden_layers = config.num_hidden_layers _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ self = PretrainedConfig { "activation": "gelu", "attention_dropout": 0.1, "dim": 32, "dropout": 0.1, "hidden_act": ..._classif_dropout": 0.2, "sinusoidal_pos_embds": false, "transformers_version": "4.13.0.dev0", "vocab_size": 99 } key = 'num_hidden_layers' def __getattribute__(self, key): if key != "attribute_map" and key in super().__getattribute__("attribute_map"): key = super().__getattribute__("attribute_map")[key] > return super().__getattribute__(key) E AttributeError: 'PretrainedConfig' object has no attribute 'num_hidden_layers' ```<|||||>@Zahlii Since we know things are broken, we're going to merge this PR urgently, and then quickly work on testing to follow it up. I'll tag you in the PR - we're planning a revamp to TF testing, since the tests that would have caught this were marked as tooslow. <|||||>@Zahlii Another update, and a change of plan! We're going to revert the last commit, do the fixes in this PR, and I might add some testing to this PR before it's merged. 
Is that okay with you?<|||||>Sure, go ahead and let me know if I can support further.<|||||>Cool, thank you!<|||||>Small comment without having checked the code - I observed that the saved model format per default traces all functions. For my use cases, I always disabled that because it added an enormous overhead, and with a correct config handling it wasn't required. On the one hand this rids the requirement for the config stuff, on the other hand it is much slower. How is this currently handled , both inside tests and others? https://www.tensorflow.org/api_docs/python/tf/keras/models/save_model<|||||>> Small comment without having checked the code - I observed that the saved model format per default traces all functions. For my use cases, I always disabled that because it added an enormous overhead, and with a correct config handling it wasn't required. On the one hand this rids the requirement for the config stuff, on the other hand it is much slower. How is this currently handled , both inside tests and others? https://www.tensorflow.org/api_docs/python/tf/keras/models/save_model This is a good point - the short answer is that we want it to work when people save their model like that, but like you we found it was much too slow to test every model with it in the CI. The solution we went with in this PR was to keep it as a 'core' test only for the most commonly used model classes (BERT, GPT2 and BART), and hope that if there's a problem with it that it shows up there.
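A minimal sketch of the save/reload path discussed above, assuming a stock `TFBertModel` (the directory name and the `custom_objects` mapping are illustrative; whether a given architecture survives this round trip is exactly what the config tests in this PR are meant to catch):

```python
import tensorflow as tf
from transformers import TFBertModel

model = TFBertModel.from_pretrained("bert-base-cased")

# Skipping function tracing makes saving much faster, but reloading then relies
# entirely on get_config()/from_config() round-tripping correctly.
tf.keras.models.save_model(model, "tf_bert_saved_model", save_traces=False)

# The custom class must be in scope so Keras can rebuild it via from_config().
restored = tf.keras.models.load_model(
    "tf_bert_saved_model", custom_objects={"TFBertModel": TFBertModel}
)
```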
transformers
14,414
closed
Pipelines fails with IndexError using Bert model with outputs and batch size >= 16
## Environment info <!-- You can run the command `transformers-cli env` and copy-and-paste its output below. Don't forget to fill out the missing fields in that output! --> - `transformers` version: 4.13.0.dev0 - Platform: Linux-5.4.104+-x86_64-with-Ubuntu-18.04-bionic - Python version: 3.7.12 - PyTorch version (GPU?): 1.10.0+cu111 (True) - Tensorflow version (GPU?): 2.7.0 (True) - Flax version (CPU?/GPU?/TPU?): not installed (NA) - Jax version: not installed - JaxLib version: not installed - Using GPU in script?: Yes - Using distributed or parallel set-up in script?: No ### Who can help @Narsil Library: - Pipelines: @Narsil ## Information Model I am using (Bert, XLNet ...): [FinBert](https://huggingface.co/yiyanghkust/finbert-tone) The problem arises when using: * [ ] the official example scripts: (give details below) * [X] my own modified scripts: (give details below) The tasks I am working on is: * [ ] an official GLUE/SQUaD task: (give the name) * [X] my own task or dataset: (give details below) I came across this issue when setting `output_hidden_states=True` during instantiation of a pretrained model inorder to obtain the inferred CLS sentence embeddings using the following approach: ``` from transformers.pipelines.text_classification import TextClassificationPipeline class FinBertSentimentClassificationPipeline(TextClassificationPipeline): def postprocess(self, model_outputs, function_to_apply=None, return_all_scores=False): prediction = super().postprocess(model_outputs, function_to_apply, return_all_scores) prediction['last_hidden_layer']= model_outputs.hidden_states[0][0][0] return prediction def custom_pipeline(task,model,tokenizer,**kwargs): kwargs['tokenizer'] = tokenizer kwargs['feature_extractor'] = None return FinBertSentimentClassificationPipeline(model=model,framework='pt',task=task,**kwargs) ``` Where I override the postprocess function to also return the last hidden state layer. However, the issue occurs in the pipeline code itself when using a batch size equal to or larger than 16. ## To reproduce Steps to reproduce the behavior: 1. `!pip install git+https://github.com/huggingface/transformers.git` 2. ``` from transformers import BertTokenizerFast, BertForSequenceClassification from transformers import pipeline finbert = BertForSequenceClassification.from_pretrained('yiyanghkust/finbert-tone',num_labels=3, output_hidden_states=True) tokenizer = BertTokenizerFast.from_pretrained('yiyanghkust/finbert-tone') nlp = pipeline("sentiment-analysis", model=finbert, tokenizer=tokenizer, device=0) varying_length_sentences = ["there is a shortage of capital, and we need extra financing "*5, "growth is strong and we have plenty of liquidity ", "there are doubts about our finances" * 10, "profits are flat", "profits are flat "*30]*1000 similar_length_sentences = ["there is a shortage", "growth is strong ", "there are doubts", "profits are flat"]*1000 ``` 3. `results = nlp(similar_length_sentences, batch_size=16, num_workers=2)` See the [Colab Notebook](https://colab.research.google.com/drive/1SlntHYK-F8my84rUiS1xuqQN4v7nBgEN?usp=sharing) for reference. 
Running step 3 produces the following error:

```
/usr/local/lib/python3.7/dist-packages/transformers/pipelines/base.py in loader_batch_item(self)
    750             if k == "past_key_values":
    751                 continue
--> 752             if isinstance(element[self._loader_batch_index], torch.Tensor):
    753                 loader_batched[k] = element[self._loader_batch_index].unsqueeze(0)
    754             elif isinstance(element[self._loader_batch_index], np.ndarray):

IndexError: tuple index out of range
```

Executing the pipeline with batch sizes smaller than 16 seems to work (see the Colab notebook).

## Expected behavior

The pipeline runs successfully with any batch size when using a model loaded to output hidden states and attentions.
11-16-2021 10:19:25
11-16-2021 10:19:25
Hi @alwayscurious, Yes, the current system for automating batching/unbatching doesn't support `hidden_states` or `attentions`. I opened up a PR. Currently it explicitly needs specific keys to check for these tuples of tensors since they are not the norm.<|||||>Hi @Narsil Thanks for adding a fix to support this. Once the PR is merged to master I'll check that it works successfully!<|||||>@alwayscurious by the way, don't use the previous version with `batch_size < 16`: it's simply incorrect. You will receive the first layer's hidden states (with the full batch) as your first item, the second layer as your second item, and so on.
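While pipeline batching of `hidden_states` is unsupported, a minimal workaround sketch is to call the tokenizer and model directly and pull the [CLS] vectors out of `outputs.hidden_states` yourself. The model name matches the issue; the choice of the last hidden layer and the rest of the snippet are illustrative:

```python
import torch
from transformers import BertTokenizerFast, BertForSequenceClassification

tokenizer = BertTokenizerFast.from_pretrained("yiyanghkust/finbert-tone")
model = BertForSequenceClassification.from_pretrained(
    "yiyanghkust/finbert-tone", num_labels=3, output_hidden_states=True
)
model.eval()

sentences = ["profits are flat", "growth is strong and we have plenty of liquidity"]
batch = tokenizer(sentences, padding=True, truncation=True, return_tensors="pt")

with torch.no_grad():
    outputs = model(**batch)

scores = outputs.logits.softmax(dim=-1)           # sentiment probabilities per sentence
cls_embeddings = outputs.hidden_states[-1][:, 0]  # [CLS] vector of the last layer, per sentence
```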
transformers
14,413
closed
Avoid looping when data exhausted
# What does this PR do? This fix avoids running into a virtually infinite loop when using a finite iterable dataset. When using an iterable dataset `num_epochs` is set to sys.maxsize to make sure all data is consumed (see https://github.com/huggingface/transformers/pull/12561) Likewise I'd like to set `max_steps` large enough to consume all data but still stop when the data is exhausted. In case we don't know how many samples there will be and the iterator stops we might run into a virtually infinite loop (iterating the int range until `sys.maxsize`). See this code snipped to reproduce the behavior: ```python from torch.utils.data import IterableDataset from transformers import BertForMaskedLM, BertConfig, TrainingArguments, Trainer model = BertForMaskedLM(BertConfig()) class FiniteIterableDataset(IterableDataset): def __init__(self, num_samples: int): self.current_sample = 0 self.num_samples = num_samples def __iter__(self): while self.current_sample < self.num_samples: yield {"input_ids": [0, 0, 0, self.current_sample], "labels": [0, 0, 0, 1]} self.current_sample += 1 batch_size = 1 gradient_accumulation_steps = 1 num_samples = 10 available_steps = num_samples // (batch_size * gradient_accumulation_steps) data = FiniteIterableDataset(num_samples) train_args = TrainingArguments( "tmp_dir", max_steps=available_steps, per_device_train_batch_size=batch_size, gradient_accumulation_steps=gradient_accumulation_steps, ) trainer = Trainer(model, train_dataset=data, args=train_args) trainer.train() # works data = FiniteIterableDataset(num_samples) train_args = TrainingArguments( "tmp_dir", max_steps=available_steps + 1, # set a higher number than actually available per_device_train_batch_size=batch_size, gradient_accumulation_steps=gradient_accumulation_steps, ) trainer = Trainer(model, train_dataset=data, args=train_args) trainer.train() # "hangs" at 91% after 10 steps iterating through epochs like wild (until sys.maxsize) ``` With this fix it is checked whether `epoch_iterator` did not produce any samples and accordingly set `control.should_training_stop` to `True`. I don't know if changing the flow control this way is approved of as it's always changed through callback handlers, I'm happy for suggestions how to properly do this. I tried coming up with a test case checking the logs for when training was stopped in this case. Other options would be to measure the time training takes and time out after a while but that wouldn't be a nice test as run time may be affected by other circumstances. ## Before submitting - [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case). - [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section? - [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link to it if that's the case. - [x] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines] https://github.com/huggingface/transformers/tree/master/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/master/docs#writing-source-documentation). - [x] Did you write any new necessary tests? ## Who can review? - trainer: @sgugger
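A toy illustration of the guard described above, with purely illustrative names rather than the Trainer's actual internals: if an epoch yields no batches, training stops instead of spinning through empty epochs until `sys.maxsize`.

```python
def train_until_exhausted(dataloader_factory, max_steps):
    completed_steps = 0
    while completed_steps < max_steps:
        steps_this_epoch = 0
        for _batch in dataloader_factory():
            steps_this_epoch += 1
            completed_steps += 1
            if completed_steps >= max_steps:
                return completed_steps
        if steps_this_epoch == 0:
            # The finite iterable dataset produced nothing this epoch,
            # so stop instead of looping until max_steps.
            break
    return completed_steps

# A factory over a single finite iterator mimics the exhausted IterableDataset above.
samples = iter(range(10))
print(train_until_exhausted(lambda: samples, max_steps=11))  # prints 10, then stops
```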
11-16-2021 09:49:53
11-16-2021 09:49:53
Thanks again for fixing this! :-)
transformers
14,412
closed
Quantization with `transformers.onnx`
## Environment info - `transformers` version: 4.11.3 - Platform: Linux-5.4.0-1059-aws-x86_64-with-glibc2.27 - Python version: 3.9.5 - PyTorch version (GPU?): 1.9.1+cu102 (True) - Tensorflow version (GPU?): not installed (NA) - Flax version (CPU?/GPU?/TPU?): not installed (NA) - Jax version: not installed - JaxLib version: not installed - Using GPU in script?: True - Using distributed or parallel set-up in script?: False ### Who can help Documentation: @sgugger ## Information Model I am using (XLMRobertaForTokenClassification): The problem arises when using trying to * [x] my own modified scripts: The tasks I am working on is: * [x] my own task or dataset (which I cannot show because of data security reasons) ## Problem At the moment, we are using the old Graph conversion approach `convert_graph_to_onnx.py` to export our models to ONNX. We used the quantized version. Now we would like to update to the new `transformers.onnx` package but we are not sure how to using quantization (see code example 1 below). The documentation is lacking a part how to use quantization with the new package. We tried to use the old method for quantization which worked: we used code example 2 for checking if there were lines which contained the phrase "quantized" (which was true). But when we used the quantized model for inference, the scores dropped massively. **My question**: Is the usage of the quantization correct for the new package or should we wait for an updated version? #### Code example 1 ```python3 from transformers.convert_graph_to_onnx import convert_pytorch, quantize, verify from transformers.onnx.convert import export, validate_model_outputs from transformers.onnx.features import FeaturesManager def onnx_export( model_directory: Path, model_filepath: Path, tokenizer: PreTrainedTokenizer, atol: float = 0.0001, feature: str = "default", opset: int = 12, quantize_model: bool = False, ): """Export model to ONNX. Note ---- Code taken and modified from: https://github.com/huggingface/transformers/blob/master/src/transformers/onnx/__main__.py Parameters ---------- model_directory : Path Path to model directory model_filepath : Path Filepath to save model to tokenizer : PreTrainedTokenizer Pre-trained tokenizer. atol : float, optional Absolute difference tolerence when validating the model, by default 0.0001 feature : str, optional Export the model with some additional feature, by default "default" opset : int, optional ONNX opset to use, by default 12 quantize_model : bool, optional Quantize the model to be run with int8, by default False Raises ------ ValueError If parameter 'opset' is not sufficient to export the chosen kind of model. """ if feature not in [ "default", "causal-lm", "seq2seq-lm", "sequence-classification", "token-classification", "multiple-choice", "question-answering", ]: feature = "default" # Allocate the model model = FeaturesManager.get_model_from_feature(feature, model_directory) model_kind, model_onnx_config = FeaturesManager.check_supported_model_or_raise( model, feature=feature ) onnx_config = model_onnx_config(model.config) # Ensure the requested opset is sufficient if opset < onnx_config.default_onnx_opset: raise ValueError( f"Opset {opset} is not sufficient to export {model_kind}. " f"At least {onnx_config.default_onnx_opset} is required." 
) _, onnx_outputs = export(tokenizer, model, onnx_config, opset, model_filepath) validate_model_outputs(onnx_config, tokenizer, model, model_filepath, onnx_outputs, atol) if quantize_model: quantized_model = quantize(model_filepath) verify(quantized_model) # remove the original model model_filepath.unlink() # rename quantized model quantized_model.rename(str(model_filepath.resolve())) ``` #### Code example 2 ```python3 import onnx model = onnx.load("model.onnx") onnx.checker.check_model(model) print(onnx.helper.printable_graph(model.graph)) ```
11-16-2021 08:45:16
11-16-2021 08:45:16
cc @michaelbenayoun <|||||>Hello! The new package does not have a quantization option, as we're moving all performance optimization features into a separate library with the sole focus of accelerating the performance of models. The package is the following: https://github.com/huggingface/optimum

You can find a bit of documentation about the feature here: https://github.com/huggingface/optimum/tree/main/src/optimum/onnxruntime

The docs are currently a work in progress and should improve significantly over the coming weeks/months. As for the questions regarding quantization, I will let @michaelbenayoun and @mfuntowicz answer :)<|||||>Hello! As Lysandre said, optimization features are currently added to [optimum](https://github.com/huggingface/optimum). That being said, I see one potential reason for the scores dropping: in the old graph conversion script, [you have an optimize step](https://github.com/huggingface/transformers/blob/master/src/transformers/convert_graph_to_onnx.py#L487), which performs many optimizations on the graph. The resulting graph has a different topology than the one initially converted to ONNX: quantization is applied to this optimized version. Now in your **code example 1**, you are applying quantization directly to the converted ONNX model, so one thing you can try is optimizing the converted model (the same way it is done in the old conversion script), then applying quantization to this optimized version. Not only will the resulting model be faster, it might also solve your issue.<|||||>Thank you all for your help:
- @LysandreJik: We are looking forward to the new library, it seems really nice!
- @michaelbenayoun: Thanks, we will try this on Monday and keep you updated!<|||||>@michaelbenayoun: Your solution fixed some issues for me and my results got better, but in the end the old approach is still the best (at the moment). This could be due to errors in our pipeline, though. I think I will use the new [optimum](https://github.com/huggingface/optimum) library in the future. Thanks all for the help, I will close this issue.
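For reference, a minimal sketch of the optimize-then-quantize flow suggested earlier in this thread, reusing the helpers from the legacy `convert_graph_to_onnx` module on a model already exported with `transformers.onnx` (the file path is illustrative):

```python
from pathlib import Path

from transformers.convert_graph_to_onnx import optimize, quantize, verify

# Path produced by the `transformers.onnx` export (e.g. the export() call in code example 1).
onnx_path = Path("onnx/model.onnx")

optimized_path = optimize(onnx_path)       # run onnxruntime graph optimizations first
quantized_path = quantize(optimized_path)  # then quantize the optimized graph to int8
verify(quantized_path)
```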
transformers
14,411
closed
Fixed a bug for num_return_sequences don't take effect in Text2TextGenerationPipeline.
# Fixed a bug where num_return_sequences did not take effect in Text2TextGenerationPipeline

The previous postprocess function always returned a single result. Because of this, num_return_sequences had no effect in Text2TextGenerationPipeline, and only one text result could be obtained no matter how the parameters were changed. With this change, the postprocess method builds every generated text, so Text2TextGenerationPipeline can return multiple results (see the usage sketch after the checklist below).

Fixes https://github.com/huggingface/transformers/issues/13027#issuecomment-969899988

## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section?
- [x] Was this discussed/approved via a GitHub issue or the [forum](https://discuss.huggingface.co/)? Please add a link to it if that's the case.
- [x] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/master/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/master/docs#writing-source-documentation).
- [x] Did you write any new necessary tests?
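As a quick illustration of the intended behaviour after this change, the pipeline should return one dictionary per requested sequence. This is only a sketch: the tiny test model and the generation flags are examples, not part of the fix itself.

```python
from transformers import pipeline

generator = pipeline("text2text-generation", model="patrickvonplaten/t5-tiny-random")

outputs = generator(
    "Translate English to German: the house is wonderful",
    num_return_sequences=3,
    do_sample=True,
)

# With the fixed postprocess, three generated strings come back instead of one.
assert len(outputs) == 3
print([o["generated_text"] for o in outputs])
```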
11-16-2021 07:50:39
11-16-2021 07:50:39
Hi @enze5088, do you mind adding a test? I think extending tests/test_pipelines_text2text_generation.py::Text2TextGenerationPipelineTests::test_small_model_pt would be enough (just extend it with `num_return_sequences=3` and check that the returned result contains 3 strings; ideally they should be different, but given it's a random model, they might not be).<|||||>> Hi @enze5088, do you mind adding a test? I think extending tests/test_pipelines_text2text_generation.py::Text2TextGenerationPipelineTests::test_small_model_pt would be enough (just extend it with `num_return_sequences=3` and check that the returned result contains 3 strings; ideally they should be different, but given it's a random model, they might not be).

Thank you for your reply. I extended the tests/test_pipelines_text2text_generation.py file, but I don't know what it means when check_code_quality fails and displays "test/test_pipelines_text2text_generation.py will be reformatted".<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread. Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored.<|||||>Closing back in favor of https://github.com/huggingface/transformers/pull/14988 (same commits with code quality)
transformers
14,410
closed
Fixed a bug where num_return_sequences didn't take effect in Text2TextGenerationPipeline
# Fixed a bug where num_return_sequences didn't take effect in Text2TextGenerationPipeline. The previous postprocess function always returned a single result, so num_return_sequences had no effect in Text2TextGenerationPipeline and only one text result could be obtained, no matter how the parameters were changed. With this change, postprocess handles multiple generated sequences and Text2TextGenerationPipeline can return multiple results. <!-- Remove if not applicable --> Fixes https://github.com/huggingface/transformers/issues/13027#issuecomment-969899988 ## Before submitting - [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case). - [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section? - [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link to it if that's the case. - [ ] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/master/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/master/docs#writing-source-documentation). - [x] Did you write any new necessary tests? ## Who can review? Anyone in the community is free to review the PR once the tests have passed. Feel free to tag members/contributors who may be interested in your PR.
11-16-2021 06:53:26
11-16-2021 06:53:26
transformers
14,409
closed
support for pytorch-directml
# 🚀 Feature request I would like to be able to work with the available models and trainer classes when using pytorch-directml under WSL. ## Motivation Importing the relevant Hugging Face packages results in a message that PyTorch could not be detected and that, as such, the models would not be available. There is an importable torch module, though, provided by the pytorch-directml package. ## Your contribution I'm not sure how many changes would be required to make this work. If the features of the PyTorch fork are close to the newest release, it wouldn't be much, but I do know that the DirectML implementation for TensorFlow is quite out of date (it targets 1.15 instead of 2 or greater).
11-15-2021 22:50:10
11-15-2021 22:50:10
Hello! If you can provide a reproducible example, we're happy to take a look! It's probably a matter of adding `pytorch-directml` to this part of the `file_utils.py` file: https://github.com/huggingface/transformers/blob/558f8543ba3860c736a7a9a4176ac20f23f9d5a0/src/transformers/file_utils.py#L67-L77 Like it is done here for many TensorFlow flavors: https://github.com/huggingface/transformers/blob/558f8543ba3860c736a7a9a4176ac20f23f9d5a0/src/transformers/file_utils.py#L80-L112<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread. Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored.<|||||>`pytorch-directml` replaces `torch`, but `transformers` doesn't recognize/accept it. https://pypi.org/project/pytorch-directml/ https://docs.microsoft.com/en-us/windows/ai/directml/dml-intro<|||||>Is this already added? I would like to be able to run transformers with `pytorch-directml`.
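To make the suggestion above concrete, here is a rough sketch of the detection pattern in `file_utils.py` and where a DirectML build could plug in. The fallback distribution name is an assumption, not a tested integration.

```python
import importlib.util
import importlib.metadata as importlib_metadata

# The `torch` module must be importable *and* resolvable to an installed distribution.
_torch_available = importlib.util.find_spec("torch") is not None
if _torch_available:
    try:
        _torch_version = importlib_metadata.version("torch")
    except importlib_metadata.PackageNotFoundError:
        # A fork such as pytorch-directml exposes the `torch` module but registers under a
        # different distribution name (assumed here), so a fallback lookup would be needed.
        try:
            _torch_version = importlib_metadata.version("pytorch-directml")
        except importlib_metadata.PackageNotFoundError:
            _torch_available = False
```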
transformers
14,408
closed
Fix gradient_checkpointing backward compatibility
# What does this PR do? This supersedes #14405 and fixes #14388 by going to the root of the problem. When the code for backward compatibility is executed in the main init, the submodules of the model have not been created yet, so there is nothing to do. That code needs to be executed in some kind of `post_init`. We currently don't have a `post_init` in our models, and for another operation that is very similar (`init_weights`, which needs to be executed at the end of the init), we have a call to that method at the end of the init of every model. The proper fix will thus be to replace that call to `init_weights` with a call to `post_init` (which will call `init_weights` internally). This will be a big PR that touches every model, so I will implement it by the end of the week. For a quick fix, since we need to do a patch release because of the BC problem, this PR uses a forward pre-hook (executed before the forward method) that removes itself. The code is thus executed just before the first forward (not as clean as a post-init, but the next best thing).
11-15-2021 22:12:48
11-15-2021 22:12:48
This broke HF/deepspeed integration with pt-1.8 or pt-1.9 - works fine with pt-1.10. found with git bisecting and reported by @jeffra, as their CI broke with our master. ``` RUN_SLOW=1 pyt tests/deepspeed/test_deepspeed.py::TestDeepSpeedWithLauncher::test_clm_1_zero3 -sv ``` ``` E Traceback (most recent call last): E File "/mnt/nvme1/code/huggingface/transformers-master/examples/pytorch/language-modeling/run_clm.py", line 524, in <module> E main() E File "/mnt/nvme1/code/huggingface/transformers-master/examples/pytorch/language-modeling/run_clm.py", line 472, in main E train_result = trainer.train(resume_from_checkpoint=checkpoint) E File "/mnt/nvme1/code/huggingface/transformers-master/src/transformers/trainer.py", line 1316, in train E tr_loss_step = self.training_step(model, inputs) E File "/mnt/nvme1/code/huggingface/transformers-master/src/transformers/trainer.py", line 1849, in training_step E loss = self.compute_loss(model, inputs) E File "/mnt/nvme1/code/huggingface/transformers-master/src/transformers/trainer.py", line 1881, in compute_loss E outputs = model(**inputs) E File "/home/stas/anaconda3/envs/py38-pt19/lib/python3.8/site-packages/torch/nn/modules/module.py", line 1051, in _call_impl E return forward_call(*input, **kwargs) E File "/mnt/nvme1/code/github/00optimize/deepspeed/deepspeed/runtime/engine.py", line 1580, in forward E loss = self.module(*inputs, **kwargs) E File "/home/stas/anaconda3/envs/py38-pt19/lib/python3.8/site-packages/torch/nn/modules/module.py", line 1057, in _call_impl E for hook in itertools.chain( E RuntimeError: OrderedDict mutated during iteration ```
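To illustrate the approach described in the PR body, here is a minimal, generic sketch of a self-removing forward pre-hook (not the actual Transformers implementation). It runs on torch 1.10+, and the inline comment points at why the DeepSpeed CI on torch 1.8/1.9 tripped over it.

```python
import torch
from torch import nn

def run_once_before_first_forward(model: nn.Module, fn):
    """Register a forward pre-hook that runs `fn` a single time, then removes itself."""
    handle = None

    def _hook(module, inputs):
        fn(module)
        # Removing the handle from inside the hook mutates the hooks dict while PyTorch is
        # iterating over it; torch 1.8/1.9 raise "OrderedDict mutated during iteration" here,
        # which appears to be the failure reported in the comment above. torch 1.10 is fine.
        handle.remove()

    handle = model.register_forward_pre_hook(_hook)
    return handle

model = nn.Linear(4, 4)
run_once_before_first_forward(model, lambda m: print("post-init work for", m))
model(torch.randn(2, 4))  # hook fires once
model(torch.randn(2, 4))  # hook already removed, nothing printed
```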
transformers
14,407
closed
[Wav2Vec2] Make sure that gradient checkpointing is only run if needed
# What does this PR do? <!-- Congratulations! You've made it this far! You're not quite done yet though. Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution. Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change. Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost. --> <!-- Remove if not applicable --> Further fixes: https://github.com/huggingface/transformers/issues/14388 - thanks @MarktHart for noticing the degradation. ## Before submitting - [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case). - [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section? - [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link to it if that's the case. - [ ] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/master/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/master/docs#writing-source-documentation). - [ ] Did you write any new necessary tests? ## Who can review? Anyone in the community is free to review the PR once the tests have passed. Feel free to tag members/contributors who may be interested in your PR. <!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @ If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**. Please tag fewer than 3 people. Models: - albert, bert, xlm: @LysandreJik - blenderbot, bart, marian, pegasus, encoderdecoder, t5: @patrickvonplaten, @patil-suraj - longformer, reformer, transfoxl, xlnet: @patrickvonplaten - fsmt: @stas00 - funnel: @sgugger - gpt2: @patrickvonplaten, @LysandreJik - rag: @patrickvonplaten, @lhoestq - tensorflow: @LysandreJik Library: - benchmarks: @patrickvonplaten - deepspeed: @stas00 - ray/raytune: @richardliaw, @amogkam - text generation: @patrickvonplaten - tokenizers: @n1t0, @LysandreJik - trainer: @sgugger - pipelines: @LysandreJik Documentation: @sgugger HF projects: - datasets: [different repo](https://github.com/huggingface/datasets) - rust tokenizers: [different repo](https://github.com/huggingface/tokenizers) Examples: - maintained examples (not research project or legacy): @sgugger, @patil-suraj - research_projects/bert-loses-patience: @JetRunner - research_projects/distillation: @VictorSanh -->
11-15-2021 19:50:23
11-15-2021 19:50:23
This PR further reduces the required memory by 20%<|||||>cc @anton-l FYI
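The gist of the change, as a generic sketch rather than the actual Wav2Vec2 code: only route layers through activation checkpointing when it is both enabled and the module is in training mode, so plain inference does not pay for the recomputation.

```python
import torch
from torch import nn
from torch.utils.checkpoint import checkpoint

class CheckpointedEncoder(nn.Module):
    def __init__(self, num_layers=2, hidden=8):
        super().__init__()
        self.layers = nn.ModuleList(nn.Linear(hidden, hidden) for _ in range(num_layers))
        self.gradient_checkpointing = False

    def forward(self, hidden_states):
        for layer in self.layers:
            if self.gradient_checkpointing and self.training:
                # activations are recomputed during backward instead of being stored
                hidden_states = checkpoint(layer, hidden_states)
            else:
                hidden_states = layer(hidden_states)
        return hidden_states

enc = CheckpointedEncoder()
enc.gradient_checkpointing = True
enc.train()
enc(torch.randn(3, 8, requires_grad=True)).sum().backward()
```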
transformers
14,406
closed
Revert "Fix weight loading issue"
Reverts huggingface/transformers#14016
11-15-2021 18:33:35
11-15-2021 18:33:35
Sorry for reverting the PR here - it's on me! I merged it too quickly. We had some internal discussion and came to the conclusion that this hack is probably not worth the functionality it would give us here. Saving and loading a model with `tempfile` inside the `from_encoder_decoder_pretrained(...)` function is a big hack and it's questionable whether it's worth it. Just to compare the current design to how it would look like if we revert the PR for @LysandreJik @sgugger @Rocketknight1 If we leave `master` as it is, one can convert a PyTorch model checkpoint **correctly** as follows: ### current design ```python from transformers import EncoderDecoderModel, TFEncoderDecoderModel _model = EncoderDecoderModel.from_pretrained("patrickvonplaten/bert2bert-cnn_dailymail-fp16") _model.encoder.save_pretrained("./encoder") _model.decoder.save_pretrained("./decoder") model = TFEncoderDecoderModel.from_encoder_decoder_pretrained( "./encoder", "./decoder", encoder_from_pt=True, decoder_from_pt=True ) # then this works: model.save_pretrained("./") model = TFEncoderDecoderModel.from_pretrained("./") ``` If we remove the hack, the (in my opinion currently only way) to convert a PT checkpoint to TF is the following: ### design after removing hack ```python from transformers import EncoderDecoderModel, TFEncoderDecoderModel, TFAutoModel, TFAutoModelForCausalLM _model = EncoderDecoderModel.from_pretrained("patrickvonplaten/bert2bert-cnn_dailymail-fp16") _model.encoder.save_pretrained("./encoder") _model.decoder.save_pretrained("./decoder") # all these lines are currently done automatically. There is not really a way around doing them if we remove the hack IMO _encoder = TFAutoModel.from_pretrained("./encoder", from_pt=True) _decoder = TFAutoModelForCausalLM.from_pretrained("./decoder", from_pt=True) _encoder.save_pretrained("./encoder") _decoder.save_pretrained("./decoder") model = TFEncoderDecoderModel.from_encoder_decoder_pretrained("./encoder", "./decoder") # then this works: model.save_pretrained("./") model = TFEncoderDecoderModel.from_pretrained("./") ``` So we can see that removing the hack would force the user to do the exact same thing we are doing right now<|||||>Give that the hack only lives in `modeling_tf_encoder_decoder.py` and having thought about it again, I'm actually in favor of not merging this PR, but I defer to @LysandreJik and @sgugger to decide here.<|||||>No problem for me. I leave HF members to make the decision. Just to make this clear to users would be fine on my side 😀<|||||>Agree to close this and keep the current hack, *as long as* we mention that the `TFEncoderDecoder` is experimental. In my opinion, TensorFlow is globally ill-suited for managing several models into a single one like it is done here, and it will always have some hacky/kind-of-broken edge cases. I would advocate for keeping the work @ydshieh has done so far and see if the community appreciates/uses the feature before spending time refactoring this complex piece of software.<|||||>I agree with you two @LysandreJik and @patrickvonplaten, even if I'm really not a fan of the hack behind the scenes. Let's worry about making it better when we have a wide adoption of the TFEncoderDecoder :-)
transformers
14,405
closed
[Gradient checkpointing] Restore backwards compatibility until v5
# What does this PR do? <!-- Congratulations! You've made it this far! You're not quite done yet though. Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution. Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change. Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost. --> <!-- Remove if not applicable --> Fixes #14388 ## Before submitting - [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case). - [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section? - [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link to it if that's the case. - [ ] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/master/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/master/docs#writing-source-documentation). - [ ] Did you write any new necessary tests? ## Who can review? Anyone in the community is free to review the PR once the tests have passed. Feel free to tag members/contributors who may be interested in your PR. <!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @ If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**. Please tag fewer than 3 people. Models: - albert, bert, xlm: @LysandreJik - blenderbot, bart, marian, pegasus, encoderdecoder, t5: @patrickvonplaten, @patil-suraj - longformer, reformer, transfoxl, xlnet: @patrickvonplaten - fsmt: @stas00 - funnel: @sgugger - gpt2: @patrickvonplaten, @LysandreJik - rag: @patrickvonplaten, @lhoestq - tensorflow: @LysandreJik Library: - benchmarks: @patrickvonplaten - deepspeed: @stas00 - ray/raytune: @richardliaw, @amogkam - text generation: @patrickvonplaten - tokenizers: @n1t0, @LysandreJik - trainer: @sgugger - pipelines: @LysandreJik Documentation: @sgugger HF projects: - datasets: [different repo](https://github.com/huggingface/datasets) - rust tokenizers: [different repo](https://github.com/huggingface/tokenizers) Examples: - maintained examples (not research project or legacy): @sgugger, @patil-suraj - research_projects/bert-loses-patience: @JetRunner - research_projects/distillation: @VictorSanh -->
11-15-2021 18:11:09
11-15-2021 18:11:09
transformers
14,404
closed
`np.ndarray` not supported anymore for optional arguments at inference for Bert
## Environment info <!-- You can run the command `transformers-cli env` and copy-and-paste its output below. Don't forget to fill out the missing fields in that output! --> - `transformers` version: 4.12.3 (maybe even earlier) - Python version: >= 3.7 - Tensorflow version (GPU?): >= 2.2 ### Who can help @ydshieh @patrickvonplaten ## Information Model I am using: Bert (probably the issue arises for other models too) The error arises when using when arguments such as `attention_mask` are of type `np.ndarray`. The conversion to `list` of the `attention_mask` shape fails here: https://github.com/huggingface/transformers/blob/f5af87361718be29a1d3ddb2d8ef23f85b1c70c3/src/transformers/models/bert/modeling_tf_bert.py#L798-L803 The method `shape_list` expects a `tf.Tensor` although it should support both `tf.Tensor` and `np.ndarray`. https://github.com/huggingface/transformers/blob/f5af87361718be29a1d3ddb2d8ef23f85b1c70c3/src/transformers/modeling_tf_utils.py#L1805 The casting fails at the following line: https://github.com/huggingface/transformers/blob/f5af87361718be29a1d3ddb2d8ef23f85b1c70c3/src/transformers/modeling_tf_utils.py#L1820 as regular tuples (as is the case for `np.array` shape) do not have `as_list`, and work only with `tf.Tensor`. Note that for version [4.6.1](https://github.com/huggingface/transformers/commit/fb27b276e7babc2249abbf79e6efb23b9611da10), both `tf.Tensor` and `np.ndarray` works as expected. ## To reproduce Run [this](https://github.com/SeldonIO/alibi/blob/master/examples/integrated_gradients_transformers.ipynb) notebook. The following cells: ```python def get_embeddings(X_train, model, batch_size=50): args = X_train['input_ids'] kwargs = {k:v for k, v in X_train.items() if k != 'input_ids'} dataset = tf.data.Dataset.from_tensor_slices((args, kwargs)).batch(batch_size) dataset = dataset.as_numpy_iterator() embbedings = [] for X_batch in dataset: args_b, kwargs_b = X_batch batch_embeddings = model(args_b, **kwargs_b) embbedings.append(batch_embeddings.last_hidden_state.numpy()) return np.concatenate(embbedings, axis=0) ``` ```python train_embbedings = get_embeddings(X_train, modelBert, batch_size=100) test_embbedings = get_embeddings(X_test, modelBert, batch_size=100) ``` The error can be fixed by converting the `kwargs_b` values (i.e., `attention_mask`) to `tf.Tensor` before calling `forward` on the model: ```python kwargs_b = {k: tf.constant(v) for k, v in kwargs_b.items()} ``` ## Expected behavior <!-- A clear and concise description of what you would expect to happen. --> The function call should be working with `np.ndarray` as well as with `tf.Tensor` as the typing suggests: https://github.com/huggingface/transformers/blob/f5af87361718be29a1d3ddb2d8ef23f85b1c70c3/src/transformers/models/bert/modeling_tf_bert.py#L728-L732
11-15-2021 15:55:38
11-15-2021 15:55:38
Link to `alibi` issue: https://github.com/SeldonIO/alibi/issues/527<|||||>@RobertSamoilescu Thank you for this, it's definitely a bug. I think the best solution is probably: 1) Rewrite `shape_list` to support `np.ndarray` as well as `tf.Tensor`. 2) Add a test with Numpy inputs for our TF models to ensure this doesn't happen again. Would you be interested in trying to submit that PR? If not, we can try to get to it ourselves, but things are quite busy right now!<|||||>@Rocketknight1, I can submit a PR. I will start working on it later this week or most probably at the beginning of next week.<|||||>@RobertSamoilescu That's great, thank you! If you encounter any problems, or you find you don't have time to work on it after all, let me know here and I'll see what I can do.<|||||>@Rocketknight1, unfortunately, at the moment I do not have the time to do the PR. <|||||>@RobertSamoilescu No problem! We'll see if anyone can submit a PR, and try to get to it ourselves if not<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread. Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored.<|||||>(@Rocketknight1 assigning to me :) )<|||||>@RobertSamoilescu fixed with #15074
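A sketch of the fix proposed above, supporting both input types in `shape_list`: the NumPy branch simply short-circuits, while the `tf.Tensor` branch keeps the usual static/dynamic mix.

```python
import numpy as np
import tensorflow as tf

def shape_list(tensor):
    """Return the shape as a list, preferring static dimensions when they are known."""
    if isinstance(tensor, np.ndarray):
        return list(tensor.shape)
    dynamic = tf.shape(tensor)
    if tensor.shape == tf.TensorShape(None):
        return dynamic
    static = tensor.shape.as_list()
    return [dynamic[i] if dim is None else dim for i, dim in enumerate(static)]

print(shape_list(np.ones((2, 5))))  # [2, 5]
print(shape_list(tf.ones((2, 5))))  # [2, 5]
```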
transformers
14,403
closed
TF models save_pretrained() failed when saved_model=True
## Environment info - `transformers` version: 4.13.0.dev0 - Platform: Windows-10-10.0.19042-SP0 - Python version: 3.9.5 - PyTorch version (GPU?): 1.9.0+cpu (False) - Tensorflow version (GPU?): 2.6.0 (False) - Flax version (CPU?/GPU?/TPU?): not installed (NA) - Jax version: not installed - JaxLib version: not installed - Using GPU in script?: None - Using distributed or parallel set-up in script?: None ### Who can help TensorFlow: @Rocketknight1 ## To reproduce ``` from transformers import TFBertModel import tensorflow as tf from PIL import Image import requests model = TFBertModel.from_pretrained("bert-base-uncased") model.save_pretrained("tmp", saved_model=True) # this also failed for x in model.config.items(): print(x) ``` Error messages: ``` Traceback (most recent call last): File "C:\Users\33611\Desktop\Projects\transformers-dev-2\transformers\del.py", line 7, in <module> model.save_pretrained("tmp", saved_model=True) File "C:\Users\33611\Desktop\Projects\transformers-dev-2\transformers\src\transformers\modeling_tf_utils.py", line 1227, in save_pretrained self.save(saved_model_dir, include_optimizer=False, signatures=self.serving) File "C:\Users\33611\miniconda3\envs\py39\lib\site-packages\keras\engine\training.py", line 2145, in save save.save_model(self, filepath, overwrite, include_optimizer, save_format, File "C:\Users\33611\miniconda3\envs\py39\lib\site-packages\keras\saving\save.py", line 149, in save_model saved_model_save.save(model, filepath, overwrite, include_optimizer, File "C:\Users\33611\miniconda3\envs\py39\lib\site-packages\keras\saving\saved_model\save.py", line 94, in save metadata = generate_keras_metadata(saved_nodes, node_paths) File "C:\Users\33611\miniconda3\envs\py39\lib\site-packages\keras\saving\saved_model\save.py", line 123, in generate_keras_metadata metadata=node._tracking_metadata) # pylint: disable=protected-access File "C:\Users\33611\miniconda3\envs\py39\lib\site-packages\keras\engine\base_layer.py", line 3078, in _tracking_metadata return self._trackable_saved_model_saver.tracking_metadata File "C:\Users\33611\miniconda3\envs\py39\lib\site-packages\keras\saving\saved_model\base_serialization.py", line 54, in tracking_metadata return json_utils.Encoder().encode(self.python_properties) File "C:\Users\33611\miniconda3\envs\py39\lib\site-packages\keras\saving\saved_model\layer_serialization.py", line 37, in python_properties return self._python_properties_internal() File "C:\Users\33611\miniconda3\envs\py39\lib\site-packages\keras\saving\saved_model\model_serialization.py", line 31, in _python_properties_internal metadata = super(ModelSavedModelSaver, self)._python_properties_internal() File "C:\Users\33611\miniconda3\envs\py39\lib\site-packages\keras\saving\saved_model\layer_serialization.py", line 54, in _python_properties_internal metadata.update(get_serialized(self.obj)) File "C:\Users\33611\miniconda3\envs\py39\lib\site-packages\keras\saving\saved_model\layer_serialization.py", line 113, in get_serialized return generic_utils.serialize_keras_object(obj) File "C:\Users\33611\miniconda3\envs\py39\lib\site-packages\keras\utils\generic_utils.py", line 510, in serialize_keras_object for key, item in config.items(): File "C:\Users\33611\Desktop\Projects\transformers-dev-2\transformers\src\transformers\configuration_utils.py", line 237, in __getattribute__ return super().__getattribute__(key) AttributeError: 'BertConfig' object has no attribute 'items' ``` ## Expected behavior model.save_pretrained(..., saved_model=True) should work, because it is used in 
`test_saved_model_creation_extended()` in `test_modeling_tf_common.py`.
11-15-2021 15:28:54
11-15-2021 15:28:54
This should have been caught by these tests: https://github.com/huggingface/transformers/blob/master/tests/test_modeling_tf_common.py#L239-L306 But these tests are deactivated as they're too slow, unfortunartely.<|||||>Why are they there then? :D<|||||>Because they were not deactivated at some point, but ended up taking 6+ hours as the number of architectures grew, which we can't do. We need to refactor them to be simpler and faster. Want to take a stab at it? :)<|||||>If I won't ultimately be stabbing myself, yes!<|||||>You can start by running the tests above for a single model, and see if they can't be refactored in a single test, or if some of the time spent cannot be reduced. Also pinging @Rocketknight1 as it might be of interest to him!<|||||>> > > You can start by running the tests above for a single model, and see if they can't be refactored in a single test, or if some of the time spent cannot be reduced. Also pinging @Rocketknight1 as it might be of interest to him! Alright. Thanks for the tip!<|||||>And thank you for offering to help :)<|||||>I feel like this definitely worked in the past. I confirmed that no models are saving correctly with `saved_model=True`, and the problem is occurring when we call `model.save()` in the `save_pretrained()` function. Calling `model.save()` alone also causes this bug. I believe the underlying issue is that Keras is attempting to serialize all of the `Model` object's attributes, and doesn't know what to do with a `BertConfig` object.<|||||>Since this worked before (presumably that test passed at -some- point!), most likely something has changed regarding TensorFlow's save logic in 2.6, which means we might have to reassess how we use decorators like `@keras_serializable`. I'll put this on the list to investigate, but if anyone else has any insight, let me know!<|||||>Alternatively, this may be caused by my PR [here](https://github.com/huggingface/transformers/pull/14361), which made changes to the saving/loading of TF models. I'll try a version of Transformers before that to see if the issue is still there. EDIT: Still happens before my PR, so that's not the problem.<|||||>Update: Bug still occurs in TF 2.5.<|||||>@shabie Just to let you know, we refactored a lot of those tests quite urgently when we realized that lack of coverage was causing serious problems! This issue should hopefully be resolved now, but if people encounter further difficulties with saving TF models, please comment or file a new issue.
transformers
14,402
closed
AttributeError: 'NoneType' object has no attribute 'encode_plus' with XLNet tokenizer
Hi everyone, I am using the XLNet tokenizer; however, I am receiving this error. ``` from transformers import XLNetTokenizer, XLNetModel tokenizer = XLNetTokenizer.from_pretrained('xlnet-base-cased') input_txt = "India is my country. All Indians are my brothers and sisters" encodings = tokenizer.encode_plus(input_txt, add_special_tokens=True, max_length=16, return_tensors='pt', return_token_type_ids=False, return_attention_mask=True, pad_to_max_length=False) ``` --------------------------------------------------------------------------- ``` AttributeError Traceback (most recent call last) <ipython-input-64-4b204650a45a> in <module>() 1 input_txt = "India is my country. All Indians are my brothers and sisters" ----> 2 encodings = tokenizer.encode_plus(input_txt, add_special_tokens=True, max_length=16, return_tensors='pt', return_token_type_ids=False, return_attention_mask=True, pad_to_max_length=False) AttributeError: 'NoneType' object has no attribute 'encode_plus' ```
11-15-2021 14:52:51
11-15-2021 14:52:51
Hey @kutayoncuyilmaz, could you share your software versions as asked in the template? Thanks<|||||>You can do so by running `transformers-cli env` in your environment.<|||||>Thank you. I solved the problem
transformers
14,401
closed
Quick fix to TF summarization example
Fixes #14297
11-15-2021 13:44:57
11-15-2021 13:44:57
transformers
14,400
closed
add embed_scale for bert
# What does this PR do? `embed_scale` has been widely used in many variants of BERT and the Transformer. This commit adds `embed_scale` as an optional feature for BERT. Existing implementations: - [fairseq roberta](https://github.com/pytorch/fairseq/tree/main/fairseq/models/roberta), - https://github.com/huggingface/transformers/search?q=embed_scale ## Before submitting - [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case). - [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section? - [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link to it if that's the case. - [x] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/master/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/master/docs#writing-source-documentation). - [x] Did you write any new necessary tests? ## Who can review? - albert, bert, xlm: @LysandreJik
11-15-2021 13:40:39
11-15-2021 13:40:39
This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread. Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored.<|||||>1<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread. Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored.
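For context, the proposed `embed_scale` option is the usual embedding-scaling trick (multiply token embeddings by sqrt(hidden_size)), already present in several seq2seq models in the library. A minimal, generic sketch of what the optional flag amounts to:

```python
import math
import torch
from torch import nn

class ScaledWordEmbeddings(nn.Module):
    def __init__(self, vocab_size: int, hidden_size: int, scale_embeddings: bool = True):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, hidden_size)
        # 1.0 keeps today's BERT behavior; sqrt(hidden_size) reproduces the scaled variant
        self.embed_scale = math.sqrt(hidden_size) if scale_embeddings else 1.0

    def forward(self, input_ids: torch.LongTensor) -> torch.Tensor:
        return self.embed(input_ids) * self.embed_scale

emb = ScaledWordEmbeddings(vocab_size=100, hidden_size=16)
print(emb(torch.tensor([[1, 2, 3]])).shape)  # torch.Size([1, 3, 16])
```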
transformers
14,399
closed
Fix TFViT
# What does this PR do? Fix the code example of `TFViTForImageClassification` + fix the integration test. cc @ydshieh
11-15-2021 13:28:22
11-15-2021 13:28:22
LGTM, thanks! <|||||> I open an issue #14403 for the following. _________________________________ Hi, I found that there is something wrong (not introduced in this PR). `model.save_pretrained("tmp", saved_model=True)` will give an error `AttributeError: 'ViTConfig' object has no attribute 'items'`. (I found this when running `test_saved_model_creation_extended` in TF ViT test script) Maybe it's better to open a new issue for this. **Update** Not specific to `TFViT`, even occurs for `TFBert` ``` from transformers import TFBertModel model = TFBertModel.from_pretrained("bert-base-uncased") # failed for x in model.config.items(): print(x) # failed model.save_pretrained("tmp", saved_model=True) ``` This probably suggests that `test_saved_model_creation_extended` (@tooslow) in `test_modeling_tf_common.py` will fail for all TF models. Full example: ``` from transformers import TFViTForImageClassification model = TFViTForImageClassification.from_pretrained('google/vit-base-patch16-224') model.save_pretrained("tmp", saved_model=True) ```
transformers
14,398
closed
TrainingArguments docstring typo fix
# What does this PR do? This PR fixes the `TrainingArguments` docstring, particularly the part that describes the `ignore_data_skip` parameter. No related issues found. ## Before submitting - [x] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case). ## Who can review? Anyone in the community is free to review the PR once the tests have passed. Feel free to tag members/contributors who may be interested in your PR. @sgugger
11-15-2021 12:46:19
11-15-2021 12:46:19
No the docstring is correct: it is when setting that argument to `True` that the data is ignored and training begins faster.<|||||>oh, now I see the logic. still, the phrasing is a bit odd because parameter name `ignore_data_skip` implies the opposite of the first phrase in the description: "whether or not **to skip** the epochs and batches" (instead of "whether or not **to ignore the skip** the epochs and batches"). I think it could be slightly rephrased to improve readability. thanks anyway<|||||>Feel free to amend your PR to rephrase then :-)<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread. Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored.
transformers
14,397
open
[Consistency] Automatically set decoder_input_ids for TFEncoderDecoderModel
# 🚀 Feature request Currently, the `EncoderDecoderModel` class in PyTorch automatically creates the `decoder_input_ids` based on the `labels` provided by the user (similar to how this is done for T5/BART). This should also be implemented for `TFEncoderDecoderModel`, because currently users should manually provide `decoder_input_ids` to the model. One can take a look at the TF implementation of BART for example to see how to shift the labels in order to automatically create the `decoder_input_ids`, namely [here](https://github.com/huggingface/transformers/blob/29dfb2dbb10cdba6327ff287db56b182c1db29b1/src/transformers/models/bart/modeling_tf_bart.py#L1120). One should then also update the docstring correspondingly.
11-15-2021 12:20:44
11-15-2021 12:20:44
Hi, I would like to work on this. Is there anyone already working on it?<|||||>No, you can work on it if you want!<|||||>Hi, Is it still open? I can work on this issue<|||||>There's a PR open (#14469), however it's not finished yet.<|||||>Hi @NielsRogge Can I finish up this issue with the closed PR as reference? I'd like to make a PR soon.<|||||>I believe this has already been resolved in #15175.
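For anyone picking this up, the shifting helper boils down to something like the sketch below, modeled on the TF BART implementation linked above (not the exact library code):

```python
import tensorflow as tf

def shift_tokens_right(labels, pad_token_id, decoder_start_token_id):
    start_tokens = tf.ones_like(labels[:, :1]) * tf.cast(decoder_start_token_id, labels.dtype)
    shifted = tf.concat([start_tokens, labels[:, :-1]], axis=-1)
    # -100 marks label positions ignored by the loss; as decoder inputs they must be real (pad) tokens
    pad = tf.fill(tf.shape(shifted), tf.cast(pad_token_id, labels.dtype))
    return tf.where(shifted == -100, pad, shifted)

labels = tf.constant([[5, 6, 7, -100]])
print(shift_tokens_right(labels, pad_token_id=0, decoder_start_token_id=2))  # [[2 5 6 7]]
```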
transformers
14,396
closed
FlaxGPTJ
# What does this PR do? This PR adds the GPT-J model in Flax.
11-15-2021 10:16:18
11-15-2021 10:16:18
Looks great - just one small test for PT<>Flax compatibility should be added in `test_modeling_gptj` as well :-)
transformers
14,395
closed
force_bos_token_to_be_generated is deprecated and should be replaced by forced_bos_token_id in the BART documentation
Replace `force_bos_token_to_be_generated` with `forced_bos_token_id` in the following file: https://github.com/huggingface/transformers/blob/29dfb2dbb10cdba6327ff287db56b182c1db29b1/docs/source/model_doc/bart.rst
11-15-2021 08:54:14
11-15-2021 08:54:14
Thanks for spotting. Feel free to open a PR to fix this!<|||||>Closing, as it's fixed by #14434
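A hedged usage sketch of the replacement argument (the checkpoint and the forced id value are illustrative; for the BART mask-filling example in those docs, the forced token is BOS, id 0):

```python
from transformers import BartForConditionalGeneration, BartTokenizer

tokenizer = BartTokenizer.from_pretrained("facebook/bart-large")
model = BartForConditionalGeneration.from_pretrained("facebook/bart-large", forced_bos_token_id=0)

inputs = tokenizer("UN Chief Says There Is No <mask> in Syria", return_tensors="pt")
# forced_bos_token_id can also be passed directly to generate() instead of from_pretrained()
generated = model.generate(inputs["input_ids"], num_beams=4, max_length=20)
print(tokenizer.batch_decode(generated, skip_special_tokens=True))
```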
transformers
14,394
closed
It seems that `RagSequenceForGeneration.generate` is computing inaccurate loss value
Hello, I think the `generate` function of `RagSequenceForGeneration` does not seem to compute the loss value properly. As mentioned in the original [RAG paper](https://arxiv.org/pdf/2005.11401.pdf), thorough decoding of the RAG-Sequence model needs two forward passes: 1. Find candidate sequences through beam search for each document 2. Compute loss for each candidate sequence with another forward pass Then, return the sequence with the highest likelihood. I found that the `n_docs` argument used in beam search is not passed when the `forward` function is called for loss computation. From `RagSequenceForGeneration.generate`, ``` @torch.no_grad() def generate( self, input_ids: Optional[torch.LongTensor] = None, attention_mask: Optional[torch.LongTensor] = None, context_input_ids=None, context_attention_mask=None, doc_scores=None, do_deduplication=None, # defaults to True num_return_sequences=None, # defaults to 1 num_beams=None, # defaults to 1 n_docs=None, **model_kwargs ): ... # then, run model forwards to get nll scores: if input_ids is not None: new_input_ids = input_ids[index : index + 1].repeat(num_candidates, 1) outputs = self(new_input_ids, labels=output_sequences, exclude_bos_score=True) ... ``` I think the same documents used for beam search also need to be used for loss computation. If an `n_docs` other than the default value is passed to the `generate` function, it will end up computing a somewhat inaccurate loss, which can possibly change the final generated sequence. If this is the case, I think the `generate` function should be fixed, which can be done simply by passing the `n_docs` argument to `forward` inside `generate`. Please correct me if there are any mistakes. Thanks @patrickvonplaten
11-15-2021 07:37:51
11-15-2021 07:37:51
Maybe of interest to @lhoestq too<|||||>Looking into it next week - sorry for being so late here<|||||>Hey @repun, I'm not 100% following everything here. Could you maybe open a PR so that we can look into it with some code changes? In general, I think it's totally fine if the number of documents to retrieve during inference in `generate()` differs from the number of documents to retrieve during training. Could you explain in a bit more detail why the loss value is not accurate and how the loss value has to do with `generate()` or are we just talking about inference (and not training at all here?)<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread. Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored.
transformers
14,393
closed
Can `BartForConditionalGeneration` use the `sample()` method ?
Hi . I have a question about `BartForConditionalGeneration`. I wonder if the `BartForConditionalGeneration` can use the `sample()` method in the `generation_utils.py`? The Code is: ```python from transformers import AutoModelForSeq2SeqLM, AutoTokenizer from transformers import ( LogitsProcessorList, MinLengthLogitsProcessor, TopKLogitsWarper, TemperatureLogitsWarper, ) tokenizer = AutoTokenizer.from_pretrained('facebook/bart-base') model = AutoModelForSeq2SeqLM.from_pretrained('facebook/bart-base') input_prompt = "Today is a beautiful day, and" input_ids = tokenizer(input_prompt, return_tensors="pt").input_ids logits_processor = LogitsProcessorList([ MinLengthLogitsProcessor(15, eos_token_id=model.config.eos_token_id), ]) logits_warper = LogitsProcessorList([ TopKLogitsWarper(50), TemperatureLogitsWarper(0.7), ]) outputs = model.sample(input_ids=test_input_ids, logits_processor=logits_processor, logits_warper=logits_warper) print('Generated:', tokenizer.batch_decode(outputs, skip_special_tokens=True)) ``` and get: ```python ValueError Traceback (most recent call last) <ipython-input-26-6384a499566d> in <module> 1 print(input_ids) ----> 2 outputs = model.sample(input_ids=test_input_ids, logits_processor=logits_processor, logits_warper=logits_warper) 3 print('Generated:', tokenizer.batch_decode(outputs, skip_special_tokens=True)) /data/yuf/transformers/src/transformers/generation_utils.py in sample(self, input_ids, logits_processor, stopping_criteria, logits_warper, max_length, pad_token_id, eos_token_id, output_attentions, output_hidden_states, output_scores, return_dict_in_generate, synced_gpus, **model_kwargs) 1533 return_dict=True, 1534 output_attentions=output_attentions, -> 1535 output_hidden_states=output_hidden_states, 1536 ) 1537 ~/anaconda3/envs/irl/lib/python3.7/site-packages/torch/nn/modules/module.py in _call_impl(self, *input, **kwargs) 887 result = self._slow_forward(*input, **kwargs) 888 else: --> 889 result = self.forward(*input, **kwargs) 890 for hook in itertools.chain( 891 _global_forward_hooks.values(), /data/yuf/transformers/src/transformers/models/bart/modeling_bart.py in forward(self, input_ids, attention_mask, decoder_input_ids, decoder_attention_mask, head_mask, decoder_head_mask, cross_attn_head_mask, encoder_outputs, past_key_values, inputs_embeds, decoder_inputs_embeds, labels, use_cache, output_attentions, output_hidden_states, return_dict) 1306 output_attentions=output_attentions, 1307 output_hidden_states=output_hidden_states, -> 1308 return_dict=return_dict, 1309 ) 1310 lm_logits = self.lm_head(outputs[0]) + self.final_logits_bias ~/anaconda3/envs/irl/lib/python3.7/site-packages/torch/nn/modules/module.py in _call_impl(self, *input, **kwargs) 887 result = self._slow_forward(*input, **kwargs) 888 else: --> 889 result = self.forward(*input, **kwargs) 890 for hook in itertools.chain( 891 _global_forward_hooks.values(), /data/yuf/transformers/src/transformers/models/bart/modeling_bart.py in forward(self, input_ids, attention_mask, decoder_input_ids, decoder_attention_mask, head_mask, decoder_head_mask, cross_attn_head_mask, encoder_outputs, past_key_values, inputs_embeds, decoder_inputs_embeds, use_cache, output_attentions, output_hidden_states, return_dict) 1171 output_attentions=output_attentions, 1172 output_hidden_states=output_hidden_states, -> 1173 return_dict=return_dict, 1174 ) 1175 # If the user passed a tuple for encoder_outputs, we wrap it in a BaseModelOutput when return_dict=True 
~/anaconda3/envs/irl/lib/python3.7/site-packages/torch/nn/modules/module.py in _call_impl(self, *input, **kwargs) 887 result = self._slow_forward(*input, **kwargs) 888 else: --> 889 result = self.forward(*input, **kwargs) 890 for hook in itertools.chain( 891 _global_forward_hooks.values(), /data/yuf/transformers/src/transformers/models/bart/modeling_bart.py in forward(self, input_ids, attention_mask, head_mask, inputs_embeds, output_attentions, output_hidden_states, return_dict) 751 input_shape = inputs_embeds.size()[:-1] 752 else: --> 753 raise ValueError("You have to specify either input_ids or inputs_embeds") 754 755 if inputs_embeds is None: ValueError: You have to specify either input_ids or inputs_embeds ``` To implement the REINFORCE algorithm on summarization, it might be a good way to use the `sample()` method and get the output `score`. But it cannot work well. I guess it might because of the `prepare_inputs_for_generation` in the `BartForConditionalGeneration`. In the Line 1528 of `sample()` method , it call the `prepare_inputs_for_generation` function and only give the `input_ids`, but for `BartForConditionalGeneration`, it might use the `input_ids` as the `decoder_input_ids`, and set the `encoder_outputs` None. I try to find an example about REINFORCE algorithms in the repository, but failed. Could you please give me an example? @patrickvonplaten
11-15-2021 04:48:07
11-15-2021 04:48:07
Hey @FYYFU, Yes `BartForConditionalGeneration` can use `sample(...)`. Could you please make use of the forum: https://discuss.huggingface.co/ for questions on how to use it?<|||||>Thank you!<|||||>> Hey @FYYFU, > > Yes `BartForConditionalGeneration` can use `sample(...)`. Could you please make use of the forum: https://discuss.huggingface.co/ for questions on how to use it? ok.. Thanks for you reply! :>
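For anyone else landing here: `sample()` is normally reached through `generate()`, which prepares `encoder_outputs` and `decoder_input_ids` internally. A minimal sketch of sampled generation that also returns per-step scores (handy for a REINFORCE-style objective); the parameter values are just examples:

```python
from transformers import AutoModelForSeq2SeqLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("facebook/bart-base")
model = AutoModelForSeq2SeqLM.from_pretrained("facebook/bart-base")

inputs = tokenizer("Today is a beautiful day, and", return_tensors="pt")
outputs = model.generate(
    inputs["input_ids"],
    do_sample=True,
    top_k=50,
    temperature=0.7,
    min_length=15,
    max_length=40,
    output_scores=True,
    return_dict_in_generate=True,
)
print(tokenizer.batch_decode(outputs.sequences, skip_special_tokens=True))
print(len(outputs.scores))  # one score tensor per generated step
```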
transformers
14,392
closed
[Wav2Vec2] Add New Wav2Vec2 Translation
# What does this PR do? <!-- Congratulations! You've made it this far! You're not quite done yet though. Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution. Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change. Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost. --> <!-- Remove if not applicable --> This PR extends Wav2Vec2 with an Adapter module that allows to take any pretrained Wav2Vec2 checkpoint and down-project to a more suitable size for the encoder-decoder design. Since, this code originates from the original Wav2Vec2 Fairseq implementation and is applicable to all existing Wav2Vec2 checkpoints, I think it's ok to put it in the existing `modeling_wav2vec2.py` even if it adds some ugly flags to the config. ## Before submitting - [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case). - [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section? - [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link to it if that's the case. - [ ] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/master/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/master/docs#writing-source-documentation). - [ ] Did you write any new necessary tests? ## Who can review? Anyone in the community is free to review the PR once the tests have passed. Feel free to tag members/contributors who may be interested in your PR. <!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @ If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**. Please tag fewer than 3 people. Models: - albert, bert, xlm: @LysandreJik - blenderbot, bart, marian, pegasus, encoderdecoder, t5: @patrickvonplaten, @patil-suraj - longformer, reformer, transfoxl, xlnet: @patrickvonplaten - fsmt: @stas00 - funnel: @sgugger - gpt2: @patrickvonplaten, @LysandreJik - rag: @patrickvonplaten, @lhoestq - tensorflow: @LysandreJik Library: - benchmarks: @patrickvonplaten - deepspeed: @stas00 - ray/raytune: @richardliaw, @amogkam - text generation: @patrickvonplaten - tokenizers: @n1t0, @LysandreJik - trainer: @sgugger - pipelines: @LysandreJik Documentation: @sgugger HF projects: - datasets: [different repo](https://github.com/huggingface/datasets) - rust tokenizers: [different repo](https://github.com/huggingface/tokenizers) Examples: - maintained examples (not research project or legacy): @sgugger, @patil-suraj - research_projects/bert-loses-patience: @JetRunner - research_projects/distillation: @VictorSanh -->
11-15-2021 00:32:17
11-15-2021 00:32:17
transformers
14,391
closed
[doc] performance and parallelism updates
This PR: - updates the performance doc to break down all the memory used by the model (+ mentioned `bitsandbytes` which saves 3/4 optim memory) - updates the parallelism doc to introduce Varuna and expand on Sagemaker model parallelism solutions - both published a paper just recently @sgugger
11-14-2021 23:07:47
11-14-2021 23:07:47
transformers
14,390
closed
[Speech2Text2] Enable tokenizers
# What does this PR do? <!-- Congratulations! You've made it this far! You're not quite done yet though. Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution. Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change. Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost. --> <!-- Remove if not applicable --> Tokenizer files have been added here: - https://huggingface.co/facebook/s2t-wav2vec2-large-en-tr/commit/ae0ccd057a5c698ddb7fd439c9238ae49b8865d8 for all s2t2 models: https://huggingface.co/models?other=speech2text2 This PR is the follow-up PR of this one: https://github.com/huggingface/transformers/pull/13186 It is made sure that full backwards compatibility is kept which means that a tokenizer can still be instantiated without a merges file for decoding only. ## Before submitting - [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case). - [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section? - [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link to it if that's the case. - [ ] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/master/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/master/docs#writing-source-documentation). - [ ] Did you write any new necessary tests? ## Who can review? Anyone in the community is free to review the PR once the tests have passed. Feel free to tag members/contributors who may be interested in your PR. <!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @ If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**. Please tag fewer than 3 people. Models: - albert, bert, xlm: @LysandreJik - blenderbot, bart, marian, pegasus, encoderdecoder, t5: @patrickvonplaten, @patil-suraj - longformer, reformer, transfoxl, xlnet: @patrickvonplaten - fsmt: @stas00 - funnel: @sgugger - gpt2: @patrickvonplaten, @LysandreJik - rag: @patrickvonplaten, @lhoestq - tensorflow: @LysandreJik Library: - benchmarks: @patrickvonplaten - deepspeed: @stas00 - ray/raytune: @richardliaw, @amogkam - text generation: @patrickvonplaten - tokenizers: @n1t0, @LysandreJik - trainer: @sgugger - pipelines: @LysandreJik Documentation: @sgugger HF projects: - datasets: [different repo](https://github.com/huggingface/datasets) - rust tokenizers: [different repo](https://github.com/huggingface/tokenizers) Examples: - maintained examples (not research project or legacy): @sgugger, @patil-suraj - research_projects/bert-loses-patience: @JetRunner - research_projects/distillation: @VictorSanh -->
11-14-2021 20:09:29
11-14-2021 20:09:29
transformers
14,389
closed
--config_overrides doesn't appear to work in run_clm.py when trying to specify a larger GPT model
## Environment info <!-- You can run the command `transformers-cli env` and copy-and-paste its output below. Don't forget to fill out the missing fields in that output! --> - `transformers` version: 4.10.2 - Platform: Linux-5.4.0-84-generic-x86_64-with-Ubuntu-20.04-focal - Python version: 3.7.12 - PyTorch version (GPU?): 1.9.0+cu111 (True) - Tensorflow version (GPU?): 2.7.0 (True) - Flax version (CPU?/GPU?/TPU?): not installed (NA) - Jax version: not installed - JaxLib version: not installed - Using GPU in script?: <fill in> - Using distributed or parallel set-up in script?: <fill in> ### Who can help <!-- Your issue will be replied to more quickly if you can figure out the right person to tag with @ If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**. Please tag fewer than 3 people. Models: - GPT-2, GPT: @patrickvonplaten, @LysandreJik If the model isn't in the list, ping @LysandreJik who will redirect you to the correct contributor. Library: Documentation: @sgugger Model hub: - for issues with a model, report at https://discuss.huggingface.co/ and tag the model's creator. HF projects: - datasets: [different repo](https://github.com/huggingface/datasets) - rust tokenizers: [different repo](https://github.com/huggingface/tokenizers) Examples: - maintained examples (not research project or legacy): @sgugger, @patil-suraj For research projetcs, please ping the contributor directly. For example, on the following projects: - research_projects/bert-loses-patience: @JetRunner - research_projects/distillation: @VictorSanh --> ## Information Using the following to train GPT-2 from scratch: ``` python3.7 run_clm.py \ --model_type "gpt2" \ --tokenizer_name "gpt2" \ --train_file "train_tmp.txt" \ --validation_file "eval_tmp.txt" \ --pad_to_max_length yes \ --do_train \ --do_eval \ --max_seq_length=1024 \ --per_gpu_train_batch_size 1 \ --save_steps -1 \ --num_train_epochs 10 \ --fp16_full_eval \ --output_dir=checkpoints \ --config_overrides="n_embd=1024,n_head=16,n_layer=24,n_positions=1024,n_ctx=1024,layer_norm_epsilon=1e-5,initializer_range=0.02" ``` The --config_overrides doesn't appear to take effect: Starting the training o/p's : ``` Model config GPT2Config { "activation_function": "gelu_new", "architectures": [ "GPT2LMHeadModel" ], "attn_pdrop": 0.1, "bos_token_id": 50256, "embd_pdrop": 0.1, "eos_token_id": 50256, "gradient_checkpointing": false, "initializer_range": 0.02, "layer_norm_epsilon": 1e-05, "model_type": "gpt2", "n_ctx": 1024, "n_embd": 768, "n_head": 12, "n_inner": null, "n_layer": 12, "n_positions": 1024, "resid_pdrop": 0.1, "scale_attn_weights": true, "summary_activation": null, "summary_first_dropout": 0.1, "summary_proj_to_labels": true, "summary_type": "cls_index", "summary_use_proj": true, "task_specific_params": { "text-generation": { "do_sample": true, "max_length": 50 } }, "transformers_version": "4.10.2", "use_cache": true, "vocab_size": 50257 } ``` ## To reproduce Steps to reproduce the behavior: 1. Running the above training script ignore the parameters in --config_overrides <!-- If you have code snippets, error messages, stack traces please provide them here as well. Important! Use code tags to correctly format your code. 
See https://help.github.com/en/github/writing-on-github/creating-and-highlighting-code-blocks#syntax-highlighting Do not use screenshots, as they are hard to read and (more importantly) don't allow others to copy-and-paste your code.--> ## Expected behavior I was expecting the --config_overrides string to override the training parameters. Although I see documentation suggesting that it is possible to specify something like --model_type="gpt2-medium" this produces an error such as no such model. Perhaps there is an alternative way to specify a medium or large GPT-2 model ? Thanks.
11-14-2021 19:21:57
11-14-2021 19:21:57
Hey @Adrian-1234, we recommend using the `model_name_or_path` parameter to specify a particular checkpoint. cc @sgugger <|||||>`--config_overrides` is actually an addition by @stas00 <|||||>I will try to reproduce the issue and will then follow up. I edited the OP to add formatting.<|||||>There is no problem, other than logger misinformation. Please see https://github.com/huggingface/transformers/pull/14466 for details.
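To make the "logger misinformation" point concrete: the config printed at startup is the default one built before the overrides are applied; the overrides themselves do take effect. A quick check, assuming `PretrainedConfig.update_from_string` is the mechanism behind `--config_overrides` (as it appears to be in `run_clm.py`):

```python
from transformers import GPT2Config

config = GPT2Config()  # the default gpt2 config, i.e. what gets logged first
config.update_from_string("n_embd=1024,n_head=16,n_layer=24,n_positions=1024")
print(config.n_embd, config.n_head, config.n_layer)  # 1024 16 24
```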
transformers
14,388
closed
Wav2Vec2 CUDA memory usage doubled in v4.11.3 compared to v4.10.3 with the same batch size
## Environment info - `transformers` version: 4.11.3 - Platform: Linux-5.11.0-40-generic-x86_64-with-glibc2.29 - Python version: 3.8.10 - PyTorch version (GPU?): 1.8.1+cu111 (True) - Tensorflow version (GPU?): not installed (NA) - Flax version (CPU?/GPU?/TPU?): not installed (NA) - Jax version: not installed - JaxLib version: not installed - Using GPU in script?: Yes, 3090 - Using distributed or parallel set-up in script?: No ### Who can help @patrickvonplaten, @anton-l ## Information When using Wav2vec2 the memory usage roughly doubles when going from Huggingface v4.10.3 to v4.11.3 Whereas my 3090 (24GB memory) in v4.10.3 could handle a batchsize of ~32, in 4.11.3 this is reduced to ~10. The problem arises when using: * my own modified scripts The tasks I am working on is: * ASR ## To reproduce Steps to reproduce the behavior: 1. Run script with v4.10 and v4.11 and watch CUDA memory usage Reproduce script (relatively minimal): ``` from transformers import Wav2Vec2ForCTC, Wav2Vec2Processor, TrainingArguments from transformers.trainer import Trainer from torch.utils.data.dataset import Dataset import numpy as np class ProcessedDataset(Dataset): def __init__(self, processor): self.processor = processor def __getitem__(self, i): x = np.ones(16000 * 10) # 10 seconds y = "this is a random sentence" with self.processor.as_target_processor(): batch= {"labels": self.processor(y).input_ids} batch["input_values"] = self.processor(x, sampling_rate=16000).input_values return batch def __len__(self): return 10000 class DataCollator: def __init__(self, processor): self.processor = processor def __call__(self, features): input_features = [{"input_values": feature["input_values"][0]} for feature in features] label_features = [{"input_ids": feature["labels"]} for feature in features] batch = self.processor.pad( input_features, padding=True, max_length=None, pad_to_multiple_of=None, return_tensors="pt", ) with self.processor.as_target_processor(): labels_batch = self.processor.pad( label_features, padding=True, max_length=None, pad_to_multiple_of=None, return_tensors="pt", ) labels = labels_batch["input_ids"].masked_fill(labels_batch.attention_mask.ne(1), -100) batch["labels"] = labels return batch proc = Wav2Vec2Processor.from_pretrained("wietsedv/wav2vec2-large-xlsr-53-dutch") model = Wav2Vec2ForCTC.from_pretrained( "facebook/wav2vec2-large-nl-voxpopuli", attention_dropout=0, hidden_dropout=0, feat_proj_dropout=0, mask_time_prob=0, layerdrop=0, activation_dropout=0, gradient_checkpointing=True, ctc_loss_reduction="mean", pad_token_id=proc.tokenizer.pad_token_id, vocab_size=len(proc.tokenizer), ctc_zero_infinity=True ) ds = ProcessedDataset(proc) data_collator = DataCollator(processor=proc) args = TrainingArguments( output_dir="/tmp/tmp_model", per_device_train_batch_size=8, gradient_accumulation_steps=1, do_eval=False, num_train_epochs=1, fp16=True, group_by_length=False, save_steps=-1, eval_steps=1024, logging_steps=1024, warmup_steps=128, save_total_limit=1, dataloader_num_workers=1, seed=11 ) trainer = Trainer(model=model, args=args, train_dataset=ds, data_collator=data_collator) trainer.train() ``` ## Expected behavior Upgrading Huggingface Transformers from 4.10 to a later version should keep the memory usage in the same ballpark
11-14-2021 18:36:10
11-14-2021 18:36:10
Looking into it now<|||||>Benchmarking your script on current master gives me a peak GPU mem usage of `20068MiB`.<|||||>And with `4.10` it gives me `10738MiB` => so this seems like a pretty heavy bug! Thanks for the heads-up!<|||||>Will investigate now<|||||>No problem at all! If there is anything I can do to assist I would be happy to help. <|||||>Ok I think I already found one problem. It seems like the `gradient_checkpointing` [PR](https://github.com/huggingface/transformers/pull/13657) refactor wasn't 100% backward compatible. @MarktHart - could you add ```python model.gradient_checkpointing_enable() ``` before this line: ``` trainer = Trainer(model=model, args=args, train_dataset=ds, data_collator=data_collator) ``` this should more or less solve the problem<|||||>That does solve the issue. Thanks a bunch! <|||||>@patrickvonplaten do you decide whether to close the issue or that backward compatibility should be restored? <|||||>@sgugger - this is a weird issue. For some reason `from_pretrained(...)` doesn't currently set `gradient_checkpointing` to `True` at the first init since the main model does not have the `nn.Modules` attached yet. Will open a hacky PR to fix it<|||||>> Ok I think I already found one problem. It seems like the `gradient_checkpointing` [PR](https://github.com/huggingface/transformers/pull/13657) refactor wasn't 100% backward compatible. > > @MarktHart - could you add > > ```python > model.gradient_checkpointing_enable() > ``` > > before this line: > > ``` > trainer = Trainer(model=model, args=args, train_dataset=ds, data_collator=data_collator) > ``` > > this should more or less solve the problem I have this issue in 4.14.1 when I set group_by_length=True. Adding model.gradient_checkpointing_enable() doesn't solve this problem. <|||||>@voidful - can you provide a reproducible script here? :-) Thanks a lot!<|||||>> @voidful - can you provide a reproducible script here? :-) Thanks a lot! It turned out to be a length issue on my custom dataset; simply applying .filter solves the problem. Sorry for the confusion.
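For readers hitting the same regression, here is a minimal runnable sketch of the workaround suggested in the thread above: explicitly re-enabling gradient checkpointing on the instantiated model before building the `Trainer`. The checkpoint name is taken from the reproduction script; the training arguments are trimmed down and only illustrative.

```python
from transformers import Trainer, TrainingArguments, Wav2Vec2ForCTC

# Load the model as in the reproduction script above; passing gradient_checkpointing
# through the config alone is what silently stopped working in the affected versions.
model = Wav2Vec2ForCTC.from_pretrained("facebook/wav2vec2-large-nl-voxpopuli")

# Workaround from the thread: turn gradient checkpointing on explicitly
# after instantiation and before handing the model to the Trainer.
model.gradient_checkpointing_enable()

args = TrainingArguments(output_dir="/tmp/tmp_model", per_device_train_batch_size=8)
trainer = Trainer(model=model, args=args)  # train_dataset / data_collator omitted for brevity
```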
transformers
14,387
closed
Wav2Vec2 Speech Pre-Training After a few epochs the contrastive loss was decreased to zero and the model stopped changing
I tried to re-run the demo script with the same parameters on Colab. After a few epochs, the contrastive loss decreased to zero and the model stopped changing. The original script can be found here: https://github.com/huggingface/transformers/tree/master/examples/pytorch/speech-pretraining Here is the sample code ([Colab](https://colab.research.google.com/drive/1IDwie_Te_2GntHay_zZ-oPvriNsacQ8d?usp=sharing)); sample output: ``` | loss: 9.969e-02| constrast_loss: 0.000e+00| div_loss: 9.969e-01| %_mask_idx: 5.137e-01| ppl: 2.000e+00| lr: 1.572e-03| temp: 1.902e+00| grad_norm: 8.068e-19 | loss: 9.969e-02| constrast_loss: 0.000e+00| div_loss: 9.969e-01| %_mask_idx: 4.952e-01| ppl: 2.000e+00| lr: 1.572e-03| temp: 1.902e+00| grad_norm: 4.017e-19 | loss: 9.969e-02| constrast_loss: 0.000e+00| div_loss: 9.969e-01| %_mask_idx: 4.831e-01| ppl: 2.000e+00| lr: 1.572e-03| temp: 1.902e+00| grad_norm: 5.166e-19 ``` I used the parameters given in the README file, so this behavior is unexpected and may indicate a different problem. Is this a bug in the official script, or am I making a mistake? If so, please help me figure out how to fix this problem. @patrickvonplaten @anton-l
11-14-2021 09:52:22
11-14-2021 09:52:22
Hey @umairahmad-ua, I was using 8 GPUs - in a simple colab you get just a single GPU, so I think the effective batch_size is very much different, which then quickly leads to a collapse of the `contrastive_loss`. TBH, I don't think you can reproduce the training in a single colab (you would have to let it run for 4 weeks to see significant changes)<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread. Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored.<|||||>Interestingly, I have the same problem when pre-training wav2vec2 on my own dataset: the div_loss cannot go down and stays at the same value (9.969e-01). I set batch_size=16, single GPU.
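As a back-of-the-envelope illustration of the batch-size point above (not the actual pre-training configuration), the sketch below shows how gradient accumulation on a single GPU can approximate the effective batch size of the original 8-GPU run; the per-device batch size of 8 is an assumption made only for this example.

```python
# Effective batch size = per_device_batch_size * num_devices * gradient_accumulation_steps.
per_device_batch_size = 8            # assumed value for illustration only
reference_num_gpus = 8               # the run referenced in the comment above used 8 GPUs
gradient_accumulation_steps = 8      # accumulate on a single GPU to compensate

effective_reference = per_device_batch_size * reference_num_gpus * 1
effective_single_gpu = per_device_batch_size * 1 * gradient_accumulation_steps
print(effective_reference, effective_single_gpu)  # 64 64 under these assumptions
```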
transformers
14,386
closed
Raise exceptions instead of using asserts in modeling_openai #12789
# What does this PR do? Replaces control flow assertions in modeling_openai.py to address issue #12789. Contributes to #12789 ## Before submitting - [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case). - [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section? - [x] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link to it if that's the case. [link](https://github.com/huggingface/transformers/issues/12789) - [ ] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/master/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/master/docs#writing-source-documentation). - [ ] Did you write any new necessary tests? ## Who can review? Tagging @sgugger I saw you commenting on the issue and some other PRs for this issue : ).
11-13-2021 22:31:15
11-13-2021 22:31:15
Thanks for your help on this!
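Since the PR description above only names the pattern, here is a generic, self-contained sketch of what replacing a control-flow `assert` with a raised exception looks like; the config class, attribute, and error message are made up for illustration and are not the actual code from `modeling_openai.py`.

```python
from dataclasses import dataclass


@dataclass
class DummyConfig:
    # Stand-in for a model config; not the real OpenAIGPTConfig.
    n_positions: int = 512


def check_config(config: DummyConfig) -> None:
    # Before the change, checks like this were written as:
    #     assert config.n_positions > 0
    # which is skipped under `python -O` and raises a bare AssertionError.
    # The pattern applied instead is an explicit, descriptive exception:
    if config.n_positions <= 0:
        raise ValueError(f"`n_positions` must be positive, got {config.n_positions}")


check_config(DummyConfig())
```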
transformers
14,385
closed
Replace BertLayerNorm with LayerNorm
Running Movement pruning experiments with the newest HuggingFace Transformers would crash due to the non-existent `BertLayerNorm`.
11-13-2021 16:32:39
11-13-2021 16:32:39
@VictorSanh @eldarkurtic Hello guys! Thanks for your effort on such a great repo! I'm wondering why the implementation of LayerNorm in BERT does not seem to be identical to the commonly discussed version of `LayerNorm`. I've read several tutorials about `LayerNorm`, and according to their explanation, the implementation looks like the following: ``` input_x = torch.rand(batch_size, sequence_length, hidden_size) layer_norm = torch.nn.LayerNorm([sequence_length, hidden_size]) output = layer_norm(input_x) ``` In the above implementation, the normalization is computed over both the `sequence_length` and `hidden_size` dims, but the implementation in this repo only normalizes the tensors over the last dim, which I think is somewhat similar to `InstanceNorm`?
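To make the difference discussed in the comment above concrete, here is a small runnable sketch comparing normalization over only the last (hidden) dimension, which is what the BERT implementation uses, with normalization over the last two dimensions as in the quoted tutorials; the tensor sizes are arbitrary.

```python
import torch

batch_size, sequence_length, hidden_size = 2, 4, 8
x = torch.rand(batch_size, sequence_length, hidden_size)

# BERT-style: normalize each token vector independently over the hidden dimension only.
ln_last_dim = torch.nn.LayerNorm(hidden_size)
out_last_dim = ln_last_dim(x)

# Variant from the tutorials cited above: normalize over sequence and hidden dims jointly.
ln_two_dims = torch.nn.LayerNorm([sequence_length, hidden_size])
out_two_dims = ln_two_dims(x)

# Per-token statistics differ: the first output has ~zero mean for every token vector,
# while the second only has ~zero mean over each full (sequence, hidden) slice.
print(out_last_dim.mean(dim=-1)[0])        # close to 0 for each token
print(out_two_dims.mean(dim=(-2, -1))[0])  # close to 0 per example, not per token
```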
transformers
14,384
closed
Token indices sequence length is longer than the specified maximum sequence length
## Environment info - `transformers` version: 4.12.3 - Platform: Linux-5.10.0-9-amd64-x86_64-with-glibc2.17 - Python version: 3.8.12 - PyTorch version (GPU?): 1.9.0 (True) - Tensorflow version (GPU?): not installed (NA) - Flax version (CPU?/GPU?/TPU?): not installed (NA) - Jax version: not installed - JaxLib version: not installed - Using GPU in script?: Yes - Using distributed or parallel set-up in script?: No Models: sshleifer/distill-pegasus-cnn-16-4 @LysandreJik ## Information Model I am using (Bert, XLNet ...): The problem arises when using: * [ ] the official example scripts: (give details below) * [X] my own modified scripts: (give details below) The tasks I am working on is: * [ ] an official GLUE/SQUaD task: (give the name) * [X] my own task or dataset: (give details below) ## To reproduce Steps to reproduce the behavior: ```python from transformers import pipeline model='sshleifer/distill-pegasus-cnn-16-4' pipe = pipeline( 'summarization', model=model, device=0, max_length=1024, ) text = """ Znaczenie Państwowych Funduszy Majątkowych w Globalnym Zarządzaniu Nadmiernymi Rezerwami Walutowymi w W artykule przedstawione zostały motywy gromadzenia przez kraje rezerw walutowych oraz poziom tych rezerw w wybranych krajach w 2008 roku w odniesieniu do najczęściej występujących w literaturze poziomów referencyjnych. Przedstawione analizy dowodzą, że w grupie krajów, które zgromadziły ponad 60% ogólnoświatowych rezerw walutowych przekroczone zostały mierniki uznawane za optymalne, co dowodzi, że kraje te posiadają nadmierne rezerwy. Dotyczy to takich krajów jak: Chiny, Japonia, Rosja, Arabia Saudyjska, Hong Kong, Indie, Korea Południowa, Brazylia, Singapur oraz Tajlandia. W kolejnej części przedstawiona została krótka charakterystyka państwowych funduszy majątkowych, które są jednocześnie instytucjonalną innowacją na globalnych rynkach finansowych oraz alternatywnym narzędziem zarządzania nadmiernymi rezerwami walutowymi. W następnej części artykułu przybliżone zostały korzyści płynące dla gospodarki z tytułu posiadania tego typu podmiotów. Wymienić wśród nich należy m.in. możliwość inwestowania rezerw walutowych w szerszą grupę aktywów o wyższym ryzyku oraz wyższej stopie zwrotu niż ma to miejsce w przypadku tradycyjnego zarządzania rezerwami walutowymi prowadzonego przez krajowe władze monetarne. Podmioty te ułatwiają ponadto absorpcję napływającego do gospodarki strumienia kapitału bez wystąpienia takich negatywnych konsekwencji jak aprecjacja kursu walutowego, powstawanie baniek spekulacyjnych czy inflacja. Dzięki inwestowaniu w szeroką gamę aktywów na rynkach międzynarodowych państwowe fundusze majątkowe zmniejszają lub wręcz eliminują koszty alternatywne związane z utrzymywaniem rezerw. Fundusze te ułatwiają międzypokoleniowy transfer środków pochodzących z eksploatacji zasobów nieodnawialnych jak również mogą być wykorzystywane do wspierania gospodarki podczas kryzysów kiedy to jako inwestorzy ostatniej instancji zapewniają płynność zarówno sektora finansowego jak i pozostałych gałęzi gospodarki. Państwowe fundusze majątkowe postrzegane są jako narzędzie wspierające stabilność makroekonomiczną gospodarki oraz forma zabezpieczenia przyszłego dobrobytu ekonomicznego kraju. Podmioty te wnoszą ponadto istotny wkład w funkcjonowanie gospodarki światowej. 
Jako długoterminowi, pasywni inwestorzy, którzy nie stosują w swoich strategiach inwestycyjnych dźwigni, państwowe fundusze majątkowe wywierać mogą stabilizujący wpływ na międzynarodowe rynki finansowe zwiększając ich płynność oraz obniżając wahania rynkowe. Wnioski wyciągnięte w artykule wskazują, że w najbliższym latach możliwy jest dalszy rozwój rynku państwowych funduszy majątkowych i wzrost ich znaczenia na międzynarodowych rynkach finansowych. """ output = pipe(text, min_length=5, max_length=1024) ``` Stack error: ``` python problem.py Token indices sequence length is longer than the specified maximum sequence length for this model (1214 > 1024). Running this sequence through the model will result in indexing errors /tmp/pip-req-build-pma2oi4d/aten/src/ATen/native/cuda/Indexing.cu:702: indexSelectLargeIndex: block: [160,0,0], thread: [32,0,0] Assertion `srcIndex < srcSelectDimSize` failed. /tmp/pip-req-build-pma2oi4d/aten/src/ATen/native/cuda/Indexing.cu:702: indexSelectLargeIndex: block: [160,0,0], thread: [33,0,0] Assertion `srcIndex < srcSelectDimSize` failed. /tmp/pip-req-build-pma2oi4d/aten/src/ATen/native/cuda/Indexing.cu:702: indexSelectLargeIndex: block: [160,0,0], thread: [34,0,0] Assertion `srcIndex < srcSelectDimSize` failed. /tmp/pip-req-build-pma2oi4d/aten/src/ATen/native/cuda/Indexing.cu:702: indexSelectLargeIndex: block: [160,0,0], thread: [35,0,0] Assertion `srcIndex < srcSelectDimSize` failed. /tmp/pip-req-build-pma2oi4d/aten/src/ATen/native/cuda/Indexing.cu:702: indexSelectLargeIndex: block: [160,0,0], thread: [36,0,0] Assertion `srcIndex < srcSelectDimSize` failed. /tmp/pip-req-build-pma2oi4d/aten/src/ATen/native/cuda/Indexing.cu:702: indexSelectLargeIndex: block: [160,0,0], thread: [37,0,0] Assertion `srcIndex < srcSelectDimSize` failed. /tmp/pip-req-build-pma2oi4d/aten/src/ATen/native/cuda/Indexing.cu:702: indexSelectLargeIndex: block: [160,0,0], thread: [38,0,0] Assertion `srcIndex < srcSelectDimSize` failed. /tmp/pip-req-build-pma2oi4d/aten/src/ATen/native/cuda/Indexing.cu:702: indexSelectLargeIndex: block: [160,0,0], thread: [39,0,0] Assertion `srcIndex < srcSelectDimSize` failed. /tmp/pip-req-build-pma2oi4d/aten/src/ATen/native/cuda/Indexing.cu:702: indexSelectLargeIndex: block: [160,0,0], thread: [40,0,0] Assertion `srcIndex < srcSelectDimSize` failed. /tmp/pip-req-build-pma2oi4d/aten/src/ATen/native/cuda/Indexing.cu:702: indexSelectLargeIndex: block: [160,0,0], thread: [41,0,0] Assertion `srcIndex < srcSelectDimSize` failed. /tmp/pip-req-build-pma2oi4d/aten/src/ATen/native/cuda/Indexing.cu:702: indexSelectLargeIndex: block: [160,0,0], thread: [42,0,0] Assertion `srcIndex < srcSelectDimSize` failed. /tmp/pip-req-build-pma2oi4d/aten/src/ATen/native/cuda/Indexing.cu:702: indexSelectLargeIndex: block: [160,0,0], thread: [43,0,0] Assertion `srcIndex < srcSelectDimSize` failed. /tmp/pip-req-build-pma2oi4d/aten/src/ATen/native/cuda/Indexing.cu:702: indexSelectLargeIndex: block: [160,0,0], thread: [44,0,0] Assertion `srcIndex < srcSelectDimSize` failed. /tmp/pip-req-build-pma2oi4d/aten/src/ATen/native/cuda/Indexing.cu:702: indexSelectLargeIndex: block: [160,0,0], thread: [45,0,0] Assertion `srcIndex < srcSelectDimSize` failed. /tmp/pip-req-build-pma2oi4d/aten/src/ATen/native/cuda/Indexing.cu:702: indexSelectLargeIndex: block: [160,0,0], thread: [46,0,0] Assertion `srcIndex < srcSelectDimSize` failed. 
[... the same assertion failure from /tmp/pip-req-build-pma2oi4d/aten/src/ATen/native/cuda/Indexing.cu:702 ("indexSelectLargeIndex: ... Assertion `srcIndex < srcSelectDimSize` failed.") is repeated for many more CUDA block/thread indices; the duplicate log lines are omitted here ...]
/tmp/pip-req-build-pma2oi4d/aten/src/ATen/native/cuda/Indexing.cu:702: indexSelectLargeIndex: block: [134,0,0], thread: [89,0,0] Assertion `srcIndex < srcSelectDimSize` failed. /tmp/pip-req-build-pma2oi4d/aten/src/ATen/native/cuda/Indexing.cu:702: indexSelectLargeIndex: block: [134,0,0], thread: [90,0,0] Assertion `srcIndex < srcSelectDimSize` failed. /tmp/pip-req-build-pma2oi4d/aten/src/ATen/native/cuda/Indexing.cu:702: indexSelectLargeIndex: block: [134,0,0], thread: [91,0,0] Assertion `srcIndex < srcSelectDimSize` failed. /tmp/pip-req-build-pma2oi4d/aten/src/ATen/native/cuda/Indexing.cu:702: indexSelectLargeIndex: block: [134,0,0], thread: [92,0,0] Assertion `srcIndex < srcSelectDimSize` failed. /tmp/pip-req-build-pma2oi4d/aten/src/ATen/native/cuda/Indexing.cu:702: indexSelectLargeIndex: block: [134,0,0], thread: [93,0,0] Assertion `srcIndex < srcSelectDimSize` failed. /tmp/pip-req-build-pma2oi4d/aten/src/ATen/native/cuda/Indexing.cu:702: indexSelectLargeIndex: block: [134,0,0], thread: [94,0,0] Assertion `srcIndex < srcSelectDimSize` failed. /tmp/pip-req-build-pma2oi4d/aten/src/ATen/native/cuda/Indexing.cu:702: indexSelectLargeIndex: block: [134,0,0], thread: [95,0,0] Assertion `srcIndex < srcSelectDimSize` failed. Traceback (most recent call last): File "/home/scampo01/Code/tldr/problem.py", line 20, in <module> output = pipe(text, min_length=5, max_length=1024) File "/scratch/scampo01/condaenvs/torchA100/lib/python3.8/site-packages/transformers/pipelines/text2text_generation.py", line 223, in __call__ return super().__call__(*args, **kwargs) File "/scratch/scampo01/condaenvs/torchA100/lib/python3.8/site-packages/transformers/pipelines/text2text_generation.py", line 136, in __call__ result = super().__call__(*args, **kwargs) File "/scratch/scampo01/condaenvs/torchA100/lib/python3.8/site-packages/transformers/pipelines/base.py", line 924, in __call__ return self.run_single(inputs, preprocess_params, forward_params, postprocess_params) File "/scratch/scampo01/condaenvs/torchA100/lib/python3.8/site-packages/transformers/pipelines/base.py", line 931, in run_single model_outputs = self.forward(model_inputs, **forward_params) File "/scratch/scampo01/condaenvs/torchA100/lib/python3.8/site-packages/transformers/pipelines/base.py", line 880, in forward model_outputs = self._forward(model_inputs, **forward_params) File "/scratch/scampo01/condaenvs/torchA100/lib/python3.8/site-packages/transformers/pipelines/text2text_generation.py", line 154, in _forward output_ids = self.model.generate(**model_inputs, **generate_kwargs) File "/scratch/scampo01/condaenvs/torchA100/lib/python3.8/site-packages/torch/autograd/grad_mode.py", line 28, in decorate_context return func(*args, **kwargs) File "/scratch/scampo01/condaenvs/torchA100/lib/python3.8/site-packages/transformers/generation_utils.py", line 907, in generate model_kwargs = self._prepare_encoder_decoder_kwargs_for_generation(input_ids, model_kwargs) File "/scratch/scampo01/condaenvs/torchA100/lib/python3.8/site-packages/transformers/generation_utils.py", line 416, in _prepare_encoder_decoder_kwargs_for_generation model_kwargs["encoder_outputs"]: ModelOutput = encoder(input_ids, return_dict=True, **encoder_kwargs) File "/scratch/scampo01/condaenvs/torchA100/lib/python3.8/site-packages/torch/nn/modules/module.py", line 1051, in _call_impl return forward_call(*input, **kwargs) File "/scratch/scampo01/condaenvs/torchA100/lib/python3.8/site-packages/transformers/models/pegasus/modeling_pegasus.py", line 797, in forward layer_outputs = encoder_layer( File 
"/scratch/scampo01/condaenvs/torchA100/lib/python3.8/site-packages/torch/nn/modules/module.py", line 1051, in _call_impl return forward_call(*input, **kwargs) File "/scratch/scampo01/condaenvs/torchA100/lib/python3.8/site-packages/transformers/models/pegasus/modeling_pegasus.py", line 321, in forward hidden_states, attn_weights, _ = self.self_attn( File "/scratch/scampo01/condaenvs/torchA100/lib/python3.8/site-packages/torch/nn/modules/module.py", line 1051, in _call_impl return forward_call(*input, **kwargs) File "/scratch/scampo01/condaenvs/torchA100/lib/python3.8/site-packages/transformers/models/pegasus/modeling_pegasus.py", line 193, in forward query_states = self.q_proj(hidden_states) * self.scaling File "/scratch/scampo01/condaenvs/torchA100/lib/python3.8/site-packages/torch/nn/modules/module.py", line 1051, in _call_impl return forward_call(*input, **kwargs) File "/scratch/scampo01/condaenvs/torchA100/lib/python3.8/site-packages/torch/nn/modules/linear.py", line 96, in forward return F.linear(input, self.weight, self.bias) File "/scratch/scampo01/condaenvs/torchA100/lib/python3.8/site-packages/torch/nn/functional.py", line 1847, in linear return torch._C._nn.linear(input, weight, bias) RuntimeError: CUDA error: CUBLAS_STATUS_NOT_INITIALIZED when calling `cublasCreate(handle)` /tmp/pip-req-build-pma2oi4d/aten/src/ATen/native/cuda/Indexing.cu:702: indexSelectLargeIndex: block: [176,0,0], thread: [0,0,0] Assertion `srcIndex < srcSelectDimSize` failed. /tmp/pip-req-build-pma2oi4d/aten/src/ATen/native/cuda/Indexing.cu:702: indexSelectLargeIndex: block: [176,0,0], thread: [1,0,0] Assertion `srcIndex < srcSelectDimSize` failed. /tmp/pip-req-build-pma2oi4d/aten/src/ATen/native/cuda/Indexing.cu:702: indexSelectLargeIndex: block: [176,0,0], thread: [2,0,0] Assertion `srcIndex < srcSelectDimSize` failed. /tmp/pip-req-build-pma2oi4d/aten/src/ATen/native/cuda/Indexing.cu:702: indexSelectLargeIndex: block: [176,0,0], thread: [3,0,0] Assertion `srcIndex < srcSelectDimSize` failed. /tmp/pip-req-build-pma2oi4d/aten/src/ATen/native/cuda/Indexing.cu:702: indexSelectLargeIndex: block: [176,0,0], thread: [4,0,0] Assertion `srcIndex < srcSelectDimSize` failed. /tmp/pip-req-build-pma2oi4d/aten/src/ATen/native/cuda/Indexing.cu:702: indexSelectLargeIndex: block: [176,0,0], thread: [5,0,0] Assertion `srcIndex < srcSelectDimSize` failed. /tmp/pip-req-build-pma2oi4d/aten/src/ATen/native/cuda/Indexing.cu:702: indexSelectLargeIndex: block: [176,0,0], thread: [6,0,0] Assertion `srcIndex < srcSelectDimSize` failed. /tmp/pip-req-build-pma2oi4d/aten/src/ATen/native/cuda/Indexing.cu:702: indexSelectLargeIndex: block: [176,0,0], thread: [7,0,0] Assertion `srcIndex < srcSelectDimSize` failed. /tmp/pip-req-build-pma2oi4d/aten/src/ATen/native/cuda/Indexing.cu:702: indexSelectLargeIndex: block: [176,0,0], thread: [8,0,0] Assertion `srcIndex < srcSelectDimSize` failed. /tmp/pip-req-build-pma2oi4d/aten/src/ATen/native/cuda/Indexing.cu:702: indexSelectLargeIndex: block: [176,0,0], thread: [9,0,0] Assertion `srcIndex < srcSelectDimSize` failed. /tmp/pip-req-build-pma2oi4d/aten/src/ATen/native/cuda/Indexing.cu:702: indexSelectLargeIndex: block: [176,0,0], thread: [10,0,0] Assertion `srcIndex < srcSelectDimSize` failed. /tmp/pip-req-build-pma2oi4d/aten/src/ATen/native/cuda/Indexing.cu:702: indexSelectLargeIndex: block: [176,0,0], thread: [11,0,0] Assertion `srcIndex < srcSelectDimSize` failed. 
/tmp/pip-req-build-pma2oi4d/aten/src/ATen/native/cuda/Indexing.cu:702: indexSelectLargeIndex: block: [176,0,0], thread: [12,0,0] Assertion `srcIndex < srcSelectDimSize` failed. /tmp/pip-req-build-pma2oi4d/aten/src/ATen/native/cuda/Indexing.cu:702: indexSelectLargeIndex: block: [176,0,0], thread: [13,0,0] Assertion `srcIndex < srcSelectDimSize` failed. /tmp/pip-req-build-pma2oi4d/aten/src/ATen/native/cuda/Indexing.cu:702: indexSelectLargeIndex: block: [176,0,0], thread: [14,0,0] Assertion `srcIndex < srcSelectDimSize` failed. /tmp/pip-req-build-pma2oi4d/aten/src/ATen/native/cuda/Indexing.cu:702: indexSelectLargeIndex: block: [176,0,0], thread: [15,0,0] Assertion `srcIndex < srcSelectDimSize` failed. /tmp/pip-req-build-pma2oi4d/aten/src/ATen/native/cuda/Indexing.cu:702: indexSelectLargeIndex: block: [176,0,0], thread: [16,0,0] Assertion `srcIndex < srcSelectDimSize` failed. /tmp/pip-req-build-pma2oi4d/aten/src/ATen/native/cuda/Indexing.cu:702: indexSelectLargeIndex: block: [176,0,0], thread: [17,0,0] Assertion `srcIndex < srcSelectDimSize` failed. /tmp/pip-req-build-pma2oi4d/aten/src/ATen/native/cuda/Indexing.cu:702: indexSelectLargeIndex: block: [176,0,0], thread: [18,0,0] Assertion `srcIndex < srcSelectDimSize` failed. /tmp/pip-req-build-pma2oi4d/aten/src/ATen/native/cuda/Indexing.cu:702: indexSelectLargeIndex: block: [176,0,0], thread: [19,0,0] Assertion `srcIndex < srcSelectDimSize` failed. /tmp/pip-req-build-pma2oi4d/aten/src/ATen/native/cuda/Indexing.cu:702: indexSelectLargeIndex: block: [176,0,0], thread: [20,0,0] Assertion `srcIndex < srcSelectDimSize` failed. /tmp/pip-req-build-pma2oi4d/aten/src/ATen/native/cuda/Indexing.cu:702: indexSelectLargeIndex: block: [176,0,0], thread: [21,0,0] Assertion `srcIndex < srcSelectDimSize` failed. /tmp/pip-req-build-pma2oi4d/aten/src/ATen/native/cuda/Indexing.cu:702: indexSelectLargeIndex: block: [176,0,0], thread: [22,0,0] Assertion `srcIndex < srcSelectDimSize` failed. /tmp/pip-req-build-pma2oi4d/aten/src/ATen/native/cuda/Indexing.cu:702: indexSelectLargeIndex: block: [176,0,0], thread: [23,0,0] Assertion `srcIndex < srcSelectDimSize` failed. /tmp/pip-req-build-pma2oi4d/aten/src/ATen/native/cuda/Indexing.cu:702: indexSelectLargeIndex: block: [176,0,0], thread: [24,0,0] Assertion `srcIndex < srcSelectDimSize` failed. /tmp/pip-req-build-pma2oi4d/aten/src/ATen/native/cuda/Indexing.cu:702: indexSelectLargeIndex: block: [176,0,0], thread: [25,0,0] Assertion `srcIndex < srcSelectDimSize` failed. /tmp/pip-req-build-pma2oi4d/aten/src/ATen/native/cuda/Indexing.cu:702: indexSelectLargeIndex: block: [176,0,0], thread: [26,0,0] Assertion `srcIndex < srcSelectDimSize` failed. /tmp/pip-req-build-pma2oi4d/aten/src/ATen/native/cuda/Indexing.cu:702: indexSelectLargeIndex: block: [176,0,0], thread: [27,0,0] Assertion `srcIndex < srcSelectDimSize` failed. /tmp/pip-req-build-pma2oi4d/aten/src/ATen/native/cuda/Indexing.cu:702: indexSelectLargeIndex: block: [176,0,0], thread: [28,0,0] Assertion `srcIndex < srcSelectDimSize` failed. /tmp/pip-req-build-pma2oi4d/aten/src/ATen/native/cuda/Indexing.cu:702: indexSelectLargeIndex: block: [176,0,0], thread: [29,0,0] Assertion `srcIndex < srcSelectDimSize` failed. /tmp/pip-req-build-pma2oi4d/aten/src/ATen/native/cuda/Indexing.cu:702: indexSelectLargeIndex: block: [176,0,0], thread: [30,0,0] Assertion `srcIndex < srcSelectDimSize` failed. /tmp/pip-req-build-pma2oi4d/aten/src/ATen/native/cuda/Indexing.cu:702: indexSelectLargeIndex: block: [176,0,0], thread: [31,0,0] Assertion `srcIndex < srcSelectDimSize` failed. 
/tmp/pip-req-build-pma2oi4d/aten/src/ATen/native/cuda/Indexing.cu:702: indexSelectLargeIndex: block: [134,0,0], thread: [96,0,0] Assertion `srcIndex < srcSelectDimSize` failed. /tmp/pip-req-build-pma2oi4d/aten/src/ATen/native/cuda/Indexing.cu:702: indexSelectLargeIndex: block: [134,0,0], thread: [97,0,0] Assertion `srcIndex < srcSelectDimSize` failed. /tmp/pip-req-build-pma2oi4d/aten/src/ATen/native/cuda/Indexing.cu:702: indexSelectLargeIndex: block: [134,0,0], thread: [98,0,0] Assertion `srcIndex < srcSelectDimSize` failed. /tmp/pip-req-build-pma2oi4d/aten/src/ATen/native/cuda/Indexing.cu:702: indexSelectLargeIndex: block: [134,0,0], thread: [99,0,0] Assertion `srcIndex < srcSelectDimSize` failed. /tmp/pip-req-build-pma2oi4d/aten/src/ATen/native/cuda/Indexing.cu:702: indexSelectLargeIndex: block: [134,0,0], thread: [100,0,0] Assertion `srcIndex < srcSelectDimSize` failed. /tmp/pip-req-build-pma2oi4d/aten/src/ATen/native/cuda/Indexing.cu:702: indexSelectLargeIndex: block: [134,0,0], thread: [101,0,0] Assertion `srcIndex < srcSelectDimSize` failed. /tmp/pip-req-build-pma2oi4d/aten/src/ATen/native/cuda/Indexing.cu:702: indexSelectLargeIndex: block: [134,0,0], thread: [102,0,0] Assertion `srcIndex < srcSelectDimSize` failed. /tmp/pip-req-build-pma2oi4d/aten/src/ATen/native/cuda/Indexing.cu:702: indexSelectLargeIndex: block: [134,0,0], thread: [103,0,0] Assertion `srcIndex < srcSelectDimSize` failed. /tmp/pip-req-build-pma2oi4d/aten/src/ATen/native/cuda/Indexing.cu:702: indexSelectLargeIndex: block: [134,0,0], thread: [104,0,0] Assertion `srcIndex < srcSelectDimSize` failed. /tmp/pip-req-build-pma2oi4d/aten/src/ATen/native/cuda/Indexing.cu:702: indexSelectLargeIndex: block: [134,0,0], thread: [105,0,0] Assertion `srcIndex < srcSelectDimSize` failed. /tmp/pip-req-build-pma2oi4d/aten/src/ATen/native/cuda/Indexing.cu:702: indexSelectLargeIndex: block: [134,0,0], thread: [106,0,0] Assertion `srcIndex < srcSelectDimSize` failed. ``` ## Expected behavior The number of token should be automatically reduced by truncation.
11-13-2021 11:46:25
11-13-2021 11:46:25
You can toggle truncation on: ```diff from transformers import pipeline model='sshleifer/distill-pegasus-cnn-16-4' pipe = pipeline( 'summarization', model=model, device=0, max_length=1024, + truncation=True ) text = """ Znaczenie Państwowych Funduszy Majątkowych w Globalnym Zarządzaniu Nadmiernymi Rezerwami Walutowymi w W artykule przedstawione zostały motywy gromadzenia przez kraje rezerw walutowych oraz poziom tych rezerw w wybranych krajach w 2008 roku w odniesieniu do najczęściej występujących w literaturze poziomów referencyjnych. Przedstawione analizy dowodzą, że w grupie krajów, które zgromadziły ponad 60% ogólnoświatowych rezerw walutowych przekroczone zostały mierniki uznawane za optymalne, co dowodzi, że kraje te posiadają nadmierne rezerwy. Dotyczy to takich krajów jak: Chiny, Japonia, Rosja, Arabia Saudyjska, Hong Kong, Indie, Korea Południowa, Brazylia, Singapur oraz Tajlandia. W kolejnej części przedstawiona została krótka charakterystyka państwowych funduszy majątkowych, które są jednocześnie instytucjonalną innowacją na globalnych rynkach finansowych oraz alternatywnym narzędziem zarządzania nadmiernymi rezerwami walutowymi. W następnej części artykułu przybliżone zostały korzyści płynące dla gospodarki z tytułu posiadania tego typu podmiotów. Wymienić wśród nich należy m.in. możliwość inwestowania rezerw walutowych w szerszą grupę aktywów o wyższym ryzyku oraz wyższej stopie zwrotu niż ma to miejsce w przypadku tradycyjnego zarządzania rezerwami walutowymi prowadzonego przez krajowe władze monetarne. Podmioty te ułatwiają ponadto absorpcję napływającego do gospodarki strumienia kapitału bez wystąpienia takich negatywnych konsekwencji jak aprecjacja kursu walutowego, powstawanie baniek spekulacyjnych czy inflacja. Dzięki inwestowaniu w szeroką gamę aktywów na rynkach międzynarodowych państwowe fundusze majątkowe zmniejszają lub wręcz eliminują koszty alternatywne związane z utrzymywaniem rezerw. Fundusze te ułatwiają międzypokoleniowy transfer środków pochodzących z eksploatacji zasobów nieodnawialnych jak również mogą być wykorzystywane do wspierania gospodarki podczas kryzysów kiedy to jako inwestorzy ostatniej instancji zapewniają płynność zarówno sektora finansowego jak i pozostałych gałęzi gospodarki. Państwowe fundusze majątkowe postrzegane są jako narzędzie wspierające stabilność makroekonomiczną gospodarki oraz forma zabezpieczenia przyszłego dobrobytu ekonomicznego kraju. Podmioty te wnoszą ponadto istotny wkład w funkcjonowanie gospodarki światowej. Jako długoterminowi, pasywni inwestorzy, którzy nie stosują w swoich strategiach inwestycyjnych dźwigni, państwowe fundusze majątkowe wywierać mogą stabilizujący wpływ na międzynarodowe rynki finansowe zwiększając ich płynność oraz obniżając wahania rynkowe. Wnioski wyciągnięte w artykule wskazują, że w najbliższym latach możliwy jest dalszy rozwój rynku państwowych funduszy majątkowych i wzrost ich znaczenia na międzynarodowych rynkach finansowych. """ output = pipe(text, min_length=5, max_length=1024) ```<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread. Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored.
transformers
14,382
closed
[M2M100Tokenizer] fix _build_translation_inputs
# What does this PR do? The `_build_translation_inputs` method of `M2M100Tokenizer` hardcodes the `return_tensors` argument, but it is also passed via `extra_kwargs` in the translation pipeline, so the pipeline fails with the error ```TypeError: M2M100Tokenizer object got multiple values for keyword argument 'return_tensors'```. This PR adds a `return_tensors` parameter to `_build_translation_inputs`.
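For illustration, a sketch of the call path this unblocks (the checkpoint is just an example): the translation pipeline forwards `return_tensors` and the language arguments down to the tokenizer, which previously collided with the hardcoded value.

```python
from transformers import pipeline

# translation pipeline built on an M2M100 checkpoint (example model)
translator = pipeline("translation", model="facebook/m2m100_418M")

# src_lang/tgt_lang end up in `_build_translation_inputs`, together with
# the pipeline's own `return_tensors` keyword
print(translator("Hello, how are you?", src_lang="en", tgt_lang="fr"))
```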
11-13-2021 09:10:21
11-13-2021 09:10:21
transformers
14,381
closed
Which language is available for EncoderDecoderModel pre-trained model?
Hi, I have a question. From the example code of [EncoderDecoderModel](https://huggingface.co/transformers/model_doc/encoderdecoder.html#transformers.EncoderDecoderModel.forward), the pre-trained model 'bert-base-uncased' looks like it was trained on EN-FR sentence pairs. Is the pre-trained model for the decoder trained on French? How can I load a pre-trained model for an EN-EN pair? Am I confused?
```
from transformers import EncoderDecoderModel, BertTokenizer
import torch

tokenizer = BertTokenizer.from_pretrained('bert-base-uncased')
model = EncoderDecoderModel.from_encoder_decoder_pretrained('bert-base-uncased', 'bert-base-uncased')  # initialize Bert2Bert from pre-trained checkpoints

# training
model.config.decoder_start_token_id = tokenizer.cls_token_id
model.config.pad_token_id = tokenizer.pad_token_id
model.config.vocab_size = model.config.decoder.vocab_size

input_ids = tokenizer("Hello, my dog is cute", return_tensors="pt").input_ids
labels = tokenizer("Salut, mon chien est mignon", return_tensors="pt").input_ids
outputs = model(input_ids=input_ids, labels=input_ids)
loss, logits = outputs.loss, outputs.logits

# save and load from pretrained
model.save_pretrained("bert2bert")
model = EncoderDecoderModel.from_pretrained("bert2bert")

# generation
generated = model.generate(input_ids, decoder_start_token_id=model.config.decoder.pad_token_id)
```
11-13-2021 09:03:31
11-13-2021 09:03:31
Hi, Not sure what you mean: `bert-base-uncased` is the English pretrained BERT checkpoint. So in the code example above, we initialize the weights of the encoder and the decoder with the weights of BERT, and the weights of the cross-attention layers of the decoder are randomly initialized. One should fine-tune this warm-started model on an English downstream dataset, like summarization.<|||||>Thank you for the reply @NielsRogge. Then I want to ask: why is the decoder output French in the example code? `labels = tokenizer("Salut, mon chien est mignon", return_tensors="pt").input_ids`<|||||>Oh, I get your confusion. The `EncoderDecoderModel` framework is meant to be fine-tuned on text-to-text datasets, such as machine translation. Of course, if you want to fine-tune an `EncoderDecoderModel` to perform translation from English to French, it makes sense to warm-start the decoder with a pre-trained French checkpoint, e.g. `model = EncoderDecoderModel.from_encoder_decoder_pretrained('bert-base-uncased', 'camembert-base')`. I'll update the code example. Also note that you should then use CamemBERT's tokenizer to create the labels.<|||||>@hansd410 do you mind opening a PR to fix this in the docs?<|||||>@NielsRogge Not at all. Please do so.
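To make the last point concrete, here is a minimal sketch of pairing the decoder checkpoint with its own tokenizer for the labels (checkpoints as mentioned in the comments; the exact training setup is an assumption):

```python
from transformers import AutoTokenizer, EncoderDecoderModel

model = EncoderDecoderModel.from_encoder_decoder_pretrained("bert-base-uncased", "camembert-base")
encoder_tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
decoder_tokenizer = AutoTokenizer.from_pretrained("camembert-base")  # decoder gets its own tokenizer

# encoder inputs in English, labels tokenized with the decoder's (French) tokenizer
input_ids = encoder_tokenizer("Hello, my dog is cute", return_tensors="pt").input_ids
labels = decoder_tokenizer("Salut, mon chien est mignon", return_tensors="pt").input_ids

model.config.decoder_start_token_id = decoder_tokenizer.cls_token_id
model.config.pad_token_id = decoder_tokenizer.pad_token_id

outputs = model(input_ids=input_ids, labels=labels)
print(outputs.loss)
```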
transformers
14,380
closed
Using Bart with inputs_embeds to generate text without input_ids returns an error
This is my code
```
generated_ids = plm.generate(input_ids=None, inputs_embeds=student_embeddings, attention_mask=node_masks,
                             num_beams=4, max_length=config["max_seq_length"], early_stopping=True)
```
I want to generate text with `inputs_embeds`, but there is a problem in generation_utils.py. The `_prepare_decoder_input_ids_for_generation` function uses `input_ids`, but I passed `input_ids=None`.
```
def _prepare_decoder_input_ids_for_generation(
    self, input_ids: torch.LongTensor, decoder_start_token_id: int = None, bos_token_id: int = None
) -> torch.LongTensor:
    decoder_start_token_id = self._get_decoder_start_token_id(decoder_start_token_id, bos_token_id)
    decoder_input_ids = (
        torch.ones((input_ids.shape[0], 1), dtype=torch.long, device=input_ids.device) * decoder_start_token_id
    )
    return decoder_input_ids
```
```
# add encoder_outputs to model_kwargs
model_kwargs = self._prepare_encoder_decoder_kwargs_for_generation(input_ids, model_kwargs)

# set input_ids as decoder_input_ids
if "decoder_input_ids" in model_kwargs:
    input_ids = model_kwargs.pop("decoder_input_ids")
else:
    input_ids = self._prepare_decoder_input_ids_for_generation(
        input_ids, decoder_start_token_id=decoder_start_token_id, bos_token_id=bos_token_id
    )
```
So, what should I do?
11-13-2021 07:35:22
11-13-2021 07:35:22
@Shj451148969 hello, I also face the same problem with `T5ForConditionalGeneration`. I found that the error doesn't occur if I pass `decoder_input_ids` consisting of `pad_token_id` to the `generate` as start tokens. https://github.com/huggingface/transformers/issues/12218 <|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread. Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored.
transformers
14,379
closed
Tokenizers docs: Specify which class contains `__call__` method
Currently, the docs specify the following: > BatchEncoding holds the output of the tokenizer’s encoding methods (__call__, encode_plus and batch_encode_plus) and is derived from a Python dictionary. It's not clear what tokenizer class this is referring to. Moreover, the [main `tokenizer`](https://huggingface.co/transformers/main_classes/tokenizer.html) page does not have any documentation for `__call__`; instead it is found in `PreTrainedTokenizerBase`. The proposed change in this PR will make it clear where the user can find documentation about the `__call__` function, which is very widely used now.
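For context, a minimal example of the `__call__` in question (the checkpoint name is arbitrary); it returns the `BatchEncoding` that the quoted paragraph describes:

```python
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
batch = tokenizer("Hello world")   # PreTrainedTokenizerBase.__call__
print(type(batch).__name__)        # -> BatchEncoding
print(batch["input_ids"])
```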
11-13-2021 00:04:09
11-13-2021 00:04:09
@n1t0 This is one instance of `__call__`, but maybe it would be beneficial if all `__call__` in the docs links to [this docstring](https://huggingface.co/transformers/internal/tokenization_utils.html#transformers.tokenization_utils_base.PreTrainedTokenizerBase.__call__)? Let me know your thoughts!<|||||>Cool, I think this is a welcome change! cc @sgugger Could you run the style utilities to fix the code quality issues? You can do so by running this from the root of the repository: ``` pip install -e ".[quality]" make fixup ```<|||||>Hi @LysandreJik I'm having trouble with that command. Got the following issue: ``` (venv) xhlu@XHL-Desktop:~/dev/transformers$ make fixup No library .py files were modified python utils/custom_init_isort.py python utils/style_doc.py src/transformers docs/source --max_len 119 running deps_table_update updating src/transformers/dependency_versions_table.py python utils/check_copies.py python utils/check_table.py None of PyTorch, TensorFlow >= 2.0, or Flax have been found. Models won't be available and only tokenizers, configuration and file/data utilities can be used. python utils/check_dummies.py python utils/check_repo.py None of PyTorch, TensorFlow >= 2.0, or Flax have been found. Models won't be available and only tokenizers, configuration and file/data utilities can be used. None of PyTorch, TensorFlow >= 2.0, or Flax have been found. Models won't be available and only tokenizers, configuration and file/data utilities can be used. Checking all models are included. Checking all models are public. Checking all models are properly tested. Checking all objects are properly documented. Checking all models are in at least one auto class. utils/check_repo.py:400: UserWarning: Full quality checks require all backends to be installed (with `pip install -e .[dev]` in the Transformers repo, the following are missing: PyTorch, TensorFlow, Flax. While it's probably fine as long as you didn't make any change in one of those backends modeling files, you should probably execute the command above to be on the safe side. warnings.warn( python utils/check_inits.py python utils/tests_fetcher.py --sanity_check Traceback (most recent call last): File "utils/tests_fetcher.py", line 23, in <module> from git import Repo ModuleNotFoundError: No module named 'git' make: *** [Makefile:42: repo-consistency] Error 1 ``` Not sure what's causing. I do have `git` installed, but this seems like it's trying to import some `git` module<|||||>I believe the package to install is `gitpython`, we should add this to the setup <|||||>If you rebase on `master` and re-run the commands above it should work!<|||||>@LysandreJik thanks. I applied the `make fixup` and commited the change.<|||||>Thanks again for your PR!<|||||>Glad it was helpful :)
transformers
14,378
closed
Use cross_attention_hidden_size in Encoder-Decoder models
# What does this PR do?
- Add a projection layer (`enc_to_dec_proj`) between encoder and decoder models in composite models, incorporating the attribute `cross_attention_hidden_size`.
- Add some `pt/tf equivalence` and `pt/flax equivalence` tests in tf/flax composite model test scripts.
- Also make some logging and ValueError messages consistent across composite model scripts.
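As a rough illustration of the idea (not the library's exact code), the projection maps encoder states to the hidden size the decoder's cross-attention expects:

```python
import torch
import torch.nn as nn

class EncoderToDecoderProjection(nn.Module):
    """Sketch: project encoder states to the size the decoder's cross-attention expects."""

    def __init__(self, encoder_hidden_size: int, decoder_hidden_size: int):
        super().__init__()
        self.enc_to_dec_proj = (
            nn.Linear(encoder_hidden_size, decoder_hidden_size)
            if encoder_hidden_size != decoder_hidden_size
            else nn.Identity()
        )

    def forward(self, encoder_hidden_states: torch.Tensor) -> torch.Tensor:
        return self.enc_to_dec_proj(encoder_hidden_states)

# e.g. a (batch, seq_len, 1024) encoder output projected for a decoder with hidden size 768
proj = EncoderToDecoderProjection(1024, 768)
print(proj(torch.randn(2, 7, 1024)).shape)  # torch.Size([2, 7, 768])
```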
11-12-2021 10:50:17
11-12-2021 10:50:17
I ran slow tests for all the encoder-decoder models test scripts, and it is fine. (e.g. `RUN_SLOW=1 python -m pytest ...`) BTW, is there an easy way to run all cross tests in a test script, i.e. disabling `@is_pt_tf_cross_test` or `@is_pt_flax_cross_test`?<|||||>Hey @ydshieh, We need to slightly update this PR for the speech encoder decoder classes sadly so that the newly introduced variable `config.output_hidden_size` as shown here: https://github.com/huggingface/transformers/blob/cea17acd8cc5190684da944924116fe10742ad81/src/transformers/models/speech_encoder_decoder/modeling_speech_encoder_decoder.py#L227 is compatible with it. The other files can stay the same :-)<|||||>> Hey @ydshieh, > > We need to slightly update this PR for the speech encoder decoder classes sadly so that the newly introduced variable `config.output_hidden_size` as shown here: > > https://github.com/huggingface/transformers/blob/cea17acd8cc5190684da944924116fe10742ad81/src/transformers/models/speech_encoder_decoder/modeling_speech_encoder_decoder.py#L227 > is compatible with it. > > The other files can stay the same :-) No problem, @patrickvonplaten. But I have a slight doubt at this line: ``` self.enc_to_dec_proj = nn.Linear(self.encoder.config.hidden_size, self.decoder.config.hidden_size) ``` Should it be ``` self.enc_to_dec_proj = nn.Linear(self.encoder_output_dim, self.decoder.config.hidden_size) ``` if `config.output_hidden_size` is introduced in the config and used here? I didn't go through the speech model, but it looks more natural to do so. <|||||>I made the necessary updates where `config.output_hidden_size` is involved. I didn't change the line ``` self.enc_to_dec_proj = nn.Linear(self.encoder.config.hidden_size, self.decoder.config.hidden_size) ``` despite a slight doubt.<|||||>(Fixed) The failed TF/Torch test is due to #14016 being merged to master (and I rebased this PR on master), which is expected. I will take care of this issue.
transformers
14,377
closed
minor doc fix
# What does this PR do? Fix some docs
11-12-2021 10:02:40
11-12-2021 10:02:40
transformers
14,376
closed
Add support for WMT21 tokenizer in M2M100Tokenizer
# What does this PR do? The tokenizer for the WMT21 translation models is similar to `M2M100Tokenizer`, with the only difference being that it uses different language codes. This PR adds support for WMT21 tokenizers in `M2M100Tokenizer` by adding a `language_codes` attribute, which specifies which set of language codes to use.
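For illustration, a hypothetical usage sketch (the checkpoint name and the `language_codes` value are assumptions for this example):

```python
from transformers import M2M100Tokenizer

# a WMT21 checkpoint reusing M2M100Tokenizer with the WMT21 set of language codes
tokenizer = M2M100Tokenizer.from_pretrained(
    "facebook/wmt21-dense-24-wide-en-x", language_codes="wmt21"
)
tokenizer.src_lang = "en"
inputs = tokenizer("Hello world", return_tensors="pt")
print(inputs.input_ids)
```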
11-12-2021 09:53:18
11-12-2021 09:53:18
transformers
14,375
closed
Data type error while fine-tuning Deberta v3 Large using code provided
## Environment info <!-- You can run the command `transformers-cli env` and copy-and-paste its output below. Don't forget to fill out the missing fields in that output! --> - `transformers` version: 4.13.0.dev0 - Platform: Ubuntu 18.04 - Python version: Python 3.6.9 - PyTorch version (GPU?): 1.11.0.dev20211110+cu111 - Tensorflow version (GPU?): 2.6.2 - Using GPU in script?: Yes - Using distributed or parallel set-up in script?: No ### Who can help <!-- Your issue will be replied to more quickly if you can figure out the right person to tag with @ If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**. Please tag fewer than 3 people. Models: - ALBERT, BERT, XLM, DeBERTa, DeBERTa-v2, ELECTRA, MobileBert, SqueezeBert: @LysandreJik - encoder-decoder models (For example, BlenderBot, BART, Marian, Pegasus, T5, ByT5): @patrickvonplaten, @patil-suraj - Longformer, Reformer, TransfoXL, XLNet, FNet: @patrickvonplaten - FSMT: @stas00 - Funnel: @sgugger - GPT-2, GPT: @patrickvonplaten, @LysandreJik - RAG, DPR: @patrickvonplaten, @lhoestq - TensorFlow: @Rocketknight1 - JAX/Flax: @patil-suraj @patrickvonplaten - TAPAS, LayoutLM, LayoutLMv2, LUKE, ViT, BEiT, DEiT, DETR, CANINE: @NielsRogge - GPT-Neo, GPT-J, CLIP: @patil-suraj - Wav2Vec2, HuBERT, SpeechEncoderDecoder: @patrickvonplaten, @anton-l If the model isn't in the list, ping @LysandreJik who will redirect you to the correct contributor. Library: - Benchmarks: @patrickvonplaten - Deepspeed: @stas00 - Ray/raytune: @richardliaw, @amogkam - Text generation: @patrickvonplaten - Tokenizers: @LysandreJik - Trainer: @sgugger - Pipelines: @Narsil - Speech: @patrickvonplaten, @anton-l - Vision: @NielsRogge, @sgugger Documentation: @sgugger Model hub: - for issues with a model, report at https://discuss.huggingface.co/ and tag the model's creator. HF projects: - datasets: [different repo](https://github.com/huggingface/datasets) - rust tokenizers: [different repo](https://github.com/huggingface/tokenizers) Examples: - maintained examples (not research project or legacy): @sgugger, @patil-suraj For research projetcs, please ping the contributor directly. For example, on the following projects: - research_projects/bert-loses-patience: @JetRunner - research_projects/distillation: @VictorSanh --> @LysandreJik ## Information Model I am using (Bert, XLNet ...): microsoft/deberta-v3-large The problem arises when using: * [x] the official example scripts: (give details below): https://huggingface.co/microsoft/deberta-v3-large#fine-tuning-with-hf-transformers * [] my own modified scripts: (give details below) The tasks I am working on is: * [x] an official GLUE/SQUaD task: (give the name) mnli * [ ] my own task or dataset: (give details below) ## To reproduce Steps to reproduce the behavior: 1. go to transformers/examples/pytorch/text-classification/ 2. Run - `python3 run_glue.py --model_name_o r_path microsoft/deberta-v3-large --task_name mnli --do_train --do_eval --evaluation_strategy steps --max_seq_length 25 6 --warmup_steps 50 --learning_rate 6e-5 --num_train_epochs 3 --output_dir outputv3 --overwrite_output_dir --logging_ steps 10000 --logging_dir outputv3/` or run the script given in the model card - https://huggingface.co/microsoft/deberta-v3-large#fine-tuning-with-hf-transformers <!-- If you have code snippets, error messages, stack traces please provide them here as well. Important! Use code tags to correctly format your code. 
See https://help.github.com/en/github/writing-on-github/creating-and-highlighting-code-blocks#syntax-highlighting Do not use screenshots, as they are hard to read and (more importantly) don't allow others to copy-and-paste your code.--> ## Expected behavior Training of microsoft/deberta-v3-large on the mnli dataset. The error I am getting- Traceback (most recent call last): File "run_glue.py", line 568, in <module> main() File "run_glue.py", line 486, in main train_result = trainer.train(resume_from_checkpoint=checkpoint) File "/home/nikhil/.local/lib/python3.6/site-packages/transformers/trainer.py", line 1316, in train tr_loss_step = self.training_step(model, inputs) File "/home/nikhil/.local/lib/python3.6/site-packages/transformers/trainer.py", line 1867, in training_step loss.backward() File "/home/nikhil/.local/lib/python3.6/site-packages/torch/_tensor.py", line 352, in backward torch.autograd.backward(self, gradient, retain_graph, create_graph, inputs=inputs) File "/home/nikhil/.local/lib/python3.6/site-packages/torch/autograd/__init__.py", line 175, in backward allow_unreachable=True, accumulate_grad=True) # Calls into the C++ engine to run the backward pass File "/home/nikhil/.local/lib/python3.6/site-packages/torch/autograd/function.py", line 199, in apply return user_fn(self, *args) File "/home/nikhil/.local/lib/python3.6/site-packages/transformers/models/deberta_v2/modeling_deberta_v2.py", line 114, in backward inputGrad = _softmax_backward_data(grad_output, output, self.dim, output) TypeError: _softmax_backward_data(): argument 'input_dtype' (position 4) must be torch.dtype, not Tensor 0%| I am also getting the same error when trying to train Deberta-v2 <!-- A clear and concise description of what you would expect to happen. -->
11-12-2021 09:10:18
11-12-2021 09:10:18
The main issue is that run_glue.py is not usable on Deberta models, probably because they require torch arrays and not tensors, although I am not sure where it is getting tensors from.<|||||>cc'ing @BigBird01 <|||||>Hello @NIKHILDUGAR, thanks for opening an issue! I'm trying to get the same error as you but I'm failing at doing so: the training runs correctly. I wonder if it isn't because you're on the bleeding edge with a PyTorch dev version? We recommend using a PyTorch stable release as those are heavily tested in our CI. Do you get the same error when using PyTorch 1.10?<|||||>I can't test that at the moment as I am facing a few CUDA issues on my system, but I think you are right.<|||||>Okay, please let us know if we can help further.<|||||>Fourth argument of _softmax_backward_data is now torch.dtype. https://github.com/pytorch/pytorch/blob/a34d2849cd3d39c2ce912402bfd90aea75162d1f/tools/autograd/derivatives.yaml#L1852 Changing `inputGrad = _softmax_backward_data(grad_output, output, self.dim, output)` to `inputGrad = _softmax_backward_data(grad_output, output, self.dim, output.dtype)` seems to work. <|||||>> Fourth argument of _softmax_backward_data is now torch.dtype. > > https://github.com/pytorch/pytorch/blob/a34d2849cd3d39c2ce912402bfd90aea75162d1f/tools/autograd/derivatives.yaml#L1852 > > Changing `inputGrad = _softmax_backward_data(grad_output, output, self.dim, output)` to `inputGrad = _softmax_backward_data(grad_output, output, self.dim, output.dtype)` seems to work. This solved my problem.<|||||>I got the same error; how can I avoid it?<|||||>`python run_glue.py --model_name_or_path microsoft/deberta-v3-large --task_name mnli --train_file snli --do_train --do_eval --evaluation_strategy epoch --max_seq_length 256 --warmup_steps 50 --per_device_train_batch_size 8 --learning_rate 6e-6 --num_train_epochs 2 --output_dir tmp/mnlilearn --overwrite_output_dir --logging_steps 30000 --save_total_limit 3 --save_strategy epoch --logging_dir tmp/mnlilearn` This command worked for me. I would recommend trying it for your own dataset and models.<|||||>> I am using a Kaggle kernel, so do I need to run that command in the Kaggle kernel?<|||||>@arvind-nd Hi, you can change the code in modeling_deberta_v2.py: https://github.com/huggingface/transformers/blob/main/src/transformers/models/deberta_v2/modeling_deberta_v2.py#L120 by either overriding the `DebertaSelfAttention` module or copying the script and then changing it. This works for me.
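A version-guarded variant of the one-line change quoted above could look like this sketch (the 1.11 boundary and the helper name are assumptions, not the exact patch that was merged):

```python
# Sketch: pick the right fourth argument for `_softmax_backward_data` depending on
# the installed PyTorch version (newer versions expect a dtype instead of a tensor).
import torch
from packaging import version
from torch import _softmax_backward_data

def softmax_backward(grad_output, output, dim):
    if version.parse(torch.__version__) >= version.parse("1.11"):
        return _softmax_backward_data(grad_output, output, dim, output.dtype)
    return _softmax_backward_data(grad_output, output, dim, output)
```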
transformers
14,374
closed
`eos_mask` is possibly supposed to be taken from `decoder_input_ids`
I'm trying to use `BartForSequenceClassification`, and I keep getting a shape mismatch error: <img width="1020" alt="Screen Shot 2021-11-11 at 3 19 39 PM" src="https://user-images.githubusercontent.com/33379057/141363523-0a919bb1-3935-45b5-8f45-5ab420751ba4.png"> I think the reason for the error is that the `eos_mask` is [errantly constructed](https://github.com/huggingface/transformers/blob/v4.10.0/src/transformers/models/bart/modeling_bart.py#L1452) using the `input_ids`, while it should have been constructed with the `decoder_input_ids`, whose shape matches the hidden states from the decoder. I think the solution would simply be to change that line to `eos_mask = decoder_input_ids.eq(self.config.eos_token_id)`.
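To make the shape argument concrete, a small sketch with made-up sizes (the tensors below are illustrative, not taken from the screenshot):

```python
import torch

eos_token_id = 2
input_ids = torch.tensor([[0, 5, 6, 2]])                  # encoder input, length 4
decoder_input_ids = torch.tensor([[0, 7, 8, 9, 10, 2]])   # decoder input, length 6
hidden_states = torch.randn(1, 6, 16)                     # decoder output follows the decoder length

eos_mask = input_ids.eq(eos_token_id)            # shape (1, 4): cannot index (1, 6, 16)
fixed_mask = decoder_input_ids.eq(eos_token_id)  # shape (1, 6): matches hidden_states
print(eos_mask.shape, fixed_mask.shape, hidden_states.shape)
```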
11-11-2021 20:21:46
11-11-2021 20:21:46
cc @patrickvonplaten @patil-suraj <|||||>I think another problem if using finding eos in decoder_input_ids is like this: When eos_token_id is 2: decoder_input_ids = tensor([ 0, 7, 6, 13, 4, 7, 13, 14, 21, 7, 13, 5, 14, 20]) labels = tensor([ 7, 6, 13, 4, 7, 13, 14, 21, 7, 13, 5, 14, 20, 2]) Eos token could be removed after shifting, ans could not be found in decoder_input_ids.<|||||>Hey @JamesDeAntonis, Could you provide a code snippet that reproduces the error? Thank you! :-)<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread. Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored.<|||||>Hi, revisiting this. Here's a snippet: ```python from transformers import AutoTokenizer, BartForSequenceClassification model_name = "facebook/bart-large" tokenizer = AutoTokenizer.from_pretrained(model_name) model = BartForSequenceClassification.from_pretrained(model_name) inputs = tokenizer(["These are the encoder inputs"], return_tensors="pt") inputs["decoder_input_ids"] = tokenizer( ["These are the decoder inputs which have a difference shape from the encoder inputs"], return_tensors="pt" ).input_ids print(f"Notice how {inputs['input_ids'].shape=} and {inputs['decoder_input_ids'].shape=}") model(**inputs) ``` This gives ``` Notice how inputs['input_ids'].shape=torch.Size([1, 8]) and inputs['decoder_input_ids'].shape=torch.Size([1, 18]) Traceback (most recent call last): File "/workspace/cortx-models/src/cortxmodels/metrics/fluency_metric/sentence/scripts/tmp.py", line 16, in <module> model(**inputs) File "/usr/local/lib/python3.9/dist-packages/torch/nn/modules/module.py", line 1190, in _call_impl return forward_call(*input, **kwargs) File "/home/jamie/.local/lib/python3.9/site-packages/transformers/models/bart/modeling_bart.py", line 1518, in forward sentence_representation = hidden_states[eos_mask, :].view(hidden_states.size(0), -1, hidden_states.size(-1))[ IndexError: The shape of the mask [1, 8] at index 1 does not match the shape of the indexed tensor [1, 18, 1024] at index 1 ``` Both intuitively and based on this error, doesn't it make sense that [this line](https://github.com/huggingface/transformers/blob/v4.21-release/src/transformers/models/bart/modeling_bart.py#L1514) instead be ```python eos_mask = decoder_input_ids.eq(self.config.eos_token_id) ``` Thanks!<|||||>cc @ArthurZucker <|||||>Hey! Thanks for providing a reproducing script 😉 Is there a reason why you are feeding the model with `decoder_input_ids`? My question is related to the fact that for sequence classification, I don't really know why you should need these? <|||||>Thanks for the reply! The classifier is supposed to measure generation quality of a separate encoder-decoder generator. I was planning to leverage the pretraining of the generator by making the inputs of each model similar and then fine-tune off the generator model. Hence, decoder input is the generation to evaluate, while encoder input is the context. Do you recommend a different strategy?<|||||>I saw [this](https://github.com/osainz59/t5-encoder) repo, inspired by [this paper](https://arxiv.org/abs/2110.08426), and have been experimenting with it as well. On the one hand, it seems to give good results without the decoder. On the other, the formulation wouldn't be the same so the pre-training idea wouldn't apply as elegantly.
transformers
14,373
closed
[Wav2Vec2 Example] Improve fine-tuning script
# What does this PR do? <!-- Congratulations! You've made it this far! You're not quite done yet though. Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution. Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change. Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost. --> <!-- Remove if not applicable --> Add more training parameters to specify and allow to display both wer and cer. ## Before submitting - [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case). - [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section? - [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link to it if that's the case. - [ ] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/master/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/master/docs#writing-source-documentation). - [ ] Did you write any new necessary tests? ## Who can review? Anyone in the community is free to review the PR once the tests have passed. Feel free to tag members/contributors who may be interested in your PR. <!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @ If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**. Please tag fewer than 3 people. Models: - albert, bert, xlm: @LysandreJik - blenderbot, bart, marian, pegasus, encoderdecoder, t5: @patrickvonplaten, @patil-suraj - longformer, reformer, transfoxl, xlnet: @patrickvonplaten - fsmt: @stas00 - funnel: @sgugger - gpt2: @patrickvonplaten, @LysandreJik - rag: @patrickvonplaten, @lhoestq - tensorflow: @LysandreJik Library: - benchmarks: @patrickvonplaten - deepspeed: @stas00 - ray/raytune: @richardliaw, @amogkam - text generation: @patrickvonplaten - tokenizers: @n1t0, @LysandreJik - trainer: @sgugger - pipelines: @LysandreJik Documentation: @sgugger HF projects: - datasets: [different repo](https://github.com/huggingface/datasets) - rust tokenizers: [different repo](https://github.com/huggingface/tokenizers) Examples: - maintained examples (not research project or legacy): @sgugger, @patil-suraj - research_projects/bert-loses-patience: @JetRunner - research_projects/distillation: @VictorSanh -->
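As a sketch of what reporting both WER and CER can look like in a `compute_metrics` function (the metric loading and the explicit `processor` argument are assumptions for illustration; the actual script closes over the processor and may wire this differently):

```python
import numpy as np
from datasets import load_metric

wer_metric = load_metric("wer")
cer_metric = load_metric("cer")

def compute_metrics(pred, processor):
    # greedy CTC decoding of the logits, then compare against the reference transcriptions
    pred_ids = np.argmax(pred.predictions, axis=-1)
    pred.label_ids[pred.label_ids == -100] = processor.tokenizer.pad_token_id

    pred_str = processor.batch_decode(pred_ids)
    label_str = processor.batch_decode(pred.label_ids, group_tokens=False)

    return {
        "wer": wer_metric.compute(predictions=pred_str, references=label_str),
        "cer": cer_metric.compute(predictions=pred_str, references=label_str),
    }
```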
11-11-2021 18:47:46
11-11-2021 18:47:46
transformers
14,372
closed
Fixing requirements for TF LM models and use correct model mappings
null
11-11-2021 14:46:44
11-11-2021 14:46:44
transformers
14,371
closed
run_translation.py English-German translation failed. RuntimeError: CUDA error: device-side assert triggered
## Environment info <!-- You can run the command `transformers-cli env` and copy-and-paste its output below. Don't forget to fill out the missing fields in that output! --> - `transformers` version: '4.13.0.dev0' - Platform: Linux - Python version: 3.6 - PyTorch version (GPU): '1.8.2' (GPU) - Using GPU in script: Yes - Using distributed or parallel set-up in script?: No ### Who can help @patil-suraj <!-- Your issue will be replied to more quickly if you can figure out the right person to tag with @ If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**. Please tag fewer than 3 people. @patil-suraj - ALBERT, BERT, XLM, DeBERTa, DeBERTa-v2, ELECTRA, MobileBert, SqueezeBert: @LysandreJik - encoder-decoder models (For example, BlenderBot, BART, Marian, Pegasus, T5, ByT5): @patrickvonplaten, @patil-suraj - Longformer, Reformer, TransfoXL, XLNet, FNet: @patrickvonplaten - FSMT: @stas00 - Funnel: @sgugger - GPT-2, GPT: @patrickvonplaten, @LysandreJik - RAG, DPR: @patrickvonplaten, @lhoestq - TensorFlow: @Rocketknight1 - JAX/Flax: @patil-suraj @patrickvonplaten - TAPAS, LayoutLM, LayoutLMv2, LUKE, ViT, BEiT, DEiT, DETR, CANINE: @NielsRogge - GPT-Neo, GPT-J, CLIP: @patil-suraj - Wav2Vec2, HuBERT, SpeechEncoderDecoder: @patrickvonplaten, @anton-l If the model isn't in the list, ping @LysandreJik who will redirect you to the correct contributor. Library: - Benchmarks: @patrickvonplaten - Deepspeed: @stas00 - Ray/raytune: @richardliaw, @amogkam - Text generation: @patrickvonplaten - Tokenizers: @LysandreJik - Trainer: @sgugger - Pipelines: @Narsil - Speech: @patrickvonplaten, @anton-l - Vision: @NielsRogge, @sgugger Documentation: @sgugger Model hub: - for issues with a model, report at https://discuss.huggingface.co/ and tag the model's creator. HF projects: - datasets: [different repo](https://github.com/huggingface/datasets) - rust tokenizers: [different repo](https://github.com/huggingface/tokenizers) Examples: - maintained examples (not research project or legacy): @sgugger, @patil-suraj For research projetcs, please ping the contributor directly. For example, on the following projects: - research_projects/bert-loses-patience: @JetRunner - research_projects/distillation: @VictorSanh --> ## Information Model I am using: BERT The problem arises when using: * [*] my own modified scripts: (give details below) The official script is to run **ro-en translation** (This works well): I just changed the source language to English and the target language to German (**en-de translation**). I got the following error: ``` [INFO|trainer.py:1196] 2021-11-11 02:51:42,765 >> ***** Running training ***** [INFO|trainer.py:1197] 2021-11-11 02:51:42,765 >> Num examples = 4548885 [INFO|trainer.py:1198] 2021-11-11 02:51:42,765 >> Num Epochs = 3 [INFO|trainer.py:1199] 2021-11-11 02:51:42,765 >> Instantaneous batch size per device = 4 [INFO|trainer.py:1200] 2021-11-11 02:51:42,765 >> Total train batch size (w. parallel, distributed & accumulation) = 4 [INFO|trainer.py:1201] 2021-11-11 02:51:42,765 >> Gradient Accumulation steps = 1 [INFO|trainer.py:1202] 2021-11-11 02:51:42,765 >> Total optimization steps = 3411666 0%| | 0/3411666 [00:00<?, ?it/s] 0%| | 1/3411666 [00:00<577:21:16, 1.64it/s] 0%| | 3/3411666 [00:00<204:19:40, 4.64it/s] 0%| | 5/3411666 [00:00<139:07:25, 6.81it/s] 0%| | 7/3411666 [00:01<113:23:47, 8.36it/s] 0%| | 9/3411666 [00:01<99:22:10, 9.54it/s] 0%| | 11/3411666 [00:01<91:45:24, 10.33it/s] 0%| | 13/3411666 [00:01<86:15:44, 10.99it/s] ... 
0%| | 993/3411666 [01:15<94:20:15, 10.04it/s] 0%| | 995/3411666 [01:15<94:43:16, 10.00it/s] 0%| | 997/3411666 [01:15<94:12:00, 10.06it/s]/opt/conda/conda-bld/pytorch_1627336334951/work/aten/src/ATen/native/cuda/Indexing.cu:660: indexSelectLargeIndex: block: [446,0,0], thread: [32,0,0] Assertion `srcIndex < srcSelectDimSize` failed. /opt/conda/conda-bld/pytorch_1627336334951/work/aten/src/ATen/native/cuda/Indexing.cu:660: indexSelectLargeIndex: block: [446,0,0], thread: [33,0,0] Assertion `srcIndex < srcSelectDimSize` failed. /opt/conda/conda-bld/pytorch_1627336334951/work/aten/src/ATen/native/cuda/Indexing.cu:660: indexSelectLargeIndex: block: [446,0,0], thread: [34,0,0] Assertion `srcIndex < srcSelectDimSize` failed. /opt/conda/conda-bld/pytorch_1627336334951/work/aten/src/ATen/native/cuda/Indexing.cu:660: indexSelectLargeIndex: block: [446,0,0], thread: [35,0,0] Assertion `srcIndex < srcSelectDimSize` failed. /opt/conda/conda-bld/pytorch_1627336334951/work/aten/src/ATen/native/cuda/Indexing.cu:660: indexSelectLargeIndex: block: [446,0,0], thread: [36,0,0] Assertion `srcIndex < srcSelectDimSize` failed. /opt/conda/conda-bld/pytorch_1627336334951/work/aten/src/ATen/native/cuda/Indexing.cu:660: indexSelectLargeIndex: block: [446,0,0], thread: [37,0,0] Assertion `srcIndex < srcSelectDimSize` failed. /opt/conda/conda-bld/pytorch_1627336334951/work/aten/src/ATen/native/cuda/Indexing.cu:660: indexSelectLargeIndex: block: [446,0,0], thread: [38,0,0] Assertion `srcIndex < srcSelectDimSize` failed. / ... /opt/conda/conda-bld/pytorch_1627336334951/work/aten/src/ATen/native/cuda/Indexing.cu:660: indexSelectLargeIndex: block: [445,0,0], thread: [127,0,0] Assertion `srcIndex < srcSelectDimSize` failed. {'loss': 2.1577, 'learning_rate': 4.999267220179232e-05, 'epoch': 0.0} Traceback (most recent call last): File "run_translation.py", line 622, in <module> main() File "run_translation.py", line 539, in main train_result = trainer.train(resume_from_checkpoint=checkpoint) File "/home/guest/anaconda3/envs/huggingface_NMT/lib/python3.6/site-packages/transformers/trainer.py", line 1316, in train tr_loss_step = self.training_step(model, inputs) File "/home/guest/anaconda3/envs/huggingface_NMT/lib/python3.6/site-packages/transformers/trainer.py", line 1849, in training_step loss = self.compute_loss(model, inputs) File "/home/guest/anaconda3/envs/huggingface_NMT/lib/python3.6/site-packages/transformers/trainer.py", line 1881, in compute_loss outputs = model(**inputs) File "/home/guest/anaconda3/envs/huggingface_NMT/lib/python3.6/site-packages/torch/nn/modules/module.py", line 889, in _call_impl result = self.forward(*input, **kwargs) File "/home/guest/anaconda3/envs/huggingface_NMT/lib/python3.6/site-packages/transformers/models/marian/modeling_marian.py", line 1305, in forward return_dict=return_dict, File "/home/guest/anaconda3/envs/huggingface_NMT/lib/python3.6/site-packages/torch/nn/modules/module.py", line 889, in _call_impl result = self.forward(*input, **kwargs) File "/home/guest/anaconda3/envs/huggingface_NMT/lib/python3.6/site-packages/transformers/models/marian/modeling_marian.py", line 1181, in forward return_dict=return_dict, File "/home/guest/anaconda3/envs/huggingface_NMT/lib/python3.6/site-packages/torch/nn/modules/module.py", line 889, in _call_impl result = self.forward(*input, **kwargs) File "/home/guest/anaconda3/envs/huggingface_NMT/lib/python3.6/site-packages/transformers/models/marian/modeling_marian.py", line 968, in forward attention_mask, input_shape, inputs_embeds, 
past_key_values_length File "/home/guest/anaconda3/envs/huggingface_NMT/lib/python3.6/site-packages/transformers/models/marian/modeling_marian.py", line 849, in _prepare_decoder_attention_mask ).to(self.device) RuntimeError: CUDA error: device-side assert triggered 0%| | 998/3411666 [01:16<72:19:19, 13.10it/s] ``` The tasks I am working on is: * [*] an official GLUE/SQUaD task: English to German translation ## To reproduce Steps to reproduce the behavior: 1. use the latest transformers code 2. run the following command: ``` CUDA_VISIBLE_DEVICES=3 python run_translation.py \ --model_name_or_path Helsinki-NLP/opus-mt-en-de \ --do_train \ --source_lang en \ --target_lang de \ --dataset_name wmt16 \ --dataset_config_name de-en \ --output_dir output_NMT/tst-translation-en-de \ --per_device_train_batch_size=4 \ --per_device_eval_batch_size=4 \ --overwrite_output_dir \ --predict_with_generate ``` <!-- If you have code snippets, error messages, stack traces please provide them here as well. Important! Use code tags to correctly format your code. See https://help.github.com/en/github/writing-on-github/creating-and-highlighting-code-blocks#syntax-highlighting Do not use screenshots, as they are hard to read and (more importantly) don't allow others to copy-and-paste your code.--> ## Expected behavior <!-- A clear and concise description of what you would expect to happen. -->
11-11-2021 11:19:34
11-11-2021 11:19:34
Can you run the code on CPU, please? This will give a more informative error message.<|||||>> Can you run the code on CPU, please? > > This will give a more informative error message. on CPU, it works. ``` 11/11/2021 08:24:42 - WARNING - __main__ - Process rank: -1, device: cpu, n_gpu: 0distributed training: False, 16-bits training: False 11/11/2021 08:24:42 - INFO - __main__ - Training/evaluation parameters Seq2SeqTrainingArguments( _n_gpu=0, adafactor=False, adam_beta1=0.9, ... 0%| | 500/3411666 [03:56<464:46:33, 2.04it/s][INFO|trainer.py:1995] 2021-11-11 08:28:46,449 >> Saving model checkpoint to output_NMT/tst-translation-en-de/checkpoint-500 [INFO|configuration_utils.py:417] 2021-11-11 08:28:46,454 >> Configuration saved in output_NMT/tst-translation-en-de/checkpoint-500/config.json [INFO|modeling_utils.py:1060] 2021-11-11 08:28:47,440 >> Model weights saved in output_NMT/tst-translation-en-de/checkpoint-500/pytorch_model.bin [INFO|tokenization_utils_base.py:2037] 2021-11-11 08:28:47,442 >> tokenizer config file saved in output_NMT/tst-translation-en-de/checkpoint-500/tokenizer_config.json [INFO|tokenization_utils_base.py:2043] 2021-11-11 08:28:47,442 >> Special tokens file saved in output_NMT/tst-translation-en-de/checkpoint-500/special_tokens_map.json 0%| | 501/3411666 [04:00<1332:20:21, 1.41s/it] 0%| | 502/3411666 [04:00<1064:13:24, 1.12s/it] 0%| | 503/3411666 [04:01<907:08:17, 1.04it/s] 0%| | 504/3411666 [04:01<784:55:18, 1.21it/s] ``` Any suggestion for GPU modification or test? It's really slow on CPU. Thanks.<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread. Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored.
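As a side note for readers hitting the same assert: `indexSelectLargeIndex ... srcIndex < srcSelectDimSize` is the CUDA error raised when an embedding lookup receives an index outside the embedding table, i.e. a token id at or above the vocabulary size, or a sequence longer than `max_position_embeddings`. A quick sanity check along the following lines (a sketch only — the checkpoint and dataset match this issue, but the variable names and dataset slice are illustrative) can narrow that down without a slow CPU run:
```python
# Check that no token id or sequence length exceeds what the Marian
# checkpoint can embed -- a common cause of this device-side assert.
from datasets import load_dataset
from transformers import AutoConfig, AutoTokenizer

model_name = "Helsinki-NLP/opus-mt-en-de"
tokenizer = AutoTokenizer.from_pretrained(model_name)
config = AutoConfig.from_pretrained(model_name)

dataset = load_dataset("wmt16", "de-en", split="train[:1000]")
for example in dataset:
    ids = tokenizer(example["translation"]["en"]).input_ids
    assert max(ids) < config.vocab_size, f"token id {max(ids)} >= vocab size {config.vocab_size}"
    assert len(ids) <= config.max_position_embeddings, (
        f"length {len(ids)} > max_position_embeddings {config.max_position_embeddings}"
    )
```
If the length check is what fails, capping `--max_source_length` and `--max_target_length` at `config.max_position_embeddings` in the `run_translation.py` command would be the first thing to try.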
transformers
14,370
closed
[flax generate] allow passing params to encode
# What does this PR do?

Allows passing the user-provided `params` to the `encode` method for seq-2-seq generation. This is required to be able to `pjit` the `generate` method, as we need to explicitly pass the sharded parameters.
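For context, a minimal sketch of the resulting usage (the T5 checkpoint and the way the parameters are obtained are illustrative assumptions, not taken from this PR; in a real `pjit` setup `params` would be the explicitly sharded parameter tree):
```python
from transformers import AutoTokenizer, FlaxT5ForConditionalGeneration

model_name = "t5-small"  # illustrative checkpoint
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = FlaxT5ForConditionalGeneration.from_pretrained(model_name)

inputs = tokenizer("translate English to German: Hello", return_tensors="np")
params = model.params  # stand-in for sharded params in a pjit setup

# With this change, the explicitly passed params are also used by the
# encoder call inside generate, not only by the decoding loop.
output_ids = model.generate(inputs.input_ids, params=params).sequences
print(tokenizer.decode(output_ids[0], skip_special_tokens=True))
```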
11-11-2021 10:03:25
11-11-2021 10:03:25
transformers
14,369
closed
fix loading flax bf16 weights in pt
# What does this PR do?

#13098 now enables saving Flax weights in bf16, but converting Flax bf16 weights to PyTorch fails because `torch.from_numpy` cannot handle `bfloat16`, and `bfloat16` is also not fully supported by PyTorch. This PR fixes this by casting Flax `bf16` weights to `fp32` when converting them to PyTorch.
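A rough illustration of the failure mode and of the cast described above (the array shape is made up for the example):
```python
import jax.numpy as jnp
import numpy as np
import torch

flax_weight = jnp.ones((2, 3), dtype=jnp.bfloat16)

# torch.from_numpy(np.asarray(flax_weight)) fails here, because NumPy has no
# native bfloat16 dtype that torch.from_numpy understands. Casting to fp32
# on the Flax side first makes the conversion go through.
pt_weight = torch.from_numpy(np.asarray(flax_weight.astype(jnp.float32)))
print(pt_weight.dtype)  # torch.float32
```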
11-11-2021 09:54:58
11-11-2021 09:54:58
transformers
14,368
open
Export LayoutLMv2 to onnx
I am trying to export LayoutLMv2 model to onnx but there is no support for that available in transformers library. I have tried to follow the method available for layoutLM but that is not working. Here is config class for LayoutLMv2 ``` class LayoutLMv2OnnxConfig(OnnxConfig): def __init__( self, config: PretrainedConfig, task: str = "default", patching_specs: List[PatchingSpec] = None, ): super().__init__(config, task=task, patching_specs=patching_specs) self.max_2d_positions = config.max_2d_position_embeddings - 1 @property def inputs(self) -> Mapping[str, Mapping[int, str]]: return OrderedDict( [ ("input_ids", {0: "batch", 1: "sequence"}), ("bbox", {0: "batch", 1: "sequence"}), ("image", {0: "batch", 1: "sequence"}), ("attention_mask", {0: "batch", 1: "sequence"}), ("token_type_ids", {0: "batch", 1: "sequence"}), ] ) def generate_dummy_inputs( self, tokenizer: PreTrainedTokenizer, batch_size: int = -1, seq_length: int = -1, is_pair: bool = False, framework: Optional[TensorType] = None, ) -> Mapping[str, Any]: """ Generate inputs to provide to the ONNX exporter for the specific framework Args: tokenizer: The tokenizer associated with this model configuration batch_size: The batch size (int) to export the model for (-1 means dynamic axis) seq_length: The sequence length (int) to export the model for (-1 means dynamic axis) is_pair: Indicate if the input is a pair (sentence 1, sentence 2) framework: The framework (optional) the tokenizer will generate tensor for Returns: Mapping[str, Tensor] holding the kwargs to provide to the model's forward function """ input_dict = super().generate_dummy_inputs(tokenizer, batch_size, seq_length, is_pair, framework) # Generate a dummy bbox box = [48, 84, 73, 128] if not framework == TensorType.PYTORCH: raise NotImplementedError("Exporting LayoutLM to ONNX is currently only supported for PyTorch.") if not is_torch_available(): raise ValueError("Cannot generate dummy inputs without PyTorch installed.") import torch batch_size, seq_length = input_dict["input_ids"].shape input_dict["bbox"] = torch.tensor([*[box] * seq_length]).tile(batch_size, 1, 1) return input_dict onnx_config = LayoutLMv2OnnxConfig(model.config) export(tokenizer=tokenizer, model=model, config=onnx_config, opset=12, output=Path('onnx/layoutlmv2.onnx')) ``` Running the export line is raising this error, ``` --------------------------------------------------------------------------- AssertionError Traceback (most recent call last) <ipython-input-25-99a1f167e396> in <module>() ----> 1 export(tokenizer=tokenizer, model=model, config=onnx_config, opset=12, output=Path('onnx/layoutlmv2.onnx')) 3 frames /usr/local/lib/python3.7/dist-packages/transformers/models/layoutlmv2/tokenization_layoutlmv2.py in __call__(self, text, text_pair, boxes, word_labels, add_special_tokens, padding, truncation, max_length, stride, pad_to_multiple_of, return_tensors, return_token_type_ids, return_attention_mask, return_overflowing_tokens, return_special_tokens_mask, return_offsets_mapping, return_length, verbose, **kwargs) 449 450 words = text if text_pair is None else text_pair --> 451 assert boxes is not None, "You must provide corresponding bounding boxes" 452 if is_batched: 453 assert len(words) == len(boxes), "You must provide words and boxes for an equal amount of examples" AssertionError: You must provide corresponding bounding boxes ```
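For what it's worth, the assertion is raised because the base `generate_dummy_inputs` calls the tokenizer without the `boxes` argument that `LayoutLMv2Tokenizer` requires. A minimal workaround — a sketch only, with arbitrary placeholder words, boxes, and sizes — is to build the dummy encoding directly instead of delegating to `super()`:
```python
# Sketch of an override that calls the LayoutLMv2 tokenizer directly, so the
# required bounding boxes can be supplied alongside the dummy words.
def generate_dummy_inputs(self, tokenizer, batch_size=2, seq_length=8, is_pair=False, framework=None):
    words = ["hello"] * seq_length            # placeholder words
    boxes = [[48, 84, 73, 128]] * seq_length  # one dummy box per word
    encoding = tokenizer(
        [words] * batch_size,
        boxes=[boxes] * batch_size,
        return_tensors="pt",
    )
    return dict(encoding)
```
Note that the tokenizer does not produce the `image` input the model also expects; a dummy image tensor (e.g. zeros of shape `(batch_size, 3, 224, 224)`) still has to be added to the returned dict, as the snippets later in this thread do.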
11-11-2021 08:54:39
11-11-2021 08:54:39
I believe @NielsRogge can help out here<|||||>I'm not an ONNX expert, however. Pinging @michaelbenayoun for this.<|||||>@michaelbenayoun can you please help here. <|||||>I think it might have to do with the fact that your dummy inputs don't have the image field, so the inputs might be off? <|||||>It seems to come from the `LayoutLMv2Tokenizer` which takes boxes (bbox) as inputs. Here you are calling `super().generate_dummy_inputs` which [uses the tokenizer to create dummy inputs](https://github.com/huggingface/transformers/blob/master/src/transformers/onnx/config.py#L220), but this does not provide the boxes to the tokenizer, hence the error. There are two ways of solving this issue: 1. Make this supported in the base class, that could somehow take other keyword arguments for these kind of cases. 2. Not using the super method, and implementing everything in the LayoutLMv2 OnnxConfig<|||||>Hi @michaelbenayoun , I have made the recommended changes in the LayoutLMv2 config file. ``` # coding=utf-8 # Copyright Microsoft Research and The HuggingFace Inc. team. All rights reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. """ LayoutLMv2 model configuration """ from ...configuration_utils import PretrainedConfig from ...file_utils import is_detectron2_available from ...utils import logging from ...onnx import OnnxConfig, PatchingSpec from typing import Any, List, Mapping, Optional from transformers import TensorType from transformers import LayoutLMv2Processor from datasets import load_dataset from PIL import Image from ... import is_torch_available from collections import OrderedDict logger = logging.get_logger(__name__) LAYOUTLMV2_PRETRAINED_CONFIG_ARCHIVE_MAP = { "layoutlmv2-base-uncased": "https://huggingface.co/microsoft/layoutlmv2-base-uncased/resolve/main/config.json", "layoutlmv2-large-uncased": "https://huggingface.co/microsoft/layoutlmv2-large-uncased/resolve/main/config.json", # See all LayoutLMv2 models at https://huggingface.co/models?filter=layoutlmv2 } # soft dependency if is_detectron2_available(): import detectron2 class LayoutLMv2Config(PretrainedConfig): r""" This is the configuration class to store the configuration of a :class:`~transformers.LayoutLMv2Model`. It is used to instantiate an LayoutLMv2 model according to the specified arguments, defining the model architecture. Instantiating a configuration with the defaults will yield a similar configuration to that of the LayoutLMv2 `microsoft/layoutlmv2-base-uncased <https://huggingface.co/microsoft/layoutlmv2-base-uncased>`__ architecture. Configuration objects inherit from :class:`~transformers.PretrainedConfig` and can be used to control the model outputs. Read the documentation from :class:`~transformers.PretrainedConfig` for more information. Args: vocab_size (:obj:`int`, `optional`, defaults to 30522): Vocabulary size of the LayoutLMv2 model. 
Defines the number of different tokens that can be represented by the :obj:`inputs_ids` passed when calling :class:`~transformers.LayoutLMv2Model` or :class:`~transformers.TFLayoutLMv2Model`. hidden_size (:obj:`int`, `optional`, defaults to 768): Dimension of the encoder layers and the pooler layer. num_hidden_layers (:obj:`int`, `optional`, defaults to 12): Number of hidden layers in the Transformer encoder. num_attention_heads (:obj:`int`, `optional`, defaults to 12): Number of attention heads for each attention layer in the Transformer encoder. intermediate_size (:obj:`int`, `optional`, defaults to 3072): Dimension of the "intermediate" (i.e., feed-forward) layer in the Transformer encoder. hidden_act (:obj:`str` or :obj:`function`, `optional`, defaults to :obj:`"gelu"`): The non-linear activation function (function or string) in the encoder and pooler. If string, :obj:`"gelu"`, :obj:`"relu"`, :obj:`"selu"` and :obj:`"gelu_new"` are supported. hidden_dropout_prob (:obj:`float`, `optional`, defaults to 0.1): The dropout probabilitiy for all fully connected layers in the embeddings, encoder, and pooler. attention_probs_dropout_prob (:obj:`float`, `optional`, defaults to 0.1): The dropout ratio for the attention probabilities. max_position_embeddings (:obj:`int`, `optional`, defaults to 512): The maximum sequence length that this model might ever be used with. Typically set this to something large just in case (e.g., 512 or 1024 or 2048). type_vocab_size (:obj:`int`, `optional`, defaults to 2): The vocabulary size of the :obj:`token_type_ids` passed when calling :class:`~transformers.LayoutLMv2Model` or :class:`~transformers.TFLayoutLMv2Model`. initializer_range (:obj:`float`, `optional`, defaults to 0.02): The standard deviation of the truncated_normal_initializer for initializing all weight matrices. layer_norm_eps (:obj:`float`, `optional`, defaults to 1e-12): The epsilon used by the layer normalization layers. max_2d_position_embeddings (:obj:`int`, `optional`, defaults to 1024): The maximum value that the 2D position embedding might ever be used with. Typically set this to something large just in case (e.g., 1024). max_rel_pos (:obj:`int`, `optional`, defaults to 128): The maximum number of relative positions to be used in the self-attention mechanism. rel_pos_bins (:obj:`int`, `optional`, defaults to 32): The number of relative position bins to be used in the self-attention mechanism. fast_qkv (:obj:`bool`, `optional`, defaults to :obj:`True`): Whether or not to use a single matrix for the queries, keys, values in the self-attention layers. max_rel_2d_pos (:obj:`int`, `optional`, defaults to 256): The maximum number of relative 2D positions in the self-attention mechanism. rel_2d_pos_bins (:obj:`int`, `optional`, defaults to 64): The number of 2D relative position bins in the self-attention mechanism. image_feature_pool_shape (:obj:`List[int]`, `optional`, defaults to [7, 7, 256]): The shape of the average-pooled feature map. coordinate_size (:obj:`int`, `optional`, defaults to 128): Dimension of the coordinate embeddings. shape_size (:obj:`int`, `optional`, defaults to 128): Dimension of the width and height embeddings. has_relative_attention_bias (:obj:`bool`, `optional`, defaults to :obj:`True`): Whether or not to use a relative attention bias in the self-attention mechanism. has_spatial_attention_bias (:obj:`bool`, `optional`, defaults to :obj:`True`): Whether or not to use a spatial attention bias in the self-attention mechanism. 
has_visual_segment_embedding (:obj:`bool`, `optional`, defaults to :obj:`False`): Whether or not to add visual segment embeddings. detectron2_config_args (:obj:`dict`, `optional`): Dictionary containing the configuration arguments of the Detectron2 visual backbone. Refer to `this file <https://github.com/microsoft/unilm/blob/master/layoutlmft/layoutlmft/models/layoutlmv2/detectron2_config.py>`__ for details regarding default values. Example:: >>> from transformers import LayoutLMv2Model, LayoutLMv2Config >>> # Initializing a LayoutLMv2 microsoft/layoutlmv2-base-uncased style configuration >>> configuration = LayoutLMv2Config() >>> # Initializing a model from the microsoft/layoutlmv2-base-uncased style configuration >>> model = LayoutLMv2Model(configuration) >>> # Accessing the model configuration >>> configuration = model.config """ model_type = "layoutlmv2" def __init__( self, vocab_size=30522, hidden_size=768, num_hidden_layers=12, num_attention_heads=12, intermediate_size=3072, hidden_act="gelu", hidden_dropout_prob=0.1, attention_probs_dropout_prob=0.1, max_position_embeddings=512, type_vocab_size=2, initializer_range=0.02, layer_norm_eps=1e-12, pad_token_id=0, max_2d_position_embeddings=1024, max_rel_pos=128, rel_pos_bins=32, fast_qkv=True, max_rel_2d_pos=256, rel_2d_pos_bins=64, convert_sync_batchnorm=True, image_feature_pool_shape=[7, 7, 256], coordinate_size=128, shape_size=128, has_relative_attention_bias=True, has_spatial_attention_bias=True, has_visual_segment_embedding=False, detectron2_config_args=None, **kwargs ): super().__init__( vocab_size=vocab_size, hidden_size=hidden_size, num_hidden_layers=num_hidden_layers, num_attention_heads=num_attention_heads, intermediate_size=intermediate_size, hidden_act=hidden_act, hidden_dropout_prob=hidden_dropout_prob, attention_probs_dropout_prob=attention_probs_dropout_prob, max_position_embeddings=max_position_embeddings, type_vocab_size=type_vocab_size, initializer_range=initializer_range, layer_norm_eps=layer_norm_eps, pad_token_id=pad_token_id, **kwargs, ) self.max_2d_position_embeddings = max_2d_position_embeddings self.max_rel_pos = max_rel_pos self.rel_pos_bins = rel_pos_bins self.fast_qkv = fast_qkv self.max_rel_2d_pos = max_rel_2d_pos self.rel_2d_pos_bins = rel_2d_pos_bins self.convert_sync_batchnorm = convert_sync_batchnorm self.image_feature_pool_shape = image_feature_pool_shape self.coordinate_size = coordinate_size self.shape_size = shape_size self.has_relative_attention_bias = has_relative_attention_bias self.has_spatial_attention_bias = has_spatial_attention_bias self.has_visual_segment_embedding = has_visual_segment_embedding self.detectron2_config_args = ( detectron2_config_args if detectron2_config_args is not None else self.get_default_detectron2_config() ) @classmethod def get_default_detectron2_config(self): return { "MODEL.MASK_ON": True, "MODEL.PIXEL_STD": [57.375, 57.120, 58.395], "MODEL.BACKBONE.NAME": "build_resnet_fpn_backbone", "MODEL.FPN.IN_FEATURES": ["res2", "res3", "res4", "res5"], "MODEL.ANCHOR_GENERATOR.SIZES": [[32], [64], [128], [256], [512]], "MODEL.RPN.IN_FEATURES": ["p2", "p3", "p4", "p5", "p6"], "MODEL.RPN.PRE_NMS_TOPK_TRAIN": 2000, "MODEL.RPN.PRE_NMS_TOPK_TEST": 1000, "MODEL.RPN.POST_NMS_TOPK_TRAIN": 1000, "MODEL.POST_NMS_TOPK_TEST": 1000, "MODEL.ROI_HEADS.NAME": "StandardROIHeads", "MODEL.ROI_HEADS.NUM_CLASSES": 5, "MODEL.ROI_HEADS.IN_FEATURES": ["p2", "p3", "p4", "p5"], "MODEL.ROI_BOX_HEAD.NAME": "FastRCNNConvFCHead", "MODEL.ROI_BOX_HEAD.NUM_FC": 2, "MODEL.ROI_BOX_HEAD.POOLER_RESOLUTION": 14, 
"MODEL.ROI_MASK_HEAD.NAME": "MaskRCNNConvUpsampleHead", "MODEL.ROI_MASK_HEAD.NUM_CONV": 4, "MODEL.ROI_MASK_HEAD.POOLER_RESOLUTION": 7, "MODEL.RESNETS.DEPTH": 101, "MODEL.RESNETS.SIZES": [[32], [64], [128], [256], [512]], "MODEL.RESNETS.ASPECT_RATIOS": [[0.5, 1.0, 2.0]], "MODEL.RESNETS.OUT_FEATURES": ["res2", "res3", "res4", "res5"], "MODEL.RESNETS.NUM_GROUPS": 32, "MODEL.RESNETS.WIDTH_PER_GROUP": 8, "MODEL.RESNETS.STRIDE_IN_1X1": False, } def get_detectron2_config(self): detectron2_config = detectron2.config.get_cfg() for k, v in self.detectron2_config_args.items(): attributes = k.split(".") to_set = detectron2_config for attribute in attributes[:-1]: to_set = getattr(to_set, attribute) setattr(to_set, attributes[-1], v) return detectron2_config class LayoutLMv2OnnxConfig(OnnxConfig): def __init__( self, config: PretrainedConfig, task: str = "default", patching_specs: List[PatchingSpec] = None, ): super().__init__(config, task=task, patching_specs=patching_specs) self.max_2d_positions = config.max_2d_position_embeddings - 1 @property def inputs(self) -> Mapping[str, Mapping[int, str]]: return OrderedDict( [ ("input_ids", {0: "batch", 1: "sequence"}), ("bbox", {0: "batch", 1: "sequence"}), ("image", {0:"batch"}), ("attention_mask", {0: "batch", 1: "sequence"}), ("token_type_ids", {0: "batch", 1: "sequence"}), ] ) def generate_dummy_inputs( self, processor: LayoutLMv2Processor, batch_size: int = -1, seq_length: int = -1, is_pair: bool = False, framework: Optional[TensorType] = None, ) -> Mapping[str, Any]: """ Generate inputs to provide to the ONNX exporter for the specific framework Args: tokenizer: The tokenizer associated with this model configuration batch_size: The batch size (int) to export the model for (-1 means dynamic axis) seq_length: The sequence length (int) to export the model for (-1 means dynamic axis) is_pair: Indicate if the input is a pair (sentence 1, sentence 2) framework: The framework (optional) the tokenizer will generate tensor for is_pair Returns: Mapping[str, Tensor] holding the kwargs to provide to the model's forward function """ datasets = load_dataset("nielsr/funsd") labels = datasets['train'].features['ner_tags'].feature.names example = datasets["test"][0] # print(example.keys()) image = Image.open(example['image_path']) image = image.convert("RGB") if not framework == TensorType.PYTORCH: raise NotImplementedError("Exporting LayoutLM to ONNX is currently only supported for PyTorch.") if not is_torch_available(): raise ValueError("Cannot generate dummy inputs without PyTorch installed.") import torch input_dict = processor(image, example['words'], boxes=example['bboxes'], word_labels=example['ner_tags'], return_tensors=framework) axis = 0 for key_i in input_dict.data.keys(): input_dict.data[key_i] = torch.cat((input_dict.data[key_i], input_dict.data[key_i]), axis) return input_dict.data ``` Now when I am trying to run the below code, ``` processor = LayoutLMv2Processor.from_pretrained("microsoft/layoutlmv2-base-uncased", revision="no_ocr") model = LayoutLMv2ForTokenClassification.from_pretrained("microsoft/layoutlmv2-base-uncased", torchscript=True) onnx_config = LayoutLMv2OnnxConfig(model.config) export(tokenizer=processor, model=model, config=onnx_config, opset=13, output=Path('onnx/layout.onnx')) ``` I am facing the below error. 
``` Traceback (most recent call last): File "/home/muhammad/PycharmProjects/js_labs /Layoutv2/convert_lmv2.py", line 11, in <module> export(tokenizer=processor, model=model, config=onnx_config, opset=9, output=Path('onnx/layout.onnx')) File "/home/muhammad/PycharmProjects/js_labs /anaconda3/envs/onnx-env/lib/python3.7/site-packages/transformers/onnx/convert.py", line 125, in export opset_version=opset, File "/home/muhammad/PycharmProjects/js_labs /anaconda3/envs/onnx-env/lib/python3.7/site-packages/torch/onnx/_init_.py", line 320, in export custom_opsets, enable_onnx_checker, use_external_data_format) File "/home/muhammad/PycharmProjects/js_labs /anaconda3/envs/onnx-env/lib/python3.7/site-packages/torch/onnx/utils.py", line 111, in export custom_opsets=custom_opsets, use_external_data_format=use_external_data_format) File "/home/muhammad/PycharmProjects/js_labs /anaconda3/envs/onnx-env/lib/python3.7/site-packages/torch/onnx/utils.py", line 740, in _export val_add_node_names, val_use_external_data_format, model_file_location) RuntimeError: ONNX export failed: Couldn't export operator aten::adaptive_avg_pool2d ``` One more thing, for dummy input I have provide image as `"image", {0:"batch"}`, is this mapping right or do we have to provide image in a different manner.<|||||>+1<|||||>+1<|||||>Hi, Would be great if you could Google the errors before pinging us (because we at Huggingface are pretty busy). Eg in this case, you can find the answer in the first result on Google: https://github.com/onnx/tutorials/issues/63#issuecomment-559007498 => The reason is that LayoutLMv2 uses a visual backbone, which includes layers like AdapativeAvgPool2d which aren't supported natively by ONNX.<|||||>Hi @NielsRogge , I followed your guide and made the required changes. I updated the pooling layer and now I am faced with the below error. I had googled the previous issue as well but was not kind of sure where to make pooling layer changes. This time I had searched for the subjected issue but to no avail as I am kind of new to to onnx. Would you please point out where I am making error in the code below. 
``` from transformers.onnx import OnnxConfig, PatchingSpec from transformers.configuration_utils import PretrainedConfig from typing import Any, List, Mapping, Optional, Tuple, Union, Iterable from collections import OrderedDict from transformers import LayoutLMv2Processor from datasets import load_dataset from PIL import Image import torch from transformers import PreTrainedModel, TensorType from torch.onnx import export from transformers.file_utils import torch_version, is_torch_onnx_dict_inputs_support_available from pathlib import Path from transformers.utils import logging from inspect import signature from itertools import chain from transformers import LayoutLMv2ForTokenClassification from torch import nn from torch.onnx import OperatorExportTypes logger = logging.get_logger(__name__) # pylint: disable=invalid-name class LayoutLMv2OnnxConfig(OnnxConfig): def __init__( self, config: PretrainedConfig, task: str = "default", patching_specs: List[PatchingSpec] = None, ): super().__init__(config, task=task, patching_specs=patching_specs) self.max_2d_positions = config.max_2d_position_embeddings - 1 @property def inputs(self) -> Mapping[str, Mapping[int, str]]: return OrderedDict( [ ("input_ids", {0: "batch", 1: "sequence"}), ("bbox", {0: "batch", 1: "sequence"}), ("image", {0: "batch"}), ("attention_mask", {0: "batch", 1: "sequence"}), ("token_type_ids", {0: "batch", 1: "sequence"}), ] ) def generate_dummy_inputs( self, processor: LayoutLMv2Processor, batch_size: int = -1, seq_length: int = -1, is_pair: bool = False, framework: Optional[TensorType] = None, ) -> Mapping[str, Any]: datasets = load_dataset("nielsr/funsd") example = datasets["test"][0] image = Image.open(example['image_path']) image = image.convert("RGB") if not framework == TensorType.PYTORCH: raise NotImplementedError("Exporting LayoutLM to ONNX is currently only supported for PyTorch.") input_dict = processor(image, example['words'], boxes=example['bboxes'], word_labels=example['ner_tags'], return_tensors=framework) axis = 0 for key_i in input_dict.data.keys(): input_dict.data[key_i] = torch.cat((input_dict.data[key_i], input_dict.data[key_i]), axis) return input_dict.data class pool_layer(nn.Module): def __init__(self): super(pool_layer, self).__init__() self.fc = nn.AvgPool2d(kernel_size=[8, 8], stride=[8, 8]) def forward(self, x): output = self.fc(x) return output def ensure_model_and_config_inputs_match( model: PreTrainedModel, model_inputs: Iterable[str] ) -> Tuple[bool, List[str]]: """ :param model: :param model_inputs: :return: """ forward_parameters = signature(model.forward).parameters model_inputs_set = set(model_inputs) # We are fine if config_inputs has more keys than model_inputs forward_inputs_set = set(forward_parameters.keys()) is_ok = model_inputs_set.issubset(forward_inputs_set) # Make sure the input order match (VERY IMPORTANT !!!!) 
matching_inputs = forward_inputs_set.intersection(model_inputs_set) ordered_inputs = [parameter for parameter in forward_parameters.keys() if parameter in matching_inputs] return is_ok, ordered_inputs def export_model( processor: LayoutLMv2Processor, model: PreTrainedModel, config: LayoutLMv2OnnxConfig, opset: int, output: Path ) -> Tuple[List[str], List[str]]: """ Export a PyTorch backed pipeline to ONNX Intermediate Representation (IR Args: processor: model: config: opset: output: Returns: """ if not is_torch_onnx_dict_inputs_support_available(): raise AssertionError(f"Unsupported PyTorch version, minimum required is 1.8.0, got: {torch_version}") logger.info(f"Using framework PyTorch: {torch.__version__}") with torch.no_grad(): model.config.return_dict = True model.eval() # Check if we need to override certain configuration item if config.values_override is not None: logger.info(f"Overriding {len(config.values_override)} configuration item(s)") for override_config_key, override_config_value in config.values_override.items(): logger.info(f"\t- {override_config_key} -> {override_config_value}") setattr(model.config, override_config_key, override_config_value) model_inputs = config.generate_dummy_inputs(processor, framework=TensorType.PYTORCH) inputs_match, matched_inputs = ensure_model_and_config_inputs_match(model, model_inputs.keys()) print(matched_inputs) onnx_outputs = list(config.outputs.keys()) if not inputs_match: raise ValueError("Model and config inputs doesn't match") config.patch_ops() model_inputs.pop("labels") export( model, (model_inputs,), f=output.as_posix(), input_names=list(config.inputs.keys()), output_names=onnx_outputs, dynamic_axes={name: axes for name, axes in chain(config.inputs.items(), config.outputs.items())}, do_constant_folding=True, use_external_data_format=config.use_external_data_format(model.num_parameters()), enable_onnx_checker=True, opset_version=opset, # operator_export_type=OperatorExportTypes.ONNX_ATEN_FALLBACK ) config.restore_ops() return matched_inputs, onnx_outputs if __name__ == '__main__': processor = LayoutLMv2Processor.from_pretrained("microsoft/layoutlmv2-base-uncased", revision="no_ocr") model = LayoutLMv2ForTokenClassification.from_pretrained("microsoft/layoutlmv2-base-uncased", torchscript = True) model.layoutlmv2.visual.pool = torch.nn.Sequential(pool_layer()) onnx_config = LayoutLMv2OnnxConfig(model.config) export_model(processor=processor, model=model, config=onnx_config, opset=13, output=Path('onnx/layout.onnx')) ``` Running the above code is raising the below error, ``` RuntimeError Traceback (most recent call last) <ipython-input-6-134631b21e61> in <module>() 168 model.layoutlmv2.visual.pool = torch.nn.Sequential(pool_layer()) 169 onnx_config = LayoutLMv2OnnxConfig(model.config) --> 170 export_model(processor=processor, model=model, config=onnx_config, opset=13, output=Path('onnx/layout.onnx')) 4 frames <ipython-input-6-134631b21e61> in export_model(processor, model, config, opset, output) 154 use_external_data_format=config.use_external_data_format(model.num_parameters()), 155 enable_onnx_checker=True, --> 156 opset_version=opset, 157 # operator_export_type=OperatorExportTypes.ONNX_ATEN_FALLBACK 158 ) /usr/local/lib/python3.7/dist-packages/torch/onnx/__init__.py in export(model, args, f, export_params, verbose, training, input_names, output_names, aten, export_raw_ir, operator_export_type, opset_version, _retain_param_name, do_constant_folding, example_outputs, strip_doc_string, dynamic_axes, keep_initializers_as_inputs, 
custom_opsets, enable_onnx_checker, use_external_data_format) 274 do_constant_folding, example_outputs, 275 strip_doc_string, dynamic_axes, keep_initializers_as_inputs, --> 276 custom_opsets, enable_onnx_checker, use_external_data_format) 277 278 /usr/local/lib/python3.7/dist-packages/torch/onnx/utils.py in export(model, args, f, export_params, verbose, training, input_names, output_names, aten, export_raw_ir, operator_export_type, opset_version, _retain_param_name, do_constant_folding, example_outputs, strip_doc_string, dynamic_axes, keep_initializers_as_inputs, custom_opsets, enable_onnx_checker, use_external_data_format) 92 dynamic_axes=dynamic_axes, keep_initializers_as_inputs=keep_initializers_as_inputs, 93 custom_opsets=custom_opsets, enable_onnx_checker=enable_onnx_checker, ---> 94 use_external_data_format=use_external_data_format) 95 96 /usr/local/lib/python3.7/dist-packages/torch/onnx/utils.py in _export(model, args, f, export_params, verbose, training, input_names, output_names, operator_export_type, export_type, example_outputs, opset_version, _retain_param_name, do_constant_folding, strip_doc_string, dynamic_axes, keep_initializers_as_inputs, fixed_batch_size, custom_opsets, add_node_names, enable_onnx_checker, use_external_data_format, onnx_shape_inference, use_new_jit_passes) 696 training=training, 697 use_new_jit_passes=use_new_jit_passes, --> 698 dynamic_axes=dynamic_axes) 699 700 # TODO: Don't allocate a in-memory string for the protobuf /usr/local/lib/python3.7/dist-packages/torch/onnx/utils.py in _model_to_graph(model, args, verbose, input_names, output_names, operator_export_type, example_outputs, _retain_param_name, do_constant_folding, _disable_torch_constant_prop, fixed_batch_size, training, use_new_jit_passes, dynamic_axes) 498 if do_constant_folding and _export_onnx_opset_version in torch.onnx.constant_folding_opset_versions: 499 params_dict = torch._C._jit_pass_onnx_constant_fold(graph, params_dict, --> 500 _export_onnx_opset_version) 501 torch._C._jit_pass_dce_allow_deleting_nodes_with_side_effects(graph) 502 RuntimeError: Tensors must have same number of dimensions: got 2 and 1 ```<|||||>@fadi212 Have you tried using another `opset` version, such as 11? Speaking from complete ignorance here, but maybe worth a try :)<|||||>my model is converted to onnx but at time of loading model to onnxruntime I am getting below error. Type Error: Type parameter (T) bound to different types (tensor(double) and tensor(float) in node () @michaelbenayoun @wilbry @fadi212 <|||||>Hi, Can you check out the solution provided [here](https://github.com/microsoft/onnxruntime/issues/649)? Also, if you managed to convert the model to ONNX, feel free to open a PR which we can review, it will benefit the community a lot. Thanks!<|||||>Hi @lalitr994 , I did not face this error. I was able to convert my model to onnx and loading and predicting correctly. I am working on creating a PR but facing some issues as the conversion process for this model is a bit different than others.<|||||>@fadi212 can you share your repo. how you have converted and loaded onnx model to onnx runtime? I am stucked at loading model to run time. <|||||>Hi @lalitr994 , You can use this script to convert the code for now. 
` from transformers.onnx import OnnxConfig, PatchingSpec from transformers.configuration_utils import PretrainedConfig from typing import Any, List, Mapping, Optional, Tuple, Iterable from collections import OrderedDict from transformers import LayoutLMv2Processor from datasets import load_dataset from PIL import Image import torch from transformers import PreTrainedModel, TensorType from torch.onnx import export from transformers.file_utils import torch_version, is_torch_onnx_dict_inputs_support_available from pathlib import Path from transformers.utils import logging from inspect import signature from itertools import chain from transformers import LayoutLMv2ForTokenClassification from torch import nn from torch.onnx import OperatorExportTypes logger = logging.get_logger(__name__) # pylint: disable=invalid-name class LayoutLMv2OnnxConfig(OnnxConfig): def __init__( self, config: PretrainedConfig, task: str = "default", patching_specs: List[PatchingSpec] = None, ): super().__init__(config, task=task, patching_specs=patching_specs) @property def inputs(self) -> Mapping[str, Mapping[int, str]]: return OrderedDict( [ ("input_ids", {0: "batch", 1: "sequence"}), ("bbox", {0: "batch", 1: "sequence"}), ("image", {0: "batch"}), ("attention_mask", {0: "batch", 1: "sequence"}), ("token_type_ids", {0: "batch", 1: "sequence"}), ] ) @property def outputs(self) -> Mapping[str, Mapping[int, str]]: return OrderedDict( [ # ("loss", {}), ("logits", {0: "batch", 1: "sequence"}), # ("hidden_states", {}), # ("attentions", {}) ] ) def generate_dummy_inputs( self, processor: LayoutLMv2Processor, batch_size: int = -1, seq_length: int = -1, is_pair: bool = False, framework: Optional[TensorType] = None, ) -> Mapping[str, Any]: # datasets = load_dataset("nielsr/funsd") # example = datasets["test"][0] # image = Image.open(example['image_path']) # image = image.convert("RGB") if not framework == TensorType.PYTORCH: raise NotImplementedError("Exporting LayoutLM to ONNX is currently only supported for PyTorch.") # input_dict = processor(image, example['words'], boxes=example['bboxes'], word_labels=example['ner_tags'], # return_tensors=framework) # axis = 0 # for key_i in input_dict.data.keys(): # input_dict.data[key_i] = torch.cat((input_dict.data[key_i], input_dict.data[key_i]), axis) return dict( input_ids=torch.zeros((2, 8), dtype=torch.int64), token_type_ids=torch.zeros((2, 8), dtype=torch.int64), attention_mask=torch.zeros((2, 8), dtype=torch.float), bbox=torch.zeros((2, 8, 4), dtype=torch.int64), labels=torch.zeros((2, 8), dtype=torch.int64), image=torch.zeros((2, 3, 224, 224), dtype=torch.int64), ) class pool_layer(nn.Module): def __init__(self): super(pool_layer, self).__init__() self.pool = nn.AvgPool2d(kernel_size=[8, 8], stride=[8, 8]) def forward(self, x): output = self.pool(x) return output def ensure_model_and_config_inputs_match( model: PreTrainedModel, model_inputs: Iterable[str] ) -> Tuple[bool, List[str]]: """ :param model: :param model_inputs: :return: """ forward_parameters = signature(model.forward).parameters model_inputs_set = set(model_inputs) # We are fine if config_inputs has more keys than model_inputs forward_inputs_set = set(forward_parameters.keys()) is_ok = model_inputs_set.issubset(forward_inputs_set) # Make sure the input order match (VERY IMPORTANT !!!!) 
matching_inputs = forward_inputs_set.intersection(model_inputs_set) ordered_inputs = [parameter for parameter in forward_parameters.keys() if parameter in matching_inputs] return is_ok, ordered_inputs def export_model( processor: LayoutLMv2Processor, model: PreTrainedModel, config: LayoutLMv2OnnxConfig, opset: int, output: Path ) -> Tuple[List[str], List[str]]: """ Export a PyTorch backed pipeline to ONNX Intermediate Representation (IR Args: processor: model: config: opset: output: Returns: """ if not is_torch_onnx_dict_inputs_support_available(): raise AssertionError(f"Unsupported PyTorch version, minimum required is 1.8.0, got: {torch_version}") # logger.info(f"Using framework PyTorch: {torch.__version__}") with torch.no_grad(): model.config.return_dict = True model.eval() # Check if we need to override certain configuration item if config.values_override is not None: logger.info(f"Overriding {len(config.values_override)} configuration item(s)") for override_config_key, override_config_value in config.values_override.items(): logger.info(f"\t- {override_config_key} -> {override_config_value}") setattr(model.config, override_config_key, override_config_value) model_inputs = config.generate_dummy_inputs(processor, framework=TensorType.PYTORCH) inputs_match, matched_inputs = ensure_model_and_config_inputs_match(model, model_inputs.keys()) onnx_outputs = list(config.outputs.keys()) if not inputs_match: raise ValueError("Model and config inputs doesn't match") model_inputs.pop("labels") config.patch_ops() export( model, (model_inputs,), f=output.as_posix(), input_names=list(config.inputs.keys()), output_names=onnx_outputs, dynamic_axes={name: axes for name, axes in chain(config.inputs.items(), config.outputs.items())}, do_constant_folding=True, use_external_data_format=config.use_external_data_format(model.num_parameters()), enable_onnx_checker=True, opset_version=opset, verbose=True # operator_export_type=OperatorExportTypes.ONNX_ATEN_FALLBACK ) config.restore_ops() return matched_inputs, onnx_outputs if __name__ == '__main__': processor = LayoutLMv2Processor.from_pretrained("microsoft/layoutlmv2-base-uncased", revision="no_ocr") model = LayoutLMv2ForTokenClassification.from_pretrained("microsoft/layoutlmv2-base-uncased", num_labels=7) model.layoutlmv2.visual.pool = pool_layer() onnx_config = LayoutLMv2OnnxConfig(model.config) export_model(processor=processor, model=model, config=onnx_config, opset=13, output=Path('onnx/layout2.onnx')) ` Also you will have to make the change these lines in modeling_layoutlmv2 in transformers library. ` visual_shape = deepcopy(list(input_shape)) #line 859 visual_shape[1] = self.config.image_feature_pool_shape[0] * self.config.image_feature_pool_shape[1] visual_shape = torch.Size(visual_shape) final_shape = deepcopy(list(input_shape)) #line 862 final_shape[1] += visual_shape[1] final_shape = torch.Size(final_shape) `<|||||>Hi @fadi212! Thanks for your script! I'm having some trouble exporting the `microsoft/layoutlmv2-base-uncased` model (just testing it works ok before exporting my model). I have discarded any errors in your code, as it works perfectly, but it ends up failing with a segmentation fault deep into some `pytorch` C bindings. May I ask you what versions of the libraries have you installed, in particular `pytorch` and `onnx`? --- Just for the record, the segfault happens consistently at line 218 of the picture, which is located inside an optimization routine called `_optimize_graph` at `torch.onnx.utils`. 
![image](https://user-images.githubusercontent.com/42223959/143627403-87c2e60f-addf-461f-9a55-f8f5dbe956a1.png)

Interestingly, by explicitly setting the `operator_export_type` to `OperatorExportTypes.ONNX_ATEN` on the `export` function, it manages to get through that line, but fails again a little further down, at line 238 of the picture, albeit without a segfault (just a regular Python exception traceback):

![image](https://user-images.githubusercontent.com/42223959/143644752-55de4011-7547-4598-8e5f-80b4fbac71ce.png)

I think I have narrowed the problem down to the generation of invalid ONNX code (in particular, some `UNKNOWN_SCALAR`s), most likely due to some unsupported operation similar to the `AdaptiveAvgPool2d` -> `AvgPool2d` issue. <|||||>Hi @viantirreau @lalitr994 , you can take a look at this PR and convert your model with this branch: https://github.com/huggingface/transformers/pull/14555<|||||>Thanks @fadi212, I will try my model with this branch<|||||>During ONNX conversion I got warnings like **/torch/onnx/symbolic_helper.py:258: UserWarning: ONNX export failed on adaptive_avg_pool2d because input size not accessible not supported warnings.warn("ONNX export failed on " + op + " because " + msg + " not supported") Warning: Shape inference does not support models with experimental operators: ATen**, and during inference from the model I got the error below: **onnxruntime.capi.onnxruntime_pybind11_state.Fail: [ONNXRuntimeError] : 1 : FAIL : Load model from onnx_model.onnx failed:Fatal error: adaptive_avg_pool2d is not a registered function/op** @fadi212 I followed your code but am facing this issue. <|||||>Hi @riqui-puig, I have created a PR to add support for LayoutLMv2; you can use that: https://github.com/huggingface/transformers/pull/14555 The code is not merged yet, but you can install that particular branch and then convert your model using the command line. <|||||>Hi @fadi212, I have tried to convert a **LayoutLMv2** Q-A model to **onnx** but it is still showing errors. Could you please guide me here? Thanks in advance.
**Command**: !python -m transformers.onnx --model=microsoft/layoutlmv2-base-uncased onnx/ **Error log**: Some weights of the model checkpoint at microsoft/layoutlmv2-base-uncased were not used when initializing LayoutLMv2Model: ['layoutlmv2.visual.backbone.bottom_up.res4.15.conv3.norm.num_batches_tracked', 'layoutlmv2.visual.backbone.bottom_up.res4.0.shortcut.norm.num_batches_tracked', 'layoutlmv2.visual.backbone.bottom_up.res4.10.conv1.norm.num_batches_tracked', 'layoutlmv2.visual.backbone.bottom_up.res4.2.conv2.norm.num_batches_tracked', 'layoutlmv2.visual.backbone.bottom_up.res4.17.conv3.norm.num_batches_tracked', 'layoutlmv2.visual.backbone.bottom_up.res5.2.conv3.norm.num_batches_tracked', 'layoutlmv2.visual.backbone.bottom_up.res4.22.conv2.norm.num_batches_tracked', 'layoutlmv2.visual.backbone.bottom_up.res4.14.conv2.norm.num_batches_tracked', 'layoutlmv2.visual.backbone.bottom_up.res4.8.conv3.norm.num_batches_tracked', 'layoutlmv2.visual.backbone.bottom_up.res4.17.conv2.norm.num_batches_tracked', 'layoutlmv2.visual.backbone.bottom_up.res4.14.conv1.norm.num_batches_tracked', 'layoutlmv2.visual.backbone.bottom_up.res4.19.conv2.norm.num_batches_tracked', 'layoutlmv2.visual.backbone.bottom_up.res2.0.conv3.norm.num_batches_tracked', 'layoutlmv2.visual.backbone.bottom_up.res2.2.conv1.norm.num_batches_tracked', 'layoutlmv2.visual.backbone.bottom_up.res4.7.conv1.norm.num_batches_tracked', 'layoutlmv2.visual.backbone.bottom_up.res3.0.conv1.norm.num_batches_tracked', 'layoutlmv2.visual.backbone.bottom_up.res4.12.conv2.norm.num_batches_tracked', 'layoutlmv2.visual.backbone.bottom_up.res4.9.conv3.norm.num_batches_tracked', 'layoutlmv2.visual.backbone.bottom_up.res2.1.conv1.norm.num_batches_tracked', 'layoutlmv2.visual.backbone.bottom_up.res5.0.shortcut.norm.num_batches_tracked', 'layoutlmv2.visual.backbone.bottom_up.res4.2.conv1.norm.num_batches_tracked', 'layoutlmv2.visual.backbone.bottom_up.res4.11.conv2.norm.num_batches_tracked', 'layoutlmv2.visual.backbone.bottom_up.res3.3.conv1.norm.num_batches_tracked', 'layoutlmv2.visual.backbone.bottom_up.res4.14.conv3.norm.num_batches_tracked', 'layoutlmv2.visual.backbone.bottom_up.res5.1.conv1.norm.num_batches_tracked', 'layoutlmv2.visual.backbone.bottom_up.res4.13.conv3.norm.num_batches_tracked', 'layoutlmv2.visual.backbone.bottom_up.res4.2.conv3.norm.num_batches_tracked', 'layoutlmv2.visual.backbone.bottom_up.res4.15.conv2.norm.num_batches_tracked', 'layoutlmv2.visual.backbone.bottom_up.res4.12.conv1.norm.num_batches_tracked', 'layoutlmv2.visual.backbone.bottom_up.res4.5.conv1.norm.num_batches_tracked', 'layoutlmv2.visual.backbone.bottom_up.res3.1.conv2.norm.num_batches_tracked', 'layoutlmv2.visual.backbone.bottom_up.res3.1.conv3.norm.num_batches_tracked', 'layoutlmv2.visual.backbone.bottom_up.res5.1.conv3.norm.num_batches_tracked', 'layoutlmv2.visual.backbone.bottom_up.res2.0.shortcut.norm.num_batches_tracked', 'layoutlmv2.visual.backbone.bottom_up.res5.2.conv2.norm.num_batches_tracked', 'layoutlmv2.visual.backbone.bottom_up.res4.16.conv2.norm.num_batches_tracked', 'layoutlmv2.visual.backbone.bottom_up.res5.0.conv2.norm.num_batches_tracked', 'layoutlmv2.visual.backbone.bottom_up.res4.8.conv2.norm.num_batches_tracked', 'layoutlmv2.visual.backbone.bottom_up.res4.21.conv3.norm.num_batches_tracked', 'layoutlmv2.visual.backbone.bottom_up.res3.1.conv1.norm.num_batches_tracked', 'layoutlmv2.visual.backbone.bottom_up.res5.0.conv1.norm.num_batches_tracked', 'layoutlmv2.visual.backbone.bottom_up.res5.1.conv2.norm.num_batches_tracked', 
'layoutlmv2.visual.backbone.bottom_up.res4.18.conv1.norm.num_batches_tracked', 'layoutlmv2.visual.backbone.bottom_up.res4.3.conv1.norm.num_batches_tracked', 'layoutlmv2.visual.backbone.bottom_up.res4.15.conv1.norm.num_batches_tracked', 'layoutlmv2.visual.backbone.bottom_up.res4.10.conv2.norm.num_batches_tracked', 'layoutlmv2.visual.backbone.bottom_up.res4.3.conv3.norm.num_batches_tracked', 'layoutlmv2.visual.backbone.bottom_up.res4.21.conv1.norm.num_batches_tracked', 'layoutlmv2.visual.backbone.bottom_up.res4.10.conv3.norm.num_batches_tracked', 'layoutlmv2.visual.backbone.bottom_up.res4.7.conv2.norm.num_batches_tracked', 'layoutlmv2.visual.backbone.bottom_up.res4.9.conv2.norm.num_batches_tracked', 'layoutlmv2.visual.backbone.bottom_up.res4.4.conv2.norm.num_batches_tracked', 'layoutlmv2.visual.backbone.bottom_up.res4.0.conv2.norm.num_batches_tracked', 'layoutlmv2.visual.backbone.bottom_up.res4.1.conv2.norm.num_batches_tracked', 'layoutlmv2.visual.backbone.bottom_up.res4.3.conv2.norm.num_batches_tracked', 'layoutlmv2.visual.backbone.bottom_up.res4.20.conv2.norm.num_batches_tracked', 'layoutlmv2.visual.backbone.bottom_up.res2.0.conv1.norm.num_batches_tracked', 'layoutlmv2.visual.backbone.bottom_up.res4.0.conv1.norm.num_batches_tracked', 'layoutlmv2.visual.backbone.bottom_up.res4.11.conv3.norm.num_batches_tracked', 'layoutlmv2.visual.backbone.bottom_up.res3.2.conv2.norm.num_batches_tracked', 'layoutlmv2.visual.backbone.bottom_up.res4.20.conv1.norm.num_batches_tracked', 'layoutlmv2.visual.backbone.bottom_up.res2.2.conv3.norm.num_batches_tracked', 'layoutlmv2.visual.backbone.bottom_up.stem.conv1.norm.num_batches_tracked', 'layoutlmv2.visual.backbone.bottom_up.res4.8.conv1.norm.num_batches_tracked', 'layoutlmv2.visual.backbone.bottom_up.res4.5.conv3.norm.num_batches_tracked', 'layoutlmv2.visual.backbone.bottom_up.res3.0.conv3.norm.num_batches_tracked', 'layoutlmv2.visual.backbone.bottom_up.res4.6.conv2.norm.num_batches_tracked', 'layoutlmv2.visual.backbone.bottom_up.res4.16.conv3.norm.num_batches_tracked', 'layoutlmv2.visual.backbone.bottom_up.res4.22.conv3.norm.num_batches_tracked', 'layoutlmv2.visual.backbone.bottom_up.res4.18.conv2.norm.num_batches_tracked', 'layoutlmv2.visual.backbone.bottom_up.res4.19.conv3.norm.num_batches_tracked', 'layoutlmv2.visual.backbone.bottom_up.res3.3.conv2.norm.num_batches_tracked', 'layoutlmv2.visual.backbone.bottom_up.res4.4.conv1.norm.num_batches_tracked', 'layoutlmv2.visual.backbone.bottom_up.res4.13.conv2.norm.num_batches_tracked', 'layoutlmv2.visual.backbone.bottom_up.res4.17.conv1.norm.num_batches_tracked', 'layoutlmv2.visual.backbone.bottom_up.res3.3.conv3.norm.num_batches_tracked', 'layoutlmv2.visual.backbone.bottom_up.res4.1.conv1.norm.num_batches_tracked', 'layoutlmv2.visual.backbone.bottom_up.res4.6.conv1.norm.num_batches_tracked', 'layoutlmv2.visual.backbone.bottom_up.res3.2.conv3.norm.num_batches_tracked', 'layoutlmv2.visual.backbone.bottom_up.res4.19.conv1.norm.num_batches_tracked', 'layoutlmv2.visual.backbone.bottom_up.res4.0.conv3.norm.num_batches_tracked', 'layoutlmv2.visual.backbone.bottom_up.res3.2.conv1.norm.num_batches_tracked', 'layoutlmv2.visual.backbone.bottom_up.res4.12.conv3.norm.num_batches_tracked', 'layoutlmv2.visual.backbone.bottom_up.res4.5.conv2.norm.num_batches_tracked', 'layoutlmv2.visual.backbone.bottom_up.res4.18.conv3.norm.num_batches_tracked', 'layoutlmv2.visual.backbone.bottom_up.res4.11.conv1.norm.num_batches_tracked', 'layoutlmv2.visual.backbone.bottom_up.res2.0.conv2.norm.num_batches_tracked', 
'layoutlmv2.visual.backbone.bottom_up.res2.1.conv3.norm.num_batches_tracked', 'layoutlmv2.visual.backbone.bottom_up.res4.1.conv3.norm.num_batches_tracked', 'layoutlmv2.visual.backbone.bottom_up.res4.13.conv1.norm.num_batches_tracked', 'layoutlmv2.visual.backbone.bottom_up.res3.0.shortcut.norm.num_batches_tracked', 'layoutlmv2.visual.backbone.bottom_up.res5.2.conv1.norm.num_batches_tracked', 'layoutlmv2.visual.backbone.bottom_up.res4.20.conv3.norm.num_batches_tracked', 'layoutlmv2.visual.backbone.bottom_up.res4.9.conv1.norm.num_batches_tracked', 'layoutlmv2.visual.backbone.bottom_up.res4.4.conv3.norm.num_batches_tracked', 'layoutlmv2.visual.backbone.bottom_up.res4.22.conv1.norm.num_batches_tracked', 'layoutlmv2.visual.backbone.bottom_up.res3.0.conv2.norm.num_batches_tracked', 'layoutlmv2.visual.backbone.bottom_up.res4.6.conv3.norm.num_batches_tracked', 'layoutlmv2.visual.backbone.bottom_up.res4.16.conv1.norm.num_batches_tracked', 'layoutlmv2.visual.backbone.bottom_up.res2.1.conv2.norm.num_batches_tracked', 'layoutlmv2.visual.backbone.bottom_up.res5.0.conv3.norm.num_batches_tracked', 'layoutlmv2.visual.backbone.bottom_up.res4.7.conv3.norm.num_batches_tracked', 'layoutlmv2.visual.backbone.bottom_up.res2.2.conv2.norm.num_batches_tracked', 'layoutlmv2.visual.backbone.bottom_up.res4.21.conv2.norm.num_batches_tracked'] - This IS expected if you are initializing LayoutLMv2Model from the checkpoint of a model trained on another task or with another architecture (e.g. initializing a BertForSequenceClassification model from a BertForPreTraining model). - This IS NOT expected if you are initializing LayoutLMv2Model from the checkpoint of a model that you expect to be exactly identical (initializing a BertForSequenceClassification model from a BertForSequenceClassification model). Using framework PyTorch: 1.12.0+cu113 Traceback (most recent call last): File "/usr/lib/python3.7/runpy.py", line 193, in _run_module_as_main "__main__", mod_spec) File "/usr/lib/python3.7/runpy.py", line 85, in _run_code exec(code, run_globals) File "/usr/local/lib/python3.7/dist-packages/transformers/onnx/__main__.py", line 107, in <module> main() File "/usr/local/lib/python3.7/dist-packages/transformers/onnx/__main__.py", line 94, in main args.output, File "/usr/local/lib/python3.7/dist-packages/transformers/onnx/convert.py", line 352, in export return export_pytorch(preprocessor, model, config, opset, output, tokenizer=tokenizer, device=device) File "/usr/local/lib/python3.7/dist-packages/transformers/onnx/convert.py", line 168, in export_pytorch model.layoutlmv2.visual.pool = PoolerLayer() File "/usr/local/lib/python3.7/dist-packages/torch/nn/modules/module.py", line 1208, in __getattr__ type(self).__name__, name)) AttributeError: 'LayoutLMv2Model' object has no attribute 'layoutlmv2'
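To summarize the export-time workaround used throughout this thread: ONNX cannot export `AdaptiveAvgPool2d`, so the visual backbone's pooler is swapped for a fixed-kernel `AvgPool2d` before calling `torch.onnx.export`. The kernel and stride values below follow the snippets above and should be treated as assumptions tied to the base checkpoint's 224x224 visual input:
```python
from torch import nn
from transformers import LayoutLMv2ForTokenClassification

class FixedPool(nn.Module):
    """Fixed-size replacement for the ONNX-unsupported AdaptiveAvgPool2d."""
    def __init__(self):
        super().__init__()
        self.pool = nn.AvgPool2d(kernel_size=[8, 8], stride=[8, 8])

    def forward(self, x):
        return self.pool(x)

model = LayoutLMv2ForTokenClassification.from_pretrained("microsoft/layoutlmv2-base-uncased")
model.layoutlmv2.visual.pool = FixedPool()  # swap before torch.onnx.export
```
The `AttributeError` in the last log above is consistent with the generic CLI loading a bare `LayoutLMv2Model`, where the pooler lives at `model.visual.pool` rather than `model.layoutlmv2.visual.pool`.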
transformers
14,367
closed
Weird assumptions in the PLM collator
Hi, I'm a researcher working with the HF codebase. I'm using the PLM collator, but some of that module's assumptions confuse me and block my experiments. Can you explain?

The PLM collator assumes the input length is even so that it can split the sequence into two halves. The explanation starts [here](https://github.com/huggingface/transformers/blob/1c76a51615ccd5c8e60570771b29ef16a9c3bc17/src/transformers/data/data_collator.py#L1269). If I understand correctly, it says that, to avoid leaking information, we can only permute half of the sequence, under the assumption that half of the sequence is mems. However, two things confuse me:

1. The input sequence here contains only the new tokens, and the mems should be a separate variable. So why do we assume half of the sequence is going to be reused?
2. It points me to the documentation of `mems`, but that documentation is copied from other transformer models. It says that `mems` is used to speed up decoding by avoiding repeated computation, which doesn't fit XLNet, where mems store history information. Is the documentation correct? If not, where can I find the correct version?

Thanks in advance! @sgugger
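Not an answer to the design question, but a practical note: the collator rejects odd-length sequences outright, so inputs are usually padded to an even length up front. A sketch, assuming a stock XLNet tokenizer (the sample sentences are arbitrary):
```python
from transformers import AutoTokenizer, DataCollatorForPermutationLanguageModeling

tokenizer = AutoTokenizer.from_pretrained("xlnet-base-cased")
collator = DataCollatorForPermutationLanguageModeling(tokenizer=tokenizer)

# pad_to_multiple_of=2 guarantees the even sequence length the collator expects
batch = tokenizer(
    ["A short example", "Another slightly longer example sentence"],
    padding=True,
    pad_to_multiple_of=2,
)
features = [{"input_ids": ids} for ids in batch["input_ids"]]
out = collator(features)
print(out["input_ids"].shape, out["perm_mask"].shape)
```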
11-11-2021 08:16:18
11-11-2021 08:16:18
This collator is community-contributed, so you should ask the person who contributed it :-)<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed, please comment on this thread. Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored.
transformers
14,366
closed
run_mlm.py Issue | MODEL_FOR_MASKED_LM_MAPPING is None
## Environment info

<!-- You can run the command `transformers-cli env` and copy-and-paste its output below. Don't forget to fill out the missing fields in that output! -->

- `transformers` version: 4.13.0.dev0
- Platform: Linux-3.10.0-1160.25.1.el7.x86_64-x86_64-with-centos-7.6.1810-Core
- Python version: 3.7.3
- PyTorch version (GPU?): not installed (NA)
- Tensorflow version (GPU?): 2.7.0 (True)
- Flax version (CPU?/GPU?/TPU?): not installed (NA)
- Jax version: not installed
- JaxLib version: not installed
- Using GPU in script?: Yes
- Using distributed or parallel set-up in script?: No

### Who can help

<!-- Your issue will be replied to more quickly if you can figure out the right person to tag with @ If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**. Please tag fewer than 3 people. -->

@sgugger @Rocketknight1 @Elysium1436

## Information

Model I am using: Bert

The problem arises when using:
* [ run_mlm.py ] the official example scripts: (give details below)

The tasks I am working on is:
* [ language-modeling ] an official GLUE/SQUaD task: (give the name)

## To reproduce

Steps to reproduce the behavior:

1. Prepare the env:
```
python3 -m venv venv
git clone https://github.com/huggingface/transformers
cd transformers
pip install .
pip install tensorflow
pip install datasets
pip install sklearn
```
2. Run the script:
```
python run_mlm.py \
  --model_name_or_path distilbert-base-cased \
  --output_dir output \
  --dataset_name wikitext \
  --dataset_config_name wikitext-103-raw-v1
```
3. Get the following error:
```
Traceback (most recent call last):
  File "run_mlm.py", line 63, in <module>
    MODEL_CONFIG_CLASSES = list(MODEL_FOR_MASKED_LM_MAPPING.keys())
AttributeError: 'NoneType' object has no attribute 'keys'
```

<!-- If you have code snippets, error messages, stack traces please provide them here as well. Important! Use code tags to correctly format your code. See https://help.github.com/en/github/writing-on-github/creating-and-highlighting-code-blocks#syntax-highlighting Do not use screenshots, as they are hard to read and (more importantly) don't allow others to copy-and-paste your code.-->

## Expected behavior

<!-- A clear and concise description of what you would expect to happen. -->

I want to judge whether two lines of text should be merged into one line. For example:
```
input:  The preparations for the Beijing
        Winter Olympics are progressing smoothly and are
        fully recognized by the International Olympic
        Committee, he said.
output: The preparations for the Beijing Winter Olympics are progressing smoothly and are fully recognized by the International Olympic Committee, he said.
```
I think maybe a masked language model can do this. I insert a `[MERGE]` or `[SPLIT]` special token into the gap between two lines and only mask these special tokens when constructing the masked inputs, like this:
```
source input: The preparations for the Beijing [MERGE] Winter Olympics are progressing smoothly and are [MERGE] fully recognized by the International Olympic [MERGE] Committee, he said.
masked input: The preparations for the Beijing [mask] Winter Olympics are progressing smoothly and are [mask] fully recognized by the International Olympic [mask] Committee, he said.
```
But when I try to execute the original `run_mlm.py` script following the tutorial, I get the above error. What do I need to do to run the training correctly? And do you think the task of merging lines can be solved by language models?
11-11-2021 08:13:24
11-11-2021 08:13:24
I solved the error by executing `pip install -r examples/pytorch/language-modeling/requirements.txt`. Why should I have to install the requirements of the PyTorch example for a TensorFlow example?<|||||>Indeed, the TensorFlow examples should use the TF mappings, cc @Rocketknight1 <|||||>Thank you for this bug report! We've added a PR to fix it; hopefully it will be merged soon.
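As a hedged illustration of the fix discussed above (the merged PR may differ in detail), a TensorFlow example script would build its model-type list from the TF-specific auto-mapping rather than the PyTorch one:

```python
# Minimal sketch, assuming the TF-specific auto-mapping; the actual fix in run_mlm.py may look different.
from transformers import TF_MODEL_FOR_MASKED_LM_MAPPING

MODEL_CONFIG_CLASSES = list(TF_MODEL_FOR_MASKED_LM_MAPPING.keys())
MODEL_TYPES = tuple(conf.model_type for conf in MODEL_CONFIG_CLASSES)
print(MODEL_TYPES)
```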
transformers
14,365
closed
Use `AlbertConverter` for FNet instead of using FNet's own converter
# What does this PR do? **Edited: Finally using `AlbertConverter` directly because the slow tokenizers between `Albert` and `FNet` are basically identical. (FNet doesn't have `bos_token` and `eos_token` while Albert does, but this difference doesn't matter.)** --- This PR adds a normalizer to `FNetConverter`, making the `do_lower_case` and `keep_accents` options of `FNetTokenizerFast` work. This normalizer is copied from that of `Albert`, as `FNetTokenizerFast` is adapted from `Albert`. https://github.com/huggingface/transformers/blob/3ea15d27832d47d44ad046c3a776c7b582b0984b/src/transformers/models/fnet/tokenization_fnet_fast.py#L57-L61 ## Who can review? Anyone in the community is free to review the PR once the tests have passed. Feel free to tag members/contributors who may be interested in your PR. @LysandreJik @SaulLu @sgugger
11-11-2021 07:38:33
11-11-2021 07:38:33
@SaulLu if this looks good to you, feel free to merge
transformers
14,364
closed
Fix mask token handling
# What does this PR do? This PR fixes the problem that the mask token is trying to incorrectly match normalized input texts. This is a related PR of #13594 and #13930. @SaulLu @LysandreJik
11-11-2021 07:13:12
11-11-2021 07:13:12
Hey @qqaatw, are you sure this is an issue with all tokenizers you refactored here? If so, then ideally there would be a test for all of them. The test would fail on current `master`, and would be solved by your PR. If the problem is as widespread as you show it here, then it might even make sense to add it to the common tests. <|||||>Hey @LysandreJik, thank you for your response. I think this change will be covered in this test after we extend the test for both Python and Rust tokenizers. (discussed on this [thread](https://github.com/huggingface/transformers/pull/13594#discussion_r714795220)) https://github.com/huggingface/transformers/blob/1cc453d33c5d0be01eaf3050082c125ce87491aa/tests/test_tokenization_common.py#L651-L656 As a matter of fact, not all tokenizers would fail on current master as it depends on different kinds of special tokens. For example, if the mask token is `[MASK]`, then on current master the tokenizer will incorrectly normalize input texts first, and then try to match the mask token, resulting in: `Today is a [MASK] day` normalized-> `today is a [mask] day` -> cannot match `[MASK]` However, if the mask token is `<mask>`, whether we're on current master or this PR, the test will always pass as there is no difference in the mask token before and after normalization. `Today is a <mask> day` normalized-> `today is a <mask> day` -> can match `<mask>` Therefore, changing all tokenizers with special mask token handling is just to make sure they behave consistently throughout the codebase.<|||||>Thank you very much for the additional information @qqaatw . I agree with you that it is more "intuitive" that the default behavior for a mask token is `Normalized=False`. However, since this doesn't necessarily solve a problem and could potentially introduce changes for our users, maybe it's worth leaving the settings as they were before. What do you think about it?<|||||>@SaulLu I agree with your point. Except for the `Albert` and `FNet` tokenizers, other tokenizers with special mask token handling don't have the `do_lower_case` option; therefore, they would not fail the `test_added_tokens_do_lower_case` test. So we only need to modify `FNet` because `Albert` was already addressed by another PR.
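A hedged sketch of the behavior discussed in this thread (not the exact diff of the PR): declaring the mask token as an `AddedToken` with `normalized=False` keeps lowercasing or accent-stripping from preventing a match.

```python
# Minimal sketch, assuming the tokenizers AddedToken API; the actual PR configures this inside the library.
from tokenizers import AddedToken
from transformers import AlbertTokenizerFast

mask_token = AddedToken("[MASK]", lstrip=True, rstrip=False, normalized=False)
tokenizer = AlbertTokenizerFast.from_pretrained("albert-base-v2", mask_token=mask_token)
print(tokenizer.tokenize("Today is a [MASK] day"))  # the mask token survives lowercasing
```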
transformers
14,363
closed
Comparison Chart (Table) for all the existing BERT models.
# 🚀 Feature request Can we have a comparison chart (table) for all the existing BERT models in one place? For example: - Data the model is trained on? - Hidden layers? - Architecture differences? - How it can be loaded for classification/generation - 2-3 lines of code? - Does it have a pooler layer or not? - Parameters - Model size - General inference time - Training time Each entry in the chart would then link to the model page for a deeper dive into the details. ## Motivation Sometimes it becomes overwhelming to go through each of the models (there are too many of them). If the information is in a comparison chart, it becomes much simpler. It's just an idea. We can elaborate or work on it.
11-11-2021 04:14:54
11-11-2021 04:14:54
This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread. Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored.
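As a hedged starting point for such a chart (not an official feature), much of the architectural information can already be pulled programmatically from the model configs; attribute names are assumptions and vary per architecture, hence the `getattr` fallbacks.

```python
# Minimal sketch: collect a few comparable fields from each checkpoint's config.
from transformers import AutoConfig

for name in ["bert-base-uncased", "bert-base-multilingual-cased", "albert-base-v2"]:
    cfg = AutoConfig.from_pretrained(name)
    print(
        name,
        cfg.model_type,
        getattr(cfg, "num_hidden_layers", "n/a"),
        getattr(cfg, "hidden_size", "n/a"),
        getattr(cfg, "vocab_size", "n/a"),
    )
```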
transformers
14,362
closed
[testing] solve the port conflict
This is a fix for https://github.com/huggingface/transformers/pull/14331, which accidentally collapsed 2 distinct ports into one, and the DeepSpeed tests started to fail. Since `torch.dist` never releases the port, the tests that run the emulated launcher env never release the port (but can re-use it). That in turn prevents other tests that run with `python -m torch.distributed` from succeeding, ending up with an `Address already in use` error. I'm going to merge this to unblock the CIs (ours and DeepSpeed's) since tomorrow is a holiday, but would be happy to do a follow-up PR if something can be improved. @sgugger
11-11-2021 03:03:28
11-11-2021 03:03:28
Thanks for fixing!
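A hedged sketch of the kind of helper such test setups often rely on (not the exact code in the testing utilities): asking the OS for a free port, with the caveat that another process can still grab it between the check and the test binding it, which is why pinning distinct, well-separated ports per test group is usually preferred.

```python
# Minimal sketch; port 0 lets the OS pick a currently free TCP port.
import socket

def get_free_tcp_port() -> int:
    with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as s:
        s.bind(("127.0.0.1", 0))
        return s.getsockname()[1]

print(get_free_tcp_port())
```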
transformers
14,361
closed
Experimenting with adding proper get_config() and from_config() methods
Should fix issues with saving/loading Keras models containing Transformers models as layers, among other problems.
11-10-2021 18:16:47
11-10-2021 18:16:47
It would be ideal to add tests to ensure that the model's serialization works correctly.<|||||>Just a small comment from my side - I think I stumbled across the problem that the returned get_config was not fully JSON serializable (as it contained a reference to a tf DType class) - I circumvented this by adding a custom JSONEncoder for it; but it may be worthwhile testing for json.dumps(config) as well?<|||||>@Zahlii That's a great idea! Would you be willing to write a test for it, or modify the existing test, and tag me for review? <|||||>See https://github.com/huggingface/transformers/pull/14415
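Building on the comment above about the config containing a `tf.DType`, a hedged sketch of such a JSON round-trip check might look like the following; the encoder class and the config fields are assumptions for illustration, not the actual test.

```python
# Minimal sketch: serialize a config dict that holds a tf.DType entry.
import json
import tensorflow as tf

class ConfigJSONEncoder(json.JSONEncoder):
    def default(self, obj):
        if isinstance(obj, tf.DType):
            return obj.name  # e.g. "float32"
        return super().default(obj)

config = {"hidden_size": 768, "dtype": tf.float32}
serialized = json.dumps(config, cls=ConfigJSONEncoder)
assert json.loads(serialized)["dtype"] == "float32"
```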
transformers
14,360
closed
The PyTorch example summarization/run_summarization.py does not work with MBart
## Environment info <!-- You can run the command `transformers-cli env` and copy-and-paste its output below. Don't forget to fill out the missing fields in that output! --> - `transformers` version: git+https://github.com/huggingface/transformers - Platform: - Python version: 3.8 - PyTorch version (GPU?): 1.10.0 - Using GPU in script?: yes - Using GPU in script?: - Using distributed or parallel set-up in script?: ### Who can help @sgugger @NielsRogge @Narsil @patrickvonplaten Models: - MBart: facebook/mbart-large-cc25 - encoder-decoder models (For example, BlenderBot, BART, Marian, Pegasus, T5, ByT5): facebook/mbart-large-cc25 - Pytorch: 1.10.0 If the model isn't in the list, ping @LysandreJik who will redirect you to the correct contributor. HF projects: - datasets: [different repo](https://github.com/huggingface/datasets) Examples: - maintained examples (not research project or legacy): [summarization/run_summarization.py](https://github.com/huggingface/transformers/blob/master/examples/pytorch/summarization/run_summarization.py) ## Information Model I am using (Bert, XLNet ...): MBart The problem arises when using: * [x] the official example scripts: (give details below) * [ ] my own modified scripts: (give details below) The tasks I am working on is: * [x] an official GLUE/SQUaD task: (give the name): summarization * [ ] my own task or dataset: (give details below) ## To reproduce Steps to reproduce the behavior: 1. run the [summarization/run_summarization.py](https://github.com/huggingface/transformers/blob/master/examples/pytorch/summarization/run_summarization.py) with facebook/mbart-large-cc25 as model 2. receive the error: ``` INFO|tokenization_utils_base.py:888] 2021-11-10 17:36:01,873 >> Assigning ['ar_AR', 'cs_CZ', 'de_DE', 'en_XX', 'es_XX', 'et_EE', 'fi_FI', 'fr_XX', 'gu_IN', 'hi_IN', 'it_IT', 'ja_XX', 'kk_KZ', 'ko_KR', 'lt_LT', 'lv_LV', 'my_MM', 'ne_NP', 'nl_XX', 'ro_RO', 'ru_RU', 'si_LK', 'tr_TR', 'vi_VN', 'zh_CN'] to the additional_special_tokens key of the tokenizer [INFO|modeling_utils.py:1342] 2021-11-10 17:36:02,289 >> loading weights file https://huggingface.co/facebook/mbart-large-cc25/resolve/main/pytorch_model.bin from cache at /home/super/.cache/huggingface/transformers/58963b41815ac5618d9910411e018d60a3ae7d4540a66e6cf70adf29a748ca1b.bef0d2e3352d6c4bf1213c6207738ec5ecf458de355c65b2aead6671bc612138 [INFO|modeling_utils.py:1609] 2021-11-10 17:36:07,745 >> All model checkpoint weights were used when initializing MBartForConditionalGeneration. [INFO|modeling_utils.py:1617] 2021-11-10 17:36:07,745 >> All the weights of MBartForConditionalGeneration were initialized from the model checkpoint at facebook/mbart-large-cc25. If your task is similar to the task the model of the checkpoint was trained on, you can already use MBartForConditionalGeneration for predictions without further training. Traceback (most recent call last): File "src/run_summarization_bart.py", line 645, in <module> main() File "src/run_summarization_bart.py", line 371, in main raise ValueError("Make sure that `config.decoder_start_token_id` is correctly defined") ValueError: Make sure that `config.decoder_start_token_id` is correctly defined ``` MBart is in the list of model to use with this script, but this multilingual model seams different. ## Expected behavior Train summarization with pre-trained MBart correctly.
11-10-2021 17:18:27
11-10-2021 17:18:27
Good point! @patil-suraj - let's try to fix this :-)<|||||>@sgugger @patrickvonplaten @patil-suraj any news about the problem?<|||||>I also faced the same problem, @patrickvonplaten, @patil-suraj, any news? :)<|||||>Sorry about the super late response. We will need to support the language codes for mBART as we do in the `run_translation.py` script. Will update the script in a couple of days or feel free to open a PR @nicolalandro @banda-larga if you want, happy to help with it :) <|||||>Hi @patrickvonplaten @patil-suraj Do you have any updates on the mbart compatibility? Appreciate that you are looking into this. Thank you very much. <|||||>Sorry about being late here again, will try to add it this week :) Thanks!<|||||>Hi @patil-suraj . Is there an update on this? Thanks!<|||||>Hey @Nikoschenk, messed up my schedule again! Sorry about the delay. I will take a look at it this week for sure <|||||>The `run_summarization.py` now supports `mBART` thanks to @banda-larga ! Fixed by #15125<|||||>@banda-larga @patil-suraj thank you very much! That‘s great.
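Until the script handled this automatically, a hedged workaround sketch (mirroring what `run_translation.py` does for mBART, not taken from the merged fix) was to set the decoder start token from the target language code before training:

```python
# Minimal sketch, assuming an English-to-English summarization setup; pick the language codes you need.
from transformers import MBartForConditionalGeneration, MBartTokenizer

tokenizer = MBartTokenizer.from_pretrained("facebook/mbart-large-cc25", src_lang="en_XX", tgt_lang="en_XX")
model = MBartForConditionalGeneration.from_pretrained("facebook/mbart-large-cc25")
model.config.decoder_start_token_id = tokenizer.lang_code_to_id["en_XX"]
# mBART-50 variants may additionally want forced_bos_token_id set to the target language code.
```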
transformers
14,359
closed
How FNet handle PAD token?
How can one keep [FNet](https://huggingface.co/transformers/model_doc/fnet.html#transformers.FNetForPreTraining.forward) from processing the `PAD` token, given that its `forward` method has no `attention_mask` argument? ```python from transformers import FNetTokenizer, FNetModel tokenizer = FNetTokenizer.from_pretrained('google/fnet-base') model = FNetModel.from_pretrained('google/fnet-base') features=tokenizer.encode(text="This is new encoder", max_length=16, padding="max_length", truncation=True, return_tensors="pt") # tensor([[ 4, 325, 65, 351, 1703, 242, 14, 5, 3, 3, 3, 3, # 3, 3, 3, 3]]) ``` Or does this model handle the `PAD` token (with `token_id=3`) internally?
11-10-2021 16:04:05
11-10-2021 16:04:05
cc @gchhablani @patrickvonplaten <|||||>Good observation! The model actually processes the PAD token just like any other token, which means that padding an input of length 32 to 128 yields different results from just forwarding the input of length 32. For both pre-training and fine-tuning, each sequence was always padded to 512 tokens, so it's recommended to do the same for inference.<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread. Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored.
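A hedged sketch following the recommendation above: pad every input to the 512-token length used during pre-training, since there is no attention mask to tell the model to ignore padding.

```python
# Minimal sketch; the 512 length follows the recommendation in the thread above.
from transformers import FNetTokenizer, FNetModel

tokenizer = FNetTokenizer.from_pretrained("google/fnet-base")
model = FNetModel.from_pretrained("google/fnet-base")
inputs = tokenizer(
    "This is new encoder", max_length=512, padding="max_length", truncation=True, return_tensors="pt"
)
outputs = model(**inputs)
print(outputs.last_hidden_state.shape)  # (1, 512, hidden_size)
```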
transformers
14,358
closed
Added support for other features for already supported models
# What does this PR do? This PR adds support for almost all the features available for already supported models. Main contributions: - `OnnxSeq2SeqConfigWithPast`: a new class inheriting from `OnnxConfigWithPast`, designed specifically for seq2seq models; this should make it easier for the community to contribute. - Test refactoring and parameterization: now every (model, feature) export pair is tested and is considered a standalone test (compared to before, when everything was considered to be one big test). - A lot of new features (a feature is a task plus the choice of whether to use `past_key_values`) that have been requested by the community (check the list of supported features below). Features now supported: - For BERT-like models: default, sequence-classification, token-classification and question-answering (multiple-choice will be added later). - For causal language models (GPT-2 and GPT-Neo): default, default-with-past, causal-lm, causal-lm-with-past, sequence-classification and token-classification (only for GPT-2). - For Seq2Seq models (T5, BART, mBART): - T5, BART, mBART: default, default-with-past, seq2seq-lm, seq2seq-lm-with-past - BART, mBART: causal-lm, causal-lm-with-past, sequence-classification, question-answering
11-10-2021 15:06:32
11-10-2021 15:06:32
@michaelbenayoun @lewtun @Albertobegue any idea when this PR will be merged?<|||||>> @michaelbenayoun @lewtun @Albertobegue any idea when this PR will be merged? Hey @girishnadiger-gep, this PR was superseded by #14700, which was merged some time ago. Is there a specific issue or missing feature that you're interested in?<|||||>Hi @lewtun, thanks for getting back. I was trying to run BART summarization on ONNX. I'm facing a weird issue: the model has 4 inputs `('input_ids', 'attention_mask', 'decoder_input_ids', 'decoder_attention_mask')`, but I'm unable to figure out how to produce the `'decoder_input_ids'` and `'decoder_attention_mask'` values to feed to the ONNX model. I've converted a BART-large model to ONNX using the 'seq2seq-lm' feature, but thought I was missing something here, so I asked in this thread.<|||||>Ah, for that you can probably adapt the example that I used for the Marian PR in https://github.com/huggingface/transformers/pull/14586 FYI we also have a forum (https://discuss.huggingface.co/) which is better suited for these types of questions - we try to use GitHub issues for bug reports / feature requests
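For the question above about producing `decoder_input_ids`, a hedged sketch of a greedy decoding loop over a `seq2seq-lm` export might look like the following; the ONNX file path, the input/output names, and the choice of start token are assumptions that depend on the actual export.

```python
# Minimal sketch, assuming a BART seq2seq-lm export without past key values; names may differ per export.
import numpy as np
import onnxruntime as ort
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("facebook/bart-large-cnn")
session = ort.InferenceSession("bart-seq2seq-lm.onnx")  # hypothetical export path

enc = tokenizer("Summarize this article ...", return_tensors="np")
decoder_input_ids = np.array([[tokenizer.eos_token_id]], dtype=np.int64)  # BART starts decoding from </s>
for _ in range(32):
    logits = session.run(None, {
        "input_ids": enc["input_ids"],
        "attention_mask": enc["attention_mask"],
        "decoder_input_ids": decoder_input_ids,
        "decoder_attention_mask": np.ones_like(decoder_input_ids),
    })[0]
    next_id = logits[:, -1].argmax(-1).reshape(1, 1)
    decoder_input_ids = np.concatenate([decoder_input_ids, next_id], axis=-1)
    if next_id.item() == tokenizer.eos_token_id:
        break
print(tokenizer.decode(decoder_input_ids[0], skip_special_tokens=True))
```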
transformers
14,357
closed
TFEncoderDecoder not handling labels correctly
## Environment info Google Colab - `transformers` version: master branch. With the latest release (4.12.3) you can't replicate this problem, as it fails with other issue that has already been fixed in master (support for cross-attention in TF GPT2) - Tensorflow version: 2.7.0 ## Who can help Tagging @patrickvonplaten as he has done the latest merges on TFEncoderDecoder. ## Information In TFEncoderDecoder, when the input is passed as dict, the encoder `input_processing` function "unpacks it", also unpacking the labels (if they are there). The labels end up being passed to the encoder call, which shouldn't happen, as the labels are only needed for the decoder, and causes the encoder call to fail. The consequence is that trying to fit a TFEncoderDecoder using `.fit()` with a tf.data.Dataset results in this error. ## To reproduce ``` from transformers import TFEncoderDecoderModel model = TFEncoderDecoderModel.from_encoder_decoder_pretrained("bert-base-uncased", "gpt2") model(model.dummy_inputs) # works fine with_labels = dict(labels=model.dummy_inputs["decoder_input_ids"], **model.dummy_inputs) model(**with_labels) # works fine model(with_labels) # fails with the error bellow ``` ``` /usr/local/lib/python3.7/dist-packages/transformers/models/encoder_decoder/modeling_tf_encoder_decoder.py in call(self, input_ids, attention_mask, decoder_input_ids, decoder_attention_mask, encoder_outputs, past_key_values, inputs_embeds, decoder_inputs_embeds, labels, use_cache, output_attentions, output_hidden_states, return_dict, training, **kwargs) 493 decoder_attention_mask = encoder_inputs.pop("decoder_attention_mask") 494 --> 495 encoder_outputs = self.encoder(**encoder_inputs) 496 497 encoder_hidden_states = encoder_outputs[0] /usr/local/lib/python3.7/dist-packages/transformers/models/bert/modeling_tf_bert.py in call(self, input_ids, attention_mask, token_type_ids, position_ids, head_mask, inputs_embeds, encoder_hidden_states, encoder_attention_mask, past_key_values, use_cache, output_attentions, output_hidden_states, return_dict, training, **kwargs) 1125 return_dict=return_dict, 1126 training=training, -> 1127 kwargs_call=kwargs, 1128 ) 1129 outputs = self.bert( /usr/local/lib/python3.7/dist-packages/transformers/modeling_tf_utils.py in input_processing(func, config, input_ids, **kwargs) 386 if len(kwargs["kwargs_call"]) > 0: 387 raise ValueError( --> 388 f"The following keyword arguments are not supported by this model: {list(kwargs['kwargs_call'].keys())}." 389 ) 390 ValueError: Exception encountered when calling layer "encoder" (type TFBertModel). The following keyword arguments are not supported by this model: ['labels']. Call arguments received: • input_ids=tf.Tensor(shape=(3, 5), dtype=int32) • attention_mask=None • token_type_ids=None • position_ids=None • head_mask=None • inputs_embeds=None • encoder_hidden_states=None • encoder_attention_mask=None • past_key_values=None • use_cache=True • output_attentions=False • output_hidden_states=False • return_dict=True • training=False • kwargs={'labels': 'tf.Tensor(shape=(3, 5), dtype=int32)'} ``` ## Expected behavior This should handle labels correctly, as they are needed in order to fit the model. A workaround that works is adding this bit on the call: ``` encoder_inputs = input_processing(**encoder_processing_inputs) # start new code if "labels" in encoder_inputs: labels = encoder_inputs.pop("labels") # end new code ... ```
11-10-2021 14:52:01
11-10-2021 14:52:01
Hi, I think all the inputs should be unpacked as keyword arguments before inputted into `TFEncoderDecoderModel.__call__`, as stated in the [docs](https://huggingface.co/transformers/model_doc/encoderdecoder.html#tfencoderdecodermodel), Is there any reason that you want to pass a dict directly?<|||||>The inputs are not unpacked in the model train_step(), which is what is used when you train the model using fit(). See TFPretrainedModel.train_step (line 802): ``` with tf.GradientTape() as tape: y_pred = self(x, training=True) ``` <|||||>@Rocketknight1, Do you maybe find some time to look into this? :-)<|||||>@Rocketknight1 , @NielsRogge @ydshieh - I think we can solve this issue with the new design now no?<|||||>I didn't follow this issue until now. I can try to look at this if @Rocketknight1 is OK.<|||||>@ydshieh Sure, yes! I'm sorry I've been slow with it.<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread. Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored.<|||||>activate :-)
transformers
14,356
open
[WIP] Adding support for `flax` for `pipelines`.
# What does this PR do? <!-- Congratulations! You've made it this far! You're not quite done yet though. Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution. Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change. Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost. --> <!-- Remove if not applicable --> Fixes # (issue) ```python from transformers import FlaxRobertaModel, RobertaTokenizerFast, RobertaForMaskedLM, pipeline import numpy as np import datetime import tqdm import random def data(n): for _ in range(n): yield "JAX/Flax is amazing <mask>" def flax(n): print("----") print("Flax") start = datetime.datetime.now() pipe = pipeline( model="distilbert-base-uncased-finetuned-sst-2-english", device=0, framework="flax", model_kwargs={"from_pt": True}, ) print("Loading flax", datetime.datetime.now() - start) for out in tqdm.tqdm(pipe(data(n), batch_size=512)): pass def tf(n): print("----") print("TF") start = datetime.datetime.now() pipe = pipeline(model="distilbert-base-uncased-finetuned-sst-2-english", device=0, framework="tf") print("Loading TF", datetime.datetime.now() - start) for out in tqdm.tqdm(pipe(data(n), batch_size=512)): pass def pt(n): print("----") print("PT") start = datetime.datetime.now() pipe = pipeline(model="distilbert-base-uncased-finetuned-sst-2-english", device=0, framework="pt") print("Loading PT", datetime.datetime.now() - start) for out in tqdm.tqdm(pipe(data(n), batch_size=512)): pass # print(out) if __name__ == "__main__": n = 20000 # pt(n) # flax(n) tf(n) ``` ## Before submitting - [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case). - [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section? - [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link to it if that's the case. - [ ] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/master/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/master/docs#writing-source-documentation). - [ ] Did you write any new necessary tests? ## Who can review? Anyone in the community is free to review the PR once the tests have passed. Feel free to tag members/contributors who may be interested in your PR. <!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @ If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**. Please tag fewer than 3 people. 
Models: - albert, bert, xlm: @LysandreJik - blenderbot, bart, marian, pegasus, encoderdecoder, t5: @patrickvonplaten, @patil-suraj - longformer, reformer, transfoxl, xlnet: @patrickvonplaten - fsmt: @stas00 - funnel: @sgugger - gpt2: @patrickvonplaten, @LysandreJik - rag: @patrickvonplaten, @lhoestq - tensorflow: @LysandreJik Library: - benchmarks: @patrickvonplaten - deepspeed: @stas00 - ray/raytune: @richardliaw, @amogkam - text generation: @patrickvonplaten - tokenizers: @n1t0, @LysandreJik - trainer: @sgugger - pipelines: @LysandreJik Documentation: @sgugger HF projects: - datasets: [different repo](https://github.com/huggingface/datasets) - rust tokenizers: [different repo](https://github.com/huggingface/tokenizers) Examples: - maintained examples (not research project or legacy): @sgugger, @patil-suraj - research_projects/bert-loses-patience: @JetRunner - research_projects/distillation: @VictorSanh -->
11-10-2021 14:10:21
11-10-2021 14:10:21
@Narsil - please ping me once you would like to have a review on this :-)<|||||>@patrickvonplaten I think this should be ready for a first pass review. Let's focus on the big picture and overall architecture for this. The main pain points I am thinking of: - Should we force padding to get compilation cache hit on jax. - Should we enable JIT automatically or not? (If we want `pjit` or others maybe we should delegate that choice to users ?)<|||||>> @patrickvonplaten I think this should be ready for a first pass review. > > Let's focus on the big picture and overall architecture for this. The main pain points I am thinking of: > > * Should we force padding to get compilation cache hit on jax. > * Should we enable JIT automatically or not? (If we want `pjit` or others maybe we should delegate that choice to users ?) Good questions! IMO: For a) -> yes I would definitely do this. The by far most important application will be to provide a simple and fast demo for large LM models, such as https://huggingface.co/Cedille/fr-boris?text=Mon+nom+est+Thomas+et+mon+principal (which currently doesn't work with PT on GPU). I woulcd actually force padding to a large multiple to something like 64 by default (cc @patil-suraj what do you think?) For b) -> yeah good question. I think we should delegate it to the user meaning that there will be both a `jit=True/False` and a `parallel=True/False` input that the user can specify. IMO both should default to False, but for the inference widget we should default jit to True I think (if JAX is used for large models) <|||||>> > @patrickvonplaten I think this should be ready for a first pass review. > > Let's focus on the big picture and overall architecture for this. The main pain points I am thinking of: > > > > * Should we force padding to get compilation cache hit on jax. > > * Should we enable JIT automatically or not? (If we want `pjit` or others maybe we should delegate that choice to users ?) > > Good questions! > > IMO: > > For a) -> yes I would definitely do this. The by far most important application will be to provide a simple and fast demo for large LM models, such as https://huggingface.co/Cedille/fr-boris?text=Mon+nom+est+Thomas+et+mon+principal (which currently doesn't work with PT on GPU). I woulcd actually force padding to a large multiple to something like 64 by default (cc @patil-suraj what do you think?) Makes sense to me this was in the first iteration, and I removed it since during compilation during tests is still painfully slow because no cache hits there (we're mostly loading different models and sending a handful of inference at them). > > For b) -> yeah good question. I think we should delegate it to the user meaning that there will be both a `jit=True/False` and a `parallel=True/False` input that the user can specify. IMO both should default to False, but for the inference widget we should default jit to True I think (if JAX is used for large models) Ok, since this is specific to `jax` and I don't intend to add similar `torchscript` stuff to `pt` for instance, I am guessing I will aim for something that would look like ```python pipe = pipeline(....) pipe._enable_compilation(type="jit", ) # or pipe._enable_compilation(type="pjit", **extra_mapping) # Or add the jitting automatically if it wasn't called by the user with align to = 64 ```<|||||>@Narsil - that looks good to me. 
Let's see what @patil-suraj thinks about the API<|||||>Still valuable<|||||>Super sorry about being so late here, will take a look this week :) <|||||>Also think this would be a nice addition still - gentle ping for @patil-suraj here.
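A hedged illustration of the padding point raised above (padding to a fixed multiple so that jitted call shapes repeat and hit the XLA compilation cache); this is not the PR's implementation, just the tokenizer-level idea.

```python
# Minimal sketch: bucketing sequence lengths to multiples of 64 limits how many shapes JAX has to compile.
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("distilbert-base-uncased-finetuned-sst-2-english")
enc = tokenizer(
    ["JAX/Flax is amazing", "a much longer input sentence that still lands in the same padding bucket"],
    padding=True,
    pad_to_multiple_of=64,
    return_tensors="np",
)
print(enc["input_ids"].shape)  # the second dimension is a multiple of 64
```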
transformers
14,355
closed
Improve semantic segmentation models
# What does this PR do? * add SegFormer documentation (including a figure) * fix padding (fixes #14332) for SegFormer * add an attribute to the configuration of SegFormer and BEiT called `semantic_ignore_index`, which defaults to 255. The loss functions of semantic segmentation models typically use 255 instead of -100. The reason for this is that some datasets include a 0 in the annotated segmentation maps to indicate "background"; however, it can be that background is not included in any of the labels of the dataset, e.g. ADE20k has 150 labels, but "background" is not one of them. Therefore, one reduces the labels of all segmentation maps by 1, and replaces the 0 by 255 as shown [here](https://github.com/open-mmlab/mmsegmentation/blob/441be4e435127868a0c72a4e0e6b87662a4c415b/mmseg/datasets/pipelines/loading.py#L140-L145). It's only after that that images get resized using PIL. However, if we replace values by -100, PIL can't read these images and will throw an error. * add option to pass `segmentation_maps` to `BeitFeatureExtractor`, and add corresponding tests. To do: `SegformerFeatureExtractor` currently includes the `align`, `do_random_crop` and `do_pad` arguments at initialization, however I wonder whether it's maybe better to remove those, and only include the bare minimum in the feature extractors (similar to `ViTFeatureExtractor`) to get started (i.e. resizing, center cropping, normalizing). Things like random cropping and padding are maybe already a bit too much and make the feature extractors more complex. It's also not easy to determine good default values for this feature extractor; should it randomly crop + pad by default, or not? - [x] remove random cropping and padding from `SegformerFeatureExtractor`, if one agrees on this. - [x] make tests of SegformerFeatureExtractor and BeitFeatureExtractor consistent. - [ ] update the `preprocessor_config.json` of the semantic segmentation models on the hub.
11-10-2021 12:20:52
11-10-2021 12:20:52
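A hedged sketch of the label convention described in this PR (shift class ids down by one and send the former background value 0 to the 255 ignore index), adapted from the mmsegmentation logic linked above rather than copied from this repository.

```python
# Minimal sketch; assumes an ADE20k-style map where 0 means "background" and 255 is the ignore index.
import numpy as np

segmentation_map = np.array([[0, 1, 150], [2, 0, 37]], dtype=np.uint8)
labels = segmentation_map.astype(np.int32)
labels = labels - 1            # class ids now start at 0
labels[labels == -1] = 255     # former background value 0 becomes the ignore index
print(labels)
```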
transformers
14,354
closed
Add WavLM
# What does this PR do? <!-- Congratulations! You've made it this far! You're not quite done yet though. Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution. Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change. Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost. --> <!-- Remove if not applicable --> Paper: https://arxiv.org/pdf/2110.13900.pdf Checkpoints: https://huggingface.co/models?other=wavlm ## Before submitting - [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case). - [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section? - [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link to it if that's the case. - [ ] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/master/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/master/docs#writing-source-documentation). - [ ] Did you write any new necessary tests? ## Who can review? Anyone in the community is free to review the PR once the tests have passed. Feel free to tag members/contributors who may be interested in your PR. <!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @ If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**. Please tag fewer than 3 people. Models: - albert, bert, xlm: @LysandreJik - blenderbot, bart, marian, pegasus, encoderdecoder, t5: @patrickvonplaten, @patil-suraj - longformer, reformer, transfoxl, xlnet: @patrickvonplaten - fsmt: @stas00 - funnel: @sgugger - gpt2: @patrickvonplaten, @LysandreJik - rag: @patrickvonplaten, @lhoestq - tensorflow: @LysandreJik Library: - benchmarks: @patrickvonplaten - deepspeed: @stas00 - ray/raytune: @richardliaw, @amogkam - text generation: @patrickvonplaten - tokenizers: @n1t0, @LysandreJik - trainer: @sgugger - pipelines: @LysandreJik Documentation: @sgugger HF projects: - datasets: [different repo](https://github.com/huggingface/datasets) - rust tokenizers: [different repo](https://github.com/huggingface/tokenizers) Examples: - maintained examples (not research project or legacy): @sgugger, @patil-suraj - research_projects/bert-loses-patience: @JetRunner - research_projects/distillation: @VictorSanh -->
11-10-2021 12:02:48
11-10-2021 12:02:48
Hey, any idea when WavLM will be integrated into the 🤗 platform? <|||||>Can't reproduce the doc failures locally<|||||>Merging now. @anton-l - feel free to make some final changes when you add the WavLM heads
transformers
14,353
closed
Wav2Vec2 meets phonemes
# What does this PR do? <!-- Congratulations! You've made it this far! You're not quite done yet though. Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution. Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change. Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost. --> <!-- Remove if not applicable --> This PR adds Wav2Vec2 with Phoneme support and adds the checkpoints of https://arxiv.org/abs/2109.11680 (See last table: https://github.com/pytorch/fairseq/tree/main/examples/wav2vec#pre-trained-models) The speech recognition script is adapted and tested to be sure phoneme recognition works as expected. See: https://huggingface.co/patrickvonplaten/wav2vec2-xls-r-phoneme-300m-tr and https://huggingface.co/patrickvonplaten/wav2vec2-xls-r-phoneme-300m-sv ## Before submitting - [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case). - [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section? - [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link to it if that's the case. - [ ] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/master/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/master/docs#writing-source-documentation). - [ ] Did you write any new necessary tests? ## Who can review? Anyone in the community is free to review the PR once the tests have passed. Feel free to tag members/contributors who may be interested in your PR. <!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @ If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**. Please tag fewer than 3 people. Models: - albert, bert, xlm: @LysandreJik - blenderbot, bart, marian, pegasus, encoderdecoder, t5: @patrickvonplaten, @patil-suraj - longformer, reformer, transfoxl, xlnet: @patrickvonplaten - fsmt: @stas00 - funnel: @sgugger - gpt2: @patrickvonplaten, @LysandreJik - rag: @patrickvonplaten, @lhoestq - tensorflow: @LysandreJik Library: - benchmarks: @patrickvonplaten - deepspeed: @stas00 - ray/raytune: @richardliaw, @amogkam - text generation: @patrickvonplaten - tokenizers: @n1t0, @LysandreJik - trainer: @sgugger - pipelines: @LysandreJik Documentation: @sgugger HF projects: - datasets: [different repo](https://github.com/huggingface/datasets) - rust tokenizers: [different repo](https://github.com/huggingface/tokenizers) Examples: - maintained examples (not research project or legacy): @sgugger, @patil-suraj - research_projects/bert-loses-patience: @JetRunner - research_projects/distillation: @VictorSanh -->
11-10-2021 12:01:59
11-10-2021 12:01:59
Failing tests are unrelated.<|||||>Model can be advertised next week with author<|||||>> Thanks for adding this. I'm not following why there is a new entry for XLS-R which doesn't get its model folder or its model type in the config mapping. Yeah I'm adding XLS-R to give it a bit more visibility (similar to how we have [DialoGPT](https://huggingface.co/docs/transformers/model_doc/dialogpt) and XLSR-Wav2Vec2 in the docs even though they have 0 added code). Think a lot of people do decide on which checkpoints to use depending on the docs so I think it's good to add more information to the docs. Happy to revert it though if you guys prefer<|||||>I'm getting an error when running the tests locally: ``` ɛ l o h aʊ a ʁ j u != ɛ l o h a w a ʁ j u Expected :ɛ l o h a w a ʁ j u Actual :ɛ l o h aʊ a ʁ j u <Click to see difference> Traceback (most recent call last): File "/home/lysandre/transformers/tests/test_tokenization_wav2vec2_phoneme.py", line 214, in test_change_phonemizer_lang self.assertEqual(text_fr, "ɛ l o h aʊ a ʁ j u") AssertionError: 'ɛ l o h a w a ʁ j u' != 'ɛ l o h aʊ a ʁ j u' - ɛ l o h a w a ʁ j u ? ^^ + ɛ l o h aʊ a ʁ j u ? ^ ```<|||||>> I'm getting an error when running the tests locally: > > ``` > ɛ l o h aʊ a ʁ j u != ɛ l o h a w a ʁ j u > > Expected :ɛ l o h a w a ʁ j u > Actual :ɛ l o h aʊ a ʁ j u > <Click to see difference> > > Traceback (most recent call last): > File "/home/lysandre/transformers/tests/test_tokenization_wav2vec2_phoneme.py", line 214, in test_change_phonemizer_lang > self.assertEqual(text_fr, "ɛ l o h aʊ a ʁ j u") > AssertionError: 'ɛ l o h a w a ʁ j u' != 'ɛ l o h aʊ a ʁ j u' > - ɛ l o h a w a ʁ j u > ? ^^ > + ɛ l o h aʊ a ʁ j u > ? ^ > ``` Hmm interesting - what does: ```bash python -c "import phonemizer; print(phonemizer.__version__)" ``` give you? and is it on Windows ? Or Ubuntu?<|||||>On an arch-based distro, returns `3.0`! And `espeak --version` returns ``` eSpeak text-to-speech: 1.48.03 04.Mar.14 Data at: /usr/share/espeak-data ```<|||||>> espeak --version Hmm, interesting. I think the library to be installed should be `espeak-ng` however as stated here: https://github.com/bootphon/phonemizer#dependencies Could you try one last thing: ```bash sudo apt-get install espeak-ng ``` which should give version: ``` eSpeak NG text-to-speech: 1.50 ``` and try the test again.<|||||>Thank you for showing me the way, installing with `espeak-ng` passes all test!
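For readers hitting the same backend mismatch discussed above, a hedged sketch of how one might check the phonemizer setup locally (output strings will vary with the installed espeak/espeak-ng version):

```python
# Minimal sketch; assumes phonemizer is installed together with the system espeak-ng backend.
import phonemizer

print(phonemizer.__version__)
print(phonemizer.phonemize("hello how are you", language="en-us", backend="espeak"))
```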
transformers
14,352
closed
Adding support for raw python `generator` in addition to `Dataset` for pipelines
The main goal is to ease the creation of streaming data for the pipe. `Dataset` is more involved and PyTorch specific. This PR provides a way to use a plain Python iterator too. This enabled #14250 but can be proposed as a standalone PR. ```python from transformers import pipeline def read_data(filename): with open(filename, 'r') as f: for line in f: yield line pipe = pipeline("text-classification") for classified in pipe(read_data("large_file.txt")): print("Success ! ", classified) ``` The main caveat of this is the interaction with `DataLoader` when `num_workers>1`. When you have multiple workers, each receives a copy of the generator (like `IterableDataset`). That means a naive iterator will fail, since all workers iterate over all items of the generator. There are ways to do clever "skipping", but it could still be costly because all workers still have to pass through all items of the generator (they just ignore the items they don't handle); depending on the case that might be bad. Using `num_workers=1` is the simplest fix, and if the cost of loading your data is small enough it should be good enough. In the above example, trying to do smart tricks to skip some lines is unlikely to be a net positive, for instance. If there are better ways to do "jumps" on some data, then using `Dataset` is more advised (since then different workers can just jump by themselves). <!-- Congratulations! You've made it this far! You're not quite done yet though. Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution. Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change. Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost. --> <!-- Remove if not applicable --> Fixes # (issue) ## Before submitting - [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case). - [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section? - [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link to it if that's the case. - [ ] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/master/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/master/docs#writing-source-documentation). - [ ] Did you write any new necessary tests? ## Who can review? Anyone in the community is free to review the PR once the tests have passed. Feel free to tag members/contributors who may be interested in your PR. <!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @ If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**. Please tag fewer than 3 people. 
Models: - albert, bert, xlm: @LysandreJik - blenderbot, bart, marian, pegasus, encoderdecoder, t5: @patrickvonplaten, @patil-suraj - longformer, reformer, transfoxl, xlnet: @patrickvonplaten - fsmt: @stas00 - funnel: @sgugger - gpt2: @patrickvonplaten, @LysandreJik - rag: @patrickvonplaten, @lhoestq - tensorflow: @LysandreJik Library: - benchmarks: @patrickvonplaten - deepspeed: @stas00 - ray/raytune: @richardliaw, @amogkam - text generation: @patrickvonplaten - tokenizers: @n1t0, @LysandreJik - trainer: @sgugger - pipelines: @LysandreJik Documentation: @sgugger HF projects: - datasets: [different repo](https://github.com/huggingface/datasets) - rust tokenizers: [different repo](https://github.com/huggingface/tokenizers) Examples: - maintained examples (not research project or legacy): @sgugger, @patil-suraj - research_projects/bert-loses-patience: @JetRunner - research_projects/distillation: @VictorSanh -->
11-10-2021 09:50:22
11-10-2021 09:50:22
transformers
14,351
closed
Quantize t5 v1_1 generates nonsense
## Environment info - `transformers` version: 4.2.0 - Platform: Linux - Python version: 2.10.0 Models: - T5 v1_1 @patrickvonplaten, @patil-suraj Hi, when quantizing DenseReluDense in T5 v1_1, it generates nonsense I guess it's related to #10830 Just wanted to check if there is an easy solution. If anyone is interested, I wrote this code to quantize all layers except for the DenseReluDense, it decreases the size to about 30 GB instead of 44 GB for the xxl version of t5-v1_1 ``` import functools def rsetattr(obj, attr, val): pre, _, post = attr.rpartition('.') return setattr(rgetattr(obj, pre) if pre else obj, post, val) def rgetattr(obj, attr, *args): def _getattr(obj, attr): return getattr(obj, attr, *args) return functools.reduce(_getattr, [obj] + attr.split('.')) toFix=[] for name, param in model.named_parameters(): if not "DenseReluDense" in name and not "layer_norm" in name: name=name.replace(".weight","") if len(name.split(".")) >2: name=name[:-2] toFix.append(name) toFix=set(toFix) for item in toFix: try: cat=torch.quantization.quantize_dynamic( rgetattr(model,item), {torch.nn.Linear}, dtype=torch.qint8) rsetattr(model,item,cat) except: pass ```
11-10-2021 08:20:21
11-10-2021 08:20:21
Hey @ViktorThink, Could you try to use a newer version of `transformers` and upgrade to Python 3? <|||||>I'm tried using transformers=4.12.3, but I get this model when using the model even without quantization: File "train-vt5-data-5-pretrained.py", line 143, in trainOnText loss = model(b_input_ids, attention_mask=b_input_mask, labels=lm_labels).loss File "/usr/local/lib64/python3.7/site-packages/torch/nn/modules/module.py", line 1102, in _call_impl return forward_call(*input, **kwargs) File "/usr/local/lib/python3.7/site-packages/transformers/models/t5/modeling_t5.py", line 1578, in forward return_dict=return_dict, File "/usr/local/lib64/python3.7/site-packages/torch/nn/modules/module.py", line 1102, in _call_impl return forward_call(*input, **kwargs) File "/usr/local/lib/python3.7/site-packages/transformers/models/t5/modeling_t5.py", line 977, in forward if self.gradient_checkpointing and self.training: File "/usr/local/lib64/python3.7/site-packages/torch/nn/modules/module.py", line 1178, in __getattr__ type(self).__name__, name)) AttributeError: 'T5Stack' object has no attribute 'gradient_checkpointing' In my tests 'google/t5-xxl-lm-adapt' doesn't work with transformers 4.12.3, maybe none of the v1_1 does? I tried 4.9.2, and there the quantized version prints nonsense, and I had python 3.6 or something like that.<|||||>Quantization is still very experimental in general and is not 100% supported by `transformers`. We will have better support for it in the `optimum` library: https://github.com/huggingface/optimum . Until then could you please try to use `transformers` on `master` so that we can debug together? :-)<|||||>The optimum library seems really interesting! I saw it already supports quantization with Intel Neural Compressor, so I could try it out. I tested google/t5-v1_1-small in a colab, using transformers on master, and the model runs, but still it prints nonsense if the DenseReluDense layers are quantized. https://colab.research.google.com/drive/1lJofhzTJd4Suym4OMgvKPlaKmk1VkgVR?usp=sharing <|||||>Gently pinging @michaelbenayoun @echarlaix here<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread. Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored.
transformers
14,350
closed
Unable to load/use TFWav2Vec2ForCTC TFLite-model
## Environment info - `transformers` version: - Platform: Ubuntu 20.04 - Python version: 3.6 - PyTorch version (GPU?): 1.9.1 - Tensorflow version (GPU?): 2.6.0 - Using GPU in script?: No - Using distributed or parallel set-up in script?: No ### Who can help @patrickvonplaten @anton-l ### Description > **General question first:** > > Is there actually _any_ speech-to-text model on 🤗 that is TFLite-ready which I could use right away? I am trying to save a TFWav2Vec2 model as a `.tflite` but I am unable to load and run it after saving. It appears that I am running into this problem once I use `tf.lite.OpsSet.TFLITE_BUILTINS`: ```python converter.target_spec.supported_ops = [tf.lite.OpsSet.TFLITE_BUILTINS, tf.lite.OpsSet.SELECT_TF_OPS] ``` See the following stacktrace: ```none Traceback (most recent call last): File "/Users/sfalk/workspaces/git/stt/src/ml/speech/bin/example.py", line 90, in <module> main() File "/Users/sfalk/workspaces/git/stt/src/ml/speech/bin/example.py", line 84, in main run_model(model_fp, model_id, audio_fp) File "/Users/sfalk/workspaces/git/stt/src/ml/speech/bin/example.py", line 64, in run_model interpreter.invoke() File "/Users/sfalk/miniconda3/envs/stt/lib/python3.9/site-packages/tensorflow/lite/python/interpreter.py", line 875, in invoke self._interpreter.Invoke() RuntimeError: tensorflow/lite/kernels/conv.cc:349 input->dims->data[3] != filter->dims->data[3] (768 != 48)Node number 90 (CONV_2D) failed to prepare. ``` ### Dependencies ```toml [tool.poetry.dependencies] python = "~3.9" pydantic = "~1.8" torchaudio = "~0.9.1" sentencepiece = "~0.1.96" librosa = "~0.8.1" # Tensorflow tensorflow = "~2.6.0" # Huggingface datasets = "~1.15.1" transformers= "~4.12.3" ``` ### Reproduce ```python import os import torch import librosa as lb import tensorflow as tf from transformers import TFWav2Vec2ForCTC, Wav2Vec2Processor from tensorflow.keras import layers from tensorflow import keras import numpy as np def save_model(model_fp, model_id): model = TFWav2Vec2ForCTC.from_pretrained(model_id) inputs = layers.Input(shape=(None,), dtype=tf.float32) x = model(inputs) model = keras.Model(inputs=inputs, outputs=x) converter = tf.lite.TFLiteConverter.from_keras_model(model) converter.target_spec.supported_ops = [tf.lite.OpsSet.TFLITE_BUILTINS, tf.lite.OpsSet.SELECT_TF_OPS] converter.target_spec.supported_types = [tf.float16] # converter.optimizations = [tf.lite.Optimize.DEFAULT] converter.experimental_new_converter = True tflite_model = converter.convert() print(f"Writing model {model_fp}") with tf.io.gfile.GFile(model_fp, "wb") as f: f.write(tflite_model) def run_model(model_fp, model_id, audio_fp: str = None, sampling_rate = 16000): # Initialize the tokenizer tokenizer = Wav2Vec2Processor.from_pretrained(model_id, sampling_rate=sampling_rate) if audio_fp is not None: waveform, rate = lb.load(audio_fp, sr=sampling_rate) else: waveform = np.random.rand(sampling_rate) input_values = tokenizer(waveform, return_tensors="tf").input_values interpreter = tf.lite.Interpreter(model_path=model_fp) # Get input and output tensors. input_details = interpreter.get_input_details() output_details = interpreter.get_output_details() # Test model on random input data. 
b, _ = input_details[0]["shape"] input_shape = b, len(waveform) interpreter.resize_tensor_input(0, input_shape, strict=True) interpreter.allocate_tensors() input_data = input_values # np.array(np.random.random_sample(input_shape), dtype=np.float32) interpreter.set_tensor(input_details[0]["index"], input_data) interpreter.invoke() # The function `get_tensor()` returns a copy of the tensor data. # Use `tensor()` in order to get a pointer to the tensor. output_data = interpreter.get_tensor(output_details[0]["index"]) predicted_ids = torch.argmax(torch.Tensor(output_data), dim=-1) transcription = tokenizer.batch_decode(predicted_ids) print(transcription) def main(): model_fp = "/tmp/fb-wav2vec2.tflite" model_id = "facebook/wav2vec2-base-960h" audio_fp = None save_model(model_fp, model_id) run_model(model_fp, model_id, audio_fp) print("All done.") if __name__ == "__main__": main() ```
11-10-2021 08:20:08
11-10-2021 08:20:08
Hi @stefan-falk! Generally we don't support or test models for TFLite compatibility due to its OpsSet limitations, but we're open to contributions and suggestions. Also pinging @Rocketknight1 @merveenoyan in case we have some recent developments in the TFLite department :slightly_smiling_face: <|||||>I'm afraid not - TFLite is very limited in my experience! We're not actively working on a way to port our models to it.<|||||>Alright, thanks for the quick response guys! I'll close this issue then. <|||||>@stefan-falk probably the following [page](https://pythonrepo.com/repo/vasudevgupta7-gsoc-wav2vec2-python-natural-language-processing) could be interesting for you.
transformers
14,349
closed
BEIT masked lm
How can one generate labels for BEIT masked LM?
11-10-2021 07:21:03
11-10-2021 07:21:03
Hi, I've created a Colab notebook to illustrate `BeitForMaskedImageModeling` [here](https://github.com/NielsRogge/Transformers-Tutorials/blob/master/BEiT/Understanding_BeitForMaskedImageModeling.ipynb).<|||||>See also this issue: https://github.com/microsoft/unilm/issues/401<|||||>Thanks!<|||||>Hi @NielsRogge, did you have any progress in reconstructing the masked visual tokens with BEiT so that they correspond to the ground truth labels? I am having the same issue. Thanks<|||||>Hi, I'm pretty sure my implementation is correct. You can visualize the predictions of BeitForMaskedImageModeling using DALL-E's decoder.
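A hedged sketch of the input side of that notebook: the visual-token labels come from an external DALL-E-style image tokenizer, which is not part of transformers, so only the boolean patch mask is shown here; the masking ratio and checkpoint are assumptions for illustration.

```python
# Minimal sketch: BeitForMaskedImageModeling expects pixel_values plus a boolean mask over patches.
import torch
from transformers import BeitForMaskedImageModeling

model = BeitForMaskedImageModeling.from_pretrained("microsoft/beit-base-patch16-224-pt22k")

pixel_values = torch.randn(1, 3, 224, 224)
num_patches = (224 // 16) ** 2  # 196 patches for a 224x224 image with 16x16 patches
bool_masked_pos = torch.zeros(1, num_patches, dtype=torch.bool)
bool_masked_pos[:, :75] = True  # mask roughly 40% of the patches

outputs = model(pixel_values=pixel_values, bool_masked_pos=bool_masked_pos)
print(outputs.logits.shape)  # logits over the visual-token vocabulary
```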
transformers
14,348
closed
enhance rewrite state_dict missing _metadata
# What does this PR do? This enhances PR https://github.com/huggingface/transformers/pull/14276 (which fixed issue #14268) so that a key listed in the ignore keys but missing from the state_dict no longer causes a failure. <!-- Congratulations! You've made it this far! You're not quite done yet though. Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution. Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change. Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost. --> <!-- Remove if not applicable --> ## Before submitting - [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case). - [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section? - [x] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link to it if that's the case. - [ ] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/master/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/master/docs#writing-source-documentation). - [ ] Did you write any new necessary tests? ## Who can review? Anyone in the community is free to review the PR once the tests have passed. Feel free to tag members/contributors who may be interested in your PR. <!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @ If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**. Please tag fewer than 3 people. Models: - albert, bert, xlm: @LysandreJik - blenderbot, bart, marian, pegasus, encoderdecoder, t5: @patrickvonplaten, @patil-suraj - longformer, reformer, transfoxl, xlnet: @patrickvonplaten - fsmt: @stas00 - funnel: @sgugger - gpt2: @patrickvonplaten, @LysandreJik - rag: @patrickvonplaten, @lhoestq - tensorflow: @LysandreJik Library: - benchmarks: @patrickvonplaten - deepspeed: @stas00 - ray/raytune: @richardliaw, @amogkam - text generation: @patrickvonplaten - tokenizers: @n1t0, @LysandreJik - trainer: @sgugger - pipelines: @LysandreJik Documentation: @sgugger HF projects: - datasets: [different repo](https://github.com/huggingface/datasets) - rust tokenizers: [different repo](https://github.com/huggingface/tokenizers) Examples: - maintained examples (not research project or legacy): @sgugger, @patil-suraj - research_projects/bert-loses-patience: @JetRunner - research_projects/distillation: @VictorSanh --> @sgugger
11-10-2021 07:08:38
11-10-2021 07:08:38
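A hedged sketch of the pattern this PR guards against (the helper name and signature are made up for illustration, not taken from the diff): keep the private `_metadata` attribute when copying a state_dict, and only drop ignore keys that actually exist.

```python
# Minimal sketch; mirrors the idea of preserving _metadata while filtering keys.
from collections import OrderedDict

def strip_ignored_keys(state_dict, keys_to_ignore):
    metadata = getattr(state_dict, "_metadata", None)
    filtered = OrderedDict((k, v) for k, v in state_dict.items() if k not in set(keys_to_ignore))
    if metadata is not None:
        filtered._metadata = metadata  # OrderedDict instances accept attribute assignment
    return filtered
```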
transformers
14,347
closed
How to configure the GPU for run_text_classification?
I'm using the example script for text classification. It works well on my laptop without a GPU, but it does not use the GPU in a Docker environment. Would you please help with it? nvidia-smi returns: No running processes found. Example script: https://github.com/huggingface/transformers/tree/master/examples/tensorflow/text-classification Versions: tf 2.4.1, cudnn 10.0, transformers 4.12.3
11-10-2021 06:39:18
11-10-2021 06:39:18
Hi, As stated in the [README](https://github.com/huggingface/transformers/tree/master/examples/tensorflow/text-classification#multi-gpu-and-tpu-usage): > By default, the script uses a MirroredStrategy and will use multiple GPUs effectively if they are available. TPUs can also be used by passing the name of the TPU resource with the --tpu argument. So maybe you can first check if TensorFlow [recognizes the GPU](https://stackoverflow.com/questions/38009682/how-to-tell-if-tensorflow-is-using-gpu-acceleration-from-inside-python-shell).<|||||>Thanks, it seems TF 2.4 needs CUDA 11; I will check that first.
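For reference, the check suggested above comes down to something like this generic TensorFlow snippet (not tied to the example script):

```python
import tensorflow as tf

# If this prints an empty list, TensorFlow does not see the GPU inside the
# container (for example because of a CUDA version mismatch or a missing
# nvidia runtime), and the MirroredStrategy will silently fall back to CPU.
print(tf.config.list_physical_devices("GPU"))

# Also confirms whether the installed TensorFlow build was compiled with CUDA.
print(tf.test.is_built_with_cuda())
```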
transformers
14,346
closed
'tuple' object doesn't have attribute `as_list`
## Environment info <!-- You can run the command `transformers-cli env` and copy-and-paste its output below. Don't forget to fill out the missing fields in that output! --> ``` - `transformers` version: 4.12.3 - Platform: Linux-5.10.0-9-amd64-x86_64-with-glibc2.2.5 - Python version: 3.8.12 - PyTorch version (GPU?): not installed (NA) - Tensorflow version (GPU?): 2.6.2 (False) - Flax version (CPU?/GPU?/TPU?): not installed (NA) - Jax version: not installed - JaxLib version: not installed - Using GPU in script?: no - Using distributed or parallel set-up in script?: no ``` ### Who can help <!-- Your issue will be replied to more quickly if you can figure out the right person to tag with @ If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**. Please tag fewer than 3 people. Models: - ALBERT, BERT, XLM, DeBERTa, DeBERTa-v2, ELECTRA, MobileBert, SqueezeBert: @LysandreJik - encoder-decoder models (For example, BlenderBot, BART, Marian, Pegasus, T5, ByT5): @patrickvonplaten, @patil-suraj - Longformer, Reformer, TransfoXL, XLNet, FNet: @patrickvonplaten - FSMT: @stas00 - Funnel: @sgugger - GPT-2, GPT: @patrickvonplaten, @LysandreJik - RAG, DPR: @patrickvonplaten, @lhoestq - TensorFlow: @Rocketknight1 - JAX/Flax: @patil-suraj @patrickvonplaten - TAPAS, LayoutLM, LayoutLMv2, LUKE, ViT, BEiT, DEiT, DETR, CANINE: @NielsRogge - GPT-Neo, GPT-J, CLIP: @patil-suraj - Wav2Vec2, HuBERT, SpeechEncoderDecoder: @patrickvonplaten, @anton-l If the model isn't in the list, ping @LysandreJik who will redirect you to the correct contributor. Library: - Benchmarks: @patrickvonplaten - Deepspeed: @stas00 - Ray/raytune: @richardliaw, @amogkam - Text generation: @patrickvonplaten - Tokenizers: @LysandreJik - Trainer: @sgugger - Pipelines: @Narsil - Speech: @patrickvonplaten, @anton-l - Vision: @NielsRogge, @sgugger Documentation: @sgugger Model hub: - for issues with a model, report at https://discuss.huggingface.co/ and tag the model's creator. HF projects: - datasets: [different repo](https://github.com/huggingface/datasets) - rust tokenizers: [different repo](https://github.com/huggingface/tokenizers) Examples: - maintained examples (not research project or legacy): @sgugger, @patil-suraj For research projetcs, please ping the contributor directly. For example, on the following projects: - research_projects/bert-loses-patience: @JetRunner - research_projects/distillation: @VictorSanh --> ## Information I'm sorry for not being able to give more information about this, since I don't directly works with the model. I believe the model is [bert-base-multilingual-cased](https://huggingface.co/bert-base-multilingual-cased). I'm trying to create a chat bot with Rasa. ## To reproduce Steps to reproduce the behavior: Run the chat bot, either use `rasa shell` or `rasa run --enable-api` and `curl` to chat with the bot. Error log: ``` 2021-11-09 08:08:34 ERROR rasa.core.channels.rest - An exception occured while handling user message 'hello'. 
Traceback (most recent call last): File "/home/grooo/.local/lib/python3.8/site-packages/rasa/core/channels/rest.py", line 120, in receive await on_new_message( File "/home/grooo/.local/lib/python3.8/site-packages/rasa/core/channels/channel.py", line 89, in handler await app.agent.handle_message(message) File "/home/grooo/.local/lib/python3.8/site-packages/rasa/core/agent.py", line 577, in handle_message return await processor.handle_message(message) File "/home/grooo/.local/lib/python3.8/site-packages/rasa/core/processor.py", line 96, in handle_message tracker = await self.log_message(message, should_save_tracker=False) File "/home/grooo/.local/lib/python3.8/site-packages/rasa/core/processor.py", line 327, in log_message await self._handle_message_with_tracker(message, tracker) File "/home/grooo/.local/lib/python3.8/site-packages/rasa/core/processor.py", line 594, in _handle_message_with_tracker parse_data = await self.parse_message(message, tracker) File "/home/grooo/.local/lib/python3.8/site-packages/rasa/core/processor.py", line 572, in parse_message parse_data = await self.interpreter.parse( File "/home/grooo/.local/lib/python3.8/site-packages/rasa/core/interpreter.py", line 145, in parse result = self.interpreter.parse(text) File "/home/grooo/.local/lib/python3.8/site-packages/rasa/nlu/model.py", line 470, in parse component.process(message, **self.context) File "/home/grooo/.local/lib/python3.8/site-packages/rasa/nlu/utils/hugging_face/hf_transformers.py", line 749, in process self._get_docs_for_batch( File "/home/grooo/.local/lib/python3.8/site-packages/rasa/nlu/utils/hugging_face/hf_transformers.py", line 678, in _get_docs_for_batch ) = self._get_model_features_for_batch( File "/home/grooo/.local/lib/python3.8/site-packages/rasa/nlu/utils/hugging_face/hf_transformers.py", line 609, in _get_model_features_for_batch sequence_hidden_states = self._compute_batch_sequence_features( File "/home/grooo/.local/lib/python3.8/site-packages/rasa/nlu/utils/hugging_face/hf_transformers.py", line 460, in _compute_batch_sequence_features model_outputs = self.model( File "/home/grooo/.local/lib/python3.8/site-packages/keras/engine/base_layer.py", line 1037, in __call__ outputs = call_fn(inputs, *args, **kwargs) File "/home/grooo/.local/lib/python3.8/site-packages/transformers/models/bert/modeling_tf_bert.py", line 1129, in call outputs = self.bert( File "/home/grooo/.local/lib/python3.8/site-packages/keras/engine/base_layer.py", line 1037, in __call__ outputs = call_fn(inputs, *args, **kwargs) File "/home/grooo/.local/lib/python3.8/site-packages/transformers/models/bert/modeling_tf_bert.py", line 803, in call attention_mask_shape = shape_list(inputs["attention_mask"]) File "/home/grooo/.local/lib/python3.8/site-packages/transformers/modeling_tf_utils.py", line 1831, in shape_list static = tensor.shape.as_list() AttributeError: 'tuple' object has no attribute 'as_list' ``` Line 1831 of `transformers/modeling_tf_utils.py`: ```python static = tensor.shape.as_list() ``` After printing out stuff in `transformers/modeling_tf_utils.py`, I found out that sometime `tensor` is a numpy array, therefore `tensor.shape` is a tuple and indeed doesn't have `as_list`. Proposed fix: ```python static = tensor.shape if type(static) == tuple: static = list(static) else: static = static.as_list() ``` <!-- If you have code snippets, error messages, stack traces please provide them here as well. Important! Use code tags to correctly format your code. 
See https://help.github.com/en/github/writing-on-github/creating-and-highlighting-code-blocks#syntax-highlighting Do not use screenshots, as they are hard to read and (more importantly) don't allow others to copy-and-paste your code.--> ## Expected behavior No error. <!-- A clear and concise description of what you would expect to happen. -->
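For reference, here is a minimal, self-contained sketch of a `shape_list` variant along the lines of the fix proposed above; it is only an illustration, not the exact code that ended up in `transformers`:

```python
import numpy as np
import tensorflow as tf

def shape_list_safe(tensor):
    # numpy arrays expose .shape as a plain tuple, which has no .as_list()
    if isinstance(tensor, np.ndarray):
        return list(tensor.shape)
    # for TF tensors, fall back to dynamic dimensions wherever the static
    # shape is unknown (None)
    dynamic = tf.shape(tensor)
    static = tensor.shape.as_list()
    return [dynamic[i] if dim is None else dim for i, dim in enumerate(static)]

print(shape_list_safe(np.zeros((2, 5))))   # [2, 5]
print(shape_list_safe(tf.zeros((2, 5))))   # [2, 5]
```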
11-10-2021 01:29:17
11-10-2021 01:29:17
I also encountered this error when upgrading Transformers from version 3.5.1 --> 4.12.2. Can confirm @ndgnuh's proposed fix works! Can this fix be incorporated into the bug fixes?<|||||>Nice catch, do you want to open a PR with the fix?<|||||>Sorry but I have a potato computer and I'm too lazy for the full PR procedure :smile: <|||||>cc @Rocketknight1 <|||||>This looks like the same issue as #14404, it definitely needs a fix<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread. Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored.
transformers
14,345
closed
Support for TF >= 2.7
# What does this PR do? This PR fixes the last issues to enable support for TF 2.7: - wrong call to the superclass with a `config` in `TFRoFormerClassificationHead`
11-09-2021 23:31:27
11-09-2021 23:31:27
transformers
14,344
closed
Allow per-version configurations
Similarly to https://github.com/huggingface/transformers/pull/12713, this allows per-version configurations. This is necessary for LayoutXLM, which up to now was using the configuration-defined `XLMRobertaTokenizer`, but which should now use the `LayoutXLMTokenizer`. Updating the configuration would mean breaking all previous versions of `transformers` that were using LayoutXLM. Not updating this parameter means that LayoutXLM will never benefit from `LayoutXLMTokenizer` through the `AutoTokenizer` API. Resolves https://github.com/huggingface/transformers/issues/14275 This implements similar tests to the tokenizer, but instead of using `bert-base-cased`, it uses the actual model that is at issue (`microsoft/layoutxlm-base`). This model should continue using the `XLMRobertaTokenizer` until a new minor version is released, as the configuration I uploaded is named `config.4.13.0.json`: https://huggingface.co/microsoft/layoutxlm-base/blob/main/config.4.13.0.json
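A rough sketch of the selection rule described above is shown below; it is purely illustrative (the actual implementation in `transformers` may differ), and assumes the rule is "pick the highest versioned file whose version is not above the running `transformers` version, otherwise fall back to the plain `config.json`":

```python
from packaging import version

def pick_config_file(files, current="4.12.5"):
    # illustrative helper: choose among ["config.json", "config.X.Y.Z.json", ...]
    current_v = version.parse(current)
    candidates = []
    for name in files:
        parts = name.split(".")
        if len(parts) > 2:  # e.g. "config.4.13.0.json"
            file_v = version.parse(".".join(parts[1:-1]))
            if current_v >= file_v:
                candidates.append((file_v, name))
    return max(candidates)[1] if candidates else "config.json"

files = ["config.json", "config.4.13.0.json"]
print(pick_config_file(files, current="4.12.5"))  # config.json (old behavior)
print(pick_config_file(files, current="4.13.0"))  # config.4.13.0.json (new tokenizer)
```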
11-09-2021 23:03:54
11-09-2021 23:03:54
transformers
14,343
closed
bump flax version
# What does this PR do? jax `v0.2.21` introduced a slight breaking change: `isinstance` checks against `jnp.ndarray` now return `True` only for `jax` arrays, not for `numpy` arrays. So for a standard `numpy` array `x`, `isinstance(x, jnp.ndarray)` now returns `False` (cf. [here](https://github.com/google/jax/releases/tag/jax-v0.2.21), the last point under breaking changes). Moreover, the `jax.random.split` method returns a numpy array instead of a jax array. We use this method during the random init of the model to create the `rngs` `dict`. Flax has a check for valid rngs that expects a `jnp.ndarray`, but since `split` returns a numpy array and jax no longer treats numpy arrays as `jnp.ndarray`, the check returns `False` and raises an error. This has been fixed in `flax==0.3.5` (cf. [here](https://github.com/google/flax/blob/v0.3.5/flax/core/scope.py#L753)). So all of our flax models fail with `jax>=0.2.21` and `flax<0.3.5`. To fix this, this PR bumps the required flax version to `0.3.5`.
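A tiny snippet to reproduce the behavior change described above (assuming `jax>=0.2.21` is installed; the second printout is version-dependent):

```python
import numpy as np
import jax
import jax.numpy as jnp

x = np.zeros(3)
# with jax >= 0.2.21 this prints False: plain numpy arrays are no longer
# treated as instances of jnp.ndarray
print(isinstance(x, jnp.ndarray))

# jax.random.split is what produces the rngs used at model init time; in the
# affected versions it reportedly returned a plain numpy array, which then
# failed flax's rng validity check
rng = jax.random.PRNGKey(0)
rngs = jax.random.split(rng, 2)
print(type(rngs))
```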
11-09-2021 15:59:11
11-09-2021 15:59:11
transformers
14,342
closed
Electra model from pretrained not loading correctly
## Environment info <!-- You can run the command `transformers-cli env` and copy-and-paste its output below. Don't forget to fill out the missing fields in that output! --> `transformers` version: 4.11.3 - Platform: Linux-5.11.0-36-generic-x86_64-with-glibc2.31 - Python version: 3.9.7 - PyTorch version (GPU?): 1.10.0+cu102 (True) - Tensorflow version (GPU?): not installed (NA) - Flax version (CPU?/GPU?/TPU?): not installed (NA) - Jax version: not installed - JaxLib version: not installed - Using GPU in script?: yes - Using distributed or parallel set-up in script?: no ### Who can help @LysandreJik ## Information Model I am using (Electra): The problem arises when using: * [ ] the official example scripts: (give details below) * [x ] my own modified scripts: (give details below) The tasks I am working on is: * [ ] an official GLUE/SQUaD task: (give the name) * [x ] my own task or dataset: (give details below) ## To reproduce Steps to reproduce the behavior: 1. Train an electra model M1 2. Save M1 as pretrained 3. Load model fro pretrained -> M2 4. Make prediction from M1 and M2 -> M2 always gives the same vectors for all tokens ```python learn = Learner(dls, electra_model, loss_func=electra_loss_func, opt_func=opt_func , path='./checkpoints', model_dir='pretrain', cbs=[mlm_cb, RunSteps(c.steps, [0.01, 0.0625, 0.125, 0.25, 0.5, 0.75, 1.0], c.run_name+"_{percent}"), ], ) learn.fit(3, cbs=[lr_shedule]) m1 = electra_model.discriminator m1.save_pretrained("electra-small-test") m2 = ElectraForPreTraining.from_pretrained("electra-small-test") tokenizer = ElectraTokenizerFast.from_pretrained("electra-small-generator-test") tokens = tokenizer.encode('Hello World') tokens_tensor = torch.tensor([tokens]) ``` Getting an embedding from the trained model m1 yields different vectors for each token: ```python print(m1(tokens_tensor)) Output: tensor([[[ 2.4823, 0.5022, 1.0171, ..., 4.2493, 6.4751, -2.7281], [ 2.3639, 0.3814, 1.0644, ..., -5.1085, 6.1802, -11.2867], [ 2.2842, 0.5379, 1.1335, ..., -4.7094, 5.8275, -10.8865], [ 2.4677, 1.0243, 1.1420, ..., -5.1284, 6.4092, -11.5033], [ 2.6425, 0.3355, 1.0403, ..., -5.5712, 2.2294, -12.2296]]], grad_fn=<NativeLayerNormBackward0>) ``` However, using the loaded model m2 all vectors are the same: ```python print(m2(tokens_tensor)) Output: tensor([[[ 2.2637, 0.4339, 1.2181, ..., -5.1501, 6.0848, -10.8840], [ 2.2637, 0.4339, 1.2181, ..., -5.1501, 6.0848, -10.8840], [ 2.2637, 0.4339, 1.2181, ..., -5.1501, 6.0848, -10.8840], [ 2.2637, 0.4339, 1.2181, ..., -5.1501, 6.0848, -10.8840], [ 2.2637, 0.4339, 1.2181, ..., -5.1501, 6.0848, -10.8840]]], grad_fn=<NativeLayerNormBackward0>) ``` Checking the weights of both models all weights are identical. ## Expected behavior When a model is stored on disk and then loaded again, I would expect exactly the same behaviour.
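As a debugging aid, one could confirm that the weights really match and rule out train/eval-mode effects; this is a generic sketch that reuses the `m1`/`m2` variable names from the snippet above:

```python
import torch

# compare every parameter tensor of the in-memory model with the reloaded one
sd1, sd2 = m1.state_dict(), m2.state_dict()
for name in sd1:
    if name not in sd2 or not torch.equal(sd1[name], sd2[name]):
        print("mismatch:", name)

# dropout is active in train mode and can change outputs between models,
# so make sure both models are in eval mode before comparing predictions
m1.eval()
m2.eval()
```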
11-09-2021 12:55:48
11-09-2021 12:55:48
Have you tried doing this? ``` m2 = ElectraForPreTraining.from_pretrained("electra-small-test").discriminator ```<|||||>This is not working since I'm saving and loading the discriminator already: ```python m1 = electra_model.discriminator m1.save_pretrained("electra-small-test") m2 = ElectraForPreTraining.from_pretrained("electra-small-test") ```<|||||>Ah you are right! This is indeed pretty weird. I'm not sure what your `electra_model.discriminator` object actually is - I can't seem to find a class that has a `discriminator` attribute anywhere in the docs.<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread. Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored.
transformers
14,341
closed
remove an irrelevant test from test_modeling_tf_layoutlm
# What does this PR do? This PR removes `test_model_various_embeddings` from `test_modeling_tf_layoutlm.py`. ## More Information The current version of `test_modeling_tf_layoutlm.py` has ``` def test_model_various_embeddings(self): config_and_inputs = self.model_tester.prepare_config_and_inputs() for type in ["absolute", "relative_key", "relative_key_query"]: config_and_inputs[0].position_embedding_type = type self.model_tester.create_and_check_model(*config_and_inputs) ``` After a quick search inside the repo, I believe `position_embedding_type` is only used by (some) PyTorch models, and `test_model_various_embeddings` is not necessary here. Furthermore, - `test_modeling_tf_layoutlm.py` is the only TF test script containing `position_embedding_type` - `modeling_tf_layoutlm.py` has no usage of `config.position_embedding_type`. Therefore, this PR removes `test_model_various_embeddings` from `test_modeling_tf_layoutlm.py`.
11-09-2021 12:40:32
11-09-2021 12:40:32
transformers
14,340
closed
layoutlmv2 input tensor shape
![question_hugging_face](https://user-images.githubusercontent.com/20882775/140917182-f88fc896-a3fd-40c5-988d-d588110aac45.JPG) Hi, AFAIK the encoder in LayoutLMv2 only takes 512 tokens, but the snippet from the Hugging Face documentation attached above seems to contradict this. Could anyone help me understand it?
11-09-2021 11:44:49
11-09-2021 11:44:49
Hi, LayoutLMv2 (and LayoutXLM) take both an `image` input and the regular `input_ids`. The `input_ids` indeed have a length of max 512 tokens (if you pad them up to the max length), however the image input is turned into a feature map. This feature map is then flattened into a list of "image tokens", which are then concatenated with the text tokens. <|||||>@NielsRogge As you mentioned, the feature map (7*7) is flattened into a list of image tokens, which in this case is 49 (documentation), which is then concatenated with 512 to make the entire input length 561, which is then sent into the encoder, which only accepts 512 tokens as input. What is going on with the "image tokens"? Aren't they going into the encoder? But as written, "these are then concatenated with the text tokens, and sent through the Transformer encoder". What exactly am I missing here?<|||||>I'm not sure what you mean. You just need to give a resized `image` (224x224) as well as `input_ids` (and `token_type_ids`, `bbox`) to the model. The model will internally create the image tokens using its visual backbone.<|||||>![layoutlmV2](https://user-images.githubusercontent.com/20882775/140938612-035f3e36-f64b-482a-b880-af2a17054d7c.jpg) Are these 49 tokens fed into the transformer encoder after concatenation? <|||||>Yes, the text and image tokens are first concatenated, before being fed to the Transformer encoder.<|||||>The LayoutLM paper says "We initialize the weight of LayoutLM model with the pre-trained BERT base model.", which is trained with 512 tokens only. How come we are able to pass 561 tokens into the encoder in LayoutLMv2, which is initialized with the pre-trained BERT model?<|||||>As stated in the [LayoutLMv2 paper](https://arxiv.org/abs/2012.14740): > The model is initialized from the existing pre-trained model checkpoints. For the encoder along with the text embedding layer, LayoutLMv2 uses the same architecture as UniLMv2 (Bao et al., 2020), thus it is initialized from UniLMv2. For the ResNeXt-FPN part in the visual embedding layer, the backbone of a Mask-RCNN (He et al., 2017) model trained on PubLayNet is leveraged. The rest of the parameters in the model are randomly initialized.<|||||>Thanks for the clarification @NielsRogge
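A small sketch illustrating the 512 + 49 = 561 arithmetic discussed above; it assumes `detectron2` is installed (LayoutLMv2's visual backbone needs it), uses dummy inputs only to inspect the output shape, and follows the forward-signature of the `transformers` versions discussed in this thread:

```python
import torch
from transformers import LayoutLMv2Model

model = LayoutLMv2Model.from_pretrained("microsoft/layoutlmv2-base-uncased")

input_ids = torch.zeros(1, 512, dtype=torch.long)   # 512 text tokens
bbox = torch.zeros(1, 512, 4, dtype=torch.long)     # one bounding box per text token
image = torch.zeros(1, 3, 224, 224)                 # resized document image

outputs = model(input_ids=input_ids, bbox=bbox, image=image)
# 512 text tokens + 7 * 7 = 49 visual tokens -> 561 positions in the encoder
print(outputs.last_hidden_state.shape)  # expected: torch.Size([1, 561, 768])
```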
transformers
14,339
closed
[Wav2Vec2] PyCTCDecode Integration to support language model boosted decoding
# Draft to integrate [pyctcdecode](https://github.com/kensho-technologies/pyctcdecode) into 🤗 Transformers This is a short doc to explain all the important aspects of a possible integration of [pyctcdecode](https://github.com/kensho-technologies/pyctcdecode) into 🤗 Transformers ## What is LM-boosted Decoding? In **LM-boosted Decoding** an acoustic model (Wav2Vec2) is trained on some speech data and, independently of this training, a language model (e.g. [KenLM n-gram](https://github.com/kpu/kenlm)) is trained on some text in the same language as the speech data. Then during evaluation, the language model supports the acoustic model in predicting the transcribed words via beam search decoding. To be more precise, the output (log-)probability matrix of the acoustic model - being a [timesteps x log-prob for each subword token] matrix - is fed into a beam search decoder and, by means of a language model (`P(subword token | prev subword token)`), the overall best subword token sequence is chosen using a beam search algorithm. ## Why do we need LM-boosted Decoding for Speech? LM-boosted decoding is still **the** or one of the state-of-the-art approaches for ASR systems in terms of Word-error-rate (WER) performance. The other upcoming system is an end-to-end approach where the language model is learned together with the acoustic model. This approach includes: - [Google's RNN-T Conformer](https://arxiv.org/abs/2104.02133): Here a RNN Transducer architecture is used where a language model is learned end-to-end with a powerful acoustic model, like Conformer. - [Encoder-Decoder architecture](https://huggingface.co/transformers/master/model_doc/speechencoderdecoder.html#speechencoderdecodermodel): this architecture is essentially like T5/Bart, only that the encoder is an acoustic model. The decoder is then the corresponding language model. **_The advantages of LM-boosted decoding are:_** - more flexibility: the acoustic model and language model are trained separately. E.g. it's totally possible to use our GPT2 implementation for LM-boosted decoding for speech in the future. - Usually lighter: very good results can be achieved just by using n-grams - Usually faster: Encoder-decoder and RNN-T usually have to do some kind of auto-regressive generation which is costly. This holds especially true on CPU. **_The disadvantages are:_** - more hyperparameters to tune - more difficult to support on GPU. It's easier to build a highly optimized end-to-end pipeline on GPU with encoder-decoder (since everything is written in `torch.nn`). ## Why [pyctcdecode](https://github.com/kensho-technologies/pyctcdecode)? We could implement the whole CTC beam search algorithm ourselves in `transformers` or a separate library, but it would look very similar to already existing libraries, and in the spirit of open-source it's usually better to improve existing libraries together instead of duplicating work. There are three libraries for CTC beam search decoding that I analysed: 1. https://github.com/flashlight/flashlight -> this is Facebook's library written in C++ and the library used by the Wav2Vec2 team. It's highly optimized, but only runs on CUDA (https://github.com/flashlight/flashlight/tree/main/bindings/python#dependencies), has quite some dependencies and is not easy to understand. It gives good results and is fast. It only works for PyTorch. 2.
https://github.com/kensho-technologies/pyctcdecode -> this is a very young library by Kensho Technologies (only 112 stars and not that many pip installs), but the maintainers seem quite active and eager to grow the library. It worked well in my experiments (see [here](https://github.com/patrickvonplaten/Wav2Vec2_PyCTCDecode)). Also, it has very few dependencies and is written in pure Python. It's much slower than 1.) on GPU (obviously, since it only supports CPU), but quite fast on CPU compared to other libraries. It doesn't rely on PT or Tensorflow, so it could serve us for both those frameworks and JAX as well. 3. https://github.com/parlance/ctcdecode -> this is an older library (573 stars) and was quite fast in my experiments (It's written in PyTorch & C++ kernels). However, I didn't manage to get good results (see [here](here)) and IMO the code is not very easy to read & no docs & no examples. Given this analysis and that the spirit of `transformers` is **readability** and **easy-to-contribute to**, 2.) makes by far the most sense to be considered for an integration to `transformers` IMO. It would be great if we manage to collaborate well with https://github.com/kensho-technologies/pyctcdecode on design choices and integrations, but we can also in the worst-case scenario (if for some reason our vision differs too strongly from https://github.com/kensho-technologies/pyctcdecode) fork the repo and shape it to how we would need it - it has a MIT license. However, the library looks quite nice to me and I'm also confident that we can start a fruitful collaboration both `pyctcdecode` and we can profit from. ## Integration into Hugging Face's `transformers` A couple of important requirements for a nice integration with `transformers` are: - It fits well with the current API for Wav2Vec2 so that people can very easily switch from their current Wav2Vec2 setup to an improved version - Everything can be downloaded from the :hugs: or other easy-to-use storage systems for the most user-friendly experience. Keeping in mind that *LM boosted Decoding* requires the output log-probs of the acoustic model (Wav2Vec2ForCTC) as well as a dictionary and a language model there are two clean ways of integrating the feature IMO: 1.) - We add a new `Wav2Vec2CTCDecoder` class that replaces the `Wav2Vec2CTCTokenizer` and can be used just as `Wav2Vec2CTCTokenizer` within `Wav2Vec2Processor`. Since this class would require the vocabulary of `Wav2Vec2CTCTokenizer` we would probably have to add a `self.tokenizer = Wav2Vec2CTCTokenizer(...)` attribute in `Wav2Vec2CTCDecoder` which would create a bit too much abstraction IMO (Wav2Vec2Processor -> Wav2Vec2CTCDecoder -> Wav2Vec2Tokenizer). 2.) - We add a new `Wav2Vec2ProcessorWithLM` class that replaces `Wav2Vec2Processor`. It essentially just adds a `self.decoder = ...` to `Wav2Vec2Processor` and the `batch_decode()` and `decode()` methods now run LM-boosted decoding instead of the previous "tokenizer-only" decoding. => IMO 2.) is the better approach as it requires less abstraction and is also "safer" in that we can simply say that `Wav2Vec2ProcessorWithLM` is an experimental class that can be used to replace `Wav2Vec2Processor`. **This PR** implements more or less everything that is required on the `transformers` side for 2). 
So the change in API that I'm aiming for would look as follows: ```diff import torch -from transformers import Wav2Vec2ForCTC, Wav2Vec2Processor +from transformers import Wav2Vec2ForCTC, Wav2Vec2ProcessorWithLM from datasets import load_dataset ds = load_dataset("common_voice", "es", split="test", streaming=True) sample = next(iter(ds)) resampled_audio = F.resample(torch.tensor(sample["audio"]["array"]), 48_000, 16_000).n model = Wav2Vec2ForCTC.from_pretrained("patrickvonplaten/wav2vec2-large-xlsr-53-spanish-with-lm") -processor = Wav2Vec2Processor.from_pretrained("patrickvonplaten/wav2vec2-large-xlsr-53-spanish-with-lm") +processor = Wav2Vec2ProcessorWithLM.from_pretrained("patrickvonplaten/wav2vec2-large-xlsr-53-spanish-with-lm") input_values = processor(resampled_audio, return_tensors="pt").input_values with torch.no_grad(): logits = model(input_values).logits -prediction_ids = torch.argmax(logits, dim=-1) -transcription = processor.batch_decode(prediction_ids) +transcription = processor.batch_decode(logits.cpu().numpy()).text print(transcription) ``` Thinking a bit ahead here, IMO it would also be totally fine to have both a `Wav2Vec2Processor` and a `Wav2Vec2ProcessorWithLM` work correctly with an `AutoProcessor` class. We could just add a new `processor_type` attribute to the `config.json` so that the correct processor class is loaded depending on the `config.json` of the model. We could use a similar general design (ideally even a bit cleaner) as is used [here](https://github.com/huggingface/transformers/blob/4f24058c58ed9fcde0d9e5629e66c5500f67c7c8/examples/pytorch/speech-recognition/run_speech_recognition_ctc.py#L422). ## Feature additions to [pyctcdecode](https://github.com/kensho-technologies/pyctcdecode) for target API It would be great if together with `pyctcdecode` we could add an optional `from_from_hf_hub(...)` functionality for their [BeamSearchDecoder class(es)](https://github.com/kensho-technologies/pyctcdecode/blob/94dfdae1d18ad95e799286173826aec2dec9a6b2/pyctcdecode/decoder.py). This should be pretty simple to do with [huggingface_hub](https://github.com/huggingface/huggingface_hub) and should also in general make it much easier for `pyctcdecode` to load and save models online (for free). This is to be discussed. In a first step, it would be easiest to focus on fully supporting download and upload of [KenLM](https://github.com/kpu/kenlm) language models for seamless `KenLM`-ngram boosted decoding. KenLM-ngram boosted decoding yielded some nice improvements in my experiments [here](https://github.com/patrickvonplaten/Wav2Vec2_PyCTCDecode#results) In a next step, we could then look into support for `transformer` LM models in PyCTCDecode (make `pyctcdecode's` beam search compatible with our `AutoModelForCausalLM` models) and also add `load_from_hub(...)` functionality for this in `pyctcdecode`. Other possible improvements could include: - Timesteps prediction per word (outputting an exact time stamp for each predicted word, given the `logits` and sampling rate of the model - Audio frame to word alignment ...
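As a concrete illustration of what KenLM-boosted decoding with `pyctcdecode` can look like today, here is a rough sketch with an assumed local KenLM file path; it is not the `Wav2Vec2ProcessorWithLM` implementation proposed in this PR:

```python
from pyctcdecode import build_ctcdecoder
from transformers import Wav2Vec2CTCTokenizer

tokenizer = Wav2Vec2CTCTokenizer.from_pretrained("facebook/wav2vec2-base-960h")
# sort the vocab by token id so that column i of the logits matrix maps to labels[i];
# a real integration also has to map Wav2Vec2's special tokens (e.g. "|" as the word
# delimiter and "<pad>" as the CTC blank) onto what pyctcdecode expects
labels = [tok for tok, _ in sorted(tokenizer.get_vocab().items(), key=lambda kv: kv[1])]

decoder = build_ctcdecoder(
    labels,
    kenlm_model_path="path/to/kenlm_ngram.arpa",  # assumed path to a local KenLM n-gram
)

# `logits` would be the (time_steps, vocab_size) numpy output of Wav2Vec2ForCTC:
# transcription = decoder.decode(logits)
```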
11-09-2021 11:30:18
11-09-2021 11:30:18
Not convinced by grouping everything together in one class which will sometimes have an additional method that works and sometimes not (depending on the env). The code is probably also going to be hard to read if all the imports have to be contained in a `decode_with_lm`.<|||||>Regarding the PyCTCDecode integration I think it makes sense to start step-by-step and simple add a `load_from_hub(...)` method to KenLM's `LanguageModel` class as shown here: https://github.com/kensho-technologies/pyctcdecode/pull/32 Once this PR is approved we can move forward with this PR.<|||||>Thanks a lot for the feedback regarding the design choices @sgugger, @LysandreJik and @anton-l . I agree more with @sgugger here, but I think both implementations are valid and have pro/cons. To give some more background for a better design decision: - The language model should definitely not be loaded on the fly -> this is more or less equivalent to loading a `gpt2` sized model and every forward pass which would often take longer than the forward pass itself. => So the only possible design for putting everything under `Wav2Vec2Processor` would be something similat to what was stated by @anton-l being: ``` processor = Wav2Vec2Processor(feat_extractor, tokenizer, language_model: pyctcdecode.LanguageModel) ``` with `pyctcdecode.LanguageModel` being this class here: https://github.com/kensho-technologies/pyctcdecode/blob/b4c6a590b729303772604fba12118fad50326f3f/pyctcdecode/language_model.py#L184 IMO, the main reason why I think a new class would be better is because the class will be very experimental and will most likely change in the future (add other backend libraries, other language models, ...). Langauge model support for decoding is by no means always necessary or needed for ASR, so I can see lots of people just keep using `Wav2Vec2Processor`. In this case I don't think it's nice to add a lot of complex code to the existing class, but would prefer to keep it lean and simple to understand. `...WithLM` will evolve in the future and in order to support more language models require complexer code and a couple of libraries to be installed. Think it's better to move all that extra code to a new class. Some other reasons why I prefer `Wav2Vec2ProcessorWithLM` are: - Adding lots of things under the hood to `Wav2Vec2Processor` goes IMO a bit against our philosophy of having classes be "barebone" and not doing any "magic" under the hood - People using `Wav2Vec2ProcessorWithLM` are expected to provide a LM model which saves us some complex `if else` code and makes everything more readable. - `Wav2Vec2ProcessorWithLM` provides all the functionalities of `Wav2Vec2Processor` so there is never a case where one would need both processors - In terms of user experience, it boils down to the following in my opinion. -> Do I want to decode with a language model? Let's use `Wav2VecProcessorWithLM` -> No language model? Let's use `Wav2Vec2Processor` Both classes would have the exact same API and can be replaced one-by-one. So for users wanting to decode with a language model everything is bundled in a single class, namely `Wav2VecProcessorWithLM`. I don't see the huge advantage of having only a single `Wav2Vec2Processor`. In general in speech we will always have multiple classes for the same task, e.g. - `...ForConditionalGeneration`, `...ForCTC`, `...ForRNNT`, etc... 
can all be used with the automatic speech recognition pipeline - Wav2Vec2 will have multiple tokenizers (one for phonemes, one for characters only, ...)<|||||>When I said on the fly, I meant it would be loaded the first time; every subsequent operation would use the previously loaded model. But I don't have a strong opinion, and I understand your perspective. Good for me to go with the new class!<|||||>After looking through the previous PRs conserning `Wav2Vec2Processor`, I understand why having a separate `Wav2Vec2ProcessorWithLM` can be more convenient. But I'm interested in discussing (perhaps not in this PR) how we can evolve the current processing design, since only the feature extractor is universally required for speech models now, and the tokenizer and LM can be applied separately, depending on the target task.<|||||>Real world demo example for a SOTA spanish wav2vec2 model: https://huggingface.co/patrickvonplaten/wav2vec2-large-xlsr-53-spanish-with-lm -> seems to give a nice 10-20% WER improvement<|||||>Final user API: ```diff import torch import torchaudio.functional as F -from transformers import Wav2Vec2ForCTC, Wav2Vec2Processor +from transformers import Wav2Vec2ForCTC, Wav2Vec2ProcessorWithLM from datasets import load_dataset ds = load_dataset("common_voice", "es", split="test", streaming=True) sample = next(iter(ds)) resampled_audio = F.resample(torch.tensor(sample["audio"]["array"]), 48_000, 16_000).n model = Wav2Vec2ForCTC.from_pretrained("patrickvonplaten/wav2vec2-large-xlsr-53-spanish-with-lm") -processor = Wav2Vec2Processor.from_pretrained("patrickvonplaten/wav2vec2-large-xlsr-53-spanish-with-lm") +processor = Wav2Vec2ProcessorWithLM.from_pretrained("patrickvonplaten/wav2vec2-large-xlsr-53-spanish-with-lm") input_values = processor(resampled_audio, return_tensors="pt").input_values with torch.no_grad(): logits = model(input_values).logits -prediction_ids = torch.argmax(logits, dim=-1) -transcription = processor.batch_decode(prediction_ids) +transcription = processor.batch_decode(logits.cpu().numpy()).text print(transcription) ``` <|||||>An example of this PR is shown here: https://huggingface.co/patrickvonplaten/wav2vec2-large-xlsr-53-spanish-with-lm<|||||>> Great integration! Although I feel like pyctcdecode's "magic options" can be documented a bit more verbosely, so I left some suggestions :) Yes, we should definitely make a notebook about it<|||||>@LysandreJik @sgugger - think it's ready for a final review. I've now made sure that this LM boosted ASR is only tested for the TF, Flax and PT tests, but not for the ONNX & Hub tests. I've also added integration tests for TF and Flax. @sgugger I don't really see how to get rid of `pip install https://github.com/kpu/kenlm/archive/master.zip` sadly<|||||>Thanks for containing the addition of kenlm. The documentation on how to run the tests locally should get an update as we are very very far now from just needing `pip install transformers[testing]`, and we should maybe add comments on what each of those install lines is for in the config for circleCI/workflows for GitHub as I feel we're getting lost in the endless dependencies. It's fine as long as it runs but the day something breaks...
transformers
14,338
closed
Loading RoBERTa pytorch_model.bin checkpoint in fairseq for evaluation
Hi, I have fine-tuned a RoBERTa model using transformers and am facing issues when loading that checkpoint in fairseq. The evaluation framework poses constraints, so I have to load the model using fairseq. ``` self.model = RobertaModel.from_pretrained( roberta_model_dir, checkpoint_file=roberta_model_name ) ``` I pass `checkpoint_file` as pytorch_model.bin and get `KeyErrors` in fairseq's `load_checkpoint_to_cpu` function (https://github.com/pytorch/fairseq/blob/main/fairseq/checkpoint_utils.py#L281). I have seen there is a function for converting a fairseq model to a huggingface model, but I am not able to find a fix for this direction. I even have models saved using `torch.save()` if that helps. I would really appreciate help on this. Thanks.
11-09-2021 11:20:32
11-09-2021 11:20:32
Hi! We have a script to convert from `fairseq`'s checkpoints to `transformers`, but we don't have the script that would do it the other way around. You could try and adapt the script to do the conversion the other way, as it's essentially just renaming layers.<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread. Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored.
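To see why `load_checkpoint_to_cpu` raises `KeyError`s, it can help to inspect both checkpoint formats; the sketch below is generic, and the fairseq wrapping shown in the comments is only an approximation of its checkpoint layout:

```python
import torch

# a transformers checkpoint is a bare state dict: parameter name -> tensor
hf_state_dict = torch.load("pytorch_model.bin", map_location="cpu")
print(list(hf_state_dict.keys())[:5])

# a fairseq checkpoint, by contrast, is a dict with extra metadata around the
# weights, roughly {"args"/"cfg": ..., "model": state_dict, ...}, and the
# parameter names inside "model" follow fairseq's naming. A reverse conversion
# script would therefore need to rename the keys before wrapping and saving:
# torch.save({"cfg": cfg, "model": renamed_state_dict}, "model.pt")
```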