repo: stringclasses (1 value)
number: int64 (1 to 25.3k)
state: stringclasses (2 values)
title: stringlengths (1 to 487)
body: stringlengths (0 to 234k)
created_at: stringlengths (19)
closed_at: stringlengths (19)
comments: stringlengths (0 to 293k)
transformers
16,247
closed
Update XLM with TF decorator
# What does this PR do? <!-- Congratulations! You've made it this far! You're not quite done yet though. Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution. Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change. Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost. --> <!-- Remove if not applicable --> Unpacks TF model inputs through a decorator, improving code clarity for the `XLM` model. To be read with issue https://github.com/huggingface/transformers/issues/16051 , which holds the description of the problem, the proposed solution, and future plan. ## Before submitting - [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case). - [X] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section? - [X] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? https://github.com/huggingface/transformers/issues/16051 - [ ] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/master/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/master/docs#writing-source-documentation). - [ ] Did you write any new necessary tests? ## Who can review? @LysandreJik @gante <!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @ If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**. Please tag fewer than 3 people. Models: - albert, bert, xlm: @LysandreJik - blenderbot, bart, marian, pegasus, encoderdecoder, t5: @patrickvonplaten, @patil-suraj - longformer, reformer, transfoxl, xlnet: @patrickvonplaten - fsmt: @stas00 - funnel: @sgugger - gpt2: @patrickvonplaten, @LysandreJik - rag: @patrickvonplaten, @lhoestq - tensorflow: @LysandreJik Library: - benchmarks: @patrickvonplaten - deepspeed: @stas00 - ray/raytune: @richardliaw, @amogkam - text generation: @patrickvonplaten - tokenizers: @n1t0, @LysandreJik - trainer: @sgugger - pipelines: @LysandreJik Documentation: @sgugger HF projects: - datasets: [different repo](https://github.com/huggingface/datasets) - rust tokenizers: [different repo](https://github.com/huggingface/tokenizers) Examples: - maintained examples (not research project or legacy): @sgugger, @patil-suraj - research_projects/bert-loses-patience: @JetRunner - research_projects/distillation: @VictorSanh -->
03-18-2022 07:52:58
03-18-2022 07:52:58
tests results ``` RUN_SLOW=1 py.test -vv tests/xlm/test_modeling_tf_xlm.py ================================================================================== test session starts =================================================================================== platform linux -- Python 3.8.12, pytest-7.1.1, pluggy-1.0.0 -- /home/louisowen6/miniconda3/envs/transformers_dev/bin/python3 cachedir: .pytest_cache hypothesis profile 'default' -> database=DirectoryBasedExampleDatabase('/mnt/c/Users/Louis Owen/transformers/.hypothesis/examples') rootdir: /mnt/c/Users/Louis Owen/transformers, configfile: setup.cfg plugins: hypothesis-6.39.4, forked-1.4.0, timeout-2.1.0, xdist-2.5.0, dash-2.3.0 collected 36 items tests/xlm/test_modeling_tf_xlm.py::TFXLMModelTest::test_attention_outputs <- tests/test_modeling_tf_common.py PASSED [ 2%] tests/xlm/test_modeling_tf_xlm.py::TFXLMModelTest::test_compile_tf_model <- tests/test_modeling_tf_common.py PASSED [ 5%] tests/xlm/test_modeling_tf_xlm.py::TFXLMModelTest::test_config PASSED [ 8%] tests/xlm/test_modeling_tf_xlm.py::TFXLMModelTest::test_determinism <- tests/test_modeling_tf_common.py PASSED [ 11%] tests/xlm/test_modeling_tf_xlm.py::TFXLMModelTest::test_for_multiple_choice PASSED [ 13%] tests/xlm/test_modeling_tf_xlm.py::TFXLMModelTest::test_for_token_classification PASSED [ 16%] tests/xlm/test_modeling_tf_xlm.py::TFXLMModelTest::test_forward_signature <- tests/test_modeling_tf_common.py PASSED [ 19%] tests/xlm/test_modeling_tf_xlm.py::TFXLMModelTest::test_generate_with_headmasking <- tests/test_modeling_tf_common.py PASSED [ 22%] tests/xlm/test_modeling_tf_xlm.py::TFXLMModelTest::test_headmasking <- tests/test_modeling_tf_common.py PASSED [ 25%] tests/xlm/test_modeling_tf_xlm.py::TFXLMModelTest::test_hidden_states_output <- tests/test_modeling_tf_common.py PASSED [ 27%] tests/xlm/test_modeling_tf_xlm.py::TFXLMModelTest::test_initialization <- tests/test_modeling_tf_common.py PASSED [ 30%] tests/xlm/test_modeling_tf_xlm.py::TFXLMModelTest::test_inputs_embeds <- tests/test_modeling_tf_common.py PASSED [ 33%] tests/xlm/test_modeling_tf_xlm.py::TFXLMModelTest::test_keras_save_load <- tests/test_modeling_tf_common.py PASSED [ 36%] tests/xlm/test_modeling_tf_xlm.py::TFXLMModelTest::test_keyword_and_dict_args <- tests/test_modeling_tf_common.py PASSED [ 38%] tests/xlm/test_modeling_tf_xlm.py::TFXLMModelTest::test_lm_head_model_beam_search_generate_dict_outputs <- tests/test_modeling_tf_common.py PASSED [ 41%] tests/xlm/test_modeling_tf_xlm.py::TFXLMModelTest::test_lm_head_model_no_beam_search_generate_dict_outputs <- tests/test_modeling_tf_common.py PASSED [ 44%] tests/xlm/test_modeling_tf_xlm.py::TFXLMModelTest::test_lm_head_model_random_beam_search_generate <- tests/test_modeling_tf_common.py PASSED [ 47%] tests/xlm/test_modeling_tf_xlm.py::TFXLMModelTest::test_lm_head_model_random_no_beam_search_generate <- tests/test_modeling_tf_common.py PASSED [ 50%] tests/xlm/test_modeling_tf_xlm.py::TFXLMModelTest::test_load_with_mismatched_shapes <- tests/test_modeling_tf_common.py PASSED [ 52%] tests/xlm/test_modeling_tf_xlm.py::TFXLMModelTest::test_loss_computation <- tests/test_modeling_tf_common.py PASSED [ 55%] tests/xlm/test_modeling_tf_xlm.py::TFXLMModelTest::test_model_common_attributes <- tests/test_modeling_tf_common.py PASSED [ 58%] tests/xlm/test_modeling_tf_xlm.py::TFXLMModelTest::test_model_from_pretrained PASSED [ 61%] tests/xlm/test_modeling_tf_xlm.py::TFXLMModelTest::test_model_main_input_name <- tests/test_modeling_tf_common.py PASSED [ 63%] 
tests/xlm/test_modeling_tf_xlm.py::TFXLMModelTest::test_model_outputs_equivalence <- tests/test_modeling_tf_common.py PASSED [ 66%] tests/xlm/test_modeling_tf_xlm.py::TFXLMModelTest::test_numpy_arrays_inputs <- tests/test_modeling_tf_common.py PASSED [ 69%] tests/xlm/test_modeling_tf_xlm.py::TFXLMModelTest::test_onnx_compliancy <- tests/test_modeling_tf_common.py PASSED [ 72%] tests/xlm/test_modeling_tf_xlm.py::TFXLMModelTest::test_onnx_runtime_optimize <- tests/test_modeling_tf_common.py PASSED [ 75%] tests/xlm/test_modeling_tf_xlm.py::TFXLMModelTest::test_pt_tf_model_equivalence <- tests/test_modeling_tf_common.py SKIPPED (test is PT+TF test) [ 77%] tests/xlm/test_modeling_tf_xlm.py::TFXLMModelTest::test_resize_token_embeddings <- tests/test_modeling_tf_common.py PASSED [ 80%] tests/xlm/test_modeling_tf_xlm.py::TFXLMModelTest::test_save_load <- tests/test_modeling_tf_common.py PASSED [ 83%] tests/xlm/test_modeling_tf_xlm.py::TFXLMModelTest::test_save_load_config <- tests/test_modeling_tf_common.py PASSED [ 86%] tests/xlm/test_modeling_tf_xlm.py::TFXLMModelTest::test_xlm_lm_head PASSED [ 88%] tests/xlm/test_modeling_tf_xlm.py::TFXLMModelTest::test_xlm_model PASSED [ 91%] tests/xlm/test_modeling_tf_xlm.py::TFXLMModelTest::test_xlm_qa PASSED [ 94%] tests/xlm/test_modeling_tf_xlm.py::TFXLMModelTest::test_xlm_sequence_classif PASSED [ 97%] tests/xlm/test_modeling_tf_xlm.py::TFXLMModelLanguageGenerationTest::test_lm_generate_xlm_mlm_en_2048 PASSED [100%] ```<|||||>_The documentation is not available anymore as the PR was closed or merged._<|||||>second tests report ```  RUN_SLOW=1 py.test -vv tests/xlm/test_modeling_tf_xlm.py =============================================================================== test session starts ================================================================================ platform linux -- Python 3.8.12, pytest-7.1.1, pluggy-1.0.0 -- /home/louisowen6/miniconda3/envs/transformers_dev/bin/python3 cachedir: .pytest_cache hypothesis profile 'default' -> database=DirectoryBasedExampleDatabase('/mnt/c/Users/Louis Owen/Desktop/transformers/.hypothesis/examples') rootdir: /mnt/c/Users/Louis Owen/Desktop/transformers, configfile: setup.cfg plugins: hypothesis-6.39.4, forked-1.4.0, timeout-2.1.0, xdist-2.5.0, dash-2.3.0 collected 36 items tests/xlm/test_modeling_tf_xlm.py::TFXLMModelTest::test_attention_outputs <- tests/test_modeling_tf_common.py PASSED [ 2%] tests/xlm/test_modeling_tf_xlm.py::TFXLMModelTest::test_compile_tf_model <- tests/test_modeling_tf_common.py PASSED [ 5%] tests/xlm/test_modeling_tf_xlm.py::TFXLMModelTest::test_config PASSED [ 8%] tests/xlm/test_modeling_tf_xlm.py::TFXLMModelTest::test_determinism <- tests/test_modeling_tf_common.py PASSED [ 11%] tests/xlm/test_modeling_tf_xlm.py::TFXLMModelTest::test_for_multiple_choice PASSED [ 13%] tests/xlm/test_modeling_tf_xlm.py::TFXLMModelTest::test_for_token_classification PASSED [ 16%] tests/xlm/test_modeling_tf_xlm.py::TFXLMModelTest::test_forward_signature <- tests/test_modeling_tf_common.py PASSED [ 19%] tests/xlm/test_modeling_tf_xlm.py::TFXLMModelTest::test_generate_with_headmasking <- tests/test_modeling_tf_common.py PASSED [ 22%] tests/xlm/test_modeling_tf_xlm.py::TFXLMModelTest::test_headmasking <- tests/test_modeling_tf_common.py PASSED [ 25%] tests/xlm/test_modeling_tf_xlm.py::TFXLMModelTest::test_hidden_states_output <- tests/test_modeling_tf_common.py PASSED [ 27%] tests/xlm/test_modeling_tf_xlm.py::TFXLMModelTest::test_initialization <- tests/test_modeling_tf_common.py PASSED [ 30%] 
tests/xlm/test_modeling_tf_xlm.py::TFXLMModelTest::test_inputs_embeds <- tests/test_modeling_tf_common.py PASSED [ 33%] tests/xlm/test_modeling_tf_xlm.py::TFXLMModelTest::test_keras_save_load <- tests/test_modeling_tf_common.py PASSED [ 36%] tests/xlm/test_modeling_tf_xlm.py::TFXLMModelTest::test_keyword_and_dict_args <- tests/test_modeling_tf_common.py PASSED [ 38%] tests/xlm/test_modeling_tf_xlm.py::TFXLMModelTest::test_lm_head_model_beam_search_generate_dict_outputs <- tests/test_modeling_tf_common.py PASSED [ 41%] tests/xlm/test_modeling_tf_xlm.py::TFXLMModelTest::test_lm_head_model_no_beam_search_generate_dict_outputs <- tests/test_modeling_tf_common.py PASSED [ 44%] tests/xlm/test_modeling_tf_xlm.py::TFXLMModelTest::test_lm_head_model_random_beam_search_generate <- tests/test_modeling_tf_common.py PASSED [ 47%] tests/xlm/test_modeling_tf_xlm.py::TFXLMModelTest::test_lm_head_model_random_no_beam_search_generate <- tests/test_modeling_tf_common.py PASSED [ 50%] tests/xlm/test_modeling_tf_xlm.py::TFXLMModelTest::test_load_with_mismatched_shapes <- tests/test_modeling_tf_common.py PASSED [ 52%] tests/xlm/test_modeling_tf_xlm.py::TFXLMModelTest::test_loss_computation <- tests/test_modeling_tf_common.py PASSED [ 55%] tests/xlm/test_modeling_tf_xlm.py::TFXLMModelTest::test_model_common_attributes <- tests/test_modeling_tf_common.py PASSED [ 58%] tests/xlm/test_modeling_tf_xlm.py::TFXLMModelTest::test_model_from_pretrained PASSED [ 61%] tests/xlm/test_modeling_tf_xlm.py::TFXLMModelTest::test_model_main_input_name <- tests/test_modeling_tf_common.py PASSED [ 63%] tests/xlm/test_modeling_tf_xlm.py::TFXLMModelTest::test_model_outputs_equivalence <- tests/test_modeling_tf_common.py PASSED [ 66%] tests/xlm/test_modeling_tf_xlm.py::TFXLMModelTest::test_numpy_arrays_inputs <- tests/test_modeling_tf_common.py PASSED [ 69%] tests/xlm/test_modeling_tf_xlm.py::TFXLMModelTest::test_onnx_compliancy <- tests/test_modeling_tf_common.py PASSED [ 72%] tests/xlm/test_modeling_tf_xlm.py::TFXLMModelTest::test_onnx_runtime_optimize <- tests/test_modeling_tf_common.py PASSED [ 75%] tests/xlm/test_modeling_tf_xlm.py::TFXLMModelTest::test_pt_tf_model_equivalence <- tests/test_modeling_tf_common.py SKIPPED (test is PT+TF test) [ 77%] tests/xlm/test_modeling_tf_xlm.py::TFXLMModelTest::test_resize_token_embeddings <- tests/test_modeling_tf_common.py PASSED [ 80%] tests/xlm/test_modeling_tf_xlm.py::TFXLMModelTest::test_save_load <- tests/test_modeling_tf_common.py PASSED [ 83%] tests/xlm/test_modeling_tf_xlm.py::TFXLMModelTest::test_save_load_config <- tests/test_modeling_tf_common.py PASSED [ 86%] tests/xlm/test_modeling_tf_xlm.py::TFXLMModelTest::test_xlm_lm_head PASSED [ 88%] tests/xlm/test_modeling_tf_xlm.py::TFXLMModelTest::test_xlm_model PASSED [ 91%] tests/xlm/test_modeling_tf_xlm.py::TFXLMModelTest::test_xlm_qa PASSED [ 94%] tests/xlm/test_modeling_tf_xlm.py::TFXLMModelTest::test_xlm_sequence_classif PASSED [ 97%] tests/xlm/test_modeling_tf_xlm.py::TFXLMModelLanguageGenerationTest::test_lm_generate_xlm_mlm_en_2048 PASSED [100%] ```<|||||>Perfect, thanks! Will merge as soon as CI gets to green
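For readers following the `unpack_inputs` work across these PRs, the sketch below shows, in very reduced form, the general idea of unpacking a TF model's inputs through a decorator. It is a made-up illustration (the helper name `unpack_inputs_sketch` and the tiny model are invented for this example) and not the actual `transformers` implementation described in issue 16051.

```python
import functools
import tensorflow as tf

def unpack_inputs_sketch(call_fn):
    """Illustration only: normalize dict/tuple/tensor first arguments into keyword arguments."""
    @functools.wraps(call_fn)
    def wrapper(self, inputs=None, **kwargs):
        if isinstance(inputs, dict):
            # A dict of named tensors: merge it into the keyword arguments.
            return call_fn(self, **{**inputs, **kwargs})
        if isinstance(inputs, (list, tuple)):
            # A positional list/tuple: map entries onto the expected argument order.
            names = ("input_ids", "attention_mask", "token_type_ids")
            kwargs.update(dict(zip(names, inputs)))
            return call_fn(self, **kwargs)
        # A single tensor (or None): treat it as input_ids.
        return call_fn(self, input_ids=inputs, **kwargs)
    return wrapper

class TinyTFModel:
    @unpack_inputs_sketch
    def call(self, input_ids=None, attention_mask=None, token_type_ids=None):
        # The body now only ever sees plain keyword arguments.
        return {"input_ids": input_ids, "attention_mask": attention_mask}

model = TinyTFModel()
ids = tf.constant([[1, 2, 3]])
print(model.call(ids)["input_ids"].shape)
print(model.call({"input_ids": ids, "attention_mask": tf.ones_like(ids)})["attention_mask"].shape)
```

In the library itself this logic lives in a shared helper, so each model's `call` body no longer needs its own input-processing boilerplate, which is the clarity gain the PR refers to.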
transformers
16,246
closed
[Constrained Beam Search] Adding Notebook Example & Minor Typo Fix
# [CBS] Adding Notebook Example & Minor Typo Fix Very lightweight PR for: - a fix to a somewhat embarrassing copy-and-paste error I made in my initial PR - Thought it'd be nice if the notebook on `huggingface/blog` could be added to the list of PyTorch notebooks as others have done (both the Colab and SageMaker links import from the `huggingface/blog` `master` branch with no issue). ## Before submitting - [x] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case). - [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section? - [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link to it if that's the case. - [ ] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/master/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/master/docs#writing-source-documentation). - [ ] Did you write any new necessary tests? ## Who can review? @patrickvonplaten
03-18-2022 06:18:05
03-18-2022 06:18:05
_The documentation is not available anymore as the PR was closed or merged._<|||||>The current test failure is unrelated to the PR (CBS is separate from `sample_generate`): ``` -- Docs: https://docs.pytest.org/en/stable/how-to/capture-warnings.html =========================== short test summary info ============================ FAILED tests/pegasus/test_modeling_pegasus.py::PegasusStandaloneDecoderModelTest::test_sample_generate === 1 failed, 5253 passed, 4576 skipped, 4839 warnings in 498.21s (0:08:18) ==== ```
transformers
16,245
closed
Load local dataset error
sorry
03-18-2022 03:14:35
03-18-2022 03:14:35
transformers
16,244
closed
Add missing type hints for PyTorch Longformer models
# What does this PR do? - Adds missing type hints to the PyTorch Longformer model classes. - Issue: #16059 ## Tests / Checks - I ran `make fixup` before my final commit. - `RUN_SLOW=1 py.test -vv tests/longformer/test_modeling_longformer.py` ran without any failures. _This PR is part of the Great Code Cleanup 2022!_ @Rocketknight1
03-18-2022 00:03:14
03-18-2022 00:03:14
_The documentation is not available anymore as the PR was closed or merged._
transformers
16,243
closed
Update troubleshoot with more content
This PR follows up on #16136 to cover adding an `attention_mask` when a user's outputs are incorrect because the padding tokens were not masked. cc @patrickvonplaten @sgugger
03-17-2022 22:31:23
03-17-2022 22:31:23
_The documentation is not available anymore as the PR was closed or merged._
transformers
16,242
closed
Add unpack_inputs decorator for ctrl
# What does this PR do? Add the `unpack_inputs` decorator for `ctrl`. It also renames the `past` parameter to `past_key_values` in the model's inputs and outputs, as an irregularity in the naming caused an error with the new input processing. <!-- Remove if not applicable --> Fixes #16051 ## Before submitting - [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case). - [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section? - [x] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link to it if that's the case. - [ ] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/master/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/master/docs#writing-source-documentation). - [ ] Did you write any new necessary tests? ## Who can review? @gante ## Tests I ran `RUN_SLOW=1 py.test -vv tests/ctrl/test_modeling_tf_ctrl.py` but it only came to around 69% and failed the pre-trained model test, because the model is too big for my local machine to test.
03-17-2022 22:26:05
03-17-2022 22:26:05
_The documentation is not available anymore as the PR was closed or merged._<|||||>Going to run the slow tests locally, to confirm they pass<|||||>Can confirm that they pass, merging 👍
transformers
16,241
closed
[xtreme-s] Update Minds14 results
# What does this PR do? Updated results for the XTREME-S experiments with the new Minds14 splits
03-17-2022 21:13:52
03-17-2022 21:13:52
_The documentation is not available anymore as the PR was closed or merged._<|||||>Let's maybe put the new eval code (to do per lang eval) directly into this here as well?<|||||>@patrickvonplaten The per-language results are now reflected in the model card too: https://huggingface.co/anton-l/xtreme_s_xlsr_300m_minds14 (will update the MLS results after running one more full experiment). The names look a bit janky due to how metrics are parsed by the trainer, but overall the PR is ready to merge :slightly_smiling_face: <|||||>Perfect! Thanks for fixing this :-)
transformers
16,240
closed
add type hints for gpt2 model classes (PyTorch)
# What does this PR do? 1. This PR adds type hints for GPT2 as mentioned in #16059 (following docs and other PRs for GPT-like model classes). Notes: - I've specified tensor subtypes where I could, e.g. `Optional[torch.LongTensor]`, `Optional[torch.FloatTensor]` as opposed to `Optional[torch.Tensor]` (assuming that's desired) - I was not entirely sure about `encoder_hidden_states`, as those don't seem to be documented yet (Tuple?, assumed it's `torch.FloatTensor`) - I was not able to run `pytest` yet due to local (Flask version) conflicts - I was not able to run `make fixup` locally (first time contributing in Python, so happy to work on and learn setting up tests) ## Who can review? @Rocketknight1
03-17-2022 19:59:06
03-17-2022 19:59:06
The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_16240). All of your documentation changes will be reflected on that endpoint.<|||||>@KristijanArmeni thanks for this PR! I think the types you chose look good. Running `make fixup` can be tricky, though - it depends on having a working `transformers` dev environment. You need to follow the instructions here, but if you're running on Windows you might find that this is impossible: https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md If you can't get `make fixup` working, feel free to ping me and I'll do it! Make sure you've enabled contributions from maintainers on your PR.<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread. Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored.
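As a concrete illustration of the kind of annotation this PR adds (a trimmed, hypothetical signature, not the exact one from the diff), a GPT-2-style `forward` with the tensor subtypes the contributor mentions might look like this:

```python
from typing import Optional, Tuple, Union
import torch

# Hypothetical, heavily trimmed signature for illustration only; the real GPT2Model.forward
# has more parameters and returns a dedicated ModelOutput class.
def forward(
    self,
    input_ids: Optional[torch.LongTensor] = None,
    past_key_values: Optional[Tuple[Tuple[torch.Tensor]]] = None,
    attention_mask: Optional[torch.FloatTensor] = None,
    inputs_embeds: Optional[torch.FloatTensor] = None,
    encoder_hidden_states: Optional[torch.FloatTensor] = None,
    output_attentions: Optional[bool] = None,
    return_dict: Optional[bool] = None,
) -> Union[Tuple, "BaseModelOutputWithPastAndCrossAttentions"]:
    ...
```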
transformers
16,239
closed
Make `add-new-model-like` work in an env without all frameworks
# What does this PR do? This PR removes the need for a full dev install when a user wants to run `add-new-model-like`.
03-17-2022 19:40:35
03-17-2022 19:40:35
_The documentation is not available anymore as the PR was closed or merged._
transformers
16,238
closed
[Deepspeed] non-HF Trainer doc update
Adding some clarifications that `HfDeepSpeedConfig` is not needed unless Deepspeed ZeRO-3 is used w/o HF Trainer: based on this communication https://github.com/huggingface/transformers/issues/15399#issuecomment-1070364836 @sgugger
03-17-2022 19:36:37
03-17-2022 19:36:37
_The documentation is not available anymore as the PR was closed or merged._
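For readers who land on this entry without the docs: the pattern being clarified is roughly the one below, a sketch closely following the non-Trainer section of the DeepSpeed docs (the config dict is abbreviated and `deepspeed` must be installed). `HfDeepSpeedConfig` matters only for ZeRO-3 without the HF Trainer, where it has to be created before `from_pretrained` so the weights are partitioned at load time.

```python
import deepspeed
from transformers import AutoModel
from transformers.deepspeed import HfDeepSpeedConfig

ds_config = {"zero_optimization": {"stage": 3}, "train_micro_batch_size_per_gpu": 1}  # abbreviated example config

# Only needed for ZeRO-3 used without the HF Trainer; keep this object alive
# so that from_pretrained shards the weights as they are loaded.
dschf = HfDeepSpeedConfig(ds_config)

model = AutoModel.from_pretrained("bert-base-uncased")
engine, *_ = deepspeed.initialize(model=model, config_params=ds_config)
```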
transformers
16,237
closed
Draft a guide with our code quirks for new models
# What does this PR do? This PR adds a few indications on code styling for new model additions. I put the one that I could think of quickly, but there are probably more!
03-17-2022 19:14:09
03-17-2022 19:14:09
_The documentation is not available anymore as the PR was closed or merged._
transformers
16,236
closed
[FlaxSpeechEncoderDecoderModel] Skip from_encoder_decoder_pretrained
# What does this PR do? Skip `test_encoder_decoder_model_from_encoder_decoder_pretrained` until this issue is fixed: https://github.com/google/jax/issues/9941 cc @sanchit-gandhi @patrickvonplaten
03-17-2022 18:45:23
03-17-2022 18:45:23
The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_16236). All of your documentation changes will be reflected on that endpoint.
transformers
16,235
closed
Enable the use of private models in example scripts using use_auth_token
# 🚀 Feature request Currently private models can be used with the example scripts by setting the `use_auth_token` argument to `True`; however, this doesn't seem to be an option for private datasets. ## Motivation Enabling the use of private datasets with the example scripts ## Your contribution I'm happy to work on a PR to add this to all example scripts if this is indeed the case and I haven't missed another method for using private datasets.
03-17-2022 17:52:48
03-17-2022 17:52:48
cc @sgugger <|||||>Sure, I believe it's just a matter of passing the token in the right places :-)<|||||>Great thanks @sgugger. Will work on a PR adding those in :)<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread. Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored.
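A sketch of what the requested change amounts to in user code (the repo names below are placeholders): the model side already accepts the token, and the ask is to forward the same flag to `load_dataset` inside the example scripts.

```python
from datasets import load_dataset
from transformers import AutoModelForSequenceClassification, AutoTokenizer

use_auth_token = True  # picks up the token stored by `huggingface-cli login`

# Private model: already supported by the example scripts.
tokenizer = AutoTokenizer.from_pretrained("my-org/private-model", use_auth_token=use_auth_token)
model = AutoModelForSequenceClassification.from_pretrained("my-org/private-model", use_auth_token=use_auth_token)

# Private dataset: the part the issue proposes to wire into the scripts.
dataset = load_dataset("my-org/private-dataset", use_auth_token=use_auth_token)
```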
transformers
16,234
closed
Loading from AutoModel gives ValueError: You have to specify either decoder_input_ids or decoder_inputs_embeds
Executing the following code snippet states that ValueError: You have to specify either decoder_input_ids or decoder_inputs_embeds. ```py from transformers import AutoModel, AutoTokenizer model_name = "castorini/t5-base-canard" model = AutoModel.from_pretrained(model_name) tokenizer = AutoTokenizer.from_pretrained(model_name) context = ''' Frank Zappa ||| Disbandment ||| What group disbanded ||| Zappa and the Mothers of Invention ||| When did they disband? ''' encoded_input = tokenizer( context, padding='max_length', max_length=512, truncation=True, return_tensors="pt", ) decoder_input = tokenizer( context, padding='max_length', max_length=512, truncation=True, return_tensors="pt", ) encoder_output = model.generate(input_ids=encoded_input["input_ids"], decoder_input_ids=decoder_input["input_ids"]) output = tokenizer.decode( encoder_output[0], skip_special_tokens=True ) output ``` yields this error: ``` Some weights of the model checkpoint at castorini/t5-base-canard were not used when initializing T5Model: ['lm_head.weight'] - This IS expected if you are initializing T5Model from the checkpoint of a model trained on another task or with another architecture (e.g. initializing a BertForSequenceClassification model from a BertForPreTraining model). - This IS NOT expected if you are initializing T5Model from the checkpoint of a model that you expect to be exactly identical (initializing a BertForSequenceClassification model from a BertForSequenceClassification model). Input length of decoder_input_ids is 512, but ``max_length`` is set to 20. This can lead to unexpected behavior. You should consider increasing ``config.max_length`` or ``max_length``. --------------------------------------------------------------------------- ValueError Traceback (most recent call last) [<ipython-input-11-b9fe12b71812>](https://localhost:8080/#) in <module>() 24 ) 25 ---> 26 encoder_output = model.generate(input_ids=encoded_input["input_ids"], decoder_input_ids=decoder_input["input_ids"]) 27 output = tokenizer.decode( 28 encoder_output[0], 6 frames [/usr/local/lib/python3.7/dist-packages/transformers/models/t5/modeling_t5.py](https://localhost:8080/#) in forward(self, input_ids, attention_mask, encoder_hidden_states, encoder_attention_mask, inputs_embeds, head_mask, cross_attn_head_mask, past_key_values, use_cache, output_attentions, output_hidden_states, return_dict) 925 else: 926 err_msg_prefix = "decoder_" if self.is_decoder else "" --> 927 raise ValueError(f"You have to specify either {err_msg_prefix}input_ids or {err_msg_prefix}inputs_embeds") 928 929 if inputs_embeds is None: ValueError: You have to specify either decoder_input_ids or decoder_inputs_embeds ```
03-17-2022 17:16:56
03-17-2022 17:16:56
The problem here is that `AutoModel` does not load the full T5 model wiht a language head. One should use `AutoModelForSeq2SeqLM` inseatd: ```py from transformers import AutoModelForSeq2SeqLM, AutoTokenizer model_name = "castorini/t5-base-canard" model = AutoModelForSeq2SeqLM.from_pretrained(model_name) tokenizer = AutoTokenizer.from_pretrained(model_name) context = ''' Frank Zappa ||| Disbandment ||| What group disbanded ||| Zappa and the Mothers of Invention ||| When did they disband? ''' encoded_input = tokenizer( context, padding='max_length', max_length=512, truncation=True, return_tensors="pt", ) decoder_input = tokenizer( context, padding='max_length', max_length=512, truncation=True, return_tensors="pt", ) encoder_output = model.generate(input_ids=encoded_input["input_ids"], decoder_input_ids=decoder_input["input_ids"]) output = tokenizer.decode( encoder_output[0], skip_special_tokens=True ) output ``` This should work just fine @dxlong2000<|||||>> The problem here is that `AutoModel` does not load the full T5 model wiht a language head. One should use `AutoModelForSeq2SeqLM` inseatd: > > ```python > from transformers import AutoModelForSeq2SeqLM, AutoTokenizer > model_name = "castorini/t5-base-canard" > > model = AutoModelForSeq2SeqLM.from_pretrained(model_name) > tokenizer = AutoTokenizer.from_pretrained(model_name) > > context = ''' > Frank Zappa ||| Disbandment ||| What group disbanded ||| Zappa and the Mothers of Invention ||| When did they disband? > ''' > > encoded_input = tokenizer( > context, > padding='max_length', > max_length=512, > truncation=True, > return_tensors="pt", > ) > decoder_input = tokenizer( > context, > padding='max_length', > max_length=512, > truncation=True, > return_tensors="pt", > ) > > encoder_output = model.generate(input_ids=encoded_input["input_ids"], decoder_input_ids=decoder_input["input_ids"]) > output = tokenizer.decode( > encoder_output[0], > skip_special_tokens=True > ) > output > ``` > > This should work just fine @dxlong2000 Sure thanks @patrickvonplaten !<|||||>> ```python > model.generate(input_ids=encoded_input["input_ids"], decoder_input_ids=decoder_input["input_ids"]) > ``` Dear Patrick, thank you very much for all your helpful answers here on github. They've helped me so many times! If I train a T5 model from `AutoModelForSeq2SeqLM` with my custom dataset with `Trainer`, what type of decoder input is the Trainer using? It seems I do not have to specify, and I've tried deducing it from the source and docs but I couldn't find an answer to that. I'd super appreciate some insights!
transformers
16,233
closed
How to pass args of generate() method via batch_decode
I want to specify the generate params in the line below: ``` generate(tokenized_input.input_ids, max_length=64, no_repeat_ngram_size=2, early_stopping=True, num_beams=4, num_return_sequences=3) ``` https://github.com/huggingface/transformers/blob/93d3fd86459eb27bc584da29a3d542817a395bca/examples/pytorch/summarization/run_summarization.py#L662 I specified the args as follows. I did not get any error, but I still get one sequence instead of 3 as expected. ``` batch_decode( predict_results.predictions, skip_special_tokens=True, early_stopping=True, clean_up_tokenization_spaces=False, no_repeat_ngram_size=2, num_return_sequences=3 ) ```
03-17-2022 17:03:58
03-17-2022 17:03:58
Currently the seq2seq trainer doesn't support the `num_return_sequences` argument for generation. You might need to tweak it here: https://github.com/huggingface/transformers/blob/v4.17.0/src/transformers/trainer_seq2seq.py#L158-L162<|||||>Thanks @qqaatw. Because batch_decode contains **kwargs, I thought maybe I could pass them through it, but it seems that is not implemented. So, in this case `generate()` will use default values and all metrics will be calculated based on that. `no_repeat_ngram_size` is really important to avoid generating duplicated tokens. <|||||>`batch_decode` isn't responsible for generation by design; it only converts ids to tokens and does some normalizations. So passing generation-related arguments to it won't work.
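Outside of the Trainer, the options in question belong to `generate` rather than `batch_decode`; a minimal sketch (using a public checkpoint as a stand-in) of where each argument goes:

```python
from transformers import AutoModelForSeq2SeqLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("t5-small")
model = AutoModelForSeq2SeqLM.from_pretrained("t5-small")

inputs = tokenizer("summarize: The quick brown fox jumps over the lazy dog.", return_tensors="pt")

# Generation-related arguments go here...
generated = model.generate(
    inputs.input_ids,
    max_length=64,
    num_beams=4,
    no_repeat_ngram_size=2,
    early_stopping=True,
    num_return_sequences=3,  # must be <= num_beams
)

# ...while batch_decode only turns the returned ids into text.
print(tokenizer.batch_decode(generated, skip_special_tokens=True))
```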
transformers
16,232
closed
A question about the cross-attention layer shape in encoder-decoder model
I am trying to initialize a bert2bert model with bert-base-uncased as encoder and bert-large-uncased as decoder with the following code: ``` from transformers import EncoderDecoderModel, BertTokenizer import torch tokenizer = BertTokenizer.from_pretrained("bert-base-uncased") model = EncoderDecoderModel.from_encoder_decoder_pretrained( "bert-base-uncased", "bert-large-uncased") ``` When I print the parameter shapes of the randomly initialized cross-attention layer, the results are: ``` (attention): BertAttention( (self): BertSelfAttention( (query): Linear(in_features=1024, out_features=1024, bias=True) (key): Linear(in_features=1024, out_features=1024, bias=True) (value): Linear(in_features=1024, out_features=1024, bias=True) (dropout): Dropout(p=0.1, inplace=False) ) (output): BertSelfOutput( (dense): Linear(in_features=1024, out_features=1024, bias=True) (LayerNorm): LayerNorm((1024,), eps=1e-12, elementwise_affine=True) (dropout): Dropout(p=0.1, inplace=False) ) ) (crossattention): BertAttention( (self): BertSelfAttention( (query): Linear(in_features=1024, out_features=1024, bias=True) (key): Linear(in_features=1024, out_features=1024, bias=True) (value): Linear(in_features=1024, out_features=1024, bias=True) (dropout): Dropout(p=0.1, inplace=False) ) (output): BertSelfOutput( (dense): Linear(in_features=1024, out_features=1024, bias=True) (LayerNorm): LayerNorm((1024,), eps=1e-12, elementwise_affine=True) (dropout): Dropout(p=0.1, inplace=False) ) ) ``` For the cross attention part, I think the shape for query and key should be (768,1024) instead of (1024,1024), since it converts the bert-base output dim (768) to the bert-large dim (1024). Can anyone tell me where I made a mistake and give me an explanation? Many thanks!
03-17-2022 16:46:41
03-17-2022 16:46:41
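One detail worth checking when reproducing the question above (hedged, since it requires downloading both checkpoints): `EncoderDecoderModel` bridges mismatched encoder/decoder widths outside the attention layers. When the two hidden sizes differ, it adds a projection from the encoder width to the decoder width, so the decoder's cross-attention query/key/value layers can stay at (1024, 1024) while still consuming bert-base outputs.

```python
from transformers import EncoderDecoderModel

model = EncoderDecoderModel.from_encoder_decoder_pretrained("bert-base-uncased", "bert-large-uncased")

# Encoder outputs (768) are projected to the decoder width (1024) before they reach
# the cross-attention layers, which is why those layers are square.
print(model.enc_to_dec_proj)  # expected: Linear(in_features=768, out_features=1024, bias=True)
```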
transformers
16,231
closed
fix(flax): generate with logits processor/warper
# What does this PR do? This PR fixes an issue with flax generate method where the logits processor and warper were not being considered. <!-- Congratulations! You've made it this far! You're not quite done yet though. Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution. Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change. Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost. --> <!-- Remove if not applicable --> ## Before submitting - [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case). - [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section? - [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link to it if that's the case. - [ ] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/master/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/master/docs#writing-source-documentation). - [ ] Did you write any new necessary tests? ## Who can review? Anyone in the community is free to review the PR once the tests have passed. Feel free to tag members/contributors who may be interested in your PR. @patil-suraj @patrickvonplaten <!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @ If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**. Please tag fewer than 3 people. Models: - albert, bert, xlm: @LysandreJik - blenderbot, bart, marian, pegasus, encoderdecoder, t5: @patrickvonplaten, @patil-suraj - longformer, reformer, transfoxl, xlnet: @patrickvonplaten - fsmt: @stas00 - funnel: @sgugger - gpt2: @patrickvonplaten, @LysandreJik - rag: @patrickvonplaten, @lhoestq - tensorflow: @LysandreJik Library: - benchmarks: @patrickvonplaten - deepspeed: @stas00 - ray/raytune: @richardliaw, @amogkam - text generation: @patrickvonplaten - tokenizers: @n1t0, @LysandreJik - trainer: @sgugger - pipelines: @LysandreJik Documentation: @sgugger HF projects: - datasets: [different repo](https://github.com/huggingface/datasets) - rust tokenizers: [different repo](https://github.com/huggingface/tokenizers) Examples: - maintained examples (not research project or legacy): @sgugger, @patil-suraj - research_projects/bert-loses-patience: @JetRunner - research_projects/distillation: @VictorSanh -->
03-17-2022 16:28:49
03-17-2022 16:28:49
_The documentation is not available anymore as the PR was closed or merged._
transformers
16,230
closed
Fix TFBert `past_key_values`
# What does this PR do? Currently, `TFBertEncoderDecoderModelTest.test_bert2bert_summarization` on `master` will fail because the `past_key_values` received by TF Bert is a tuple of 2 elements: - 1st element: it is just `encoder_hidden_states` - 2nd element: this is the same as `past_key_values` in the PyTorch version The remaining code in TF Bert expects `past_key_values` not to contain the 1st element. Prior to #15944, the `past_key_values` received was only the 2nd element. This PR fixes the test, but it looks like we should prepare the correct inputs rather than apply this hacky fix. I would like to have your (in particular @gante ) confirmation before going further, thanks.
03-17-2022 16:06:05
03-17-2022 16:06:05
_The documentation is not available anymore as the PR was closed or merged._<|||||>Here is the code I used for this issue/PR ```python from transformers import EncoderDecoderModel, AutoTokenizer, TFEncoderDecoderModel tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased") model = TFEncoderDecoderModel.from_pretrained("ydshieh/bert2bert-cnn_dailymail-fp16") ARTICLE_STUDENTS = """(CNN)Sigma Alpha Epsilon is under fire for a video showing party-bound fraternity members singing a racist chant. SAE's national chapter suspended the students, but University of Oklahoma President David Boren took it a step further, saying the university's affiliation with the fraternity is permanently done. The news is shocking, but it's not the first time SAE has faced controversy. SAE was founded March 9, 1856, at the University of Alabama, five years before the American Civil War, according to the fraternity website. When the war began, the group had fewer than 400 members, of which "369 went to war for the Confederate States and seven for the Union Army," the website says. The fraternity now boasts more than 200,000 living alumni, along with about 15,000 undergraduates populating 219 chapters and 20 "colonies" seeking full membership at universities. SAE has had to work hard to change recently after a string of member deaths, many blamed on the hazing of new recruits, SAE national President Bradley Cohen wrote in a message on the fraternity's website. The fraternity's website lists more than 130 chapters cited or suspended for "health and safety incidents" since 2010. At least 30 of the incidents involved hazing, and dozens more involved alcohol. However, the list is missing numerous incidents from recent months. Among them, according to various media outlets: Yale University banned the SAEs from campus activities last month after members allegedly tried to interfere with a sexual misconduct investigation connected to an initiation rite. Stanford University in December suspended SAE housing privileges after finding sorority members attending a fraternity function were subjected to graphic sexual content. And Johns Hopkins University in November suspended the fraternity for underage drinking. "The media has labeled us as the 'nation's deadliest fraternity,' " Cohen said. In 2011, for example, a student died while being coerced into excessive alcohol consumption, according to a lawsuit. SAE's previous insurer dumped the fraternity. "As a result, we are paying Lloyd's of London the highest insurance rates in the Greek-letter world," Cohen said. Universities have turned down SAE's attempts to open new chapters, and the fraternity had to close 12 in 18 months over hazing incidents.""" input_dict = tokenizer(ARTICLE_STUDENTS, return_tensors="tf") output_ids = model.generate(input_ids=input_dict["input_ids"], max_length=None).numpy().tolist() summary = tokenizer.batch_decode(output_ids, skip_special_tokens=True) ```<|||||>Agreed that we shouldn't need this hack. @ydshieh LMK if you want me to look into `generate()`, since I've been touching it recently<|||||>> Agreed that we shouldn't need this hack. @ydshieh LMK if you want me to look into `generate()`, since I've been touching it recently Yes, please :-) - I can check the logic flow in this code snippet though!<|||||>close since the root cause is found and being addressed in #16260
transformers
16,229
closed
Translate installation.mdx to Spanish
# What does this PR do? Adds the translation of [installation.mdx](https://github.com/huggingface/transformers/blob/master/docs/source/installation.mdx) to Spanish. Fixes # [15947](https://github.com/huggingface/transformers/issues/15947) (issue) ## Before submitting - [✓] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case). - [✓] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section?
03-17-2022 15:33:50
03-17-2022 15:33:50
_The documentation is not available anymore as the PR was closed or merged._<|||||>Thanks @lilianabs! 🤗<|||||>@omarespejel all of the changes have been added, thank you! :) <|||||>Thank you @lilianabs! 🤗. @sgugger LGTM 👍
transformers
16,228
closed
TFLongformer: Add missing type hints and unpack inputs decorator
# What does this PR do? 1. Adds missing type hints to the TF Longformer model classes. - Issue: #16059 - Note: I also added `Numpy array` as a possible argument type to parameters listed in `LONGFORMER_INPUTS_DOCSTRING`. 2. Unpacks TF Longformer model inputs using the `unpack_inputs` decorator. - Issue: #16051 ### Tests / Checks - `RUN_SLOW=1 py.test -vv tests/longformer/test_modeling_tf_longformer.py` ran without any failures. - I ran `make fixup` before my final commit. _This PR is part of the Great Code Cleanup 2022!_ @gante @Rocketknight1
03-17-2022 14:16:37
03-17-2022 14:16:37
Lol just realized I accidentally committed my changes using my work email address 🤦‍♂️<|||||>One thing I note in the docstring changes is that you wrote `Numpy array` rather than `np.ndarray`, which might be causing our doc builder some confusion! Can you try changing that and seeing if we get a green light on the doc building tests afterwards?<|||||>_The documentation is not available anymore as the PR was closed or merged._<|||||>...I take it back, the doc builder is fine. Can we make that change anyway, though?<|||||>Done! <|||||>LGTM!
transformers
16,227
closed
Fix Type Hint of Nan/Inf Logging Filter Arg
# What does this PR do? Nan/Inf Logging Filter Training Argument is type hinted as `str` even though it looks like it is a `bool`. Documentation marks it as a `bool`, the default value is a `bool`, and all the usages that I could see are for a `bool`. We are doing some work involving dependency injection and rely on type hints. This mismatch is throwing some of our systems off, which is how I found out about it. ## Before submitting - [x] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case). - [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section? - [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link to it if that's the case. - [ ] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/master/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/master/docs#writing-source-documentation). - [ ] Did you write any new necessary tests? ## Who can review? Anyone in the community is free to review the PR once the tests have passed. Feel free to tag members/contributors who may be interested in your PR.
03-17-2022 12:52:54
03-17-2022 12:52:54
The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_16227). All of your documentation changes will be reflected on that endpoint.
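For context, the change is essentially a one-word fix to a dataclass field annotation; a hypothetical sketch of the corrected shape is below (the field name `logging_nan_inf_filter` is my recollection of the argument the title refers to, not quoted from the diff):

```python
from dataclasses import dataclass, field

@dataclass
class TrainingArgumentsSketch:
    # Previously annotated as `str` despite the boolean default and boolean usage.
    logging_nan_inf_filter: bool = field(
        default=True,
        metadata={"help": "Filter nan and inf losses for logging."},
    )
```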
transformers
16,226
closed
add unpack_inputs decorator for marian
# What does this PR do? Add the unpack_inputs decorator to marian model and remove unneeded code Fixes # (issue) ## Before submitting - [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case). - [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section? - [x] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link to it if that's the case. - [ ] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/master/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/master/docs#writing-source-documentation). - [ ] Did you write any new necessary tests? ## Who can review? @gante ## Tests The tests via `RUN_SLOW=1 py.test -vv tests/marian/test_modeling_tf_marian.py` ran without problems on the current state
03-17-2022 12:33:41
03-17-2022 12:33:41
_The documentation is not available anymore as the PR was closed or merged._<|||||>CI is not fully green, but I will merge -- the issue is separate from this PR, and is being tracked internally
transformers
16,225
closed
Truncation Error is inconsistent between fast and standard (non-fast) tokenizers
## Environment info - `transformers` version: 4.17.0 - Platform: Linux-5.4.144+-x86_64-with-Ubuntu-18.04-bionic - Python version: 3.7.12 - PyTorch version (GPU?): 1.10.0+cu111 (False) - Tensorflow version (GPU?): 2.8.0 (False) - Flax version (CPU?/GPU?/TPU?): not installed (NA) - Jax version: not installed - JaxLib version: not installed - Using GPU in script?: NO - Using distributed or parallel set-up in script?: NO ### Who can help Library: - Tokenizers: @SaulLu ## Information When truncation cannot be performed due to length issue Fast tokenizer raises Exception whereas standard (non-fast) only logs error and do not perform truncation at all. I only tested with AutoTokenizer (bert-base-uncased) but believe this is general on the level of `PreTrainedTokenizer` and `PreTrainedTokenizerFast`. ## To reproduce ```python from transformers import AutoTokenizer from transformers.tokenization_utils_base import ( BatchEncoding, PaddingStrategy, TruncationStrategy, ) fast_tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased", use_fast=True) tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased", use_fast=False) text = "Lorem ipsum dolor sit amet, consectetur adipiscing elit, sed do eiusmod tempor incididunt ut labore et dolore magna aliqua." text_pair = "Excepteur sint occaecat cupidatat non proident, sunt in culpa qui officia deserunt mollit anim id est laborum." kwargs = { "text": text, "text_pair": text_pair, "max_length": 16, "truncation": TruncationStrategy.ONLY_FIRST } # logs: We need to remove 69 to truncate the input but the first sequence has a length 44. Please select another truncation strategy than TruncationStrategy.ONLY_FIRST, for instance 'longest_first' or 'only_second'. print("Tokenizer encode_plus") batch = tokenizer.encode_plus(**kwargs) print(len(batch.input_ids) <= kwargs["max_length"]) # raises Exception: Truncation error: Sequence to truncate too short to respect the provided max_length print("Fast Tokenizer encode_plus") fast_tokenizer.encode_plus(**kwargs) ``` ## Expected behavior I expect consistent behavior among tokenizers. Either fast and non-fast just log error and do not truncate or both of them raise exception that can be easily identified (ideally custom not just general `Exception` with string description).
03-17-2022 12:08:10
03-17-2022 12:08:10
This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread. Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored.
transformers
16,224
closed
Skip equivalence test for TransfoXL
Skips the equivalence test for TransfoXL as it does not have the `loss` or `losses` returned.
03-17-2022 12:04:46
03-17-2022 12:04:46
_The documentation is not available anymore as the PR was closed or merged._
transformers
16,223
closed
Add type hints for XLM TensorFlow
# What does this PR do? This PR adds type hints to the XLM model for TensorFlow. Models: - albert, bert, xlm: @LysandreJik
03-17-2022 11:58:50
03-17-2022 11:58:50
The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_16223). All of your documentation changes will be reflected on that endpoint.
transformers
16,222
closed
Attention mask is important in the case of batching...
# What does this PR do? Fixes #16221 <!-- Congratulations! You've made it this far! You're not quite done yet though. Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution. Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change. Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost. --> <!-- Remove if not applicable --> Fixes # (issue) ## Before submitting - [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case). - [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section? - [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link to it if that's the case. - [ ] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/master/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/master/docs#writing-source-documentation). - [ ] Did you write any new necessary tests? ## Who can review? Anyone in the community is free to review the PR once the tests have passed. Feel free to tag members/contributors who may be interested in your PR. <!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @ If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**. Please tag fewer than 3 people. Models: - albert, bert, xlm: @LysandreJik - blenderbot, bart, marian, pegasus, encoderdecoder, t5: @patrickvonplaten, @patil-suraj - longformer, reformer, transfoxl, xlnet: @patrickvonplaten - fsmt: @stas00 - funnel: @sgugger - gpt2: @patrickvonplaten, @LysandreJik - rag: @patrickvonplaten, @lhoestq - tensorflow: @LysandreJik Library: - benchmarks: @patrickvonplaten - deepspeed: @stas00 - ray/raytune: @richardliaw, @amogkam - text generation: @patrickvonplaten - tokenizers: @n1t0, @LysandreJik - trainer: @sgugger - pipelines: @LysandreJik Documentation: @sgugger HF projects: - datasets: [different repo](https://github.com/huggingface/datasets) - rust tokenizers: [different repo](https://github.com/huggingface/tokenizers) Examples: - maintained examples (not research project or legacy): @sgugger, @patil-suraj - research_projects/bert-loses-patience: @JetRunner - research_projects/distillation: @VictorSanh -->
03-17-2022 11:23:56
03-17-2022 11:23:56
_The documentation is not available anymore as the PR was closed or merged._
transformers
16,221
closed
token classification pipeline does not use attention mask, affects predictions
## Environment info <!-- You can run the command `transformers-cli env` and copy-and-paste its output below. Don't forget to fill out the missing fields in that output! --> - `transformers` version: 4.17.0 - Platform: Mac - Python version: 3.7.10 - PyTorch version (GPU?): 1.8.1 no GPU - Using GPU in script?: no - Using distributed or parallel set-up in script?: no ### Who can help @Narsil ## Information Model I am using: Bert for token classification The problem arises when using: * [ ] the official example scripts: (give details below) * [X] my own modified scripts: (give details below) The tasks I am working on is: * [ ] an official GLUE/SQUaD task: (give the name) * [X] my own task or dataset: (give details below) NER pipeline ## To reproduce Steps to reproduce the behavior: ```python from transformers import AutoModelForTokenClassification, AutoTokenizer, \ TokenClassificationPipeline model = AutoModelForTokenClassification.from_pretrained("alvaroalon2/biobert_diseases_ner") tokenizer = AutoTokenizer.from_pretrained("alvaroalon2/biobert_diseases_ner") pipeline = TokenClassificationPipeline(model=model, tokenizer=tokenizer, aggregation_strategy="first", ignore_labels=["0", "O"], ) sentences = ["hello, I have influenza", "Hepatitis is a known problem for many nations, particularly in " "combination with rabies"] print(pipeline(sentences, batch_size=1)) print(pipeline(sentences, batch_size=2)) ``` output: ```python [[{'entity_group': 'DISEASE', 'score': 0.9999732, 'word': 'influenza', 'start': 14, 'end': 23}], [{'entity_group': 'DISEASE', 'score': 0.99998534, 'word': 'Hepatitis', 'start': 0, 'end': 9}, {'entity_group': 'DISEASE', 'score': 0.99989414, 'word': 'rabies', 'start': 80, 'end': 86}]] [[], [{'entity_group': 'DISEASE', 'score': 0.99998534, 'word': 'Hepatitis', 'start': 0, 'end': 9}, {'entity_group': 'DISEASE', 'score': 0.99989414, 'word': 'rabies', 'start': 80, 'end': 86}]] ``` <!-- If you have code snippets, error messages, stack traces please provide them here as well. Important! Use code tags to correctly format your code. See https://help.github.com/en/github/writing-on-github/creating-and-highlighting-code-blocks#syntax-highlighting Do not use screenshots, as they are hard to read and (more importantly) don't allow others to copy-and-paste your code.--> ## Expected behavior The outputs should be the same regardless of batch size. The attention mask is not passed to the model, leading to the discrepancy. If may just be a case of setting `return_attention_mask=True` in the [preprocessing](https://github.com/huggingface/transformers/blob/25b8f9a85b929e2b9acb190b6df7f094d07e4b61/src/transformers/pipelines/token_classification.py#L195) function. Let me know if you need any more clarification! Thanks
03-17-2022 11:02:00
03-17-2022 11:02:00
Hi @CallumMcMahon , Thank you very much for reporting this. It is indeed the case that somehow `return_attention_mask` is deliberately set to `False`. I tried to look back in the history for why it was there but couldn't find a very good reason, so I removed it in the fix PR.
transformers
16,220
closed
[Flax] remove jax.ops.index
# What does this PR do? Remove `jax.ops.index`, as this method was removed in the latest jax release, `v0.3.2`.
03-17-2022 10:44:08
03-17-2022 10:44:08
_The documentation is not available anymore as the PR was closed or merged._<|||||>There seem to be some failing tests still :-) <|||||>The failing tests are not related to this PR.<|||||>Hi, I am not very into the detail here, but curious about that if this implies from this version, `transformers` will only work with the new version of `jax`? <|||||>> Hi, I am not very into the detail here, but curious about that if this implies from this version, `transformers` will only work with the new version of `jax`? No, the `.at` syntax is also supported in older jax versions.<|||||>Feel free to merge if you want @patil-suraj
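For anyone hitting the removal, the migration the PR performs looks like this (the removed call is left commented out so the snippet runs on current jax):

```python
import jax.numpy as jnp

x = jnp.zeros(5)

# Old API removed from recent jax releases:
#   x = jax.ops.index_update(x, jax.ops.index[2], 1.0)

# Replacement `.at` syntax, also available in older jax versions:
x = x.at[2].set(1.0)
print(x)  # [0. 0. 1. 0. 0.]
```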
transformers
16,219
closed
MaskFormer: fix device on test
# What does this PR do? This PR fixes some failing tests in MaskFormer that were caused by not setting `torch_device`.
03-17-2022 09:49:59
03-17-2022 09:49:59
_The documentation is not available anymore as the PR was closed or merged._
transformers
16,218
closed
[Tests] Fix DiT test
# What does this PR do? Fixes the DiT integration test by setting the appropriate device.
03-17-2022 08:20:54
03-17-2022 08:20:54
_The documentation is not available anymore as the PR was closed or merged._
transformers
16,217
closed
Fix readmes
# What does this PR do? Removes duplicate items. <!-- Congratulations! You've made it this far! You're not quite done yet though. Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution. Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change. Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost. --> <!-- Remove if not applicable --> Fixes # (issue) ## Before submitting - [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case). - [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section? - [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link to it if that's the case. - [ ] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/master/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/master/docs#writing-source-documentation). - [ ] Did you write any new necessary tests? ## Who can review? Anyone in the community is free to review the PR once the tests have passed. Feel free to tag members/contributors who may be interested in your PR. @sgugger <!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @ If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**. Please tag fewer than 3 people. Models: - albert, bert, xlm: @LysandreJik - blenderbot, bart, marian, pegasus, encoderdecoder, t5: @patrickvonplaten, @patil-suraj - longformer, reformer, transfoxl, xlnet: @patrickvonplaten - fsmt: @stas00 - funnel: @sgugger - gpt2: @patrickvonplaten, @LysandreJik - rag: @patrickvonplaten, @lhoestq - tensorflow: @LysandreJik Library: - benchmarks: @patrickvonplaten - deepspeed: @stas00 - ray/raytune: @richardliaw, @amogkam - text generation: @patrickvonplaten - tokenizers: @n1t0, @LysandreJik - trainer: @sgugger - pipelines: @LysandreJik Documentation: @sgugger HF projects: - datasets: [different repo](https://github.com/huggingface/datasets) - rust tokenizers: [different repo](https://github.com/huggingface/tokenizers) Examples: - maintained examples (not research project or legacy): @sgugger, @patil-suraj - research_projects/bert-loses-patience: @JetRunner - research_projects/distillation: @VictorSanh -->
03-17-2022 06:25:39
03-17-2022 06:25:39
_The documentation is not available anymore as the PR was closed or merged._
transformers
16,216
closed
Question about `query_length` in modeling_t5.py
https://github.com/huggingface/transformers/blob/0a057201a96565df29984d716f660fd8d634329a/src/transformers/models/t5/modeling_t5.py#L464 If `past_key_value` is not None, the `query_length` inferred inside `SelfAttention` can be the length of `past_key_value` + `hidden_states`, which is actually wrong. Luckily, this problematic behavior does not affect `EncDecAttention`, since the `position_bias` there is always zero. Correct me if I am wrong.
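To make the concern above concrete, here is a hedged, self-contained paraphrase of the bookkeeping in question (variable names approximate; the linked line in modeling_t5.py is authoritative): with a cache present and no explicit `query_length`, the inferred query length becomes the cached length plus the current length, which matters for the relative `position_bias` in self-attention but not in cross-attention, where that bias is zero.
```
import torch

def effective_query_length(hidden_states, past_key_value=None, query_length=None):
    """Hedged sketch of the length bookkeeping discussed above (not a verbatim copy of modeling_t5.py)."""
    seq_length = hidden_states.shape[1]
    real_seq_length = seq_length
    if past_key_value is not None:
        # With a cache and no explicit query_length, the cached length is added to the
        # current length: this is the sum the issue flags as suspicious for self-attention.
        real_seq_length += past_key_value[0].shape[2] if query_length is None else query_length
    return real_seq_length

# Example: one new token with a cache of 4 past positions gives 5, not 1
hidden_states = torch.zeros(1, 1, 8)
past_key = torch.zeros(1, 2, 4, 8)  # (batch, heads, past_len, dim)
print(effective_query_length(hidden_states, past_key_value=(past_key, past_key)))  # 5
```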
03-17-2022 05:28:31
03-17-2022 05:28:31
This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread. Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored.
transformers
16,215
closed
Framework split for Spanish version of doc quicktour.mdx
# What does this PR do? This PR applies the framework split performed in PR #16030 to the Spanish version of quicktour.mdx.
03-17-2022 04:28:56
03-17-2022 04:28:56
_The documentation is not available anymore as the PR was closed or merged._<|||||>Sorry for those mistakes! My IDE was formatting MD-style files funny.
transformers
16,214
closed
Add type hints to xlnet
# What does this PR do? I added type annotations for xlnet (PT and TF) as described in [#16059] <!-- Congratulations! You've made it this far! You're not quite done yet though. Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution. Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change. Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost. --> <!-- Remove if not applicable --> ## Before submitting - [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case). - [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section? - [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link to it if that's the case. - [ ] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/master/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/master/docs#writing-source-documentation). - [ ] Did you write any new necessary tests? ## Who can review? @Rocketknight1 @patrickvonplaten @sgugger Anyone in the community is free to review the PR once the tests have passed. Feel free to tag members/contributors who may be interested in your PR. <!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @ If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**. Please tag fewer than 3 people. Models: - albert, bert, xlm: @LysandreJik - blenderbot, bart, marian, pegasus, encoderdecoder, t5: @patrickvonplaten, @patil-suraj - longformer, reformer, transfoxl, xlnet: @patrickvonplaten - fsmt: @stas00 - funnel: @sgugger - gpt2: @patrickvonplaten, @LysandreJik - rag: @patrickvonplaten, @lhoestq - tensorflow: @LysandreJik Library: - benchmarks: @patrickvonplaten - deepspeed: @stas00 - ray/raytune: @richardliaw, @amogkam - text generation: @patrickvonplaten - tokenizers: @n1t0, @LysandreJik - trainer: @sgugger - pipelines: @LysandreJik Documentation: @sgugger HF projects: - datasets: [different repo](https://github.com/huggingface/datasets) - rust tokenizers: [different repo](https://github.com/huggingface/tokenizers) Examples: - maintained examples (not research project or legacy): @sgugger, @patil-suraj - research_projects/bert-loses-patience: @JetRunner - research_projects/distillation: @VictorSanh -->
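For context, a hedged sketch of the kind of annotation such PRs add (this illustrates the pattern from #16059, not the actual diff; the real XLNet `forward`/`call` methods take many more arguments):
```
from typing import Optional, Tuple, Union

import torch
from transformers.models.xlnet.modeling_xlnet import XLNetModelOutput

class AnnotatedToyModule(torch.nn.Module):
    # Illustrative signature only: Optional tensor inputs plus a Union return type.
    def forward(
        self,
        input_ids: Optional[torch.Tensor] = None,
        attention_mask: Optional[torch.Tensor] = None,
        return_dict: Optional[bool] = None,
    ) -> Union[Tuple, XLNetModelOutput]:
        ...
```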
03-17-2022 02:17:13
03-17-2022 02:17:13
_The documentation is not available anymore as the PR was closed or merged._
transformers
16,213
closed
ValueError: HFArgumentParser on running run_glue.py
Hi, I get the following error with HFArgumentParser on running run_glue.py. Any help would be appreciated. Thanks! ## Environment info <!-- You can run the command `transformers-cli env` and copy-and-paste its output below. Don't forget to fill out the missing fields in that output! --> - `transformers` version: 4.16.2 - Platform: CentOS - Python version: 3.9 - PyTorch version (GPU?): 1.10 - Tensorflow version (GPU?): 2.4.1 - Using GPU in script?: Yes - Using distributed or parallel set-up in script?: No ### Who can help <!-- Your issue will be replied to more quickly if you can figure out the right person to tag with @ If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**. Please tag fewer than 3 people. Models: - BERT --> ## Information Model I am using (Bert): The problem arises when using: ``` export GLUE_DIR=/work/vrawte/glue_data export TASK_NAME=STSB python run_glue.py \ --model_type=bert \ --model_name_or_path=bert-base-cased \ --task_name=$TASK_NAME \ --do_train \ --do_eval \ --data_dir=$GLUE_DIR/$TASK_NAME \ --max_seq_length=128 \ --train_batch_size=32 \ --learning_rate=3e-5 \ --num_train_epochs=3.0 \ --output_dir=/tmp/$TASK_NAME \ --overwrite_output_dir \ --logging_steps=50 \ --save_steps=200 \ --num_cores=8 \ --only_log_master ``` The tasks I am working on is: * [ ] an official GLUE/SQUaD task: (give the name) ![image](https://user-images.githubusercontent.com/22553367/158710662-9ac2142a-96c3-4f63-9d71-847bf78bc019.png)
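A hedged guess at the cause, based only on the flags shown above (assuming the current examples/pytorch/text-classification/run_glue.py, which no longer accepts the legacy `--model_type`, `--data_dir`, `--train_batch_size`, `--num_cores`, `--only_log_master` options): `HfArgumentParser.parse_args_into_dataclasses()` raises a `ValueError` for any argument that none of its dataclasses define. A minimal reproduction of that behaviour:
```
from dataclasses import dataclass, field
from transformers import HfArgumentParser

@dataclass
class ModelArguments:
    model_name_or_path: str = field(default="bert-base-cased")

parser = HfArgumentParser(ModelArguments)

# Flags the dataclass does not know about end up as "remaining strings",
# and parse_args_into_dataclasses raises ValueError for them by default.
try:
    parser.parse_args_into_dataclasses(
        args=["--model_name_or_path", "bert-base-cased", "--model_type", "bert"]
    )
except ValueError as err:
    # message like: Some specified arguments are not used by the HfArgumentParser: ['--model_type', 'bert']
    print(err)
```
If that is indeed the cause, dropping or renaming the legacy flags (e.g. `--per_device_train_batch_size` instead of `--train_batch_size`) should make the parser happy.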
03-16-2022 23:59:34
03-16-2022 23:59:34
This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread. Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored.
transformers
16,212
closed
Adding Unpack Decorator For DPR model
## What does this PR do? Unpacks TF model inputs through a decorator, specifically for DPR. In response to #16051 ## Before submitting * [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case). * [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md#start-contributing-pull-requests) * [x] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? - #16051 * [ ] Did you make sure to update the documentation with your changes? * [ ] Did you write any new necessary tests? ## Review @gante ## I will post the test and make-fix results below Also, please let me know if I have done anything wrong. It is my first commit on a codebase this huge :)
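For context, a hedged, self-contained sketch of the idea behind an input-unpacking decorator (an illustration of the pattern from #16051, not the library's actual implementation): the decorator intercepts the call, normalizes the mix of positional/keyword/dict inputs into plain keyword arguments, and only then calls the wrapped `call` method.
```
import functools

def unpack_inputs_sketch(call_fn):
    """Toy version of the pattern: flatten a dict passed as the first argument into kwargs."""
    @functools.wraps(call_fn)
    def wrapper(self, *args, **kwargs):
        # If the model was called with a single dict (e.g. the output of a tokenizer),
        # unpack it so the wrapped method only ever sees keyword arguments.
        if len(args) == 1 and isinstance(args[0], dict) and not kwargs:
            return call_fn(self, **args[0])
        return call_fn(self, *args, **kwargs)
    return wrapper

class ToyModel:
    @unpack_inputs_sketch
    def call(self, input_ids=None, attention_mask=None, training=False):
        return {"input_ids": input_ids, "attention_mask": attention_mask}

print(ToyModel().call({"input_ids": [1, 2, 3], "attention_mask": [1, 1, 1]}))
```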
03-16-2022 22:38:33
03-16-2022 22:38:33
` ========================================================================================== test session starts ==========================================================================================platform linux -- Python 3.8.10, pytest-7.1.0, pluggy-1.0.0 -- /usr/bin/python3 cachedir: .pytest_cache hypothesis profile 'default' -> database=DirectoryBasedExampleDatabase('/mnt/d/transformer/transformers-1/.hypothesis/examples') rootdir: /mnt/d/transformer/transformers-1, configfile: setup.cfg plugins: forked-1.4.0, hypothesis-6.39.3, timeout-2.1.0, xdist-2.5.0, dash-2.3.0, xonsh-0.9.13 collected 33 items tests/dpr/test_modeling_tf_dpr.py::TFDPRModelTest::test_attention_outputs <- tests/test_modeling_tf_common.py PASSED [ 3%] tests/dpr/test_modeling_tf_dpr.py::TFDPRModelTest::test_compile_tf_model <- tests/test_modeling_tf_common.py PASSED [ 6%] tests/dpr/test_modeling_tf_dpr.py::TFDPRModelTest::test_config PASSED [ 9%] tests/dpr/test_modeling_tf_dpr.py::TFDPRModelTest::test_determinism <- tests/test_modeling_tf_common.py PASSED [ 12%] tests/dpr/test_modeling_tf_dpr.py::TFDPRModelTest::test_dpr_context_encoder_model PASSED [ 15%] tests/dpr/test_modeling_tf_dpr.py::TFDPRModelTest::test_dpr_question_encoder_model PASSED [ 18%] tests/dpr/test_modeling_tf_dpr.py::TFDPRModelTest::test_dpr_reader_model PASSED [ 21%] tests/dpr/test_modeling_tf_dpr.py::TFDPRModelTest::test_forward_signature <- tests/test_modeling_tf_common.py PASSED [ 24%] tests/dpr/test_modeling_tf_dpr.py::TFDPRModelTest::test_generate_with_headmasking <- tests/test_modeling_tf_common.py PASSED [ 27%] tests/dpr/test_modeling_tf_dpr.py::TFDPRModelTest::test_headmasking <- tests/test_modeling_tf_common.py PASSED [ 30%] tests/dpr/test_modeling_tf_dpr.py::TFDPRModelTest::test_hidden_states_output <- tests/test_modeling_tf_common.py PASSED [ 33%] tests/dpr/test_modeling_tf_dpr.py::TFDPRModelTest::test_initialization <- tests/test_modeling_tf_common.py PASSED [ 36%] tests/dpr/test_modeling_tf_dpr.py::TFDPRModelTest::test_inputs_embeds <- tests/test_modeling_tf_common.py PASSED [ 39%] tests/dpr/test_modeling_tf_dpr.py::TFDPRModelTest::test_keras_save_load <- tests/test_modeling_tf_common.py PASSED [ 42%] tests/dpr/test_modeling_tf_dpr.py::TFDPRModelTest::test_keyword_and_dict_args <- tests/test_modeling_tf_common.py PASSED [ 45%] tests/dpr/test_modeling_tf_dpr.py::TFDPRModelTest::test_lm_head_model_beam_search_generate_dict_outputs <- tests/test_modeling_tf_common.py PASSED [ 48%] tests/dpr/test_modeling_tf_dpr.py::TFDPRModelTest::test_lm_head_model_no_beam_search_generate_dict_outputs <- tests/test_modeling_tf_common.py PASSED [ 51%] tests/dpr/test_modeling_tf_dpr.py::TFDPRModelTest::test_lm_head_model_random_beam_search_generate <- tests/test_modeling_tf_common.py PASSED [ 54%] tests/dpr/test_modeling_tf_dpr.py::TFDPRModelTest::test_lm_head_model_random_no_beam_search_generate <- tests/test_modeling_tf_common.py PASSED [ 57%] tests/dpr/test_modeling_tf_dpr.py::TFDPRModelTest::test_load_with_mismatched_shapes <- tests/test_modeling_tf_common.py PASSED [ 60%] tests/dpr/test_modeling_tf_dpr.py::TFDPRModelTest::test_loss_computation <- tests/test_modeling_tf_common.py PASSED [ 63%] tests/dpr/test_modeling_tf_dpr.py::TFDPRModelTest::test_model_common_attributes <- tests/test_modeling_tf_common.py PASSED [ 66%] tests/dpr/test_modeling_tf_dpr.py::TFDPRModelTest::test_model_from_pretrained PASSED [ 69%] tests/dpr/test_modeling_tf_dpr.py::TFDPRModelTest::test_model_main_input_name <- tests/test_modeling_tf_common.py PASSED [ 72%] 
tests/dpr/test_modeling_tf_dpr.py::TFDPRModelTest::test_model_outputs_equivalence <- tests/test_modeling_tf_common.py PASSED [ 75%] tests/dpr/test_modeling_tf_dpr.py::TFDPRModelTest::test_numpy_arrays_inputs <- tests/test_modeling_tf_common.py PASSED [ 78%] tests/dpr/test_modeling_tf_dpr.py::TFDPRModelTest::test_onnx_compliancy <- tests/test_modeling_tf_common.py PASSED [ 81%] tests/dpr/test_modeling_tf_dpr.py::TFDPRModelTest::test_onnx_runtime_optimize <- tests/test_modeling_tf_common.py PASSED [ 84%] tests/dpr/test_modeling_tf_dpr.py::TFDPRModelTest::test_pt_tf_model_equivalence <- tests/test_modeling_tf_common.py SKIPPED (test is PT+TF test) [ 87%] tests/dpr/test_modeling_tf_dpr.py::TFDPRModelTest::test_resize_token_embeddings <- tests/test_modeling_tf_common.py PASSED [ 90%] tests/dpr/test_modeling_tf_dpr.py::TFDPRModelTest::test_save_load <- tests/test_modeling_tf_common.py PASSED [ 93%] tests/dpr/test_modeling_tf_dpr.py::TFDPRModelTest::test_save_load_config <- tests/test_modeling_tf_common.py PASSED [ 96%] tests/dpr/test_modeling_tf_dpr.py::TFDPRModelIntegrationTest::test_inference_no_head PASSED [100%] =========================================================================================== warnings summary ============================================================================================../../../../home/rahul/.local/lib/python3.8/site-packages/flatbuffers/compat.py:19 /home/rahul/.local/lib/python3.8/site-packages/flatbuffers/compat.py:19: DeprecationWarning: the imp module is deprecated in favour of importlib; see the module's documentation for alternative uses import imp src/transformers/dynamic_module_utils.py:75 /mnt/d/transformer/transformers-1/src/transformers/dynamic_module_utils.py:75: DeprecationWarning: invalid escape sequence \s relative_imports = re.findall("^\s*import\s+\.(\S+)\s*$", content, flags=re.MULTILINE) src/transformers/dynamic_module_utils.py:77 /mnt/d/transformer/transformers-1/src/transformers/dynamic_module_utils.py:77: DeprecationWarning: invalid escape sequence \s relative_imports += re.findall("^\s*from\s+\.(\S+)\s+import", content, flags=re.MULTILINE) src/transformers/dynamic_module_utils.py:119 /mnt/d/transformer/transformers-1/src/transformers/dynamic_module_utils.py:119: DeprecationWarning: invalid escape sequence \s imports = re.findall("^\s*import\s+(\S+)\s*$", content, flags=re.MULTILINE) src/transformers/dynamic_module_utils.py:121 /mnt/d/transformer/transformers-1/src/transformers/dynamic_module_utils.py:121: DeprecationWarning: invalid escape sequence \s imports += re.findall("^\s*from\s+(\S+)\s+import", content, flags=re.MULTILINE) -- Docs: https://docs.pytest.org/en/stable/how-to/capture-warnings.html ` RUN_SLOW=1 py.test -vv tests/dpr/test_modeling_tf_dpr.py results!<|||||>_The documentation is not available anymore as the PR was closed or merged._<|||||>Awesome! Will merge as soon as CI gets to green
transformers
16,211
closed
SegFormer Quantized Model (Int8) not loading weights properly.
## Environment info <!-- You can run the command `transformers-cli env` and copy-and-paste its output below. Don't forget to fill out the missing fields in that output! --> - `transformers` version: 4.16.2 - Platform: Windows 11/Ubuntu 20.04 - Python version: 3.7 - PyTorch version (GPU?): 1.10.2+cu113 - Tensorflow version (GPU?): - - Using GPU in script?: Yes - Using distributed or parallel set-up in script?: No ### Who can help Models: @NielsRogge. Library: - Vision: @NielsRogge, @sgugger Model I am using (SegFormer): The problem arises when using: * [ ] the official example scripts: (give details below) * [X] my own modified scripts: (give details below) The tasks I am working on is: * [X] my own task or dataset: (give details below) Steps to reproduce the behavior: I use SegFormer on a custom dataset. SegFormer works on the custom dataset (3 classes, IoU @0.77, prediction looks great, done just like in the tutorial from @NielsRogge). What I want to do, is to perform dynamic post-training quantization on the SegFormer model. 1. `model_quantized = quantize_dynamic(model, {nn.Linear}, inplace=False, dtype=torch.qint8)` 2. `model_quantized.save_pretrained("segformer_quantized.pt")` Here comes the problem: VARIANT 1: ``` feature_extractor_inference = SegformerFeatureExtractor() model = SegformerForSemanticSegmentation.from_pretrained("segformer_quantized.pt").to(device) ``` VARIANT 2: ``` feature_extractor_inference = SegformerFeatureExtractor.from_pretrained("segformer_quantized.pt/config.json") model = SegformerForSemanticSegmentation.from_pretrained("segformer_quantized.pt").to(device) ``` Both variants seems not to properly load the quantized weights, hence blank predictions. How can this problem be solved? Thank you in advance, @NielsRogge ``` Some weights of the model checkpoint at nvidia/mit-b0 were not used when initializing SegformerForSemanticSegmentation: ['classifier.weight', 'classifier.bias'] - This IS expected if you are initializing SegformerForSemanticSegmentation from the checkpoint of a model trained on another task or with another architecture (e.g. initializing a BertForSequenceClassification model from a BertForPreTraining model). - This IS NOT expected if you are initializing SegformerForSemanticSegmentation from the checkpoint of a model that you expect to be exactly identical (initializing a BertForSequenceClassification model from a BertForSequenceClassification model). Some weights of SegformerForSemanticSegmentation were not initialized from the model checkpoint at nvidia/mit-b0 and are newly initialized: ['decode_head.linear_fuse.weight', 'decode_head.linear_c.3.proj.bias', 'decode_head.batch_norm.bias', 'decode_head.linear_c.0.proj.bias', 'decode_head.classifier.bias', 'decode_head.batch_norm.weight', 'decode_head.linear_c.0.proj.weight', 'decode_head.linear_c.2.proj.weight', 'decode_head.linear_c.1.proj.bias', 'decode_head.linear_c.2.proj.bias', 'decode_head.batch_norm.running_var', 'decode_head.batch_norm.running_mean', 'decode_head.linear_c.1.proj.weight', 'decode_head.classifier.weight', 'decode_head.batch_norm.num_batches_tracked', 'decode_head.linear_c.3.proj.weight'] You should probably TRAIN this model on a down-stream task to be able to use it for predictions and inference. 
Some weights of the model checkpoint at segformed_lanes_quantized.pt were not used when initializing SegformerModel: ['segformer.encoder.block.3.1.attention.self.value.zero_point', 'segformer.encoder.block.2.1.mlp.dense2._packed_params._packed_params', 'segformer.encoder.block.0.0.attention.self.query._packed_params.dtype', 'segformer.encoder.block.0.0.mlp.dense1.zero_point', 'segformer.encoder.block.0.0.attention.self.query._packed_params._packed_params', 'segformer.encoder.block.2.0.attention.self.key.zero_point', 'segformer.encoder.block.0.0.attention.self.value.zero_point', 'decode_head.linear_c.3.proj._packed_params._packed_params', 'segformer.encoder.block.1.0.mlp.dense1._packed_params._packed_params', 'segformer.encoder.block.2.1.mlp.dense2.zero_point', 'segformer.encoder.block.1.0.mlp.dense2._packed_params._packed_params', 'segformer.encoder.block.2.1.attention.self.key._packed_params.dtype', 'segformer.encoder.block.3.0.attention.self.key._packed_params._packed_params', 'segformer.encoder.block.0.0.attention.self.key._packed_params.dtype', 'decode_head.batch_norm.running_var', 'segformer.encoder.block.0.0.mlp.dense1._packed_params.dtype', 'segformer.encoder.block.1.0.attention.output.dense.zero_point', 'decode_head.linear_c.0.proj.scale', 'segformer.encoder.block.2.1.attention.self.value.scale', 'segformer.encoder.block.2.0.attention.self.key.scale', 'segformer.encoder.block.2.0.mlp.dense2.zero_point', 'segformer.encoder.block.2.1.attention.self.query.scale', 'segformer.encoder.block.1.0.mlp.dense2._packed_params.dtype', 'segformer.encoder.block.0.1.attention.self.value._packed_params._packed_params', 'segformer.encoder.block.3.1.attention.self.value._packed_params._packed_params', 'segformer.encoder.block.1.0.attention.self.query.zero_point', 'segformer.encoder.block.2.0.attention.self.key._packed_params._packed_params', 'segformer.encoder.block.1.1.attention.self.key._packed_params._packed_params', 'segformer.encoder.block.1.0.attention.self.value._packed_params._packed_params', 'segformer.encoder.block.3.0.attention.self.query._packed_params._packed_params', 'segformer.encoder.block.0.1.attention.self.key.scale', 'segformer.encoder.block.3.0.attention.self.key.scale', 'segformer.encoder.block.0.1.mlp.dense2.scale', 'decode_head.linear_c.1.proj._packed_params.dtype', 'segformer.encoder.block.2.0.mlp.dense1.zero_point', 'segformer.encoder.block.3.0.mlp.dense2._packed_params.dtype', 'segformer.encoder.block.1.1.mlp.dense1.scale', 'segformer.encoder.block.1.0.mlp.dense2.scale', 'segformer.encoder.block.1.1.attention.self.value._packed_params.dtype', 'segformer.encoder.block.3.1.attention.self.value._packed_params.dtype', 'segformer.encoder.block.3.1.attention.output.dense.scale', 'segformer.encoder.block.3.1.attention.self.query.scale', 'decode_head.batch_norm.bias', 'decode_head.linear_c.2.proj._packed_params.dtype', 'segformer.encoder.block.3.0.attention.output.dense._packed_params.dtype', 'segformer.encoder.block.2.0.attention.output.dense.scale', 'segformer.encoder.block.0.0.mlp.dense2.zero_point', 'segformer.encoder.block.2.1.mlp.dense1.zero_point', 'segformer.encoder.block.2.0.mlp.dense1._packed_params.dtype', 'segformer.encoder.block.3.0.attention.output.dense._packed_params._packed_params', 'segformer.encoder.block.1.0.mlp.dense1.zero_point', 'segformer.encoder.block.1.0.mlp.dense1._packed_params.dtype', 'segformer.encoder.block.3.1.attention.self.query._packed_params._packed_params', 'segformer.encoder.block.0.0.attention.self.value.scale', 
'segformer.encoder.block.3.1.mlp.dense1._packed_params.dtype', 'segformer.encoder.block.1.0.attention.self.key._packed_params._packed_params', 'segformer.encoder.block.2.0.attention.self.query.scale', 'segformer.encoder.block.1.1.mlp.dense1._packed_params.dtype', 'segformer.encoder.block.1.1.mlp.dense1.zero_point', 'segformer.encoder.block.0.0.mlp.dense2.scale', 'segformer.encoder.block.0.0.attention.output.dense.zero_point', 'segformer.encoder.block.0.1.attention.self.query._packed_params.dtype', 'segformer.encoder.block.3.0.mlp.dense1._packed_params.dtype', 'segformer.encoder.block.0.0.attention.self.value._packed_params._packed_params', 'segformer.encoder.block.2.1.attention.self.key.zero_point', 'decode_head.linear_c.1.proj.zero_point', 'decode_head.linear_c.1.proj.scale', 'segformer.encoder.block.2.1.mlp.dense1.scale', 'segformer.encoder.block.3.0.attention.self.query.zero_point', 'segformer.encoder.block.2.0.attention.self.query.zero_point', 'decode_head.linear_c.0.proj.zero_point', 'segformer.encoder.block.3.1.attention.self.query._packed_params.dtype', 'segformer.encoder.block.1.1.attention.self.query.zero_point', 'segformer.encoder.block.1.1.attention.self.key.zero_point', 'segformer.encoder.block.1.0.attention.self.key.zero_point', 'segformer.encoder.block.0.1.attention.self.key._packed_params.dtype', 'segformer.encoder.block.3.0.attention.output.dense.scale', 'segformer.encoder.block.0.1.mlp.dense2._packed_params._packed_params', 'segformer.encoder.block.1.1.attention.output.dense.zero_point', 'segformer.encoder.block.0.0.mlp.dense1.scale', 'segformer.encoder.block.2.1.mlp.dense2.scale', 'segformer.encoder.block.3.0.attention.self.value.scale', 'segformer.encoder.block.1.0.mlp.dense1.scale', 'decode_head.linear_c.2.proj.scale', 'segformer.encoder.block.2.1.attention.self.value._packed_params.dtype', 'segformer.encoder.block.3.0.attention.self.value._packed_params.dtype', 'segformer.encoder.block.2.0.attention.self.query._packed_params.dtype', 'segformer.encoder.block.3.0.mlp.dense2.scale', 'segformer.encoder.block.3.1.attention.self.key.scale', 'segformer.encoder.block.0.0.attention.self.query.zero_point', 'segformer.encoder.block.3.0.attention.self.query._packed_params.dtype', 'segformer.encoder.block.2.0.mlp.dense2._packed_params._packed_params', 'segformer.encoder.block.3.0.mlp.dense2._packed_params._packed_params', 'segformer.encoder.block.2.0.mlp.dense1._packed_params._packed_params', 'segformer.encoder.block.3.1.attention.output.dense._packed_params.dtype', 'segformer.encoder.block.1.0.attention.self.key.scale', 'segformer.encoder.block.3.1.mlp.dense1.zero_point', 'segformer.encoder.block.3.1.mlp.dense1._packed_params._packed_params', 'segformer.encoder.block.3.0.mlp.dense2.zero_point', 'segformer.encoder.block.2.1.attention.output.dense._packed_params.dtype', 'segformer.encoder.block.0.1.attention.self.query.zero_point', 'segformer.encoder.block.3.0.mlp.dense1.scale', 'segformer.encoder.block.3.0.attention.self.value._packed_params._packed_params', 'segformer.encoder.block.3.0.attention.self.value.zero_point', 'segformer.encoder.block.2.0.attention.self.key._packed_params.dtype', 'segformer.encoder.block.0.1.mlp.dense1.zero_point', 'segformer.encoder.block.3.1.mlp.dense2._packed_params._packed_params', 'segformer.encoder.block.3.1.mlp.dense2.zero_point', 'decode_head.linear_c.2.proj.zero_point', 'segformer.encoder.block.1.1.attention.self.query._packed_params.dtype', 'segformer.encoder.block.0.1.attention.output.dense._packed_params.dtype', 
'segformer.encoder.block.0.1.attention.output.dense.zero_point', 'segformer.encoder.block.1.1.attention.self.value.scale', 'decode_head.linear_c.0.proj._packed_params.dtype', 'segformer.encoder.block.0.1.attention.self.key.zero_point', 'segformer.encoder.block.0.1.attention.output.dense._packed_params._packed_params', 'segformer.encoder.block.0.1.attention.self.query._packed_params._packed_params', 'segformer.encoder.block.1.0.attention.output.dense._packed_params.dtype', 'segformer.encoder.block.2.1.attention.self.query._packed_params.dtype', 'segformer.encoder.block.0.0.mlp.dense2._packed_params.dtype', 'segformer.encoder.block.2.1.attention.self.value._packed_params._packed_params', 'segformer.encoder.block.0.1.attention.self.value.zero_point', 'segformer.encoder.block.2.0.mlp.dense2._packed_params.dtype', 'segformer.encoder.block.0.1.mlp.dense2.zero_point', 'decode_head.batch_norm.num_batches_tracked', 'segformer.encoder.block.0.1.attention.self.query.scale', 'segformer.encoder.block.1.1.mlp.dense2.zero_point', 'segformer.encoder.block.1.1.attention.self.key._packed_params.dtype', 'segformer.encoder.block.3.1.attention.self.key._packed_params._packed_params', 'segformer.encoder.block.0.1.mlp.dense2._packed_params.dtype', 'decode_head.linear_c.3.proj.scale', 'segformer.encoder.block.2.1.attention.output.dense.zero_point', 'segformer.encoder.block.0.1.mlp.dense1._packed_params.dtype', 'segformer.encoder.block.0.0.attention.self.value._packed_params.dtype', 'segformer.encoder.block.1.0.attention.self.value.scale', 'segformer.encoder.block.1.1.attention.output.dense._packed_params._packed_params', 'segformer.encoder.block.0.1.attention.self.key._packed_params._packed_params', 'decode_head.batch_norm.running_mean', 'segformer.encoder.block.2.1.mlp.dense1._packed_params.dtype', 'segformer.encoder.block.1.0.attention.self.key._packed_params.dtype', 'segformer.encoder.block.1.1.attention.self.query._packed_params._packed_params', 'segformer.encoder.block.1.0.attention.output.dense._packed_params._packed_params', 'segformer.encoder.block.1.1.attention.output.dense.scale', 'segformer.encoder.block.2.1.attention.self.query._packed_params._packed_params', 'segformer.encoder.block.1.1.attention.output.dense._packed_params.dtype', 'segformer.encoder.block.2.0.mlp.dense2.scale', 'segformer.encoder.block.2.1.attention.output.dense._packed_params._packed_params', 'segformer.encoder.block.2.1.attention.self.key.scale', 'segformer.encoder.block.3.1.attention.self.key._packed_params.dtype', 'segformer.encoder.block.3.0.attention.output.dense.zero_point', 'decode_head.linear_c.3.proj.zero_point', 'segformer.encoder.block.1.1.mlp.dense2._packed_params._packed_params', 'segformer.encoder.block.0.1.attention.self.value.scale', 'segformer.encoder.block.0.1.attention.self.value._packed_params.dtype', 'segformer.encoder.block.1.0.attention.self.query._packed_params.dtype', 'segformer.encoder.block.1.1.attention.self.key.scale', 'segformer.encoder.block.1.0.attention.output.dense.scale', 'segformer.encoder.block.1.0.mlp.dense2.zero_point', 'segformer.encoder.block.3.1.attention.self.key.zero_point', 'segformer.encoder.block.0.1.mlp.dense1._packed_params._packed_params', 'segformer.encoder.block.2.0.attention.output.dense._packed_params._packed_params', 'segformer.encoder.block.1.1.mlp.dense2._packed_params.dtype', 'segformer.encoder.block.3.1.mlp.dense1.scale', 'segformer.encoder.block.1.0.attention.self.query._packed_params._packed_params', 
'segformer.encoder.block.0.0.attention.output.dense._packed_params._packed_params', 'segformer.encoder.block.0.0.attention.self.key.zero_point', 'segformer.encoder.block.0.0.mlp.dense2._packed_params._packed_params', 'segformer.encoder.block.3.1.mlp.dense2._packed_params.dtype', 'segformer.encoder.block.2.0.attention.self.value.zero_point', 'segformer.encoder.block.3.1.attention.self.query.zero_point', 'segformer.encoder.block.0.0.attention.self.key._packed_params._packed_params', 'segformer.encoder.block.3.0.attention.self.key._packed_params.dtype', 'segformer.encoder.block.1.0.attention.self.query.scale', 'segformer.encoder.block.2.1.attention.output.dense.scale', 'segformer.encoder.block.1.0.attention.self.value.zero_point', 'decode_head.linear_c.1.proj._packed_params._packed_params', 'segformer.encoder.block.3.1.attention.output.dense.zero_point', 'segformer.encoder.block.1.1.attention.self.query.scale', 'decode_head.linear_fuse.weight', 'decode_head.classifier.bias', 'segformer.encoder.block.2.1.attention.self.query.zero_point', 'segformer.encoder.block.1.0.attention.self.value._packed_params.dtype', 'segformer.encoder.block.1.1.mlp.dense1._packed_params._packed_params', 'segformer.encoder.block.0.1.attention.output.dense.scale', 'segformer.encoder.block.2.1.attention.self.value.zero_point', 'decode_head.linear_c.0.proj._packed_params._packed_params', 'segformer.encoder.block.0.0.attention.self.key.scale', 'segformer.encoder.block.0.0.mlp.dense1._packed_params._packed_params', 'segformer.encoder.block.1.1.attention.self.value.zero_point', 'segformer.encoder.block.0.1.mlp.dense1.scale', 'segformer.encoder.block.3.0.attention.self.key.zero_point', 'decode_head.linear_c.3.proj._packed_params.dtype', 'segformer.encoder.block.2.0.attention.output.dense._packed_params.dtype', 'segformer.encoder.block.3.1.mlp.dense2.scale', 'decode_head.classifier.weight', 'segformer.encoder.block.2.0.attention.self.value._packed_params._packed_params', 'segformer.encoder.block.3.1.attention.self.value.scale', 'segformer.encoder.block.3.0.mlp.dense1._packed_params._packed_params', 'segformer.encoder.block.2.0.attention.self.query._packed_params._packed_params', 'segformer.encoder.block.2.1.attention.self.key._packed_params._packed_params', 'segformer.encoder.block.2.0.attention.self.value._packed_params.dtype', 'segformer.encoder.block.2.1.mlp.dense2._packed_params.dtype', 'decode_head.linear_c.2.proj._packed_params._packed_params', 'segformer.encoder.block.0.0.attention.output.dense.scale', 'segformer.encoder.block.2.0.attention.self.value.scale', 'segformer.encoder.block.3.0.mlp.dense1.zero_point', 'segformer.encoder.block.1.1.attention.self.value._packed_params._packed_params', 'decode_head.batch_norm.weight', 'segformer.encoder.block.3.1.attention.output.dense._packed_params._packed_params', 'segformer.encoder.block.2.0.mlp.dense1.scale', 'segformer.encoder.block.3.0.attention.self.query.scale', 'segformer.encoder.block.0.0.attention.self.query.scale', 'segformer.encoder.block.2.0.attention.output.dense.zero_point', 'segformer.encoder.block.1.1.mlp.dense2.scale', 'segformer.encoder.block.2.1.mlp.dense1._packed_params._packed_params', 'segformer.encoder.block.0.0.attention.output.dense._packed_params.dtype'] - This IS expected if you are initializing SegformerModel from the checkpoint of a model trained on another task or with another architecture (e.g. initializing a BertForSequenceClassification model from a BertForPreTraining model). 
- This IS NOT expected if you are initializing SegformerModel from the checkpoint of a model that you expect to be exactly identical (initializing a BertForSequenceClassification model from a BertForSequenceClassification model). Some weights of SegformerModel were not initialized from the model checkpoint at segformed_lanes_quantized.pt and are newly initialized: ['segformer.encoder.block.1.1.mlp.dense1.weight', 'segformer.encoder.block.2.1.mlp.dense2.bias', 'segformer.encoder.block.3.0.attention.self.key.weight', 'segformer.encoder.block.1.1.mlp.dense1.bias', 'segformer.encoder.block.1.0.attention.self.key.bias', 'segformer.encoder.block.3.1.attention.output.dense.bias', 'segformer.encoder.block.0.1.mlp.dense1.bias', 'segformer.encoder.block.3.1.attention.self.value.weight', 'segformer.encoder.block.3.0.attention.self.query.weight', 'segformer.encoder.block.0.0.attention.self.key.bias', 'segformer.encoder.block.2.0.attention.self.query.weight', 'segformer.encoder.block.3.1.mlp.dense2.weight', 'segformer.encoder.block.1.1.attention.self.query.bias', 'segformer.encoder.block.2.0.attention.self.value.weight', 'segformer.encoder.block.3.1.attention.self.key.bias', 'segformer.encoder.block.2.1.mlp.dense2.weight', 'segformer.encoder.block.2.1.attention.self.value.weight', 'segformer.encoder.block.3.1.attention.self.query.weight', 'segformer.encoder.block.2.0.attention.self.query.bias', 'segformer.encoder.block.1.1.attention.output.dense.weight', 'segformer.encoder.block.3.0.mlp.dense1.weight', 'segformer.encoder.block.3.0.attention.self.value.weight', 'segformer.encoder.block.3.0.mlp.dense2.weight', 'segformer.encoder.block.0.0.mlp.dense2.bias', 'segformer.encoder.block.0.1.mlp.dense2.bias', 'segformer.encoder.block.2.0.attention.self.key.bias', 'segformer.encoder.block.2.0.mlp.dense1.weight', 'segformer.encoder.block.1.0.attention.self.query.bias', 'segformer.encoder.block.2.1.attention.self.key.weight', 'segformer.encoder.block.0.0.attention.self.key.weight', 'segformer.encoder.block.0.0.mlp.dense1.weight', 'segformer.encoder.block.0.1.attention.output.dense.bias', 'segformer.encoder.block.0.1.attention.self.query.weight', 'segformer.encoder.block.3.1.mlp.dense2.bias', 'segformer.encoder.block.1.0.mlp.dense1.bias', 'segformer.encoder.block.2.0.mlp.dense1.bias', 'segformer.encoder.block.0.0.mlp.dense1.bias', 'segformer.encoder.block.2.0.attention.output.dense.bias', 'segformer.encoder.block.1.1.mlp.dense2.weight', 'segformer.encoder.block.2.0.attention.output.dense.weight', 'segformer.encoder.block.3.1.attention.output.dense.weight', 'segformer.encoder.block.1.1.attention.self.value.weight', 'segformer.encoder.block.2.0.attention.self.value.bias', 'segformer.encoder.block.1.0.mlp.dense1.weight', 'segformer.encoder.block.0.0.mlp.dense2.weight', 'segformer.encoder.block.0.1.attention.self.query.bias', 'segformer.encoder.block.2.1.mlp.dense1.bias', 'segformer.encoder.block.0.0.attention.self.query.bias', 'segformer.encoder.block.3.0.attention.output.dense.weight', 'segformer.encoder.block.1.0.mlp.dense2.weight', 'segformer.encoder.block.1.0.attention.output.dense.bias', 'segformer.encoder.block.0.1.attention.self.key.bias', 'segformer.encoder.block.3.0.attention.self.key.bias', 'segformer.encoder.block.3.1.attention.self.query.bias', 'segformer.encoder.block.0.0.attention.self.value.weight', 'segformer.encoder.block.3.1.mlp.dense1.weight', 'segformer.encoder.block.3.0.mlp.dense1.bias', 'segformer.encoder.block.2.1.attention.self.key.bias', 'segformer.encoder.block.2.1.attention.self.query.bias', 
'segformer.encoder.block.1.1.attention.self.key.bias', 'segformer.encoder.block.3.0.attention.self.query.bias', 'segformer.encoder.block.2.1.attention.self.query.weight', 'segformer.encoder.block.2.1.attention.output.dense.weight', 'segformer.encoder.block.2.0.mlp.dense2.weight', 'segformer.encoder.block.1.1.mlp.dense2.bias', 'segformer.encoder.block.1.0.attention.self.query.weight', 'segformer.encoder.block.3.0.attention.self.value.bias', 'segformer.encoder.block.0.0.attention.output.dense.weight', 'segformer.encoder.block.2.1.attention.self.value.bias', 'segformer.encoder.block.0.1.mlp.dense1.weight', 'segformer.encoder.block.1.0.attention.self.value.bias', 'segformer.encoder.block.2.1.attention.output.dense.bias', 'segformer.encoder.block.0.1.attention.output.dense.weight', 'segformer.encoder.block.1.0.attention.output.dense.weight', 'segformer.encoder.block.1.0.mlp.dense2.bias', 'segformer.encoder.block.2.0.attention.self.key.weight', 'segformer.encoder.block.0.1.attention.self.key.weight', 'segformer.encoder.block.3.0.mlp.dense2.bias', 'segformer.encoder.block.1.0.attention.self.value.weight', 'segformer.encoder.block.0.0.attention.self.value.bias', 'segformer.encoder.block.0.1.attention.self.value.bias', 'segformer.encoder.block.2.0.mlp.dense2.bias', 'segformer.encoder.block.1.1.attention.output.dense.bias', 'segformer.encoder.block.3.1.attention.self.value.bias', 'segformer.encoder.block.1.0.attention.self.key.weight', 'segformer.encoder.block.0.1.mlp.dense2.weight', 'segformer.encoder.block.3.1.attention.self.key.weight', 'segformer.encoder.block.3.0.attention.output.dense.bias', 'segformer.encoder.block.1.1.attention.self.value.bias', 'segformer.encoder.block.1.1.attention.self.key.weight', 'segformer.encoder.block.0.1.attention.self.value.weight', 'segformer.encoder.block.3.1.mlp.dense1.bias', 'segformer.encoder.block.0.0.attention.self.query.weight', 'segformer.encoder.block.0.0.attention.output.dense.bias', 'segformer.encoder.block.1.1.attention.self.query.weight', 'segformer.encoder.block.2.1.mlp.dense1.weight'] You should probably TRAIN this model on a down-stream task to be able to use it for predictions and inference. ```
03-16-2022 20:36:17
03-16-2022 20:36:17
Quantized (only background predicted) : ![quantized](https://user-images.githubusercontent.com/23454613/158686845-804e594a-b874-43d1-a52c-d8b3f87f4d11.png) <|||||>Initial model: ![normal](https://user-images.githubusercontent.com/23454613/158686978-01e29eae-8efd-4b71-897a-6802c7a773e7.png) <|||||>Hi, Cool to see you're interested in quantizing one of the models I've implemented! I feel like this could be interesting to support model quantization of vision models, cc @lewtun. We recently added ONNX support for the Vision Transformer (ViT), but things like dynamic post-training quantization are very interesting as well. To be honest I'm not an expert on that topic, hopefully @lewtun can bring some knowledge here.<|||||>Thank you very much for the fast response (also for the awesome contributions), looking forward to finding out a solution to this :D <|||||>Hey @TimbusCalin thanks for opening this issue. In general, you cannot load quantized models using the `from_pretrained()` method because quantization changes the model's `state_dict`. You can find a work around in this [forum post](https://discuss.huggingface.co/t/pegasus-model-weights-compression-pruning/6381/9?u=lewtun), but it does require one to load a dummy model. We're actually working on making is easy to apply various quantization techniques in our `optimum` [library](https://github.com/huggingface/optimum), so feel free to check that out in case it's relevant to your use case<|||||>@NielsRogge @lewtun thank you guys for the prompt responses and suggestions given!<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread. Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored.
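Building on the workaround mentioned above, a hedged sketch of the usual pattern for PyTorch dynamic quantization (the checkpoint paths and label count below are placeholders, not taken from this issue): save the quantized `state_dict` with `torch.save`, then rebuild the architecture, quantize the fresh model, and load the weights with `load_state_dict` instead of `from_pretrained`.
```
import torch
from torch.quantization import quantize_dynamic
from transformers import SegformerForSemanticSegmentation

# 1) Quantize the fine-tuned model and save its state_dict (not save_pretrained)
model = SegformerForSemanticSegmentation.from_pretrained("path/to/finetuned-segformer", num_labels=3)
model_quantized = quantize_dynamic(model, {torch.nn.Linear}, dtype=torch.qint8)
torch.save(model_quantized.state_dict(), "segformer_quantized_state_dict.pt")

# 2) At load time, rebuild the same architecture, quantize it, then load the weights
dummy = SegformerForSemanticSegmentation.from_pretrained("nvidia/mit-b0", num_labels=3)
dummy_quantized = quantize_dynamic(dummy, {torch.nn.Linear}, dtype=torch.qint8)
dummy_quantized.load_state_dict(torch.load("segformer_quantized_state_dict.pt"))
dummy_quantized.eval()
```
The key point is that both sides are quantized with the same spec before the `state_dict` is exchanged, so the keys match.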
transformers
16,210
closed
Using pooler/hidden states output of an AutoModel vs. XForSequenceClassification for training another classifier?
I have a finetuned RobertaForSequenceClassification model that I have already trained and saved to disk. I now want to load it, and instead of using it for classification tasks, extract the embeddings it generates and outputs, or "pooled/pooler output". From my understanding, I can load the model using X.fromPretrained() with "output_hidden_states=True". If I load the model using: `model = AutoModel.from_pretrained(model_dir, num_labels=2, output_hidden_states=True)` I get the warning about weights not being initialized: "roberta.pooler.dense.weight", "roberta.pooler.dense.bias", and understand they are randomly initialized on loading the model. Also from my understanding, I can still use this model to generate what I believe to be the pooler output by using something like: `pooler_output = model(input_ids, attention_mask=attention_mask)` Since the AutoModel does not load the weights/bias from the saved model, it leads to random results that I don't want. I've now read two closed issues [[1](https://github.com/huggingface/transformers/issues/1328#issuecomment-534956703), [2](https://github.com/huggingface/transformers/issues/7540#issuecomment-704155218)] that gave me some insight on how to generate this pooler output from XForSequenceClassification models. I don't understand that from the first issue, the poster "concatenates the last four layers" by using the indices -4 to -1 of the output. In my mind this means the last index of the hidden state's output of an XForSequenceClassification model is what is needed to retrieve pooler output. However, in the second link, and in the [RobertaClassificationHead code](https://github.com/huggingface/transformers/blob/master/src/transformers/models/roberta/modeling_roberta.py#L1445), they use the hidden representation of the [CLS] token only, something like: ``` outputs = model(input_ids, attention_mask=attention_mask) last_hidden_state = outputs[0] cls_representation = last_hidden_state[:,0,:] ``` However, when I do this by loading and using a RobertaForSequenceClassification model and instead using outputs[1] (since I do not use labels for the forward function), all rows in the resulting Tensor are equal. I am not sure if this is intended, but if so, then how is this useful for classification if I want to use this as input into another classifier as mentioned by the poster of the second link and the RobertaClassificationHead code? I am not sure if I missed something or there's something wrong with my trained model, but from what the first poster explains and how they take the last four layers, do I want to take outputs[-1][:,0,:] instead, or is outputs[0] actually the first embedding layer in XForSequenceClassification models rather than the final layer?
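A hedged sketch of one way to get the quantities discussed above from a saved `RobertaForSequenceClassification` checkpoint without re-initializing a pooler (checkpoint path is a placeholder): with `output_hidden_states=True`, `outputs.hidden_states[-1]` is the last layer, and slicing `[:, 0, :]` gives the `<s>`/[CLS] representation that `RobertaClassificationHead` itself uses.
```
import torch
from transformers import AutoTokenizer, RobertaForSequenceClassification

tokenizer = AutoTokenizer.from_pretrained("path/to/finetuned-roberta")
model = RobertaForSequenceClassification.from_pretrained("path/to/finetuned-roberta", output_hidden_states=True)
model.eval()

inputs = tokenizer(["an example sentence", "another one"], padding=True, return_tensors="pt")
with torch.no_grad():
    outputs = model(**inputs)

last_hidden_state = outputs.hidden_states[-1]    # (batch, seq_len, hidden_size)
cls_representation = last_hidden_state[:, 0, :]  # one vector per input sentence
print(cls_representation.shape)
```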
03-16-2022 19:37:58
03-16-2022 19:37:58
Hello, thanks for opening an issue! We try to keep the github issues for bugs/feature requests. Could you ask your question on the [forum](https://discuss.huggingface.co) instead? Thanks!<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread. Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored.
transformers
16,209
closed
Fix reproducibility in Training for PyTorch 1.11
# What does this PR do? It used to be that PyTorch performed just one random operation in the `RandomSampler` code, so to ensure reproducibility it was enough to fetch the first batch of a dataloader: its generator RNG then ended up in the same state as if we had iterated over the whole epoch. Not anymore: in PyTorch 1.11, `RandomSampler` performs two random operations, one of them at the very end, so we now need to iterate through the whole sampler to guarantee reproducibility. This PR deals with that.
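To make the fix concrete, a hedged sketch of the resume-time skipping logic (simplified; the real change lives in the Trainer and may differ in detail): on PyTorch < 1.11 starting a single iteration is enough, while on 1.11+ the whole `RandomSampler` has to be consumed so the extra random call at its end also happens.
```
import torch
from packaging import version
from torch.utils.data import RandomSampler

def replay_sampler_rng(train_dataloader, epochs_trained):
    """Hedged sketch: advance the sampler's RNG as if `epochs_trained` epochs had already run."""
    is_random_sampler = isinstance(getattr(train_dataloader, "sampler", None), RandomSampler)
    for _ in range(epochs_trained):
        if version.parse(torch.__version__) < version.parse("1.11") or not is_random_sampler:
            # One step is enough: the shuffle happens when iteration starts.
            for _ in train_dataloader:
                break
        else:
            # PyTorch 1.11+ adds a random operation at the very end of RandomSampler,
            # so the whole sampler must be consumed to reach the same RNG state.
            _ = list(train_dataloader.sampler)
```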
03-16-2022 19:11:31
03-16-2022 19:11:31
_The documentation is not available anymore as the PR was closed or merged._
transformers
16,208
closed
Fix typos in docstrings of data_collator.py
# What does this PR do? <!-- Congratulations! You've made it this far! You're not quite done yet though. Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution. Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change. Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost. --> <!-- Remove if not applicable --> Fixes two typos in `data_collator.py` ## Before submitting - [x] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case). - [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section? - [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link to it if that's the case. - [ ] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/master/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/master/docs#writing-source-documentation). - [ ] Did you write any new necessary tests? ## Who can review? Anyone in the community is free to review the PR once the tests have passed. Feel free to tag members/contributors who may be interested in your PR. <!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @ If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**. Please tag fewer than 3 people. Models: - albert, bert, xlm: @LysandreJik - blenderbot, bart, marian, pegasus, encoderdecoder, t5: @patrickvonplaten, @patil-suraj - longformer, reformer, transfoxl, xlnet: @patrickvonplaten - fsmt: @stas00 - funnel: @sgugger - gpt2: @patrickvonplaten, @LysandreJik - rag: @patrickvonplaten, @lhoestq - tensorflow: @LysandreJik Library: - benchmarks: @patrickvonplaten - deepspeed: @stas00 - ray/raytune: @richardliaw, @amogkam - text generation: @patrickvonplaten - tokenizers: @n1t0, @LysandreJik - trainer: @sgugger - pipelines: @LysandreJik Documentation: @sgugger HF projects: - datasets: [different repo](https://github.com/huggingface/datasets) - rust tokenizers: [different repo](https://github.com/huggingface/tokenizers) Examples: - maintained examples (not research project or legacy): @sgugger, @patil-suraj - research_projects/bert-loses-patience: @JetRunner - research_projects/distillation: @VictorSanh -->
03-16-2022 17:56:41
03-16-2022 17:56:41
transformers
16,207
closed
rembert/splinter (torch) type annotations
#16059 Type annotations for Rembert & Splinter (Torch) and copies
03-16-2022 16:59:57
03-16-2022 16:59:57
@Rocketknight1 this passed tests - I think I was missing an Optional typing import on Splinter. No build on PR though<|||||>The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_16207). All of your documentation changes will be reflected on that endpoint.<|||||>@Rocketknight1 Rembert and Splinter done, as well as copies from Bert. Tests failing on flax, which wasn't modified. <|||||>Hi @jacobdineen there are a lot of changes in unrelated files here, mostly just reformatting. Did you do that manually, or did it happen when you ran something like `make style` or `make fixup`?<|||||>@Rocketknight1 The auxiliary files that were updated were the result of running make fix-copies, and then make fixup. I tried several different things, but the build always failed in a single place that I wasn't able to track down. Something about an Optional call not defined in the model template file, which was not something that was modified.<|||||>Hi @jacobdineen, that's weird. Can you make a new PR (make sure to fork/branch from the most recent version of the library!) and just make the type hint changes to Rembert + Splinter, without any formatting changes or things like `make fixup` or `make fix-copies`? Tests will fail, but I'll review it and run the fixup myself here, and I should be able to resolve any problems at my end. Thanks for your patience here - hopefully we can figure this one out!<|||||>@Rocketknight1 - Will do! Closing this PR now.
transformers
16,206
closed
Fix generation min length
# What does this PR do? <!-- Congratulations! You've made it this far! You're not quite done yet though. Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution. Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change. Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost. --> <!-- Remove if not applicable --> This PR fixes a typo in `generation_utils.py`. `min_length` of `0` has no effect so we don't need to check whether it's bigger than -1, but whether it's bigger than 0. ## Before submitting - [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case). - [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section? - [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link to it if that's the case. - [ ] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/master/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/master/docs#writing-source-documentation). - [ ] Did you write any new necessary tests? ## Who can review? Anyone in the community is free to review the PR once the tests have passed. Feel free to tag members/contributors who may be interested in your PR. <!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @ If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**. Please tag fewer than 3 people. Models: - albert, bert, xlm: @LysandreJik - blenderbot, bart, marian, pegasus, encoderdecoder, t5: @patrickvonplaten, @patil-suraj - longformer, reformer, transfoxl, xlnet: @patrickvonplaten - fsmt: @stas00 - funnel: @sgugger - gpt2: @patrickvonplaten, @LysandreJik - rag: @patrickvonplaten, @lhoestq - tensorflow: @LysandreJik Library: - benchmarks: @patrickvonplaten - deepspeed: @stas00 - ray/raytune: @richardliaw, @amogkam - text generation: @patrickvonplaten - tokenizers: @n1t0, @LysandreJik - trainer: @sgugger - pipelines: @LysandreJik Documentation: @sgugger HF projects: - datasets: [different repo](https://github.com/huggingface/datasets) - rust tokenizers: [different repo](https://github.com/huggingface/tokenizers) Examples: - maintained examples (not research project or legacy): @sgugger, @patil-suraj - research_projects/bert-loses-patience: @JetRunner - research_projects/distillation: @VictorSanh -->
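A hedged before/after sketch of the condition change described above (simplified; the actual processor-building code in `generation_utils.py` has more context): the processor is only useful when it can actually forbid an early EOS, i.e. when `min_length > 0`.
```
from transformers import LogitsProcessorList, MinLengthLogitsProcessor

def build_processors(min_length, eos_token_id):
    processors = LogitsProcessorList()
    # Before: the check was `min_length > -1`, which also added the processor
    # for min_length == 0 even though it then has no effect.
    if min_length is not None and eos_token_id is not None and min_length > 0:
        processors.append(MinLengthLogitsProcessor(min_length, eos_token_id))
    return processors

print(len(build_processors(0, eos_token_id=2)))   # 0 -> processor skipped
print(len(build_processors(10, eos_token_id=2)))  # 1 -> processor added
```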
03-16-2022 16:30:32
03-16-2022 16:30:32
The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_16206). All of your documentation changes will be reflected on that endpoint.<|||||>> LGTM. > > If I understand correctly, we were almost always adding the min_length logits processor even though it's not needed when min_length is zero, right? (Out of curiosity, can we actually generate zero tokens?) Yeah, that was wrong before - we shouldn't use the min_length logits processor if min_length=0
transformers
16,205
closed
Clinical longformer - IndexError: index out of range in self
**Description:** While using clinical Longformer (AutoModelForSequenceClassification.from_pretrained("yikuan8/Clinical-Longformer", num_labels=3)) and its tokenizer (AutoTokenizer.from_pretrained("yikuan8/Clinical-Longformer")), I am running into the "IndexError: index out of range in self" error. **Source code:** import torch from torch.utils.data import Dataset, TensorDataset, DataLoader from torch.nn.utils.rnn import pad_sequence import pickle import os from transformers import AutoTokenizer #Longformer from transformers import AutoModelForSequenceClassification, AdamW device = torch.device("cuda:0" if torch.cuda.is_available() else "cpu") tokenizer = AutoTokenizer.from_pretrained("yikuan8/Clinical-Longformer") model = AutoModelForSequenceClassification.from_pretrained("yikuan8/Clinical-Longformer", num_labels=3) model.to(device) class NLIDataBert(Dataset): def __init__(self, train_df, val_df, input_tokenizer): self.label_dict = {'entailment': 0, 'contradiction': 1, 'neutral': 2} self.train_df = train_df self.val_df = val_df self.tokenizer = input_tokenizer self.train_data = None self.val_data = None self.init_data() def init_data(self): self.train_data = self.load_data(self.train_df) self.val_data = self.load_data(self.val_df) def load_data(self, df): max_seq_length = 3072 token_ids = [] mask_ids = [] seg_ids = [] y = [] premise_list = df['premise'].to_list() hypothesis_list = df['hypothesis'].to_list() label_list = df['gold_label'].to_list() for (premise, hypothesis, label) in zip(premise_list, hypothesis_list, label_list): premise_id = self.tokenizer.encode(premise, add_special_tokens = False) hypothesis_id = self.tokenizer.encode(hypothesis, add_special_tokens = False) pair_token_ids = [self.tokenizer.cls_token_id] + premise_id + [self.tokenizer.sep_token_id] + hypothesis_id + [self.tokenizer.sep_token_id] premise_len = len(premise_id) hypothesis_len = len(hypothesis_id) segment_ids = torch.tensor([0] * (premise_len + 2) + [1] * (hypothesis_len + 1)) # sentence 0 and sentence 1 attention_mask_ids = torch.tensor([1] * (premise_len + hypothesis_len + 3)) # mask padded values token_ids.append(torch.tensor(pair_token_ids)) seg_ids.append(segment_ids) mask_ids.append(attention_mask_ids) y.append(self.label_dict[label]) token_ids = pad_sequence(token_ids, batch_first=True) mask_ids = pad_sequence(mask_ids, batch_first=True) seg_ids = pad_sequence(seg_ids, batch_first=True) ''' padding = [0] * (max_seq_length - len(token_ids)) token_ids += padding mask_ids += padding seg_ids += padding ''' y = torch.tensor(y) dataset = TensorDataset(token_ids, mask_ids, seg_ids, y) print(len(dataset)) return dataset def get_data_loaders(self, batch_size=32, shuffle=True): train_loader = DataLoader( self.train_data, shuffle=shuffle, batch_size=batch_size ) val_loader = DataLoader( self.val_data, shuffle=shuffle, batch_size=batch_size ) return train_loader, val_loader nli_dataset = NLIDataBert(df_train, df_val, tokenizer) train_loader, val_loader = nli_dataset.get_data_loaders(batch_size=16) param_optimizer = list(model.named_parameters()) no_decay = ['bias', 'gamma', 'beta'] optimizer_grouped_parameters = [ {'params': [p for n, p in param_optimizer if not any(nd in n for nd in no_decay)], 'weight_decay_rate': 0.01}, {'params': [p for n, p in param_optimizer if any(nd in n for nd in no_decay)], 'weight_decay_rate': 0.0} ] optimizer = AdamW(optimizer_grouped_parameters, lr=2e-5, correct_bias=False) def multi_acc(y_pred, y_test): acc = (torch.log_softmax(y_pred, dim=1).argmax(dim=1) == y_test).sum().float() / 
float(y_test.size(0)) return acc import time EPOCHS = 5 def train(model, train_loader, val_loader, optimizer): total_step = len(train_loader) for epoch in range(EPOCHS): start = time.time() model.train() total_train_loss = 0 total_train_acc = 0 for batch_idx, (pair_token_ids, mask_ids, seg_ids, y) in enumerate(train_loader): optimizer.zero_grad() pair_token_ids = pair_token_ids.to(device) mask_ids = mask_ids.to(device) seg_ids = seg_ids.to(device) labels = y.to(device) loss, prediction = model(pair_token_ids, token_type_ids=seg_ids, attention_mask=mask_ids, labels=labels).values() acc = multi_acc(prediction, labels) loss.backward() optimizer.step() total_train_loss += loss.item() total_train_acc += acc.item() train_acc = total_train_acc/len(train_loader) train_loss = total_train_loss/len(train_loader) model.eval() total_val_acc = 0 total_val_loss = 0 with torch.no_grad(): for batch_idx, (pair_token_ids, mask_ids, seg_ids, y) in enumerate(val_loader): optimizer.zero_grad() pair_token_ids = pair_token_ids.to(device) mask_ids = mask_ids.to(device) seg_ids = seg_ids.to(device) labels = y.to(device) loss, prediction = model(pair_token_ids, token_type_ids=seg_ids, attention_mask=mask_ids, labels=labels).values() acc = multi_acc(prediction, labels) total_val_loss += loss.item() total_val_acc += acc.item() val_acc = total_val_acc/len(val_loader) val_loss = total_val_loss/len(val_loader) end = time.time() hours, rem = divmod(end-start, 3600) minutes, seconds = divmod(rem, 60) print(f'Epoch {epoch+1}: train_loss: {train_loss:.4f} train_acc: {train_acc:.4f} | val_loss: {val_loss:.4f} val_acc: {val_acc:.4f}') print("{:0>2}:{:0>2}:{:05.2f}".format(int(hours),int(minutes),seconds)) train(model, train_loader, val_loader, optimizer) **Exception:** --------------------------------------------------------------------------- IndexError Traceback (most recent call last) <[command-2383037512822880]()> in <module> ----> 1 train(model, train_loader, val_loader, optimizer) <[command-2383037512822878]()> in train(model, train_loader, val_loader, optimizer) 21 token_type_ids=seg_ids, 22 attention_mask=mask_ids, ---> 23 labels=labels).values() 24 25 acc = multi_acc(prediction, labels) /local_disk0/.ephemeral_nfs/envs/pythonEnv-ed86c010-21bd-4af0-ad0f-9c449ce44eac/lib/python3.7/site-packages/torch/nn/modules/module.py in _call_impl(self, *input, **kwargs) 1108 if not (self._backward_hooks or self._forward_hooks or self._forward_pre_hooks or _global_backward_hooks 1109 or _global_forward_hooks or _global_forward_pre_hooks): -> 1110 return forward_call(*input, **kwargs) 1111 # Do not call functions when jit is used 1112 full_backward_hooks, non_full_backward_hooks = [], [] /local_disk0/.ephemeral_nfs/envs/pythonEnv-ed86c010-21bd-4af0-ad0f-9c449ce44eac/lib/python3.7/site-packages/transformers/models/longformer/modeling_longformer.py in forward(self, input_ids, attention_mask, global_attention_mask, head_mask, token_type_ids, position_ids, inputs_embeds, labels, output_attentions, output_hidden_states, return_dict) 1895 output_attentions=output_attentions, 1896 output_hidden_states=output_hidden_states, -> 1897 return_dict=return_dict, 1898 ) 1899 sequence_output = outputs[0] /local_disk0/.ephemeral_nfs/envs/pythonEnv-ed86c010-21bd-4af0-ad0f-9c449ce44eac/lib/python3.7/site-packages/torch/nn/modules/module.py in _call_impl(self, *input, **kwargs) 1108 if not (self._backward_hooks or self._forward_hooks or self._forward_pre_hooks or _global_backward_hooks 1109 or _global_forward_hooks or _global_forward_pre_hooks): -> 1110 
return forward_call(*input, **kwargs) 1111 # Do not call functions when jit is used 1112 full_backward_hooks, non_full_backward_hooks = [], [] /local_disk0/.ephemeral_nfs/envs/pythonEnv-ed86c010-21bd-4af0-ad0f-9c449ce44eac/lib/python3.7/site-packages/transformers/models/longformer/modeling_longformer.py in forward(self, input_ids, attention_mask, global_attention_mask, head_mask, token_type_ids, position_ids, inputs_embeds, output_attentions, output_hidden_states, return_dict) 1702 1703 embedding_output = self.embeddings( -> 1704 input_ids=input_ids, position_ids=position_ids, token_type_ids=token_type_ids, inputs_embeds=inputs_embeds 1705 ) 1706 /local_disk0/.ephemeral_nfs/envs/pythonEnv-ed86c010-21bd-4af0-ad0f-9c449ce44eac/lib/python3.7/site-packages/torch/nn/modules/module.py in _call_impl(self, *input, **kwargs) 1108 if not (self._backward_hooks or self._forward_hooks or self._forward_pre_hooks or _global_backward_hooks 1109 or _global_forward_hooks or _global_forward_pre_hooks): -> 1110 return forward_call(*input, **kwargs) 1111 # Do not call functions when jit is used 1112 full_backward_hooks, non_full_backward_hooks = [], [] /local_disk0/.ephemeral_nfs/envs/pythonEnv-ed86c010-21bd-4af0-ad0f-9c449ce44eac/lib/python3.7/site-packages/transformers/models/longformer/modeling_longformer.py in forward(self, input_ids, token_type_ids, position_ids, inputs_embeds) 484 inputs_embeds = self.word_embeddings(input_ids) 485 position_embeddings = self.position_embeddings(position_ids) --> 486 token_type_embeddings = self.token_type_embeddings(token_type_ids) 487 488 embeddings = inputs_embeds + position_embeddings + token_type_embeddings /local_disk0/.ephemeral_nfs/envs/pythonEnv-ed86c010-21bd-4af0-ad0f-9c449ce44eac/lib/python3.7/site-packages/torch/nn/modules/module.py in _call_impl(self, *input, **kwargs) 1108 if not (self._backward_hooks or self._forward_hooks or self._forward_pre_hooks or _global_backward_hooks 1109 or _global_forward_hooks or _global_forward_pre_hooks): -> 1110 return forward_call(*input, **kwargs) 1111 # Do not call functions when jit is used 1112 full_backward_hooks, non_full_backward_hooks = [], [] /local_disk0/.ephemeral_nfs/envs/pythonEnv-ed86c010-21bd-4af0-ad0f-9c449ce44eac/lib/python3.7/site-packages/torch/nn/modules/sparse.py in forward(self, input) 158 return F.embedding( 159 input, self.weight, self.padding_idx, self.max_norm, --> 160 self.norm_type, self.scale_grad_by_freq, self.sparse) 161 162 def extra_repr(self) -> str: /local_disk0/.ephemeral_nfs/envs/pythonEnv-ed86c010-21bd-4af0-ad0f-9c449ce44eac/lib/python3.7/site-packages/torch/nn/functional.py in embedding(input, weight, padding_idx, max_norm, norm_type, scale_grad_by_freq, sparse) 2181 # remove once script supports set_grad_enabled 2182 _no_grad_embedding_renorm_(weight, input, max_norm, norm_type) -> 2183 return torch.embedding(weight, input, padding_idx, scale_grad_by_freq, sparse) 2184 2185 IndexError: index out of range in self **Environment:** Databricks Python: '3.7.5 \n[GCC 8.4.0]' torch: '1.11.0+cu102' transformers: '4.17.0'
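One plausible reading of this traceback, offered as a hedged note rather than a confirmed diagnosis: the failure happens inside `token_type_embeddings`, and Longformer-style checkpoints frequently ship with `type_vocab_size=1`, so segment ids containing 1s would index out of range. A minimal sketch to check this, reusing the variable names from the snippet above:

```python
# Sketch only: assumes `model`, `pair_token_ids`, `mask_ids` and `labels` from the
# training loop above; the diagnosis (type_vocab_size == 1) is an assumption to
# verify, not a confirmed root cause.
import torch

print(model.config.type_vocab_size)  # if this prints 1, segment ids of 1 are out of range

outputs = model(
    pair_token_ids,
    attention_mask=mask_ids,
    token_type_ids=torch.zeros_like(pair_token_ids),  # or omit token_type_ids entirely
    labels=labels,
)
```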
03-16-2022 16:29:54
03-16-2022 16:29:54
This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread. Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored.
transformers
16,204
closed
Doctests - reporting tool
# What does this PR do? <!-- Congratulations! You've made it this far! You're not quite done yet though. Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution. Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change. Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost. --> <!-- Remove if not applicable --> Fixes # (issue) ## Before submitting - [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case). - [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section? - [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link to it if that's the case. - [ ] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/master/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/master/docs#writing-source-documentation). - [ ] Did you write any new necessary tests? ## Who can review? Anyone in the community is free to review the PR once the tests have passed. Feel free to tag members/contributors who may be interested in your PR. <!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @ If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**. Please tag fewer than 3 people. Models: - albert, bert, xlm: @LysandreJik - blenderbot, bart, marian, pegasus, encoderdecoder, t5: @patrickvonplaten, @patil-suraj - longformer, reformer, transfoxl, xlnet: @patrickvonplaten - fsmt: @stas00 - funnel: @sgugger - gpt2: @patrickvonplaten, @LysandreJik - rag: @patrickvonplaten, @lhoestq - tensorflow: @LysandreJik Library: - benchmarks: @patrickvonplaten - deepspeed: @stas00 - ray/raytune: @richardliaw, @amogkam - text generation: @patrickvonplaten - tokenizers: @n1t0, @LysandreJik - trainer: @sgugger - pipelines: @LysandreJik Documentation: @sgugger HF projects: - datasets: [different repo](https://github.com/huggingface/datasets) - rust tokenizers: [different repo](https://github.com/huggingface/tokenizers) Examples: - maintained examples (not research project or legacy): @sgugger, @patil-suraj - research_projects/bert-loses-patience: @JetRunner - research_projects/distillation: @VictorSanh -->
03-16-2022 16:23:00
03-16-2022 16:23:00
_The documentation is not available anymore as the PR was closed or merged._
transformers
16,203
closed
Training MLM model XLM Roberta large on google machine specs not fast
I am fine-tuning a masked language model from `XLM Roberta large` on Google machine specs. I ran a couple of experiments and the results were strange. ``` "a2-highgpu-4g" ,accelerator_count=4, accelerator_type="NVIDIA_TESLA_A100" on 4,12,672 data batch size 4 Running ( 4 data*4 GPU=16 data points) "a2-highgpu-4g" ,accelerator_count=4 , accelerator_type="NVIDIA_TESLA_A100"on 4,12,672 data batch size 8 failed "a2-highgpu-4g" ,accelerator_count=4, accelerator_type="NVIDIA_TESLA_A100" on 4,12,672 data batch size 16 failed "a2-highgpu-4g" ,accelerator_count=4.,accelerator_type="NVIDIA_TESLA_A100" on 4,12,672 data batch size 32 failed ``` I was not able to train the model with a `batch size` larger than 4 per GPU; training stopped midway. Here is the code I am using. ``` training_args = tr.TrainingArguments( # disable_tqdm=True, output_dir='/home/pc/Bert_multilingual_exp_TCM/results_mlm_exp2', overwrite_output_dir=True, num_train_epochs=2, per_device_train_batch_size=4, # per_device_train_batch_size # per_gpu_train_batch_size prediction_loss_only=True ,save_strategy="no" ,run_name="MLM_Exp1" ,learning_rate=2e-5 ,logging_dir='/home/pc/Bert_multilingual_exp_TCM/logs_mlm_exp1' # directory for storing logs ,logging_steps=40000 ,logging_strategy='no' ) trainer = tr.Trainer( model=model, args=training_args, data_collator=data_collator, train_dataset=train_data ) ``` **My Questions** How can I train with a larger batch size on an `a2-highgpu-4g` machine? Which parameters can I include in `TrainingArguments` so that training is fast and uses less memory? Thanks in advance. ### Versions ``` torch==1.11.0+cu113 torchvision==0.12.0+cu113 torchaudio==0.11.0+cu113 transformers==4.17.0 ```
03-16-2022 16:07:05
03-16-2022 16:07:05
You can use gradient accumulation steps. You will find the argument by running your script with xx.py --help. Also, your GPU is an A100, so I think you can use the flag bf16, which is designed for Ampere GPUs like the A100 and 3090. Why don't you use fairseq? It has a max update argument which you can adjust to fit a larger batch size. In Fairseq, you control the batch size by adjusting the max-token parameter. Use the largest max-token value you can until you receive an out-of-memory (OOM) error (sometimes you receive it midway) and increase the max update value, which is equivalent to gradient accumulation steps. After you run your script you will see in the logs something like bsz, which refers to the batch size.<|||||>> You can use gradient accumulation steps. You will find the argument by running your script with xx.py --help. Also, your GPU is an A100, so I think you can use the flag bf16, which is designed for Ampere GPUs like the A100 and 3090. Why don't you use fairseq? It has a max update argument which you can adjust to fit a larger batch size. In Fairseq, you control the batch size by adjusting the max-token parameter. Use the largest max-token value you can until you receive an out-of-memory (OOM) error (sometimes you receive it midway) and increase the max update value, which is equivalent to gradient accumulation steps. After you run your script you will see in the logs something like bsz, which refers to the batch size. **I tried with these parameters and training never started... it's stuck** ``` ,gradient_accumulation_steps=8 ,gradient_checkpointing=True ,fp16=True ,optim="adafactor" ```<|||||>bf16 is different from fp16<|||||>> bf16 is different from fp16 Still stuck !! ``` training_args = tr.TrainingArguments( # disable_tqdm=True output_dir='gs://****/results_mlm_exp1' ,overwrite_output_dir=True ,num_train_epochs=2 ,per_device_train_batch_size=4 # per_device_train_batch_size # per_gpu_train_batch_size ,prediction_loss_only=True ,save_strategy="no" ,run_name="MLM_Exp1" ,learning_rate=2e-5 ,logging_dir='gs://****/logs_mlm_exp1' # directory for storing logs ,logging_steps=26000 ,gradient_accumulation_steps=8 ,gradient_checkpointing=True # ,bf16=True ,optim="adafactor" # ,logging_strategy='no' ) ```<|||||>Your logging_steps is 26K. Reduce it to 500 or 100.<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread. Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored.
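For reference, a minimal sketch pulling together the suggestions from this thread (gradient accumulation, `bf16` on Ampere GPUs, gradient checkpointing, cheaper logging); the paths, batch size and step counts are placeholders rather than values verified on an `a2-highgpu-4g` VM:

```python
import transformers as tr

# Sketch only: combines the suggestions above; placeholder values, not verified
# on this machine type.
training_args = tr.TrainingArguments(
    output_dir="./results_mlm",
    overwrite_output_dir=True,
    num_train_epochs=2,
    per_device_train_batch_size=4,     # effective batch = 4 * num_gpus * accumulation steps
    gradient_accumulation_steps=8,     # simulates a larger batch without extra memory
    gradient_checkpointing=True,       # trades compute for activation memory
    bf16=True,                         # supported on Ampere GPUs such as the A100
    learning_rate=2e-5,
    save_strategy="no",
    logging_steps=500,                 # keep logging frequent but cheap
)
```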
transformers
16,202
closed
TF: add beam search tests
# What does this PR do? Adds `generate()` beam search tests, in advance of the beam search refactor.
03-16-2022 14:58:32
03-16-2022 14:58:32
_The documentation is not available anymore as the PR was closed or merged._
transformers
16,201
closed
VAN: update modules names
# What does this PR do? This PR updates VAN modules' names to follow our convention and use more extensively the `Config` ## TODO - [x] reupload weights
03-16-2022 14:27:32
03-16-2022 14:27:32
_The documentation is not available anymore as the PR was closed or merged._
transformers
16,200
closed
Added type hints for Pytorch Marian calls
As the title suggests, this PR adds type hints to the PyTorch Marian calls. (#16059 ) @Rocketknight1
03-16-2022 14:20:07
03-16-2022 14:20:07
_The documentation is not available anymore as the PR was closed or merged._<|||||>@patil-suraj Hm, do I need to change the BART too for consistency? ``` src/transformers/models/marian/modeling_marian.py: copy does not match models.bart.modeling_bart.BartAttention at line 144 src/transformers/models/marian/modeling_marian.py: copy does not match models.bart.modeling_bart.BartEncoderLayer at line 289 src/transformers/models/marian/modeling_marian.py: copy does not match models.bart.modeling_bart.BartDecoderLayer at line 356 src/transformers/models/marian/modeling_marian.py: copy does not match models.bart.modeling_bart.BartForCausalLM at line 1529 src/transformers/models/pegasus/modeling_pegasus.py: copy does not match models.marian.modeling_marian.MarianSinusoidalPositionalEmbedding at line 109 src/transformers/models/roformer/modeling_roformer.py: copy does not match models.marian.modeling_marian.MarianSinusoidalPositionalEmbedding at line 73 Run `make fix-copies` or `python utils/check_copies.py --fix_and_overwrite` to fix them. ``` Running `make fix-copies` basically reverts all my changes. <|||||>Hi @clefourrier, sorry for the delay! We will indeed need to make changes to BART to make the tests work - but we already have an existing BART PR here: https://github.com/huggingface/transformers/pull/16270. Probably the easiest thing to do is: 1. Wait until we merge that PR 2. If any changes in Marian are still needed, rebase your current PR onto it (make sure to force push afterwards!) 3. Check tests again afterwards<|||||>Hi @Rocketknight1 , perfect! Can you ping me once the BART PR is merged then?<|||||>@clefourrier It was just merged about half an hour ago! Take a look - if any changes are needed, you'll probably need to pull changes from upstream to the master branch of your fork, and then rebase (with force push!) onto that.<|||||>@Rocketknight1 Well, it seems the last failures comes from children archs (pegasus and roformer) - do they have their own PR?
transformers
16,199
closed
Add GLPN
# What does this PR do? This PR implements [GLPN](https://arxiv.org/abs/2201.07436), Global Local Path Networks, for state-of-the-art depth estimation (number 2 on paperswithcode for KITTI and NYUv2 benchmarks). To do: - [x] transfer weights to author's username - [x] add tests for GLPNFeatureExtractor - [x] rename `in_index` to something better - [x] define `DepthEstimatorOutput`
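As an illustration of the last to-do item, one possible shape for a depth-estimation output class — a sketch only; the field names are assumptions and not necessarily what was merged:

```python
from dataclasses import dataclass
from typing import Optional, Tuple

import torch

from transformers.file_utils import ModelOutput


@dataclass
class DepthEstimatorOutput(ModelOutput):
    """Sketch of a depth-estimation output; field names are assumptions."""

    loss: Optional[torch.FloatTensor] = None
    predicted_depth: torch.FloatTensor = None
    hidden_states: Optional[Tuple[torch.FloatTensor]] = None
    attentions: Optional[Tuple[torch.FloatTensor]] = None
```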
03-16-2022 14:06:54
03-16-2022 14:06:54
_The documentation is not available anymore as the PR was closed or merged._<|||||>Ok I've addressed all comments. Waiting for the author to respond such that I can transfer the weights to his name.
transformers
16,198
closed
Fix loading CLIPVisionConfig and CLIPTextConfig
# What does this PR do? As described in #16185, there's a bug when loading `CLIPVisionModel` from a `CLIPModel` checkpoint. This is because the `CLIPVisionConfig` is not loaded correctly. And all the arguments use default values which correspond to the `openai/clip-vit-base-patch32` model, which results in shape mismatch when loading vision/text model from other CLIP checkpoints. This PR override the `from_pretrained` methods for `CLIPVisionConfig` and `CLIPTextConfig` to allow loading from `CLIPConfig`. Fixes #16185
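A rough sketch of the approach described above — illustrative only, not the exact code merged in this PR:

```python
# Sketch of the idea (not the merged implementation): if the checkpoint holds a
# full CLIP config, keep only its nested vision_config instead of falling back
# to the patch32 defaults.
from transformers import CLIPVisionConfig


class PatchedCLIPVisionConfig(CLIPVisionConfig):
    @classmethod
    def from_pretrained(cls, pretrained_model_name_or_path, **kwargs):
        config_dict, kwargs = cls.get_config_dict(pretrained_model_name_or_path, **kwargs)

        # If we loaded a full CLIP config, keep only its vision sub-config.
        if config_dict.get("model_type") == "clip":
            config_dict = config_dict["vision_config"]

        return cls.from_dict(config_dict, **kwargs)
```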
03-16-2022 13:57:01
03-16-2022 13:57:01
_The documentation is not available anymore as the PR was closed or merged._
transformers
16,197
closed
Fix-clip-configs
# What does this PR do? <!-- Congratulations! You've made it this far! You're not quite done yet though. Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution. Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change. Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost. --> <!-- Remove if not applicable --> Fixes # (issue) ## Before submitting - [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case). - [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section? - [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link to it if that's the case. - [ ] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/master/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/master/docs#writing-source-documentation). - [ ] Did you write any new necessary tests? ## Who can review? Anyone in the community is free to review the PR once the tests have passed. Feel free to tag members/contributors who may be interested in your PR. <!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @ If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**. Please tag fewer than 3 people. Models: - albert, bert, xlm: @LysandreJik - blenderbot, bart, marian, pegasus, encoderdecoder, t5: @patrickvonplaten, @patil-suraj - longformer, reformer, transfoxl, xlnet: @patrickvonplaten - fsmt: @stas00 - funnel: @sgugger - gpt2: @patrickvonplaten, @LysandreJik - rag: @patrickvonplaten, @lhoestq - tensorflow: @LysandreJik Library: - benchmarks: @patrickvonplaten - deepspeed: @stas00 - ray/raytune: @richardliaw, @amogkam - text generation: @patrickvonplaten - tokenizers: @n1t0, @LysandreJik - trainer: @sgugger - pipelines: @LysandreJik Documentation: @sgugger HF projects: - datasets: [different repo](https://github.com/huggingface/datasets) - rust tokenizers: [different repo](https://github.com/huggingface/tokenizers) Examples: - maintained examples (not research project or legacy): @sgugger, @patil-suraj - research_projects/bert-loses-patience: @JetRunner - research_projects/distillation: @VictorSanh -->
03-16-2022 13:53:34
03-16-2022 13:53:34
transformers
16,196
closed
ResNet: update modules names
# What does this PR do? This PR updates ResNet modules' names to follow our convention and use more extensively the `Config` ## TODO - [x] reupload weights
03-16-2022 13:47:56
03-16-2022 13:47:56
_The documentation is not available anymore as the PR was closed or merged._
transformers
16,195
open
Gpt2 large for onnx exportation and int8 quantization
Hi, for a model as big as 7GB, does transformers support export to ONNX? Is there any tutorial about big models?
03-16-2022 13:36:40
03-16-2022 13:36:40
Hi @jinfagang as far as I know, the `transformers.onnx` package _should_ work for exporting large models. The only difference is that you'll typically see a number of additional files created, because ONNX uses Protobuf and this can only serialise files in 2GB chunks. It would be helpful to have a reproducible code example if you are having trouble exporting a particular checkpoint<|||||>@lewtun Hi, actually I am just trying to use the same technique as in the BERT quantization example inside onnxruntime. BERT can be optimized, and I can shrink the model size from 1.6G to 400M in int8. But when I try it on a 7G model, it fails. So I don't know whether the optimization doesn't support such a big model, or whether the huge model cannot be properly loaded by ORT. It just returns an error like "read onnx model failed".<|||||>Hey @jinfagang can you please share a code snippet that shows which checkpoint you're using and an example of how you're loading the exported model?<|||||>@lewtun Hi, I suppose you are familiar with GPT2 and Megatron. Let me explain how I do it. 1. Firstly, I download a pretrained giant Chinese GPT2-large model, which might have been trained with Megatron-LM; it was so huge that the weights were divided into 2 parts and had to be loaded on 2 cards. You can download it from: https://huggingface.co/TsinghuaAI/CPM-Generate , Oh, I just noticed it is already on Hugging Face Spaces; 2. Then, I convert the model to ONNX with the Hugging Face tools via `python -m transformers.onnx ./models`; 3. Then you get this huge ONNX model, which is split into several small weight files (this is the way ONNX stores huge models); 4. Then I apply the same optimization with onnxruntime, and it fails to produce an int8 model. The quantization looks like this: ``` optimization_options = FusionOptions("gpt2") optimization_options.enable_gelu = True optimization_options.enable_layer_norm = True optimization_options.enable_attention = True optimization_options.enable_skip_layer_norm = True optimization_options.enable_embed_layer_norm = True optimization_options.enable_bias_skip_layer_norm = True optimization_options.enable_bias_gelu = True optimization_options.enable_gelu_approximation = False logger.warning(f">>>>>>> Start optimizing ONNX graph on: {model_type}") # for Megatron GPT2 optimizer = optimize_model( onnx_model_f, model_type=model_type, # num_heads=16, num_heads=32, # hidden_size=1024, hidden_size=2560, optimization_options=optimization_options, opt_level=0, use_gpu=False, only_onnxruntime=False, ) ``` As you can see, the commented-out values are for the smaller GPT model, which can be quantized successfully. But the huge model fails. If you are interested, please try quantizing the same 7.0GB model; that would be great, and I would really like to see whether you manage to convert it.
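For readers following along, a hedged sketch of inspecting a >2GB export — the paths are placeholders and the `--feature` value is an assumption; the extra weight files created because of the Protobuf 2GB limit must stay next to `model.onnx`:

```python
# Sketch only; paths and the --feature value are assumptions. The export was
# produced with the CLI mentioned above, e.g.:
#   python -m transformers.onnx --model=./models --feature=causal-lm ./onnx_out
import onnx

# Weights above the 2GB Protobuf limit live in extra files next to model.onnx;
# onnx.load resolves them as long as they stay in the same directory.
model = onnx.load("./onnx_out/model.onnx")
print(f"graph nodes: {len(model.graph.node)}")

# For very large models the checker should be given the path, not the proto:
onnx.checker.check_model("./onnx_out/model.onnx")
```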
transformers
16,194
closed
clearer model variable naming: blenderbot_small
# What does this PR do? clearer model variable naming: blenderbot_small <img width="1512" alt="Ekran Resmi 2022-03-16 16 27 40" src="https://user-images.githubusercontent.com/58150504/158600522-808954f1-9519-4a56-9a07-c3649faac819.png"> ## Before submitting - [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case). - [ *] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section? - [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link to it if that's the case. - [ ] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/master/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/master/docs#writing-source-documentation). - [ ] Did you write any new necessary tests? ## Who can review? @gante
03-16-2022 13:28:46
03-16-2022 13:28:46
_The documentation is not available anymore as the PR was closed or merged._
transformers
16,193
closed
Minor fixes to XTREME-S
# What does this PR do? Quick fixes to the xtreme-s example script (target columns and readme)
03-16-2022 12:46:06
03-16-2022 12:46:06
_The documentation is not available anymore as the PR was closed or merged._
transformers
16,192
closed
clearer model variable naming: blenderbot
# What does this PR do? clearer model variable naming: blenderbot <img width="1512" alt="Ekran Resmi 2022-03-16 15 12 18" src="https://user-images.githubusercontent.com/58150504/158587347-5934dc62-b07e-4fa1-b49a-5f3be4b0f250.png"> ## Before submitting - [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case). - [* ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section? - [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link to it if that's the case. - [ ] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/master/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/master/docs#writing-source-documentation). - [ ] Did you write any new necessary tests? ## Who can review? @gante
03-16-2022 12:13:22
03-16-2022 12:13:22
_The documentation is not available anymore as the PR was closed or merged._
transformers
16,191
closed
layoutlmv2 visual backbone pretrained on publaynet
@NielsRogge Amazing work, thanks for sharing. Does layoutlmv2's visual backbone utilize a ResNeXt-FPN trained on PubLayNet as described in the paper? If so, could you please point out where the weights are used to initialize the model in the repo?
03-16-2022 11:29:13
03-16-2022 11:29:13
Hi, The LayoutLMv2 authors indeed leveraged a ResNeXt-FPN for the visual backbone. It was taken from this repo: https://github.com/hpanwar08/detectron2. However, we've just added the [Document Image Transformer (DiT)](https://huggingface.co/docs/transformers/master/en/model_doc/dit), which actually outperforms the ResNeXt model on PubLayNet with the same object detection framework (Mask R-CNN), reaching an AP of 0.945/0.949 for DiT-base/large respectively (vs. 0.906 for ResNeXt). Take a look at this Space for a demo: https://huggingface.co/spaces/nielsr/dit-document-layout-analysis<|||||>@NielsRogge Thanks for responding; yes, I got my hands on DiT, which is a fantastic model, but I'm trying to figure out where the ResNeXt-FPN weights are initialized in the layoutlmv2 model in the current Hugging Face implementation. https://github.com/microsoft/unilm/blob/master/layoutlmft/layoutlmft/models/layoutlmv2/detectron2_config.py I don't see any weights path given here.<|||||>I don't think Microsoft open-sourced this weights initialization. This was probably part of their pre-training code (which is not open-sourced). Weights can be found here: https://github.com/hpanwar08/detectron2#benchmarking<|||||>Then how does the visual backbone extract features from the image? I can see the model config for detectron2, but I can't figure out which weights the model is getting initialized with. For example, in detectron2 we pass a config file and a YAML for weights: ``` cfg.merge_from_file(model_zoo.get_config_file("COCO-InstanceSegmentation/mask_rcnn_R_50_FPN_3x.yaml")) cfg.MODEL.WEIGHTS = model_zoo.get_checkpoint_url("COCO-InstanceSegmentation/mask_rcnn_R_50_FPN_3x.yaml") ```<|||||>Yes I know, they probably set the MODEL.WEIGHTS attribute in their pre-training code, which they didn't open-source. But as linked above, you can easily get the backbone weights.<|||||>I want to override the weights for the model, but when I pass the weights path in the visual backbone class, the model does not pick up the weights, even though the MODEL.WEIGHTS key in the config shows the model path string. Any reasons why? @NielsRogge <|||||>Got the weights from the backbone and loaded the custom weights.<|||||>Hi @gauravlochab, Could you please share how you managed to load the custom weights?
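For readers landing here with the same final question, a rough sketch of one way to load custom backbone weights — the attribute path `model.layoutlmv2.visual.backbone`, the checkpoint file name and its key layout are all assumptions to verify against your transformers version and the detectron2 checkpoint you download:

```python
# Sketch only: attribute path, file name and key names are assumptions.
import torch
from transformers import LayoutLMv2ForTokenClassification

model = LayoutLMv2ForTokenClassification.from_pretrained(
    "microsoft/layoutlmv2-base-uncased", num_labels=7
)

ckpt = torch.load("model_final_publaynet.pth", map_location="cpu")  # hypothetical local file
state = ckpt.get("model", ckpt)  # detectron2 checkpoints often nest weights under "model"

backbone = model.layoutlmv2.visual.backbone  # assumed attribute path
backbone_state = backbone.state_dict()

# Copy only the keys that exist in both state dicts and match in shape.
matched = {k: v for k, v in state.items() if k in backbone_state and v.shape == backbone_state[k].shape}
backbone_state.update(matched)
backbone.load_state_dict(backbone_state)
print(f"copied {len(matched)} / {len(backbone_state)} tensors")
```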
transformers
16,190
open
Initialized DETR backbone weights do not match with actual pretrained weights
Hello everybody! My task is to initialize DETR Object Detection model with my own pretrained backbone (for example, ResNet-50). So, in Detr class (I took the code from [this Hugging Face tutorial](https://github.com/NielsRogge/Transformers-Tutorials/blob/master/DETR/Fine_tuning_DetrForObjectDetection_on_custom_dataset_(balloon).ipynb) as a basis), I create model from DetrConfig: ``` class Detr(pl.LightningModule): def __init__(self, lr, lr_backbone, weight_decay, backbone_name): super().__init__() self.model = DetrForObjectDetection(DetrConfig(num_labels=len(id2label) ,ignore_mismatched_sizes=True ,backbone=backbone_name )) ... ``` Then I create model: ``` model = Detr(lr=1e-4, lr_backbone=1e-5, weight_decay=1e-3, backbone_name='resnet50') ``` And I want to see, for example, first layer weights in backbone: ``` model.model.model.backbone.conv_encoder.model.conv1.weight ``` Output: ``` Parameter containing: tensor([[[[-1.0862e-02, 5.6558e-02, -2.5807e-02, ..., -1.8801e-03, 2.1483e-02, -1.1144e-02], ... ``` And I notice that this weights are different from timm backbone model, which is [fed to the input](https://github.com/huggingface/transformers/blob/68dec6bffd68e62d151758d8ac7378bf9b40ac20/src/transformers/models/detr/modeling_detr.py#:~:text=backbone%20%3D%20create_model(name%2C%20pretrained%3DTrue%2C%20features_only%3DTrue%2C%20out_indices%3D(1%2C%202%2C%203%2C%204)%2C%20**kwargs)) of the Detr model: ``` backbone_model = timm.create_model('resnet50', pretrained=True, features_only=True, out_indices=(1, 2, 3, 4)) backbone_model.conv1.weight ``` Output: ``` Parameter containing: tensor([[[[-2.6017e-02, -1.7312e-02, -1.0648e-02, ..., 1.1380e-02, -1.0471e-02, -1.1242e-02], ... ``` So... My question is, how can this happen? Why are the weights of the model that is fed into the Detr input differ from the weights of the same model in the already initialized Detr? Is this a bug or am I missing something? One more detail: If I create model again, weights will also change! But when I restart Kernel and create Detr model, weights will be the same as in my first model creation at previous Kernel! So, the weights are initialized differently, but with a certain cycle? Why is this happening? In my understanding, backbone weights should always be the same and equal to the original ResNet weights. Maybe, I'm setting the configuration wrong, but it could be a bug. I would really appreciate an answer and possible clarification!
03-16-2022 10:26:19
03-16-2022 10:26:19
Hello again! Tagged @NielsRogge, maybe you can help? I tried by hand to track the moment when the weights of the backbone change and until [that moment](https://github.com/huggingface/transformers/blob/main/src/transformers/models/detr/modeling_detr.py#:~:text=apply%20final%20processing-,self.post_init(),-%23%20taken%20from%20https) the weights remain the same, then I could not track the changes. So, I suppose model is initializing correctly, but why (and where) weights are changing?<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread. Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored.<|||||>Thanks for spotting that, @univanxx. I am not sure myself why that is the case. Marking this issue as a "good second issue" for now. <|||||>Hello! I reviewed the issue. It seems that the classes `DetrModel` and `DetrForObjectDetection` apply weight initialization using method `self.post_init()` which calls `DetrPreTrainedModel._init_weights(...)` and changes backbone model weights. `DetrForObjectDetection` and `timm` models have different backbone weights: ```python from transformers import DetrConfig, DetrForObjectDetection import timm config = DetrConfig(ignore_mismatched_sizes=True, backbone='resnet50') model = DetrForObjectDetection(config) timm_backbone = timm.create_model('resnet50', pretrained=True, features_only=True, out_indices=(1, 2, 3, 4)) print((model.model.backbone.conv_encoder.model.conv1.weight == timm_backbone.conv1.weight).all()) ``` Output: `tensor(False)` But the weights are the same as in `timm` model when using only `DetrTimmConvEncoder`: ```python from transformers.models.detr.modeling_detr import DetrTimmConvEncoder backbone = DetrTimmConvEncoder(config.backbone, config.dilation) print((backbone.model.conv1.weight == timm_backbone.conv1.weight).all()) ``` Output: `tensor(True)` This seems like an unwanted behaviour, but I am not sure what would be the best way to fix this.<|||||>Hello everyone! Unfortunately, I still could not find out the cause of the problem, but I developed a temporary solution (however, I'm not sure if it's correct to do this) using @chamidullinr code: 1. First, we initialize the model as usual, for example: ```python from transformers import DetrConfig, DetrForObjectDetection config = DetrConfig(ignore_mismatched_sizes=True, backbone='resnet50') model = DetrForObjectDetection(config) ``` 2. After that, we create timm-model for the future backbone: ```python import timm timm_backbone = timm.create_model('resnet50', pretrained=True, features_only=True, out_indices=(1, 2, 3, 4)) ``` 3. 
And finally, we manually copy the weights from the backbone model to DETR using the in-place copy operator: ```python # weights for name_backbone, parameter_backbone in timm_backbone.named_parameters(): for name, parameter in model.model.backbone.conv_encoder.model.named_parameters(): if name_backbone == name: parameter.data.copy_(parameter_backbone.data) # buffers for name_backbone, parameter_backbone in timm_backbone.named_buffers(): for name, parameter in model.model.backbone.conv_encoder.model.named_buffers(): if name_backbone == name: parameter.data.copy_(parameter_backbone.data) ``` Now, if we try: ```python print((model.model.backbone.conv_encoder.model.conv1.weight == timm_backbone.conv1.weight).all()) ``` Output: ```tensor(True)```<|||||>Hi @univanxx I was looking into this issue. As @chamidullinr mentioned, this is due to the `post_init()` functions being called [here](https://github.com/huggingface/transformers/blob/main/src/transformers/models/detr/modeling_detr.py#L1350) and reinitializing the weights, which is actually expected since we are not calling `.from_pretrained()` and are basically asking the model to just create a model out of the given skeleton, without any guarantees of the weights being pretrained. Looking deeper, I also found that the `post_init()` re-initialization is disabled when we call `.from_pretrained()`, which causes the weights to not be re-initialized when we are loading pretrained weights from a checkpoint, but it works at the whole-model level, not at the individual-module level it seems. I might be wrong, but if you want a part of the model to be initialized according to your given weights, your way of manually copying them does seem correct. Although there could be some way to add functionality to allow more fine-grained control of which parts remain pretrained and which don't.<|||||>Adding an informative comment here: https://github.com/huggingface/transformers/pull/17572
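As a closing note for this thread: when pretrained backbone weights are all that is needed, the simpler route hinted at above (since `.from_pretrained()` skips the re-initialization) is to start from the released DETR checkpoint rather than a fresh `DetrConfig`. A short sketch:

```python
# Sketch: loading the released checkpoint keeps the pretrained backbone;
# num_labels / ignore_mismatched_sizes only re-shape the classification head.
from transformers import DetrForObjectDetection

model = DetrForObjectDetection.from_pretrained(
    "facebook/detr-resnet-50",
    num_labels=10,                 # placeholder for your own label count
    ignore_mismatched_sizes=True,  # the class head has a different shape
)
```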
transformers
16,189
closed
Update a CI job step name
# What does this PR do? Update the step name from `Run all non-slow tests on GPU` to `Run all tests on GPU` for scheduled CI test run.
03-16-2022 10:11:14
03-16-2022 10:11:14
_The documentation is not available anymore as the PR was closed or merged._
transformers
16,188
closed
RegNet
# What does this PR do? This WIP PR adds [RegNet](https://arxiv.org/abs/2003.13678). Currently, the model can be used as follows ```python from transformers import RegNetConfig, RegNetForImageClassification import requests from io import BytesIO res = requests.get('https://github.com/huggingface/transformers/blob/master/tests/fixtures/tests_samples/COCO/000000039769.png?raw=true') image = Image.open(BytesIO(res.content)) feature_extractor = AutoFeatureExtractor.from_pretrained("zuppif/regnet-y-040") model = RegNetForImageClassification.from_pretrained("zuppif/regnet-y-040").eval() inputs = feature_extractor(image, return_tensors="pt") outputs = model(**inputs) print(model.config.id2label[torch.argmax(outputs.logits).item()]) # tiger cat ```
03-16-2022 09:08:16
03-16-2022 09:08:16
_The documentation is not available anymore as the PR was closed or merged._<|||||>Updated the codebase to use `config` when is possible. I had to update `ResNetModel` as well to make `make fixup` happy :)<|||||>Conversion script for the 10B regnet model added + needed changes inside `PretrainedModel._load_pretrained_model_low_mem` <|||||>Thanks to all reviewers, I've rebased and updated the code accordingly
transformers
16,187
closed
Fix `LayoutLMv2` tokenization docstrings
# What does this PR do? The tokenizers of `LayoutLMv2` don't accept `is_split_into_words` keyword for `__call__`. Thus, I created a dedicated kwargs docstring to replace the one in `src/transformers/tokenization_utils_base.py` being used. <!-- Congratulations! You've made it this far! You're not quite done yet though. Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution. Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change. Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost. --> <!-- Remove if not applicable --> Fixes # (issue) ## Before submitting - [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case). - [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section? - [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link to it if that's the case. - [ ] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/master/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/master/docs#writing-source-documentation). - [ ] Did you write any new necessary tests? ## Who can review? Anyone in the community is free to review the PR once the tests have passed. Feel free to tag members/contributors who may be interested in your PR. @NielsRogge @LysandreJik <!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @ If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**. Please tag fewer than 3 people. Models: - albert, bert, xlm: @LysandreJik - blenderbot, bart, marian, pegasus, encoderdecoder, t5: @patrickvonplaten, @patil-suraj - longformer, reformer, transfoxl, xlnet: @patrickvonplaten - fsmt: @stas00 - funnel: @sgugger - gpt2: @patrickvonplaten, @LysandreJik - rag: @patrickvonplaten, @lhoestq - tensorflow: @LysandreJik Library: - benchmarks: @patrickvonplaten - deepspeed: @stas00 - ray/raytune: @richardliaw, @amogkam - text generation: @patrickvonplaten - tokenizers: @n1t0, @LysandreJik - trainer: @sgugger - pipelines: @LysandreJik Documentation: @sgugger HF projects: - datasets: [different repo](https://github.com/huggingface/datasets) - rust tokenizers: [different repo](https://github.com/huggingface/tokenizers) Examples: - maintained examples (not research project or legacy): @sgugger, @patil-suraj - research_projects/bert-loses-patience: @JetRunner - research_projects/distillation: @VictorSanh -->
03-16-2022 07:59:28
03-16-2022 07:59:28
_The documentation is not available anymore as the PR was closed or merged._<|||||>Will open a follow-up PR for `LayoutXLM` after accepted and merged.<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread. Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored.<|||||>@NielsRogge can you give this a look when you have a minute?<|||||>Done.<|||||>Seems like you changed some files which shouldn't be changed, like `index.mdx`. Could you fix those, please?<|||||>Thanks, merging!
transformers
16,186
closed
Truncation when tokenizer does not have `max_length` defined
Hello, I'm currently writing some code using transformers, and have something similar to: ```python model = AutoModel.from_pretrained(model_name, output_hidden_states=True) tokenizer = AutoTokenizer.from_pretrained(tokenizer_name) ids = tokenizer.encode(sentence, truncation=True) ``` The issue I am facing is when sentence has > 512 tokens (wordpieces actually) for certain models. The above code works fine for `bert-base-multilingual-cased`, but fails for `qarib/bert-base-qarib`, because the tokenizer in the latter case does not define a max length: ```bash Asking to truncate to max_length but no maximum length is provided and the model has no predefined maximum length. Default to no truncation. .... RuntimeError: The size of tensor a (515) must match the size of tensor b (512) at non-singleton dimension 1 ``` So the model itself is limited to 512, but the tokenizer is not aware of this max length. Apart from asking the original model creators to define the max_model_length in their tokenizer, is there anything else I can do to "autodetect" the max length? Can I use `model.config.max_position_embeddings` as a proxy for max length when using the tokenizer? Thanks!
03-16-2022 06:57:09
03-16-2022 06:57:09
Try this `tokenizer(sentence, truncation=True, max_length=512)` `model.config.max_position_embeddings` is the right way to see the max sequence length<|||||>Sounds good, thanks for the confirmation!
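A small sketch of the "autodetect" idea from the question, using the confirmation above — with the caveat that a few architectures reserve extra position slots (RoBERTa-style models use 514 position embeddings for 512 usable tokens), so the value may need a small offset:

```python
# Sketch: fall back to the model config when the tokenizer has no limit set.
# Caveat: some architectures (e.g. RoBERTa-style) reserve extra position slots.
from transformers import AutoModel, AutoTokenizer

model = AutoModel.from_pretrained("qarib/bert-base-qarib")
tokenizer = AutoTokenizer.from_pretrained("qarib/bert-base-qarib")

if tokenizer.model_max_length > model.config.max_position_embeddings:
    tokenizer.model_max_length = model.config.max_position_embeddings

ids = tokenizer.encode(sentence, truncation=True)  # `sentence` as in the original snippet
```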
transformers
16,185
closed
CLIPVisionModel errors on trying to load openai/clip-vit-base-patch16
`CLIPVisionModel` errors on trying to load [openai/clip-vit-base-patch16](https://huggingface.co/openai/clip-vit-base-patch16), which was added to HF (using `CLIPModel` for loading `patch16` as the documentation example for that repo works without error) It appears that the model is architected as the `patch32` config, as the "current model" shape correspond to that config. ``` --------------------------------------------------------------------------- RuntimeError Traceback (most recent call last) /var/folders/m9/s4s3bdq96pn3dk13fbgpw6rm0000gn/T/ipykernel_2831/1425856315.py in <module> ----> 1 model = CLIPVisionModel.from_pretrained("openai/clip-vit-base-patch16") /usr/local/lib/python3.9/site-packages/transformers/modeling_utils.py in from_pretrained(cls, pretrained_model_name_or_path, *model_args, **kwargs) 1530 cls._load_state_dict_into_model_low_mem(model, loaded_state_dict_keys, resolved_archive_file) 1531 else: -> 1532 model, missing_keys, unexpected_keys, mismatched_keys, error_msgs = cls._load_state_dict_into_model( 1533 model, 1534 state_dict, /usr/local/lib/python3.9/site-packages/transformers/modeling_utils.py in _load_state_dict_into_model(cls, model, state_dict, pretrained_model_name_or_path, ignore_mismatched_sizes, _fast_init) 1688 if len(error_msgs) > 0: 1689 error_msg = "\n\t".join(error_msgs) -> 1690 raise RuntimeError(f"Error(s) in loading state_dict for {model.__class__.__name__}:\n\t{error_msg}") 1691 1692 if len(unexpected_keys) > 0: RuntimeError: Error(s) in loading state_dict for CLIPVisionModel: size mismatch for vision_model.embeddings.position_ids: copying a param with shape torch.Size([1, 197]) from checkpoint, the shape in current model is torch.Size([1, 50]). size mismatch for vision_model.embeddings.patch_embedding.weight: copying a param with shape torch.Size([768, 3, 16, 16]) from checkpoint, the shape in current model is torch.Size([768, 3, 32, 32]). size mismatch for vision_model.embeddings.position_embedding.weight: copying a param with shape torch.Size([197, 768]) from checkpoint, the shape in current model is torch.Size([50, 768]). ``` ## Environment info <!-- You can run the command `transformers-cli env` and copy-and-paste its output below. Don't forget to fill out the missing fields in that output! --> - `transformers` version: 4.17.0 - Platform: macOS-12.3-x86_64-i386-64bit - Python version: 3.9.7 - PyTorch version (GPU?): 1.11.0 (False) ### Who can help @patil-suraj ## To reproduce ```python from transformers import CLIPVisionModel model = CLIPVisionModel.from_pretrained("openai/clip-vit-base-patch16") ```
03-16-2022 05:46:56
03-16-2022 05:46:56
Thank you for reporting this! Looking into it.<|||||>Found the issue: `CLIPVisionConfig` does not correctly copy the vision arguments from the `CLIPConfig`. It uses the default values, which are defined for the patch32 model. A quick fix to get this working for now is to load `CLIPConfig`, retrieve the `vision_config` from it and pass it to `from_pretrained`: ```python from transformers import CLIPVisionModel, CLIPConfig config = CLIPConfig.from_pretrained("openai/clip-vit-base-patch16") model = CLIPVisionModel.from_pretrained("openai/clip-vit-base-patch16", config=config.vision_config) ```
transformers
16,184
closed
Translated version of model_sharing.mdx doc to spanish
# What does this PR do? This PR is related to #15947 issue. This PR translates the model_sharing.mdx file to a spanish version of it and save it into transformers\docs\source_es path. This translation tries to keep the structure and ideas behind each statement, adding some words and translation where there's not specific one-to-one translation in Spanish, in other cases it adds words for a better flow of ideas. Any comment with respect to this translation is welcome while we try to figure out the better way to give Spanish-speaker community the best documentation about HuggingFace. Thank you.
03-16-2022 03:39:11
03-16-2022 03:39:11
_The documentation is not available anymore as the PR was closed or merged._<|||||>Thanks, @Gerard-170! I will review it during the week :)<|||||>Hola @Gerard-170! I just wanted to follow up with your PR 🤗! Please let me know if you need anything else.<|||||>Hi! @omarespejel Well I was waiting the review. whether you have some comments about the translation or not. Let me know what you think! and Happy to contribute!! Thank you! :D <|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread. Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored.<|||||>Hi @Gerard-170! Thank you 🤗. I left above my review. Also, could you please add `model_sharing` to [`transformers/docs/source/es/_toctree.yml`](https://github.com/huggingface/transformers/blob/main/docs/source/es/_toctree.yml)? As a reference, you can use the [new Translation](https://github.com/huggingface/transformers/blob/main/docs/TRANSLATING.md) guide (section "✍️ Start translating"). This would allow the tests to pass.<|||||>Hi! Sorry for a late response, of course I will add model_sharing to [transformers/docs/source/es/_toctree.yml](https://github.com/huggingface/transformers/blob/main/docs/source/es/_toctree.yml). <|||||>Thank you @Gerard-170! Also please run `git pull upstream master` to merge all the changes made to `main` (note we recently went from `master` to `main`, and this is still not reflected in your fork).<|||||>Hola @Gerard-170! Muchas gracias. It looks great, there is a style error that we recently corrected in another file. Could you please run `git pull upstream main` again so this gets corrected? I think this will do the trick.<|||||>Hola @omarespejel Yes! I did again git pull and all checks have passed! :D Please let me know if you need something else. Gracias! <|||||>Thank you @Gerard-170! This is great 🤗. @sgugger LGTM 👍
transformers
16,183
closed
[Xtreme-S] fix some namings
# What does this PR do? <!-- Congratulations! You've made it this far! You're not quite done yet though. Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution. Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change. Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost. --> <!-- Remove if not applicable --> Fixes # (issue) ## Before submitting - [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case). - [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section? - [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link to it if that's the case. - [ ] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/master/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/master/docs#writing-source-documentation). - [ ] Did you write any new necessary tests? ## Who can review? Anyone in the community is free to review the PR once the tests have passed. Feel free to tag members/contributors who may be interested in your PR. <!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @ If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**. Please tag fewer than 3 people. Models: - albert, bert, xlm: @LysandreJik - blenderbot, bart, marian, pegasus, encoderdecoder, t5: @patrickvonplaten, @patil-suraj - longformer, reformer, transfoxl, xlnet: @patrickvonplaten - fsmt: @stas00 - funnel: @sgugger - gpt2: @patrickvonplaten, @LysandreJik - rag: @patrickvonplaten, @lhoestq - tensorflow: @LysandreJik Library: - benchmarks: @patrickvonplaten - deepspeed: @stas00 - ray/raytune: @richardliaw, @amogkam - text generation: @patrickvonplaten - tokenizers: @n1t0, @LysandreJik - trainer: @sgugger - pipelines: @LysandreJik Documentation: @sgugger HF projects: - datasets: [different repo](https://github.com/huggingface/datasets) - rust tokenizers: [different repo](https://github.com/huggingface/tokenizers) Examples: - maintained examples (not research project or legacy): @sgugger, @patil-suraj - research_projects/bert-loses-patience: @JetRunner - research_projects/distillation: @VictorSanh -->
03-16-2022 00:14:18
03-16-2022 00:14:18
@anton-l - merging this now already given the time constraint. Could you please revise this script tomorrow (make sure it works?)<|||||>The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_16183). All of your documentation changes will be reflected on that endpoint.
transformers
16,182
closed
Fix links in guides
This PR fixes broken links in the how-to guides as a result of incorrect syntax.
03-15-2022 22:53:14
03-15-2022 22:53:14
_The documentation is not available anymore as the PR was closed or merged._
transformers
16,181
closed
unpack_input decorator for tf_convnext
# What does this PR do? Add the `unpack_inputs` decorator for ConvNeXt (see #16051) <!-- Remove if not applicable --> Fixes # (issue) ## Before submitting - [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case). - [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section? - [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link to it if that's the case. - [ ] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/master/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/master/docs#writing-source-documentation). - [ ] Did you write any new necessary tests? ## Who can review? @Rocketknight1
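For context, a rough sketch of what the decorator pattern looks like on a TensorFlow `call()` method — illustrative only, not the exact ConvNeXt diff:

```python
# Illustrative sketch (not the exact ConvNeXt change): the decorator replaces
# the old manual `inputs = input_processing(...)` boilerplate and hands call()
# already-unpacked keyword arguments.
import tensorflow as tf

from transformers.modeling_tf_utils import unpack_inputs


class TFExampleMainLayer(tf.keras.layers.Layer):
    @unpack_inputs
    def call(
        self,
        pixel_values=None,
        output_hidden_states=None,
        return_dict=None,
        training=False,
    ):
        # arguments arrive here already unpacked as plain keyword arguments
        ...
```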
03-15-2022 21:42:38
03-15-2022 21:42:38
_The documentation is not available anymore as the PR was closed or merged._<|||||>Sure, no problem @gante . The tests ran without a problem.
transformers
16,180
closed
Small fixes to the documentation
# What does this PR do? This PR fixes a few syntax errors in the doc.
03-15-2022 21:26:48
03-15-2022 21:26:48
_The documentation is not available anymore as the PR was closed or merged._
transformers
16,179
closed
rembert type annotations and dependencies
Added type annotations to Rembert per the guidelines. Modified Bert where copied from transformers.models.bert.modeling_bert #16059 @Rocketknight1
03-15-2022 20:30:41
03-15-2022 20:30:41
_The documentation is not available anymore as the PR was closed or merged._<|||||>@Rocketknight1 Any idea on why this is failing? Only made modifications to Bert and Rembert, with copies to the other seven models based on make fix-copies. Model template seems to fail due to a bad import of Optional, from what I can tell.<|||||>@jacobdineen Thanks for this! I would guess that `make fix-copies` has copied code to the other models, but one or more of the files it copied to hasn't imported `Optional`, and so the code crashes there. If you add the missing imports to those files it should work! Also, some of the return types here seem to be missing bits - they're just `Union[Tuple]` but usually there's another return type as well in the union.
transformers
16,178
closed
clearer model variable naming: funnel
# What does this PR do? clearer model variable naming: funnel <img width="1512" alt="Ekran Resmi 2022-03-15 22 59 16" src="https://user-images.githubusercontent.com/58150504/158461148-2dc37be1-e491-49ac-9d0d-a47ee5f2d8b4.png"> ## Before submitting - [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case). - [*] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section? - [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link to it if that's the case. - [ ] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/master/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/master/docs#writing-source-documentation). - [ ] Did you write any new necessary tests? ## Who can review? @gante Anyone in the community is free to review the PR once the tests have passed. Feel free to tag members/contributors who may be interested in your PR.
03-15-2022 19:59:59
03-15-2022 19:59:59
_The documentation is not available anymore as the PR was closed or merged._
transformers
16,177
closed
type annotations for rembert w/ copies from bert (torch)
Added type annotations to Rembert per the guidelines. Modified Bert where copied from transformers.models.bert.modeling_bert #16059 @Rocketknight1
03-15-2022 17:32:41
03-15-2022 17:32:41
Note: `make fixup` is unable to run in my environment, with the error message: ``` /usr/bin/sh: -c: line 2: unexpected EOF while looking for matching `"' /usr/bin/sh: -c: line 3: syntax error: unexpected end of file make: *** [Makefile:10: modified_only_fixup] Error 2 ``` <|||||>_The documentation is not available anymore as the PR was closed or merged._<|||||>Ran `make fix-copies`. Many copies stem from Bert (torch), some of which do not already import `typing`. Added those imports where applicable.
transformers
16,176
closed
Translate accelerate.mdx from english to spanish
# What does this PR do? Add the spanish version of [`accelerate.mdx`](https://github.com/huggingface/transformers/blob/master/docs/source/accelerate.mdx) to [`transformers/docs/source_es`](https://github.com/huggingface/transformers/tree/master/docs/source_es) <!-- Congratulations! You've made it this far! You're not quite done yet though. Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution. Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change. Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost. --> <!-- Remove if not applicable --> Fixes https://github.com/huggingface/transformers/issues/15947 (issue) ## Before submitting - [x] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case). - [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section? ## Who can review? @omarespejel @osanseviero <!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @ If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**. Please tag fewer than 3 people. Models: - albert, bert, xlm: @LysandreJik - blenderbot, bart, marian, pegasus, encoderdecoder, t5: @patrickvonplaten, @patil-suraj - longformer, reformer, transfoxl, xlnet: @patrickvonplaten - fsmt: @stas00 - funnel: @sgugger - gpt2: @patrickvonplaten, @LysandreJik - rag: @patrickvonplaten, @lhoestq - tensorflow: @LysandreJik Library: - benchmarks: @patrickvonplaten - deepspeed: @stas00 - ray/raytune: @richardliaw, @amogkam - text generation: @patrickvonplaten - tokenizers: @n1t0, @LysandreJik - trainer: @sgugger - pipelines: @LysandreJik Documentation: @sgugger HF projects: - datasets: [different repo](https://github.com/huggingface/datasets) - rust tokenizers: [different repo](https://github.com/huggingface/tokenizers) Examples: - maintained examples (not research project or legacy): @sgugger, @patil-suraj - research_projects/bert-loses-patience: @JetRunner - research_projects/distillation: @VictorSanh -->
03-15-2022 17:26:16
03-15-2022 17:26:16
_The documentation is not available anymore as the PR was closed or merged._<|||||>Thanks, @Sangohe! I will review it during the week :)<|||||>> Muchas gracias! I left some simple comments. Thank you for your comments. I've added all the suggestions and committed the new version<|||||>Thanks again for the review. I've committed the last suggestions.<|||||>Thank you @Sangohe! 🤗 @sgugger LGTM 👍
transformers
16,175
closed
Add type hints for Reformer PyTorch
Adding type hints to the forward methods of the user-facing classes for the Reformer model (PyTorch), as mentioned in #16059. @Rocketknight1
03-15-2022 17:12:46
03-15-2022 17:12:46
_The documentation is not available anymore as the PR was closed or merged._
transformers
16,174
closed
Add type hints for Perceiver Pytorch
# What does this PR do? Added type annotations for Perceiver Pytorch models as per https://github.com/huggingface/transformers/issues/16059 @Rocketknight1
03-15-2022 17:12:07
03-15-2022 17:12:07
_The documentation is not available anymore as the PR was closed or merged._
transformers
16,173
closed
Attempt to make loading with `from_tf=True` more efficient
# What does this PR do? - Remove the need to get 2X RAM when loading from h5 file - Remove the need to have TF present to load `from_tf=True` (This is reduced to h5 only) A much better version (imo) of the finale code resides here: https://github.com/Narsil/transformers/pull/4 Keeping separate PRs since it's a more involved change. Script: ```python from transformers import GPT2Model model = GPT2Model.from_pretrained("gpt2", from_tf=True) ``` Making sure network is not a thing: ``` TRANSFORMERS_OFFLINE=1 /usr/bin/time -v python test.py ``` Master: ``` Command being timed: "python test.py" User time (seconds): 9.90 System time (seconds): 2.24 Percent of CPU this job got: 224% Elapsed (wall clock) time (h:mm:ss or m:ss): 0:05.42 Maximum resident set size (kbytes): 2411224 ``` With this PR: ``` Command being timed: "python test.py" User time (seconds): 2.41 System time (seconds): 0.97 Percent of CPU this job got: 161% Elapsed (wall clock) time (h:mm:ss or m:ss): 0:02.09 Maximum resident set size (kbytes): 1096128 ``` And just a comparison point loading from Pickled file directly: ``` Command being timed: "python test.py" User time (seconds): 2.63 System time (seconds): 0.95 Percent of CPU this job got: 178% Elapsed (wall clock) time (h:mm:ss or m:ss): 0:02.00 Maximum resident set size (kbytes): 1319596 ``` <!-- Congratulations! You've made it this far! You're not quite done yet though. Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution. Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change. Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost. --> <!-- Remove if not applicable --> Fixes # (issue) ## Before submitting - [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case). - [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section? - [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link to it if that's the case. - [ ] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/master/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/master/docs#writing-source-documentation). - [ ] Did you write any new necessary tests? ## Who can review? Anyone in the community is free to review the PR once the tests have passed. Feel free to tag members/contributors who may be interested in your PR. <!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @ If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**. Please tag fewer than 3 people. 
Models: - albert, bert, xlm: @LysandreJik - blenderbot, bart, marian, pegasus, encoderdecoder, t5: @patrickvonplaten, @patil-suraj - longformer, reformer, transfoxl, xlnet: @patrickvonplaten - fsmt: @stas00 - funnel: @sgugger - gpt2: @patrickvonplaten, @LysandreJik - rag: @patrickvonplaten, @lhoestq - tensorflow: @LysandreJik Library: - benchmarks: @patrickvonplaten - deepspeed: @stas00 - ray/raytune: @richardliaw, @amogkam - text generation: @patrickvonplaten - tokenizers: @n1t0, @LysandreJik - trainer: @sgugger - pipelines: @LysandreJik Documentation: @sgugger HF projects: - datasets: [different repo](https://github.com/huggingface/datasets) - rust tokenizers: [different repo](https://github.com/huggingface/tokenizers) Examples: - maintained examples (not research project or legacy): @sgugger, @patil-suraj - research_projects/bert-loses-patience: @JetRunner - research_projects/distillation: @VictorSanh -->
03-15-2022 16:32:09
03-15-2022 16:32:09
The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_16173). All of your documentation changes will be reflected on that endpoint.<|||||>@sgugger @LysandreJik I think all the tests are passing now. Is there any thing I should be extra careful about with this / test in some form ?<|||||>> I've left a few comments on the PR but I'm extremely nervous about changing a core piece of the library like this. This function is not intended to be performant as the conversion should be done once and for all, and then the TF weights should be uploaded in the repo for every other users. Yes, I fully understand the nervousness, I don't claim to be 100% it won't break anything. I think the pure optimization is NOT the only goal here (although it's the only tangible result in this PR). The bigger picture is an alternative for `pickle` which is unsafe by default and is used throughout PT. During the discussions around options to make things safer, the point was raised if it was even possible to load directly from h5 without the intermediary `tf_model`. Different tensor shapes was mentioned (indeed convolutions use different tensor forms). Fortunately, it seems that the naming and pre-existing code is so consistent that we can do the work directly without resorting to loading the thing in tensorflow. This PR is the result of my exploration of the problem where I think I show that it's possible to purely use the pre-existing h5 files on models that support it. The follow-up also shows (to the best of my knowledge) that we can create an h5 files purely from the pickled file (because all tensor manipulation happen from tensor name and don't require actual models to determine transformations), meaning that this option is actually viable if we want to propose a safer alternative to pickle in PT. Besides security `h5` could also solve some issues encountered by @stas00 for very large model files, where here you can load each tensors individually, and with lower RAM requirements. This PR enables to use even less RAM than the PT way because the tensors only exist in RAM one at a time before being fed onto the network (we can even remove that single copy should we want it). I will remark this as Draft just because as you mentioned, it shouldn't be merged without the extra caution it should come with. Should we even want to merge this. (We could imagine doing a real sweep of model files on the hub to make sure that this works on the real data saved out there besides the tests for one, + running all slow tests too). <|||||>Also a bit nervous about this PR, but I see the gain we're getting with such a change! It's a good sign that all tests are passing here since we do test `from_tf=True` for all models. For me I think I'm generally fine with this I think. Could we maybe do the following: - Run the changes for all pytorch and tensorflow tests cross tests (`RUN_PT_TF_CROSS_TESTS`). (No need to test the Slow tests), but we need to be sure that all cross tests are valid - @Narsil could you specific why we need both a `apply_transpose` and a `apply_reverse` method? How is the tensor reshaping / transposing logic different from before? <|||||> > Also a bit nervous about this PR, Would a solution like using the new code in a `try.. except` and running the exact old code if it fails make everyone feel less nervous ? (I know it would reduce my own nervousness) > Run the changes for all pytorch and tensorflow tests cross tests (RUN_PT_TF_CROSS_TESTS). 
(No need to test the Slow tests), but we need to be sure that all cross tests are valid That's already done I think. I am not sure why but I get some 5 skipped tests on brutasse, but otherwise it works, and seems to be checked by the ci already too here : https://app.circleci.com/pipelines/github/huggingface/transformers/36539/workflows/629c3ec0-d69f-44f2-bcec-256724806311/jobs/399239 > @Narsil could you specific why we need both a apply_transpose and a apply_reverse method? How is the tensor reshaping / transposing logic different from before? In this PR, reshaping is very close to the same. The only thing I allowed myself to change here was to prevent logic reshape fails (Like imagine we have a `(ATTN, VOCAB_SIZE)` weight in TF, and the PT target expects `(VOCAB_SIZE, ATTN)`. Imagine the code messed up and does not declare the right `TransposeType`, then maybe we could end up doing `(ATTN, VOCAB_SIZE)reshape((VOCAB_SIZE, ATTN))` which would work technically but it's a bug IMO since tensors are different from expected. Looking at the failures when `reshape` was missing, I deduced that the code was meant to simply handle (..., 1, ... 1 ) differences where it wouldn't necessarily be about `squeeze`/`unsqueeze` but also the position of the `1-dim` in the tensor. Doing reshapes of that sort is perfectly safe, so I kept it as-is. > In the follow-up (https://github.com/Narsil/transformers/pull/4) I remove this logic entirely by using new extra `TransposeType` instead. It is even scarier since TransposeType is defined purely through weight naming so if any arch doesn't have those equivalence tests, maybe we're changing tensors in an unintended way (it will crash and not be a silent error, but still). > > Having `TransposeType` be invertible is IMO a necessary property to provide confidently some new functionality to save things in the `h5` instead of `pickle` with minimum changes. > > ```python > weight_name = "clip.model.conv.bias" > tensors = torch.Tensor((x, y, z)) > > tf_name, transpose = transpose_type = convert_pt_weight_name_to_tf_weight_name(weight_name, tensor.shape) # Does not exist yet > > > # Save > tf_tensor = apply_reverse(tensor, tranpose) > save_h5(tf_name, tf_tensor) > ``` > The key thing here is that `transpose` needs only the name and `apply_reverse` would need only the original tensor, it wouldn't need to know the target. > Reversing the name itself is a bit more challenging since it's destructive in the TF -> PT path (`prefix_to_remove`) but I haven't explored that part yet, maybe the naming is consistent enough that we can still invert the name too. > > If we get there that means we could *always* save `tf_model.h5` purely from the PT model. (It does limit a little the TF naming of things, but it seems that naming is sooo consistent across the library that it's sort of already maintained/guaranteed at some point. All of this, is to provide the least possible amount of modifications to support `h5`. If we added a new file like `pytorch_model.h5` then we can purely dictate what goes on here (but it does mean creating a split in the library serialization format, meaning people with older transformers wouldn't be able to load the new `pytorch_model.h5` files etc.. It would cause much confusion where it is used etc... Not ideal. Keeping everything in already defined names means we can leverage pre-existing files and provide a safe/lower memory alternative on any model that has both PT and TF support, without modifying anything for users/models which is already quite nice ! 
> And **if** I get the naming thing reversible we add support to `save_pretrained(as_h5=True)` to allow saving directly in `h5` format (without needing to support the model in TF) and loading `(from_tf=True)` meaning we can also plan a smooth transition or make the choice of format a user-decision etc.. It's a big IF <|||||>A thing that would make me less nervous is splitting the refactoring and the actual code changes in two different PRs so we can see more clearly the actual diff. On the second PR that does the code change, testing the code works well for all kinds of architectures with checkpoints on the Hub would also be reassuring (it's tested in the CI on small configs but adding an integration check would be safer). We could test all checkpoints that are in the mappings with urls for instance (at the beginning of each modeling file).<|||||>> On the second PR that does the code change, testing the code works well for all kinds of architectures with checkpoints on the Hub would also be reassuring @sgugger good insight, I am finding many things which are not covered by the fast tests (mostly in the heads I think). I am looking into it.<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread. Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored.
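To make the memory argument above concrete, here is a rough sketch of the general idea (reading tensors out of an `.h5` checkpoint one at a time). This is illustrative only, not the PR's actual implementation, and the weight-name mapping and transposition logic are left out:

```python
import h5py
import numpy as np
import torch


def iter_h5_tensors(path):
    """Yield (name, torch.Tensor) pairs with only one tensor materialized at a time."""
    with h5py.File(path, "r") as f:
        names = []
        # collect dataset names first, then read them one by one
        f.visititems(lambda name, obj: names.append(name) if isinstance(obj, h5py.Dataset) else None)
        for name in names:
            array = np.asarray(f[name])  # only this tensor is loaded into RAM
            yield name, torch.from_numpy(array)


# for name, tensor in iter_h5_tensors("tf_model.h5"):
#     ...  # map `name` to the corresponding PyTorch state-dict key, transpose if needed
```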
transformers
16,172
closed
Add type hints for XLM (TensorFlow)
# What does this PR do? Adds type hints to the TensorFlow implementation of the XLM model.
03-15-2022 15:52:33
03-15-2022 15:52:33
The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_16172). All of your documentation changes will be reflected on that endpoint.
transformers
16,171
closed
bert: properly mention deprecation of TF2 conversion script
Hi, after training some BERT models with the TF2 implementation from the TF models repo, I just made the following observations: The training code was deprecated a while ago, and now lives under the `legacy` namespace. I have trained models with this "legacy" implementation but the TF2 conversion script is no longer working. I did some debugging, trained a few models and found out that the latest working version is v2.3. For that reason, I have just updated the description in the TF2 conversion script.
03-15-2022 15:23:51
03-15-2022 15:23:51
_The documentation is not available anymore as the PR was closed or merged._<|||||>Hi @LysandreJik , unfortunately, there's no working example code for pre-training in the TF models repo yet (I did some experiments, but I was not able to get a working setup). However, there's a Roformer code (https://github.com/tensorflow/models/tree/master/official/projects/roformer) available and training is working.<|||||>Sounds good, let's merge the PR like this then. Thanks for your contribution!
transformers
16,170
closed
[MT5Config] add relative_attention_max_distance in config
# What does this PR do? Add the `relative_attention_max_distance` attribute in `MT5Config`.
03-15-2022 14:58:35
03-15-2022 14:58:35
_The documentation is not available anymore as the PR was closed or merged._<|||||>Merging, as all mT5 models are crashing on master now.
transformers
16,169
closed
Add missing type hint for XLM (TensorFlow)
# What does this PR do? <!-- Congratulations! You've made it this far! You're not quite done yet though. Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution. Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change. Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost. --> <!-- Remove if not applicable --> Fixes # (issue) ## Before submitting - [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case). - [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section? - [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link to it if that's the case. - [ ] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/master/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/master/docs#writing-source-documentation). - [ ] Did you write any new necessary tests? ## Who can review? Anyone in the community is free to review the PR once the tests have passed. Feel free to tag members/contributors who may be interested in your PR. <!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @ If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**. Please tag fewer than 3 people. Models: - albert, bert, xlm: @LysandreJik - blenderbot, bart, marian, pegasus, encoderdecoder, t5: @patrickvonplaten, @patil-suraj - longformer, reformer, transfoxl, xlnet: @patrickvonplaten - fsmt: @stas00 - funnel: @sgugger - gpt2: @patrickvonplaten, @LysandreJik - rag: @patrickvonplaten, @lhoestq - tensorflow: @LysandreJik Library: - benchmarks: @patrickvonplaten - deepspeed: @stas00 - ray/raytune: @richardliaw, @amogkam - text generation: @patrickvonplaten - tokenizers: @n1t0, @LysandreJik - trainer: @sgugger - pipelines: @LysandreJik Documentation: @sgugger HF projects: - datasets: [different repo](https://github.com/huggingface/datasets) - rust tokenizers: [different repo](https://github.com/huggingface/tokenizers) Examples: - maintained examples (not research project or legacy): @sgugger, @patil-suraj - research_projects/bert-loses-patience: @JetRunner - research_projects/distillation: @VictorSanh -->
03-15-2022 14:47:47
03-15-2022 14:47:47
_The documentation is not available anymore as the PR was closed or merged._
transformers
16,168
closed
Fix FlaxRoFormerClassificationHead activation
# What does this PR do? The current `FlaxRoFormerClassificationHead` uses the hard-coded activation `nn.tanh`, but the PyTorch version `RoFormerClassificationHead` uses `ACT2FN[self.config.hidden_act]`, which is `gelu` by default (and this is the one used by the default checkpoint's config). This PR fixes this activation issue in `FlaxRoFormerClassificationHead`.
03-15-2022 14:22:17
03-15-2022 14:22:17
To get more familiar with library management/maintenance, I would like to hear from you: does this count as a breaking change? <|||||>_The documentation is not available anymore as the PR was closed or merged._<|||||>Merging now since this is clearly a bug fix, as well as a tiny change
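A small Flax sketch of the pattern the fix introduces, i.e. looking the activation up from the config instead of hard-coding `tanh` (illustrative only; the `ACT2FN` dict below is a simplified stand-in for the library's mapping, and the layer sizes are made up):

```python
import jax
import jax.numpy as jnp
import flax.linen as nn

# simplified stand-in for the activation lookup table used in the library
ACT2FN = {"gelu": jax.nn.gelu, "relu": jax.nn.relu, "tanh": jnp.tanh}


class ClassificationHead(nn.Module):
    hidden_size: int = 768
    num_labels: int = 2
    hidden_act: str = "gelu"  # taken from the model config in the real code

    @nn.compact
    def __call__(self, hidden_states):
        x = hidden_states[:, 0, :]             # pool the first token
        x = nn.Dense(self.hidden_size)(x)
        x = ACT2FN[self.hidden_act](x)         # previously hard-coded as tanh
        return nn.Dense(self.num_labels)(x)
```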
transformers
16,167
closed
Fix some Flax models' `hidden_states`
# What does this PR do? Fix some Flax models where the last element in `hidden_states` differs between the PT and Flax versions. ## More context Some models have ``` last_hidden_state = self.layer_norm(last_hidden_state) ``` In the PyTorch version, the returned `hidden_states` has this `last_hidden_state` (after layer norm) as its last element. In the Flax version, the last element of the returned `hidden_states` is the one before the layer norm. This PR fixes this inconsistency (by adopting the PyTorch logic).
03-15-2022 13:30:59
03-15-2022 13:30:59
_The documentation is not available anymore as the PR was closed or merged._
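A toy sketch of the behaviour being aligned (illustrative only, not the actual modeling code): the last entry of `hidden_states` should be the tensor after the final layer norm, as in the PyTorch models:

```python
import flax.linen as nn


class TinyEncoder(nn.Module):
    @nn.compact
    def __call__(self, x, output_hidden_states: bool = True):
        all_hidden_states = (x,)
        x = nn.Dense(4)(x)
        all_hidden_states += (x,)      # pre-layer-norm value collected per layer
        x = nn.LayerNorm()(x)          # final layer norm
        if output_hidden_states:
            # the fix: make the post-layer-norm tensor the last element
            all_hidden_states = all_hidden_states[:-1] + (x,)
        return x, all_hidden_states
```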
transformers
16,166
closed
Update Transformer-XL with TF decorator
# What does this PR do? Unpacks TF model inputs through a decorator, improving code clarity for the transformer XL model. To be read with issue #16051 , which holds the description of the problem, the proposed solution, and future plan. ## Before submitting - [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case). - [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section? - [x] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link to it if that's the case. #16051 - [ ] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/master/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/master/docs#writing-source-documentation). - [ ] Did you write any new necessary tests? ## Who can review? @LysandreJik @gante
03-15-2022 12:42:32
03-15-2022 12:42:32
tests results using pytest 👇 ``` RUN_SLOW=1 pytest -vv tests/transfo_xl/test_modeling_tf_transfo_xl.py tests/transfo_xl/test_modeling_tf_transfo_xl.py::TFTransfoXLModelTest::test_attention_outputs <- tests/test_modeling_tf_common.py PASSED [ 2%] tests/transfo_xl/test_modeling_tf_transfo_xl.py::TFTransfoXLModelTest::test_compile_tf_model <- tests/test_modeling_tf_common.py PASSED [ 5%] tests/transfo_xl/test_modeling_tf_transfo_xl.py::TFTransfoXLModelTest::test_config PASSED [ 8%] tests/transfo_xl/test_modeling_tf_transfo_xl.py::TFTransfoXLModelTest::test_determinism <- tests/test_modeling_tf_common.py PASSED [ 11%] tests/transfo_xl/test_modeling_tf_transfo_xl.py::TFTransfoXLModelTest::test_forward_signature <- tests/test_modeling_tf_common.py PASSED [ 14%] tests/transfo_xl/test_modeling_tf_transfo_xl.py::TFTransfoXLModelTest::test_generate_with_headmasking <- tests/test_modeling_tf_common.py PASSED [ 17%] tests/transfo_xl/test_modeling_tf_transfo_xl.py::TFTransfoXLModelTest::test_headmasking <- tests/test_modeling_tf_common.py PASSED [ 20%] tests/transfo_xl/test_modeling_tf_transfo_xl.py::TFTransfoXLModelTest::test_hidden_states_output <- tests/test_modeling_tf_common.py PASSED [ 23%] tests/transfo_xl/test_modeling_tf_transfo_xl.py::TFTransfoXLModelTest::test_initialization <- tests/test_modeling_tf_common.py PASSED [ 26%] tests/transfo_xl/test_modeling_tf_transfo_xl.py::TFTransfoXLModelTest::test_inputs_embeds <- tests/test_modeling_tf_common.py PASSED [ 29%] tests/transfo_xl/test_modeling_tf_transfo_xl.py::TFTransfoXLModelTest::test_keras_save_load <- tests/test_modeling_tf_common.py PASSED [ 32%] tests/transfo_xl/test_modeling_tf_transfo_xl.py::TFTransfoXLModelTest::test_keyword_and_dict_args <- tests/test_modeling_tf_common.py PASSED [ 35%] tests/transfo_xl/test_modeling_tf_transfo_xl.py::TFTransfoXLModelTest::test_lm_head_model_beam_search_generate_dict_outputs <- tests/test_modeling_tf_common.py PASSED [ 38%] tests/transfo_xl/test_modeling_tf_transfo_xl.py::TFTransfoXLModelTest::test_lm_head_model_no_beam_search_generate_dict_outputs <- tests/test_modeling_tf_common.py PASSED [ 41%] tests/transfo_xl/test_modeling_tf_transfo_xl.py::TFTransfoXLModelTest::test_lm_head_model_random_beam_search_generate <- tests/test_modeling_tf_common.py PASSED [ 44%] tests/transfo_xl/test_modeling_tf_transfo_xl.py::TFTransfoXLModelTest::test_lm_head_model_random_no_beam_search_generate <- tests/test_modeling_tf_common.py PASSED [ 47%] tests/transfo_xl/test_modeling_tf_transfo_xl.py::TFTransfoXLModelTest::test_load_with_mismatched_shapes <- tests/test_modeling_tf_common.py PASSED [ 50%] tests/transfo_xl/test_modeling_tf_transfo_xl.py::TFTransfoXLModelTest::test_loss_computation <- tests/test_modeling_tf_common.py PASSED [ 52%] tests/transfo_xl/test_modeling_tf_transfo_xl.py::TFTransfoXLModelTest::test_model_common_attributes PASSED [ 55%] tests/transfo_xl/test_modeling_tf_transfo_xl.py::TFTransfoXLModelTest::test_model_from_pretrained PASSED [ 58%] tests/transfo_xl/test_modeling_tf_transfo_xl.py::TFTransfoXLModelTest::test_model_main_input_name <- tests/test_modeling_tf_common.py PASSED [ 61%] tests/transfo_xl/test_modeling_tf_transfo_xl.py::TFTransfoXLModelTest::test_model_outputs_equivalence <- tests/test_modeling_tf_common.py PASSED [ 64%] tests/transfo_xl/test_modeling_tf_transfo_xl.py::TFTransfoXLModelTest::test_numpy_arrays_inputs <- tests/test_modeling_tf_common.py PASSED [ 67%] tests/transfo_xl/test_modeling_tf_transfo_xl.py::TFTransfoXLModelTest::test_onnx_compliancy <- 
tests/test_modeling_tf_common.py PASSED [ 70%] tests/transfo_xl/test_modeling_tf_transfo_xl.py::TFTransfoXLModelTest::test_onnx_runtime_optimize <- tests/test_modeling_tf_common.py PASSED [ 73%] tests/transfo_xl/test_modeling_tf_transfo_xl.py::TFTransfoXLModelTest::test_pt_tf_model_equivalence <- tests/test_modeling_tf_common.py SKIPPED (test is PT+TF test) [ 76%] tests/transfo_xl/test_modeling_tf_transfo_xl.py::TFTransfoXLModelTest::test_resize_token_embeddings <- tests/test_modeling_tf_common.py PASSED [ 79%] tests/transfo_xl/test_modeling_tf_transfo_xl.py::TFTransfoXLModelTest::test_save_load <- tests/test_modeling_tf_common.py PASSED [ 82%] tests/transfo_xl/test_modeling_tf_transfo_xl.py::TFTransfoXLModelTest::test_save_load_config <- tests/test_modeling_tf_common.py PASSED [ 85%] tests/transfo_xl/test_modeling_tf_transfo_xl.py::TFTransfoXLModelTest::test_transfo_xl_lm_head PASSED [ 88%] tests/transfo_xl/test_modeling_tf_transfo_xl.py::TFTransfoXLModelTest::test_transfo_xl_model PASSED [ 91%] tests/transfo_xl/test_modeling_tf_transfo_xl.py::TFTransfoXLModelTest::test_transfo_xl_sequence_classification_model PASSED [ 94%] tests/transfo_xl/test_modeling_tf_transfo_xl.py::TFTransfoXLModelTest::test_xla_mode PASSED [ 97%] tests/transfo_xl/test_modeling_tf_transfo_xl.py::TFTransfoXLModelLanguageGenerationTest::test_lm_generate_transfo_xl_wt103 SKIPPED (Skip test until #12651 is resolved.) [100%] ```<|||||>_The documentation is not available anymore as the PR was closed or merged._
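To show what the decorator-based approach looks like in practice, here is a schematic sketch (a toy layer, not the actual Transformer-XL code; it assumes the `unpack_inputs` helper exported from `transformers.modeling_tf_utils` and a standard `config` attribute on the layer):

```python
import tensorflow as tf
from transformers import PretrainedConfig
from transformers.modeling_tf_utils import unpack_inputs


class TFToyMainLayer(tf.keras.layers.Layer):
    def __init__(self, config: PretrainedConfig, **kwargs):
        super().__init__(**kwargs)
        self.config = config  # the decorator reads default flags from here

    @unpack_inputs
    def call(self, input_ids=None, attention_mask=None, training=False):
        # Previously each `call` began with a long manual `input_processing(...)`
        # block to support dict / tuple / keyword inputs; the decorator now
        # normalizes them into plain keyword arguments before the body runs.
        return input_ids
```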
transformers
16,165
closed
value check for typical sampling
Fixes #16080
03-15-2022 11:20:17
03-15-2022 11:20:17
_The documentation is not available anymore as the PR was closed or merged._
transformers
16,164
closed
Add type hints for Pegasus model (PyTorch)
Adding type hints to the forward methods of the user-facing classes for the Pegasus model (PyTorch), as mentioned in #16059. @Rocketknight1
03-15-2022 10:58:55
03-15-2022 10:58:55
The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_16164). All of your documentation changes will be reflected on that endpoint.<|||||>it seems like BART model was not updated yet, hence the inconsistency<|||||>@Tegzes Unfortunately this will have to wait for the BART PR to avoid breaking repository consistency, but I'll take a look at that point!<|||||>@Rocketknight1 I saw that BART model was updated, along with multiple models depending on it, including this one. Should I close the PR?<|||||>Hi @Tegzes, there might still be portions of `modeling_pegasus.py` that need type hints - if you could check that and either submit a Pegasus PR or close this one that would be great. And sorry for the confusion caused by the copying dependencies!
transformers
16,163
closed
Added type hints to YOSO
# What does this PR do? I added type annotations for YOSO PyTorch as described in [#16059](https://github.com/huggingface/transformers/issues/16059) <!-- Congratulations! You've made it this far! You're not quite done yet though. Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution. Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change. Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost. --> <!-- Remove if not applicable --> ## Before submitting - [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case). - [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section? - [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link to it if that's the case. - [ ] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/master/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/master/docs#writing-source-documentation). - [ ] Did you write any new necessary tests? ## Who can review? @Rocketknight1 @gante @sgugger Anyone in the community is free to review the PR once the tests have passed. Feel free to tag members/contributors who may be interested in your PR. <!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @ If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**. Please tag fewer than 3 people. Models: - albert, bert, xlm: @LysandreJik - blenderbot, bart, marian, pegasus, encoderdecoder, t5: @patrickvonplaten, @patil-suraj - longformer, reformer, transfoxl, xlnet: @patrickvonplaten - fsmt: @stas00 - funnel: @sgugger - gpt2: @patrickvonplaten, @LysandreJik - rag: @patrickvonplaten, @lhoestq - tensorflow: @LysandreJik Library: - benchmarks: @patrickvonplaten - deepspeed: @stas00 - ray/raytune: @richardliaw, @amogkam - text generation: @patrickvonplaten - tokenizers: @n1t0, @LysandreJik - trainer: @sgugger - pipelines: @LysandreJik Documentation: @sgugger HF projects: - datasets: [different repo](https://github.com/huggingface/datasets) - rust tokenizers: [different repo](https://github.com/huggingface/tokenizers) Examples: - maintained examples (not research project or legacy): @sgugger, @patil-suraj - research_projects/bert-loses-patience: @JetRunner - research_projects/distillation: @VictorSanh -->
03-15-2022 09:45:22
03-15-2022 09:45:22
_The documentation is not available anymore as the PR was closed or merged._<|||||>@Rocketknight1 Kindly help review
transformers
16,162
closed
ResNetD
# What does this PR do? This WIP PR adds ResNetD, proposed in [Bag of Tricks for Image Classification with Convolutional Neural Networks](https://arxiv.org/abs/1812.01187). Currently, the model can be used as follows: ```python import requests from io import BytesIO import torch from PIL import Image from transformers import AutoFeatureExtractor, ResNetForImageClassification res = requests.get('https://github.com/huggingface/transformers/blob/master/tests/fixtures/tests_samples/COCO/000000039769.png?raw=true') image = Image.open(BytesIO(res.content)) feature_extractor = AutoFeatureExtractor.from_pretrained("zuppif/resnet-d-50") model = ResNetForImageClassification.from_pretrained("zuppif/resnet-d-50").eval() inputs = feature_extractor(image, return_tensors="pt") outputs = model(**inputs) print(model.config.id2label[torch.argmax(outputs.logits).item()]) # tiger cat ``` ## TODO - [ ] Transfer weights to amazon
03-15-2022 08:47:22
03-15-2022 08:47:22
The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_16162). All of your documentation changes will be reflected on that endpoint.<|||||>> Thanks for adding this. I wonder if the ResNet-D architecture shouldn't have switches in its architecture to allow all modifications (A, B, C) independently, since they were all defined in the same paper. > > Apart from that, LGTM! The model outputs just need to refactored in one model output in a separate PR. So the `d` variant is the only one used in practice because it's the combination of all the empirical improvements they show in the paper (so variant b + c = d). <|||||>I'm aware, but since the paper explores B and C independently too, it might make sense to enable them via config flags (while keeping the default to D).<|||||>> I'm aware, but since the paper explores B and C independently too, it might make sense to enable them via config flags (while keeping the default to D). This seems impossible with the current setup, the model is called `ResNetD` thus it has to be resnet-d. If we want to have all the variants we should add them to `ResNet` and create `ResNetD` using a specific set of parameters in the `ResNetConfig`. For instance, if I want to create `ResNetC` how should I call the `*ShortCut`? `ResNetDCShortCut`? <|||||>> This seems impossible with the current setup, the model is called ResNetD thus it has to be resnet-d. Why? It should be called ResNet-D (because that is the final name in the paper) and it can still have config flags to enable the variants. Anyhow, you don't want to add it so let's stop commenting about it and just merge this.<|||||>Update the codebase and modify one test in `test_config_auto` to check for an exact match (otherwise `resnetd` is matched with `resnet` thus failing the test)<|||||>Due to the incoming changes from #16188 , I want to merge that first and then rebase + update the codebase here<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread. Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored.
transformers
16,161
closed
How to resume run_t5_mlm_flax.py script?
I'm trying to resume training when using the `run_t5_mlm_flax.py` script, but I've noticed that I don't see any checkpoint folders in my `output_dir`—there are a few `events.out.tfevents[...].v2` files, and a `flax_model.msgpack` file. The latter sounds like maybe it's the weights file, but I'm not sure. When I try to run training with `--overwrite_output_dir False` it just seems to start again from scratch. I'm obviously missing something here... Any help appreciated.
03-15-2022 03:47:17
03-15-2022 03:47:17
Ping @patil-suraj <|||||>Hi @jbmaxwell ! Checkpoint loading and saving are currently not implemented in the Flax examples. > The latter sounds like maybe it's the weights file, but I'm not sure Yes, you are right: the script only saves the model weights and not the optimizer state. You can load the saved weights by passing their path as the `--model_name_or_path` argument. To save and load the optimizer state: ```python from pathlib import Path import jax from flax.serialization import to_bytes, from_bytes # save the opt state opt_state = jax.device_get(state.opt_state) with (Path(output_dir) / "opt_state.msgpack").open("wb") as f: f.write(to_bytes(opt_state)) # load the opt state (`state.opt_state` provides the target pytree structure) with (Path(output_dir) / "opt_state.msgpack").open("rb") as f: opt_state = from_bytes(state.opt_state, f.read()) ```<|||||>Ah, okay. Thanks for the note!<|||||>I'm finally getting back to this... I've trained the model on my data, which seemed to converge pretty well. There are two tasks I'm interested in: 1) sentence infilling, and 2) paraphrasing. Paraphrasing seems like it should be pretty simple since it kind of amounts to "translating" a sentence in its own language, optionally varying `top_p`, `temperature`, etc. But I'm really not sure about infilling. I've tried the code posted here (https://github.com/huggingface/transformers/issues/3972), but it seems very hit-and-miss. Mind you, I do get this warning when the model loads: `Some weights of T5ForConditionalGeneration were not initialized from the Flax model and are newly initialized: ['decoder.embed_tokens.weight', 'encoder.embed_tokens.weight'] You should probably TRAIN this model on a down-stream task to be able to use it for predictions and inference.`, which makes me think that either it hasn't loaded my trained weights, or there are fine-tuning layers that are as yet untrained (I've seen this after pretraining other models in the past). Any guidance on how to get sentence infilling working? Any blogs, posts, or threads I should check out? Any help much appreciated. Thanks!<|||||>Hi @jbmaxwell, When I use `model = T5ForConditionalGeneration.from_pretrained(pretrained_model_name_or_path_by_flax, from_flax=True)` I also get this warning: `Some weights of T5ForConditionalGeneration were not initialized from the Flax model and are newly initialized: ['decoder.embed_tokens.weight', 'encoder.embed_tokens.weight'] You should probably TRAIN this model on a down-stream task to be able to use it for predictions and inference.` but when I use `model_flax = FlaxT5ForConditionalGeneration.from_pretrained(pretrained_model_name_or_path_by_flax)` no warning appears, so I checked `model_flax`'s weights. There is no embed_tokens weight in `model_flax`. Is this normal?
transformers
16,160
closed
`TFBertModel.predict()` and `TFBertModel()` return different results.
## Environment info - `transformers` version: 4.17.0 - Platform: Linux-5.4.0-99-generic-x86_64-with-glibc2.27 - Python version: 3.9.0 - PyTorch version (GPU?): 1.10.2+cu102 (True) - Tensorflow version (GPU?): 2.8.0 (True) - Flax version (CPU?/GPU?/TPU?): not installed (NA) - Jax version: not installed - JaxLib version: not installed - Using GPU in script?: yes - Using distributed or parallel set-up in script?: no Models: It affects multiple models, at least BERT and Sentence-BERT. - ALBERT, BERT, XLM, DeBERTa, DeBERTa-v2, ELECTRA, MobileBert, SqueezeBert: @LysandreJik ## Information Model I am using (Bert, XLNet ...): BERT-base The problem arises when using: * [ ] my own modified scripts: (give details below) The tasks I am working on is: * [ ] an official GLUE/SQUaD task: (give the name) * [ ] my own task or dataset: (give details below) ## To reproduce Steps to reproduce the behavior: Run the following script: ``` from transformers import AutoTokenizer,TFAutoModel,AutoConfig,TFAutoModelForSequenceClassification tokenizer = AutoTokenizer.from_pretrained('bert-base-uncased') model = TFAutoModel.from_pretrained('bert-base-uncased') sentence='It can be any sentence here' encoded_input = tokenizer(text, return_tensors='tf') output0 = model(encoded_input)[0] output1 = model(encoded_input,training=False)[0] output2 = model(encoded_input,training=True)[0] output3 = model.call(encoded_input)[0] output4 = model.call(encoded_input,training=False)[0] output5 = model.call(encoded_input,training=True)[0] output6 = model.predict(encoded_input.values())[0] print(output0[0,0,:10]) print(output1[0,0,:10]) print(output2[0,0,:10]) print(output3[0,0,:10]) print(output4[0,0,:10]) print(output5[0,0,:10]) print(output6[0,0,:10]) ``` The output is : ``` tf.Tensor( [-0.65482056 0.13908163 -0.7431607 -0.46033248 -0.834605 0.24891722 1.042905 0.34481025 -0.19269213 -0.32815126], shape=(10,), dtype=float32) tf.Tensor( [-0.65482056 0.13908163 -0.7431607 -0.46033248 -0.834605 0.24891722 1.042905 0.34481025 -0.19269213 -0.32815126], shape=(10,), dtype=float32) tf.Tensor( [-0.65266645 -0.03810687 -0.776067 -0.07178783 -0.9046151 0.18066 0.82093364 0.53261256 0.06615249 -0.03404194], shape=(10,), dtype=float32) tf.Tensor( [-0.65482056 0.13908163 -0.7431607 -0.46033248 -0.834605 0.24891722 1.042905 0.34481025 -0.19269213 -0.32815126], shape=(10,), dtype=float32) tf.Tensor( [-0.65482056 0.13908163 -0.7431607 -0.46033248 -0.834605 0.24891722 1.042905 0.34481025 -0.19269213 -0.32815126], shape=(10,), dtype=float32) tf.Tensor( [-0.7354083 0.18443112 -1.049724 -0.532299 -0.8917109 0.42158493 1.2287179 0.6927482 0.02612726 -0.5916267 ], shape=(10,), dtype=float32) [-0.8285751 0.2074836 -0.36887544 -0.3653527 -1.0217763 0.608446 0.9114009 0.4278982 -0.02307865 -0.09100434] ``` The output are different between `model(encoded_input,training=True)` `model.call(encoded_input,training=True)`; And the output are different between `model.predict(encoded_input.values())` and `model(encoded_input)`, This one makes me very confuse and I believe this is a bug. ## Expected behavior `model.predict(encoded_input.values())` and `model(encoded_input)` should give the same output.
03-15-2022 01:36:05
03-15-2022 01:36:05
Pinging @Rocketknight1 @gante <|||||>I'll take it! Investigating now.<|||||>Hi @a7744hsc, I don't think there's a bug here. When the model is called with `training=True`, certain layers like Dropout are active. This means that `model(encoded_input,training=True)[0]` and `model.call(encoded_input,training=True)[0]` are giving different outputs, but not because they actually call different code - it's because layers like Dropout are randomly dropping out different neurons. You can see this by simply calling `model(encoded_input,training=True)[0]` twice, and noting that the results are different each time, whereas if you use `training=False` then the results are consistent, because Dropout is disabled. The second issue you mentioned is the difference between `model.predict(encoded_input.values())` and `model(encoded_input)`. The issue here is that `encoded_input.values()` does not unpack the data the way the model expects - it loses the dict keys and so the model can't tell which array corresponds to which model argument. As a result, this line will give bad results. If you want to use `model.predict`, you could just try `model.predict(encoded_input["input_ids"])`. If you do this, you'll see the results are extremely close to the results you get from `model(encoded_input,training=False)`. The remaining small differences are on the order of `1e-4` or `1e-5` and are mostly just `float32` rounding errors, I think. (The technical reason is, I suspect, that `model.predict` compiles the code with `tf.function` internally, which means it takes a slightly different path than `model.call`, which is eager, and thus some small rounding errors can occur between the two implementations) I'm going to close this issue for now, but if you think there are bugs in the code and this reply hasn't addressed them, feel free to re-open it with a new comment!<|||||>@Rocketknight1 Thanks a lot for your detailed explanation, it's quite clear for me. Wish you have a good day. by the way, the sequence for `predict` should be `model.predict([encoded_input["input_ids"],encoded_input["attention_mask"],encoded_input["token_type_ids"]])`
transformers
16,159
closed
Text2Speech classes
Hey, it would be very cool if there were classes already developed for carrying out Text2Speech tasks. We do have many classes capable of doing Speech2Text, but for the opposite task there are no transformer classes available, although there are starting to be models in the hub: https://huggingface.co/facebook/tts_transformer-es-css10 It would be great if instead of using fairseq for loading those models, we could load them and fine-tune them further with Transformers.
03-15-2022 00:01:44
03-15-2022 00:01:44
Pinging @anton-l for knowledge :)<|||||>Hi @alexvaca0, thanks for your interest! We're currently working on implementing FastSpeech2 natively: https://github.com/huggingface/transformers/pull/15773 And looking into FastPitch as well: https://github.com/huggingface/transformers/issues/16349 Feel free to subscribe to those PRs/issues to follow the progress :)
transformers
16,158
closed
Added spanish translation of quicktour.mdx
# Translation of quicktour.mdx into Spanish I made the translation of quicktour.mdx into Spanish (fixes #15947). The document is located in the transformers/docs/source_es folder. @omarespejel <!-- Congratulations! You've made it this far! You're not quite done yet though. Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution. Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change. Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost. --> <!-- Remove if not applicable --> ## Before submitting - [x] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case). - [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section?
03-14-2022 22:36:12
03-14-2022 22:36:12
_The documentation is not available anymore as the PR was closed or merged._<|||||>Thanks, @Duedme! LGTM!
transformers
16,157
open
Implement Maximal Update Parametrization (muP)
# 🚀 Feature request <!-- A clear and concise description of the feature proposal. Please provide a link to the paper and code in case they exist. --> This request is to open up a discussion on 1) whether it makes sense to implement [Maximal Update Parametrization (abbreviated muP)](http://arxiv.org/abs/2203.03466) in Huggingface, 2) if so, how to do it. ## Motivation <!-- Please outline the motivation for the proposal. Is your feature request related to a problem? e.g., I'm always frustrated when [...]. If this is related to another GitHub issue, please link here too. --> Hi, I'm a maintainer for the [mup package](https://github.com/microsoft/mup) ([paper](http://arxiv.org/abs/2203.03466)). This repo allows one to implement in their models a special parametrization called maximal update parametrization, or muP, that has the special property that narrow and wide networks share the same optimal hyperparameters (like learning rate, initialization, etc). This is demonstrated below on a Transformer trained with adam, where on the left we have the pytorch default parametrization and the right we have muP. ![image](https://user-images.githubusercontent.com/53244851/158121650-ceb833c4-3cd9-4861-a75e-18ec8346757c.png) Most strikingly, this property can be used to tune hyperparameters for extremely large neural networks like GPT-3 that is too expensive to train more than once, by just tuning a tiny version of it. But even for "regular joe" users, muP can alleviate a lot of the pain when transitioning from exploration to scaling up and finding performance suffer for mysterious reasons. Transformers in particular is somewhat infamous for problems like training instability. So having muP integrated natively into Huggingface can benefit a lot of users at once. muP can be implemented in a backward compatible way, as shown below, so users do not need to worry about it breaking existing codebases. See this [twitter thread](https://twitter.com/TheGregYang/status/1501294412126560257?s=20&t=0B9TXBOHEaAPZWXA6xS8rQ) for more (but brief) information about how this works, and this [blog post](https://www.microsoft.com/en-us/research/blog/%C2%B5transfer-a-technique-for-hyperparameter-tuning-of-enormous-neural-networks/) for less brief overview. ## Your contribution <!-- Is there any way that you could help, e.g. by submitting a PR? Make sure to read the CONTRIBUTING.MD readme: https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md --> Now let's return to the two questions at the beginning: 1) whether it makes sense to implement Maximal Update Parametrization (abbreviated muP) in Huggingface, 2) if so, how to do it. For 1), the popularity (or not) of this issue should serve as an indicator of community interest, and the above makes the case for the utility of this integration. For 2), we have examples of how to integrate muP with some common (PyTorch) Huggingface transformers in our [mutransformers repo](https://github.com/microsoft/mutransformers). ### Current Example Implementation In summary, to modify an existing Huggingface transformer to implement muP, one needs to 1. Switch any readout layer (dimensions: width -> number of labels) from `nn.Linear` to `mup.MuReadout`. 2. Modify the `_init_weights` method to use `mup.init.*` methods instead of `nn.init.*` methods (or equivalent). 3. Scale the attention logits like 1/d instead of 1/sqrt(d) 4. Use `mup.AdamW` instead of the pytorch or Huggingface version. 
In addition, when using a `mutransformer`, one needs to provide a "base shape file" that lets the model know how to properly scale the learning rate and attention with width. This is designed so that if the model parameter shapes are the same as the "base shapes", then the model is in the original parametrization, i.e. backward compatible. ```python from mutransformers import BertConfig, BertForMaskedLM # instantiate model model = BertForMaskedLM(config=BertConfig(...)) # set base shapes set_base_shapes(model, path_to_base_shape_file) # re-initialize model.apply(model._init_weights) ``` ### More Seamless Integration Now, the [mutransformers repo](https://github.com/microsoft/mutransformers) is primarily designed to serve as examples of how to implement [muP](https://github.com/microsoft/mup) into existing transformers. So all of the above can be streamlined if we really want seamless integration into Huggingface. For example, the user interface for instantiating a model could just be the same as it is now, but we just have an additional flag `mup=True` in `BertConfig` that says to switch on `mup`. `BertConfig` itself may carry a default set of base shapes for use in this scenario, which the user can also modify if necessary. ```python # the model automatically sets base shapes based on defaults in BertConfig # no need to re-initialize either model = BertForMaskedLM(config=BertConfig(mup=True,...)) # use model immediately, e.g., train ``` In addition, `mup.MuAdamW` can be incorporated natively into Huggingface as well, so that there is no dependency on the `mup` package at all. ### muP for All Transformers? As, currently, there is no automatic way of backfitting existing transformers, it could be quite a task to add muP to all of the transformers in Huggingface. So a good practical compromise is to just implement muP for the most commonly used models in Huggingface. In the interim, research can be done on a method of such automatic backfitting. This could even involve a pull request into PyTorch core. ### Conclusion Again, this issue is intended to start the discussion of whether and how to make muP available to Huggingface users natively. It could be that the best course forward is to have users implement muP transformers themselves as in `mutransformers`, or even to build `mutransformers` into such a repo of muP transformers. And even if we do decide to integrate muP into Huggingface, there could be many ways to do it. I hope discussion here could elucidate the right course of action.
03-14-2022 22:08:55
03-14-2022 22:08:55
Pinging maintainers for knowledge: @patrickvonplaten @sgugger @patil-suraj <|||||>This is only really relevant for pretraining I assume no? I wonder whether it might make more sense to add this directly to `accelerate` ? cc @sgugger<|||||>Hi Patrick, I'm another maintainer of the `mup` repo. It's true that the biggest payoff will probably come from applying our technique to large-scale pretraining, but `mup` can also help users who are working with novel architectures on a smaller scale. For example, someone might modify an existing model in `transformers` to test a new idea, and `mup` can improve stability and reduce the need to tune HPs when they gradually scale up. We aren't sure if this is a common use case for `transformers`, but mentioning it in case it is. We are more than happy to look into integration with other tools such as `accelerate`. A quick scan tells me that it abstracts away the management of `device` in PyTorch without having to know the architecture inside an `nn.Module`. `mup` on the other hand does need to know the architecture, specifically, which weight dimensions go to infinity. We'd love to hear more from you guys regarding this! <|||||>From what I gather of the `mup` repository, it's not general enough (yet?) to be integrated into Accelerate as it seems to be very targeted toward Transformer models, whereas Accelerate handles any kind of models. As for integrating into Transformers, I think everyone would be delighted to see it as easily accessible as possible. There is just the (big) catch of modifying every modeling file for this. It's not really an option for two reasons: 1. users are getting upset with us that modeling files already contain too much stuff they do not need when they tweak it for their experiments, so this would add more code that's not strictly necessary just to run BERT, GPT-2 etc. 2. there are currently 108 different architectures, so waaaay too many modeling files ;-) As such it would be way more powerful if we could design a function that automatically converts a model to be used with muP. The first two points you mention are easy to do on an existing model (we can change the Linear layers on the fly and re-init the weights), the last one is a tiny bit more complex. I don't know if you have any idea on this. As for making the `mup.AdamW` accessible, we wouldn't necessarily take the code, but it's very easy to add support for a new optimizer in the Trainer with an optional dependency on `mutransformers`. If we don't manage to have such a function, we also have a feature where you can host any modeling code on the Hub and have it run with Transformers (using the `AutoModel.from_pretrained` API). See [here](https://huggingface.co/docs/transformers/custom_models) for the documentation. It would allow us to easily integrate muP in our examples while not changing the code of popular models such as BERT and GPT-2. Let me know your thoughts! <|||||>Hi @sgugger, is there any particular reason you say that `mup` is very targeted toward Transformers? We definitely designed `mup` with general models in mind, even though Transformers would be where a lot of payoffs would be. For example, we have a [ResNet example](https://github.com/microsoft/mup/tree/main/examples/ResNet) in our repo. <|||||>You're right, I should have said that the adaptations you mention seem very targeted toward Transformer (in particularly point 3 above).<|||||>Hi @sgugger, Like Greg said, only the third item is Transformer-specific (we should have noted it clearly). 
My concern wrt `accelerate` is more so that it seems agnostic to the model architecture in its current shape, whereas our technique requires knowing things like how many dimensions of a parameter tensor grow with width. I like the idea of having a converter function so we keep the model files as clean as possible. I'd also like to point out that muAdam is simply a wrapper on top of torch Adam, which manipulates the parameter group dictionary to explicitly adjust learning rates according to muP. Perhaps this explicit conversion can be a part of the converter function instead to remove the dependency on the `mup` package. @thegregyang What do you think?<|||||>After discussion with Edward, we think perhaps hosting custom model code on the Hub would be the best way to go. We have some questions about this: 1. Is there an API to initialize a model from scratch instead of from a checkpoint (in contrast to `AutoModel.from_pretrained`)? 2. Is there intellisense or other user-facing tools in VS Code or other IDEs to facilitate the usage of a model from the Hub? More generally, I'm just wondering what kind of user experience we are dealing with here.<|||||>You can create a randomly initialized model with `AutoModel.from_config`, with the config pulled with `AutoConfig.from_pretrained`: ```py from transformers import AutoConfig, AutoModel config = AutoConfig.from_pretrained(checkpoint_name) model = AutoModel.from_config(config) ``` As for the second point, not really.<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread. Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored.<|||||>Sorry still working on this!<|||||>Any update?<|||||>@sodabeta7 has been working on this. @sodabeta7 could you summarize your progress?
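As a concrete illustration of item 3 in the muP change list given in the PR body above (the only Transformer-specific change), here is the attention-scaling difference in isolation; this is a bare sketch on raw tensors, not code from the `mup` package or any Hugging Face model:

```python
import torch


def attention_scores(q: torch.Tensor, k: torch.Tensor, use_mup: bool = True) -> torch.Tensor:
    d = q.shape[-1]
    scale = d if use_mup else d ** 0.5  # muP scales the logits by 1/d instead of 1/sqrt(d)
    return torch.softmax(q @ k.transpose(-1, -2) / scale, dim=-1)


# Usage: q = k = torch.randn(2, 8, 64); attention_scores(q, k).shape == (2, 8, 8)
```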
transformers
16,156
closed
Add type hints OpenAIGPT
This PR adds type annotations for both the PyTorch and TensorFlow implementations of the OpenAIGPT model.
03-14-2022 21:13:07
03-14-2022 21:13:07
This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread. Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored.<|||||>Hi @TristanBilot, I'm sorry I didn't see this PR until now! It looks like those models have had type hints added by other PRs. Also, this PR is quite hard to review because of the rebase, which has made the list of "files changed" really large. Can you check the `OpenAIGPT` model files to see if we still need a PR for this, and if we do, can you make a fresh one based on the latest version of `main`?<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread. Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored.
transformers
16,155
closed
Configurable Relative Position Max. Distance
# What does this PR do? The relative attention in the T5 model can currently only be configured with the number of buckets; the maximum distance is fixed and cannot be changed. The fixed max distance works fine for the text modality, but for much longer sequence use cases it has to be adjusted. This pull request makes it possible to configure the max distance of the relative attention. <!-- Remove if not applicable --> Fixes # (issue) ## Before submitting - [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case). - [X] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section? - [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link to it if that's the case. - [ ] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/master/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/master/docs#writing-source-documentation). - [ ] Did you write any new necessary tests? ## Who can review? @patrickvonplaten @sgugger @patil-suraj @LysandreJik
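As a quick sketch of how the new option would be used (the attribute name `relative_attention_max_distance` and the values below are illustrative assumptions based on this PR's description; check them against the merged config):

```python
# Illustrative only: attribute names/values are assumptions, not recommendations.
from transformers import T5Config, T5ForConditionalGeneration

config = T5Config(
    relative_attention_num_buckets=32,    # previously the only relative-attention knob
    relative_attention_max_distance=512,  # new: widen the clipping distance for long sequences
)
model = T5ForConditionalGeneration(config)
```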
03-14-2022 20:55:53
03-14-2022 20:55:53
_The documentation is not available anymore as the PR was closed or merged._
transformers
16,154
closed
Shift responsibilities a bit for issues
# What does this PR do? <!-- Congratulations! You've made it this far! You're not quite done yet though. Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution. Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change. Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost. --> <!-- Remove if not applicable --> Discussed with @patil-suraj offline. ## Before submitting - [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case). - [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section? - [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link to it if that's the case. - [ ] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/master/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/master/docs#writing-source-documentation). - [ ] Did you write any new necessary tests? ## Who can review? Anyone in the community is free to review the PR once the tests have passed. Feel free to tag members/contributors who may be interested in your PR. <!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @ If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**. Please tag fewer than 3 people. Models: - albert, bert, xlm: @LysandreJik - blenderbot, bart, marian, pegasus, encoderdecoder, t5: @patrickvonplaten, @patil-suraj - longformer, reformer, transfoxl, xlnet: @patrickvonplaten - fsmt: @stas00 - funnel: @sgugger - gpt2: @patrickvonplaten, @LysandreJik - rag: @patrickvonplaten, @lhoestq - tensorflow: @LysandreJik Library: - benchmarks: @patrickvonplaten - deepspeed: @stas00 - ray/raytune: @richardliaw, @amogkam - text generation: @patrickvonplaten - tokenizers: @n1t0, @LysandreJik - trainer: @sgugger - pipelines: @LysandreJik Documentation: @sgugger HF projects: - datasets: [different repo](https://github.com/huggingface/datasets) - rust tokenizers: [different repo](https://github.com/huggingface/tokenizers) Examples: - maintained examples (not research project or legacy): @sgugger, @patil-suraj - research_projects/bert-loses-patience: @JetRunner - research_projects/distillation: @VictorSanh -->
03-14-2022 20:39:33
03-14-2022 20:39:33
_The documentation is not available anymore as the PR was closed or merged._
transformers
16,153
closed
Change assertion to warning when passing past_key_value to T5 encoder
# What does this PR do? <!-- Congratulations! You've made it this far! You're not quite done yet though. Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution. Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change. Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost. --> <!-- Remove if not applicable --> Fixes # (issue) ## Before submitting - [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case). - [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section? - [x] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link to it if that's the case. https://github.com/huggingface/transformers/issues/15591 - [ ] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/master/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/master/docs#writing-source-documentation). N/A - [ ] Did you write any new necessary tests? N/A ## Who can review? Anyone in the community is free to review the PR once the tests have passed. Feel free to tag members/contributors who may be interested in your PR. <!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @ If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**. Please tag fewer than 3 people. Models: - albert, bert, xlm: @LysandreJik - blenderbot, bart, marian, pegasus, encoderdecoder, t5: @patrickvonplaten, @patil-suraj - longformer, reformer, transfoxl, xlnet: @patrickvonplaten - fsmt: @stas00 - funnel: @sgugger - gpt2: @patrickvonplaten, @LysandreJik - rag: @patrickvonplaten, @lhoestq - tensorflow: @LysandreJik Library: - benchmarks: @patrickvonplaten - deepspeed: @stas00 - ray/raytune: @richardliaw, @amogkam - text generation: @patrickvonplaten - tokenizers: @n1t0, @LysandreJik - trainer: @sgugger - pipelines: @LysandreJik Documentation: @sgugger HF projects: - datasets: [different repo](https://github.com/huggingface/datasets) - rust tokenizers: [different repo](https://github.com/huggingface/tokenizers) Examples: - maintained examples (not research project or legacy): @sgugger, @patil-suraj - research_projects/bert-loses-patience: @JetRunner - research_projects/distillation: @VictorSanh -->
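A minimal sketch of the kind of change the title describes, with made-up names (the real edit lives in the T5 attention layer): instead of failing hard when `past_key_value` reaches the encoder, log a warning and ignore it.

```python
# Sketch only: function and message names are illustrative, not the actual T5 code.
import logging

logger = logging.getLogger(__name__)

def handle_past_key_value(past_key_value, is_decoder: bool):
    """Warn (instead of asserting) when past_key_value is passed to the encoder."""
    if past_key_value is not None and not is_decoder:
        # previously: assert past_key_value is None, "past_key_value is only supported by decoders"
        logger.warning(
            "`past_key_value` was passed to the encoder; it is only used by decoder "
            "layers and will be ignored."
        )
        past_key_value = None
    return past_key_value
```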
03-14-2022 20:01:24
03-14-2022 20:01:24
_The documentation is not available anymore as the PR was closed or merged._<|||||>@patrickvonplaten <|||||>I'm fine with this change<|||||>Thanks. I don't think I can merge -- please merge whenever it makes sense.<|||||>@patil-suraj feel free to merge if it's OK for you
transformers
16,152
closed
clearer model variable naming: pegasus
# What does this PR do? clearer model variable naming: pegasus ```bash ============================================================================= test session starts ============================================================================= platform darwin -- Python 3.8.9, pytest-6.2.5, py-1.11.0, pluggy-1.0.0 -- /Users/kamal.raj/.virtualenvs/transformers/bin/python cachedir: .pytest_cache hypothesis profile 'default' -> database=DirectoryBasedExampleDatabase('/Users/kamal.raj/Documents/GitHub/transformers/.hypothesis/examples') rootdir: /Users/kamal.raj/Documents/GitHub/transformers, configfile: setup.cfg plugins: xdist-2.4.0, timeout-2.0.1, dash-2.0.0, hypothesis-6.31.3, forked-1.3.0, typeguard-2.13.2 collected 31 items tests/pegasus/test_modeling_tf_pegasus.py::TFPegasusModelTest::test_attention_outputs <- tests/test_modeling_tf_common.py PASSED [ 3%] tests/pegasus/test_modeling_tf_pegasus.py::TFPegasusModelTest::test_compile_tf_model PASSED [ 6%] tests/pegasus/test_modeling_tf_pegasus.py::TFPegasusModelTest::test_config PASSED [ 9%] tests/pegasus/test_modeling_tf_pegasus.py::TFPegasusModelTest::test_decoder_model_past_large_inputs PASSED [ 12%] tests/pegasus/test_modeling_tf_pegasus.py::TFPegasusModelTest::test_determinism <- tests/test_modeling_tf_common.py PASSED [ 16%] tests/pegasus/test_modeling_tf_pegasus.py::TFPegasusModelTest::test_forward_signature <- tests/test_modeling_tf_common.py PASSED [ 19%] tests/pegasus/test_modeling_tf_pegasus.py::TFPegasusModelTest::test_generate_with_headmasking <- tests/test_modeling_tf_common.py PASSED [ 22%] tests/pegasus/test_modeling_tf_pegasus.py::TFPegasusModelTest::test_headmasking <- tests/test_modeling_tf_common.py PASSED [ 25%] tests/pegasus/test_modeling_tf_pegasus.py::TFPegasusModelTest::test_hidden_states_output <- tests/test_modeling_tf_common.py PASSED [ 29%] tests/pegasus/test_modeling_tf_pegasus.py::TFPegasusModelTest::test_initialization <- tests/test_modeling_tf_common.py PASSED [ 32%] tests/pegasus/test_modeling_tf_pegasus.py::TFPegasusModelTest::test_inputs_embeds <- tests/test_modeling_tf_common.py PASSED [ 35%] tests/pegasus/test_modeling_tf_pegasus.py::TFPegasusModelTest::test_keras_save_load <- tests/test_modeling_tf_common.py PASSED [ 38%] tests/pegasus/test_modeling_tf_pegasus.py::TFPegasusModelTest::test_keyword_and_dict_args <- tests/test_modeling_tf_common.py PASSED [ 41%] tests/pegasus/test_modeling_tf_pegasus.py::TFPegasusModelTest::test_lm_head_model_beam_search_generate_dict_outputs <- tests/test_modeling_tf_common.py PASSED [ 45%] tests/pegasus/test_modeling_tf_pegasus.py::TFPegasusModelTest::test_lm_head_model_no_beam_search_generate_dict_outputs <- tests/test_modeling_tf_common.py PASSED [ 48%] tests/pegasus/test_modeling_tf_pegasus.py::TFPegasusModelTest::test_lm_head_model_random_beam_search_generate <- tests/test_modeling_tf_common.py PASSED [ 51%] tests/pegasus/test_modeling_tf_pegasus.py::TFPegasusModelTest::test_lm_head_model_random_no_beam_search_generate <- tests/test_modeling_tf_common.py PASSED [ 54%] tests/pegasus/test_modeling_tf_pegasus.py::TFPegasusModelTest::test_load_with_mismatched_shapes <- tests/test_modeling_tf_common.py PASSED [ 58%] tests/pegasus/test_modeling_tf_pegasus.py::TFPegasusModelTest::test_loss_computation <- tests/test_modeling_tf_common.py PASSED [ 61%] tests/pegasus/test_modeling_tf_pegasus.py::TFPegasusModelTest::test_model_common_attributes PASSED [ 64%] tests/pegasus/test_modeling_tf_pegasus.py::TFPegasusModelTest::test_model_main_input_name <- 
tests/test_modeling_tf_common.py PASSED [ 67%] tests/pegasus/test_modeling_tf_pegasus.py::TFPegasusModelTest::test_model_outputs_equivalence <- tests/test_modeling_tf_common.py PASSED [ 70%] tests/pegasus/test_modeling_tf_pegasus.py::TFPegasusModelTest::test_numpy_arrays_inputs <- tests/test_modeling_tf_common.py PASSED [ 74%] tests/pegasus/test_modeling_tf_pegasus.py::TFPegasusModelTest::test_onnx_compliancy <- tests/test_modeling_tf_common.py PASSED [ 77%] tests/pegasus/test_modeling_tf_pegasus.py::TFPegasusModelTest::test_onnx_runtime_optimize <- tests/test_modeling_tf_common.py PASSED [ 80%] tests/pegasus/test_modeling_tf_pegasus.py::TFPegasusModelTest::test_pt_tf_model_equivalence <- tests/test_modeling_tf_common.py SKIPPED (test is PT+TF test) [ 83%] tests/pegasus/test_modeling_tf_pegasus.py::TFPegasusModelTest::test_resize_token_embeddings PASSED [ 87%] tests/pegasus/test_modeling_tf_pegasus.py::TFPegasusModelTest::test_save_load <- tests/test_modeling_tf_common.py PASSED [ 90%] tests/pegasus/test_modeling_tf_pegasus.py::TFPegasusModelTest::test_save_load_config <- tests/test_modeling_tf_common.py PASSED [ 93%] tests/pegasus/test_modeling_tf_pegasus.py::TFPegasusModelTest::test_saved_model_creation PASSED [ 96%] tests/pegasus/test_modeling_tf_pegasus.py::TFPegasusIntegrationTests::test_batch_generation PASSED [100%] ============================================================================== warnings summary =============================================================================== ../../../.virtualenvs/transformers/lib/python3.8/site-packages/flatbuffers/compat.py:19 /Users/kamal.raj/.virtualenvs/transformers/lib/python3.8/site-packages/flatbuffers/compat.py:19: DeprecationWarning: the imp module is deprecated in favour of importlib; see the module's documentation for alternative uses import imp -- Docs: https://docs.pytest.org/en/stable/warnings.html ============================================================ 30 passed, 1 skipped, 1 warning in 287.07s (0:04:47) ============================================================= ``` <!-- Congratulations! You've made it this far! You're not quite done yet though. Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution. Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change. Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost. --> <!-- Remove if not applicable --> Fixes # (issue) ## Before submitting - [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case). - [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section? - [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link to it if that's the case. - [ ] Did you make sure to update the documentation with your changes? 
Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/master/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/master/docs#writing-source-documentation). - [ ] Did you write any new necessary tests? ## Who can review? Anyone in the community is free to review the PR once the tests have passed. Feel free to tag members/contributors who may be interested in your PR. <!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @ If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**. Please tag fewer than 3 people. Models: - albert, bert, xlm: @LysandreJik - blenderbot, bart, marian, pegasus, encoderdecoder, t5: @patrickvonplaten, @patil-suraj - longformer, reformer, transfoxl, xlnet: @patrickvonplaten - fsmt: @stas00 - funnel: @sgugger - gpt2: @patrickvonplaten, @LysandreJik - rag: @patrickvonplaten, @lhoestq - tensorflow: @LysandreJik Library: - benchmarks: @patrickvonplaten - deepspeed: @stas00 - ray/raytune: @richardliaw, @amogkam - text generation: @patrickvonplaten - tokenizers: @n1t0, @LysandreJik - trainer: @sgugger - pipelines: @LysandreJik Documentation: @sgugger HF projects: - datasets: [different repo](https://github.com/huggingface/datasets) - rust tokenizers: [different repo](https://github.com/huggingface/tokenizers) Examples: - maintained examples (not research project or legacy): @sgugger, @patil-suraj - research_projects/bert-loses-patience: @JetRunner - research_projects/distillation: @VictorSanh --> @gante
03-14-2022 19:40:54
03-14-2022 19:40:54
_The documentation is not available anymore as the PR was closed or merged._
transformers
16,151
closed
Add/type annotations/model vision
I added type annotations for the Beit, ViT and Deit models in PyTorch, as described in #16059 @Rocketknight1 This fixes PR #16092. Also, there is still the issue with `make fix-copies` for ViT_mae; maybe I should open an issue? For this PR, I just removed the type annotation on the config parameter that causes the problem
03-14-2022 19:36:25
03-14-2022 19:36:25
_The documentation is not available anymore as the PR was closed or merged._<|||||>This looks really good! Good job handling the more intricate `Tuple` annotations as well as the forward references - this is a lot bigger and more challenging than most of the other type hint PRs, but it looks like you did it well. I have one final small issue - types should only be `Optional` when `None` is an allowable option. It looked like there were some cases when `Optional[bool]` was used for boolean arguments to internal methods which have to be either `True` or `False` - these should just be `bool`. Can you double-check that? Of course, when `None` is allowable or the default option, then `Optional[bool]` is correct. I'm not 100% certain on this one (maybe `None` really is allowable in all of those cases!), so if you think all your annotations were correct, let me know.<|||||>Hi, I have kept `Optional[bool]` just to keep everything consistent. But I will take a look; if it can be changed to `bool`, I will update it and let you know.<|||||>@Rocketknight1 I have changed it to just `bool`; both cases work, but plain `bool` seems more correct and clean to me. When the default is `None`, the code reads `a if x is not None else b`, and with `False`/`True` it is just `a if x else b` <|||||>Sounds good, reviewing now!<|||||>@Rocketknight1 about `make fix-copies` not working for `ViT_mae` (config parameters), this is because it doesn't have `with ViT->VitMAE` at the end of the “Copied from …” comment; but I don't know if this annotation is really missing, or if it was supposed to be this way
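To spell out the convention being discussed, here is a small standalone illustration; the function names and the config default are made up and only stand in for the patterns mentioned above.

```python
from typing import Optional

CONFIG_OUTPUT_ATTENTIONS = False  # stands in for a value stored on the model config

def encode(output_attentions: Optional[bool] = None) -> bool:
    # `None` is meaningful here ("fall back to the config"), so `Optional[bool]` is right:
    # the code reads `a if x is not None else b`.
    return output_attentions if output_attentions is not None else CONFIG_OUTPUT_ATTENTIONS

def project(values: list, training: bool = False) -> list:
    # Callers always pass an explicit True/False, so plain `bool` is enough:
    # the code reads `a if x else b`.
    return [v * 0.9 for v in values] if training else list(values)
```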
transformers
16,150
closed
clearer model variable naming: xlnet
# What does this PR do? clearer model variable naming: xlnet ```bash platform darwin -- Python 3.8.9, pytest-6.2.5, py-1.11.0, pluggy-1.0.0 -- /Users/kamal.raj/.virtualenvs/transformers/bin/python cachedir: .pytest_cache hypothesis profile 'default' -> database=DirectoryBasedExampleDatabase('/Users/kamal.raj/Documents/GitHub/transformers/.hypothesis/examples') rootdir: /Users/kamal.raj/Documents/GitHub/transformers, configfile: setup.cfg plugins: xdist-2.4.0, timeout-2.0.1, dash-2.0.0, hypothesis-6.31.3, forked-1.3.0, typeguard-2.13.2 collected 36 items tests/xlnet/test_modeling_tf_xlnet.py::TFXLNetModelTest::test_attention_outputs <- tests/test_modeling_tf_common.py PASSED [ 2%] tests/xlnet/test_modeling_tf_xlnet.py::TFXLNetModelTest::test_compile_tf_model <- tests/test_modeling_tf_common.py PASSED [ 5%] tests/xlnet/test_modeling_tf_xlnet.py::TFXLNetModelTest::test_config PASSED [ 8%] tests/xlnet/test_modeling_tf_xlnet.py::TFXLNetModelTest::test_determinism <- tests/test_modeling_tf_common.py PASSED [ 11%] tests/xlnet/test_modeling_tf_xlnet.py::TFXLNetModelTest::test_forward_signature <- tests/test_modeling_tf_common.py PASSED [ 13%] tests/xlnet/test_modeling_tf_xlnet.py::TFXLNetModelTest::test_generate_with_headmasking <- tests/test_modeling_tf_common.py PASSED [ 16%] tests/xlnet/test_modeling_tf_xlnet.py::TFXLNetModelTest::test_headmasking <- tests/test_modeling_tf_common.py PASSED [ 19%] tests/xlnet/test_modeling_tf_xlnet.py::TFXLNetModelTest::test_hidden_states_output <- tests/test_modeling_tf_common.py PASSED [ 22%] tests/xlnet/test_modeling_tf_xlnet.py::TFXLNetModelTest::test_initialization <- tests/test_modeling_tf_common.py PASSED [ 25%] tests/xlnet/test_modeling_tf_xlnet.py::TFXLNetModelTest::test_inputs_embeds <- tests/test_modeling_tf_common.py PASSED [ 27%] tests/xlnet/test_modeling_tf_xlnet.py::TFXLNetModelTest::test_keras_save_load <- tests/test_modeling_tf_common.py PASSED [ 30%] tests/xlnet/test_modeling_tf_xlnet.py::TFXLNetModelTest::test_keyword_and_dict_args <- tests/test_modeling_tf_common.py PASSED [ 33%] tests/xlnet/test_modeling_tf_xlnet.py::TFXLNetModelTest::test_lm_head_model_beam_search_generate_dict_outputs <- tests/test_modeling_tf_common.py PASSED [ 36%] tests/xlnet/test_modeling_tf_xlnet.py::TFXLNetModelTest::test_lm_head_model_no_beam_search_generate_dict_outputs <- tests/test_modeling_tf_common.py PASSED [ 38%] tests/xlnet/test_modeling_tf_xlnet.py::TFXLNetModelTest::test_lm_head_model_random_beam_search_generate <- tests/test_modeling_tf_common.py PASSED [ 41%] tests/xlnet/test_modeling_tf_xlnet.py::TFXLNetModelTest::test_lm_head_model_random_no_beam_search_generate <- tests/test_modeling_tf_common.py PASSED [ 44%] tests/xlnet/test_modeling_tf_xlnet.py::TFXLNetModelTest::test_load_with_mismatched_shapes <- tests/test_modeling_tf_common.py PASSED [ 47%] tests/xlnet/test_modeling_tf_xlnet.py::TFXLNetModelTest::test_loss_computation PASSED [ 50%] tests/xlnet/test_modeling_tf_xlnet.py::TFXLNetModelTest::test_model_common_attributes <- tests/test_modeling_tf_common.py PASSED [ 52%] tests/xlnet/test_modeling_tf_xlnet.py::TFXLNetModelTest::test_model_from_pretrained PASSED [ 55%] tests/xlnet/test_modeling_tf_xlnet.py::TFXLNetModelTest::test_model_main_input_name <- tests/test_modeling_tf_common.py PASSED [ 58%] tests/xlnet/test_modeling_tf_xlnet.py::TFXLNetModelTest::test_model_outputs_equivalence <- tests/test_modeling_tf_common.py PASSED [ 61%] tests/xlnet/test_modeling_tf_xlnet.py::TFXLNetModelTest::test_numpy_arrays_inputs <- tests/test_modeling_tf_common.py 
PASSED [ 63%] tests/xlnet/test_modeling_tf_xlnet.py::TFXLNetModelTest::test_onnx_compliancy <- tests/test_modeling_tf_common.py PASSED [ 66%] tests/xlnet/test_modeling_tf_xlnet.py::TFXLNetModelTest::test_onnx_runtime_optimize <- tests/test_modeling_tf_common.py PASSED [ 69%] tests/xlnet/test_modeling_tf_xlnet.py::TFXLNetModelTest::test_pt_tf_model_equivalence <- tests/test_modeling_tf_common.py SKIPPED (test is PT+TF test) [ 72%] tests/xlnet/test_modeling_tf_xlnet.py::TFXLNetModelTest::test_resize_token_embeddings <- tests/test_modeling_tf_common.py PASSED [ 75%] tests/xlnet/test_modeling_tf_xlnet.py::TFXLNetModelTest::test_save_load <- tests/test_modeling_tf_common.py PASSED [ 77%] tests/xlnet/test_modeling_tf_xlnet.py::TFXLNetModelTest::test_save_load_config <- tests/test_modeling_tf_common.py PASSED [ 80%] tests/xlnet/test_modeling_tf_xlnet.py::TFXLNetModelTest::test_xlnet_base_model PASSED [ 83%] tests/xlnet/test_modeling_tf_xlnet.py::TFXLNetModelTest::test_xlnet_for_multiple_choice PASSED [ 86%] tests/xlnet/test_modeling_tf_xlnet.py::TFXLNetModelTest::test_xlnet_lm_head PASSED [ 88%] tests/xlnet/test_modeling_tf_xlnet.py::TFXLNetModelTest::test_xlnet_qa PASSED [ 91%] tests/xlnet/test_modeling_tf_xlnet.py::TFXLNetModelTest::test_xlnet_sequence_classif PASSED [ 94%] tests/xlnet/test_modeling_tf_xlnet.py::TFXLNetModelTest::test_xlnet_token_classification PASSED [ 97%] tests/xlnet/test_modeling_tf_xlnet.py::TFXLNetModelLanguageGenerationTest::test_lm_generate_xlnet_base_cased PASSED [100%] ============================================================================== warnings summary =============================================================================== ../../../.virtualenvs/transformers/lib/python3.8/site-packages/flatbuffers/compat.py:19 /Users/kamal.raj/.virtualenvs/transformers/lib/python3.8/site-packages/flatbuffers/compat.py:19: DeprecationWarning: the imp module is deprecated in favour of importlib; see the module's documentation for alternative uses import imp -- Docs: https://docs.pytest.org/en/stable/warnings.html ============================================================ 35 passed, 1 skipped, 1 warning in 126.99s (0:02:06) ============================================================= ``` <!-- Congratulations! You've made it this far! You're not quite done yet though. Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution. Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change. Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost. --> <!-- Remove if not applicable --> Fixes # (issue) ## Before submitting - [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case). - [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section? - [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? 
Please add a link to it if that's the case. - [ ] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/master/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/master/docs#writing-source-documentation). - [ ] Did you write any new necessary tests? ## Who can review? Anyone in the community is free to review the PR once the tests have passed. Feel free to tag members/contributors who may be interested in your PR. <!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @ If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**. Please tag fewer than 3 people. Models: - albert, bert, xlm: @LysandreJik - blenderbot, bart, marian, pegasus, encoderdecoder, t5: @patrickvonplaten, @patil-suraj - longformer, reformer, transfoxl, xlnet: @patrickvonplaten - fsmt: @stas00 - funnel: @sgugger - gpt2: @patrickvonplaten, @LysandreJik - rag: @patrickvonplaten, @lhoestq - tensorflow: @LysandreJik Library: - benchmarks: @patrickvonplaten - deepspeed: @stas00 - ray/raytune: @richardliaw, @amogkam - text generation: @patrickvonplaten - tokenizers: @n1t0, @LysandreJik - trainer: @sgugger - pipelines: @LysandreJik Documentation: @sgugger HF projects: - datasets: [different repo](https://github.com/huggingface/datasets) - rust tokenizers: [different repo](https://github.com/huggingface/tokenizers) Examples: - maintained examples (not research project or legacy): @sgugger, @patil-suraj - research_projects/bert-loses-patience: @JetRunner - research_projects/distillation: @VictorSanh -->
03-14-2022 19:14:26
03-14-2022 19:14:26
_The documentation is not available anymore as the PR was closed or merged._<|||||>Tagging @gante for a decorator PR!
transformers
16,149
closed
Translation from english to spanish of file pipeline_tutorial.mdx
## Translation from English to Spanish of file pipeline_tutorial.mdx Fixes #15947 @omarespejel I added the translation of the `pipeline_tutorial.mdx` file ## Before submitting - [x] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case). - [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section? - [x] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link to it if that's the case.
03-14-2022 18:58:58
03-14-2022 18:58:58
_The documentation is not available anymore as the PR was closed or merged._<|||||>Thanks @FernandoLpz! Will review it :) <|||||>You're welcome! I just fixed the typos. @omarespejel <|||||>Hi @sgugger! It seems to me that the failed checks are due to something else and not this doc. WDYT?
transformers
16,148
closed
[Flax] improve large model init and loading
# What does this PR do? As discussed in [#15766](https://github.com/huggingface/transformers/issues/15766#issuecomment-1047912290) this PR adds the `_do_init` argument to handle large model loading in flax. By default `_do_init=True` and the API stays the same. 1. `__init__`: - When `_do_init=False` is passed to `__init__`, the params are not initialised and the user should manually call the `model.init_weights` method to do the initialisation. - When `_do_init` is `False`, accessing `model.params` is not allowed and params must always be kept outside of the model - This PR also adds the `params_shape_tree` shape property to `FlaxPreTrainedModel`, which is a PyTree with shape and dtype information for each param. This is what the API looks like: ```python config = BertConfig() model = FlaxBertModel(config, _do_init=False) # accessing model.params will raise a ValueError model.params # to init the params params = model.init_weights(model.key, model.input_shape) # setting model.params will also raise a ValueError model.params = params # model.init_weights can be used with `jit` and `pjit` to init the params on CPU or in a sharded way params = jax.jit(model.init_weights, static_argnums=1, backend="cpu")(model.key, model.input_shape) ``` 2. `from_pretrained`: - When `_do_init=False` is passed to `from_pretrained`, the model won't be randomly initialised, only the weights will be loaded and returned along with the model instance. - when `_do_init=False`, params will always be loaded on CPU - If `_do_init=False` and some keys are missing, the user should call `init_weights` and pass it the loaded params. `init_weights` will then take care of adding the missing keys. - as described above, getting and setting `model.params` will raise an error. ```python3 model, params = FlaxBertModel.from_pretrained("...", _do_init=False) # if keys are missing params = model.init_weights(model.key, model.input_shape, params) ``` cc @borisdayma @sanchit-gandhi for info. Fixes #15766
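To complement the snippets above, here is a hedged sketch of how the new `params_shape_tree` property could be used to inspect a model without allocating any weights; it assumes the leaves behave like `jax.ShapeDtypeStruct` and that the attribute names match the PR description, so verify against the merged implementation.

```python
# Sketch only: relies on the `_do_init`, `params_shape_tree`, `key` and `input_shape`
# names described in this PR.
import math
import jax
from transformers import BertConfig, FlaxBertModel

config = BertConfig()
model = FlaxBertModel(config, _do_init=False)

# Count parameters from shape metadata alone, with no weights materialized yet.
num_params = sum(math.prod(leaf.shape) for leaf in jax.tree_util.tree_leaves(model.params_shape_tree))
print(f"{num_params / 1e6:.1f}M parameters")

# Materialize the weights on CPU only when they are actually needed.
params = jax.jit(model.init_weights, static_argnums=1, backend="cpu")(model.key, model.input_shape)
```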
03-14-2022 18:29:15
03-14-2022 18:29:15
_The documentation is not available anymore as the PR was closed or merged._<|||||>Hi @patil-suraj, Thanks a lot for fixing this issue. Could you please give us some estimation of when this pull request will be ready?<|||||>Hey @agemagician ! Thanks. I just need to run and fix a few tests and then it should be good to merge by tomorrow.<|||||>> Hey @agemagician ! Thanks. I just need to run and fix a few tests and then it should be good to merge by tomorrow. Perfect, thanks a lot @patil-suraj for your effort. It would be awesome if you could update one of the flax language model examples to help us understand the changes that are needed after this merge: https://github.com/huggingface/transformers/tree/main/examples/flax/language-modeling<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread. Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored.<|||||>Hi @patil-suraj, any updates regarding this merge?<|||||>@patil-suraj - think we can merge this, no?<|||||>Just need to fix the template tests and then we can merge it.