repo | number | state | title | body | created_at | closed_at | comments
---|---|---|---|---|---|---|---|
transformers | 16,147 | closed | update albert with tf decorator | # What does this PR do?
Unpacks TF model inputs through a decorator, improving code clarity for the ALBERT model. To be read together with issue #16051, which holds the description of the problem, the proposed solution, and the future plan.
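For readers unfamiliar with the pattern, here is a minimal, self-contained sketch of what "unpacking inputs through a decorator" means. The decorator and toy layer below are illustrative stand-ins, not the code added by this PR:
```python
import functools

def unpack_inputs(call):
    # Toy stand-in: normalize whatever the caller passed into plain keyword arguments
    # before the wrapped `call` runs, so the model body stays free of boilerplate.
    @functools.wraps(call)
    def wrapper(self, inputs=None, **kwargs):
        if isinstance(inputs, dict):
            kwargs = {**inputs, **kwargs}
        elif inputs is not None:
            kwargs.setdefault("input_ids", inputs)
        return call(self, **kwargs)
    return wrapper

class ToyAlbertLayer:
    @unpack_inputs
    def call(self, input_ids=None, attention_mask=None, training=False):
        # The body can now rely on explicit keyword arguments only.
        return input_ids, attention_mask

layer = ToyAlbertLayer()
print(layer.call({"input_ids": [1, 2, 3], "attention_mask": [1, 1, 1]}))
```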
## Who can review? @gante | 03-14-2022 17:49:07 | 03-14-2022 17:49:07 | _The documentation is not available anymore as the PR was closed or merged._<|||||>Report for the pytest run
```
RUN_SLOW=1 pytest -vv tests/albert/test_modeling_tf_albert.py
tests/albert/test_modeling_tf_albert.py::TFAlbertModelTest::test_albert_model PASSED [ 2%]
tests/albert/test_modeling_tf_albert.py::TFAlbertModelTest::test_attention_outputs <- tests/test_modeling_tf_common.py PASSED [ 5%]
tests/albert/test_modeling_tf_albert.py::TFAlbertModelTest::test_compile_tf_model <- tests/test_modeling_tf_common.py PASSED [ 8%]
tests/albert/test_modeling_tf_albert.py::TFAlbertModelTest::test_config PASSED [ 11%]
tests/albert/test_modeling_tf_albert.py::TFAlbertModelTest::test_determinism <- tests/test_modeling_tf_common.py PASSED [ 13%]
tests/albert/test_modeling_tf_albert.py::TFAlbertModelTest::test_for_masked_lm PASSED [ 16%]
tests/albert/test_modeling_tf_albert.py::TFAlbertModelTest::test_for_multiple_choice PASSED [ 19%]
tests/albert/test_modeling_tf_albert.py::TFAlbertModelTest::test_for_pretraining PASSED [ 22%]
tests/albert/test_modeling_tf_albert.py::TFAlbertModelTest::test_for_question_answering PASSED [ 25%]
tests/albert/test_modeling_tf_albert.py::TFAlbertModelTest::test_for_sequence_classification PASSED [ 27%]
tests/albert/test_modeling_tf_albert.py::TFAlbertModelTest::test_forward_signature <- tests/test_modeling_tf_common.py PASSED [ 30%]
tests/albert/test_modeling_tf_albert.py::TFAlbertModelTest::test_generate_with_headmasking <- tests/test_modeling_tf_common.py PASSED [ 33%]
tests/albert/test_modeling_tf_albert.py::TFAlbertModelTest::test_headmasking <- tests/test_modeling_tf_common.py PASSED [ 36%]
tests/albert/test_modeling_tf_albert.py::TFAlbertModelTest::test_hidden_states_output <- tests/test_modeling_tf_common.py PASSED [ 38%]
tests/albert/test_modeling_tf_albert.py::TFAlbertModelTest::test_initialization <- tests/test_modeling_tf_common.py PASSED [ 41%]
tests/albert/test_modeling_tf_albert.py::TFAlbertModelTest::test_inputs_embeds <- tests/test_modeling_tf_common.py PASSED [ 44%]
tests/albert/test_modeling_tf_albert.py::TFAlbertModelTest::test_keras_save_load <- tests/test_modeling_tf_common.py PASSED [ 47%]
tests/albert/test_modeling_tf_albert.py::TFAlbertModelTest::test_keyword_and_dict_args <- tests/test_modeling_tf_common.py PASSED [ 50%]
tests/albert/test_modeling_tf_albert.py::TFAlbertModelTest::test_lm_head_model_beam_search_generate_dict_outputs <- tests/test_modeling_tf_common.py PASSED [ 52%]
tests/albert/test_modeling_tf_albert.py::TFAlbertModelTest::test_lm_head_model_no_beam_search_generate_dict_outputs <- tests/test_modeling_tf_common.py PASSED [ 55%]
tests/albert/test_modeling_tf_albert.py::TFAlbertModelTest::test_lm_head_model_random_beam_search_generate <- tests/test_modeling_tf_common.py PASSED [ 58%]
tests/albert/test_modeling_tf_albert.py::TFAlbertModelTest::test_lm_head_model_random_no_beam_search_generate <- tests/test_modeling_tf_common.py PASSED [ 61%]
tests/albert/test_modeling_tf_albert.py::TFAlbertModelTest::test_load_with_mismatched_shapes <- tests/test_modeling_tf_common.py PASSED [ 63%]
tests/albert/test_modeling_tf_albert.py::TFAlbertModelTest::test_loss_computation <- tests/test_modeling_tf_common.py PASSED [ 66%]
tests/albert/test_modeling_tf_albert.py::TFAlbertModelTest::test_model_common_attributes PASSED [ 69%]
tests/albert/test_modeling_tf_albert.py::TFAlbertModelTest::test_model_from_pretrained PASSED [ 72%]
tests/albert/test_modeling_tf_albert.py::TFAlbertModelTest::test_model_main_input_name <- tests/test_modeling_tf_common.py PASSED [ 75%]
tests/albert/test_modeling_tf_albert.py::TFAlbertModelTest::test_model_outputs_equivalence <- tests/test_modeling_tf_common.py PASSED [ 77%]
tests/albert/test_modeling_tf_albert.py::TFAlbertModelTest::test_numpy_arrays_inputs <- tests/test_modeling_tf_common.py PASSED [ 80%]
tests/albert/test_modeling_tf_albert.py::TFAlbertModelTest::test_onnx_compliancy <- tests/test_modeling_tf_common.py PASSED [ 83%]
tests/albert/test_modeling_tf_albert.py::TFAlbertModelTest::test_onnx_runtime_optimize <- tests/test_modeling_tf_common.py PASSED [ 86%]
tests/albert/test_modeling_tf_albert.py::TFAlbertModelTest::test_pt_tf_model_equivalence <- tests/test_modeling_tf_common.py SKIPPED (test is PT+TF test) [ 88%]
tests/albert/test_modeling_tf_albert.py::TFAlbertModelTest::test_resize_token_embeddings <- tests/test_modeling_tf_common.py PASSED [ 91%]
tests/albert/test_modeling_tf_albert.py::TFAlbertModelTest::test_save_load <- tests/test_modeling_tf_common.py PASSED [ 94%]
tests/albert/test_modeling_tf_albert.py::TFAlbertModelTest::test_save_load_config <- tests/test_modeling_tf_common.py PASSED [ 97%]
tests/albert/test_modeling_tf_albert.py::TFAlbertModelIntegrationTest::test_inference_masked_lm PASSED [100%]
```<|||||>Oh -- I noticed that another user (@tanmoyio) has claimed `albert` on the main thread :( Next time @infinite-Joy, please double-check in the original issue, as @tanmoyio might have spent some time here.
@tanmoyio, apologies for the clash, would you like to contribute through another model? :)<|||||>okay, no issues 🤗<|||||>> Oh -- I noticed that another user (@tanmoyio) has claimed `albert` on the main thread :( Next time @infinite-Joy, please double-check in the original issue, as @tanmoyio might have spent some time here.
>
> @tanmoyio, apologies for the clash, would you like to contribute through another model? :)
sure. will do that. apologies @tanmoyio . |
transformers | 16,146 | closed | clearer model variable naming: Deberta | # What does this PR do?
clearer model variable naming: Deberta
<!--
Congratulations! You've made it this far! You're not quite done yet though.
Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution.
Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change.
Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost.
-->
<!-- Remove if not applicable -->
Fixes # (issue)
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/master/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/master/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
<!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @
If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**.
Please tag fewer than 3 people.
Models:
- albert, bert, xlm: @LysandreJik
- blenderbot, bart, marian, pegasus, encoderdecoder, t5: @patrickvonplaten, @patil-suraj
- longformer, reformer, transfoxl, xlnet: @patrickvonplaten
- fsmt: @stas00
- funnel: @sgugger
- gpt2: @patrickvonplaten, @LysandreJik
- rag: @patrickvonplaten, @lhoestq
- tensorflow: @LysandreJik
Library:
- benchmarks: @patrickvonplaten
- deepspeed: @stas00
- ray/raytune: @richardliaw, @amogkam
- text generation: @patrickvonplaten
- tokenizers: @n1t0, @LysandreJik
- trainer: @sgugger
- pipelines: @LysandreJik
Documentation: @sgugger
HF projects:
- datasets: [different repo](https://github.com/huggingface/datasets)
- rust tokenizers: [different repo](https://github.com/huggingface/tokenizers)
Examples:
- maintained examples (not research project or legacy): @sgugger, @patil-suraj
- research_projects/bert-loses-patience: @JetRunner
- research_projects/distillation: @VictorSanh
-->
@gante | 03-14-2022 17:46:12 | 03-14-2022 17:46:12 | _The documentation is not available anymore as the PR was closed or merged._<|||||>Deberta-v1
```bash
============================================================================= test session starts =============================================================================
platform darwin -- Python 3.8.9, pytest-6.2.5, py-1.11.0, pluggy-1.0.0 -- /Users/kamal.raj/.virtualenvs/transformers/bin/python
cachedir: .pytest_cache
hypothesis profile 'default' -> database=DirectoryBasedExampleDatabase('/Users/kamal.raj/Documents/GitHub/transformers/.hypothesis/examples')
rootdir: /Users/kamal.raj/Documents/GitHub/transformers, configfile: setup.cfg
plugins: xdist-2.4.0, timeout-2.0.1, dash-2.0.0, hypothesis-6.31.3, forked-1.3.0, typeguard-2.13.2
collected 36 items
tests/deberta/test_modeling_tf_deberta.py::TFDebertaModelTest::test_attention_outputs <- tests/test_modeling_tf_common.py PASSED [ 2%]
tests/deberta/test_modeling_tf_deberta.py::TFDebertaModelTest::test_compile_tf_model <- tests/test_modeling_tf_common.py PASSED [ 5%]
tests/deberta/test_modeling_tf_deberta.py::TFDebertaModelTest::test_config PASSED [ 8%]
tests/deberta/test_modeling_tf_deberta.py::TFDebertaModelTest::test_determinism <- tests/test_modeling_tf_common.py PASSED [ 11%]
tests/deberta/test_modeling_tf_deberta.py::TFDebertaModelTest::test_for_masked_lm PASSED [ 13%]
tests/deberta/test_modeling_tf_deberta.py::TFDebertaModelTest::test_for_question_answering PASSED [ 16%]
tests/deberta/test_modeling_tf_deberta.py::TFDebertaModelTest::test_for_sequence_classification PASSED [ 19%]
tests/deberta/test_modeling_tf_deberta.py::TFDebertaModelTest::test_for_token_classification PASSED [ 22%]
tests/deberta/test_modeling_tf_deberta.py::TFDebertaModelTest::test_forward_signature <- tests/test_modeling_tf_common.py PASSED [ 25%]
tests/deberta/test_modeling_tf_deberta.py::TFDebertaModelTest::test_generate_with_headmasking <- tests/test_modeling_tf_common.py PASSED [ 27%]
tests/deberta/test_modeling_tf_deberta.py::TFDebertaModelTest::test_headmasking <- tests/test_modeling_tf_common.py PASSED [ 30%]
tests/deberta/test_modeling_tf_deberta.py::TFDebertaModelTest::test_hidden_states_output <- tests/test_modeling_tf_common.py PASSED [ 33%]
tests/deberta/test_modeling_tf_deberta.py::TFDebertaModelTest::test_initialization <- tests/test_modeling_tf_common.py PASSED [ 36%]
tests/deberta/test_modeling_tf_deberta.py::TFDebertaModelTest::test_inputs_embeds <- tests/test_modeling_tf_common.py PASSED [ 38%]
tests/deberta/test_modeling_tf_deberta.py::TFDebertaModelTest::test_keras_save_load <- tests/test_modeling_tf_common.py PASSED [ 41%]
tests/deberta/test_modeling_tf_deberta.py::TFDebertaModelTest::test_keyword_and_dict_args <- tests/test_modeling_tf_common.py PASSED [ 44%]
tests/deberta/test_modeling_tf_deberta.py::TFDebertaModelTest::test_lm_head_model_beam_search_generate_dict_outputs <- tests/test_modeling_tf_common.py PASSED [ 47%]
tests/deberta/test_modeling_tf_deberta.py::TFDebertaModelTest::test_lm_head_model_no_beam_search_generate_dict_outputs <- tests/test_modeling_tf_common.py PASSED [ 50%]
tests/deberta/test_modeling_tf_deberta.py::TFDebertaModelTest::test_lm_head_model_random_beam_search_generate <- tests/test_modeling_tf_common.py PASSED [ 52%]
tests/deberta/test_modeling_tf_deberta.py::TFDebertaModelTest::test_lm_head_model_random_no_beam_search_generate <- tests/test_modeling_tf_common.py PASSED [ 55%]
tests/deberta/test_modeling_tf_deberta.py::TFDebertaModelTest::test_load_with_mismatched_shapes <- tests/test_modeling_tf_common.py PASSED [ 58%]
tests/deberta/test_modeling_tf_deberta.py::TFDebertaModelTest::test_loss_computation <- tests/test_modeling_tf_common.py PASSED [ 61%]
tests/deberta/test_modeling_tf_deberta.py::TFDebertaModelTest::test_model PASSED [ 63%]
tests/deberta/test_modeling_tf_deberta.py::TFDebertaModelTest::test_model_common_attributes <- tests/test_modeling_tf_common.py PASSED [ 66%]
tests/deberta/test_modeling_tf_deberta.py::TFDebertaModelTest::test_model_from_pretrained PASSED [ 69%]
tests/deberta/test_modeling_tf_deberta.py::TFDebertaModelTest::test_model_main_input_name <- tests/test_modeling_tf_common.py PASSED [ 72%]
tests/deberta/test_modeling_tf_deberta.py::TFDebertaModelTest::test_model_outputs_equivalence <- tests/test_modeling_tf_common.py PASSED [ 75%]
tests/deberta/test_modeling_tf_deberta.py::TFDebertaModelTest::test_numpy_arrays_inputs <- tests/test_modeling_tf_common.py PASSED [ 77%]
tests/deberta/test_modeling_tf_deberta.py::TFDebertaModelTest::test_onnx_compliancy <- tests/test_modeling_tf_common.py PASSED [ 80%]
tests/deberta/test_modeling_tf_deberta.py::TFDebertaModelTest::test_onnx_runtime_optimize <- tests/test_modeling_tf_common.py PASSED [ 83%]
tests/deberta/test_modeling_tf_deberta.py::TFDebertaModelTest::test_pt_tf_model_equivalence <- tests/test_modeling_tf_common.py SKIPPED (test is PT+TF test) [ 86%]
tests/deberta/test_modeling_tf_deberta.py::TFDebertaModelTest::test_resize_token_embeddings <- tests/test_modeling_tf_common.py PASSED [ 88%]
tests/deberta/test_modeling_tf_deberta.py::TFDebertaModelTest::test_save_load <- tests/test_modeling_tf_common.py PASSED [ 91%]
tests/deberta/test_modeling_tf_deberta.py::TFDebertaModelTest::test_save_load_config <- tests/test_modeling_tf_common.py PASSED [ 94%]
tests/deberta/test_modeling_tf_deberta.py::TFDeBERTaModelIntegrationTest::test_inference_masked_lm SKIPPED (Model not available yet) [ 97%]
tests/deberta/test_modeling_tf_deberta.py::TFDeBERTaModelIntegrationTest::test_inference_no_head PASSED [100%]
============================================================================== warnings summary ===============================================================================
../../../.virtualenvs/transformers/lib/python3.8/site-packages/flatbuffers/compat.py:19
/Users/kamal.raj/.virtualenvs/transformers/lib/python3.8/site-packages/flatbuffers/compat.py:19: DeprecationWarning: the imp module is deprecated in favour of importlib; see the module's documentation for alternative uses
import imp
-- Docs: https://docs.pytest.org/en/stable/warnings.html
================================================================== 34 passed, 2 skipped, 1 warning in 36.42s ==================================================================
```
Deberta-v2
```bash
============================================================================= test session starts =============================================================================
platform darwin -- Python 3.8.9, pytest-6.2.5, py-1.11.0, pluggy-1.0.0 -- /Users/kamal.raj/.virtualenvs/transformers/bin/python
cachedir: .pytest_cache
hypothesis profile 'default' -> database=DirectoryBasedExampleDatabase('/Users/kamal.raj/Documents/GitHub/transformers/.hypothesis/examples')
rootdir: /Users/kamal.raj/Documents/GitHub/transformers, configfile: setup.cfg
plugins: xdist-2.4.0, timeout-2.0.1, dash-2.0.0, hypothesis-6.31.3, forked-1.3.0, typeguard-2.13.2
collected 51 items
tests/deberta_v2/test_modeling_deberta_v2.py::DebertaV2ModelTest::test_attention_outputs <- tests/test_modeling_common.py PASSED [ 1%]
tests/deberta_v2/test_modeling_deberta_v2.py::DebertaV2ModelTest::test_config PASSED [ 3%]
tests/deberta_v2/test_modeling_deberta_v2.py::DebertaV2ModelTest::test_correct_missing_keys <- tests/test_modeling_common.py PASSED [ 5%]
tests/deberta_v2/test_modeling_deberta_v2.py::DebertaV2ModelTest::test_deberta_model PASSED [ 7%]
tests/deberta_v2/test_modeling_deberta_v2.py::DebertaV2ModelTest::test_determinism <- tests/test_modeling_common.py PASSED [ 9%]
tests/deberta_v2/test_modeling_deberta_v2.py::DebertaV2ModelTest::test_equivalence_flax_to_pt <- tests/test_modeling_common.py SKIPPED (test is PT+FLAX test) [ 11%]
tests/deberta_v2/test_modeling_deberta_v2.py::DebertaV2ModelTest::test_equivalence_pt_to_flax <- tests/test_modeling_common.py SKIPPED (test is PT+FLAX test) [ 13%]
tests/deberta_v2/test_modeling_deberta_v2.py::DebertaV2ModelTest::test_feed_forward_chunking <- tests/test_modeling_common.py PASSED [ 15%]
tests/deberta_v2/test_modeling_deberta_v2.py::DebertaV2ModelTest::test_for_masked_lm PASSED [ 17%]
tests/deberta_v2/test_modeling_deberta_v2.py::DebertaV2ModelTest::test_for_question_answering PASSED [ 19%]
tests/deberta_v2/test_modeling_deberta_v2.py::DebertaV2ModelTest::test_for_sequence_classification PASSED [ 21%]
tests/deberta_v2/test_modeling_deberta_v2.py::DebertaV2ModelTest::test_for_token_classification PASSED [ 23%]
tests/deberta_v2/test_modeling_deberta_v2.py::DebertaV2ModelTest::test_forward_signature <- tests/test_modeling_common.py PASSED [ 25%]
tests/deberta_v2/test_modeling_deberta_v2.py::DebertaV2ModelTest::test_gradient_checkpointing_backward_compatibility <- tests/test_modeling_common.py PASSED [ 27%]
tests/deberta_v2/test_modeling_deberta_v2.py::DebertaV2ModelTest::test_gradient_checkpointing_enable_disable <- tests/test_modeling_common.py PASSED [ 29%]
tests/deberta_v2/test_modeling_deberta_v2.py::DebertaV2ModelTest::test_head_pruning <- tests/test_modeling_common.py PASSED [ 31%]
tests/deberta_v2/test_modeling_deberta_v2.py::DebertaV2ModelTest::test_head_pruning_integration <- tests/test_modeling_common.py PASSED [ 33%]
tests/deberta_v2/test_modeling_deberta_v2.py::DebertaV2ModelTest::test_head_pruning_save_load_from_config_init <- tests/test_modeling_common.py PASSED [ 35%]
tests/deberta_v2/test_modeling_deberta_v2.py::DebertaV2ModelTest::test_head_pruning_save_load_from_pretrained <- tests/test_modeling_common.py PASSED [ 37%]
tests/deberta_v2/test_modeling_deberta_v2.py::DebertaV2ModelTest::test_headmasking <- tests/test_modeling_common.py PASSED [ 39%]
tests/deberta_v2/test_modeling_deberta_v2.py::DebertaV2ModelTest::test_hidden_states_output <- tests/test_modeling_common.py PASSED [ 41%]
tests/deberta_v2/test_modeling_deberta_v2.py::DebertaV2ModelTest::test_initialization <- tests/test_modeling_common.py PASSED [ 43%]
tests/deberta_v2/test_modeling_deberta_v2.py::DebertaV2ModelTest::test_inputs_embeds <- tests/test_modeling_common.py PASSED [ 45%]
tests/deberta_v2/test_modeling_deberta_v2.py::DebertaV2ModelTest::test_load_with_mismatched_shapes <- tests/test_modeling_common.py PASSED [ 47%]
tests/deberta_v2/test_modeling_deberta_v2.py::DebertaV2ModelTest::test_model_common_attributes <- tests/test_modeling_common.py PASSED [ 49%]
tests/deberta_v2/test_modeling_deberta_v2.py::DebertaV2ModelTest::test_model_from_pretrained PASSED [ 50%]
tests/deberta_v2/test_modeling_deberta_v2.py::DebertaV2ModelTest::test_model_main_input_name <- tests/test_modeling_common.py PASSED [ 52%]
tests/deberta_v2/test_modeling_deberta_v2.py::DebertaV2ModelTest::test_model_outputs_equivalence <- tests/test_modeling_common.py PASSED [ 54%]
tests/deberta_v2/test_modeling_deberta_v2.py::DebertaV2ModelTest::test_model_parallel_beam_search <- tests/test_modeling_common.py SKIPPED (test requires multiple ...) [ 56%]
tests/deberta_v2/test_modeling_deberta_v2.py::DebertaV2ModelTest::test_model_parallel_equal_results <- tests/test_modeling_common.py SKIPPED (test requires multipl...) [ 58%]
tests/deberta_v2/test_modeling_deberta_v2.py::DebertaV2ModelTest::test_model_parallelization <- tests/test_modeling_common.py SKIPPED (test requires multiple GPUs) [ 60%]
tests/deberta_v2/test_modeling_deberta_v2.py::DebertaV2ModelTest::test_multi_gpu_data_parallel_forward <- tests/test_modeling_common.py SKIPPED (test requires mult...) [ 62%]
tests/deberta_v2/test_modeling_deberta_v2.py::DebertaV2ModelTest::test_problem_types <- tests/test_modeling_common.py PASSED [ 64%]
tests/deberta_v2/test_modeling_deberta_v2.py::DebertaV2ModelTest::test_pt_tf_model_equivalence <- tests/test_modeling_common.py SKIPPED (test is PT+TF test) [ 66%]
tests/deberta_v2/test_modeling_deberta_v2.py::DebertaV2ModelTest::test_resize_embeddings_untied <- tests/test_modeling_common.py PASSED [ 68%]
tests/deberta_v2/test_modeling_deberta_v2.py::DebertaV2ModelTest::test_resize_position_vector_embeddings <- tests/test_modeling_common.py PASSED [ 70%]
tests/deberta_v2/test_modeling_deberta_v2.py::DebertaV2ModelTest::test_resize_tokens_embeddings <- tests/test_modeling_common.py PASSED [ 72%]
tests/deberta_v2/test_modeling_deberta_v2.py::DebertaV2ModelTest::test_retain_grad_hidden_states_attentions <- tests/test_modeling_common.py PASSED [ 74%]
tests/deberta_v2/test_modeling_deberta_v2.py::DebertaV2ModelTest::test_save_load <- tests/test_modeling_common.py PASSED [ 76%]
tests/deberta_v2/test_modeling_deberta_v2.py::DebertaV2ModelTest::test_save_load_fast_init_from_base <- tests/test_modeling_common.py PASSED [ 78%]
tests/deberta_v2/test_modeling_deberta_v2.py::DebertaV2ModelTest::test_save_load_fast_init_to_base <- tests/test_modeling_common.py PASSED [ 80%]
tests/deberta_v2/test_modeling_deberta_v2.py::DebertaV2ModelTest::test_save_load_keys_to_ignore_on_save <- tests/test_modeling_common.py PASSED [ 82%]
tests/deberta_v2/test_modeling_deberta_v2.py::DebertaV2ModelTest::test_tie_model_weights <- tests/test_modeling_common.py PASSED [ 84%]
tests/deberta_v2/test_modeling_deberta_v2.py::DebertaV2ModelTest::test_torch_fx <- tests/test_modeling_common.py PASSED [ 86%]
tests/deberta_v2/test_modeling_deberta_v2.py::DebertaV2ModelTest::test_torch_fx_output_loss <- tests/test_modeling_common.py PASSED [ 88%]
tests/deberta_v2/test_modeling_deberta_v2.py::DebertaV2ModelTest::test_torchscript_output_attentions <- tests/test_modeling_common.py PASSED [ 90%]
tests/deberta_v2/test_modeling_deberta_v2.py::DebertaV2ModelTest::test_torchscript_output_hidden_state <- tests/test_modeling_common.py PASSED [ 92%]
tests/deberta_v2/test_modeling_deberta_v2.py::DebertaV2ModelTest::test_training <- tests/test_modeling_common.py PASSED [ 94%]
tests/deberta_v2/test_modeling_deberta_v2.py::DebertaV2ModelTest::test_training_gradient_checkpointing <- tests/test_modeling_common.py PASSED [ 96%]
tests/deberta_v2/test_modeling_deberta_v2.py::DebertaV2ModelIntegrationTest::test_inference_masked_lm SKIPPED (Model not available yet) [ 98%]
tests/deberta_v2/test_modeling_deberta_v2.py::DebertaV2ModelIntegrationTest::test_inference_no_head PASSED [100%]
============================================================================== warnings summary ===============================================================================
../../../.virtualenvs/transformers/lib/python3.8/site-packages/flatbuffers/compat.py:19
/Users/kamal.raj/.virtualenvs/transformers/lib/python3.8/site-packages/flatbuffers/compat.py:19: DeprecationWarning: the imp module is deprecated in favour of importlib; see the module's documentation for alternative uses
import imp
tests/deberta_v2/test_modeling_deberta_v2.py::DebertaV2ModelTest::test_training_gradient_checkpointing
/Users/kamal.raj/.virtualenvs/transformers/lib/python3.8/site-packages/torch/autocast_mode.py:141: UserWarning: User provided device_type of 'cuda', but CUDA is not available. Disabling
warnings.warn('User provided device_type of \'cuda\', but CUDA is not available. Disabling')
tests/deberta_v2/test_modeling_deberta_v2.py::DebertaV2ModelIntegrationTest::test_inference_no_head
/Users/kamal.raj/Documents/GitHub/transformers/src/transformers/models/deberta_v2/modeling_deberta_v2.py:539: DeprecationWarning: `np.int` is a deprecated alias for the builtin `int`. To silence this warning, use `int` by itself. Doing this will not modify any behavior and is safe. When replacing `np.int`, you may wish to use e.g. `np.int64` or `np.int32` to specify the precision. If you wish to review your current use, check the release note link for additional information.
Deprecated in NumPy 1.20; for more details and guidance: https://numpy.org/devdocs/release/1.20.0-notes.html#deprecations
bucket_pos = np.where(abs_pos <= mid, relative_pos, log_pos * sign).astype(np.int)
-- Docs: https://docs.pytest.org/en/stable/warnings.html
================================================================= 43 passed, 8 skipped, 3 warnings in 46.97s ==================================================================
```
@gante |
transformers | 16,145 | closed | clearer model variable naming: Tapas | # What does this PR do?
clearer model variable naming: Tapas
<!--
Congratulations! You've made it this far! You're not quite done yet though.
Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution.
Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change.
Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost.
-->
<!-- Remove if not applicable -->
Fixes # (issue)
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/master/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/master/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
<!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @
If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**.
Please tag fewer than 3 people.
Models:
- albert, bert, xlm: @LysandreJik
- blenderbot, bart, marian, pegasus, encoderdecoder, t5: @patrickvonplaten, @patil-suraj
- longformer, reformer, transfoxl, xlnet: @patrickvonplaten
- fsmt: @stas00
- funnel: @sgugger
- gpt2: @patrickvonplaten, @LysandreJik
- rag: @patrickvonplaten, @lhoestq
- tensorflow: @LysandreJik
Library:
- benchmarks: @patrickvonplaten
- deepspeed: @stas00
- ray/raytune: @richardliaw, @amogkam
- text generation: @patrickvonplaten
- tokenizers: @n1t0, @LysandreJik
- trainer: @sgugger
- pipelines: @LysandreJik
Documentation: @sgugger
HF projects:
- datasets: [different repo](https://github.com/huggingface/datasets)
- rust tokenizers: [different repo](https://github.com/huggingface/tokenizers)
Examples:
- maintained examples (not research project or legacy): @sgugger, @patil-suraj
- research_projects/bert-loses-patience: @JetRunner
- research_projects/distillation: @VictorSanh
-->
@gante | 03-14-2022 17:28:42 | 03-14-2022 17:28:42 | _The documentation is not available anymore as the PR was closed or merged._<|||||>```bash
============================================================================= test session starts =============================================================================
platform darwin -- Python 3.8.9, pytest-6.2.5, py-1.11.0, pluggy-1.0.0 -- /Users/kamal.raj/.virtualenvs/transformers/bin/python
cachedir: .pytest_cache
hypothesis profile 'default' -> database=DirectoryBasedExampleDatabase('/Users/kamal.raj/Documents/GitHub/transformers/.hypothesis/examples')
rootdir: /Users/kamal.raj/Documents/GitHub/transformers, configfile: setup.cfg
plugins: xdist-2.4.0, timeout-2.0.1, dash-2.0.0, hypothesis-6.31.3, forked-1.3.0, typeguard-2.13.2
collected 49 items
tests/tapas/test_modeling_tf_tapas.py::TFTapasModelTest::test_attention_outputs <- tests/test_modeling_tf_common.py PASSED [ 2%]
tests/tapas/test_modeling_tf_tapas.py::TFTapasModelTest::test_compile_tf_model <- tests/test_modeling_tf_common.py PASSED [ 4%]
tests/tapas/test_modeling_tf_tapas.py::TFTapasModelTest::test_config PASSED [ 6%]
tests/tapas/test_modeling_tf_tapas.py::TFTapasModelTest::test_determinism <- tests/test_modeling_tf_common.py PASSED [ 8%]
tests/tapas/test_modeling_tf_tapas.py::TFTapasModelTest::test_for_masked_lm PASSED [ 10%]
tests/tapas/test_modeling_tf_tapas.py::TFTapasModelTest::test_for_question_answering PASSED [ 12%]
tests/tapas/test_modeling_tf_tapas.py::TFTapasModelTest::test_for_sequence_classification PASSED [ 14%]
tests/tapas/test_modeling_tf_tapas.py::TFTapasModelTest::test_forward_signature <- tests/test_modeling_tf_common.py PASSED [ 16%]
tests/tapas/test_modeling_tf_tapas.py::TFTapasModelTest::test_generate_with_headmasking <- tests/test_modeling_tf_common.py PASSED [ 18%]
tests/tapas/test_modeling_tf_tapas.py::TFTapasModelTest::test_headmasking <- tests/test_modeling_tf_common.py PASSED [ 20%]
tests/tapas/test_modeling_tf_tapas.py::TFTapasModelTest::test_hidden_states_output <- tests/test_modeling_tf_common.py PASSED [ 22%]
tests/tapas/test_modeling_tf_tapas.py::TFTapasModelTest::test_initialization <- tests/test_modeling_tf_common.py PASSED [ 24%]
tests/tapas/test_modeling_tf_tapas.py::TFTapasModelTest::test_inputs_embeds <- tests/test_modeling_tf_common.py PASSED [ 26%]
tests/tapas/test_modeling_tf_tapas.py::TFTapasModelTest::test_keras_save_load <- tests/test_modeling_tf_common.py PASSED [ 28%]
tests/tapas/test_modeling_tf_tapas.py::TFTapasModelTest::test_keyword_and_dict_args <- tests/test_modeling_tf_common.py PASSED [ 30%]
tests/tapas/test_modeling_tf_tapas.py::TFTapasModelTest::test_lm_head_model_beam_search_generate_dict_outputs <- tests/test_modeling_tf_common.py PASSED [ 32%]
tests/tapas/test_modeling_tf_tapas.py::TFTapasModelTest::test_lm_head_model_no_beam_search_generate_dict_outputs <- tests/test_modeling_tf_common.py PASSED [ 34%]
tests/tapas/test_modeling_tf_tapas.py::TFTapasModelTest::test_lm_head_model_random_beam_search_generate <- tests/test_modeling_tf_common.py PASSED [ 36%]
tests/tapas/test_modeling_tf_tapas.py::TFTapasModelTest::test_lm_head_model_random_no_beam_search_generate <- tests/test_modeling_tf_common.py PASSED [ 38%]
tests/tapas/test_modeling_tf_tapas.py::TFTapasModelTest::test_load_with_mismatched_shapes <- tests/test_modeling_tf_common.py PASSED [ 40%]
tests/tapas/test_modeling_tf_tapas.py::TFTapasModelTest::test_loss_computation <- tests/test_modeling_tf_common.py PASSED [ 42%]
tests/tapas/test_modeling_tf_tapas.py::TFTapasModelTest::test_model PASSED [ 44%]
tests/tapas/test_modeling_tf_tapas.py::TFTapasModelTest::test_model_common_attributes <- tests/test_modeling_tf_common.py PASSED [ 46%]
tests/tapas/test_modeling_tf_tapas.py::TFTapasModelTest::test_model_main_input_name <- tests/test_modeling_tf_common.py PASSED [ 48%]
tests/tapas/test_modeling_tf_tapas.py::TFTapasModelTest::test_model_outputs_equivalence <- tests/test_modeling_tf_common.py PASSED [ 51%]
tests/tapas/test_modeling_tf_tapas.py::TFTapasModelTest::test_numpy_arrays_inputs <- tests/test_modeling_tf_common.py PASSED [ 53%]
tests/tapas/test_modeling_tf_tapas.py::TFTapasModelTest::test_onnx_compliancy <- tests/test_modeling_tf_common.py PASSED [ 55%]
tests/tapas/test_modeling_tf_tapas.py::TFTapasModelTest::test_onnx_runtime_optimize <- tests/test_modeling_tf_common.py PASSED [ 57%]
tests/tapas/test_modeling_tf_tapas.py::TFTapasModelTest::test_pt_tf_model_equivalence <- tests/test_modeling_tf_common.py SKIPPED (test is PT+TF test) [ 59%]
tests/tapas/test_modeling_tf_tapas.py::TFTapasModelTest::test_resize_token_embeddings <- tests/test_modeling_tf_common.py PASSED [ 61%]
tests/tapas/test_modeling_tf_tapas.py::TFTapasModelTest::test_save_load <- tests/test_modeling_tf_common.py PASSED [ 63%]
tests/tapas/test_modeling_tf_tapas.py::TFTapasModelTest::test_save_load_config <- tests/test_modeling_tf_common.py PASSED [ 65%]
tests/tapas/test_modeling_tf_tapas.py::TFTapasModelIntegrationTest::test_inference_classification_head PASSED [ 67%]
tests/tapas/test_modeling_tf_tapas.py::TFTapasModelIntegrationTest::test_inference_masked_lm SKIPPED (Model not available yet) [ 69%]
tests/tapas/test_modeling_tf_tapas.py::TFTapasModelIntegrationTest::test_inference_no_head PASSED [ 71%]
tests/tapas/test_modeling_tf_tapas.py::TFTapasModelIntegrationTest::test_inference_question_answering_head_conversational PASSED [ 73%]
tests/tapas/test_modeling_tf_tapas.py::TFTapasModelIntegrationTest::test_inference_question_answering_head_conversational_absolute_embeddings PASSED [ 75%]
tests/tapas/test_modeling_tf_tapas.py::TFTapasModelIntegrationTest::test_inference_question_answering_head_strong_supervision PASSED [ 77%]
tests/tapas/test_modeling_tf_tapas.py::TFTapasModelIntegrationTest::test_inference_question_answering_head_weak_supervision PASSED [ 79%]
tests/tapas/test_modeling_tf_tapas.py::TFTapasModelIntegrationTest::test_training_question_answering_head_weak_supervision PASSED [ 81%]
tests/tapas/test_modeling_tf_tapas.py::TFTapasUtilsTest::test_flatten PASSED [ 83%]
tests/tapas/test_modeling_tf_tapas.py::TFTapasUtilsTest::test_gather PASSED [ 85%]
tests/tapas/test_modeling_tf_tapas.py::TFTapasUtilsTest::test_gather_vectorized PASSED [ 87%]
tests/tapas/test_modeling_tf_tapas.py::TFTapasUtilsTest::test_product_index PASSED [ 89%]
tests/tapas/test_modeling_tf_tapas.py::TFTapasUtilsTest::test_range_index_map PASSED [ 91%]
tests/tapas/test_modeling_tf_tapas.py::TFTapasUtilsTest::test_reduce_max PASSED [ 93%]
tests/tapas/test_modeling_tf_tapas.py::TFTapasUtilsTest::test_reduce_mean PASSED [ 95%]
tests/tapas/test_modeling_tf_tapas.py::TFTapasUtilsTest::test_reduce_sum PASSED [ 97%]
tests/tapas/test_modeling_tf_tapas.py::TFTapasUtilsTest::test_reduce_sum_vectorized PASSED [100%]
============================================================================== warnings summary ===============================================================================
../../../.virtualenvs/transformers/lib/python3.8/site-packages/flatbuffers/compat.py:19
/Users/kamal.raj/.virtualenvs/transformers/lib/python3.8/site-packages/flatbuffers/compat.py:19: DeprecationWarning: the imp module is deprecated in favour of importlib; see the module's documentation for alternative uses
import imp
-- Docs: https://docs.pytest.org/en/stable/warnings.html
============================================================ 47 passed, 2 skipped, 1 warning in 105.84s (0:01:45) =============================================================
```
@gante |
transformers | 16,144 | closed | Add type hints for XLM TensorFlow | # What does this PR do?
<!--
Congratulations! You've made it this far! You're not quite done yet though.
Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution.
Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change.
Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost.
-->
<!-- Remove if not applicable -->
Fixes # (issue)
## Before submitting
- [x] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [x] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [x] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/master/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/master/docs#writing-source-documentation).
- [x] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
<!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @
If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**.
Please tag fewer than 3 people.
Models:
- albert, bert, xlm: @LysandreJik
- blenderbot, bart, marian, pegasus, encoderdecoder, t5: @patrickvonplaten, @patil-suraj
- longformer, reformer, transfoxl, xlnet: @patrickvonplaten
- fsmt: @stas00
- funnel: @sgugger
- gpt2: @patrickvonplaten, @LysandreJik
- rag: @patrickvonplaten, @lhoestq
- tensorflow: @LysandreJik
Library:
- benchmarks: @patrickvonplaten
- deepspeed: @stas00
- ray/raytune: @richardliaw, @amogkam
- text generation: @patrickvonplaten
- tokenizers: @n1t0, @LysandreJik
- trainer: @sgugger
- pipelines: @LysandreJik
Documentation: @sgugger
HF projects:
- datasets: [different repo](https://github.com/huggingface/datasets)
- rust tokenizers: [different repo](https://github.com/huggingface/tokenizers)
Examples:
- maintained examples (not research project or legacy): @sgugger, @patil-suraj
- research_projects/bert-loses-patience: @JetRunner
- research_projects/distillation: @VictorSanh
-->
| 03-14-2022 17:20:51 | 03-14-2022 17:20:51 | The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_16144). All of your documentation changes will be reflected on that endpoint. |
transformers | 16,143 | closed | clearer model variable naming: ELECTRA | # What does this PR do?
ELECTRA: clearer model variable naming
<!--
Congratulations! You've made it this far! You're not quite done yet though.
Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution.
Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change.
Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost.
-->
<!-- Remove if not applicable -->
Fixes # (issue)
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/master/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/master/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
<!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @
If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**.
Please tag fewer than 3 people.
Models:
- albert, bert, xlm: @LysandreJik
- blenderbot, bart, marian, pegasus, encoderdecoder, t5: @patrickvonplaten, @patil-suraj
- longformer, reformer, transfoxl, xlnet: @patrickvonplaten
- fsmt: @stas00
- funnel: @sgugger
- gpt2: @patrickvonplaten, @LysandreJik
- rag: @patrickvonplaten, @lhoestq
- tensorflow: @LysandreJik
Library:
- benchmarks: @patrickvonplaten
- deepspeed: @stas00
- ray/raytune: @richardliaw, @amogkam
- text generation: @patrickvonplaten
- tokenizers: @n1t0, @LysandreJik
- trainer: @sgugger
- pipelines: @LysandreJik
Documentation: @sgugger
HF projects:
- datasets: [different repo](https://github.com/huggingface/datasets)
- rust tokenizers: [different repo](https://github.com/huggingface/tokenizers)
Examples:
- maintained examples (not research project or legacy): @sgugger, @patil-suraj
- research_projects/bert-loses-patience: @JetRunner
- research_projects/distillation: @VictorSanh
-->
@gante
| 03-14-2022 17:19:31 | 03-14-2022 17:19:31 | _The documentation is not available anymore as the PR was closed or merged._<|||||>@gante
yes. all passed
|
transformers | 16,142 | closed | Use templates | # What does this PR do?
This PR switches to use the templates in `doc-builder` for the doc building jobs. | 03-14-2022 16:15:28 | 03-14-2022 16:15:28 | _The documentation is not available anymore as the PR was closed or merged._<|||||>Is `.github/workflows/build_documentation.yml` handled in a different PR? It is deleted by this PR and doesn't have a replacement.<|||||>Are you sure you looked at the whole diff? It's just modified by the PR, not deleted. |
transformers | 16,141 | closed | Docker images runtime -> devel | Updates the base image for the docker images used for testing to use the `devel` version of the original docker images instead of the `runtime` version, as it's the version necessary for some TF utilities to run. | 03-14-2022 15:37:02 | 03-14-2022 15:37:02 | Let me just add a failing example before this PR:
(just for the record)
```
tests/bart/test_modeling_tf_bart.py::TFBartModelTest::test_xla_mode
(line 54) tensorflow.python.framework.errors_impl.InternalError: libdevice not found at ./libdevice.10.bc [Op:__inference_run_in_graph_mode_1125293]
``` |
transformers | 16,140 | closed | Fixes Loss for TransfoXL when using Trainer API v2 | Reopens https://github.com/huggingface/transformers/pull/14568#issue-1066398357 with backward-compatible mechanisms | 03-14-2022 15:32:40 | 03-14-2022 15:32:40 | _The documentation is not available anymore as the PR was closed or merged._ |
transformers | 16,139 | closed | Use `HF_ENDPOINT` for custom endpoints | # What does this PR do?
To be consistent with Datasets, this PR updates the env variable checked for a custom endpoint to `HF_ENDPOINT` and deprecates the old one.
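As a hedged illustration of how such a variable is typically consumed (not part of this PR's diff, and the URL below is a made-up placeholder), setting it before the library is imported would look roughly like this:
```python
import os

# The endpoint must be in the environment before `transformers` reads its constants,
# i.e. before the import below.
os.environ["HF_ENDPOINT"] = "https://hub-mirror.example.com"  # placeholder URL

from transformers import AutoTokenizer

# Downloads are then resolved against the custom endpoint instead of huggingface.co.
tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
```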
Fixes #15514 | 03-14-2022 15:08:11 | 03-14-2022 15:08:11 | |
transformers | 16,138 | closed | Steps strategy fix for PushToHubCallback and changed docstring | Hello 👋
The steps strategy for `PushToHubCallback` wasn't working because
`if self.save_strategy == IntervalStrategy.STEPS and batch + 1 % self.save_steps == 0:`
this is only true when `self.save_steps` is 1 and `batch` is 0 (because `1 % self.save_steps` is evaluated before the `+`). So I observed a weird behavior where the callback only pushed at batch 0 when `save_steps` was 1, and it never pushed when `save_steps` was greater than 1.
This is a small fix that does
`if self.save_strategy == IntervalStrategy.STEPS and (batch + 1) % self.save_steps == 0:` which fixes the callback.
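To make the precedence issue concrete, here is a tiny standalone snippet (the numbers are chosen only for illustration):
```python
batch, save_steps = 9, 5

# Original check: `%` binds tighter than `+`, so this parses as
# (batch + (1 % save_steps)) == 0, which is never true once save_steps > 1.
print(batch + 1 % save_steps == 0)    # False

# Fixed check: true every `save_steps` batches (batch 4, 9, 14, ...).
print((batch + 1) % save_steps == 0)  # True
```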
I also changed the description for `no` in the docstring.
I will also add tests for the callback later. | 03-14-2022 12:59:05 | 03-14-2022 12:59:05 | _The documentation is not available anymore as the PR was closed or merged._ |
transformers | 16,137 | closed | [Doctests] up | # What does this PR do?
<!--
Congratulations! You've made it this far! You're not quite done yet though.
Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution.
Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change.
Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost.
-->
<!-- Remove if not applicable -->
Fixes # (issue)
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/master/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/master/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
<!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @
If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**.
Please tag fewer than 3 people.
Models:
- albert, bert, xlm: @LysandreJik
- blenderbot, bart, marian, pegasus, encoderdecoder, t5: @patrickvonplaten, @patil-suraj
- longformer, reformer, transfoxl, xlnet: @patrickvonplaten
- fsmt: @stas00
- funnel: @sgugger
- gpt2: @patrickvonplaten, @LysandreJik
- rag: @patrickvonplaten, @lhoestq
- tensorflow: @LysandreJik
Library:
- benchmarks: @patrickvonplaten
- deepspeed: @stas00
- ray/raytune: @richardliaw, @amogkam
- text generation: @patrickvonplaten
- tokenizers: @n1t0, @LysandreJik
- trainer: @sgugger
- pipelines: @LysandreJik
Documentation: @sgugger
HF projects:
- datasets: [different repo](https://github.com/huggingface/datasets)
- rust tokenizers: [different repo](https://github.com/huggingface/tokenizers)
Examples:
- maintained examples (not research project or legacy): @sgugger, @patil-suraj
- research_projects/bert-loses-patience: @JetRunner
- research_projects/distillation: @VictorSanh
-->
| 03-14-2022 11:41:16 | 03-14-2022 11:41:16 | The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_16137). All of your documentation changes will be reflected on that endpoint. |
transformers | 16,136 | closed | Add warning message if model uses `input_ids` that include padding tokens, but no `attention_mask` is provided. | ## **First good issue**
A current error is that a user forwards a batched tensor of `input_ids` that includes a padding token, e.g. ```input_ids = torch.tensor([["hello", "this", "is", "a", "long", "string"], ["hello", "<pad>", "<pad>", "<pad>", "<pad>", "<pad>"]])```
In this case, the `attention_mask` should be provided as well. Otherwise the output hidden_states will be incorrectly computed. This is quite a common silent error IMO.
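As a hedged sketch (model and checkpoint names are just examples), the mask either comes for free from the tokenizer or can be derived from the pad token id:
```python
import torch
from transformers import AutoTokenizer, AutoModel

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
model = AutoModel.from_pretrained("bert-base-uncased")

# Tokenizing with padding already returns the matching attention mask.
batch = tokenizer(["hello this is a long string", "hello"], padding=True, return_tensors="pt")
outputs = model(input_ids=batch["input_ids"], attention_mask=batch["attention_mask"])

# If `input_ids` are built by hand instead, the mask can be derived from the pad token id.
attention_mask = (batch["input_ids"] != tokenizer.pad_token_id).long()
```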
With @LysandreJik @sgugger, we have decided to not **automatically** create the `attention_mask` that masks out the padding tokens in this case because of the reasons explained here: https://github.com/huggingface/transformers/issues/15479#issuecomment-1066639938. However, as pointed out in https://github.com/huggingface/transformers/issues/15479, we should IMO at least display a warning since this error happens a lot.
As a first good issue, one could add such a warning to the BertModel as a first case, which would go something like:
```py
if attention_mask is None and (input_ids == pad_token_id).any():
logger.warn("display nice warning here....")
```
What do you think @sgugger @LysandreJik? | 03-14-2022 11:21:32 | 03-14-2022 11:21:32 | Models usually don't know the right pad token ID as pointed out in the issue (I'm also not sure that community-contributed models or models not as heavily used as BERT have the right pad token ID in their configs), so I'm not in favor of this. Plus, the check of the inputs at each forward pass would slow down performance.
I agree that it's a common error, and it would make a very nice addition to the troubleshooting guide IMO, but I'm not sure we can add anything in the library to properly warn users without hurting performance or having a lot of false alarms.<|||||>Hmm, think we can be pretty confident that `self.config.pad_token_id` inside the model is the correct padding token. Agree that performance would suffer here a bit. Think putting it in the Trouble shooting guide is a good idea cc @stevhliu <|||||>Yay more content for the troubleshooting guide! I'll work on a PR for this π <|||||>Hey, @patrickvonplaten can I work on this issue? <|||||>Sure that'd be great. Just to make sure we don't do duplicated work here - @ydshieh you haven't started on this one yet no? <|||||>Hi, @Pawank06 @patrickvonplaten
Not really. On Sep. 2022, I rebased the branch @patrickvonplaten created [add_important_warning_padding_attention_mask]( https://github.com/huggingface/transformers/tree/add_important_warning_padding_attention_mask), but then turned my focus to other tasks.
@Pawank06, maybe you can pull that branch, rebase on the latest main, and continue what @patrickvonplaten has done? Don't hesitate if you need any help ❤️
<|||||>@ydshieh @patrickvonplaten Ok can you assign me this issue and also can you please share me the file path <|||||>@ydshieh @Pawank06 Hello, if no one is actively working on this issue, I am willing to take a look and continue the work!<|||||>@anruijian Let's wait a bit for @Pawank06 's response :-) Thank you for expressing the interest π― <|||||>@ydshieh Sure. It seems @Pawank06 removed the assignment. <|||||>I see. @anruijian , you can take a look on [this comment](https://github.com/huggingface/transformers/issues/16136#issuecomment-1416072271), and let me know if you have any question before working on it. Thank you!<|||||>@ydshieh I have checked the [add_important_warning_padding_attention_mask](https://github.com/huggingface/transformers/tree/add_important_warning_padding_attention_mask) and would like to confirm my understanding of the current status and next steps before proceeding with my work. As of now, the task has been completed for the Torch version. The next steps involve adding an equivalent warning function to the TensorFlow and Flax versions. More specifically, in [FlaxPreTrainedModel](https://github.com/huggingface/transformers/blob/main/src/transformers/modeling_flax_utils.py#L157), [modeling_flax_bert.py](https://github.com/huggingface/transformers/blob/main/src/transformers/models/bert/modeling_flax_bert.py)and [TFPreTrainedModel](https://github.com/huggingface/transformers/blob/main/src/transformers/modeling_tf_utils.py#L1076), [modeling_tf_bert.py](https://github.com/huggingface/transformers/blob/main/src/transformers/models/bert/modeling_tf_bert.py). Thank you!<|||||>Hi @anruijian . No, the torch part is not finished yet. @patrickvonplaten added a method `warn_if_pad_token_in_input_ids_no_attention_mask` in `src/transformers/modeling_utils.py`, and only used that method in a modeling file `src/transformers/models/bert/modeling_bert.py`.
The goal is to have the same change made in `modeling_bert.py` to other pytorch modeling files in `transformers`, like GPT2, Bart, T5, etc., wherever it makes sense, mostly will be in the places where we have
```python
elif input_ids is not None:
input_shape = input_ids.size()
```
<|||||>@patrickvonplaten @ydshieh It looks like none of the pull requests were committed yet, I'd like to take a stab at this issue if it's ok. Thanks.
<|||||>@hackyon Sure!
You can probably continue the work from the branch opened
https://github.com/huggingface/transformers/pull/21916 |
transformers | 16,135 | open | Improve EncoderDecoderModel docs | ## **First good issue**
There have been quite a few issues/questions about how to use the Encoder-Decoder model, e.g.: https://github.com/huggingface/transformers/issues/4483 and https://github.com/huggingface/transformers/issues/15479. The main reason for this is that the model docs are quite outdated, and we could use a nice How-to guide.
So I think we have two action items here:
1. Improve https://huggingface.co/docs/transformers/v4.17.0/en/model_doc/encoder-decoder#encoder-decoder-models a.k.a.: https://github.com/huggingface/transformers/blob/master/docs/source/model_doc/encoder-decoder.mdx
We should mention here:
a) How to create a model? We should show how to use the `from_encoder_decoder_pretrained(...)` method and then how to save the model (see the sketch below).
b) How to fine-tune this model? We should mention that this model can then be fine-tuned just like any other encoder-decoder model (Bart, T5, ...)
c) Put a big warning that the config values have to be correctly set and how to set them, e.g. read: https://github.com/huggingface/transformers/issues/15479
This should be an `EncoderDecoderModel` specific text and be very concise and short.
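As a hedged sketch of what point a) (together with the warning in c)) could show — checkpoint names are only examples, not the final doc text:
```python
from transformers import AutoTokenizer, EncoderDecoderModel

# Warm-start an encoder-decoder model from two pretrained checkpoints ...
model = EncoderDecoderModel.from_encoder_decoder_pretrained("bert-base-uncased", "bert-base-uncased")
tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")

# ... set the config values needed for training/generation (see point c)) ...
model.config.decoder_start_token_id = tokenizer.cls_token_id
model.config.pad_token_id = tokenizer.pad_token_id

# ... then save and reload it like any other model; it can be fine-tuned like BART or T5.
model.save_pretrained("my-encoder-decoder")
model = EncoderDecoderModel.from_pretrained("my-encoder-decoder")
```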
In a second step, we should then write a How-to-guide that includes much more details.
More than happy to help someone tackle this first good issue | 03-14-2022 11:14:12 | 03-14-2022 11:14:12 | Hi...I would love to contribute to this.<|||||>Awesome! Would you like to open a PR and give it a try? :-) I think it would be great if we could put some example code on how to create an EncoderDecoderModel on this model doc: https://github.com/huggingface/transformers/blob/master/docs/source/model_doc/encoder-decoder.mdx which will then be displayed here: https://huggingface.co/docs/transformers/v4.17.0/en/model_doc/encoder-decoder#encoder-decoder-models :-)
Let me know if you have any questions! Happy to help :-)<|||||>Yes..definitely...Will open a PR shortly and ask for help when I'm stuck..thanks a lot.<|||||>Hi @patrickvonplaten , I would love to contribute to this. <|||||>@patrickvonplaten , I have created the fork and added some docs.
> So I think we have two action items here:
>
> 1. Improve https://huggingface.co/docs/transformers/v4.17.0/en/model_doc/encoder-decoder#encoder-decoder-models a.k.a.: https://github.com/huggingface/transformers/blob/master/docs/source/model_doc/encoder-decoder.mdx
>
> We should mention here:
> a) How to create a model ? We should show how to use the `from_encoder_decoder_pretrained(...)` and then how to save the model?
> b) How to fine-tune this model? We should mention that this model can then be fine-tuned just like any other encoder-decoder model (Bart, T5, ...)
**I have added some documentation, let me know what do you think about this.**
>c) Put a big warning that the config values have to be correctly set and how to set them, e.g. read: #15479
**I didn't got chance to go through this, I will try to cover it this week**
> In a second step, we should then write a How-to-guide that includes much more details.
**I have added a Colab which has a detailed explanation of the encoder-decoder model and how to train it. Does that help with this?**
<|||||>Hey @Threepointone4, that's great!
Could you maybe open a PR for:
```
We should mention here:
a) How to create a model ? We should show how to use the from_encoder_decoder_pretrained(...) and then how to save the model?
b) How to fine-tune this model? We should mention that this model can then be fine-tuned just like any other encoder-decoder model (Bart, T5, ...)
```
? :-)<|||||>@patrickvonplaten I have created the PR and done the changes based on my understanding. Please let me know if some changes are required. <|||||>Hello all, I'm very much a beginner in this space, so please excuse the potentially stupid question. I have been experimenting with rolling out my own encoder-decoder combinations for use with the VisionEncoderDecoder class as specified in the docs [here](https://huggingface.co/docs/transformers/model_doc/vision-encoder-decoder)
> The [VisionEncoderDecoderModel](https://huggingface.co/docs/transformers/v4.24.0/en/model_doc/vision-encoder-decoder#transformers.VisionEncoderDecoderModel) can be used to initialize an image-to-text model with any pretrained Transformer-based vision model as the encoder ...
but I keep running into the issue of getting this error message
> The attention mask and the pad token id were not set. As a consequence, you may observe unexpected behavior. Please pass your input's `attention_mask` to obtain reliable results.
Setting `pad_token_id` to `eos_token_id`:2 for open-end generation.
Based on reading the docs, I am not entirely sure if I need to specifically fine-tune an encoder-decoder combination on the image-to-text downstream task (and the error message above is due to that), or if I can just use pre-trained configurations without fine-tuning.
Perhaps I could open a PR with some docs suggestions?<|||||> Hey @Winterflower,
Could you please try to use the forum instead for such questions: https://discuss.huggingface.co/ ? :-) Thank you! <|||||>Is this issue still open? Because I would like to take it and try to solve this problem.<|||||>I would like to contribute<|||||>Hi, if this issue is still open I would love to contribute.<|||||>Hi @patrickvonplaten, is it still open? I want to work on this!
<|||||>Sure maybe you can browse https://huggingface.co/docs/transformers/v4.31.0/en/model_doc/encoder-decoder#overview and check if there is anything we can improve |
transformers | 16,134 | closed | Add possibility to evaluate predefined completions based on logits softmax | # What does this PR do?
This PR enables users to evaluate a predefined set of completions for a given prompt based on the softmax probabilities.
I think sometimes you don't want to have completely "free" text generation but rather have a set of possible completions and have the model decide which fits the best. Of course this is not as easy as it sounds but using the mean or the median of the probabilities can be a good estimate of the fitness of the completion.
The function evaluate_completions() does exactly that given the following params:
- input_ids
- completion_list
Usage:
```python
prob_list = model.evaluate_completions(input_ids, completion_tokenized)
```
Example:
Here is a short example for a given prompt and three possible completions.
Code
```python
import numpy as np

# `model` and `tokenizer` are assumed to be an already loaded causal LM and its tokenizer, moved to cuda:0
prompt = "In recent events the collaboration between Albert Einstein and Yann LeCun started getting traction."
completion_options = [" That's why they kissed.\n", " Einstein suggested to work together on Artifical Intelligence.\n", " Kevin is my name\n"]
completion_tokenized = [tokenizer(completion_str, return_tensors="pt").input_ids[0].to("cuda:0") for completion_str in completion_options]
input_ids = tokenizer(prompt, return_tensors="pt").input_ids
input_ids = input_ids.to("cuda:0")
prob_list = model.evaluate_completions(input_ids, completion_tokenized)
for idx, prob_item in enumerate(prob_list):
print(completion_options[idx])
print(prob_item)
print("Mean: " + str(np.mean(prob_item)))
print("Median: " + str(np.median(prob_item)))
print("------------------")
```
Results
```
That's why they kissed.
[0.0034647872671484947, 0.04622786119580269, 0.07665219902992249, 0.017973434180021286, 3.707055611812393e-06, 0.09081998467445374, 0.32161933183670044]
Mean: 0.07953732931995157
Median: 0.04622786119580269
------------------
Einstein suggested to work together on Artifical Intelligence.
[0.004310959950089455, 0.0010098001221194863, 0.15259899199008942, 0.005570804700255394, 0.3281031548976898, 0.44232818484306335, 0.00010287049371981993, 0.5048327445983887, 0.9722456932067871, 0.8074969053268433, 0.2874982953071594, 0.09513897448778152]
Mean: 0.30010311499366554
Median: 0.22004864364862442
------------------
Kevin is my name
[5.61575961910421e-06, 0.005230772774666548, 0.0016122044762596488, 0.008357840590178967, 0.009741517715156078]
Mean: 0.004989590263176069
Median: 0.005230772774666548
------------------
```
We see that by using the mean or median, we can distinguish between a good and a bad fit.
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/master/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/master/docs#writing-source-documentation).
- [x] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
<!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @
If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**.
Please tag fewer than 3 people.
Models:
- albert, bert, xlm: @LysandreJik
- blenderbot, bart, marian, pegasus, encoderdecoder, t5: @patrickvonplaten, @patil-suraj
- longformer, reformer, transfoxl, xlnet: @patrickvonplaten
- fsmt: @stas00
- funnel: @sgugger
- gpt2: @patrickvonplaten, @LysandreJik
- rag: @patrickvonplaten, @lhoestq
- tensorflow: @LysandreJik
Library:
- benchmarks: @patrickvonplaten
- deepspeed: @stas00
- ray/raytune: @richardliaw, @amogkam
- text generation: @patrickvonplaten
- tokenizers: @n1t0, @LysandreJik
- trainer: @sgugger
- pipelines: @LysandreJik
Documentation: @sgugger
HF projects:
- datasets: [different repo](https://github.com/huggingface/datasets)
- rust tokenizers: [different repo](https://github.com/huggingface/tokenizers)
Examples:
- maintained examples (not research project or legacy): @sgugger, @patil-suraj
- research_projects/bert-loses-patience: @JetRunner
- research_projects/distillation: @VictorSanh
-->
@patrickvonplaten
| 03-14-2022 10:19:00 | 03-14-2022 10:19:00 | _The documentation is not available anymore as the PR was closed or merged._<|||||>@patrickvonplaten @sgugger <|||||>Hey @kevinpl07,
Thanks for the PR. This use case sounds quite specific to me and I don't think core transformers is the right place for it especially since it's really just a helper method IMO. It'll be pretty hard to maintain this method I think.
Since this is a bit of an advanced use case, we can assume that users could write this method on their own I think:
```
import torch
from torch import nn
from transformers import GPT2LMHeadModel, GPT2Tokenizer

model = GPT2LMHeadModel.from_pretrained("gpt2")
tokenizer = GPT2Tokenizer.from_pretrained("gpt2")

# placeholder prompt and candidate completions, replace with your own data
inputs = tokenizer("In recent events the collaboration started getting traction.", return_tensors="pt").input_ids
completion_list = [
    tokenizer(text, return_tensors="pt").input_ids[0]
    for text in [" That's great news.", " Kevin is my name."]
]

output_attentions = output_hidden_states = False
bos_token_id = model.config.bos_token_id
decoder_start_token_id = getattr(model.config, "decoder_start_token_id", None)
model_kwargs = {}

inputs_tensor, _, model_kwargs = model._prepare_model_inputs(inputs, bos_token_id, model_kwargs)
batch_size = inputs_tensor.shape[0]

if model.config.is_encoder_decoder:
    input_ids = model._prepare_decoder_input_ids_for_generation(
        batch_size,
        decoder_start_token_id=decoder_start_token_id,
        bos_token_id=bos_token_id,
        model_kwargs=model_kwargs,
    )
else:
    input_ids = inputs_tensor

orig_input_ids = input_ids.clone()
complete_result = []
for completion in completion_list:
    input_ids = orig_input_ids.clone()
    model_kwargs = {}  # reset any cached state between candidate completions
    result = []
    for completion_token in completion:
        model_inputs = model.prepare_inputs_for_generation(input_ids, **model_kwargs)
        # forward pass to get probabilities
        outputs = model(
            **model_inputs,
            return_dict=True,
            output_attentions=output_attentions,
            output_hidden_states=output_hidden_states,
        )
        next_token_logits = outputs.logits[:, -1, :]
        probs = nn.functional.softmax(next_token_logits, dim=-1)
        result.append(probs[0][completion_token].item())
        input_ids = torch.cat([input_ids, completion_token.view(1, -1)], dim=-1)
        model_kwargs = model._update_model_kwargs_for_generation(
            outputs, model_kwargs, is_encoder_decoder=model.config.is_encoder_decoder
        )
    complete_result.append(result)
```
What I think could be very useful is to write a blog post or a forum post about this - do you think this might be interesting for you? :-)<|||||>Thanks for your review @patrickvonplaten. It is very specific and I totally understand your thought process :)
- closing the PR
- Any place you could recommend for such a blog post?<|||||>Thanks for the understanding @kevinpl07!
Sure! If you prefer to make it a short blog post, you could do a forum post: https://discuss.huggingface.co/ like we've done it here e.g.: https://discuss.huggingface.co/t/generation-probabilities-how-to-compute-probabilities-of-output-scores-for-gpt2/3175
If you prefer to make a longer blog post we could also add it here: https://huggingface.co/blog (happy to help!) |
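For reference, a minimal sketch of the kind of computation such a post could cover (greedy GPT-2 decoding with a placeholder prompt; this is not the method proposed in this PR):

```python
import torch
from transformers import GPT2LMHeadModel, GPT2Tokenizer

tokenizer = GPT2Tokenizer.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2")

inputs = tokenizer("Hello, my dog is", return_tensors="pt")
out = model.generate(**inputs, max_new_tokens=5, return_dict_in_generate=True, output_scores=True)

# out.scores holds one logits tensor per generated step; softmax them and pick
# the probability of the token that was actually generated at each step
gen_tokens = out.sequences[0, inputs["input_ids"].shape[-1]:]
token_probs = [torch.softmax(score, dim=-1)[0, tok].item() for score, tok in zip(out.scores, gen_tokens)]
print(token_probs)
```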
transformers | 16,133 | closed | [Generate Docs] Correct docs | # What does this PR do?
<!--
Congratulations! You've made it this far! You're not quite done yet though.
Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution.
Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change.
Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost.
-->
<!-- Remove if not applicable -->
Corrects / Improves the generate docs. Thanks for the feedback @stevhliu !
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/master/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/master/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
<!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @
If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**.
Please tag fewer than 3 people.
Models:
- albert, bert, xlm: @LysandreJik
- blenderbot, bart, marian, pegasus, encoderdecoder, t5: @patrickvonplaten, @patil-suraj
- longformer, reformer, transfoxl, xlnet: @patrickvonplaten
- fsmt: @stas00
- funnel: @sgugger
- gpt2: @patrickvonplaten, @LysandreJik
- rag: @patrickvonplaten, @lhoestq
- tensorflow: @LysandreJik
Library:
- benchmarks: @patrickvonplaten
- deepspeed: @stas00
- ray/raytune: @richardliaw, @amogkam
- text generation: @patrickvonplaten
- tokenizers: @n1t0, @LysandreJik
- trainer: @sgugger
- pipelines: @LysandreJik
Documentation: @sgugger
HF projects:
- datasets: [different repo](https://github.com/huggingface/datasets)
- rust tokenizers: [different repo](https://github.com/huggingface/tokenizers)
Examples:
- maintained examples (not research project or legacy): @sgugger, @patil-suraj
- research_projects/bert-loses-patience: @JetRunner
- research_projects/distillation: @VictorSanh
-->
| 03-14-2022 09:50:48 | 03-14-2022 09:50:48 | _The documentation is not available anymore as the PR was closed or merged._ |
transformers | 16,132 | closed | typo "conaining" -> "containing" | # What does this PR do?
<!--
Congratulations! You've made it this far! You're not quite done yet though.
Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution.
Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change.
Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost.
-->
<!-- Remove if not applicable -->
Fixes # (issue)
## Before submitting
- [x] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/master/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/master/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
<!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @
If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**.
Please tag fewer than 3 people.
Models:
- albert, bert, xlm: @LysandreJik
- blenderbot, bart, marian, pegasus, encoderdecoder, t5: @patrickvonplaten, @patil-suraj
- longformer, reformer, transfoxl, xlnet: @patrickvonplaten
- fsmt: @stas00
- funnel: @sgugger
- gpt2: @patrickvonplaten, @LysandreJik
- rag: @patrickvonplaten, @lhoestq
- tensorflow: @LysandreJik
Library:
- benchmarks: @patrickvonplaten
- deepspeed: @stas00
- ray/raytune: @richardliaw, @amogkam
- text generation: @patrickvonplaten
- tokenizers: @n1t0, @LysandreJik
- trainer: @sgugger
- pipelines: @LysandreJik
Documentation: @sgugger
HF projects:
- datasets: [different repo](https://github.com/huggingface/datasets)
- rust tokenizers: [different repo](https://github.com/huggingface/tokenizers)
Examples:
- maintained examples (not research project or legacy): @sgugger, @patil-suraj
- research_projects/bert-loses-patience: @JetRunner
- research_projects/distillation: @VictorSanh
-->
| 03-14-2022 09:36:22 | 03-14-2022 09:36:22 | _The documentation is not available anymore as the PR was closed or merged._ |
transformers | 16,131 | closed | integrations: mlflow: skip start_run() if a run is already active and sanity check on enabling integration | # What does this PR do?
- Allows reusing any existing MLflow run that may be active before the Trainer is initialized. Contrary to the discussion in #15663, the choice of creating a nested run is left to the user, since the user can now explicitly start a nested run before calling the Trainer (if they require one) and reuse it for logging.
- Additionally, adds an option to disable the MLflow callback through an environment variable. This is useful when the user wishes to fully control the flow outside of the transformers library context.
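A rough sketch of the behaviour the first point enables (the model checkpoint, run name and the tiny two-example dataset below are only placeholders): an MLflow run started before the Trainer is created gets reused by the callback instead of a brand-new run being opened.

```python
import mlflow
import torch
from transformers import AutoModelForSequenceClassification, AutoTokenizer, Trainer, TrainingArguments

tokenizer = AutoTokenizer.from_pretrained("distilbert-base-uncased")
model = AutoModelForSequenceClassification.from_pretrained("distilbert-base-uncased")

enc = tokenizer(["great movie", "terrible movie"], padding=True, return_tensors="pt")
train_dataset = [
    {"input_ids": enc["input_ids"][i], "attention_mask": enc["attention_mask"][i], "labels": torch.tensor(i % 2)}
    for i in range(2)
]

# start (or nest) the MLflow run yourself; the Trainer's MLflow callback then logs into this run
with mlflow.start_run(run_name="hf-finetune"):
    args = TrainingArguments(output_dir="out", report_to=["mlflow"], num_train_epochs=1)
    Trainer(model=model, args=args, train_dataset=train_dataset).train()
```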
<!--
Congratulations! You've made it this far! You're not quite done yet though.
Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution.
Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change.
Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost.
-->
<!-- Remove if not applicable -->
Fixes #15663 #11115 (issue)
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [x] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [x] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/master/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/master/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
@noise-field @sgugger
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
<!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @
If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**.
Please tag fewer than 3 people.
Models:
- albert, bert, xlm: @LysandreJik
- blenderbot, bart, marian, pegasus, encoderdecoder, t5: @patrickvonplaten, @patil-suraj
- longformer, reformer, transfoxl, xlnet: @patrickvonplaten
- fsmt: @stas00
- funnel: @sgugger
- gpt2: @patrickvonplaten, @LysandreJik
- rag: @patrickvonplaten, @lhoestq
- tensorflow: @LysandreJik
Library:
- benchmarks: @patrickvonplaten
- deepspeed: @stas00
- ray/raytune: @richardliaw, @amogkam
- text generation: @patrickvonplaten
- tokenizers: @n1t0, @LysandreJik
- trainer: @sgugger
- pipelines: @LysandreJik
Documentation: @sgugger
HF projects:
- datasets: [different repo](https://github.com/huggingface/datasets)
- rust tokenizers: [different repo](https://github.com/huggingface/tokenizers)
Examples:
- maintained examples (not research project or legacy): @sgugger, @patil-suraj
- research_projects/bert-loses-patience: @JetRunner
- research_projects/distillation: @VictorSanh
-->
| 03-14-2022 09:01:01 | 03-14-2022 09:01:01 | _The documentation is not available anymore as the PR was closed or merged._ |
transformers | 16,130 | closed | Nystromformer Pretrained model with longer sequence length | The released Nystromformer model only deals with 512 tokens. Will we release some pretrained models with max sequence lengths such as 1024, 2048 and 4096, as mentioned in the paper?
@novice03 @NielsRogge | 03-14-2022 05:34:04 | 03-14-2022 05:34:04 | Hello, the model for larger sequence lengths might be released in a few weeks.<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored.<|||||>Hi,
Nyströmformer trained on sequence length 1024 is available now: https://huggingface.co/uw-madison/nystromformer-1024
Closing this issue. |
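For anyone looking for a quick start with that checkpoint, something along these lines should work (assuming the Auto classes resolve it):

```python
from transformers import AutoModel, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("uw-madison/nystromformer-1024")
model = AutoModel.from_pretrained("uw-madison/nystromformer-1024")

inputs = tokenizer("Nyströmformer approximates self-attention for longer sequences.", return_tensors="pt")
outputs = model(**inputs)
print(outputs.last_hidden_state.shape)
```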
transformers | 16,129 | closed | Better input variable naming for OpenAI (TF) | # What does this PR do?
Uses the @unpack_inputs decorator instead of input_processing for OpenAI (TF). See https://github.com/huggingface/transformers/issues/16051
Tagging @Rocketknight1 for review!
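To illustrate the idea behind such a decorator (this is a heavily simplified stand-in, not the actual implementation in `modeling_tf_utils.py`, which handles many more input formats): dict-style inputs get spread into plain keyword arguments so the body of `call` can use named variables directly.

```python
import functools
import tensorflow as tf

def unpack_inputs_sketch(call_fn):
    # simplified illustration only: spread dict-style inputs into keyword arguments
    @functools.wraps(call_fn)
    def wrapper(self, inputs=None, **kwargs):
        if isinstance(inputs, dict):
            kwargs = {**inputs, **kwargs}
        elif inputs is not None:
            kwargs["input_ids"] = inputs
        return call_fn(self, **kwargs)
    return wrapper

class ToyModel:
    @unpack_inputs_sketch
    def call(self, input_ids=None, attention_mask=None, training=False):
        # the body works with plain names instead of indexing into an `inputs` dict
        return tf.reduce_sum(input_ids, axis=-1)

model = ToyModel()
ids = tf.constant([[1, 2, 3]])
print(model.call(ids))
print(model.call({"input_ids": ids, "attention_mask": tf.ones_like(ids)}))
```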
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/master/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/master/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
<!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @
If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**.
Please tag fewer than 3 people.
Models:
- albert, bert, xlm: @LysandreJik
- blenderbot, bart, marian, pegasus, encoderdecoder, t5: @patrickvonplaten, @patil-suraj
- longformer, reformer, transfoxl, xlnet: @patrickvonplaten
- fsmt: @stas00
- funnel: @sgugger
- gpt2: @patrickvonplaten, @LysandreJik
- rag: @patrickvonplaten, @lhoestq
- tensorflow: @LysandreJik
Library:
- benchmarks: @patrickvonplaten
- deepspeed: @stas00
- ray/raytune: @richardliaw, @amogkam
- text generation: @patrickvonplaten
- tokenizers: @n1t0, @LysandreJik
- trainer: @sgugger
- pipelines: @LysandreJik
Documentation: @sgugger
HF projects:
- datasets: [different repo](https://github.com/huggingface/datasets)
- rust tokenizers: [different repo](https://github.com/huggingface/tokenizers)
Examples:
- maintained examples (not research project or legacy): @sgugger, @patil-suraj
- research_projects/bert-loses-patience: @JetRunner
- research_projects/distillation: @VictorSanh
-->
| 03-14-2022 00:02:48 | 03-14-2022 00:02:48 | _The documentation is not available anymore as the PR was closed or merged._<|||||>I've also made the same edit on the PR description :)<|||||>@gante Made changes as you suggested and I'm confirming that the tests pass for `RUN_SLOW=1 py.test -vv tests/openai/test_modeling_tf_openai.py` |
transformers | 16,128 | closed | Improve model variable naming - CLIP [TF] | # What does this PR do?
Uses the @unpack_inputs decorator instead of `input_processing` for CLIP (TF). See https://github.com/huggingface/transformers/issues/16051
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/master/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/master/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
Tagging @Rocketknight1 for review! ππ½
| 03-13-2022 23:21:36 | 03-13-2022 23:21:36 | _The documentation is not available anymore as the PR was closed or merged._<|||||>cc @patil-suraj <|||||>Also, making a cheeky edit on your PR description -- I'm swapping `Fixes` with `See` (`Fixes` closes the associated issue)<|||||>Thanks for the review @gante! Made changes in https://github.com/huggingface/transformers/pull/16128/commits/2840ef018073dd7df88a2e85e193dfbca775b88b and I'm confirming that `RUN_SLOW=1 py.test -vv tests/[model_name]/test_modeling_tf_[model_name].py` works for `clip`. |
transformers | 16,127 | closed | Add type hints for GPTNeo PyTorch | Adding type hints for forward methods in user-facing class for GPTNeo (PyTorch) as mentioned in #16059
@Rocketknight1 | 03-13-2022 23:20:59 | 03-13-2022 23:20:59 | _The documentation is not available anymore as the PR was closed or merged._<|||||>If you run `make style` from the root of the repo and push again then it should fix the failing the tests.<|||||>@patil-suraj Thanks for handling this one, feel free to merge it whenever you're happy!<|||||>@Tegzes if you change the type for `past_key_values` from `List` to `Tuple` we should be good to merge :) <|||||>Awesome, I'm making the changes right now |
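As a toy illustration of the annotation style discussed here, including `past_key_values` typed as a `Tuple` rather than a `List` (this is a placeholder module, not the real GPT-Neo signature):

```python
from typing import Optional, Tuple
import torch
from torch import nn

class ToyCausalLM(nn.Module):
    def forward(
        self,
        input_ids: Optional[torch.LongTensor] = None,
        past_key_values: Optional[Tuple[Tuple[torch.FloatTensor]]] = None,
        attention_mask: Optional[torch.FloatTensor] = None,
    ) -> Tuple[torch.Tensor, ...]:
        hidden = torch.zeros(*input_ids.shape, 8) if input_ids is not None else torch.zeros(1, 1, 8)
        return (hidden,)

print(ToyCausalLM()(torch.tensor([[1, 2, 3]]))[0].shape)
```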
transformers | 16,126 | closed | Add type hints for SqueezeBert PyTorch | Adding type hints for forward methods in user-facing class for SqueezeBert (PyTorch) as mentioned in #16059
@Rocketknight1 | 03-13-2022 23:02:54 | 03-13-2022 23:02:54 | _The documentation is not available anymore as the PR was closed or merged._ |
transformers | 16,125 | closed | finetuning without trainer seems not data parallel when using deepspeed? | Hi, I am trying to use deepspeed to fine-tune a model, but it seems the data are not parallelized when using deepspeed?
I have written a toy script to reproduce this, using 100 sentences with batch_size=4, so the dataloader size is 25 when using one GPU; when I try to use multiple GPUs, the dataloader size is still 25, which means we still need to loop 25 times. I mean, when we are using multiple GPUs, shouldn't the data be parallelized? For example, with 5 GPUs here, shouldn't we only need to loop 5 times instead of 25 times?
I've only recently started using deepspeed and am not yet familiar with it, so sorry for the easy question; I hope someone can give me some suggestions. @stas00
see as follows:
```
import numpy as np
from datasets import load_dataset
from torch.utils.data import DataLoader
from torch.optim import Adam
import deepspeed
from transformers.deepspeed import HfDeepSpeedConfig
from transformers import M2M100Config, M2M100ForConditionalGeneration, M2M100Tokenizer, DataCollatorForSeq2Seq

ds_config = {
    "gradient_accumulation_steps": 1,
    "train_batch_size": 4,
    "train_micro_batch_size_per_gpu": 4,
    "wall_clock_breakdown": True,
    "steps_per_print": 2000,
    "fp16": {
        "enabled": False,
        "loss_scale": 0,
        "loss_scale_window": 1000,
        "hysteresis": 2,
        "min_loss_scale": 1
    },
    "zero_optimization": {
        "stage": 2,
        "allgather_partitions": True,
        "allgather_bucket_size": 500000000,
        "overlap_comm": True,
        "reduce_scatter": True,
        "reduce_bucket_size": 500000000,
        "contiguous_gradients": False,
        "cpu_offload": True
    },
    "optimizer": {
        "type": "Adam",
        "params": {
            "lr": 3e-5,
            "betas": [
                0.8,
                0.999
            ],
            "eps": 1e-8,
            "weight_decay": 3e-7
        }
    },
    "scheduler": {
        "type": "WarmupLR",
        "params": {
            "warmup_min_lr": 0,
            "warmup_max_lr": 3e-5,
            "warmup_num_steps": 500
        }
    }
}

model = M2M100ForConditionalGeneration.from_pretrained("facebook/m2m100_418M")
tokenizer = M2M100Tokenizer.from_pretrained("facebook/m2m100_418M", src_lang="en", tgt_lang="ro")

raw_datasets = load_dataset('wmt16', 'ro-en')
source_lang = 'en'
target_lang = 'ro'

def preprocess_function(examples):
    inputs = [ex[source_lang] for ex in examples["translation"]]
    targets = [ex[target_lang] for ex in examples["translation"]]
    model_inputs = tokenizer(inputs, max_length=128, padding=True, truncation=True)
    # Setup the tokenizer for targets
    with tokenizer.as_target_tokenizer():
        labels = tokenizer(targets, max_length=128, padding=True, truncation=True)
    model_inputs["labels"] = labels["input_ids"]
    return model_inputs

train_datasets = raw_datasets['train'].select(range(100))
train_dataset = train_datasets.map(
    preprocess_function,
    batched=True,
    remove_columns=raw_datasets["train"].column_names,
    desc="Running tokenizer on train dataset",
)

label_pad_token_id = -100
data_collator = DataCollatorForSeq2Seq(
    tokenizer,
    model=model,
    label_pad_token_id=label_pad_token_id,
)
train_dataloader = DataLoader(train_dataset, shuffle=True, collate_fn=data_collator, batch_size=4)

dschf = HfDeepSpeedConfig(ds_config)
deepspeed.init_distributed()
model_engine, optimizer, _, _ = deepspeed.initialize(model=model, model_parameters=model.parameters(), config_params=ds_config)

all_loss = []
for batch_step, batch in enumerate(train_dataloader):
    print(str(batch_step))
    batch['input_ids'] = batch['input_ids'].cuda()
    batch['attention_mask'] = batch['attention_mask'].cuda()
    batch['labels'] = batch['labels'].cuda()
    outputs = model_engine(**batch)
    loss = outputs.loss
    print(loss)
    all_loss.append(loss)
    model_engine.backward(loss)
    model_engine.step()

print('total: ' + np.mean(all_loss))
```
+ code using:
- ```python test.py``` when using one GPU.
- ```deepspeed --num_gpus=5 test.py``` and set ```train_batch_size``` to 20 when using 5 GPUs. | 03-13-2022 21:52:08 | 03-13-2022 21:52:08 | Hi @lavine-lmu,
Your query needs to be asked at the https://github.com/microsoft/DeepSpeed side since you're not using the HF/DS integration but writing your own training loop.
Also, you don't need `HfDeepSpeedConfig` unless you use ZeRO-3. That's the only time its functionality is used to tell `from_pretrained` to load the model directly on gpus.
To give you a hint, your dataloader is unaware of DDP, so you either need to use a deepspeed dataloader or to code it properly for DDP. You can see how this is done in the HF Trainer here:
https://github.com/huggingface/transformers/blob/8f3ea7a1e1a85e80210b3d4423b674d9a61016ed/src/transformers/trainer.py#L677-L684
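A minimal DDP-aware version of the dataloader from the script above could look like this (it needs to be created after `deepspeed.init_distributed()` so the process group is available; `train_dataset` and `data_collator` are the objects already defined in the script):

```python
import torch.distributed as dist
from torch.utils.data import DataLoader
from torch.utils.data.distributed import DistributedSampler

# each rank gets its own shard, so the per-rank loop length becomes len(dataset) / world_size
train_sampler = DistributedSampler(
    train_dataset,
    num_replicas=dist.get_world_size(),
    rank=dist.get_rank(),
    shuffle=True,
)
train_dataloader = DataLoader(
    train_dataset,
    sampler=train_sampler,  # do not also pass shuffle=True when a sampler is given
    collate_fn=data_collator,
    batch_size=4,
)
```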
<|||||>@stas00 Thanks, problem solved after I use the ```deepspeed``` dataloader. |
transformers | 16,124 | closed | Update convert_marian_to_pytorch.py | Configuration `tied-embeddings-all` implies `tied-embeddings-src` (I believe).
# What does this PR do?
<!--
Congratulations! You've made it this far! You're not quite done yet though.
Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution.
Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change.
Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost.
-->
<!-- Remove if not applicable -->
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [X] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/master/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/master/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
@patil-suraj
<!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @
If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**.
Please tag fewer than 3 people.
Models:
- albert, bert, xlm: @LysandreJik
- blenderbot, bart, marian, pegasus, encoderdecoder, t5: @patrickvonplaten, @patil-suraj
- longformer, reformer, transfoxl, xlnet: @patrickvonplaten
- fsmt: @stas00
- funnel: @sgugger
- gpt2: @patrickvonplaten, @LysandreJik
- rag: @patrickvonplaten, @lhoestq
- tensorflow: @LysandreJik
Library:
- benchmarks: @patrickvonplaten
- deepspeed: @stas00
- ray/raytune: @richardliaw, @amogkam
- text generation: @patrickvonplaten
- tokenizers: @n1t0, @LysandreJik
- trainer: @sgugger
- pipelines: @LysandreJik
Documentation: @sgugger
HF projects:
- datasets: [different repo](https://github.com/huggingface/datasets)
- rust tokenizers: [different repo](https://github.com/huggingface/tokenizers)
Examples:
- maintained examples (not research project or legacy): @sgugger, @patil-suraj
- research_projects/bert-loses-patience: @JetRunner
- research_projects/distillation: @VictorSanh
-->
| 03-13-2022 21:05:01 | 03-13-2022 21:05:01 | _The documentation is not available anymore as the PR was closed or merged._ |
transformers | 16,123 | closed | Add type hints for FNet PyTorch | Adding type hints for forward methods in user-facing class for FNet PyTorch as mentioned in #16059
@Rocketknight1 | 03-13-2022 16:27:36 | 03-13-2022 16:27:36 | _The documentation is not available anymore as the PR was closed or merged._ |
transformers | 16,122 | closed | Remove `pos` from _build_network_inputs | @NielsRogge
https://github.com/huggingface/transformers/issues/15971
| 03-13-2022 14:49:52 | 03-13-2022 14:49:52 | The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_16122). All of your documentation changes will be reflected on that endpoint.<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored.<|||||>@Rocketknight1 is it mergeable ?<|||||>Hi, I don't know much about this issue unfortunately. Does it relate to the issue from @NielsRogge ?<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored. |
transformers | 16,121 | closed | Add type hints for PoolFormer in Pytorch | # What does this PR do?
<!--
Congratulations! You've made it this far! You're not quite done yet though.
Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution.
Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change.
Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost.
-->
<!-- Remove if not applicable -->
Add type hints for PoolFormer in Pytorch https://github.com/huggingface/transformers/issues/16059 (issue)
@Rocketknight1
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [x] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/master/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/master/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
<!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @
If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**.
Please tag fewer than 3 people.
Models:
- albert, bert, xlm: @LysandreJik
- blenderbot, bart, marian, pegasus, encoderdecoder, t5: @patrickvonplaten, @patil-suraj
- longformer, reformer, transfoxl, xlnet: @patrickvonplaten
- fsmt: @stas00
- funnel: @sgugger
- gpt2: @patrickvonplaten, @LysandreJik
- rag: @patrickvonplaten, @lhoestq
- tensorflow: @LysandreJik
Library:
- benchmarks: @patrickvonplaten
- deepspeed: @stas00
- ray/raytune: @richardliaw, @amogkam
- text generation: @patrickvonplaten
- tokenizers: @n1t0, @LysandreJik
- trainer: @sgugger
- pipelines: @LysandreJik
Documentation: @sgugger
HF projects:
- datasets: [different repo](https://github.com/huggingface/datasets)
- rust tokenizers: [different repo](https://github.com/huggingface/tokenizers)
Examples:
- maintained examples (not research project or legacy): @sgugger, @patil-suraj
- research_projects/bert-loses-patience: @JetRunner
- research_projects/distillation: @VictorSanh
-->
| 03-13-2022 09:40:46 | 03-13-2022 09:40:46 | _The documentation is not available anymore as the PR was closed or merged._ |
transformers | 16,120 | closed | RAM OOM with a large save_steps using trainer API for MLM training | ## Environment info
<!-- You can run the command `transformers-cli env` and copy-and-paste its output below.
Don't forget to fill out the missing fields in that output! -->
- `transformers` version: 4.17.0
- Platform: Linux-4.14.105-1-tlinux3-0013-x86_64-with-glibc2.17
- Python version: 3.9.2
- PyTorch version (GPU?): 1.10.2+cu113 (True)
- Tensorflow version (GPU?): not installed (NA)
- Flax version (CPU?/GPU?/TPU?): not installed (NA)
- Jax version: not installed
- JaxLib version: not installed
- Using GPU in script?: yes
- Using distributed or parallel set-up in script?: yes (in fact the trainer API handles it automatically, with 8 gpus)
### Who can help
<!-- Your issue will be replied to more quickly if you can figure out the right person to tag with @
If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**.
Please tag fewer than 3 people.
Models:
- ALBERT, BERT, XLM, DeBERTa, DeBERTa-v2, ELECTRA, MobileBert, SqueezeBert: @LysandreJik
- T5, BART, Marian, Pegasus, EncoderDecoder: @patrickvonplaten
- Blenderbot, MBART: @patil-suraj
- Longformer, Reformer, TransfoXL, XLNet, FNet, BigBird: @patrickvonplaten
- FSMT: @stas00
- Funnel: @sgugger
- GPT-2, GPT: @patrickvonplaten, @LysandreJik
- RAG, DPR: @patrickvonplaten, @lhoestq
- TensorFlow: @Rocketknight1
- JAX/Flax: @patil-suraj
- TAPAS, LayoutLM, LayoutLMv2, LUKE, ViT, BEiT, DEiT, DETR, CANINE: @NielsRogge
- GPT-Neo, GPT-J, CLIP: @patil-suraj
- Wav2Vec2, HuBERT, SpeechEncoderDecoder, UniSpeech, UniSpeechSAT, SEW, SEW-D, Speech2Text: @patrickvonplaten, @anton-l
If the model isn't in the list, ping @LysandreJik who will redirect you to the correct contributor.
Library:
- Benchmarks: @patrickvonplaten
- Deepspeed: @stas00
- Ray/raytune: @richardliaw, @amogkam
- Text generation: @patrickvonplaten @narsil
- Tokenizers: @SaulLu
- Trainer: @sgugger
- Pipelines: @Narsil
- Speech: @patrickvonplaten, @anton-l
- Vision: @NielsRogge, @sgugger
Documentation: @sgugger
Model hub:
- for issues with a model, report at https://discuss.huggingface.co/ and tag the model's creator.
HF projects:
- datasets: [different repo](https://github.com/huggingface/datasets)
- rust tokenizers: [different repo](https://github.com/huggingface/tokenizers)
Examples:
- maintained examples (not research project or legacy): @sgugger, @patil-suraj
For research projetcs, please ping the contributor directly. For example, on the following projects:
- research_projects/bert-loses-patience: @JetRunner
- research_projects/distillation: @VictorSanh
-->
Electra model: @LysandreJik
Trainer: @sgugger
## Information
Model I am using (Bert, XLNet ...): Electra(ForMaskedLM)
The problem arises when using:
* [ ] the official example scripts: (give details below)
* [x] my own modified scripts: (give details below)
The tasks I am working on is:
* [x] an official task: Masked LM (training from scratch)
* [ ] my own task or dataset: (give details below)
## Expected behavior
I don't know if it was a bug, so I'll describe the situation first.
I'm using the Trainer API to train an ElectraForMaskedLM model from scratch. I wrote a simple script (shown below) and set the `save_steps` argument to 10000. But after running the script, I observed that the RAM (memory, NOT CUDA) usage continuously increases, which finally caused an OOM error and terminated the docker container. At the same time, GPU, CUDA memory and CPU usage are quite normal.

To solve the problem, I've tried adjusting several training arguments, and found that when I set a smaller `save_steps` (like 2000), RAM usage goes up and down periodically, and the cycle length is approximately equal to the interval between two saved checkpoints, like the figure below:

So is there a bug in the checkpoint saving mechanism? Or is this normal, and I should simply have set a smaller `save_steps`?
The training script is below:
```python
import torch
import numpy as np
from transformers import ElectraConfig, ElectraTokenizerFast, ElectraForMaskedLM
import os

os.environ["TOKENIZERS_PARALLELISM"] = "true"

device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
# device = "cpu"
print(device)

# Tokenizer
print("Importing tokenizer and model...")
tokenizer = ElectraTokenizerFast(tokenizer_file='./models/bert-custom.json')

# Model
config = ElectraConfig(vocab_size=10000)
model = ElectraForMaskedLM(config=config).to(device)

# dataset
print("Importing dataset...")
from transformers import LineByLineTextDataset

dataset = LineByLineTextDataset(
    tokenizer=tokenizer,
    file_path="./data/in_neg_1e7.csv",
    block_size=80,
)
eval_set = LineByLineTextDataset(
    tokenizer=tokenizer,
    file_path="./data/in_eval_1e6.csv",
    block_size=80,
)

from transformers import DataCollatorForLanguageModeling

data_collator = DataCollatorForLanguageModeling(
    tokenizer=tokenizer, mlm=True, mlm_probability=0.15
)

from transformers import Trainer, TrainingArguments, EarlyStoppingCallback

print("Start Training...")
training_args = TrainingArguments(
    output_dir="./electra_save/",
    overwrite_output_dir=True,
    num_train_epochs=50,
    per_device_train_batch_size=512,
    per_device_eval_batch_size=512,
    eval_accumulation_steps=300,
    save_steps=10000,  # the key argument
    save_total_limit=2,
    dataloader_num_workers=80,
    evaluation_strategy='steps',
    eval_steps=10000,
    logging_steps=1000,
    metric_for_best_model='eval_loss',
    load_best_model_at_end=True,
)
trainer = Trainer(
    model=model,
    args=training_args,
    data_collator=data_collator,
    tokenizer=tokenizer,
    train_dataset=dataset,
    eval_dataset=eval_set,
    callbacks=[EarlyStoppingCallback(early_stopping_patience=10)],
)
trainer.train()
trainer.save_model(output_dir='./electra_save/')
```
## To reproduce
Steps to reproduce the behavior:
1. run the training script above with a large dataset (I use a dataset with 10M lines of text) and a large `save_steps` argument
2. wait for a period of time and monitor the memory usage
3. encounter an OOM
<!-- If you have code snippets, error messages, stack traces please provide them here as well.
Important! Use code tags to correctly format your code. See https://help.github.com/en/github/writing-on-github/creating-and-highlighting-code-blocks#syntax-highlighting
Do not use screenshots, as they are hard to read and (more importantly) don't allow others to copy-and-paste your code.-->
| 03-13-2022 09:38:46 | 03-13-2022 09:38:46 | Addition: Previously I've used a similar script to train BertForMaskedLM, and no memory fluctuations were observed that time:

That's why I suspect it's abnormal this time.<|||||>So the issue only appears with Electra, right? Did you encounter it for any other model?<|||||>Yes. I've only tried on Bert and Electra so far (and there's no problem during Bert training, as above figure has shown).
If there's a need, I can try a similar script on RobertaForMaskedLM in the next few days.<|||||>That would be helpful if you could!<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored. |
transformers | 16,119 | closed | [ViTMAE] Add copied from statements and fix prefix | # What does this PR do?
Fixes #16101 by updating `base_model_prefix` from "vit_mae" to "vit" (`ViTMAEForPreTraining` is using the latter).
Also adds copied from statements to ensure consistency with `modeling_vit.py`. | 03-13-2022 09:31:00 | 03-13-2022 09:31:00 | _The documentation is not available anymore as the PR was closed or merged._ |
transformers | 16,118 | closed | Add flaubert types | # What does this PR do?
I added type hints for all FlauBERT PyTorch and Tensorflow classes as described in https://github.com/huggingface/transformers/issues/16059 .
@Rocketknight1
| 03-13-2022 09:18:41 | 03-13-2022 09:18:41 | _The documentation is not available anymore as the PR was closed or merged._ |
transformers | 16,117 | closed | Update expected slices for pillow > 9 | # What does this PR do?
This PR fixes the integration tests affected by Pillow's new version. | 03-13-2022 09:01:01 | 03-13-2022 09:01:01 | _The documentation is not available anymore as the PR was closed or merged._<|||||>Love what you did for BEiT! Should the same be done for the two others? |
transformers | 16,116 | closed | Change unpacking of TF mpnet, rag, roformer inputs to use decorator | # What does this PR do?
As part of #16051 this PR changes the unpacking of the inputs of TF mpnet, rag, roformer to use the decorator instead.
<!-- Remove if not applicable -->
Fixes # (issue)
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [x] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [x] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/master/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/master/docs#writing-source-documentation).
- [x] Did you write any new necessary tests?
## Who can review?
@Rocketknight1 @gante
| 03-13-2022 08:18:46 | 03-13-2022 08:18:46 | _The documentation is not available anymore as the PR was closed or merged._<|||||>These changes are also in #16112 -- can I close this PR? :)<|||||>Sorry about this mess. I will close it<|||||>> Sorry about this mess. I will close it
No worries! Thank you for closing the PR |
transformers | 16,115 | closed | Fixed Error Raised Due to Wrongly Accessing Training Sample | Fixed Error Raised Due to Wrongly Accessing Training Sample
# What does this PR do?
<!--
Congratulations! You've made it this far! You're not quite done yet though.
Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution.
Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change.
Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost.
-->
<!-- Remove if not applicable -->
Fixes [16114](https://github.com/huggingface/transformers/issues/16114)
## Before submitting
- [x] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
<!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @
If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**.
Please tag fewer than 3 people.
Models:
- albert, bert, xlm: @LysandreJik
- blenderbot, bart, marian, pegasus, encoderdecoder, t5: @patrickvonplaten, @patil-suraj
- longformer, reformer, transfoxl, xlnet: @patrickvonplaten
- fsmt: @stas00
- funnel: @sgugger
- gpt2: @patrickvonplaten, @LysandreJik
- rag: @patrickvonplaten, @lhoestq
- tensorflow: @LysandreJik
Library:
- benchmarks: @patrickvonplaten
- deepspeed: @stas00
- ray/raytune: @richardliaw, @amogkam
- text generation: @patrickvonplaten
- tokenizers: @n1t0, @LysandreJik
- trainer: @sgugger
- pipelines: @LysandreJik
Documentation: @sgugger
HF projects:
- datasets: [different repo](https://github.com/huggingface/datasets)
- rust tokenizers: [different repo](https://github.com/huggingface/tokenizers)
Examples:
- maintained examples (not research project or legacy): @sgugger, @patil-suraj
- research_projects/bert-loses-patience: @JetRunner
- research_projects/distillation: @VictorSanh
-->
| 03-13-2022 07:35:50 | 03-13-2022 07:35:50 | _The documentation is not available anymore as the PR was closed or merged._<|||||>Hey
@patil-suraj I did run make style but I see lots of files have now changed not sure if this is the intended effect but all I did was git clone an editable pip install and make style<|||||>Hey @aflah02, it looks like an issue with windows and line-endings. Could you revert your last commit, and we can take care of fixing the quality if you give us write access on your repository!<|||||>Sure @LysandreJik I'll revert the last commit, who should I give write access to?<|||||>> Sure @LysandreJik I'll revert the last commit, who should I give write access to?
Hey @aflah02 ! There's a button at the very bottom on the right hand panel of this page, called `Allow edits and access to secrets by maintainers`, if you tick that box any of us Transformers maintainers can access this PR.<|||||>Hey @patil-suraj I don't see one by that name but there's one by the name "Allow edits by maintainers" which is already checked, is that it?<|||||>Thanks! |
transformers | 16,114 | closed | Issue in Tutorial - "Fine-tune a pretrained model" | One of the code snippets in the tutorial is -
```
from datasets import load_dataset
dataset = load_dataset("yelp_review_full")
dataset[100]
```
However this raises an error -
```
KeyError Traceback (most recent call last)
[<ipython-input-1-0e4a64458134>](https://localhost:8080/#) in <module>()
2
3 dataset = load_dataset("yelp_review_full")
----> 4 dataset[100]
[/usr/local/lib/python3.7/dist-packages/datasets/dataset_dict.py](https://localhost:8080/#) in __getitem__(self, k)
46 suggested_split = available_suggested_splits[0] if available_suggested_splits else list(self)[0]
47 raise KeyError(
---> 48 f"Invalid key: {k}. Please first select a split. For example: "
49 f"`my_dataset_dictionary['{suggested_split}'][{k}]`. "
50 f"Available splits: {sorted(self)}"
KeyError: "Invalid key: 100. Please first select a split. For example: `my_dataset_dictionary['train'][100]`. Available splits: ['test', 'train']"
```
I believe this is just a missing split key when trying to access the value in the dictionary.
So the last line should be something like -
`dataset['train'][100]`
Which does indeed give a correct output -
```
{'label': 0,
'text': 'My expectations for McDonalds are t rarely high. But for one to still fail so spectacularly...that takes something special!\\nThe cashier took my friends\'s order, then promptly ignored me. I had to force myself in front of a cashier who opened his register to wait on the person BEHIND me. I waited over five minutes for a gigantic order that included precisely one kid\'s meal. After watching two people who ordered after me be handed their food, I asked where mine was. The manager started yelling at the cashiers for \\"serving off their orders\\" when they didn\'t have their food. But neither cashier was anywhere near those controls, and the manager was the one serving food to customers and clearing the boards.\\nThe manager was rude when giving me my order. She didn\'t make sure that I had everything ON MY RECEIPT, and never even had the decency to apologize that I felt I was getting poor service.\\nI\'ve eaten at various McDonalds restaurants for over 30 years. I\'ve worked at more than one location. I expect bad days, bad moods, and the occasional mistake. But I have yet to have a decent experience at this store. It will remain a place I avoid unless someone in my party needs to avoid illness from low blood sugar. Perhaps I should go back to the racially biased service of Steak n Shake instead!'}
```
| 03-13-2022 07:29:43 | 03-13-2022 07:29:43 | I can fix this if this fix looks fine<|||||>I think your code is right.<|||||>@lishengfever yeah i feel so too and it works but the pull request got clouded after running make style so I'm waiting for a response for that |
transformers | 16,113 | closed | Fix broken links | # What does this PR do?
- Fix broken links in page: https://huggingface.co/docs/transformers/model_doc/marian
<!--
Congratulations! You've made it this far! You're not quite done yet though.
Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution.
Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change.
Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost.
-->
<!-- Remove if not applicable -->
Fixes # (issue)
## Before submitting
- [x] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/master/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/master/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
<!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @
If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**.
Please tag fewer than 3 people.
Models:
- albert, bert, xlm: @LysandreJik
- blenderbot, bart, marian, pegasus, encoderdecoder, t5: @patrickvonplaten, @patil-suraj
- longformer, reformer, transfoxl, xlnet: @patrickvonplaten
- fsmt: @stas00
- funnel: @sgugger
- gpt2: @patrickvonplaten, @LysandreJik
- rag: @patrickvonplaten, @lhoestq
- tensorflow: @LysandreJik
Library:
- benchmarks: @patrickvonplaten
- deepspeed: @stas00
- ray/raytune: @richardliaw, @amogkam
- text generation: @patrickvonplaten
- tokenizers: @n1t0, @LysandreJik
- trainer: @sgugger
- pipelines: @LysandreJik
Documentation: @sgugger
HF projects:
- datasets: [different repo](https://github.com/huggingface/datasets)
- rust tokenizers: [different repo](https://github.com/huggingface/tokenizers)
Examples:
- maintained examples (not research project or legacy): @sgugger, @patil-suraj
- research_projects/bert-loses-patience: @JetRunner
- research_projects/distillation: @VictorSanh
-->
| 03-13-2022 07:13:03 | 03-13-2022 07:13:03 | _The documentation is not available anymore as the PR was closed or merged._<|||||>Have updated the documentation. Thanks for the catch, @LysandreJik |
transformers | 16,112 | closed | Change unpacking of TF layoutlm inputs to use decorator | # What does this PR do?
As part of #16051 this PR changes the unpacking of the inputs of TF layoutlm to use the decorator instead.
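For readers unfamiliar with #16051: the rough idea is that a decorator takes over the boilerplate of normalizing `call` inputs, so the method body can use plain named arguments directly. Below is a self-contained toy sketch of the pattern only — it is not the actual `transformers` implementation, and every name in it is illustrative:
```python
import functools

def unpack_inputs(call_fn):
    """Toy version: if the first positional argument is a dict, spread it into keyword arguments."""
    @functools.wraps(call_fn)
    def wrapper(self, *args, **kwargs):
        if args and isinstance(args[0], dict):
            merged = {**args[0], **kwargs}
            return call_fn(self, *args[1:], **merged)
        return call_fn(self, *args, **kwargs)
    return wrapper

class ToyLayer:
    @unpack_inputs
    def call(self, input_ids=None, attention_mask=None, training=False):
        # With the decorator, the body reads unpacked arguments directly
        # instead of fishing them out of a processed `inputs` dict.
        return input_ids, attention_mask, training

layer = ToyLayer()
print(layer.call({"input_ids": [1, 2, 3], "attention_mask": [1, 1, 1]}))
```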
Fixes # (issue)
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [x] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [x] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/master/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/master/docs#writing-source-documentation).
- [x] Did you write any new necessary tests?
## Who can review?
@Rocketknight1 @gante
| 03-13-2022 06:56:32 | 03-13-2022 06:56:32 | _The documentation is not available anymore as the PR was closed or merged._<|||||>Yeah, I agree RAG's test is super slow :( . Anyway, let me run all the tests again and I will commit when they are done<|||||>> Yeah, I agree RAG's test is super slow :( . Anyway, let me run all the tests again and I will commit when they are done
I'm assuming all tests have passed, since you've made a commit -- is this correct? :D<|||||>Yeah, all the tests have passed :D |
transformers | 16,111 | closed | Add type hints for Luke in PyTorch | # What does this PR do?
Adds type hints for Luke https://github.com/huggingface/transformers/issues/16059
Tagging @Rocketknight1 for review!
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/master/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/master/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
<!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @
If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**.
Please tag fewer than 3 people.
Models:
- albert, bert, xlm: @LysandreJik
- blenderbot, bart, marian, pegasus, encoderdecoder, t5: @patrickvonplaten, @patil-suraj
- longformer, reformer, transfoxl, xlnet: @patrickvonplaten
- fsmt: @stas00
- funnel: @sgugger
- gpt2: @patrickvonplaten, @LysandreJik
- rag: @patrickvonplaten, @lhoestq
- tensorflow: @LysandreJik
Library:
- benchmarks: @patrickvonplaten
- deepspeed: @stas00
- ray/raytune: @richardliaw, @amogkam
- text generation: @patrickvonplaten
- tokenizers: @n1t0, @LysandreJik
- trainer: @sgugger
- pipelines: @LysandreJik
Documentation: @sgugger
HF projects:
- datasets: [different repo](https://github.com/huggingface/datasets)
- rust tokenizers: [different repo](https://github.com/huggingface/tokenizers)
Examples:
- maintained examples (not research project or legacy): @sgugger, @patil-suraj
- research_projects/bert-loses-patience: @JetRunner
- research_projects/distillation: @VictorSanh
-->
| 03-13-2022 06:06:18 | 03-13-2022 06:06:18 | _The documentation is not available anymore as the PR was closed or merged._ |
transformers | 16,110 | closed | Change unpacking of TF mobilebert inputs to use decorator | # What does this PR do?
As part of #16051 this PR changes the unpacking of the inputs of TF mobilebert to use the decorator instead.
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [x] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [x] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/master/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/master/docs#writing-source-documentation).
- [x] Did you write any new necessary tests?
## Who can review?
@Rocketknight1 @gante
| 03-13-2022 06:05:06 | 03-13-2022 06:05:06 | _The documentation is not available anymore as the PR was closed or merged._<|||||>@gante I have run the tests and they all pass. But when I run `make fixup` on my Windows laptop it throws the error `-n was unexpected at this time.
make: *** [Makefile:10: modified_only_fixup] Error 255
` Do you know how to fix it?<|||||>> @gante I have run the test and all pass. But when I run `make fixup` in my windows laptop it throws the error `-n was unexpected at this time. make: *** [Makefile:10: modified_only_fixup] Error 255 ` Do you know how to fix it?
Check our guide [here](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md#develop-on-windows) :)<|||||>Thank you :D |
transformers | 16,109 | closed | Add type annotations for GPTJ | Add Python type annotations for the GPTJ Pytorch model.
| 03-12-2022 22:07:50 | 03-12-2022 22:07:50 | The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_16109). All of your documentation changes will be reflected on that endpoint.<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored. |
transformers | 16,108 | closed | Add type hints to XLM model (PyTorch) | # What does this PR do?
This PR adds type hints for XLMModel (PyTorch), which is one of the listed models in #16059.
Type hints were added based on the example given in #16074.
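As a rough illustration of the kind of change involved (the signature below is a simplified, hypothetical example rather than the real `XLMModel.forward`):
```python
from typing import Optional, Tuple
import torch

# Hypothetical, simplified signature showing the style of annotation being added.
def forward(
    self,
    input_ids: Optional[torch.Tensor] = None,
    attention_mask: Optional[torch.Tensor] = None,
    output_attentions: Optional[bool] = None,
    output_hidden_states: Optional[bool] = None,
    return_dict: Optional[bool] = None,
) -> Tuple[torch.Tensor, ...]:
    ...
```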
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [X] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [X] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/master/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/master/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
@Rocketknight1
| 03-12-2022 18:29:51 | 03-12-2022 18:29:51 | _The documentation is not available anymore as the PR was closed or merged._ |
transformers | 16,107 | closed | Add type hints for TFDistilBert | This PR adds type hints for TFDistilBert, following the example of https://github.com/huggingface/transformers/pull/16057 as mentioned by @Rocketknight1 in issue https://github.com/huggingface/transformers/issues/16059. | 03-12-2022 16:49:26 | 03-12-2022 16:49:26 | _The documentation is not available anymore as the PR was closed or merged._ |
transformers | 16,106 | closed | Add type annotations for CLIP (torch) (#16059) | # What does this PR do?
Added type annotations to CLIP per the guidelines. Ran make fixup and ran tests before creating PR.
#16059
## Before submitting
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [x] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case. [link here](https://github.com/huggingface/transformers/issues/16059)
| 03-12-2022 14:58:40 | 03-12-2022 14:58:40 | No emails
I'm about to report you people for harassment
Stop fucking emailing me !!!!!!!!!!
Fucking Assholes
Sent from my iPhone
> On Mar 12, 2022, at 9:58 AM, Jacob Dineen ***@***.***> wrote:
>
> ο»Ώ
> What does this PR do?
>
> Added type annotations to CLIP per the guidelines. Ran make fixup and ran tests before creating PR.
>
> Fixes # (16059)
>
> Before submitting
>
> Did you read the contributor guideline,
> Pull Request section?
> Was this discussed/approved via a Github issue or the forum? Please add a link
> to it if that's the case. link here
> You can view, comment on, or merge this pull request online at:
>
> https://github.com/huggingface/transformers/pull/16106
>
> Commit Summary
>
> 14f00db clip typhinting #16059
> File Changes (1 file)
> M src/transformers/models/clip/modeling_clip.py (125)
> Patch Links:
>
> https://github.com/huggingface/transformers/pull/16106.patch
> https://github.com/huggingface/transformers/pull/16106.diff
> β
> Reply to this email directly, view it on GitHub, or unsubscribe.
> Triage notifications on the go with GitHub Mobile for iOS or Android.
> You are receiving this because you are subscribed to this thread.
<|||||>_The documentation is not available anymore as the PR was closed or merged._<|||||>@debtriche I'm afraid that's not our fault - I'm guessing you've accidentally "watched" the Transformers repo! If you unselect that, you should stop receiving that stream of e-mails. Alternatively, you may have become subscribed to this thread somehow. If so, you can select the 'unsubscribe' button to stop that.

<|||||>Thanks :)
I've been trying everything, it's absolutely been driving me mad
Sent from my iPhone
> On Mar 12, 2022, at 10:13 AM, Matt ***@***.***> wrote:
>
>
> @Rocketknight1 commented on this pull request.
>
> In src/transformers/models/clip/modeling_clip.py:
>
> > + logits_per_image: Optional[torch.FloatTensor] = None
> + logits_per_text: Optional[torch.FloatTensor] = None
> + text_embeds: Optional[torch.FloatTensor] = None
> + image_embeds: Optional[torch.FloatTensor] = None
> + text_model_output: Optional[BaseModelOutputWithPooling] = None
> + vision_model_output: Optional[BaseModelOutputWithPooling] = None
> I don't think the Optional type is necessary for dataclass definitions, just function headers!
>
> β
> Reply to this email directly, view it on GitHub, or unsubscribe.
> Triage notifications on the go with GitHub Mobile for iOS or Android.
> You are receiving this because you were mentioned.
|
transformers | 16,105 | closed | Added missing type hints - Deberta V1 and V2 | # What does this PR do?
Added missing type hints - Deberta V1 and V2
<!--
Congratulations! You've made it this far! You're not quite done yet though.
Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution.
Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change.
Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost.
-->
<!-- Remove if not applicable -->
Fixes # (issue)
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/master/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/master/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
<!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @
If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**.
Please tag fewer than 3 people.
Models:
- albert, bert, xlm: @LysandreJik
- blenderbot, bart, marian, pegasus, encoderdecoder, t5: @patrickvonplaten, @patil-suraj
- longformer, reformer, transfoxl, xlnet: @patrickvonplaten
- fsmt: @stas00
- funnel: @sgugger
- gpt2: @patrickvonplaten, @LysandreJik
- rag: @patrickvonplaten, @lhoestq
- tensorflow: @LysandreJik
Library:
- benchmarks: @patrickvonplaten
- deepspeed: @stas00
- ray/raytune: @richardliaw, @amogkam
- text generation: @patrickvonplaten
- tokenizers: @n1t0, @LysandreJik
- trainer: @sgugger
- pipelines: @LysandreJik
Documentation: @sgugger
HF projects:
- datasets: [different repo](https://github.com/huggingface/datasets)
- rust tokenizers: [different repo](https://github.com/huggingface/tokenizers)
Examples:
- maintained examples (not research project or legacy): @sgugger, @patil-suraj
- research_projects/bert-loses-patience: @JetRunner
- research_projects/distillation: @VictorSanh
-->
@Rocketknight1 | 03-12-2022 14:58:25 | 03-12-2022 14:58:25 | _The documentation is not available anymore as the PR was closed or merged._ |
transformers | 16,104 | closed | Added missing type hints - ELECTRA TF | # What does this PR do?
Added missing type hints - ELECTRA TF
#16059
<!--
Congratulations! You've made it this far! You're not quite done yet though.
Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution.
Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change.
Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost.
-->
<!-- Remove if not applicable -->
Fixes # (issue)
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/master/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/master/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
<!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @
If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**.
Please tag fewer than 3 people.
Models:
- albert, bert, xlm: @LysandreJik
- blenderbot, bart, marian, pegasus, encoderdecoder, t5: @patrickvonplaten, @patil-suraj
- longformer, reformer, transfoxl, xlnet: @patrickvonplaten
- fsmt: @stas00
- funnel: @sgugger
- gpt2: @patrickvonplaten, @LysandreJik
- rag: @patrickvonplaten, @lhoestq
- tensorflow: @LysandreJik
Library:
- benchmarks: @patrickvonplaten
- deepspeed: @stas00
- ray/raytune: @richardliaw, @amogkam
- text generation: @patrickvonplaten
- tokenizers: @n1t0, @LysandreJik
- trainer: @sgugger
- pipelines: @LysandreJik
Documentation: @sgugger
HF projects:
- datasets: [different repo](https://github.com/huggingface/datasets)
- rust tokenizers: [different repo](https://github.com/huggingface/tokenizers)
Examples:
- maintained examples (not research project or legacy): @sgugger, @patil-suraj
- research_projects/bert-loses-patience: @JetRunner
- research_projects/distillation: @VictorSanh
-->
@Rocketknight1 | 03-12-2022 14:39:02 | 03-12-2022 14:39:02 | _The documentation is not available anymore as the PR was closed or merged._<|||||>Thanks for the quick fix, merging! |
transformers | 16,103 | closed | Added missing type hints - ELECTRA PyTorch | # What does this PR do?
Added missing type hints - ELECTRA PyTorch
#16059
<!--
Congratulations! You've made it this far! You're not quite done yet though.
Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution.
Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change.
Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost.
-->
<!-- Remove if not applicable -->
Fixes # (issue)
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/master/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/master/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
<!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @
If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**.
Please tag fewer than 3 people.
Models:
- albert, bert, xlm: @LysandreJik
- blenderbot, bart, marian, pegasus, encoderdecoder, t5: @patrickvonplaten, @patil-suraj
- longformer, reformer, transfoxl, xlnet: @patrickvonplaten
- fsmt: @stas00
- funnel: @sgugger
- gpt2: @patrickvonplaten, @LysandreJik
- rag: @patrickvonplaten, @lhoestq
- tensorflow: @LysandreJik
Library:
- benchmarks: @patrickvonplaten
- deepspeed: @stas00
- ray/raytune: @richardliaw, @amogkam
- text generation: @patrickvonplaten
- tokenizers: @n1t0, @LysandreJik
- trainer: @sgugger
- pipelines: @LysandreJik
Documentation: @sgugger
HF projects:
- datasets: [different repo](https://github.com/huggingface/datasets)
- rust tokenizers: [different repo](https://github.com/huggingface/tokenizers)
Examples:
- maintained examples (not research project or legacy): @sgugger, @patil-suraj
- research_projects/bert-loses-patience: @JetRunner
- research_projects/distillation: @VictorSanh
-->
@Rocketknight1 | 03-12-2022 14:19:35 | 03-12-2022 14:19:35 | _The documentation is not available anymore as the PR was closed or merged._ |
transformers | 16,102 | closed | Add unpack_input decorator to ViT model | # What does this PR do?
I added the unpack_input decorator for the ViT model as described in #16051
@gante
Maybe we can also delete the `**kwargs` from the `call` parameters? (tests seem unaffected) | 03-12-2022 13:39:38 | 03-12-2022 13:39:38 | _The documentation is not available anymore as the PR was closed or merged._<|||||>> Maybe we can also delete the **kwargs from the call parameters? (tests seem unaffected)
That's a good point. Actually, I think this will be true for all models after this change. Do you mind if I add it to my backlog, to confirm after this sprint? If we were to change this, I would have to bring other Hugging Face members in (changing the signature of a public method is sensitive).<|||||>STOP EMAILING ME!!!!!!!!!!!!!!!!!!!!!!
Sent from my iPhone
> On Mar 12, 2022, at 9:20 AM, Joao Gante ***@***.***> wrote:
>
>
> @gante approved this pull request.
>
> Fantastic, thank you so much for the contribution π
>
> Can you confirm that you've run RUN_SLOW=1 py.test -vv tests/[model_name]/test_modeling_tf_[model_name].py? Will merge after your confirmation :)
>
> β
> Reply to this email directly, view it on GitHub, or unsubscribe.
> Triage notifications on the go with GitHub Mobile for iOS or Android.
> You are receiving this because you are subscribed to this thread.
<|||||>Hi @debtriche π GitHub doesn't send random emails; if you are receiving unwanted emails, please check your subscription and email settings :)<|||||>> Fantastic, thank you so much for the contribution +1
>
> Can you confirm that you've run RUN_SLOW=1 py.test -vv tests/[model_name]/test_modeling_tf_[model_name].py? Will merge after your confirmation :)
Yes, I ran them, but two tests are skipped; all the others pass |
transformers | 16,101 | closed | Different weights in ViTMAEModel and ViTMAEForPreTraining | @NielsRogge
Thanks for the implementation of MAE! I noticed that the weights in ViTMAEModel and the weights in ViTMAEForPreTraining.vit are different. However, they should actually be the same model. I wonder whether there is some error when calling `model = ViTMAEModel.from_pretrained("facebook/vit-mae-base")`, because lots of weights in the encoder are randomly initialized.
Some weights of the model checkpoint at facebook/vit-mae-base were not used when initializing ViTMAEModel: ['vit.encoder.layer.9.intermediate.dense.bias', 'decoder.decoder_layers.5.attention.attention.value.weight', 'decoder.decoder_layers.4.output.dense.bias', 'vit.encoder.layer.8.intermediate.dense.weight', 'decoder.decoder_layers.3.layernorm_after.weight', 'decoder.decoder_pred.bias', 'vit.encoder.layer.2.layernorm_before.bias', 'decoder.decoder_layers.1.attention.attention.value.bias', 'decoder.decoder_layers.4.attention.attention.key.bias', 'vit.encoder.layer.8.intermediate.dense.bias', 'vit.encoder.layer.2.attention.output.dense.weight', 'decoder.decoder_layers.6.attention.output.dense.bias', 'vit.encoder.layer.8.layernorm_before.weight', 'vit.encoder.layer.8.layernorm_after.bias', 'vit.encoder.layer.11.attention.attention.query.bias', 'decoder.decoder_layers.4.output.dense.weight', 'vit.encoder.layer.1.layernorm_before.bias', 'vit.encoder.layer.0.attention.attention.key.bias', 'vit.encoder.layer.11.attention.attention.query.weight', 'decoder.decoder_layers.2.attention.attention.key.bias', 'vit.encoder.layer.3.layernorm_after.weight', 'vit.encoder.layer.5.layernorm_after.bias', 'decoder.decoder_layers.3.layernorm_after.bias', 'vit.encoder.layer.6.attention.attention.key.bias', 'decoder.decoder_layers.1.attention.attention.value.weight', 'vit.encoder.layer.4.attention.attention.value.weight', 'decoder.decoder_layers.3.output.dense.bias', 'vit.encoder.layer.3.attention.output.dense.bias', 'vit.encoder.layer.1.output.dense.weight', 'vit.encoder.layer.2.attention.attention.query.bias', 'vit.encoder.layer.11.intermediate.dense.weight', 'vit.encoder.layer.1.attention.attention.query.bias', 'decoder.decoder_embed.bias', 'decoder.decoder_layers.4.attention.attention.value.bias', 'decoder.decoder_layers.6.attention.attention.query.weight', 'decoder.decoder_layers.3.attention.attention.key.bias', 'vit.encoder.layer.10.attention.output.dense.bias', 'vit.encoder.layer.6.layernorm_after.bias', 'decoder.decoder_layers.0.intermediate.dense.bias', 'vit.encoder.layer.2.output.dense.weight', 'decoder.decoder_layers.6.layernorm_before.weight', 'vit.encoder.layer.11.layernorm_after.bias', 'vit.encoder.layer.11.output.dense.bias', 'vit.encoder.layer.10.attention.output.dense.weight', 'decoder.decoder_layers.5.attention.attention.value.bias', 'vit.encoder.layer.10.intermediate.dense.weight', 'vit.encoder.layer.0.attention.attention.key.weight', 'vit.encoder.layer.7.output.dense.bias', 'vit.encoder.layer.6.intermediate.dense.weight', 'vit.encoder.layer.8.attention.attention.query.bias', 'vit.encoder.layer.1.attention.attention.query.weight', 'vit.encoder.layer.10.layernorm_after.bias', 'vit.encoder.layer.1.layernorm_after.bias', 'vit.encoder.layer.5.layernorm_before.bias', 'decoder.decoder_layers.5.attention.output.dense.bias', 'vit.encoder.layer.7.attention.attention.value.bias', 'decoder.decoder_norm.weight', 'decoder.decoder_layers.6.attention.attention.query.bias', 'decoder.decoder_layers.7.attention.attention.query.weight', 'vit.encoder.layer.10.attention.attention.key.bias', 'vit.encoder.layer.5.attention.attention.key.weight', 'decoder.decoder_layers.1.attention.attention.query.weight', 'vit.encoder.layer.7.layernorm_after.weight', 'vit.encoder.layer.2.output.dense.bias', 'vit.encoder.layer.10.output.dense.weight', 'vit.encoder.layer.8.attention.attention.value.weight', 'vit.encoder.layer.9.output.dense.weight', 'decoder.decoder_layers.0.attention.attention.value.bias', 
'vit.encoder.layer.3.attention.attention.key.bias', 'decoder.decoder_layers.4.intermediate.dense.weight', 'vit.encoder.layer.9.attention.attention.key.weight', 'decoder.decoder_layers.7.attention.attention.value.bias', 'vit.encoder.layer.5.attention.attention.value.weight', 'vit.encoder.layer.9.output.dense.bias', 'decoder.decoder_layers.7.attention.attention.query.bias', 'vit.encoder.layer.2.layernorm_after.bias', 'vit.encoder.layer.5.output.dense.weight', 'vit.encoder.layer.3.intermediate.dense.bias', 'vit.encoder.layer.0.attention.output.dense.bias', 'decoder.decoder_layers.0.attention.attention.key.weight', 'vit.encoder.layer.0.layernorm_before.bias', 'vit.encoder.layer.0.intermediate.dense.bias', 'vit.encoder.layer.7.layernorm_after.bias', 'vit.encoder.layer.4.attention.output.dense.bias', 'decoder.decoder_layers.7.attention.attention.value.weight', 'vit.encoder.layer.7.attention.attention.key.weight', 'vit.encoder.layer.2.attention.attention.value.weight', 'vit.embeddings.patch_embeddings.projection.bias', 'vit.embeddings.patch_embeddings.projection.weight', 'vit.encoder.layer.9.intermediate.dense.weight', 'decoder.decoder_layers.4.layernorm_before.bias', 'vit.encoder.layer.0.layernorm_before.weight', 'decoder.decoder_layers.7.intermediate.dense.weight', 'decoder.decoder_layers.1.output.dense.bias', 'vit.encoder.layer.2.attention.attention.value.bias', 'vit.encoder.layer.6.attention.attention.query.bias', 'decoder.decoder_layers.2.output.dense.weight', 'decoder.decoder_layers.5.attention.attention.query.weight', 'decoder.decoder_layers.3.attention.attention.value.weight', 'vit.encoder.layer.10.attention.attention.value.bias', 'decoder.decoder_layers.2.layernorm_after.weight', 'decoder.decoder_layers.5.attention.attention.key.bias', 'vit.encoder.layer.10.attention.attention.value.weight', 'vit.encoder.layer.0.layernorm_after.bias', 'vit.encoder.layer.11.output.dense.weight', 'vit.encoder.layer.5.attention.attention.query.weight', 'decoder.decoder_layers.4.attention.output.dense.weight', 'vit.encoder.layer.3.layernorm_after.bias', 'vit.encoder.layer.4.intermediate.dense.bias', 'vit.encoder.layer.7.intermediate.dense.weight', 'vit.encoder.layer.6.attention.attention.query.weight', 'vit.encoder.layer.2.layernorm_before.weight', 'decoder.decoder_layers.5.attention.output.dense.weight', 'vit.encoder.layer.9.attention.output.dense.weight', 'vit.encoder.layer.5.intermediate.dense.bias', 'decoder.decoder_layers.4.attention.attention.value.weight', 'decoder.decoder_layers.1.layernorm_before.weight', 'decoder.decoder_layers.3.attention.output.dense.bias', 'vit.encoder.layer.6.attention.attention.value.bias', 'vit.encoder.layer.10.layernorm_before.weight', 'vit.encoder.layer.6.output.dense.bias', 'vit.encoder.layer.3.layernorm_before.bias', 'decoder.decoder_layers.2.output.dense.bias', 'vit.encoder.layer.10.attention.attention.query.bias', 'decoder.decoder_layers.4.layernorm_before.weight', 'decoder.decoder_layers.3.attention.attention.query.bias', 'decoder.decoder_layers.2.attention.attention.query.weight', 'vit.encoder.layer.11.attention.attention.value.bias', 'vit.encoder.layer.3.intermediate.dense.weight', 'vit.encoder.layer.8.output.dense.weight', 'decoder.decoder_layers.2.attention.attention.value.bias', 'vit.encoder.layer.5.output.dense.bias', 'decoder.decoder_layers.2.layernorm_before.bias', 'vit.layernorm.weight', 'decoder.decoder_layers.2.layernorm_before.weight', 'vit.encoder.layer.2.intermediate.dense.weight', 'decoder.decoder_layers.3.layernorm_before.bias', 
'vit.encoder.layer.10.layernorm_after.weight', 'vit.encoder.layer.9.attention.attention.key.bias', 'decoder.decoder_layers.0.attention.attention.value.weight', 'decoder.decoder_layers.1.attention.attention.key.bias', 'vit.encoder.layer.4.attention.attention.value.bias', 'vit.encoder.layer.7.attention.attention.key.bias', 'vit.encoder.layer.1.attention.attention.key.weight', 'decoder.decoder_layers.3.intermediate.dense.weight', 'vit.encoder.layer.6.attention.output.dense.weight', 'decoder.decoder_pos_embed', 'decoder.decoder_layers.3.attention.attention.value.bias', 'decoder.decoder_layers.1.attention.output.dense.weight', 'vit.encoder.layer.5.attention.attention.key.bias', 'vit.encoder.layer.10.output.dense.bias', 'vit.encoder.layer.8.attention.output.dense.weight', 'decoder.decoder_layers.5.attention.attention.key.weight', 'vit.encoder.layer.7.attention.output.dense.bias', 'vit.encoder.layer.2.intermediate.dense.bias', 'decoder.decoder_layers.2.attention.attention.key.weight', 'decoder.decoder_pred.weight', 'vit.encoder.layer.5.attention.attention.query.bias', 'vit.encoder.layer.5.layernorm_before.weight', 'vit.encoder.layer.1.attention.attention.key.bias', 'vit.encoder.layer.4.intermediate.dense.weight', 'vit.encoder.layer.9.layernorm_before.bias', 'vit.encoder.layer.8.output.dense.bias', 'vit.encoder.layer.3.attention.attention.key.weight', 'vit.encoder.layer.8.attention.attention.key.bias', 'vit.encoder.layer.2.attention.output.dense.bias', 'vit.encoder.layer.1.output.dense.bias', 'decoder.decoder_layers.4.intermediate.dense.bias', 'decoder.decoder_layers.3.layernorm_before.weight', 'vit.encoder.layer.11.layernorm_before.weight', 'vit.encoder.layer.2.layernorm_after.weight', 'decoder.decoder_layers.4.attention.attention.query.bias', 'vit.encoder.layer.5.attention.attention.value.bias', 'vit.encoder.layer.6.layernorm_after.weight', 'decoder.decoder_layers.1.intermediate.dense.weight', 'decoder.decoder_layers.2.intermediate.dense.weight', 'vit.encoder.layer.5.intermediate.dense.weight', 'decoder.decoder_layers.7.output.dense.bias', 'decoder.decoder_layers.7.layernorm_after.bias', 'vit.encoder.layer.1.attention.output.dense.bias', 'decoder.decoder_layers.4.attention.output.dense.bias', 'decoder.decoder_layers.5.output.dense.weight', 'vit.encoder.layer.6.attention.attention.value.weight', 'decoder.decoder_layers.2.intermediate.dense.bias', 'vit.encoder.layer.7.attention.attention.query.bias', 'vit.encoder.layer.8.layernorm_before.bias', 'vit.encoder.layer.3.attention.attention.query.weight', 'vit.encoder.layer.0.attention.attention.value.bias', 'vit.encoder.layer.0.output.dense.weight', 'decoder.decoder_layers.3.output.dense.weight', 'vit.encoder.layer.11.attention.attention.value.weight', 'vit.encoder.layer.0.attention.output.dense.weight', 'decoder.decoder_layers.3.attention.attention.key.weight', 'decoder.decoder_layers.5.layernorm_before.weight', 'vit.encoder.layer.9.layernorm_after.weight', 'vit.encoder.layer.8.attention.output.dense.bias', 'decoder.decoder_layers.0.intermediate.dense.weight', 'vit.encoder.layer.6.output.dense.weight', 'decoder.decoder_layers.1.intermediate.dense.bias', 'vit.encoder.layer.2.attention.attention.query.weight', 'decoder.decoder_layers.1.attention.attention.key.weight', 'vit.encoder.layer.3.output.dense.weight', 'vit.encoder.layer.9.attention.output.dense.bias', 'vit.encoder.layer.11.intermediate.dense.bias', 'decoder.decoder_layers.1.output.dense.weight', 'vit.encoder.layer.4.layernorm_after.bias', 'vit.encoder.layer.11.attention.output.dense.weight', 
'decoder.decoder_layers.6.attention.attention.value.bias', 'vit.encoder.layer.7.intermediate.dense.bias', 'decoder.decoder_layers.7.attention.output.dense.bias', 'vit.encoder.layer.4.attention.attention.query.weight', 'vit.encoder.layer.3.attention.attention.value.weight', 'decoder.decoder_layers.3.intermediate.dense.bias', 'vit.embeddings.position_embeddings', 'vit.encoder.layer.3.attention.attention.query.bias', 'vit.encoder.layer.1.intermediate.dense.bias', 'decoder.decoder_layers.6.layernorm_before.bias', 'vit.encoder.layer.11.layernorm_after.weight', 'decoder.decoder_layers.5.attention.attention.query.bias', 'decoder.decoder_layers.1.layernorm_after.weight', 'decoder.decoder_layers.5.intermediate.dense.bias', 'vit.encoder.layer.0.layernorm_after.weight', 'vit.encoder.layer.11.attention.attention.key.weight', 'vit.encoder.layer.3.attention.output.dense.weight', 'decoder.decoder_layers.0.layernorm_after.bias', 'vit.encoder.layer.9.attention.attention.query.weight', 'decoder.decoder_layers.1.layernorm_before.bias', 'vit.layernorm.bias', 'decoder.decoder_layers.5.output.dense.bias', 'vit.encoder.layer.1.layernorm_before.weight', 'decoder.decoder_layers.1.attention.attention.query.bias', 'decoder.decoder_layers.7.intermediate.dense.bias', 'decoder.decoder_layers.1.attention.output.dense.bias', 'decoder.decoder_layers.0.layernorm_before.bias', 'vit.encoder.layer.4.attention.attention.key.weight', 'vit.encoder.layer.0.output.dense.bias', 'decoder.decoder_layers.6.intermediate.dense.weight', 'decoder.decoder_layers.0.attention.attention.key.bias', 'decoder.decoder_layers.3.attention.attention.query.weight', 'vit.encoder.layer.7.layernorm_before.weight', 'decoder.decoder_layers.0.attention.output.dense.bias', 'decoder.decoder_layers.6.attention.attention.key.bias', 'vit.encoder.layer.6.attention.attention.key.weight', 'vit.encoder.layer.4.output.dense.weight', 'vit.encoder.layer.4.attention.attention.query.bias', 'vit.encoder.layer.4.layernorm_before.bias', 'decoder.decoder_layers.7.attention.output.dense.weight', 'vit.encoder.layer.11.attention.output.dense.bias', 'vit.encoder.layer.1.intermediate.dense.weight', 'decoder.decoder_layers.6.layernorm_after.bias', 'vit.encoder.layer.2.attention.attention.key.bias', 'decoder.decoder_layers.0.layernorm_before.weight', 'vit.encoder.layer.7.attention.attention.value.weight', 'vit.encoder.layer.1.attention.attention.value.bias', 'decoder.decoder_layers.2.attention.output.dense.weight', 'decoder.decoder_layers.7.layernorm_before.bias', 'decoder.decoder_layers.5.layernorm_before.bias', 'decoder.decoder_layers.7.layernorm_after.weight', 'vit.encoder.layer.7.attention.attention.query.weight', 'vit.encoder.layer.3.output.dense.bias', 'vit.encoder.layer.0.attention.attention.query.bias', 'decoder.decoder_layers.0.attention.output.dense.weight', 'vit.encoder.layer.7.attention.output.dense.weight', 'decoder.decoder_layers.4.layernorm_after.weight', 'vit.encoder.layer.10.attention.attention.query.weight', 'vit.encoder.layer.9.attention.attention.value.bias', 'decoder.decoder_layers.5.layernorm_after.bias', 'decoder.decoder_layers.7.attention.attention.key.bias', 'decoder.decoder_layers.0.layernorm_after.weight', 'vit.encoder.layer.6.attention.output.dense.bias', 'vit.encoder.layer.4.attention.attention.key.bias', 'decoder.decoder_layers.4.layernorm_after.bias', 'vit.encoder.layer.7.layernorm_before.bias', 'vit.encoder.layer.0.intermediate.dense.weight', 'vit.encoder.layer.10.layernorm_before.bias', 'vit.encoder.layer.7.output.dense.weight', 
'vit.encoder.layer.0.attention.attention.value.weight', 'vit.encoder.layer.4.layernorm_before.weight', 'vit.encoder.layer.3.attention.attention.value.bias', 'decoder.decoder_layers.7.attention.attention.key.weight', 'vit.encoder.layer.1.attention.output.dense.weight', 'decoder.decoder_layers.2.attention.output.dense.bias', 'decoder.decoder_layers.7.layernorm_before.weight', 'decoder.decoder_layers.4.attention.attention.query.weight', 'decoder.decoder_layers.6.attention.attention.key.weight', 'vit.encoder.layer.0.attention.attention.query.weight', 'vit.encoder.layer.10.intermediate.dense.bias', 'decoder.decoder_layers.6.output.dense.bias', 'decoder.decoder_layers.7.output.dense.weight', 'vit.encoder.layer.5.attention.output.dense.weight', 'decoder.decoder_layers.4.attention.attention.key.weight', 'vit.encoder.layer.6.layernorm_before.weight', 'vit.encoder.layer.9.attention.attention.query.bias', 'decoder.decoder_layers.6.output.dense.weight', 'vit.encoder.layer.2.attention.attention.key.weight', 'vit.encoder.layer.8.attention.attention.query.weight', 'decoder.decoder_layers.3.attention.output.dense.weight', 'vit.encoder.layer.4.output.dense.bias', 'vit.encoder.layer.1.attention.attention.value.weight', 'vit.encoder.layer.1.layernorm_after.weight', 'vit.encoder.layer.8.layernorm_after.weight', 'vit.encoder.layer.3.layernorm_before.weight', 'decoder.decoder_layers.6.attention.attention.value.weight', 'decoder.decoder_layers.0.output.dense.weight', 'vit.encoder.layer.5.attention.output.dense.bias', 'vit.encoder.layer.4.attention.output.dense.weight', 'vit.encoder.layer.4.layernorm_after.weight', 'vit.encoder.layer.10.attention.attention.key.weight', 'vit.encoder.layer.9.attention.attention.value.weight', 'decoder.decoder_layers.2.attention.attention.value.weight', 'decoder.decoder_layers.5.intermediate.dense.weight', 'decoder.decoder_layers.6.layernorm_after.weight', 'decoder.mask_token', 'vit.encoder.layer.6.layernorm_before.bias', 'vit.encoder.layer.9.layernorm_after.bias', 'decoder.decoder_layers.1.layernorm_after.bias', 'decoder.decoder_layers.2.attention.attention.query.bias', 'decoder.decoder_layers.2.layernorm_after.bias', 'decoder.decoder_norm.bias', 'decoder.decoder_layers.5.layernorm_after.weight', 'vit.encoder.layer.11.attention.attention.key.bias', 'vit.encoder.layer.8.attention.attention.value.bias', 'vit.encoder.layer.8.attention.attention.key.weight', 'decoder.decoder_layers.0.output.dense.bias', 'vit.embeddings.cls_token', 'vit.encoder.layer.11.layernorm_before.bias', 'vit.encoder.layer.9.layernorm_before.weight', 'vit.encoder.layer.5.layernorm_after.weight', 'vit.encoder.layer.6.intermediate.dense.bias', 'decoder.decoder_layers.6.attention.output.dense.weight', 'decoder.decoder_layers.0.attention.attention.query.bias', 'decoder.decoder_embed.weight', 'decoder.decoder_layers.0.attention.attention.query.weight', 'decoder.decoder_layers.6.intermediate.dense.bias']
-This IS expected if you are initializing ViTMAEModel from the checkpoint of a model trained on another task or with another architecture (e.g. initializing a BertForSequenceClassification model from a BertForPreTraining model).
-This IS NOT expected if you are initializing ViTMAEModel from the checkpoint of a model that you expect to be exactly identical (initializing a BertForSequenceClassification model from a BertForSequenceClassification model). | 03-12-2022 11:52:07 | 03-12-2022 11:52:07 | |
transformers | 16,100 | closed | Tf t5 clearer var naming (unpack_inputs decorator tensorflow) | # What does this PR do?
<!--
Congratulations! You've made it this far! You're not quite done yet though.
Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution.
Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change.
Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost.
-->
<!-- Remove if not applicable -->
See #16051
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/master/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/master/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
<!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @
If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**.
Please tag fewer than 3 people.
Models:
- albert, bert, xlm: @LysandreJik
- blenderbot, bart, marian, pegasus, encoderdecoder, t5: @patrickvonplaten, @patil-suraj
- longformer, reformer, transfoxl, xlnet: @patrickvonplaten
- fsmt: @stas00
- funnel: @sgugger
- gpt2: @patrickvonplaten, @LysandreJik
- rag: @patrickvonplaten, @lhoestq
- tensorflow: @LysandreJik
Library:
- benchmarks: @patrickvonplaten
- deepspeed: @stas00
- ray/raytune: @richardliaw, @amogkam
- text generation: @patrickvonplaten
- tokenizers: @n1t0, @LysandreJik
- trainer: @sgugger
- pipelines: @LysandreJik
Documentation: @sgugger
HF projects:
- datasets: [different repo](https://github.com/huggingface/datasets)
- rust tokenizers: [different repo](https://github.com/huggingface/tokenizers)
Examples:
- maintained examples (not research project or legacy): @sgugger, @patil-suraj
- research_projects/bert-loses-patience: @JetRunner
- research_projects/distillation: @VictorSanh
-->
| 03-12-2022 10:08:13 | 03-12-2022 10:08:13 | |
transformers | 16,099 | closed | Add type annotations for segformer pytorch | This PR adds type annotations for SegFormer (PyTorch), as mentioned in #16059.
@Rocketknight1 | 03-12-2022 10:06:54 | 03-12-2022 10:06:54 | _The documentation is not available anymore as the PR was closed or merged._<|||||>I believe there are some minor mistakes in the types. `Optional[...]` is a shorthand notation for `Union[..., None]`, see [python doc](https://docs.python.org/3/library/typing.html#typing.Optional). Thus it must be used when `parameter : ... = None`.
This PR makes `Optional` the following parameters in this [`forward method`](https://github.com/p-mishra1/transformers/blob/6ed256d4b04e99f24f62268c1f81efc47a5870f4/src/transformers/models/segformer/modeling_segformer.py#L396)
```
output_attentions: Optional[bool] = False,
output_hidden_states: Optional[bool] = False,
return_dict: Optional[bool] = True,
```
Whereas, to be consistent with the `Optional` definition in Python, they should be:
```
output_attentions: bool = False,
output_hidden_states: bool = False,
return_dict: bool = True,
```
Since the code in the `forward` method doesn't check `output_attentions is not None`, to my understanding they should not be `Optional`.
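To make the distinction concrete, a small self-contained illustration (not SegFormer code; the names and the stand-in default below are made up):
```python
from typing import Optional

def f(output_attentions: bool = False) -> bool:
    # Plain bool: None is not an expected value, the default is simply False.
    return output_attentions

def g(output_attentions: Optional[bool] = None) -> bool:
    # Optional[bool]: None is meaningful and gets resolved to some default here.
    return True if output_attentions is None else output_attentions

print(f(), g())            # False True
print(f(True), g(False))   # True False
```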
cc @Rocketknight1 |
transformers | 16,098 | closed | cannot import name 'Text2TextGenerationPipeline | In transformers 4.17, I get an error like:
"cannot import name 'Text2TextGenerationPipeline'"
My code is "from transformers import Text2TextGenerationPipeline" | 03-12-2022 09:50:29 | 03-12-2022 09:50:29 | The recommended way to load the `Text2TextGenerationPipeline` as stated in the [docs](https://huggingface.co/docs/transformers/v4.17.0/en/main_classes/pipelines#transformers.Text2TextGenerationPipeline) is:
```python
from transformers import pipeline
text2text = pipeline("text2text-generation")
```
But if you must import the class you can import it as such:
```python
from transformers.pipelines.text2text_generation import Text2TextGenerationPipeline
```
If you're just trying to use the pipeline and not making something like a feature contribution for the class, you should use the first method to load the pipeline.
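For reference, calling the constructed pipeline then looks like this (the printed output is illustrative and depends on the default checkpoint):
```python
from transformers import pipeline

text2text = pipeline("text2text-generation")
print(text2text("question: What is 42 ? context: 42 is the answer to life, the universe and everything"))
# e.g. [{'generated_text': '42'}]
```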
```python
class Text2TextGenerationPipeline(Pipeline):
"""
Pipeline for text to text generation using seq2seq models.
This Text2TextGenerationPipeline pipeline can currently be loaded from [`pipeline`] using the following task
identifier: `"text2text-generation"`.
The models that this pipeline can use are models that have been fine-tuned on a translation task. See the
up-to-date list of available models on
[huggingface.co/models](https://huggingface.co/models?filter=text2text-generation).
Usage:
```python
text2text_generator = pipeline("text2text-generation")
text2text_generator("question: What is 42 ? context: 42 is the answer to life, the universe and everything")
```"""
# Used in the return key of the pipeline.
return_name = "generated"
def __init__(self, *args, **kwargs):
super().__init__(*args, **kwargs)
self.check_model_type(
TF_MODEL_FOR_SEQ_TO_SEQ_CAUSAL_LM_MAPPING
if self.framework == "tf"
else MODEL_FOR_SEQ_TO_SEQ_CAUSAL_LM_MAPPING
)
```
<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored. |
transformers | 16,097 | closed | add unpack_inputs decorator to mbart tf | # What does this PR do?
<!--
Congratulations! You've made it this far! You're not quite done yet though.
Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution.
Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change.
Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost.
-->
<!-- Remove if not applicable -->
See #16051
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/master/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/master/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
<!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @
If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**.
Please tag fewer than 3 people.
Models:
- albert, bert, xlm: @LysandreJik
- blenderbot, bart, marian, pegasus, encoderdecoder, t5: @patrickvonplaten, @patil-suraj
- longformer, reformer, transfoxl, xlnet: @patrickvonplaten
- fsmt: @stas00
- funnel: @sgugger
- gpt2: @patrickvonplaten, @LysandreJik
- rag: @patrickvonplaten, @lhoestq
- tensorflow: @LysandreJik
Library:
- benchmarks: @patrickvonplaten
- deepspeed: @stas00
- ray/raytune: @richardliaw, @amogkam
- text generation: @patrickvonplaten
- tokenizers: @n1t0, @LysandreJik
- trainer: @sgugger
- pipelines: @LysandreJik
Documentation: @sgugger
HF projects:
- datasets: [different repo](https://github.com/huggingface/datasets)
- rust tokenizers: [different repo](https://github.com/huggingface/tokenizers)
Examples:
- maintained examples (not research project or legacy): @sgugger, @patil-suraj
- research_projects/bert-loses-patience: @JetRunner
- research_projects/distillation: @VictorSanh
-->
| 03-12-2022 08:29:10 | 03-12-2022 08:29:10 | _The documentation is not available anymore as the PR was closed or merged._<|||||>@gante <|||||>Also, made a tiny edit to the description of the PR (replaced the word `Fixes` by `See`, as it would close the issue) :)<|||||>> Fantastic, thank you so much for the contribution π
>
> Can you confirm that you've run `RUN_SLOW=1 py.test -vv tests/[model_name]/test_modeling_tf_[model_name].py`? Will merge after your confirmation :)
Yepp, all the tests passed. |
transformers | 16,096 | closed | Any reasons not using `processors` or `pipelines` modules in example codes? | Hi, I'm wondering whether there are any reasons, or purposes, for not using the `processors` or `pipelines` modules in the example code, such as [`run_glue.py`](https://github.com/huggingface/transformers/blob/master/examples/pytorch/text-classification/run_glue.py) or [`run_qa.py`](https://github.com/huggingface/transformers/blob/master/examples/pytorch/question-answering/run_qa.py).
First, I strongly appreciate that you provide example code, which is very helpful for understanding not only the `transformers` library but also entire NLP workflows. Moreover, the example code can introduce useful `transformers` APIs to beginners like me.
So I suggest that it would be great to apply such APIs to example codes, unless there are specific reasons not using them. | 03-12-2022 07:36:29 | 03-12-2022 07:36:29 | Hey @nbqu, `processors` are preprocessing components that are specifically used for complex situations where a feature extractor and a tokenizer are needed for preprocessing. Examples which require them therefore have them, for example see here: https://github.com/huggingface/transformers/blob/5664d2762260ddb9e01f0566ddd43cafd5cdd807/examples/pytorch/speech-recognition/run_speech_recognition_ctc.py#L270
Pipelines, on the other hand, support inference rather than training. The examples under the `examples` directory are for training specifically, so there's no need to leverage pipelines there.
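For illustration, a minimal inference sketch with a fine-tuned checkpoint (the model path below is a placeholder for wherever the Trainer saved the model):
```python
from transformers import pipeline

# Inference with a checkpoint produced by one of the training examples.
classifier = pipeline("text-classification", model="./my-finetuned-checkpoint")
print(classifier("This movie was surprisingly good!"))
```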
However, if you take a look at your [documentation](https://huggingface.co/docs/transformers/quicktour), you'll see that pipelines are mentioned!<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored. |
transformers | 16,095 | closed | Question about the why num_beams is multiplied by 2 in Beamsearch | Hi, I have a simple question, in the code for beam_search, here:
https://github.com/huggingface/transformers/blob/ef9c3ca348cb11c4ba951f8bdab068ecd8b847ce/src/transformers/generation_utils.py#L2137
Why is `num_beams` multiplied by 2? Thanks! | 03-11-2022 22:57:25 | 03-11-2022 22:57:25 | Ping @patrickvonplaten <|||||>Good question @cloudygoose!
The reason is simply due to the fact that a beam might finish in an EOS in which case the beam search would not be able to continue anymore.
E.g. think about the following scenario where we have num_beams=3
1. step
1. ["hello"]
2. ["hey"]
3. ["hi"]
The best three beams from this are:
1. ["hello", "my"]
2. ["hello", "are"]
3. ["hi", "there"]
Now in the next step the following three beams are the best three beams:
1. ["hello", "my", "dear"]
2. ["hello", "are", "there"]
3. ["hi", "there", "EOS"]
Now we have a problem because we can't continue with EOS (end of sentence). So what we do now is put `["hi", "there", "EOS"]` in the finished beams, take the 4th best beam, and continue from there:
1. ["hello", "my", "dear"]
2. ["hello", "are", "there"]
4. ["hello", "are", "any"] -> becomes the 3rd best beam
So in the worst case we have all three best beams ending in EOS. In this case we might have to take the 4th, 5th, 6th beams to continue beam search which is why we do `2 * num_beams`
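To make the bookkeeping concrete, here is a small illustrative sketch of the idea (not the actual `generate()` implementation): at every step we look at `2 * num_beams` candidates so that, even if up to `num_beams` of them end in EOS and are set aside, `num_beams` open beams always remain to continue with.
```python
EOS = "EOS"
num_beams = 3

def select_next_beams(scored_candidates):
    """scored_candidates: (score, tokens) pairs for this step, sorted best-first.
    Looking at 2 * num_beams of them guarantees that, even if num_beams of them
    end in EOS, there are still num_beams open beams left to continue with."""
    finished, open_beams = [], []
    for score, tokens in scored_candidates[: 2 * num_beams]:
        if tokens[-1] == EOS:
            finished.append((score, tokens))    # set aside, this beam stops here
        elif len(open_beams) < num_beams:
            open_beams.append((score, tokens))  # survives into the next step
    return open_beams, finished

candidates = [
    (-0.1, ["hi", "there", EOS]),
    (-0.2, ["hello", "my", "dear"]),
    (-0.3, ["hello", "are", "there"]),
    (-0.4, ["hello", "are", "any"]),
    (-0.5, ["hey", "you", EOS]),
    (-0.6, ["hi", "my", "friend"]),
]
open_beams, finished = select_next_beams(candidates)
print([t for _, t in open_beams])  # three open beams keep the search going
print([t for _, t in finished])    # EOS-terminated beams are kept separately
```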
<|||||>thanks for the reply, could you explain more about where the beam with EOS is saved? I did not find the place where "finished beams" is saved in the code... |
transformers | 16,094 | closed | Change unpacking of TF Bart inputs to use decorator | # What does this PR do?
As part of #16051 this PR changes the unpacking of the inputs of bart to use the decorator instead.
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [x] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [x] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/master/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/master/docs#writing-source-documentation).
- [x] Did you write any new necessary tests?
## Who can review?
@Rocketknight1
Am I a TF engineer now? :sweat_smile:
| 03-11-2022 22:46:28 | 03-11-2022 22:46:28 | _The documentation is not available anymore as the PR was closed or merged._<|||||>Just straight-up assigning @gante to review the decorator PRs πΌ |
transformers | 16,093 | closed | [ZeRO] Fixes issue with embedding resize | # What does this PR do?
Fixes #16008.
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/master/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/master/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
@stas00
| 03-11-2022 22:41:17 | 03-11-2022 22:41:17 | _The documentation is not available anymore as the PR was closed or merged._ |
transformers | 16,092 | closed | Add type annotations for Beit, ViT and Deit models - Pytorch | # What does this PR do?
I added type annotations for the Beit, ViT and Deit models in PyTorch, as described in #16059
@Rocketknight1
At Beit model, have some missing annotations because the annotation is another class at the file that are declared after. Also, at `BeitSelfOutput` class, I don't know the correct type for the gamma parameter at `forward` method -- the `gamma` and `input_tensor` seems not to be used, but I prefer not to modify this now. | 03-11-2022 22:30:30 | 03-11-2022 22:30:30 | _The documentation is not available anymore as the PR was closed or merged._<|||||>Hi, thank you for this! I won't merge it yet because I'm going to ask the other team members on Monday what we think about the missing annotations - it's possible to use [forward references](https://peps.python.org/pep-0484/#forward-references) for this, but I want to make sure that plays well with our code analysis tools before we do that.<|||||>Already have a command that runs something like `mypy` to check the types?<|||||>Hi @johnnv1, I just confirmed with the team that forward references are fine for types that aren't defined yet!<|||||>@Rocketknight1 after rebase upstream/master the `make fixup` shows that `vit_mae` need to be updated because of the `# Copied from ...`, but when I run `make fix-copies` he has been adding the `ViTConfig` on the config type annotation, I think these need to be `ViTMAEConfig`, what is the right way to deal with this? <|||||>Hi @johnnv1, that rebase made this pretty tricky to review! It might be easier to close this and start a new PR and just copy the changes over? If there are classes that are copies then the original source of the copy should be changed, followed by `make fix-copies`. If other people are working on the classes you need to change, it might be easiest to wait until those PRs are done before starting yours! |
transformers | 16,091 | closed | Adding type hints for BigBird pytorch | @Rocketknight1 | 03-11-2022 21:42:26 | 03-11-2022 21:42:26 | The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_16091). All of your documentation changes will be reflected on that endpoint.<|||||>https://github.com/huggingface/transformers/issues/16059<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored. |
transformers | 16,090 | closed | Adding type hints for Distilbert | # What does this PR do?
Added type hints for DistilBERT
Fixes # (https://github.com/huggingface/transformers/issues/16059)
There are some slight issues with the type signatures, e.g. the example below, where the code will fail if the `head_mask` parameter is actually `None`; however, I didn't remove the default parameter as that would require a change in parameter order, etc.
(https://github.com/johnryan465/transformers/blob/a11858c3dd7e28dec615c2d00112267ed231c657/src/transformers/models/distilbert/modeling_distilbert.py#L318)
@Rocketknight1
| 03-11-2022 21:36:12 | 03-11-2022 21:36:12 | _The documentation is not available anymore as the PR was closed or merged._<|||||>@johnryan465 I reviewed things and you're right - `Tuple[torch.Tensor, ...]` is correct! Let me know when you're happy with the rest and I'll do another review and hopefully merge.<|||||>@Rocketknight1 I have gone through your comments and I hopefully addressed them. I'm happy enough with the code now I think. Have a good St. Patrick's Day weekend βοΈ!<|||||>It looks perfect now, thank you! :shamrock: |
transformers | 16,089 | closed | Add missing type hints for all flavors of LayoutLMv2 PyTorch models. | # What does this PR do?
I added type hints for all `LayoutLMv2` PyTorch classes as described in https://github.com/huggingface/transformers/issues/16059 .
@Rocketknight1
| 03-11-2022 18:48:00 | 03-11-2022 18:48:00 | This also looks good, but same comment about the return types!<|||||>_The documentation is not available anymore as the PR was closed or merged._<|||||>> This also looks good, but same comment about the return types!
Yes I fixed return types + I added all type hints for `LayoutLM` PyTorch <|||||>Hi, I'm seeing some tests failing on this one - I think you might have deleted a couple of arguments while adding the annotations! |
transformers | 16,088 | closed | Add type annotations for ImageGPT | # What does this PR do?
I added type annotations for ImageGPT PyTorch as described in #16059
@Rocketknight1 | 03-11-2022 18:45:59 | 03-11-2022 18:45:59 | _The documentation is not available anymore as the PR was closed or merged._ |
transformers | 16,087 | closed | [Fix doc example] Fix first example for the custom_datasets tutorial | # What does this PR do?
- The example code for sequence classification in TensorFlow (`custom_datasets` tutorial) had a spelling mistake, and running the example would not work.
- Changed the variable names to be consistent with the two other TF examples and for the example code to work | 03-11-2022 18:44:10 | 03-11-2022 18:44:10 | _The documentation is not available anymore as the PR was closed or merged._ |
transformers | 16,086 | closed | Add missing type hints for all flavors of RoBERTa PyTorch models. | # What does this PR do?
I added type hints for all `RoBERTa` PyTorch classes as described in #16059 .
As I commented on the linked issue, `CamemBERT` inherits from `RoBERTa`, so it should be also good for PyTorch version of `CamemBERT` models.
@Rocketknight1
I was not sure about `encoder_hidden_states` and `encoder_attention_mask` which have `torch.FloatTensor` as type in the docstring, but I don't know if it needs to be specified or `torch.Tensor` is enough. Tell me if I need to change that.
**EDIT:** Oh it seems that I have put some `torch.LongTensor` like `inputs_ids` as `torch.Tensor`. | 03-11-2022 18:12:04 | 03-11-2022 18:12:04 | _The documentation is not available anymore as the PR was closed or merged._<|||||>@ChainYo it's totally okay to just use `torch.Tensor`, specifying `torch.FloatTensor` or `torch.LongTensor` is a bonus but not at all necessary!
Also, this PR looks great, but note that the methods can return a tuple if `return_dict` is False, so the real return type is usually `Union[Tuple, OutputClass]` and not just `OutputClass`!<|||||>> @ChainYo it's totally okay to just use `torch.Tensor`, specifying `torch.FloatTensor` or `torch.LongTensor` is a bonus but not at all necessary!
>
> Also, this PR looks great, but note that the methods can return a tuple if `return_dict` is False, so the real return type is usually `Union[Tuple, OutputClass]` and not just `OutputClass`!
Fine, I will change that. |
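For reference, the resulting signature pattern discussed above looks roughly like this (illustrative sketch, not the actual RoBERTa code):
```python
from typing import Optional, Tuple, Union

import torch

from transformers.modeling_outputs import SequenceClassifierOutput


def forward(
    input_ids: Optional[torch.LongTensor] = None,
    attention_mask: Optional[torch.FloatTensor] = None,
    return_dict: Optional[bool] = None,
) -> Union[Tuple, SequenceClassifierOutput]:
    # when return_dict is False the model returns a plain tuple,
    # otherwise it returns the dedicated output dataclass
    ...
```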
transformers | 16,085 | closed | [Fix doc example] FSMT | # What does this PR do?
Fix doc example.
Without this fix, I get the following error
```
batch_size = inputs_tensor.shape[0]
File "/home/yih_dar_huggingface_co/transformers/src/transformers/tokenization_utils_base.py", line 253, in __getattr__
raise AttributeError
AttributeError
```
where `inputs_tensor` has type `BatchEncoding`. | 03-11-2022 17:58:12 | 03-11-2022 17:58:12 | _The documentation is not available anymore as the PR was closed or merged._ |
transformers | 16,084 | closed | Fix Typo in Argument of FlaxWav2Vec2ForPreTrainingModule | This PR amends a typo in the `freeze_feature_encoder` argument to the `call` function in FlaxWav2Vec2ForPreTrainingModule and its subsequent use in the Wav2Vec2 module. | 03-11-2022 17:54:07 | 03-11-2022 17:54:07 | _The documentation is not available anymore as the PR was closed or merged._
transformers | 16,083 | closed | [Fix doc example] Fix checkpoint name in docstring example | # What does this PR do?
Use real name instead of `_CHECKPOINT_FOR_DOC` in docstring example.
## More context
This line in `modeling_speech_to_text_2.py`
```python
>>> tokenizer = Speech2Text2Tokenizer.from_pretrained(_CHECKPOINT_FOR_DOC)
```
gives the following error:
```python
Traceback (most recent call last):
File "/tmp/tmpgv1pik63/-789619136090462200.py", line 13, in <module>
tokenizer = Speech2Text2Tokenizer.from_pretrained(_CHECKPOINT_FOR_DOC)
NameError: name '_CHECKPOINT_FOR_DOC' is not defined
```
because `_CHECKPOINT_FOR_DOC` is not defined if a user copies the code example and try to run it. | 03-11-2022 17:35:15 | 03-11-2022 17:35:15 | _The documentation is not available anymore as the PR was closed or merged._<|||||>Hey @ydshieh,
Could you explain this change in a bit more detail? <|||||>> Hey @ydshieh,
>
> Could you explain this change in a bit more detail?
A docstring example uses the variable _CHECKPOINT_FOR_DOC as the argument for the method from_pretrained. But for a user who copies the example and tries to run it, an error occurs because _CHECKPOINT_FOR_DOC is not defined. It is only defined inside the model file.
I think the code example should be self-contained, and could run after copy-paste.
(I also updated the PR description) |
transformers | 16,082 | closed | Fix ProphetNetTokenizer | # What does this PR do?
Fix `ProphetNetTokenizer`: `ProphetNet` doesn't have `token_type_ids` as argument.
Without this PR, the docstring examples will give the following errors:
```
in _call_impl
return forward_call(*input, **kwargs)
TypeError: forward() got an unexpected keyword argument 'token_type_ids'
``` | 03-11-2022 17:21:28 | 03-11-2022 17:21:28 | _The documentation is not available anymore as the PR was closed or merged._ |
transformers | 16,081 | closed | Rebuild deepspeed | This PR rebuilds the DeepSpeed dependency on runtime, as this seems to remove the segmentation fault error we get otherwise. | 03-11-2022 17:21:16 | 03-11-2022 17:21:16 | _The documentation is not available anymore as the PR was closed or merged._<|||||>Thanks for the review! |
transformers | 16,080 | closed | CUDA Error with Typical Sampling mass of 1.0 | ## Environment info
<!-- You can run the command `transformers-cli env` and copy-and-paste its output below.
Don't forget to fill out the missing fields in that output! -->
- `transformers` version: 4.18.0.dev0
- Platform: Linux-5.4.144+-x86_64-with-Ubuntu-18.04-bionic
- Python version: 3.7.12
- Huggingface_hub version: 0.4.0
- PyTorch version (GPU?): 1.10.0+cu111 (True)
- Tensorflow version (GPU?): 2.8.0 (True)
- Flax version (CPU?/GPU?/TPU?): not installed (NA)
- Jax version: not installed
- JaxLib version: not installed
- Using GPU in script?: Yes
- Using distributed or parallel set-up in script?: No
### Who can help
@cimeister - Original author of TypicalLogitsWarper code
Models:
- GPT-2, GPT: @patrickvonplaten, @LysandreJik
Library:
- Text generation: @patrickvonplaten @narsil
## Information
Model I am using (Bert, XLNet ...): distilgpt2
The problem arises when using:
* [ ] the official example scripts: (give details below)
* [X] my own modified scripts: (give details below)
## To reproduce
Steps to reproduce the behavior:
1. Initialize any Causal Language Model and load it on a GPU.
2. Initialize a ``TypicalLogitsWarper`` with a ``mass`` of ``1.0``
3. Attempt to sample the model using the ``sample()`` function with the ``TypicalLogitsWarper``.
I have a Google Colab notebook setup to reproduce this issue, which can be found here:
https://colab.research.google.com/drive/1AX62JAl_yXNXPsTZvUevUyNfcS_eyAXm?usp=sharing
Here is the actual code from the notebook used to reproduce the issue:
```python
import torch
from transformers import (AutoModelForCausalLM, AutoTokenizer,
LogitsProcessorList, MaxLengthCriteria,
StoppingCriteriaList, LogitsWarper)
from transformers.generation_logits_process import TypicalLogitsWarper
model_name = 'distilgpt2'
device = torch.device('cuda' if torch.cuda.is_available() else 'cpu')
model = AutoModelForCausalLM.from_pretrained(model_name).to(device)
tokenizer = AutoTokenizer.from_pretrained(model_name)
outputs = model.sample(
input_ids = tokenizer.encode('test', return_tensors='pt').to(device),
logits_warper=LogitsProcessorList([TypicalLogitsWarper(1.0)]),
stopping_criteria=StoppingCriteriaList([MaxLengthCriteria(10)]),
pad_token_id=tokenizer.eos_token_id
)
```
And, this is the resulting stack trace when running the above code on a GPU:
```
---------------------------------------------------------------------------
RuntimeError Traceback (most recent call last)
[<ipython-input-3-3aaa15148b4a>](https://localhost:8080/#) in <module>()
16 logits_warper=LogitsProcessorList([TypicalLogitsWarper(1.0)]),
17 stopping_criteria=StoppingCriteriaList([MaxLengthCriteria(10)]),
---> 18 pad_token_id=tokenizer.eos_token_id
19 )
[/usr/local/lib/python3.7/dist-packages/transformers/generation_utils.py](https://localhost:8080/#) in sample(self, input_ids, logits_processor, stopping_criteria, logits_warper, max_length, pad_token_id, eos_token_id, output_attentions, output_hidden_states, output_scores, return_dict_in_generate, synced_gpus, **model_kwargs)
1906 # sample
1907 probs = nn.functional.softmax(next_token_scores, dim=-1)
-> 1908 next_tokens = torch.multinomial(probs, num_samples=1).squeeze(1)
1909
1910 # finished sentences should have their next token be a padding token
RuntimeError: CUDA error: device-side assert triggered
CUDA kernel errors might be asynchronously reported at some other API call,so the stacktrace below might be incorrect.
For debugging consider passing CUDA_LAUNCH_BLOCKING=1.
```
## Expected behavior
I highly suspect that the behavior above is closely related to this issue on PyTorch, https://github.com/pytorch/pytorch/issues/9062, as the script above runs perfectly fine on a CPU. I believe the expected behavior would be to raise an exception if a ``mass`` of 1.0 or above is used.
Using `CUDA_LAUNCH_BLOCKING=1` I was able to pinpoint the issue to TypicalLogitsWarper, not `multinomial` (you need to set the flag, otherwise CUDA can report incorrect issues since all computations are asynchronous).
I have pinpointed tentatively that the issue was with the `last_ind` variable.
```python
last_ind = (cumulative_probs < self.mass).sum(dim=1)
last_ind[last_ind < 0] = 0
# last_ind can be equal to 50257, which is the vocab size, which crashes in the gather because it cannot gather
# into a non existing index.
# Essentially `sorted_indices_to_remove` should be True.
sorted_indices_to_remove = sorted_scores > sorted_scores.gather(1, last_ind.view(-1, 1))
```
One tentative fix would be to check if the `last_ind` was the vocab size, and then use something like
```python
min_score = sorted_scores.gather(1, last_ind.view(-1, 1)) if last_ind < vocab_size else -1
sorted_indices_to_remove = sorted_scores > min_score
```
Pinging @patrickvonplaten which has more overview on the soundness of Mass=1.0 anyway and might have better ideas to fix.<|||||>Hey @harubaru,
I don't think it makes sense to set the value to 1.0. You can see here: https://github.com/huggingface/transformers/pull/15504/files#r825865764 that we disregard the typical sampler in this case as it probably has no effect.
Could you try if values < 1.0 work instead?
Also cc @cimeister in case you have more details here :-)<|||||>Thanks! It works with values less than 1.0, it didn't come to my mind that it would be an invalid value unless it threw an exception, as other ``LogitsWarper`` objects throw an exception if it comes across invalid values such as in the ``TopPLogitsWarper`` object here:
https://github.com/huggingface/transformers/blob/8f3ea7a1e1a85e80210b3d4423b674d9a61016ed/src/transformers/generation_logits_process.py#L186
But, I am not sure if it was intended to not raise an exception during the ``__init__`` call of ``TypicalLogitsWarper`` for an invalid ``mass`` value and I think it'd be a good idea to have it raise an exception of sorts if a ``mass`` value greater than or equal to ``1.0`` is passed.<|||||>@harubaru I agree that it would make sense to include something similar in the `__init__` for the logits warper! Should probably be the exact same as for top_p. I can start a PR later today unless you'd like to take care of that |
transformers | 16,079 | closed | Fix and document Zero Shot Image Classification | # What does this PR do?
This PR fixes 3 issues with `zero-shot-image-classification`
* It does not pick a default model, although there is one. This leads to weird error message
```python
pipe = pipeline("zero-shot-image-classification")
>> ValueError: The task defaults can't be correctly selected. You probably meant "translation_XX_to_YY"
```
* The pipeline is not in the list at top of https://huggingface.co/docs/transformers/main_classes/pipelines
* The filter for available models has a typo, although right now this still leads to no models (ok just one).
## Before submitting
- [x] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [x] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/master/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/master/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
@Narsil @sgugger
| 03-11-2022 16:35:23 | 03-11-2022 16:35:23 | _The documentation is not available anymore as the PR was closed or merged._ |
transformers | 16,078 | closed | Replace all deprecated `jax.ops` operations with jnp's `at` | The `jax.ops.index_update` function used for in-place `jax.ndarray` operations was deprecated in [JAX 0.2.22](https://jax.readthedocs.io/en/latest/changelog.html?highlight=0.2.22#jax-0-2-22-oct-12-2021). Following the official advice, this PR updates any in-place array modifications to be made with [`jax.numpy.ndarray.at`](https://jax.readthedocs.io/en/latest/_autosummary/jax.numpy.ndarray.at.html), thus addressing the issue raised in #16022. Note that in [JAX 0.3.2 (unreleased)](https://jax.readthedocs.io/en/latest/changelog.html#jax-0-3-2-unreleased), this function is removed entirely. | 03-11-2022 16:24:56 | 03-11-2022 16:24:56 | _The documentation is not available anymore as the PR was closed or merged._<|||||>Cool, thanks for the clean-up!<|||||>@patil-suraj - could you also take a look here?<|||||>Hi! I am still getting the error that was fixed in this issue, and in #16401, despite confirming that I am running the most up to date version of the repo. I am getting the error when I try to load in a model that uses the transformers/models/bart/**modeling_flax_bart.py** file. Below is the full contents of the error:
---------------------------------------------------------------------------
AttributeError Traceback (most recent call last)
[<ipython-input-22-fbba08678499>](https://localhost:8080/#) in <module>()
6 # Load dalle-mini
7 model = DalleBart.from_pretrained(
----> 8 DALLE_MODEL, revision=DALLE_COMMIT_ID, dtype=dtype, abstract_init=True
9 )
10
11 frames
[/usr/local/lib/python3.7/dist-packages/dalle_mini/model/utils.py](https://localhost:8080/#) in from_pretrained(cls, pretrained_model_name_or_path, *model_args, **kwargs)
24
25 return super(PretrainedFromWandbMixin, cls).from_pretrained(
---> 26 pretrained_model_name_or_path, *model_args, **kwargs
27 )
[/usr/local/lib/python3.7/dist-packages/dalle_mini/model/modeling.py](https://localhost:8080/#) in from_pretrained(cls, pretrained_model_name_or_path, dtype, *model_args, **kwargs)
1159
1160 # init random models
-> 1161 model = cls(config, *model_args, **model_kwargs)
1162
1163 with open(resolved_archive_file, "rb") as state_f:
[/usr/local/lib/python3.7/dist-packages/dalle_mini/model/modeling.py](https://localhost:8080/#) in __init__(self, config, input_shape, seed, dtype, abstract_init, load_on_cpu, init_weights, **kwargs)
1006 input_shape,
1007 abstract_init=abstract_init,
-> 1008 load_on_cpu=load_on_cpu,
1009 )
1010
[/usr/local/lib/python3.7/dist-packages/dalle_mini/model/modeling.py](https://localhost:8080/#) in init_weights(self, rng, input_shape, abstract_init, load_on_cpu)
1024 # only set shape and dtype, load parameters separately
1025 init_fn = partial(init_fn, input_shape=input_shape)
-> 1026 params = jax.eval_shape(init_fn, rng)
1027 else:
1028 params = init_fn(rng, input_shape)
[/usr/local/lib/python3.7/dist-packages/jax/_src/api.py](https://localhost:8080/#) in eval_shape(fun, *args, **kwargs)
2927 out = pe.abstract_eval_fun(wrapped_fun.call_wrapped,
2928 *map(shaped_abstractify, args_flat),
-> 2929 debug_info=debug_info)
2930 out = [ShapeDtypeStruct(x.shape, x.dtype, x.named_shape) for x in out]
2931 return tree_unflatten(out_tree(), out)
[/usr/local/lib/python3.7/dist-packages/jax/interpreters/partial_eval.py](https://localhost:8080/#) in abstract_eval_fun(fun, debug_info, *avals, **params)
513 def abstract_eval_fun(fun, *avals, debug_info=None, **params):
514 _, avals_out, _ = trace_to_jaxpr_dynamic(
--> 515 lu.wrap_init(fun, params), avals, debug_info)
516 assert all(isinstance(aval, AbstractValue) for aval in avals_out)
517 return avals_out
[/usr/local/lib/python3.7/dist-packages/jax/_src/profiler.py](https://localhost:8080/#) in wrapper(*args, **kwargs)
204 def wrapper(*args, **kwargs):
205 with TraceAnnotation(name, **decorator_kwargs):
--> 206 return func(*args, **kwargs)
207 return wrapper
208 return wrapper
[/usr/local/lib/python3.7/dist-packages/jax/interpreters/partial_eval.py](https://localhost:8080/#) in trace_to_jaxpr_dynamic(fun, in_avals, debug_info, keep_inputs)
1737 main.jaxpr_stack = () # type: ignore
1738 jaxpr, out_avals, consts = trace_to_subjaxpr_dynamic(
-> 1739 fun, main, in_avals, keep_inputs=keep_inputs)
1740 del main, fun
1741 return jaxpr, out_avals, consts
[/usr/local/lib/python3.7/dist-packages/jax/interpreters/partial_eval.py](https://localhost:8080/#) in trace_to_subjaxpr_dynamic(fun, main, in_avals, keep_inputs)
1773 in_tracers = _avals_to_tracers(trace, in_avals)
1774 in_tracers_ = [t for t, keep in zip(in_tracers, keep_inputs) if keep]
-> 1775 ans = fun.call_wrapped(*in_tracers_)
1776 out_tracers = map(trace.full_raise, ans)
1777 jaxpr, consts = frame.to_jaxpr(out_tracers)
[/usr/local/lib/python3.7/dist-packages/jax/linear_util.py](https://localhost:8080/#) in call_wrapped(self, *args, **kwargs)
164
165 try:
--> 166 ans = self.f(*args, **dict(self.params, **kwargs))
167 except:
168 # Some transformations yield from inside context managers, so we have to
[/usr/local/lib/python3.7/dist-packages/jax/linear_util.py](https://localhost:8080/#) in call_wrapped(self, *args, **kwargs)
164
165 try:
--> 166 ans = self.f(*args, **dict(self.params, **kwargs))
167 except:
168 # Some transformations yield from inside context managers, so we have to
[/usr/local/lib/python3.7/dist-packages/transformers/models/bart/modeling_flax_bart.py](https://localhost:8080/#) in init_weights(self, rng, input_shape)
937 rngs,
938 input_ids,
--> 939 attention_mask,
940 decoder_input_ids,
941 decoder_attention_mask,
AttributeError: module 'jax.ops' has no attribute 'index_update'<|||||>You should install transfomers master from source since this this fix is not included in the release yet. It will be available in the next release. Install transformers using
`pip install git+https://github.com/huggingface/transformers.git`<|||||>Ok perfect, that fixed it. Thank you! |
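For anyone landing here, the migration pattern applied by this PR is roughly the following minimal sketch:
```python
import jax.numpy as jnp

x = jnp.zeros((4,))

# before (deprecated and later removed):
#   x = jax.ops.index_update(x, 1, 5.0)

# after: functional in-place-style update via the .at property
x = x.at[1].set(5.0)  # returns a new array with x[1] == 5.0
```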
transformers | 16,077 | closed | Run daily doctests without time-out at least once | # What does this PR do?
Let's try to download all models and see whether the cache works
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/master/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/master/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
| 03-11-2022 16:17:26 | 03-11-2022 16:17:26 | _The documentation is not available anymore as the PR was closed or merged._ |
transformers | 16,076 | closed | [Fix doc example] Fix 2 PyTorch Vilt docstring examples | # What does this PR do?
Fix 2 PyTorch Vilt docstring examples | 03-11-2022 15:32:26 | 03-11-2022 15:32:26 | _The documentation is not available anymore as the PR was closed or merged._<|||||>Thanks for fixing, can we add ViLT to the doctests? Namely this file: https://github.com/huggingface/transformers/blob/master/utils/documentation_tests.txt<|||||>> Thanks for fixing, can we add ViLT to the doctests? Namely this file: https://github.com/huggingface/transformers/blob/master/utils/documentation_tests.txt
Added (after discussed with Patrick)<|||||>> Thanks for fixing, can we add ViLT to the doctests? Namely this file: https://github.com/huggingface/transformers/blob/master/utils/documentation_tests.txt
@NielsRogge I added `ViLT` to `documentation_tests.txt`. Do you think this PR is ready for merge? Thanks.<|||||>Will merge later today (unless there is further comment π )<|||||>Can you confirm the doc tests are passing?<|||||>> Can you confirm the doc tests are passing?
Hi, Yes, I confirmed! Here is the doctest run
https://github.com/huggingface/transformers/runs/5543735270?check_suite_focus=true
|
transformers | 16,075 | closed | [WIP] Do not merge! Test PR for GH actions | # What does this PR do?
| 03-11-2022 14:55:52 | 03-11-2022 14:55:52 | |
transformers | 16,074 | closed | Add type annotations for BERT and copies | A sample PR, showing type annotations for PyTorch BERT and the other models copying from it! | 03-11-2022 14:23:43 | 03-11-2022 14:23:43 | _The documentation is not available anymore as the PR was closed or merged._ |
transformers | 16,073 | closed | Add TFCamembertForCausalLM and ONNX integration test | # What does this PR do?
This PR adds a missing causal language modelling head for the TensorFlow implementation of Camembert.
With this, we can also test the ONNX export of TensorFlow Camembert models, so I've included the checkpoint in the integration tests as well
This is my first TensorFlow contribution to `transformers`, so please be nice :) | 03-11-2022 13:57:16 | 03-11-2022 13:57:16 | _The documentation is not available anymore as the PR was closed or merged._ |
transformers | 16,072 | closed | Fix torch-scatter version | Updates torch-scatter to support the new PyTorch version 1.11.0 | 03-11-2022 13:47:23 | 03-11-2022 13:47:23 | _The documentation is not available anymore as the PR was closed or merged._ |
transformers | 16,071 | closed | Want to push a branch for a PR that fixes Deberta classification dropout and harmonizes the dropout class, but I have no permissions. | Hi, I'm trying to push a new branch that fixes Deberta classification dropout by adding cls_dropout to DebertaConfiguration. In this branch I've also changed the Dropout class of DebertaForTokenClassification to StableDropout, to use the same type of dropout as the other classes.
However, I am not able to perform this operation even though this is a new branch and wouldn't break anything in master or other branches... @LysandreJik @vanpelt @patrickvonplaten How can I do it?? | 03-11-2022 12:52:08 | 03-11-2022 12:52:08 | Hi @alexvaca0 π From your message I can't confirm whether you're doing this or not, so here it goes: [we expect the contributions](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md#start-contributing-pull-requests) to come from *branches on a fork*, from which you can open a PR to the main repo. Have you tried doing this? :)<|||||>Hey @gante thank you very much for your answer :) I didn't know contributions needed to come from a fork! I'll do that then ! thank you β€οΈ <|||||>Awesome, glad I could help! Closing this issue
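For reference, the fork-based workflow mentioned above looks roughly like this (branch and user names are illustrative):
```bash
# fork huggingface/transformers on GitHub first, then:
git clone https://github.com/<your-username>/transformers.git
cd transformers
git checkout -b deberta-cls-dropout
# ... make your changes and commit them ...
git push origin deberta-cls-dropout
# finally, open a PR on GitHub from your fork's branch against huggingface/transformers
```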
transformers | 16,070 | closed | Improve Swin for VisionEncoderDecoder | # What does this PR do?
This PR improves support for the Swin Transformer with the `VisionEncoderDecoderModel` framework.
Thanks to @FrancescoSaverioZuppichini we now have general formulas to compute the sequence length and hidden size of the last hidden state of Swin Transformer.
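For context, here is a sketch of what such formulas typically look like for Swin, based on how Swin halves the spatial resolution and doubles the channel dimension at each patch-merging stage (my own paraphrase; the exact expressions used in the PR may differ):
```python
def swin_output_shape(image_size: int, patch_size: int, embed_dim: int, num_stages: int = 4):
    # each of the (num_stages - 1) patch-merging steps halves the resolution and doubles the channels
    downsample = 2 ** (num_stages - 1)
    seq_length = (image_size // (patch_size * downsample)) ** 2
    hidden_size = embed_dim * downsample
    return seq_length, hidden_size


# e.g. a 224x224 input with patch_size=4 and embed_dim=96 gives (49, 768)
print(swin_output_shape(224, 4, 96))
```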
| 03-11-2022 12:15:24 | 03-11-2022 12:15:24 | _The documentation is not available anymore as the PR was closed or merged._ |
transformers | 16,069 | closed | [INFO|modelcard.py:460] 2022-03-11 19:03:24,502 >> Dropping the following result as it does not have all the necessary fields: {'task': {'name': 'Causal Language Modeling', 'type': 'text-generation'}} | ## Environment info
- `transformers` version: 4.18.0.dev0
- Platform: Windows-10-10.0.17763-SP0
- Python version: 3.6.8
- Huggingface_hub version: 0.4.0
- PyTorch version (GPU?): 1.10.2+cu113 (True)
- Tensorflow version (GPU?): not installed (NA)
- Flax version (CPU?/GPU?/TPU?): not installed (NA)
- Jax version: not installed
- JaxLib version: not installed
- Using GPU in script?: yes
- Using distributed or parallel set-up in script?: no
Models:
- GPT-Neo
Examples:
- language modeling/run_clm.py and run_clm_no_trainer.py
- maintained examples (not research project or legacy): @sgugger, @patil-suraj
## Information
Model I am using (Bert, XLNet ...):GPT-NEO
The problem arises when using:
* [x] the official example scripts: (give details below)
I used run_clm.py and run_clm_no_trainer.py to train GPT-NEO, but it doesn't work today. It worked every time before today.
* [ ] my own modified scripts: (give details below)
The tasks I am working on is:
* [ ] an official GLUE/SQUaD task: (give the name)
* [x] my own task or dataset: (give details below)
I repeated the process in https://github.com/gagan3012/project-code-py/blob/master/notebooks/gptneomodel.ipynb to create this dataset. I didn't make it a txt file; I used csv. But I also tried a txt file (the data is from https://s3.amazonaws.com/code-search-net/CodeSearchNet/v2/python.zip), and it didn't work. (Both files worked before.)
## To reproduce
Steps to reproduce the behavior:
1. change directory to transformers\examples\pytorch\language-modeling
2. run the command below
```
python run_clm.py^
--output_dir C:\GPT_Coder\model_gpt_neo_test^
--model_name_or_path EleutherAI/gpt-neo-125M^
--train_file C:\GPT_Coder\data\leetcode\train.csv^
--validation_file C:\GPT_Coder\data\leetcode\valid.csv^
--validation_split_percentage 0^
--per_device_train_batch_size 8^
--per_device_eval_batch_size 8^
--learning_rate 5e-5^
--block_size 128^
--num_train_epochs 10
```
3. after tokenized the dataset and group text, this message shows

## Expected behavior
I expected it to start training just like how it worked before.
| 03-11-2022 11:32:27 | 03-11-2022 11:32:27 | Hi @Penguin-jpg , to run the training and evaluation you should pass the `--do_train` and `--do_eval` arguments to the script. <|||||>> Hi @Penguin-jpg , to run the training and evaluation you should pass the `--do_train` and `--do_eval` arguments to the script.
Thanks!! It works! But it is weird. I am pretty sure that when I passed do_train and do_eval then it said no such argument or something like that before today. I had to remove them to make it work. |
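For completeness, the working invocation simply adds the two flags to the original command (paths shortened here for illustration):
```bash
python run_clm.py \
  --model_name_or_path EleutherAI/gpt-neo-125M \
  --train_file train.csv \
  --validation_file valid.csv \
  --output_dir model_gpt_neo_test \
  --do_train \
  --do_eval
```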
transformers | 16,068 | closed | CUDA out of memory when resuming from checkpoint (not always) | ## Environment info
- `transformers` version: 4.17.0.dev0
- Platform: Linux-4.18.0-147.51.2.el8_1.ppc64le-ppc64le-with-glibc2.17
- Python version: 3.8.2
- PyTorch version (GPU?): 1.7.0a0+e85d494 (True)
- Tensorflow version (GPU?): not installed (NA)
- Flax version (CPU?/GPU?/TPU?): not installed (NA)
- Jax version: not installed
- JaxLib version: not installed
- Using GPU in script?: yes
- Using distributed or parallel set-up in script?: yes
### Who can help
@sgugger
Model I am using BERT:
The problem arises when using:
* [X] the official example scripts: (give details below)
* [ ] my own modified scripts: (give details below)
The tasks I am working on is:
* [X] an official GLUE/SQUaD task: MLM
* [ ] my own task or dataset: (give details below)
_____________________________________________________________________
I'm currently pretraining BERT in MLM using the run_mlm.py with trainer in a distributed settig. To execute the script I run:
python -m torch.distributed.launch --nproc_per_node 4 run_mlm.py --model_type bert --train_file XXX --validation_file XXX --max_seq_length 128 --line_by_line True --output_dir XXX --do_train True --do_eval True --seed 2147483647 --save_steps 4000 --num_train_epochs 3 --per_device_train_batch_size 64 --logging_steps 100 --eval_steps 100 --remove_unused_columns=False --cache_dir XXX --logging_first_step True --evaluation_strategy steps --eval_accumulation_steps 1 --resume_from_checkpoint XXX/checkpoint-129000
I'm on a GPU cluster so, after 24h my script is stopped and I have to resume the pretraining. I successfully did it two times, then the third time I get this error:
Error message:
INFO|trainer.py:1277] 2022-03-10 17:37:39,275 >> ***** Running training *****
[INFO|trainer.py:1278] 2022-03-10 17:37:39,275 >> Num examples = 51943464
[INFO|trainer.py:1279] 2022-03-10 17:37:39,275 >> Num Epochs = 3
[INFO|trainer.py:1280] 2022-03-10 17:37:39,276 >> Instantaneous batch size per device = 64
[INFO|trainer.py:1281] 2022-03-10 17:37:39,276 >> Total train batch size (w. parallel, distributed & accumulation) = 256
[INFO|trainer.py:1282] 2022-03-10 17:37:39,276 >> Gradient Accumulation steps = 1
[INFO|trainer.py:1283] 2022-03-10 17:37:39,276 >> Total optimization steps = 608715
[INFO|trainer.py:1303] 2022-03-10 17:37:39,345 >> Continuing training from checkpoint, will skip to saved global_step
[INFO|trainer.py:1304] 2022-03-10 17:37:39,345 >> Continuing training from epoch 0
[INFO|trainer.py:1305] 2022-03-10 17:37:39,345 >> Continuing training from global step 129000
[INFO|trainer.py:1307] 2022-03-10 17:37:39,346 >> Will skip the first 0 epochs then the first 129000 batches in the first epoch. If this takes a lot of time, you can add the `--ignore_data_skip` flag to your launch command, but you will resume the training on data already seen by your model.
Skipping the first batches: 100%|ββββββββββ| 129000/129000 [8:05:02<00:00, 4.43it/s]
0%| | 0/608715 [00:00<?, ?it/s] Traceback (most recent call last):
File "run_mlm.py", line 566, in <module>:04:40, 4.43it/s]
main()
File "run_mlm.py", line 515, in main
train_result = trainer.train(resume_from_checkpoint=checkpoint)
File "/m100_work/IscrC_HiNLM/idini000/modules/transformers/src/transformers/trainer.py", line 1398, in train
tr_loss_step = self.training_step(model, inputs)
File "/m100_work/IscrC_HiNLM/idini000/modules/transformers/src/transformers/trainer.py", line 2000, in training_step
loss.backward()
File "/cineca/prod/opt/libraries/pytorch/1.7/cuda--10.2/lib/python3.8/site-packages/torch/tensor.py", line 221, in backward
torch.autograd.backward(self, gradient, retain_graph, create_graph)
File "/cineca/prod/opt/libraries/pytorch/1.7/cuda--10.2/lib/python3.8/site-packages/torch/autograd/__init__.py", line 130, in backward
Variable._execution_engine.run_backward(
RuntimeError: CUDA out of memory. Tried to allocate 972.00 MiB (GPU 3; 15.78 GiB total capacity; 11.59 GiB already allocated; 1.02 GiB free; 13.75 GiB reserved in total by PyTorch)
I've read that this was a problem with older versions of trainer but I updated it recently.
Thanks in advance!
It might be that some of the older processes have left some junk in the GPU memory? Do you have a way to ensure everything is clean before launching the training resuming from checkpoint?<|||||>This was probably what happened. I can't check if everything is clean but I launched it again and it worked fine.
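For anyone hitting the same thing, a couple of quick checks before resuming (illustrative):
```bash
# look for leftover python processes still holding GPU memory
nvidia-smi

# if an old process is still around, terminate it before relaunching training
kill -9 <pid>
```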
transformers | 16,067 | closed | How to load google-research/bert .ckpt weight | # π New model addition
Hello! I'm converting a .ckpt file to an h5 file. Comparing the two, the parameter information is the same, but when I load the result with TFBertForMaskedLM and check the weights, I find that many of them have become 1 or 0.
## Model description
<!-- Important information -->
## Open source status
* [ ] the model implementation is available: (give details)
* [ ] the model weights are available: (give details)
* [ ] who are the authors: (mention them, if possible by @gh-username)
| 03-11-2022 10:16:28 | 03-11-2022 10:16:28 | Hi @Smile-L-up π From your message, I'm understanding that you converted the weights and found differences. Is this correct? If so, would you be able to share a script (or a collab) where I could reproduce the issue?<|||||>Thank you for your reply. I read both the converted H5 file and the correct H5 file and roughly compared them; the weight values look consistent. But when I load the weights with TFBertForMaskedLM.from_pretrained() and check them with `for layer in model.layers: layer.get_weights()`, I find that many weight values in the converted H5 file have become 1 or 0.
The conversion script is as follows:
```python
import re

import h5py
import numpy as np
import tensorflow as tf


def ckpt_to_h5(cpktFileName='', h5FileName=''):
    # map TF1 checkpoint variable names onto TFBertForPreTraining-style layer names and write them to an h5 file
    cpktFileName = 'model/pre-trained/bert-base-chinese/bert_model.ckpt'  # ckpt file path
    h5FileName = 'model/fine-tuned/tf_model111.h5'
    reader = tf.compat.v1.train.NewCheckpointReader(cpktFileName)
    f = h5py.File(h5FileName, 'w')
    t_g = None
    for key in sorted(reader.get_variable_to_shape_map()):
        print(key)
        prefix_list = []
        if re.match(r'^bert/embeddings', key):
            prefix_list.append('bert/tf_bert_for_pre_training_6/')
            prefix_list.append('mlm___cls/tf_bert_for_pre_training_6/')
        elif re.match(r'^bert', key):
            prefix = 'bert/tf_bert_for_pre_training_6/'
            prefix_list.append(prefix)
        elif re.match(r'^cls/predictions', key):
            prefix = 'mlm___cls/tf_bert_for_pre_training_6/mlm___'
            prefix_list.append(prefix)
        elif re.match(r'^cls/seq_relationship', key):
            prefix = 'nsp___cls/tf_bert_for_pre_training_6/nsp___'
            prefix_list.append(prefix)
        else:
            continue
        for prefix in prefix_list:
            tmp_key = key
            if tmp_key == 'cls/seq_relationship/output_bias':
                tmp_key = 'cls/seq_relationship/bias'
            elif tmp_key == 'cls/predictions/output_bias':
                tmp_key = 'cls/predictions/bias'
            elif tmp_key == 'cls/seq_relationship/output_weights':
                tmp_key = 'cls/seq_relationship/kernel'
            elif re.search(r'embeddings$', tmp_key):
                if re.search(r'word_embeddings$', tmp_key):
                    tmp_key = tmp_key + '/weight'
                else:
                    tmp_key = tmp_key + '/embeddings'
            h5_key = prefix + re.sub(r'layer_', 'layer_._', tmp_key, flags=re.I) + ':0'
            tensor = reader.get_tensor(key)
            if tmp_key == 'cls/seq_relationship/kernel':
                tensor = tf.transpose(tensor)
            print(tensor)
            f[h5_key] = tensor
    f.close()


def read_ckpt(cpktFileName=''):
    # print all variable names contained in the checkpoint
    cpktFileName = 'model/pre-trained/bert-base-chinese/bert_model.ckpt'  # ckpt file path
    reader = tf.compat.v1.train.NewCheckpointReader(cpktFileName)
    for key in sorted(reader.get_variable_to_shape_map()):
        print(key)


def out_h5(f, group):
    # recursively print the datasets contained in an h5 group
    group_read = f[group]
    if isinstance(group_read, h5py._hl.dataset.Dataset):
        print(group_read.name)
        data = np.array(group_read)
        print(data.shape)
        print(data)
    else:
        for subgroup in group_read.keys():
            key = group + '/' + subgroup
            out_h5(f, key)


def read_h5(h5FileName=''):
    # dump the contents of the converted h5 file
    h5FileName = 'model/fine-tuned/tf_model111.h5'
    f = h5py.File(h5FileName, 'r')
    print(f)
    for group in f.keys():
        out_h5(f, group)
    '''
    print(group)
    group_read=f[group]
    for subgroup in group_read.keys():
        key=group+'/'+subgroup
        print(key)
        dset_read = f[group+'/'+subgroup]
        for dset in dset_read.keys():
            key=group+'/'+subgroup+'/'+dset
            print(key)
    '''


if __name__ == '__main__':
    ckpt_to_h5()
    # read_h5()
    # read_ckpt()
```
<|||||>Hey @Smile-L-up π
I am unable to reproduce your code, as it relies on local files (`cpktFileName `).
However, from your description, I have yet to understand the problem, which makes helping difficult :) Are you:
1. Trying to convert the weights from a checkpoint that you created, and they do not load properly into `TFBertForMaskedLM `?
2. Converting weights from an existing checkpoint on TF Hub, and they do not load properly into `TFBertForMaskedLM `?
3. Converting weights from an existing checkpoint on TF Hub, and they do not match a checkpoint hosted on Hugging Face?
4. (other issue?)
Finally, keep in mind that `transformers` does not support TF v1, which may explain some mismatches :) <|||||>Thank you for your reply. I have managed to convert the ckpt of bert-base-chinese into H5. However, I encountered another problem. The project requires an H5 file, and I couldn't find an H5 file for an ALBERT base Chinese model. Converting the ALBERT base Chinese ckpt into H5 would take a lot of time, so I tried converting the ALBERT base Chinese .bin file into H5 instead, but ran into a problem.
You can still load them and they will be a TensorFlow model if you load them with the `from_pt=True` flag. E.g.
```
model = TFAlbertModel.from_pretrained("ckiplab/albert-tiny-chinese", from_pt=True)
```
If you want to save the corresponding TF2 weights as a local`.h5` file, you can also do it by calling
```
model.save_weights("local/path/model.h5")
```
If you have local `h5` files, you can reload the model doing
```
model = TFAlbertModel.from_pretrained("local/path/model.h5")
```
Alternatively, you can also push these weights into the HF hub -- see docs [here](https://huggingface.co/docs/transformers/master/en/model_sharing#push-a-model-with-pushtohubcallback)
__________
Does this solve your problem?<|||||>Thank you very much for your reply, you solved me a big favor!<|||||>Awesome, glad I could help π Closing this issue |
transformers | 16,066 | closed | Remove assertion over possible activation functions in DistilBERT | Remove old code remaining in DistilBERT preventing the use of alternative activation functions. | 03-11-2022 09:59:45 | 03-11-2022 09:59:45 | _The documentation is not available anymore as the PR was closed or merged._<|||||>Failures look unrelated to this PR, let me know if there is anything to be done on my side ππ» <|||||>Good point, updated both codebase. |
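For illustration, with that assertion gone a config can request any activation registered in `ACT2FN` (sketch; `"silu"` is just an example choice):
```python
from transformers import DistilBertConfig, DistilBertModel

# previously an assert restricted the activation choice; with it removed,
# other ACT2FN activations such as "silu" can be configured directly
config = DistilBertConfig(activation="silu")
model = DistilBertModel(config)
```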
transformers | 16,065 | closed | Make sure `'torch.dtype'` has str-type value in config and all nested dicts for JSON serializability | # What does this PR do?
Currently a model config is not JSON serializable if
- it contains nested dicts,
- those dicts have `'torch.dtype'` as a key,
- the value is of type `torch.dtype` rather than `str`
The changes check for `'torch.dtype'` in the nested dicts and convert them to `str` if necessary.
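A minimal sketch of the kind of recursive conversion this implies (the helper name and the exact key handling are illustrative, not the actual implementation):
```python
import torch


def convert_torch_dtypes_to_str(d: dict) -> None:
    # turn any torch.dtype value (e.g. torch.float32) into its string form ("float32")
    # so json.dumps() can serialize the config, including nested dicts
    if d.get("torch_dtype", None) is not None and not isinstance(d["torch_dtype"], str):
        d["torch_dtype"] = str(d["torch_dtype"]).split(".")[1]
    for value in d.values():
        if isinstance(value, dict):
            convert_torch_dtypes_to_str(value)
```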
Fixes #16054.
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [x] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [x] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/master/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/master/docs#writing-source-documentation).
- [x] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
@stas00
| 03-11-2022 05:42:40 | 03-11-2022 05:42:40 | _The documentation is not available anymore as the PR was closed or merged._ |
transformers | 16,064 | closed | Update configuration_big_bird.py | Fixing backtick alignments
# What does this PR do?
Fixes # (issue)
## Before submitting
- [x] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/master/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/master/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
| 03-11-2022 05:36:48 | 03-11-2022 05:36:48 | The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_16064). All of your documentation changes will be reflected on that endpoint.<|||||>@patil-suraj Sry for the late response, Any idea on what would be the problem?
```bash
make style
black examples tests src utils
Skipping .ipynb files as Jupyter dependencies are not installed.
You can fix this by running ``pip install black[jupyter]``
All done! β¨ π° β¨
1489 files left unchanged.
isort examples tests src utils
Skipped 1 files
make autogenerate_code
make[1]: Entering directory '/home/parano/github/dev/transformers'
File "setup.py", line 212
entries = "\n".join([f' "{k}": "{v}",' for k, v in deps.items()])
^
SyntaxError: invalid syntax
make[1]: *** [Makefile:22: deps_table_update] Error 1
make[1]: Leaving directory '/home/parano/github/dev/transformers'
make: *** [Makefile:64: style] Error 2
```<|||||>> @patil-suraj Sry for the late response, Any idea on what would be the problem?
>
> ```shell
> make style
> black examples tests src utils
> Skipping .ipynb files as Jupyter dependencies are not installed.
> You can fix this by running ``pip install black[jupyter]``
> All done! ✨ 🍰 ✨
> 1489 files left unchanged.
> isort examples tests src utils
> Skipped 1 files
> make autogenerate_code
> make[1]: Entering directory '/home/parano/github/dev/transformers'
> File "setup.py", line 212
> entries = "\n".join([f' "{k}": "{v}",' for k, v in deps.items()])
> ^
> SyntaxError: invalid syntax
> make[1]: *** [Makefile:22: deps_table_update] Error 1
> make[1]: Leaving directory '/home/parano/github/dev/transformers'
> make: *** [Makefile:64: style] Error 2
> ```
Make sure you have python>=3.6.0 and the required version of black. You can install the correct style deps by using
`pip install transformers["quality"]`<|||||>@patil-suraj The problem arises because `make` by default assumes that when we call `python` we mean `python2`, not `python3` (at least on my system, Ubuntu 20.04.4 LTS). Maybe we should change the `python` commands in `setup.py` and the `Makefile` to `python3`, since making an `alternative` or `alias` for python3 is not actually a very good idea (as can be seen, they do not provide the same kind of functionality). We might also ask for the maintainers' comments.<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored. |
transformers | 16,063 | closed | FIX: updating doc/example for fine-tune for downstream Token Classification | # What does this PR do?
The documentation for fine-tuning on downstream tasks, specifically Token Classification, has an error in the `AutoModelForTokenClassification` command:
`model = AutoModelForTokenClassification.from_pretrained(model_checkpoint, num_labels=2)`
it should instead be 14:
`model = AutoModelForTokenClassification.from_pretrained(model_checkpoint, num_labels=14)`
check the following command:
`len({x for sample in tokenized_wnut["train"]['labels'] for x in sample})`
14 labels:
- The `WNUTβ17 Emerging and Rare Entities task` dataset has 13 labels
- `-100`, which is assigned to the special tokens `[CLS]` and `[SEP]`
@sgugger I think you are the one reviewing this type of PR, i.e. doc related. | 03-10-2022 20:54:13 | 03-10-2022 20:54:13 | _The documentation is not available anymore as the PR was closed or merged._ |
transformers | 16,062 | closed | Move QDQBert in just PyTorch block | # What does this PR do?
This PR moves `QDQBert` into the PyTorch-only block of the main init to make it accessible as soon as PyTorch is installed. Otherwise, building the documentation requires the quantization modules to be installed, which are not available for Python 3.9 and make it harder for a contributor to build the docs. The code of the model file is clean and properly isolates those modules, so this is safe to do. The user will still get an explicit error message when trying to use the models, since the base model contains the `requires_backends(self, "pytorch_quantization")` line. | 03-10-2022 20:53:17 | 03-10-2022 20:53:17 | _The documentation is not available anymore as the PR was closed or merged._ |
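(A simplified sketch of the guard pattern mentioned above, to show why the move is safe. The class below is illustrative rather than the actual `QDQBert` code, and the import path of `requires_backends` may differ between `transformers` releases.)
```python
from transformers.file_utils import requires_backends  # re-exported from transformers.utils in newer releases


class QDQBertStyleModel:
    def __init__(self, config):
        # Raises a clear ImportError with installation instructions if the optional
        # pytorch_quantization dependency is missing, so the class can be imported
        # (and documented) with only plain PyTorch installed.
        requires_backends(self, "pytorch_quantization")
        self.config = config
```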
transformers | 16,061 | closed | Fix a TF test name (LayoutLMModelTest) | # What does this PR do?
Change `LayoutLMModelTest` to `TFLayoutLMModelTest` in `test_modeling_tf_layoutlm.py`.
| 03-10-2022 20:15:19 | 03-10-2022 20:15:19 | _The documentation is not available anymore as the PR was closed or merged._<|||||>merge now (think it's not necessary to wait Matt's approval for this tiny and obvious fix :-) ) |
transformers | 16,060 | closed | Unable to load Wav2Vec2 fine-tuned models from local files | ## Environment info
- `transformers` version: 4.16.2
- Platform: macOS-10.16-x86_64-i386-64bit
- Python version: 3.9.7
- PyTorch version (GPU?): 1.10.2 (False)
- Tensorflow version (GPU?): not installed (NA)
- Flax version (CPU?/GPU?/TPU?): not installed (NA)
- Jax version: not installed
- JaxLib version: not installed
- Using GPU in script?: no
- Using distributed or parallel set-up in script?: no
### Who can help
@patrickvonplaten
## Information
I am fine-tuning an XLS-R model
The problem arises when using:
* [ ] the official example scripts: (give details below)
* [x] my own modified scripts: (give details below)
The tasks I am working on is:
* [ ] an official GLUE/SQUaD task: (give the name)
* [x] my own task or dataset: (give details below)
## To reproduce
I fine tuned a model (facebook/wav2vec2-large-xlsr-53) using a variant of the example script https://huggingface.co/blog/fine-tune-wav2vec2-english
It produced a file speech_recognition_model.pt and a directory of checkpoints, as expected.
However, I am unable to load either the PT file or a checkpoint. For example, when I do this
```
from transformers import Wav2Vec2ForCTC, Wav2Vec2Processor
MODEL_ID = "speech_recognition_model.pt"
processor = Wav2Vec2Processor.from_pretrained(MODEL_ID)
model = Wav2Vec2ForCTC.from_pretrained(MODEL_ID)
```
I get this error
```
Traceback (most recent call last):
File "/Users/peter.wolf/opt/miniconda3/envs/wav2vec/lib/python3.9/site-packages/transformers/feature_extraction_utils.py", line 423, in get_feature_extractor_dict
text = reader.read()
File "/Users/peter.wolf/opt/miniconda3/envs/wav2vec/lib/python3.9/codecs.py", line 322, in decode
(result, consumed) = self._buffer_decode(data, self.errors, final)
UnicodeDecodeError: 'utf-8' codec can't decode byte 0x80 in position 64: invalid start byte
```
And when I try a checkpoint
```
from transformers import Wav2Vec2ForCTC, Wav2Vec2Processor
MODEL_ID = "./training/checkpoint-90500/"
processor = Wav2Vec2Processor.from_pretrained(MODEL_ID)
model = Wav2Vec2ForCTC.from_pretrained(MODEL_ID)
```
I get this
```
Traceback (most recent call last):
File "/Applications/PyCharm CE.app/Contents/plugins/python-ce/helpers/pydev/pydevd.py", line 1483, in _exec
pydev_imports.execfile(file, globals, locals) # execute the script
File "/Applications/PyCharm CE.app/Contents/plugins/python-ce/helpers/pydev/_pydev_imps/_pydev_execfile.py", line 18, in execfile
exec(compile(contents+"\n", file, 'exec'), glob, loc)
File "/Users/peter.wolf/dev/wav2vec_experiments/src/pt.py", line 3, in <module>
processor = Wav2Vec2Processor.from_pretrained(MODEL_ID)
File "/Users/peter.wolf/opt/miniconda3/envs/wav2vec/lib/python3.9/site-packages/transformers/models/wav2vec2/processing_wav2vec2.py", line 127, in from_pretrained
tokenizer = Wav2Vec2CTCTokenizer.from_pretrained(pretrained_model_name_or_path, **kwargs)
File "/Users/peter.wolf/opt/miniconda3/envs/wav2vec/lib/python3.9/site-packages/transformers/tokenization_utils_base.py", line 1757, in from_pretrained
raise EnvironmentError(
OSError: Can't load tokenizer for './training/checkpoint-90500/'. If you were trying to load it from 'https://huggingface.co/models', make sure you don't have a local directory with the same name. Otherwise, make sure './training/checkpoint-90500/' is the correct path to a directory containing all relevant tokenizer files.
```
## Expected behavior
As far as I can tell, my code only differs from the examples by not pushing the models to Huggingface. I would expect that I can use both pretrained models (PT file) and checkpoints to do inference. However, I am new to this code, so it is likely that I am making some kind of error... but I can not see it. | 03-10-2022 20:14:36 | 03-10-2022 20:14:36 | Hey @peterwolf-smarsh,
Pretrained checkpoints do not contain a tokenizer, therefore you **cannot** load a processor from a pretrained checkpoint. E.g.:
```python
from transformers import Wav2Vec2ForCTC, Wav2Vec2Processor
MODEL_ID = "./training/checkpoint-90500/"
processor = Wav2Vec2Processor.from_pretrained(MODEL_ID)
```
fails if `MODEL_ID` points to a checkpoint that contains only model weights and no tokenizer files.
You should however be able to load the CTC model correctly. To me it looks like your checkpoint is broken in some way.
Could you maybe try to upload the model checkpoint to the Hub? Otherwise there is no way we can help you with the issue sadly
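(For readers hitting the same error, a minimal sketch of the split described above: a Trainer checkpoint directory holds the model weights and config, so the model loads, while the processor has to come from a directory that actually contains the tokenizer and feature-extractor files — the path below is a placeholder.)
```python
from transformers import Wav2Vec2ForCTC, Wav2Vec2Processor

# The checkpoint directory holds pytorch_model.bin + config.json, so this part works:
model = Wav2Vec2ForCTC.from_pretrained("./training/checkpoint-90500/")

# The processor needs vocab.json / tokenizer_config.json / preprocessor_config.json,
# which the Trainer checkpoints above do not contain. Point it at a directory where
# processor.save_pretrained(...) was called (see the follow-up comments below):
processor = Wav2Vec2Processor.from_pretrained("./path/with/processor/files")
```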
<|||||>Thank you Patrick for the quick help. I have uploaded my PT file and checkpoints directory
https://huggingface.co/opus111/issue-16060-pt-file/tree/main
Note that I am able to load the model. This works
```
MODEL_ID = "speech_recognition_model.pt"
model = Wav2Vec2ForCTC.from_pretrained(MODEL_ID)
```
Perhaps I just don't understand how to load a processor from the results of training<|||||>In order to load a processor, you need both tokenizer files and feature extractor files. They seem to be missing in your repo at the moment. If you compare this to [this repo](https://huggingface.co/facebook/wav2vec2-base-960h/tree/main) you see that there are all the files necessary for the tokenizer and feature extractor<|||||>Hmmm interesting... I assumed that those files should be saved automatically as part of checkpointing. Is there something I need to do, or is this a bug? Here is my training code below:
```
model = Wav2Vec2ForCTC.from_pretrained(
"facebook/wav2vec2-base",
ctc_loss_reduction="mean",
pad_token_id=processor.tokenizer.pad_token_id,
)
model.freeze_feature_extractor()
training_args = TrainingArguments(
output_dir="training",
group_by_length=True,
per_device_train_batch_size=32,
evaluation_strategy="steps",
num_train_epochs=30,
fp16=True,
gradient_checkpointing=True,
save_steps=500,
eval_steps=500,
logging_steps=500,
learning_rate=1e-4,
weight_decay=0.005,
warmup_steps=1000,
save_total_limit=2,
)
trainer = Trainer(
model=model,
data_collator=data_collator,
args=training_args,
compute_metrics=compute_metrics,
train_dataset=train_prepared,
eval_dataset=test_prepared,
tokenizer=processor.feature_extractor,
)
trainer.train()
```<|||||>I see! Yeah good point actually! The way it's written right now would only save the `feature_extractor` and the model. You will need to save the tokenizer manually since it's not passed into the Trainer.
So in short you'll need a line like this one: https://github.com/huggingface/transformers/blob/0c55d47cded0c711a4359afb04d73ffb86b7d913/examples/pytorch/speech-recognition/run_speech_recognition_ctc.py#L647 somewhere in the script<|||||>Excellent! Thank you so much for the help, and thank you for this wonderful package and all the tutorials :-D<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored.<|||||>FYI: I ran into the exact same issue, thanks for the hint to save the tokenizer manually. |
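(A minimal sketch of the fix discussed in this thread: saving the tokenizer — or the whole processor — next to the Trainer outputs so that `Wav2Vec2Processor.from_pretrained` can later find all the files. It reuses the `processor`, `trainer` and `training_args` objects from the training snippet above; the exact line in the linked example script may differ.)
```python
# Persist the model weights + config via the Trainer, then add the tokenizer files
# manually, since only the feature extractor was passed to the Trainer above.
trainer.save_model(training_args.output_dir)
processor.tokenizer.save_pretrained(training_args.output_dir)

# Equivalently, save tokenizer + feature extractor in one call:
# processor.save_pretrained(training_args.output_dir)
```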
transformers | 16,059 | open | Add missing type hints | ### This issue is part of our **Great Code Cleanup 2022**. If you're interested in helping out, take a look at [this thread](https://twitter.com/carrigmat/status/1502319813510766599), or come [join us on Discord](https://t.co/kS42XBvpWH) and talk with other contributors!
# π Add missing type hints
Type hints are used inconsistently in the `transformers` repo across both TF and PT models, and it'd be nice to make them a complete, consistent thing for the core models, especially because we want to develop features that depend on them!
### Guide to contributing:
1. Ensure you've read our contributing [guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) π
2. Claim your architecture(s) in this thread (ensure no one is working on it). It's 100% okay to only take the TensorFlow or PyTorch version of a model, if you're not familiar with both frameworks! It's also okay to claim multiple models and group those changes into a single PR! π―
3. Implement the changes as in https://github.com/huggingface/transformers/pull/16057 or https://github.com/huggingface/transformers/pull/16074 (see the diff on the model architectures for a few examples) πͺ
4. Open the PR and tag me in it. You should run `make fixup` at the end to do a code quality check before your final commit!
### Tips for making your PR
1. The files you need to edit will be in `src/transformers/models/[model_name]/`
2. For TensorFlow, you want the `modeling_tf_[model_name].py` file. For PyTorch, you want the `modeling_[model_name].py` file.
3. Remember, you **do not** have to cover every class in that file! The main thing we want to cover is the `call` (for TF) or `forward` (for PT) method for user-facing classes like `TFRobertaForMaskedLM` or `RobertaForSequenceClassification`. It's not necessary to add type hints to layers or base classes like `RobertaModel` or `TFRobertaPreTrainedModel` - these are trickier to write, and generally people do not use those classes as standalone models.
4. If you're unfamiliar with how type hints work, you can read the [Python library documentation on them](https://docs.python.org/3/library/typing.html), but it's probably even easier to just look at another PR that added them. Take a look at the list of changes in the pull requests linked above!
5. The types will usually be obvious - most inputs are `Optional[Union[np.ndarray, tf.Tensor]]` for TF models and `Optional[torch.Tensor]` for PyTorch models, and boolean inputs are `Optional[bool]`. Pay attention to the first input of TF models, though, which is usually `TFModelInputType` - this is because Keras handles that first input in a special way! Other inputs to pay attention to are `past_key_values`, which can vary between models, and also the model output type. For the base model classes like `RobertaModel`, you may have to look at the corresponding `MainLayer` to figure out the right output type! Also, note that the output type may be a tuple if `return_dict` is False, in which case you should specify `Union[Tuple, ...]`. Finally, note that in TF models, `training` is never `None`, so it should be `training: bool` and not `training: Optional[bool]`.
6. Note that some code is copied across our codebase. If you see a line like `# Copied from transformers.models.bert...`, this means that the code is copied from that source, and our scripts will automatically keep that in sync. If you see that, you should not edit the copied method! Instead, edit the original method it's copied from, and run `make fixup` to synchronize that across all the copies. Be sure you installed the development dependencies with `pip install -e ".[dev]"`, as described in the contributor guidelines above, to ensure that the code quality tools in `make fixup` can run.
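To make tip 5 above more concrete, here is a minimal sketch of what an annotated PyTorch `forward` signature usually ends up looking like. The argument list and output class below are illustrative only — copy the existing signature of the class you are editing and just add the annotations. For TF models the same idea applies to `call`, with `TFModelInputType` for the first argument, `training: bool = False`, and the matching `TF...Output` return type.
```python
from typing import Optional, Tuple, Union

import torch

from transformers.modeling_outputs import SequenceClassifierOutput


# Illustrative signature only (not copied from any particular model file):
def forward(
    self,
    input_ids: Optional[torch.Tensor] = None,
    attention_mask: Optional[torch.Tensor] = None,
    token_type_ids: Optional[torch.Tensor] = None,
    labels: Optional[torch.Tensor] = None,
    output_attentions: Optional[bool] = None,
    output_hidden_states: Optional[bool] = None,
    return_dict: Optional[bool] = None,
) -> Union[Tuple, SequenceClassifierOutput]:
    # ... model-specific body stays unchanged ...
    ...
```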
### How can I find models that need type hints?
I used to maintain a list here, but it got out of date, I'm sorry. Instead, you can use [this Colab notebook](https://colab.research.google.com/drive/1EvZTslb50yfRqIcXjCZFrbod4HrPdA0G?usp=sharing). If you run this, it will show you models in PyTorch or TF that are still missing type hints. Unlike my manually curated lists, it's guaranteed to be up to date - but do double-check that someone else in the thread hasn't claimed a model before you start, because the Colab code will only register type hints after the PR containing them is merged! | 03-10-2022 19:06:15 | 03-10-2022 19:06:15 | I would love to work on PyTorch `Albert`π<|||||>Hi, I would like to work on PyTorch `ImageGPT`<|||||>Hi, I would like to work on `CamemBERT` for PT & TF.
I will take a look at `LayoutLMv2` after the first one :smiley:
Edit: Because CamemBert depends on `Roberta` I will take PyTorch `Roberta` :+1: <|||||>Hello!
I'd like to take `Hubert` & `Wav2Vec2` for Pytorch.
Cheers!<|||||>I'll try PyTorch BERT to start!<|||||>@johnryan465 I just did it as an example, I'm sorry! I'm marking off the completed models now.
<|||||>@Rocketknight1 no worries, will try and do DistillBert instead<|||||>I'd like to work on GPT2 (TF).<|||||>@Rocketknight1 I switch to Roberta PyTorch because CamemBERT depends on Roberta modeling <|||||>Awesome! Hey @Rocketknight1 βΒ I'd like to work on Longformer for both PyTorch & TF! <|||||>I'd like to work on BigBird<|||||>I would like to work on Clip for pytorch.<|||||>Also, will work on `BeiT`, `Deit` and `ViT` (Pytorch)<|||||>I can work on ImageGPT. <|||||>I can work on Swin (Pytorch)<|||||>I'd like to work on XLM (Tensorflow)<|||||>I'll take T5 (Tensorflow)!<|||||>I'd like to claim GPT-2 (PyTorch).<|||||>Hi @Rocketknight1,
I would like to work on BART of both TF and PyTorch<|||||>ELECTRA TF - https://github.com/huggingface/transformers/pull/16104
ELECTRA PT - https://github.com/huggingface/transformers/pull/16103
DeBERTA PT - https://github.com/huggingface/transformers/pull/16105
<|||||>XLMRobertaXL (PyTorch)
<|||||>segformer pytorch<|||||>I'll take OpenAIGPT!<|||||>> Hi @Rocketknight1,
>
> I would like to work on BART of both TF and PyTorch
can you please confirm with emoji whether i am eligible to take these or not? @Rocketknight1 <|||||>I will work on XLM (PyTorch)<|||||>@robotjellyzone You can! Please note that we accepted a PR yesterday to add the TF decorator to BART, so make sure you're working on the most recent version of the library before you start your PR!<|||||>I'll take Distilbert (TensorFlow)<|||||>Happy to take T5 (PyTorch)
@Rocketknight1 isn't the list missing ConvNext? If so, I'm happy to take care of that one too :ok_hand: <|||||>I'll work on GPTJ<|||||>> @robotjellyzone You can! Please note that we accepted a PR yesterday to add the TF decorator to BART, so make sure you're working on the most recent version of the library before you start your PR!
OK sure! I will keep this in mind ππ... <|||||>I'll take Splinter and ~~Segformer~~ Rembert for torch
Edit: @p-mishra1 has Segformer. Taking Rembert instead<|||||>Looks like ImageGPT was done. I can take Luke in PyTorch. <|||||>I'd like to take PoolFormer<|||||>I'm going for `FlauBERT` now !<|||||>I'll work on FNet for PyTorch.
<|||||>Hello, I will work on SqueezeBERT for Pytorch!<|||||>I will also work on GPTNeo for Pytorch<|||||>I'd like to work on Perceiver for torch<|||||>I will also work on Pegasus for pytorch<|||||>I will work on XLMRoberta for TF<|||||>I will work on YOSO for PT<|||||>Hi, I'll take Marian (Pytorch)<|||||>Hi, I will work on RAG(pytorch).<|||||>Nevermind, XLMRoberta relies entirely on Roberta (for TF)
I will work on Reformer instead!<|||||>Hey, I would like to work on the BigBirdPegasus model of Pytorch.<|||||>Hey, I am looking into mBART model for TF and PyTorch implementations. If anyone, interested do let me know. <|||||>I will work on XLNet for TF and PT<|||||>Happy to take CTRL and MPNet for Tensorflow<|||||>I'm working on MobileBert for both TensorFlow & PyTorch.<|||||>Hi, I'd like to take OpenAIGPT (PyTorch).<|||||>Hi @DPT4, it has already been done here: https://github.com/huggingface/transformers/pull/16156<|||||>Hi @TristanBilot thanks for informing me. Then, I'd like to take MegatronBert for PyTorch.<|||||>This is Awesome! Hey, @Rocketknight1 I'd like to work on ConvBERT for both PyTorch & TF π€<|||||>I'd like to work on TransfoXL if available<|||||>I'll take Vilt for PyTorch seems available.<|||||> I will work on XGLM for PT<|||||>I would like to work on T5 (PyTorch)<|||||>I will work on ProphetNet, UniSpeech and UnispeechSat <|||||>I'll work on
TF - Blenderbot
PyTorch - Blenderbot and BlenderbotSmall<|||||>I can take up
PyTorch: PLBart and VisualBERT<|||||>Hi, I'll take Funnel<|||||>I will take IBert, Lxmert for PyTorch.<|||||>Hi all, with the end of the sprint I probably won't be actively maintaining this issue, but there are still models outstanding! If you're curious about what's left to do, I've added [this Colab notebook](https://colab.research.google.com/drive/1EvZTslb50yfRqIcXjCZFrbod4HrPdA0G?usp=sharing) that you can use to find model classes without type hints. We'd appreciate type hints for any and all of them! <|||||>@Vaibhavs10 Are you still working on the Wav2Vec model? I'm working on UniSpeech and some ```make fixup``` tests fail because of inconsistencies with that model. I could take care of it if you want<|||||>Hi, I'd like to work on Canine.<|||||>@hiromu166 I have already worked on that model in my PR #16425 <|||||>Hey, I'd like to work on ProphetNet(Pytorch)
edit: @sgugger Hey Sylvain. Found that the some sections in the docs (https://huggingface.co/docs/transformers/model_doc/prophetnet) are missing parameters explanations (for example in the ProphetNetConfig there's no definition for the decoder_start_token_id, and something similar happens for the ProphetNetModel and the ProphetNetEncoder). Not sure if it was done on purpose, so I'm only posting here!<|||||>Hi, I would like to work on `Pytorch VisualBERT`.<|||||>@Rocketknight1 It appears that all user facing classes in `Pytorch VisualBERT` are already annotated as mentioned in this PR #16544, so I would like to work on `Pytorch BigBirdPegasus`
<|||||>Hello @Rocketknight1 I would like to work on ConvBert tensorflow<|||||>@Rocketknight1 It seems that "ConvBert" is already done PR #16377 for both TensorFlow and Pytorch.
So I would like to work on "yoso" Pytorch<|||||>Hey there @Rocketknight1 I would like to work on MegatronBERT π (edit: pytorch)<|||||>Hi @Rocketknight1 I checked the Colab notebook and I will be adding the missing type hints for CVT (Pytorch)
> Hi all, with the end of the sprint I probably won't be actively maintaining this issue, but there are still models outstanding! If you're curious about what's left to do, I've added [this Colab notebook](https://colab.research.google.com/drive/1EvZTslb50yfRqIcXjCZFrbod4HrPdA0G?usp=sharing) that you can use to find model classes without type hints. We'd appreciate type hints for any and all of them!
<|||||>Good shout @F02934 !
@Rocketknight1 I've checked the above Colab notebook and looks like MegatronBERT is done. I will add missing type hints for QDQBertModel <|||||>Can I take `GPTNeoxForCausalLM` and `GPTNeoXModel`?<|||||>I would like to add type hints for `RoFormer` (only in Pytorch). After running and checking the notebook Colab [notebook](https://colab.research.google.com/drive/1EvZTslb50yfRqIcXjCZFrbod4HrPdA0G), I think I can take care of the seven classes missing type hints ποΈ.
Should I submit a single PR for all the seven updated classes?
> It's also okay to claim multiple models and group those changes into a single PR! dart
_Edit: I just read again the description of this issue and noticed my question was already answered_<|||||>Hi, I would like to work on `TensorFlow` : `OpenAIGPT`<|||||>Hi, I would like to work on `TensorFlow` : `CTRL`<|||||>Hello, according to the colab notebook, `XLMRobertaXL` (PyTorch) classes are still missing type annotations
They were mentioned here back on March 12th but there is no PR associated, so I can work on that if @manandey hasn't already<|||||>> I can work on that if @manandey hasn't already
@asofiaoliveira Feel free to work on this incase you are interested!
<|||||>Hi π€. `Vilt` (PyTorch) model remains without type hints, so I would like to work on it.<|||||>I'll take on PyTorch BigBirdPegasus and open a PR :)<|||||>M2M model missing type hints, I'll work on it and open a PR.<|||||>I'll create a PR for FSMT :)<|||||>Some type hints are missing in PyTorch UniSpeech, MPNet, and Nystromformer.
I'll take on it and group those changes into a single PR!<|||||>PyTorch SEWD model missing some type hints, I'll work on it and open a PR.<|||||>`TFConvBERTModel` does not have type hints. I would like to work on adding typing hints for it.<|||||>I would like to work on adding type hints for `TF MPNet` models.<|||||>I'm taking up `TF Rag`<|||||>I'd like to work on Lxmert for TF and PyTorch.<|||||>
Hi, I would like to work on Tensorflow LED
<|||||>HI, I would like to work on Tensorflow XLNet<|||||>I would like to work on Pytorch/BigBirdPegasus<|||||>Hey! I'll take `OpenAIGPT` both for TensorFlow & PyTorch<|||||>I would love to work on PyTorch SEWD for the remaining classes.<|||||>Hi!, I can create a PR for CTRL for Pytorch and Tensorflow.
<|||||>Hi, I'd like to work on MT5 for Pytorch and Tensorflow π<|||||>Hey, I would like to work on the Swin model files!<|||||>Hi I'd love to work on ~~Reformer, Data2Vec, and RoFormer pytorch π~~
Seems like a lot of these have been merged already but not marked off the list for pytorch (CTRL, OpenAIGPT, BigBirdPegasus, M2M, reformer, data2vec, roformer, visualbert). If there's anything left I'll take it lol<|||||>Hi @sirmammingtonham thank you so much for pointing that out! I just realized I'd been failing to keep the list of models up to date - I made a Colab to check our codebase instead, which is linked at the bottom of the initial post and will always be up to date.<|||||>This applies to other people who recently started, cc @mariagrandury @arcAman07 @RamitPahwa @rchan26 @WhiteWolf47. If you found the model you chose already has type hints, this is why! Please double-check in the Colab to see where type hints are still needed.<|||||>Hey just saw the codebase and realised a lot of models already have type hints. I did notice dpt list of models didn't have type hints and I would love to work on that issue.<|||||>Thanks @Rocketknight1 the notebook is super helpful! I can work on adding return types to tf xlm and tf GPTJ.<|||||>> This applies to other people who recently started, cc @mariagrandury @arcAman07 @RamitPahwa @rchan26 @WhiteWolf47. If you found the model you chose already has type hints, this is why! Please double-check in the Colab to see where type hints are still needed.
Hey, just checked out the colab notebook, i have decided to work on the YolosModel<|||||>Thanks all, and sorry again about the list being out of date initially!<|||||>Hi, I would love to work on FLAVA and LEVIT Pytorch , can you assign this to me?<|||||>Hi I was going to work on MT5 for Pytorch and TF, but looking at `src/transformers/models/mt5/modeling_mt5.py` and `src/transformers/models/mt5/modelling_tf_mt5`, it looks like each of the classes in these files override other models (`T5model`, `T5ForConditionalGeneration`, `T5EncoderModel`, `TFT5Model`, `TFT5ForConditionalGeneration`, `TFT5EncoderModel`). Each of these already have type hints, so I guess there isn't anything to do for MT5 now?
Edit: Sorry, just seen that the colab is where to look for models that still need work... In that case, I'll pick up `MCTCTForCTC` and `MCTCTModel`!
<|||||>@Rocketknight1 I will work on ViTModel<|||||>Hi @Rocketknight1 , ~I will take `TFViTPreTrainedModel` :)~ change of plans, I didn't notice that `PreTrained` classes were not priority.
I will take `TFPegasusModel` instead<|||||>I would like to work on `EsmForProteinFolding`<|||||>Hi folks,
We are almost done with this task! From the [notebook](https://colab.research.google.com/drive/1EvZTslb50yfRqIcXjCZFrbod4HrPdA0G?usp=sharing) created by RK ([Rocketknight1](https://github.com/Rocketknight1)), I'm sharing here the remaining models to gain some visibility and encourage one last push.
1. PyTorch models:
- ASTModel
- EsmForProteinFolding
- TimesformerModel
2. TF models (excluding pre-trained classes):
- TFDeiTModel
- TFEncoderDecoderModel (WIP by [Batese2001](https://github.com/Batese2001))
- TFLEDForConditionalGeneration
- TFLEDModel
- TFLxmertForPreTraining
- TFMarianMTModel
- TFMarianModel
- TFRagModel
- TFRagTokenForGeneration
- TFSegformerDecodeHead
- TFTransfoXLLMHeadModel
- TFTransfoXLModel
- TFVisionEncoderDecoderModel
_Important_: Remember to **run** the [notebook](https://colab.research.google.com/drive/1EvZTslb50yfRqIcXjCZFrbod4HrPdA0G?usp=sharing) to check which models are available (the ones printed in each cell mean models without type hints) and look if anyone has claimed before start working on it.<|||||>Hello @Rocketknight1 I'd love to tackle TFEncoderDecoderModel if it is available <|||||>Hello @Rocketknight1 I would like to tackle TFVisionEncoderDeocderModel if possible with @miyu386 and @AdiaWu.<|||||>Hi @Rocketknight1 I'd love to take TFMarianModel with @Batese2001 and @mollerup23!<|||||>Good Morning @Rocketknight1,
I would love to help you with adding types to the following models:
1. TFAlbertPreTrainedModel
2. TFBartPretrainedModel
3. TFBertPreTrainedModel<|||||>@Rocketknight1 Are there other models that need type hints? I looked at the list generated by the Google Colab notebook, and noticed the 'TFPegasusPreTrainedModel' still needs type hints. If so, I'd like to pick this up.<|||||>Oops, yep, I didn't realize that last PR had auto-closed it. If you can find other models that need type hints, then please submit PRs to add them! Make sure to 'sync changes' in your fork before creating the PR branch so that there aren't any issues with rebasing and it should go smoothly.<|||||>Great! In that case I'd like to claim the TFPegasusPreTrainedModel. I'll make sure to sync before making any commits. Thanks!<|||||>@Rocketknight1 I would love to take TFDeiTModel if that's alright. <|||||>@Rocketknight1 Actually, looking deeper into the model, it looks like the type hints are already implemented in the call method for the TFDeiTModel class. Should I be adding the type hints to some of the other call methods in modeling_tf_deit.py?<|||||>@Batese2001 If the script is listing models as needing type hints but they already have them, it's possible that they're missing some part of the hints. Maybe the return type from the main model `call()` methods is not present?<|||||>@Rocketknight1 Hi Matt, can I work on TFBlenderbot for this issue?<|||||>@Rocketknight1 Hi Matt π , i would like to take `Graphormer` pytorch version<|||||>hi there π can anybody tell me how to perform all check before comitting, make fixup- not working and github ci/cd failing, is it because i'm using python 3.11 or some other error<|||||>Hi @dewasahu2003, to run `make fixup` you will need to install some dependencies for code formatting and quality tools. The easiest way to do this is `pip install transformers[quality]` or `pip install -e .[quality]` in the directory where you cloned the `transformers` repo.<|||||>Hi π @Rocketknight1,thanks for the helpful feedback
But I closed the pull request that I was facing issue with and open a new pull request with proper code formatting and `make fixup` command.
new pull request π [New Pull Request](https://github.com/huggingface/transformers/pull/23073)<|||||>If ALBERT or XLMRoBERTa (Tensorflow) are still available then I would like to work on any one of them.<|||||>@MuskaanCAIR You can use the [colab script](https://colab.research.google.com/drive/1EvZTslb50yfRqIcXjCZFrbod4HrPdA0G?usp=sharing) in the first post to see which models still need hints. Remember that models will be listed by that Colab as long as they're missing full type hints including return type hints on the `call()` method. It's less important to annotate layers and functions in the model files, although still appreciated!<|||||>Colab lists TFXLMRobertaPreTrainedModel as one of the models. Can I take that?
<|||||>Hey @Rocketknight1 hope you are doing great, I see that there is still work to do here. `ASTModel` feels like a great opportunity of learning so I will work on it if no one else has picked it up, (if that's the case feel free to yell at me :upside_down_face: ) |
transformers | 16,058 | closed | [WIP] Flax Speech Recognition Seq2Seq | This PR adds the example script `run_flax_speech_recognition_seq2seq.py`, which can be used to fine-tune any [`Speech Sequence-to-Sequence Model`](https://huggingface.co/docs/transformers/master/en/model_doc/auto#transformers.AutoModelForSpeechSeq2Seq) for automatic speech recognition **in Flax** on one of the [official speech recognition datasets](https://huggingface.co/datasets?task_ids=task_ids:automatic-speech-recognition) or a custom dataset.
What remains TODO:
- [x] Add an adapter layer when loading a pre-trained Wav2Vec2 encoder checkpoint through a keyword argument. See: https://github.com/huggingface/transformers/pull/16056
- [x] Flax length grouped sampler: group training data into mega batches sorted by input length in order to minimise the amount of padding required. Place the batch of `batch_size` containing the element of maximum length placed first, so that an OOM happens sooner rather than later. See: https://github.com/huggingface/transformers/pull/16058/commits/d66c59de31ce3d8c61127b63b9198b69a0930ccd
- [x] Fix compilation issues with the Flax DataCollator: once the training data is grouped by input length, pad the input sequences using `pad_to_multiple_of`. Independently pad the target labels using `max_length`. See: https://github.com/huggingface/transformers/pull/16058/commits/d5a5fbdfeca67275e75c55498af0a3282c1c0741
- [ ] Verify the script works and that it yields equivalent results to PyTorch. Fine-tune Wav2Vec2-2-Bart-large-cnn on `librispeech-asr` in order to reproduce the PyTorch training results at https://huggingface.co/sanchit-gandhi/wav2vec2-2-bart-large-cnn/tensorboard.
- [ ] Save the feature encoder outputs during/before the first epoch and use these as the model inputs for subsequent epochs.
- [x] Gradient accumulation using the Optax optimiser library functions (`apply_every` or `MultiSteps`). Huge memory overhead. Switch to a custom implementation (see below).
- [x] Custom gradient accumulation. See: https://github.com/huggingface/transformers/pull/16058/commits/3ef71d141fb9b643959326c123026c454c4efa13
- [x] Gradient Checkpointing (aborted) - requires an `nn.remat` module in every Flax model layer, and was shown not to give huge improvements in the Dalle-mini project.
- [ ] (Load and Continue from Last Checkpoint) | 03-10-2022 18:26:40 | 03-10-2022 18:26:40 | Create a tiny model:
```python
import jax.numpy as jnp
from transformers import AutoFeatureExtractor, AutoTokenizer, FlaxSpeechEncoderDecoderModel
encoder_id = "hf-internal-testing/tiny-random-wav2vec2"
decoder_id = "hf-internal-testing/tiny-random-bart"
model = FlaxSpeechEncoderDecoderModel.from_encoder_decoder_pretrained(
encoder_id, decoder_id, encoder_from_pt=True, decoder_from_pt=True
)
model.config.encoder.feat_proj_dropout = 0.0
model.config.encoder.final_dropout = 0.0
model.config.encoder.mask_time_prob = 0.1
model.config.decoder_start_token_id = model.config.decoder.bos_token_id
model.config.pad_token_id = model.config.decoder.pad_token_id
model.config.eos_token_id = model.config.decoder.eos_token_id
model.config.max_length = 20
model.config.num_beams = 1
model.config.encoder.layerdrop = 0.0
model.config.use_cache = False
model.config.processor_class = "Wav2Vec2Processor"
# check if generation works
out = model.generate(jnp.ones((1, 2000)))
model.save_pretrained("./")
feature_extractor = AutoFeatureExtractor.from_pretrained(encoder_id)
feature_extractor.save_pretrained("./")
tokenizer = AutoTokenizer.from_pretrained(decoder_id)
tokenizer.save_pretrained("./")
```
Train on dummy dataset:
```bash
#!/usr/bin/env bash
python run_flax_speech_recognition_seq2seq.py \
--dataset_name="hf-internal-testing/librispeech_asr_dummy" \
--model_name_or_path="./" \
--output_dir="./" \
--dataset_config_name="clean" \
--train_split_name="validation" \
--eval_split_name="validation" \
--preprocessing_num_workers="1" \
--length_column_name="input_length" \
--overwrite_output_dir \
--num_train_epochs="2" \
--per_device_train_batch_size="2" \
--per_device_eval_batch_size="2" \
--gradient_accumulation_steps="2" \
--generation_max_length="40" \
--generation_num_beams="1" \
--learning_rate="3e-4" \
--warmup_steps="5" \
--evaluation_strategy="steps" \
--text_column_name="text" \
--freeze_feature_encoder \
--predict_with_generate \
--do_lower_case \
--do_train \
--do_eval
```<|||||>_The documentation is not available anymore as the PR was closed or merged._<|||||>Let's continue here without gradient_checkpointing for now. Don't think that this should block us for the first Wav2Vec2-Bart training run<|||||>Does it work with `gradient_checkpointing`? Will do a proper review on Monday, but in the mean time feel free to already go ahead and try it out with Wav2Vec2-Bart on 8 TPUs :-) <|||||><details>
<summary>Create 'tiny' Wav2Vec2-2-Bart model (`tied_word_embeddings=False`)</summary>
```python
import numpy as np
from transformers import AutoFeatureExtractor, AutoTokenizer, FlaxSpeechEncoderDecoderModel
encoder_id = "hf-internal-testing/tiny-random-wav2vec2"
decoder_id = "hf-internal-testing/tiny-random-bart"
model = FlaxSpeechEncoderDecoderModel.from_encoder_decoder_pretrained(encoder_id, decoder_id, encoder_from_pt=True, decoder_from_pt=True,
encoder_add_adapter=True, decoder_tie_word_embeddings=False)
model.config.decoder_start_token_id = model.config.decoder.bos_token_id
model.config.pad_token_id = model.config.decoder.pad_token_id
model.config.eos_token_id = model.config.decoder.eos_token_id
model.config.max_length = 20
model.config.num_beams = 1
model.config.use_cache = False
model.config.processor_class = "Wav2Vec2Processor"
# check if generation works
out = model.generate(np.ones((1, 2000)))
model.save_pretrained("./")
feature_extractor = AutoFeatureExtractor.from_pretrained(encoder_id)
feature_extractor.save_pretrained("./")
tokenizer = AutoTokenizer.from_pretrained(decoder_id)
tokenizer.save_pretrained("./")
```
</details>
<details>
<summary>Run on 5% of Librispeech ASR on TPU with float32 (full) precision</summary>
```
#!/usr/bin/env bash
JAX_DEFAULT_MATMUL_PRECISION=float32 python run_flax_speech_recognition_seq2seq.py \
--dataset_name="librispeech_asr" \
--model_name_or_path="./" \
--dataset_config_name="clean" \
--train_split_name="train.100[:5%]" \
--eval_split_name="validation[:5%]" \
--output_dir="./" \
--preprocessing_num_workers="16" \
--length_column_name="input_length" \
--overwrite_output_dir \
--num_train_epochs="1" \
--per_device_train_batch_size="2" \
--per_device_eval_batch_size="2" \
--logging_steps="1" \
--max_duration_in_seconds="10" \
--max_target_length="32" \
--generation_max_length="40" \
--generation_num_beams="1" \
--learning_rate="3e-4" \
--warmup_steps="500" \
--text_column_name="text" \
--save_total_limit="1" \
--freeze_feature_encoder \
--predict_with_generate \
--do_lower_case \
--do_eval \
--do_train
```
</details>
<details>
<summary>'Tiny' model logs: l2-norm of encoder and decoder hidden-states (on a per-layer basis). Real-valued outputs on the first forward pass, backward pass and proceeding training steps.</summary>
```
03/18/2022 14:02:38 - INFO - __main__ - ***** Running training *****
03/18/2022 14:02:38 - INFO - __main__ - Num examples = 94
03/18/2022 14:02:38 - INFO - __main__ - Num Epochs = 1
03/18/2022 14:02:38 - INFO - __main__ - Instantaneous batch size per device = 2
03/18/2022 14:02:38 - INFO - __main__ - Total train batch size (w. parallel & distributed) = 16
03/18/2022 14:02:38 - INFO - __main__ - Total optimization steps = 5
Step... (1) | decoder_hidden_states: [31.988552 31.934507 31.932938] | encoder_hidden_states: [121.05971 121.14229 121.0128 121.08176 121.19736] | encoder_outputs: 0.053542863577604294 | inputs: 337.59503173828125 | logits: 19.7552
03247070312 | loss: 6.910846710205078 |
Step... (2) | decoder_hidden_states: [31.978403 31.85397 31.924433] | encoder_hidden_states: [120.8189 120.81773 120.836624 120.75368 120.73546 ] | encoder_outputs: 0.053248822689056396 | inputs: 332.2764587402344 | logits: 19.
764877319335938 | loss: 6.913311958312988 |
Step... (3) | decoder_hidden_states: [32.03605 32.0115 31.969864] | encoder_hidden_states: [120.74285 120.60301 120.512054 120.62747 120.67913 ] | encoder_outputs: 0.05340719223022461 | inputs: 345.63580322265625 | logits: 19.
75674057006836 | loss: 6.906001091003418 |
Step... (4) | decoder_hidden_states: [31.977776 31.981033 31.923492] | encoder_hidden_states: [121.15105 120.91719 121.1255 121.18452 121.01183] | encoder_outputs: 0.0538754016160965 | inputs: 369.9171447753906 | logits: 19.7641830
44433594 | loss: 6.903560638427734 |
Training...: 100%|ββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββ| 5/5 [00:24<00:00, 4.94s/it]
Epoch ... (1/1): 100%|ββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββ| 1/1 [00:24<00:00, 24.71s/it]
```
</details><|||||><details>
<summary>Create βlargeβ Wav2Vec2-2-Bart model (`tied_word_embeddings=False`)</summary>
```python
import numpy as np
from transformers import AutoFeatureExtractor, AutoTokenizer, FlaxSpeechEncoderDecoderModel
encoder_id = "hf-internal-testing/tiny-random-wav2vec2"
decoder_id = "hf-internal-testing/tiny-random-bart"
model = FlaxSpeechEncoderDecoderModel.from_encoder_decoder_pretrained(encoder_id, decoder_id, encoder_from_pt=True, decoder_from_pt=True,
encoder_add_adapter=True, decoder_tie_word_embeddings=False)
model.config.decoder_start_token_id = model.config.decoder.bos_token_id
model.config.pad_token_id = model.config.decoder.pad_token_id
model.config.eos_token_id = model.config.decoder.eos_token_id
model.config.max_length = 20
model.config.num_beams = 1
model.config.use_cache = False
model.config.processor_class = "Wav2Vec2Processor"
# check if generation works
out = model.generate(np.ones((1, 2000)))
model.save_pretrained("./")
feature_extractor = AutoFeatureExtractor.from_pretrained(encoder_id)
feature_extractor.save_pretrained("./")
tokenizer = AutoTokenizer.from_pretrained(decoder_id)
tokenizer.save_pretrained("./")
```
</details>
<details>
<summary>Run on 5% of Librispeech ASR on TPU with float32 (full) precision - identical bash script to that for 'tiny'</summary>
```
#!/usr/bin/env bash
JAX_DEFAULT_MATMUL_PRECISION=float32 python run_flax_speech_recognition_seq2seq.py \
--dataset_name="librispeech_asr" \
--model_name_or_path="./" \
--dataset_config_name="clean" \
--train_split_name="train.100[:5%]" \
--eval_split_name="validation[:5%]" \
--output_dir="./" \
--preprocessing_num_workers="16" \
--length_column_name="input_length" \
--overwrite_output_dir \
--num_train_epochs="1" \
--per_device_train_batch_size="2" \
--per_device_eval_batch_size="2" \
--logging_steps="1" \
--max_duration_in_seconds="10" \
--max_target_length="32" \
--generation_max_length="40" \
--generation_num_beams="1" \
--learning_rate="3e-4" \
--warmup_steps="500" \
--text_column_name="text" \
--save_total_limit="1" \
--freeze_feature_encoder \
--predict_with_generate \
--do_lower_case \
--do_eval \
--do_train
```
</details>
<details>
<summary>'Large' model logs: l2-norm of encoder and decoder hidden-states (on a per-layer basis). On the first forward pass, the l2-norm of the encoder hidden-states is well defined. The logs show that the first decoder-hidden state returns 'nan' values.</summary>
```
03/18/2022 14:16:05 - INFO - __main__ - ***** Running training *****
03/18/2022 14:16:05 - INFO - __main__ - Num examples = 221
03/18/2022 14:16:05 - INFO - __main__ - Num Epochs = 1
03/18/2022 14:16:05 - INFO - __main__ - Instantaneous batch size per device = 2
03/18/2022 14:16:05 - INFO - __main__ - Total train batch size (w. parallel & distributed) = 16
03/18/2022 14:16:05 - INFO - __main__ - Total optimization steps = 13
Step... (1) | decoder_hidden_states: [nan nan nan nan nan nan nan nan nan nan nan nan nan] | encoder_hidden_states: [410571.8 410428.7 410573.44 410535.3 410262.44 409993.22 410005.3
410596.28 410562.3 410165.12 410397.44 409875.1 410200.16 410837.16
410408.94 410557.22 410525.5 410319.47 410763.8 410527.88 410446.94
410469.2 410431.88 410328.4 410323.3 ] | encoder_outputs: 15.634440422058105 | inputs: 439.39068603515625 | logits: nan | loss: nan |
Step... (2) | decoder_hidden_states: [nan nan nan nan nan nan nan nan nan nan nan nan nan] | encoder_hidden_states: [nan nan nan nan nan nan nan nan nan nan nan nan nan nan nan nan nan nan
nan nan nan nan nan nan nan] | encoder_outputs: nan | inputs: 411.89910888671875 | logits: nan | loss: nan |
```<|||||><details>
<summary>Create 'large' Wav2Vec2-2-GPT2 model</summary>
```python
import numpy as np
from transformers import AutoFeatureExtractor, FlaxSpeechEncoderDecoderModel, GPT2Tokenizer
encoder_id = "facebook/wav2vec2-large-lv60"
decoder_id = "gpt2-medium"
model = FlaxSpeechEncoderDecoderModel.from_encoder_decoder_pretrained(encoder_id, decoder_id, encoder_add_adapter=True)
# force GPT2 to append EOS to begin and end of seq
def build_inputs_with_special_tokens(self, token_ids_0, token_ids_1=None):
outputs = [self.bos_token_id] + token_ids_0 + [self.eos_token_id]
return outputs
GPT2Tokenizer.build_inputs_with_special_tokens = build_inputs_with_special_tokens
gpt2_tokenizer = GPT2Tokenizer.from_pretrained(decoder_id)
# set pad_token_id to unk_token_id, note: unk_token_id == eos_token_id == bos_token_id
gpt2_tokenizer.pad_token = gpt2_tokenizer.unk_token
gpt2_tokenizer.save_pretrained("./")
model.config.pad_token_id = gpt2_tokenizer.pad_token_id
model.config.decoder_start_token_id = model.config.decoder.bos_token_id
model.config.eos_token_id = model.config.decoder.eos_token_id
model.config.max_length = 40
model.config.num_beams = 1
model.config.use_cache = False
model.config.decoder.use_cache = False
model.config.processor_class = "Wav2Vec2Processor"
# check if generation works
out = model.generate(np.ones((1, 2000)))
model.save_pretrained("./")
feature_extractor = AutoFeatureExtractor.from_pretrained(encoder_id)
feature_extractor.save_pretrained("./")
```
</details>
<details>
<summary>Run on 5% of Librispeech ASR on TPU with float32 (full) precision</summary>
```
#!/usr/bin/env bash
JAX_DEFAULT_MATMUL_PRECISION=float32 python run_flax_speech_recognition_seq2seq.py \
--dataset_name="librispeech_asr" \
--model_name_or_path="./" \
--dataset_config_name="clean" \
--train_split_name="train.100[:5%]" \
--eval_split_name="validation[:5%]" \
--output_dir="./" \
--preprocessing_num_workers="16" \
--length_column_name="input_length" \
--overwrite_output_dir \
--num_train_epochs="1" \
--per_device_train_batch_size="2" \
--per_device_eval_batch_size="2" \
--logging_steps="1" \
--max_duration_in_seconds="10" \
--max_target_length="32" \
--generation_max_length="40" \
--generation_num_beams="1" \
--learning_rate="3e-4" \
--warmup_steps="500" \
--text_column_name="text" \
--save_total_limit="1" \
--freeze_feature_encoder \
--predict_with_generate \
--do_lower_case \
--do_eval \
--do_train
```
</details>
<details>
<summary>Wav2Vec2-2-gpt2 'large' model logs: l2-norm of encoder and decoder hidden-states (on a per-layer basis). Real-valued outputs on the first forward pass, backward pass and proceeding training steps.</summary>
```
03/18/2022 14:29:28 - INFO - __main__ - ***** Running training *****
03/18/2022 14:29:28 - INFO - __main__ - Num examples = 231
03/18/2022 14:29:28 - INFO - __main__ - Num Epochs = 1
03/18/2022 14:29:28 - INFO - __main__ - Instantaneous batch size per device = 2
03/18/2022 14:29:28 - INFO - __main__ - Total train batch size (w. parallel & distributed) = 16
03/18/2022 14:29:28 - INFO - __main__ - Total optimization steps = 14
Step... (1) | decoder_hidden_states: [ 27.705173 984.6688 1069.8226 1278.6012 5160.4707 5699.3545
5908.879 6050.114 6143.709 6212.514 6243.5605 6270.414
6293.3115 6315.0156 6347.5215 6391.46 6452.412 6532.633
6639.2114 6795.3633 6964.0645 7185.9297 7702.9453 7484.8037
1918.5354 ] | encoder_hidden_states: [400464.72 400572.25 400476.56 400581. 400228.7 399510.75 400316.12
400524.7 400276.5 400623.3 400237.38 399885.06 400458.94 400540.94
400962.78 401065.44 400384.72 400907.28 401037.88 400596.56 400616.4
400706.3 400376.38 400392.06 400209.72] | encoder_outputs: 15.65604305267334 | inputs: 441.4826354980469 | logits: 153219.21875 | loss: 5.277106285095215 |
Step... (2) | decoder_hidden_states: [ 27.50412 982.7307 1082.1393 1266.6243 5323.2876 5468.919
5817.261 5984.31 6098.169 6172.6177 6210.377 6239.094
6264.5156 6287.0127 6320.5957 6366.2725 6433.3057 6517.029
6629.832 6783.8105 6960.3086 7182.701 7713.1733 7574.8164
1835.7288 ] | encoder_hidden_states: [399864.1 399652.38 399964.25 400088.06 399837.88 399556.25 399368.34
400199.38 399967.75 399810.5 400387.34 399937.6 399366.7 400193.97
400485.3 400250.5 400265.6 400288.16 399987.75 400048.06 399985.75
Step... (3) | decoder_hidden_states: [ 27.585026 967.4181 1059.0647 1267.0371 5161.367 5522.994
5914.909 6066.3604 6165.132 6233.869 6263.9297 6289.3525
6310.677 6330.4287 6361.243 6401.865 6460.8145 6535.998
6634.0107 6781.088 6954.6953 7167.119 7668.0586 7605.51
1945.8069 ] | encoder_hidden_states: [395648.75 395167.88 395541.88 395567.1 395223.38 395166.2 395408.7
395719.5 395231.8 395440.8 396132.2 395191.97 395275.75 395832.78
395969.94 396130.06 395876.16 395957.06 395777.8 395564.5 395641.7
395728.62 395525.12 395319.56 395593.94] | encoder_outputs: 15.668608665466309 | inputs: 469.32513427734375 | logits: 153649.0 | loss: 5.771542072296143 |
Step... (4) | decoder_hidden_states: [ 26.962208 973.65936 1086.2867 1296.3983 5358.009 5720.5425
5965.629 6129.83 6257.0303 6342.369 6381.131 6410.9604
6438.2354 6461.836 6497.192 6543.486 6610.591 6694.203
6808.67 6969.8984 7143.211 7362.4785 7883.0137 7938.967
1861.9117 ] | encoder_hidden_states: [396724.56 396978.3 397126.6 397024.62 396497.25 396432.4 396626.25
396505.7 396774.38 396963.28 396561.06 396375.7 396653.56 396788.94
397315.38 397261.66 397600.88 397143.75 397139.22 397015.8 397229.72
397317.12 397280.7 397117. 396934.78] | encoder_outputs: 15.405157089233398 | inputs: 413.68218994140625 | logits: 150058.84375 | loss: 5.734399795532227 |
Step... (5) | decoder_hidden_states: [ 27.54343 979.775 1076.0803 1301.9713 5163.4297 5679.1533 [34/1854]
5905.4297 6039.828 6160.733 6236.451 6273.1816 6301.7246
6326.33 6349.4707 6383.63 6429.527 6494.6816 6576.9424
6689.077 6847.369 7022.1494 7239.0347 7760.078 7641.672
1898.6593 ] | encoder_hidden_states: [406666.4 406645.62 406751.12 406984.3 406793.06 406148.7 406196.3
406840.38 406524.12 406709.25 406924.88 406276. 406272.8 406852.44
406731.38 406783.12 406927.25 406774.06 406759.97 406678.8 406413.12
406399.38 406823.28 406723.03 406760.56] | encoder_outputs: 15.645303726196289 | inputs: 433.8205261230469 | logits: 150390.6875 | loss: 6.245608329772949 |
Step... (6) | decoder_hidden_states: [ 27.222858 990.08167 1092.7056 1300.3113 5371.709 5725.07
5957.906 6129.3535 6248.9053 6329.303 6366.3906 6395.6523
6422.501 6445.4014 6480.754 6528.621 6596.4595 6680.3633
6796.694 6957.9355 7141.7573 7361.3135 7884.8945 7559.6064
1813.7799 ] | encoder_hidden_states: [400203.62 400314.06 400102.16 400408.5 400243.12 399775.75 399778.84
400241.03 400312.3 400378.88 400467.84 400262.25 400152.66 400762.97
400715.38 400965.47 400972.2 400946.34 400621.56 400494.2 400381.94
400251.25 400610. 400565.16 399859.5 ] | encoder_outputs: 15.666407585144043 | inputs: 389.3349914550781 | logits: 149033.015625 | loss: 5.595775127410889 |
Step... (7) | decoder_hidden_states: [ 27.487904 977.2353 1077.8563 1276.3774 5150.678 5677.568
5895.86 6052.6934 6171.8306 6248.2627 6286.8447 6314.7285
6340.3857 6362.9355 6397.525 6443.6978 6508.964 6591.554
6704.009 6859.457 7038.1436 7254.1885 7778.0864 7743.4062
1841.6183 ] | encoder_hidden_states: [400044.25 399508.06 399957.38 400000.25 399269.62 399271.3 399618.38
399673.8 399372.75 399543.03 399945.75 399298.53 399452.9 399628.9
399918.3 399923.44 399984.3 400002.2 400012.94 399586.66 399763.53
399891.75 399923.03 399489.12 399915.62] | encoder_outputs: 15.693841934204102 | inputs: 439.5829162597656 | logits: 149439.1875 | loss: 5.356408596038818 |
Step... (8) | decoder_hidden_states: [ 27.260107 981.9591 1082.9978 1303.3995 5347.8193 5725.1963
5988.419 6157.9927 6286.791 6367.6123 6408.1953 6437.547
6463.213 6486.0117 6520.4287 6567.293 6633.672 6717.458
6832.828 6996.592 7176.0205 7395.158 7912.5283 7769.053
1882.0337 ] | encoder_hidden_states: [381122. 380513.5 381451.97 381533.62 380671. 380749.44 380397.56
381154.94 381103.56 381389.25 381325.7 380440.2 380236.6 380851.62
381388.38 381022.12 381248.9 381925.8 381047.94 380761.06 381296.44
380768.56 381124.38 380821.7 381050.06] | encoder_outputs: 15.531478881835938 | inputs: 428.942626953125 | logits: 151706.375 | loss: 5.479863166809082 |
Step... (9) | decoder_hidden_states: [ 27.203522 970.3016 1070.8975 1267.3228 5149.606 5495.448
5720.803 5930.7812 6054.88 6134.3813 6173.323 6201.705
6227.37 6249.925 6284.9014 6332.4463 6399.3584 6485.218
6600.2188 6764.827 6945.5586 7167.542 7712.4214 7688.0728
1921.8232 ] | encoder_hidden_states: [397437.84 397162.2 397749.75 397334.8 397161.62 396930.25 396887.62
397239.2 397335.62 397099.62 397290.4 397293.7 397000.25 397579.4
397912.12 397778.8 397601.94 397803.25 397521.16 397613.8 397667.56
397828.3 397397.75 397247.25 397604.25] | encoder_outputs: 15.765214920043945 | inputs: 418.4049987792969 | logits: 153519.6875 | loss: 5.3556318283081055 |
Step... (10) | decoder_hidden_states: [ 27.101421 971.5001 1079.2378 1281.2228 5339.7983 5706.5225
5975.9727 6123.8633 6253.355 6338.587 6379.96 6409.5786
6435.078 6458.2617 6492.785 6539.077 6604.3154 6686.5317
6799.6772 6959.0547 7134.6025 7348.9043 7864.45 7982.283
1889.574 ] | encoder_hidden_states: [402673.25 402494.44 402451.8 402582.56 402037.75 401663.47 402210.
402500.9 402219.3 402320.1 402611.62 402244.44 402333.94 402710.44
402725.62 402615.78 402902. 403058.94 402817.12 402506.88 402510.16
402621.66 402858.06 402372.06 402555.2 ] | encoder_outputs: 15.740177154541016 | inputs: 427.6776123046875 | logits: 151765.953125 | loss: 5.4407196044921875 |
Step... (11) | decoder_hidden_states: [ 27.101858 960.69806 1057.977 1275.9325 4993.126 5745.655
6005.66 6165.303 6280.248 6355.59 6392.543 6419.9526
6444.1123 6465.8066 6498.427 6542.979 6606.0117 6687.626
6797.332 6948.961 7114.5967 7316.5913 7814.159 7582.8237
1949.6685 ] | encoder_hidden_states: [392330.47 392076.62 392226. 392236.4 391949.8 391717.7 392134.75
392420.97 391965.97 392092.62 392427. 391836.06 392229.9 392766.7
392375.94 392607.5 392755.4 392962.8 392651.25 392396.16 392297.78
392674.38 392437.44 392127.75 392342.3 ] | encoder_outputs: 15.853521347045898 | inputs: 443.4962463378906 | logits: 154125.15625 | loss: 4.923087120056152 |
Step... (12) | decoder_hidden_states: [ 27.225206 968.9116 1067.0066 1284.6025 5543.5396 5725.6973
5977.792 6148.306 6284.0425 6358.6274 6394.204 6420.883
6445.0244 6466.761 6500.2197 6545.107 6607.967 6688.363
6796.278 6947.9785 7114.134 7320.1875 7820.391 7790.4746
1938.8198 ] | encoder_hidden_states: [389132.75 389117.12 389249.44 389016.8 389135.5 388934.06 388708.4
389077.2 389071.62 389255.16 389186.56 389063.25 388612.8 388886.75
389398.06 389742.66 389516.25 389593.62 389379.5 389351.7 389328.75
389318. 389152.44 389027.44 389372.9 ] | encoder_outputs: 15.960776329040527 | inputs: 445.12548828125 | logits: 153455.5625 | loss: 5.53969669342041 |
Step... (13) | decoder_hidden_states: [ 27.60433 964.4414 1066.7817 1286.687 5361.041 5721.004
5953.537 6131.306 6260.2393 6339.0967 6373.6455 6400.8584
6425.586 6447.6943 6481.8457 6526.724 6591.548 6674.255
6781.3926 6929.332 7095.1914 7306.254 7807.132 7699.7607
1906.3538 ] | encoder_hidden_states: [391200.66 390364.25 390781.3 390942.88 390509.3 390213.06 390245.28
390339.44 390532.6 390483.56 390849.3 390550.12 390550.4 391182.88
390956.2 391014.22 390799.22 391230.6 391087.06 390722.16 390951.2
391262.56 391213.2 390638.3 390842.12] | encoder_outputs: 16.03491973876953 | inputs: 460.640380859375 | logits: 150500.109375 | loss: 5.521194934844971 |
Training...: 100%|ββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββ| 14/14 [04:26<00:00, 19.02s/it]
Epoch ... (1/1): 100%|βββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββ| 1/1 [04:26<00:00, 266.27s/it]
```
</details><|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored.<|||||>Training is moved to https://github.com/sanchit-gandhi/seq2seq-speech . Should we close this one for now @sanchit-gandhi ? |
transformers | 16,057 | closed | Adding type hints for TFRoBERTa | As the title suggests, this PR adds type hints to the TFRoBERTa model classes. | 03-10-2022 18:19:42 | 03-10-2022 18:19:42 | _The documentation is not available anymore as the PR was closed or merged._ |
transformers | 16,056 | closed | Fix Loading of Flax(Speech)EncoderDecoderModel kwargs from PreTrained Encoder-Decoder Checkpoints | This PR modifies the `from_encoder_decoder_pretrained` class methods in the Flax(Speech)EncoderDecoderModels to correctly instantiate an encoder and a decoder from specified pre-trained model names or paths **given** a set of keyword arguments (kwargs). We should be able to pass encoder/decoder specific kwargs into the `from_encoder_decoder_pretrained` method to override the config of the pre-trained checkpoints. The following code snippet provides a short PyTorch script to demonstrate this desired behaviour using the encoder kwarg `add_adapter`:
```python
from transformers import SpeechEncoderDecoderModel
encoder_id = "hf-internal-testing/tiny-random-wav2vec2"
decoder_id = "hf-internal-testing/tiny-random-bart"
model = SpeechEncoderDecoderModel.from_encoder_decoder_pretrained(encoder_id, decoder_id, encoder_add_adapter=True)
print(model.config.encoder.add_adapter)
model = SpeechEncoderDecoderModel.from_encoder_decoder_pretrained(encoder_id, decoder_id, encoder_add_adapter=False)
print(model.config.encoder.add_adapter)
```
Output:
```python
True
False
```
Reproducing the same script using the equivalent Flax model yields an error, the model not being able to be correctly instantiated:
```python
from transformers import FlaxSpeechEncoderDecoderModel
encoder_id = "hf-internal-testing/tiny-random-wav2vec2"
decoder_id = "hf-internal-testing/tiny-random-bart"
model = FlaxSpeechEncoderDecoderModel.from_encoder_decoder_pretrained(encoder_id, decoder_id, encoder_add_adapter=True, encoder_from_pt=True, decoder_from_pt=True)
```
Output:
```python
File ~/transformers/src/transformers/models/wav2vec2/modeling_flax_wav2vec2.py:857, in FlaxWav2Vec2PreTrainedModel.__init__(self, config, input_shape, seed, dtype, **kwargs)
849 def __init__(
850 self,
851 config: Wav2Vec2Config,
(...)
855 **kwargs,
856 ):
--> 857 module = self.module_class(config=config, dtype=dtype, **kwargs)
858 super().__init__(config, module, input_shape=input_shape, seed=seed, dtype=dtype)
TypeError: __init__() got an unexpected keyword argument 'add_adapter'
```
The encoder and decoder kwargs are incorrectly passed to the `__init__` method of the respective module. This raises a TypeError when loading from pre-trained checkpoints, the kwargs being unexpected arguments.
This PR amends the loading of the encoder and decoder models to handle such kwargs correctly. Its implementation is verified through a new rigorous test: `test_encoder_decoder_model_from_encoder_decoder_pretrained`. An encoder-decoder model is loaded with two kwargs set opposite to those specified in the standalone encoder and decoder configs. It asserts that the corresponding attributes in the _new_ encoder-decoder model config are in fact set opposite to those in the standalone encoder and decoder configs.
Following these changes, the Flax script yields the desired output:
```python
from transformers import FlaxSpeechEncoderDecoderModel
encoder_id = "hf-internal-testing/tiny-random-wav2vec2"
decoder_id = "hf-internal-testing/tiny-random-bart"
model = FlaxSpeechEncoderDecoderModel.from_encoder_decoder_pretrained(encoder_id, decoder_id, encoder_add_adapter=True, encoder_from_pt=True, decoder_from_pt=True)
print(model.config.encoder.add_adapter)
model = FlaxSpeechEncoderDecoderModel.from_encoder_decoder_pretrained(encoder_id, decoder_id, encoder_add_adapter=False, encoder_from_pt=True, decoder_from_pt=True)
print(model.config.encoder.add_adapter)
```
Output:
```python
True
False
``` | 03-10-2022 18:04:33 | 03-10-2022 18:04:33 | _The documentation is not available anymore as the PR was closed or merged._ |
transformers | 16,055 | closed | [Test] different update approach | null | 03-10-2022 17:53:41 | 03-10-2022 17:53:41 | The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_16055). All of your documentation changes will be reflected on that endpoint. |
transformers | 16,054 | closed | `torch.dtype` not JSON serializable if config is nested dict | ## Environment info
- `transformers` version: 4.16.2
- Platform: Linux-5.4.0-1066-aws-x86_64-with-debian-buster-sid
- Python version: 3.7.12
- PyTorch version (GPU?): 1.10.2 (True)
- Tensorflow version (GPU?): not installed (NA)
- Flax version (CPU?/GPU?/TPU?): not installed (NA)
- Jax version: not installed
- JaxLib version: not installed
- Using GPU in script?: Yes
- Using distributed or parallel set-up in script?: No
### Who can help
@stas00
## Information
Model I am using (Bert, XLNet ...): custom model that subclasses `VisionEncoderDecoderModel`
The problem arises when using:
* [ ] the official example scripts: (give details below)
* [x] my own modified scripts: (give details below)
Training an encoder-decoder model where the encoder is a `VisionTextDualEncoderModel`, i.e., the input is image + text and the output is text. When saving the model, I ran into the following error: `TypeError: Object of type dtype is not JSON serializable`.
Due to the model architecture, my encoder config has nested dicts, and it turns out that in those nested dicts the `torch_dtype` key has the value `torch.float32` instead of a string.
Changing `configuration_utils.dict_torch_dtype_to_str` to check for nested dicts resolved the issue:
```python
def dict_torch_dtype_to_str(self, d: Dict[str, Any]) -> None:
    """
    Checks whether the passed dictionary has a *torch_dtype* key and if it's not None, converts torch.dtype to a
    string of just the type. For example, `torch.float32` get converted into *"float32"* string, which can then be
    stored in the json format.
    """
    if d.get("torch_dtype", None) is not None and not isinstance(d["torch_dtype"], str):
        d["torch_dtype"] = str(d["torch_dtype"]).split(".")[1]
    for value in d.values():
        if isinstance(value, dict):
            self.dict_torch_dtype_to_str(value)
```
The tasks I am working on is:
* [ ] an official GLUE/SQUaD task: (give the name)
* [x] my own task or dataset: (give details below)
(Image + text)-to-text generation.
## To reproduce
Steps to reproduce the behavior:
1. Load a regular encoder decoder model
2. Modify the model config by adding a nested dict to either encoder config or decoder config
3. Set the value of the `torch_dtype` key to `torch.float32` in the nested dict
4. Try saving the model (a minimal sketch of these steps follows below)
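A minimal, hypothetical sketch of those steps (the checkpoints and the nested `sub_config` attribute are illustrative, not taken from the actual setup):
```python
import torch
from transformers import EncoderDecoderModel

model = EncoderDecoderModel.from_encoder_decoder_pretrained("bert-base-uncased", "bert-base-uncased")
# illustrative nested dict carrying a raw torch.dtype, as described above
model.config.encoder.sub_config = {"torch_dtype": torch.float32}
model.save_pretrained("./tmp-enc-dec")  # raises TypeError: Object of type dtype is not JSON serializable
```
With the recursive fix above, the nested `torch_dtype` entry would be converted to the string `"float32"` and saving would succeed.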
## Expected behavior
Saving should succeed, with the nested `torch_dtype` entry serialized as a string instead of raising `TypeError: Object of type dtype is not JSON serializable`. | 03-10-2022 17:23:06 | 03-10-2022 17:23:06 | Thank you for the report and proposing the fix, which looks great, @feifang24
the next step is to make a PR out of your proposed fix if that works for you.<|||||>That would be awesome! Thank you @stas00 for getting back to me so quickly π€© I will file a PR and tag you as a reviewer.<|||||>still broken in diffusers,
** edit: just kidding, not diffusers, that was just pointing at transformers (so. many. wrappers.) - in save_pretrained, lines 2133 and/or 2140, in tokenization_utils_base.py
here's a simple idea, not sure if it would interfere with anything else... now just trying to figure out how to gracefully apply this to a pipeline without replicating the entire save_pretrained function...
import json
import torch

class dtypeEncoder(json.JSONEncoder):
    def default(self, obj):
        if isinstance(obj, torch.dtype):  # might wanna add np, jnp types too?
            return str(obj)
        return json.JSONEncoder.default(self, obj)

## tests:
d = {"torch_dtype": torch.float16}
json.dumps(d)  ## fail: TypeError: Object of type dtype is not JSON serializable
json.dumps(d, cls=dtypeEncoder)  ## woot: {"torch_dtype": "torch.float16"}
transformers | 16,053 | closed | bug in model.generate() / beam_search score per token | Tried on transformers 4.16.2, not platform dependent (bug in the algo).
## Copy/paste of an email sent to cwkeam:
I tried to generate some phrases with transformers (BART), and then tried to get the perplexity of each character, and noticed the character scores were wrong.
If I had to give a half baked patch, it would look something like this:
```
generation_utils.py, line 2259
scores = None #torch.zeros((batch_size*num_beams, 0, ), dtype=torch.float, device=input_ids.device) if (return_dict_in_generate and output_scores) else None
generation_utils.py, line 2319
#if output_scores:
# scores += (next_token_scores,)
generation_utils.py, line 2337, insert:
#this is necessary because the next_token_scores get sorted (apparently in place)
#and also the variable gets overwritten (by a softmax, etc...)
old_input_scores = next_token_scores.view(batch_size*num_beams, vocab_size).clone()
generation_utils.py, line 2374:
#I'm not sure the clone() is necessary
if return_dict_in_generate and output_scores:
scores = old_input_scores[beam_idx, :].unsqueeze(-2) if scores is None else torch.cat( (scores[beam_idx, :, :].clone(), old_input_scores[beam_idx, :].unsqueeze(-2)), dim=-2)
```
This will give you proper scores for *the beams that generated the output*, not *the beams that were present when the nth token was generated*.
My pseudo-patch downsides:
- only corrects beam_sample, doesn't deal with the other generation functions
- the k best beam scores among n>k are not chosen and returned (this is done in the beam_scorer.finalize for the beams themselves)
I don't really know the coding standards for transformers, and am not very good with pytorch, but would you be interested in having such a patch ? The main point is token/phrase perplexity.
## Copy paste of a notebook reproducing the bug
```
import os,sys
import re, itertools, collections
import pandas, numpy
import transformers, torch
import torch.nn as nn
#not model dependent
from transformers import BartTokenizer, BartForConditionalGeneration, AdamW, BartConfig
tokenizer = BartTokenizer.from_pretrained('facebook/bart-base')
bart_model = BartForConditionalGeneration.from_pretrained("facebook/bart-base")
prompt = ['I went to a restaurant. '+review+'. The food was very <mask> in my opinion.' for review in ('The ambiance was bad',)]
#show the probability of the 5 most probable tokens at each token generated
tokens = tokenizer(prompt, return_tensors="pt", padding=True)
token_ids, attn_mask = tokens['input_ids'], tokens['attention_mask']
for i in range(2):
out = bart_model.generate(token_ids, max_length=100, do_sample=True,top_k=500,temperature=1.5, num_beams=4, num_return_sequences=1, output_scores=True, return_dict_in_generate=True, output_hidden_states=True)
outscores = torch.cat([out['scores'][i].unsqueeze(1) for i in range(len(out['scores']))], dim=1)
print("best beam is seq: ", out['beam_indices'][0][-1].item())
num_beams, num_chars, char_dim = outscores.shape
num_top = 5
print(out['beam_indices'])
for s in range(num_beams):
for c in range(num_chars):
probas = nn.functional.softmax(outscores[s,c,:], dim=-1)
top_proba, top_idx = probas.topk(num_top, dim=-1)
num_seqs = top_proba.shape[0]
print("seq"+str(s)+"char"+str(c),"---".join([ str("{:0.1f}".format(p.item()*100))+" "+tokenizer.decode(top_idx[i]) for i,p in enumerate(top_proba)]))
print("out",tokenizer.batch_decode(out['sequences'], skip_special_tokens=True))
```
# example output and analysis
```
out ['I went to a restaurant. The ambiance was bad. The food was very bland and unauthentic in my opinion.']
```
```
seq1char0 99.7 <s>---0.0 The---0.0 I---0.0 There---0.0 Each
seq1char1 99.9 I---0.0 </s>---0.0 We---0.0 It---0.0 You
seq1char2 100.0 went---0.0 ate---0.0 go---0.0 Went---0.0 came
seq1char3 100.0 to---0.0 a---0.0 on---0.0 in---0.0 at
seq1char4 100.0 a---0.0 A---0.0 this---0.0 an---0.0 the
seq1char5 100.0 restaurant---0.0 Chinese---0.0 Mexican---0.0 Thai---0.0 hotel
seq1char6 100.0 .---0.0 ,---0.0 where---0.0 in---0.0 that
seq1char7 99.9 The---0.0 It---0.0 The---0.0 the---0.0 I
seq1char8 100.0 amb---0.0 Amb---0.0 lighting---0.0 vibe---0.0 restaurant
seq1char9 100.0 iance---0.0 iances---0.0 ience---0.0 ation---0.0 amb
seq1char10 100.0 was---0.0 β---0.0 is---0.0 ,---0.0 were
seq1char11 100.0 bad---0.0 terrible---0.0 horrible---0.0 good---0.0 negative
seq1char12 100.0 .---0.0 ,---0.0 ;---0.0 and---0.0 -
seq1char13 99.8 The---0.0 It---0.0 the---0.0 I---0.0 There
seq1char14 99.9 food---0.0 restaurant---0.0 Food---0.0 music---0.0 wine
seq1char15 99.9 was---0.0 is---0.0 .---0.0 ,---0.0 in
seq1char16 99.8 very---0.0 not---0.0 terrible---0.0 bad---0.0 too
seq1char17 12.9 bad---10.6 good---3.2 poor---2.5 bland---2.3 mediocre
seq1char18 23.5 ,---15.2 .---12.5 in---9.3 and---7.6 but
seq1char19 69.1 my---3.0 the---1.1 its---0.9 a---0.8 general
seq1char20 15.4 app---7.8 interesting---6.2 ----5.1 inspired---3.4 original
seq1char21 93.2 my---0.5 the---0.3 their---0.2 its---0.2 a
seq1char22 99.7 opinion---0.0 view---0.0 favor---0.0 humble---0.0 taste
seq1char23 99.8 opinion---0.0 favor---0.0 view---0.0 taste---0.0 humble
seq1char24 99.9 .---0.0 ,---0.0 and---0.0 ;---0.0 ..
seq1char25 99.9 .---0.0 ,---0.0 and---0.0 ;---0.0 ..
seq1char26 100.0 .---0.0 ,---0.0 </s>---0.0 ;---0.0 and
```
You can see it reports generating 'opinion' twice, which is not the case in the output phrase.
This is because
scores += (logits_warper(input_ids, next_token_scores_processed),) #generation_utils.py, line 2319
is not the probability of the beams after a fork.
If, say, beam 0 has such a high score that all beams get replaced by a copy of beam 0 with a new token each, then the scores update should be
scores = scores[beam_0==0].repeat(1,4) + scores_for_new_tokens_added_to_beam_0
This is more or less what my pseudo-patch does correctly.
Tell me if you want a patch (I'm not ultra-proficient with pytorch but can make a first version) or want to take a look yourselves.
@cwkeam | 03-10-2022 16:35:50 | 03-10-2022 16:35:50 | (this really should be tagged to HF team @patrickvonplaten and I'm not sure if I'm at authority to answer but here's my reply)
So I think that the issue you're noticing is coming from the fact that `scores` is saved **before** the actual beam search process at step `k`.
```
If say beam 0 has such high score that all beams get replaced by a copy of beam 0 with a new token each, then the scores update should be
scores = scores[beam_0==0].repeat(1,4) + #scores_for_new_tokens_added_to_beam_0
```
Though it wouldn't be implemented this way, what you're expressing is that you want `scores` to reflect the beam selection done by the `BeamScorer`. I believe so because `If say beam 0 has such high score that all beams get replaced by a copy of beam 0 with a new token each` is something that happens inside `BeamScorer.process`.
Currently the code goes:
```python
outputs = # ... model outputs
# ...
next_token_scores_processed = logits_processor(input_ids, next_token_scores)
next_token_scores = next_token_scores_processed + beam_scores[:, None].expand_as(next_token_scores)
# Store scores, attentions and hidden_states when required
if return_dict_in_generate:
if output_scores:
scores += (next_token_scores,)
if output_attentions:
decoder_attentions += (
(outputs.decoder_attentions,) if self.config.is_encoder_decoder else (outputs.attentions,)
)
if self.config.is_encoder_decoder:
cross_attentions += (outputs.cross_attentions,)
if output_hidden_states:
decoder_hidden_states += (
(outputs.decoder_hidden_states,)
if self.config.is_encoder_decoder
else (outputs.hidden_states,)
)
# ... THEN do beam scorer process logic that actually picks the tokens to go with
beam_outputs = beam_scorer.process(
input_ids,
next_token_scores,
# ...
)
beam_scores = beam_outputs["next_beam_scores"]
beam_next_tokens = beam_outputs["next_beam_tokens"]
beam_idx = beam_outputs["next_beam_indices"]
input_ids = torch.cat([input_ids[beam_idx, :], beam_next_tokens.unsqueeze(-1)], dim=-1)
```
So currently it's:
```
STEP_1 input_ids -> step_1_scores -> SAVE
beamsearch(step_1_scores) -> beam_scores_1 -> STEP_2 input_ids
STEP_2 input_ids -> step_2_scores -> SAVE
beamsearch(step_2_scores) -> beam_scores_2 -> STEP_3 input_ids
```
and I think what you want is
```
STEP_1 input_ids -> step_1_scores
beamsearch(step_1_scores) -> beam_scores_1 -> SAVE -> STEP_2 input_ids
STEP_2 input_ids -> step_2_scores
beamsearch(step_2_scores) -> beam_scores_2 -> SAVE -> STEP_3 input_ids
```
or maybe even save before AND after beam search.
I think that @rbbb 's ultimate source of confusion is due to the fact that `beam_scores_1 != step_2_scores`.
```This will give you proper scores for the beams that generated the output, not the beams that were present when the nth token was generated.```
@patrickvonplaten what do you think?
<|||||>Yes, it depends what you want in those scores. I think currently it's the score for the n beams that were present in the search at the time that token in the sequence was chosen (even if a beam gets overwritten).
For most applications, I *think* people would want the probability of each token at each step of sequence creation (and ultimately, perplexity). But ultimately you'd have to ask people who use the library.<|||||>Hey @rbbb,
We've had quite some discussion about what exactly we should save in the `scores` list. Could you maybe take a look at this PR: https://github.com/huggingface/transformers/pull/14654 and all the linked issues to understand why `beam_scores` are implemented the way they are?
As you can see in the PR we don't save the conditional beam scores anymore, but the processed per token probabilities, so that it should be rather straight-forward to compute the perplexity<|||||>@patrickvonplaten
Looking at your discussion on the discuss page linked, this is exactly:
"The problem now is that the scores donβt really give any information about the probability of token j at time i which is what most people seem to be interested in."
So this issue is exactly another way of saying "I'd like to have the token probabilities (forward if it's autoregressive) for a given beam", which is exactly #14612, #14086, #14065 and the request at the end of #10012.
Also I think @cwkeam is right, my patch confuses scores before generating n_beams*2 next tokens and after. It does give legible output about which tokens are generated with which probability (might not be 100% correct though).
I guess I just expected to be able to do tokenizer.batch_decode(torch.topk(outscores[0,:,:], k=5)[1]) to see the top 5 tokens that could be generated at each step on the beam that generated the first phrase, and an additional softmax for the probas ...
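For reference, a rough sketch of that per-token computation on top of the `scores`/`beam_indices` returned after #14654, reusing the imports and the `bart_model`/`token_ids` variables from the notebook above. This is illustrative only: the exact semantics of `scores` and the padding of finished beams (pad tokens, `-1` beam indices) vary across versions and are ignored here.
```python
out = bart_model.generate(
    token_ids, max_length=100, num_beams=4,
    output_scores=True, return_dict_in_generate=True,
)
num_steps = min(len(out.scores), out.sequences.shape[1] - 1)
gen_tokens = out.sequences[0, 1 : 1 + num_steps]   # best sequence, minus the decoder start token
beam_idx = out.beam_indices[0][:num_steps]          # beam that produced each step of that sequence
token_logprobs = []
for step in range(num_steps):
    step_logprobs = out.scores[step][beam_idx[step]].log_softmax(-1)  # renormalize defensively
    token_logprobs.append(step_logprobs[gen_tokens[step]])
perplexity = torch.exp(-torch.stack(token_logprobs).mean())
```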
Closing this issue for now as it is exactly similar to the previous issues mentioned. |
transformers | 16,052 | closed | Update ArrowWriter path in examples | `ArrowWriter` is no longer in the top-level namespace in `datasets` v2.0 (see https://github.com/huggingface/datasets/pull/3875#discussion_r822924405), so I'm replacing the `datasets.ArrowWriter` occurrences with `datasets.arrow_writer.ArrowWriter`. | 03-10-2022 15:52:08 | 03-10-2022 15:52:08 | _The documentation is not available anymore as the PR was closed or merged._<|||||>Both are research projects that have `datasets` pinned to version 1.0.0 in their requirements, so I think they should be left as is. |
transformers | 16,051 | closed | TF: clearer model variable naming | ### This issue is part of our **Great Code Cleanup 2022**. If you're interested in helping out, take a look at [this thread](https://twitter.com/carrigmat/status/1502319813510766599), or come [join us on Discord](https://t.co/kS42XBvpWH) and talk with other contributors!
As introduced in https://github.com/huggingface/transformers/issues/15908 and implemented in https://github.com/huggingface/transformers/pull/15907, we now have a new `@unpack_inputs` decorator to unpack TensorFlow model `call()` arguments. In essence, if we apply the decorator, we can replace `inputs["foo"]` with `foo`, making the code for the layer/model much shorter and clearer.
This issue is a call for contributors to implement the new decorator in the architectures below. If you wish to contribute, reply in this thread with the architecture(s) you'd like to take :)
### Guide to contributing:
1. Ensure you've read our contributing [guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) π
2. Claim your architecture(s) in this thread (confirm no one is working on it) π―
3. Implement the changes as in https://github.com/huggingface/transformers/pull/15907 (see the diff on the model architectures for a few examples; a condensed before/after sketch also follows this guide) πͺ
- The file you want to look at is in `src/transformers/models/[model_name]/modeling_tf_[model_name].py`
- In functions that have an `input_processing` call, remove it and add the `@unpack_inputs` decorator
- Replace any use of the `inputs` variable in those functions (e.g. `inputs["foo"]` -> `foo`)
4. Test your changes on the architecture with `RUN_SLOW=1 py.test -vv tests/[model_name]/test_modeling_tf_[model_name].py` π»
5. Open the PR and tag me in it (don't forget to run `make fixup` before your final commit) π
- Note that some code is copied across our codebase. If you see a line like `# Copied from transformers.models.bert...`, this means that the code is copied from that source, and our scripts will automatically keep that in sync. If you see that, you should not edit the copied method! Instead, edit the original method it's copied from, and run make fixup to synchronize that across all the copies. Be sure you installed the development dependencies with `pip install -e ".[dev]"`, as described in the contributor guidelines above, to ensure that the code quality tools in `make fixup` can run.
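A condensed before/after sketch of the change (an illustrative skeleton, not code from any specific model; `self.encoder` and `self.config` are stand-ins):
```python
from transformers.modeling_tf_utils import input_processing, unpack_inputs

# before: manual unpacking through `input_processing`
class TFToyLayerBefore:
    def call(self, input_ids=None, attention_mask=None, training=False, **kwargs):
        inputs = input_processing(
            func=self.call, config=self.config, input_ids=input_ids,
            attention_mask=attention_mask, training=training, kwargs_call=kwargs,
        )
        return self.encoder(inputs["input_ids"], attention_mask=inputs["attention_mask"], training=inputs["training"])

# after: the decorator handles unpacking, so the arguments are used directly
class TFToyLayerAfter:
    @unpack_inputs
    def call(self, input_ids=None, attention_mask=None, training=False):
        return self.encoder(input_ids, attention_mask=attention_mask, training=training)
```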
### Models updated:
- [x] (the templates)
- [x] albert
- [x] auto
- [x] bart
- [x] bert
- [x] blenderbot
- [x] blenderbot_small
- [x] camembert
- [x] clip
- [x] convbert
- [x] convnext
- [x] ctrl
- [x] deberta
- [x] deberta_v2
- [x] distilbert
- [x] dpr
- [x] electra
- [x] encoder_decoder
- [x] flaubert
- [x] funnel
- [x] gpt2
- [x] hubert
- [x] layoutlm
- [x] led
- [x] longformer
- [x] lxmert
- [x] marian
- [x] mbart
- [x] mobilebert
- [x] mpnet
- [x] mt5
- [x] openai
- [x] pegasus
- [x] rag
- [x] rembert
- [x] roberta
- [x] roformer
- [x] speech_to_text
- [x] t5
- [x] tapas
- [x] transfo_xl
- [x] vision_encoder_decoder
- [x] vit
- [x] wave2vec2 π ignore this one for now, looking into the issue @infinite-Joy raised
- [x] xlm
- [x] xlm_roberta
- [x] xlnet | 03-10-2022 15:30:36 | 03-10-2022 15:30:36 | My favorite sport is deleting code :mechanical_arm:
I would be interested in taking `bart` :rocket: <|||||>Hi..would love to work on `encoder_decoder`<|||||>I'd like to work on gpt2!<|||||>I'll take longformer! <|||||>I can take CLIP and OpenAI. <|||||>Hi,
I would like to work on vision_encoder_decoder & distilbert<|||||>I'll take `mobilebert `and `layoutlm`<|||||>I'll take `mbart` and `t5`<|||||>I'll work on `vit`<|||||>I'll work on `albert`<|||||>electra, tapas, deberta v1 and v2, pegasus, xlnet<|||||>@Abdelrhman-Hosny The user doing the `t5` type annotations for TF said they're going to do the decorator too! I see you closed that PR already, but don't worry, it's being handled!<|||||>I have just made PR for `mpnet`, `rag`, `roformer`<|||||>I'll do `convnext`.
Also I found that the `hubert` and `camembert` (which inherits from roberta) models do not seem to have any `input_processing` call, so probably those can be marked as done<|||||>I can also do `led` and `lxmert`.<|||||>I would like to work on xlm_roberta<|||||>working on wave2vec2<|||||>I would like to work on `xlm`.<|||||>As @infinite-Joy noticed on discord, `wave2vec2` possibly needs a decorator update. I will be looking into it (and locking it out of this series of changes, for now) π <|||||>created PR for `transfo_xl` #16166 <|||||>Seems like xlm_roberta doesn't need modifications, I will work flaubert instead<|||||>If mt5 is up for grabs, I am in !<|||||>working on funnel<|||||>working on blenderbot<|||||> working on blenderbot_small<|||||>I'll do `marian` next<|||||>I'll work on convbert.<|||||>`auto` does not seem to have any `input_processing` methods. I'll take `ctrl` and don't know if any models after that are left (I could not find any)<|||||>Created PR for `XLM` https://github.com/huggingface/transformers/pull/16247<|||||>@robotjellyzone are you still intending to do DistilBERT? If not, i'd like to take that one, please.<|||||>> @robotjellyzone are you still intending to do DistilBERT? If not, i'd like to take that one, please.
Hi @jmwoloso, actually I am done with the modifications in the distilbert but got busy with my job. I will push the changes tomorrow.
But one thing, if you need something then you can take that vision_encoder_decoder model that one I have not touched yet <|||||>Hi @gante I don't seem to see any `inputs` methods in the `encoder_decoder` model can you pls help me in detecting where it needs changing? <|||||>Hi @bhavika since you already worked on two models if you don't mind please I'd like to submit PR for `lxmert` <|||||>@silvererudite it is possible that it doesn't have the function, I listed all TF models :) <|||||>
> Hi @bhavika since you already worked on two models if you don't mind please I'd like to submit PR for `lxmert`
Go ahead @silvererudite <|||||>> @silvererudite it is possible that it doesn't have the function, I listed all TF models :)
@gante I checked and there does not seem to be any `inputs` function in the model file, seems identical...I think `encoder_decoder` can be marked done then<|||||>Everyone, thank you so much for contributing to this issue π§‘ Together, we deleted >4000 lines of code, making the TF codebase more readable and easier to maintain πͺ
Tomorrow I will be updating any remaining architecture and closing this issue -- I'm assuming that claimed architectures will not be worked on this week. If you've claimed an architecture and are still interested in working on it, please let me know :)
Finally, if you've enjoyed this experience, there are other similar initiatives going on! Check issues with the `Good First Issue` (like [this one](https://github.com/huggingface/transformers/issues/16292)), or venture out in the harder `Good Second Issue`-labeled issues π |
transformers | 16,050 | closed | Text Generation Pipeline not using Target Tokenizer | ## Environment info
- `transformers` version: 4.18.0.dev0
- Platform: Linux-3.10.0-1160.53.1.el7.x86_64-x86_64-with-glibc2.10
- Python version: 3.8.12
- PyTorch version (GPU?): 1.8.1+cu101 (True)
- Tensorflow version (GPU?): 2.8.0 (False)
- Flax version (CPU?/GPU?/TPU?): 0.4.0 (cpu)
- Jax version: 0.3.1
- JaxLib version: 0.3.0
### Who can help
@patrickvonplaten @patil-suraj @Narsil
## Information
I would like to fine tune MBART, using two different tokenizers: one for the source, and one for the target.
That is mostly implemented, but has a few issues when the tokenizers are really different.
Specifically, I get UNKs all over the target data, a wrong vocab length, and in the translation pipeline I get rubbish on the output (source-vocabulary tokens decoded from target-vocabulary token ids).
## Expected behavior
Multiple SPMs should be properly implemented
I have made a specific PR for the pipeline: https://github.com/huggingface/transformers/pull/16049
And a PR for all of it: https://github.com/huggingface/transformers/pull/15946
And they work locally, however the tests don't pass and I'll need help to get this to work.
| 03-10-2022 15:14:25 | 03-10-2022 15:14:25 | Not stale - there's a PR
https://github.com/huggingface/transformers/pull/16049<|||||>Gently pinging @Narsil here :-)<|||||>Thanks for the ping.
I think we also need @patil-suraj's input on this too.
The tests do not pass right now, because `tgt_lang` is not defined by default on the tokenizer `AutoTokenizer.from_pretrained("facebook/mbart-large-cc25")`.
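For illustration, a quick way to see that default (sketch):
```python
from transformers import AutoTokenizer

tok = AutoTokenizer.from_pretrained("facebook/mbart-large-cc25")
print(getattr(tok, "tgt_lang", None))  # None unless set explicitly, e.g. tok.tgt_lang = "ro_RO"
```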
We can probably hack this in pipelines to avoid requiring modifying the `tokenizer`.
```diff
diff --git a/src/transformers/pipelines/text2text_generation.py b/src/transformers/pipelines/text2text_generation.py
index 54a3cfd60..4d3545e4f 100644
--- a/src/transformers/pipelines/text2text_generation.py
+++ b/src/transformers/pipelines/text2text_generation.py
@@ -301,6 +301,11 @@ class TranslationPipeline(Text2TextGenerationPipeline):
# translation, XX, to YY
preprocess_params["src_lang"] = items[1]
preprocess_params["tgt_lang"] = items[3]
+ # Necessary for MBart
+ if "src_lang" in preprocess_params:
+ self.tokenizer.src_lang = preprocess_params["src_lang"]
+ if "tgt_lang" in preprocess_params:
+ self.tokenizer.tgt_lang = preprocess_params["tgt_lang"]
return preprocess_params, forward_params, postprocess_params
def __call__(self, *args, **kwargs):
```
@patil-suraj mentioned that `.decode` should already use the `target_tokenizer` (https://github.com/huggingface/transformers/pull/16049#issuecomment-1064375571), so do we really need to handle it in the pipelines?
Another thing is that more recent Marian seems to use 2 entirely different tokenizers and this is going to make it harder to respect `added_tokens` and stuff like this IIUC: https://github.com/huggingface/transformers/issues/15982
<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored. |
transformers | 16,049 | closed | Use the target tokenizer in text generation pipeline | # What does this PR do?
A subset of https://github.com/huggingface/transformers/pull/15946
This PR addresses the issue of pipelines not decoding text with the target-side tokenizer.
This is split from the main PR because this affects the huggingface model hub and API, while the other code is applicable to run in a local copy | 03-10-2022 15:03:09 | 03-10-2022 15:03:09 | The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_16049). All of your documentation changes will be reflected on that endpoint.<|||||>Hi @AmitMY ,
Thanks for this PR! It would be nice to get it working with Marian correctly.
Unfortunately, this approach cannot work within pipelines.
`.as_target_tokenizer()` does not exist in most tokenizers, and therefore cannot be used (pipeline code is very generic). It also mutates internal state, which is likely to cause issues in threaded/multiprocessing contexts.
Here I can think of a better approach if we do indeed need two very distinct tokenizers (it seems to be the case, but the solution for #15946 might change the direction).
The way the pipeline is set up, we would need to define two tokenizers, a `preprocess_tokenizer` and a `postprocess_tokenizer` (or `source`/`target`). Each would be unique, and be used accordingly.
By default since most models use the same tokenizer, we could just set both variables to the same object and that should solve the issue at hand.
Before jumping to coding this, I think we should fix the first issue at hand #16050.
Btw, in that issue you mention Mbart, but in the code you modify Marian.
And a note for readers: there's also an open issue to support fast Marian tokenizers https://github.com/huggingface/transformers/issues/15982 which could end up being linked to this work.
<|||||>Not reviewing but commenting for information.
> `.as_target_tokenizer() does not exist in most tokenizers, and therefore cannot be used `
@Narsil `as_target_tokenizer` is defined in `PreTrainedTokenizerBase`, so it should be available for all tokenizers. By default it's a no-op, cf.
https://github.com/huggingface/transformers/blob/e66743e6c9601a4b12ffc2335a9f60a41d1ca60c/src/transformers/tokenization_utils_base.py#L3405-L3411
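For reference, a rough usage sketch of that pattern (illustrative texts and language codes; the base-class context manager is a no-op unless a tokenizer overrides it):
```python
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("facebook/mbart-large-cc25", src_lang="en_XX", tgt_lang="ro_RO")
src_texts, tgt_texts = ["Hello world"], ["Salut lume"]
inputs = tokenizer(src_texts, padding=True, return_tensors="pt")
with tokenizer.as_target_tokenizer():  # MBart switches the language special tokens here
    labels = tokenizer(tgt_texts, padding=True, return_tensors="pt").input_ids
```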
> By default since most models use the same tokenizer, we could just set both variables to the same object and that should solve the issue at hand.
Right now I think there are only two models that support two vocabs: FSMT and Marian (#15831), but for these two tokenizers it's not necessary to call `as_target_tokenizer` since they use the target tokenizer in `.decode/batch_decode`<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored. |
transformers | 16,048 | closed | MarianMTModel is not trained after finished training and loading | ## Environment info
- `transformers` version: 4.17.0.dev0
- Platform: Linux-5.4.0-92-generic-x86_64-with-glibc2.29
- Python version: 3.8.10
- PyTorch version (GPU?): 1.12.0.dev20220208+cu102 (True)
- Tensorflow version (GPU?): not installed (NA)
- Flax version (CPU?/GPU?/TPU?): not installed (NA)
- Jax version: not installed
- JaxLib version: not installed
- Using GPU in script?: yes
- Using distributed or parallel set-up in script?: no
### Who can help
@patrickvonplaten
## Information
I am training MarianMTModel for translation from scratch on the EuroParl dataset.
* Validation BLEU during training is around 30, checkpointing every 5000 steps, with the `load_best_model_at_end` argument.
* At the end of the training I save the model with `trainer.save_model('./trained')`.
* I then load this model with `MarianMTModel.from_pretrained("./trained")`. The loaded model does not appear to be trained; the BLEU score on the validation set is around 4.
* Came across this:
https://github.com/huggingface/transformers/blob/e66743e6c9601a4b12ffc2335a9f60a41d1ca60c/src/transformers/models/marian/modeling_marian.py#L1211-L1224
* Finally, solved the issue by setting `model._keys_to_ignore_on_save = None` (a short sketch of this workaround follows below).
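A minimal sketch of that workaround as applied before saving (reusing the `config`/`trainer` objects from the training setup above; note the follow-up comment at the end of this issue, where the real cause turned out to be an incorrect weight reset):
```python
from transformers import MarianMTModel

model = MarianMTModel(config)          # model trained from scratch, as described above
model._keys_to_ignore_on_save = None   # make Trainer.save_model / save_pretrained write every key,
                                       # including the static position embeddings Marian otherwise skips
# ... training with the Trainer ...
trainer.save_model("./trained")
reloaded = MarianMTModel.from_pretrained("./trained")
```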
I believe the problem is exactly the same as in issue #9525
I do not know about `_keys_to_ignore_on_load_missing`, but I think the `_keys_to_ignore_on_save` should be None.
My config also has `"static_position_embeddings": true`, but it still seems like the embeddings are trained too...
## To reproduce
Steps to reproduce the behavior:
1. Train MarianMTModel from scratch
2. Save checkpoint/best model with `trainer.save_model('./trained')`
3. Load checkpoint with from_pretrained
4. BLEU score is not the same (it is much lower)
## Expected behavior
The model should also save the weights of the positional embeddings.
| 03-10-2022 15:02:12 | 03-10-2022 15:02:12 | Found out the real issue. I was resetting the model incorrectly... I did reset_parameters on all layers (including positional embeddings), but in combination with static positional embeddings it made the model unusable after loading. My bad. |