Dataset columns:
- repo: stringclasses (1 value)
- number: int64 (1 to 25.3k)
- state: stringclasses (2 values)
- title: stringlengths (1 to 487)
- body: stringlengths (0 to 234k)
- created_at: stringlengths (19 to 19)
- closed_at: stringlengths (19 to 19)
- comments: stringlengths (0 to 293k)
transformers
15,947
open
Transformers documentation translation to Spanish
Hi! Let's bring the documentation to all the Spanish-speaking community :) Who would want to translate? **Please follow the instructions in the [Translating guide](https://github.com/huggingface/transformers/blob/main/docs/TRANSLATING.md)**. Here is a list of the files ready for translation. Let us know here if you'd like to translate any and we'll add your name to the list. Some notes: - Please translate using an informal tone (imagine you are talking with a friend about `transformers` 🤗). For example, use `Tú` instead of `Usted`; or `colabora` instead of `colabore`. - Please translate in a gender-neutral way. For example, instead of "Nosotros podemos" it could be "Podemos"; or "Los que quieran" could be "Las personas que quieran." - Add your translations to the folder called `es` inside the [`source` folder](https://github.com/huggingface/transformers/tree/9c9db751e29432e8924624ef44856cd9fa671ef3/docs/source). - Register your translation in [es/_toctree.yml](https://github.com/huggingface/transformers/blob/main/docs/source/es/_toctree.yml); please follow the order of the [English version](https://github.com/huggingface/transformers/blob/main/docs/source/en/_toctree.yml). - Once you're finished, open a pull request and tag this issue by including `#issue-number` in the description, where `issue-number` is the number of this issue. - 🙋 If you'd like others to help you with the translation, you can also post in our [forums](https://discuss.huggingface.co/) or tag [@osanseviero](https://twitter.com/osanseviero) on Twitter to gain some visibility. ### Get Started section - [x] [quicktour.mdx](https://github.com/huggingface/transformers/blob/master/docs/source/quicktour.mdx). @Duedme - [x] [installation.mdx](https://github.com/huggingface/transformers/blob/master/docs/source/installation.mdx). 
@lilianabs ### Tutorial section - [x] [pipeline_tutorial.mdx](https://github.com/huggingface/transformers/blob/master/docs/source/pipeline_tutorial.mdx) @FernandoLpz - [x] [autoclass_tutorial.mdx](https://github.com/huggingface/transformers/blob/master/docs/source/autoclass_tutorial.mdx) @Duedme - [x] [preprocessing.mdx](https://github.com/huggingface/transformers/blob/master/docs/source/preprocessing.mdx) @yharyarias - [x] [training.mdx](https://github.com/huggingface/transformers/blob/master/docs/source/training.mdx) @yharyarias - [x] [accelerate.mdx](https://github.com/huggingface/transformers/blob/master/docs/source/accelerate.mdx) @Sangohe - [x] [model_sharing.mdx](https://github.com/huggingface/transformers/blob/master/docs/source/model_sharing.mdx) @Gerard-170 - [x] [multilingual.mdx](https://github.com/huggingface/transformers/blob/master/docs/source/multilingual.mdx) @SimplyJuanjo ## How-to guides - [x] [fast_tokenizers.mdx](https://github.com/huggingface/transformers/blob/main/docs/source/en/fast_tokenizers.mdx "fast_tokenizers.mdx") @jloayza10 - [x] [create_a_model.mdx](https://github.com/huggingface/transformers/blob/b9a768b3ffa80c4c19d024f9f42d5917e7d8109e/docs/source/en/create_a_model.mdx "create_a_model.mdx") @ignacioct - [x] [custom_models.mdx](https://github.com/huggingface/transformers/blob/b9a768b3ffa80c4c19d024f9f42d5917e7d8109e/docs/source/en/custom_models.mdx "custom_models.mdx") @donelianc - [x] [run_scripts.mdx](https://github.com/huggingface/transformers/blob/b9a768b3ffa80c4c19d024f9f42d5917e7d8109e/docs/source/en/run_scripts.mdx "run_scripts.mdx") WIP @donelianc - [x] [sagemaker.mdx](https://github.com/huggingface/transformers/blob/b9a768b3ffa80c4c19d024f9f42d5917e7d8109e/docs/source/en/sagemaker.mdx "sagemaker.mdx") @SimplyJuanjo - [x] [converting_tensorflow_models.mdx](https://github.com/huggingface/transformers/blob/b9a768b3ffa80c4c19d024f9f42d5917e7d8109e/docs/source/en/converting_tensorflow_models.mdx "converting_tensorflow_models.mdx") @donelianc - [ ] [serialization.mdx](https://github.com/huggingface/transformers/blob/9c9db751e29432e8924624ef44856cd9fa671ef3/docs/source/en/serialization.mdx "serialization.mdx") - [ ] [performance.mdx](https://github.com/huggingface/transformers/blob/9c9db751e29432e8924624ef44856cd9fa671ef3/docs/source/en/performance.mdx "performance.mdx") - [ ] [parallelism.mdx](https://github.com/huggingface/transformers/blob/9c9db751e29432e8924624ef44856cd9fa671ef3/docs/source/en/parallelism.mdx "parallelism.mdx") WIP @astrocronopio - [ ] [benchmarks.mdx](https://github.com/huggingface/transformers/blob/9c9db751e29432e8924624ef44856cd9fa671ef3/docs/source/en/benchmarks.mdx "benchmarks.mdx") - [ ] [migration.mdx](https://github.com/huggingface/transformers/blob/9c9db751e29432e8924624ef44856cd9fa671ef3/docs/source/en/migration.mdx "migration.mdx") - [ ] [troubleshooting.mdx](https://github.com/huggingface/transformers/blob/9c9db751e29432e8924624ef44856cd9fa671ef3/docs/source/en/troubleshooting.mdx "troubleshooting.mdx") - [ ] [debugging.mdx](https://github.com/huggingface/transformers/blob/9c9db751e29432e8924624ef44856cd9fa671ef3/docs/source/en/debugging.mdx "debugging.mdx") WIP @SimplyJuanjo - [ ] [community.mdx](https://github.com/huggingface/transformers/blob/9c9db751e29432e8924624ef44856cd9fa671ef3/docs/source/en/community.mdx "community.mdx") - [ ] [add_new_model.mdx](https://github.com/huggingface/transformers/blob/b9a768b3ffa80c4c19d024f9f42d5917e7d8109e/docs/source/en/add_new_model.mdx "docs/source/en/add_new_model.mdx") - [ ] 
[add_new_pipeline.mdx](https://github.com/huggingface/transformers/blob/9c9db751e29432e8924624ef44856cd9fa671ef3/docs/source/en/add_new_pipeline.mdx "add_new_pipeline.mdx") - [ ] [testing.mdx](https://github.com/huggingface/transformers/blob/9c9db751e29432e8924624ef44856cd9fa671ef3/docs/source/en/testing.mdx "testing.mdx") - [ ] [pr_checks.mdx](https://github.com/huggingface/transformers/blob/9c9db751e29432e8924624ef44856cd9fa671ef3/docs/source/en/pr_checks.mdx "pr_checks.mdx") ## FINE-TUNE FOR DOWNSTREAM TASKS - [ ] [sequence_classification.mdx](https://github.com/huggingface/transformers/blob/b9a768b3ffa80c4c19d024f9f42d5917e7d8109e/docs/source/en/tasks/sequence_classification.mdx "sequence_classification.mdx") - [ ] [token_classification.mdx](https://github.com/huggingface/transformers/blob/b9a768b3ffa80c4c19d024f9f42d5917e7d8109e/docs/source/en/tasks/token_classification.mdx "token_classification.mdx") WIP @gpalomeque - [ ] [question_answering.mdx](https://github.com/huggingface/transformers/blob/b9a768b3ffa80c4c19d024f9f42d5917e7d8109e/docs/source/en/tasks/question_answering.mdx "question_answering.mdx") - [x] [language_modeling.mdx](https://github.com/huggingface/transformers/blob/b9a768b3ffa80c4c19d024f9f42d5917e7d8109e/docs/source/en/tasks/language_modeling.mdx "language_modeling.mdx") @jQuinRivero - [ ] [translation.mdx](https://github.com/huggingface/transformers/blob/b9a768b3ffa80c4c19d024f9f42d5917e7d8109e/docs/source/en/tasks/translation.mdx "translation.mdx") - [x] [summarization.mdx](https://github.com/huggingface/transformers/blob/b9a768b3ffa80c4c19d024f9f42d5917e7d8109e/docs/source/en/tasks/summarization.mdx "summarization.mdx") @AguilaCudicio - [ ] [audio_classification.mdx](https://github.com/huggingface/transformers/blob/b9a768b3ffa80c4c19d024f9f42d5917e7d8109e/docs/source/en/tasks/audio_classification.mdx "audio_classification.mdx") - [ ] [asr.mdx](https://github.com/huggingface/transformers/blob/b9a768b3ffa80c4c19d024f9f42d5917e7d8109e/docs/source/en/tasks/asr.mdx "asr.mdx") - [x] [image_classification.mdx](https://github.com/huggingface/transformers/blob/b9a768b3ffa80c4c19d024f9f42d5917e7d8109e/docs/source/en/tasks/image_classification.mdx "image_classification.mdx") @SimplyJuanjo - [ ] [multiple_choice.mdx](https://github.com/huggingface/transformers/blob/b9a768b3ffa80c4c19d024f9f42d5917e7d8109e/docs/source/en/tasks/multiple_choice.mdx "multiple_choice.mdx") ## CONCEPTUAL GUIDES - [x] [philosophy.mdx](https://github.com/huggingface/transformers/blob/9c9db751e29432e8924624ef44856cd9fa671ef3/docs/source/en/philosophy.mdx "philosophy.mdx") @[jkmg](https://github.com/jkmg) - [ ] [glossary.mdx](https://github.com/huggingface/transformers/blob/9c9db751e29432e8924624ef44856cd9fa671ef3/docs/source/en/glossary.mdx "glossary.mdx") - [ ] [pad_truncation.mdx](https://github.com/huggingface/transformers/blob/b9a768b3ffa80c4c19d024f9f42d5917e7d8109e/docs/source/en/pad_truncation.mdx "docs/source/en/pad_truncation.mdx") - [x] [bertology.mdx](https://github.com/huggingface/transformers/blob/9c9db751e29432e8924624ef44856cd9fa671ef3/docs/source/en/bertology.mdx "bertology.mdx") @jQuinRivero - [ ] [perplexity.mdx](https://github.com/huggingface/transformers/blob/9c9db751e29432e8924624ef44856cd9fa671ef3/docs/source/en/perplexity.mdx "perplexity.mdx") FYI @osanseviero @stevhliu @sgugger @mishig25
03-05-2022 06:03:55
03-05-2022 06:03:55
Hi, I can do the translation of **quicktour.mdx** in two days at the latest. <|||||>I'll translate "Fine-tune a pretrained model" into Spanish to file [training.mdx](https://github.com/huggingface/transformers/blob/master/docs/source/training.mdx).<|||||>Thanks, @Duedme and @yharyarias! That would be great. Please let me know anything you need.<|||||>Hi, I can translate [pipeline_tutorial.mdx](https://github.com/huggingface/transformers/blob/master/docs/source/pipeline_tutorial.mdx) 🙂<|||||>Hi, I would like to do the translation for [accelerate.mdx](https://github.com/huggingface/transformers/blob/master/docs/source/accelerate.mdx)<|||||>Hi, I will work on [model_sharing.mdx](https://github.com/huggingface/transformers/blob/master/docs/source/model_sharing.mdx) :D <|||||>Hey all! I updated the initial comment cc'ing the WIP translations. When you create a PR, please make sure to tag this issue by adding #15947 to it. <|||||>Hi, I can work on [installation.mdx](https://github.com/huggingface/transformers/blob/master/docs/source/installation.mdx) :D<|||||>Hi, I would like to translate [preprocessing.mdx](https://github.com/huggingface/transformers/blob/master/docs/source/preprocessing.mdx)<|||||>I made this Pull-Request #16158 to translate quicktour.mdx. I would like to work on **autoclass_tutorial.mdx**.<|||||>I new in the community but I'll like to translate [multilingual.mdx](https://github.com/huggingface/transformers/blob/master/docs/source/multilingual.mdx) I'm finishing my Udacity Deep Learning Nanodegree this week, therefore I can do the translation next monday if that's okey.<|||||>Thanks, @SimplyJuanjo! Welcome to the HF community and congrats on the Nanodegree 🤗. Sure, next week is perfect!<|||||>I made this pull request https://github.com/huggingface/transformers/pull/16329 to translate multilingual.mdx. Is everything done the right way? @omarespejel @osanseviero <|||||>Hello, I would like to translate philosophy.mdx<|||||>Hey, I would like to translate sagemaker.mdx<|||||>Hi, I'd like to translate [fast_tokenizers.mdx](https://github.com/huggingface/transformers/blob/main/docs/source/en/fast_tokenizers.mdx)<|||||>Thanks, @jloayza10 @SimplyJuanjo @jkmg! I just added your names to those files 🤗. Please let me know anything you need.<|||||>Hi! I would like to translate [create_a_model.mdx](https://github.com/huggingface/transformers/blob/b9a768b3ffa80c4c19d024f9f42d5917e7d8109e/docs/source/en/create_a_model.mdx), if it's still available <|||||>Hey! I would like to translate language_modeling.mdx. <|||||>Thank you @ignacioct and @jQuinRivero! I added your names to those files above. Will be reviewing your PRs.<|||||>Hi! I would like to translate [parallelism.mdx](https://github.com/huggingface/transformers/blob/9c9db751e29432e8924624ef44856cd9fa671ef3/docs/source/en/parallelism.mdx). Do I translate English words on images as well?<|||||>Hi, @omarespejel I would like to translate bertology.mdx next!<|||||>Gracias @jQuinRivero y @astrocronopio! I added you in the main comment to the file you will translate 🤗 @astrocronopio, do not worry about the text in images .<|||||>Sorry for the delay, I've been settling into a new position as a junior AI programmer this month. I made this pull request https://github.com/huggingface/transformers/pull/17262 to translate sagemaker.mdx. Should I also translate the documentations mentioned in the table of contents of this documentation? 
@omarespejel @osanseviero @sgugger Additionally, I would like to translate [image_classification.mdx](https://github.com/huggingface/transformers/blob/b9a768b3ffa80c4c19d024f9f42d5917e7d8109e/docs/source/en/tasks/image_classification.mdx) if possible.<|||||>Congrats on the new position and we wish you a lot of success in it, @SimplyJuanjo! 🤗 No, for the moment it is not required to translate the documentation mentioned in `sagemaker.mdx`. Thanks for noting that! Thanks for the `sagemaker.mdx` PR! Thank you! I will tag you for `image_classfication.mdx` 🚀 <|||||>Thx @omarespejel! 🤗 Already added the `image_classification.mdx` to my PR. I would like to translate now `debbuging.mdx` if possible.<|||||>@SimplyJuanjo thank you for your PR. I am checking it. Sure! You can start `debugging.mdx`, that would be great! Thank you. I will add your name to the checklist above.<|||||>Hola, me gustaría contribuir traduciendo token_classification.mdx<|||||>Hola @gpalomeque! Thank you very much! That would be great. Let me add you to the list.<|||||>Hola @omarespejel, I'd like to translate the chapter _custom_models.mdx_.<|||||>Sounds good! Feel free to open a PR @donelianc <|||||>Hola envié el pull request 17992. Veo que aun esta disponible la tarea question_answering.mdx y me gustaría realizar la traducción correspondiente. Saludos.<|||||>Hola @gpalomeque! Muchas gracias por tu traducción de `token_classification.mdx` #17992. Coloqué algunos comentarios. Sorry for my late reply and review. Are you still interested in translating `question_answering.mdx`? <|||||>Hi @donelianc! I assigned you to `run_scripts` as you mentioned in your previous PR! Thanks! :)<|||||>Hi, I'd like to translate summarization.mdx @omarespejel<|||||>Hi @AguilaCudicio! That would be amazing. Thanks I added you to the list :)<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread. Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored.<|||||>Hi all, I'd like to translate question_answering.mdx and multiple_choice.mdx :) @omarespejel <|||||>Feel free to open a PR @alceballosa :) <|||||>Hey 🤗 team! I noticed this task being inactive for some days now. I'll try to keep working on it, but updating the task checklist for the latest contributions would be helpful to avoid duplications. I'll translate [serialization.mdx](https://github.com/huggingface/transformers/blob/9c9db751e29432e8924624ef44856cd9fa671ef3/docs/source/en/serialization.mdx) and submit a PR when it's ready. <|||||>Hi all! I'd like to contribute with [performance.mdx](https://github.com/huggingface/transformers/blob/9c9db751e29432e8924624ef44856cd9fa671ef3/docs/source/en/performance.mdx) spanish translation. Could it be assigned to me first please?<|||||>Hi folks, I'd like to translate the [pr_checks.mdx](https://github.com/huggingface/transformers/blob/9c9db751e29432e8924624ef44856cd9fa671ef3/docs/source/en/pr_checks.mdx) doc. I'll submit a PR when ready.<|||||>Hi all, I'd like to translate [asr.mdx](https://github.com/huggingface/transformers/blob/b9a768b3ffa80c4c19d024f9f42d5917e7d8109e/docs/source/en/tasks/asr.mdx). As before, I'll submit a PR once the translation is ready. 
<|||||>I made this pull request https://github.com/huggingface/transformers/pull/20566 to add Spanish translation of [debugging.mdx](https://github.com/huggingface/transformers/blob/master/docs/source/en/debugging.mdx) and also corrected one typo in the original english doc. @omarespejel @osanseviero @sgugger Additionally, I would like to translate [add_new_model.mdx](https://github.com/huggingface/transformers/blob/b9a768b3ffa80c4c19d024f9f42d5917e7d8109e/docs/source/en/add_new_model.mdx) if possible.<|||||>Hi all! I would like to translate [community.mdx](https://github.com/huggingface/transformers/blob/9c9db751e29432e8924624ef44856cd9fa671ef3/docs/source/en/community.mdx). I'll make a pull request when the translation is ready! I'm not a native speaker of Spanish (I'm Japanese) but I'll do my best with the skills I earned working in Guatemala for 2 years :)<|||||>Hey all! As some people were interested in a place to discuss about translations, we opened a category in the [HF Discord server](http://hf.co/join/discord) with a category for internationalization and translation efforts, including a Spanish channel!
transformers
15,946
closed
[HELP NEEDED] Use target tokenizer for all functionality on the target side
# What does this PR do? When using the MarianTokenizer with a different source and target SPMs, it is expected to get the length of the target vocabulary when performing: ```python with tokenizer.as_target_tokenizer(): print(len(tokenizer)) ``` Along with the expectation to be able to get the target vocabulary, and in translation time to use the target tokenizer. ## Before submitting - [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section? - [ ] Did you write any new necessary tests? ## HELP NEEDED Tests do not pass, as the token IDs returned change. However, the IDs are taken directly from the SPM model and not the VOCAB file, and I do not understand why my new values are less correct compared to the ones in the test files.
03-05-2022 00:31:05
03-05-2022 00:31:05
The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_15946). All of your documentation changes will be reflected on that endpoint.<|||||>Hi @AmitMY , I am not 100% sure not knowing all the internals here, but it's pretty frequent that researchers add code on top of SPM and modify the ids. A behavior which is mimicked in `transformers` so that the actual model works. So the difference of ids could be just that, and we really the code to stay the same. I don't have a lot of background on this code to give you a possible better approach, sorry :(<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread. Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored.
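A usage sketch of the behaviour the PR above (and the related #15943) targets. The checkpoint name is only an example of a Marian model that ships separate source/target SPM files, and the "expected" comments describe the post-PR behaviour rather than what a released `transformers` version necessarily does:

```python
from transformers import MarianTokenizer

# Assumed checkpoint; any Marian model with distinct source.spm / target.spm applies.
tokenizer = MarianTokenizer.from_pretrained("Helsinki-NLP/opus-mt-en-de")

inputs = tokenizer("Hello world", return_tensors="pt")      # encoded with the source SPM
with tokenizer.as_target_tokenizer():
    print(len(tokenizer))                                    # expected: size of the *target* vocabulary
    labels = tokenizer("Hallo Welt", return_tensors="pt")    # expected: ids from the *target* SPM
print(inputs["input_ids"], labels["input_ids"])
```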
transformers
15,945
closed
Unable to run run_glue.py offline
## Environment info An host without internet access but with ~/.cache/huggingface pre-populated ### Who can help @sgugger @LysandreJik @SaulLu ## Information Model I am using (Bert, XLNet ...): ``` --model_name_or_path bert-base-uncased --task_name mrpc --do_train --output_dir /home/jobuser/output_bart_test --per_device_train_batch_size 16 --do_eval --do_predict --per_device_eval_batch_size 8 --overwrite_output_dir --cache_dir /home/jobuser/hf_cache ``` The problem arises when using: * [x] the official example scripts: (give details below) * [ ] my own modified scripts: (give details below) The tasks I am working on is: * [x] an official GLUE/SQUaD task: (give the name) * [ ] my own task or dataset: (give details below) ## To reproduce 1. Run https://github.com/huggingface/transformers/blob/master/examples/tensorflow/text-classification/run_glue.py with internet using ``` --model_name_or_path bert-base-uncased --task_name mrpc --do_train --output_dir /home/jobuser/output_bart_test --per_device_train_batch_size 16 --do_eval --do_predict --per_device_eval_batch_size 8 --overwrite_output_dir --cache_dir /home/jobuser/hf_cache ``` 2. Run the same command without internet (literally disconnect the wire / wifi) 3. It fails at https://github.com/huggingface/transformers/blob/master/examples/tensorflow/text-classification/run_glue.py#L282 ## Expected behavior It should leverage the existing caches and train without breaking
03-04-2022 21:45:17
03-04-2022 21:45:17
You need to set the env variable `HF_DATASETS_OFFLINE=1 TRANSFORMERS_OFFLINE=1` to be able to use the libraries in offline mode, as pointed out in the [documentation](https://huggingface.co/docs/transformers/installation#offline-mode)<|||||>@sgugger thanks for the blazing fast response! I forgot to add that I set `HF_DATASETS_OFFLINE=1`. I didnt know about `TRANSFORMERS_OFFLINE=1`. It works now.
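A minimal sketch of the fix described above, assuming the cache was populated by an earlier online run; both variables must be set before `transformers`/`datasets` are imported:

```python
import os

# Documented offline switches; set them before importing the libraries.
os.environ["HF_DATASETS_OFFLINE"] = "1"
os.environ["TRANSFORMERS_OFFLINE"] = "1"

from transformers import AutoModelForSequenceClassification, AutoTokenizer

# Resolved purely from ~/.cache/huggingface; raises instead of hitting the network
# if a required file was never cached.
tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
model = AutoModelForSequenceClassification.from_pretrained("bert-base-uncased", num_labels=2)
```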
transformers
15,944
closed
TF generate refactor - past without encoder outputs
# What does this PR do? As discussed in the original TF generate refactor plan (https://github.com/huggingface/transformers/pull/15562), removes the `encoder_outputs` from `past`. In practice, these changes consist mostly in: 1. Delete the lines flagged by Patrick; 2. Adapt `prepare_inputs_for_generation` and `_reorder_cache` from PT to TF, for each class. Three important notes: 1. Beam search was still in the old format, and a few changes there were needed to enable the changes above. They were mostly about how `past` or `encoder_outputs` were handled; 2. Some models have `cross_attn_head_mask` in `prepare_inputs_for_generation`, in their PT implementation, but raised errors in TF -> I've deleted it from the function output; 3. I've run `RUN_SLOW=1 pytest -vv tests/model_name/test_modeling_tf_model_name.py` for all affected models.
03-04-2022 20:29:21
03-04-2022 20:29:21
Let's merge this? cc @Rocketknight1 ?
transformers
15,943
closed
MarianTokenizer: get vocab length in `as_target_tokenizer` mode
# What does this PR do? When using the MarianTokenizer with a different source and target SPMs, it is expected to get the length of the target vocabulary when performing: ```python with tokenizer.as_target_tokenizer(): print(len(tokenizer)) ``` This PR introduces a minimal change to get the correct size depending on the selected SPM. ## Before submitting - [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section? - [ ] Did you write any new necessary tests?
03-04-2022 17:49:34
03-04-2022 17:49:34
The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_15943). All of your documentation changes will be reflected on that endpoint.
transformers
15,942
closed
Made MaskFormerModelTest faster
# What does this PR do? This PR changes the configuration inside `tests/maskformer/test_modeling_maskformer.py::MaskFormerModelTest` creating a smaller model to make the test run faster
03-04-2022 17:23:42
03-04-2022 17:23:42
The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_15942). All of your documentation changes will be reflected on that endpoint.
transformers
15,941
closed
[LayoutLMv2] Update requires_backends of feature extractor
# What does this PR do? This PR moves the `requires_backends` to the call method instead of the init of `LayoutLMv2FeatureExtractor`. Fixes #15269
03-04-2022 17:12:28
03-04-2022 17:12:28
The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_15941). All of your documentation changes will be reflected on that endpoint.
transformers
15,940
closed
Override _pad in LEDTokenizer to deal with global_attention_mask
# What does this PR do? Fix #14648. This PR allows `LEDTokenizer._pad` to treat `global_attention_mask` if it is provided in `encoded_inputs`. Without this, `global_attention_mask` won't be padded, while other tensors will be padded (if users specify padding), and causes the following error (if `return_tensors` is specified) ``` ValueError: Unable to create tensor, you should probably activate truncation and/or padding with 'padding=True' 'truncation=True' to have batched tensors with the same length. ``` ## More context Basically, this PR copies the method `_pad` defined in `https://github.com/huggingface/transformers/blob/e3645fd2806b1e0b9daec89a72e316b71be8609c/src/transformers/tokenization_utils_base.py#L3159` to `tokenization_led.py` and tokenization_led_fast.py, and added the following block to deal with `global_attention_mask`: https://github.com/huggingface/transformers/blob/df6b574ae2b12a71548cf06c7789b3f2fc60571d/src/transformers/models/led/tokenization_led.py#L113-L116 https://github.com/huggingface/transformers/blob/df6b574ae2b12a71548cf06c7789b3f2fc60571d/src/transformers/models/led/tokenization_led.py#L127-L130 ## The effect See [this comment](https://github.com/huggingface/transformers/pull/15940#issuecomment-1066113823)
03-04-2022 15:54:34
03-04-2022 15:54:34
_The documentation is not available anymore as the PR was closed or merged._<|||||>## Results before/after this PR ```python import numpy as np from transformers import LEDTokenizer, LEDTokenizerFast tokenizer_slow = LEDTokenizer.from_pretrained("allenai/led-base-16384") tokenizer_fast = LEDTokenizerFast.from_pretrained("allenai/led-base-16384") text_1 = "I love dogs" text_2 = "I love dogs and cats" texts = [text_1, text_2] model_inputs_slow = tokenizer_slow(texts, max_length=tokenizer_slow.model_max_length, padding=False, truncation=True) model_inputs_fast = tokenizer_fast(texts, max_length=tokenizer_fast.model_max_length, padding=False, truncation=True) model_inputs_slow["global_attention_mask"] = [np.zeros_like(input).tolist() for input in model_inputs_slow["input_ids"]] model_inputs_fast["global_attention_mask"] = [np.zeros_like(input).tolist() for input in model_inputs_fast["input_ids"]] # put global attention on <s> token for input in model_inputs_slow["global_attention_mask"][:]: input[0] = 1 for input in model_inputs_fast["global_attention_mask"][:]: input[0] = 1 print("`model_inputs` without padding (slow tokenizer)") print(model_inputs_slow) print("`model_inputs` without padding (fast tokenizer)") print(model_inputs_fast) model_inputs_slow = tokenizer_slow.pad( model_inputs_slow, padding=True, max_length=tokenizer_slow.model_max_length, ) model_inputs_fast = tokenizer_fast.pad( model_inputs_fast, padding=True, max_length=tokenizer_slow.model_max_length, ) print("=" * 30) print("`model_inputs` with padding (slow tokenizer)") print(model_inputs_slow) print("`model_inputs` with padding (fast tokenizer)") print(model_inputs_fast) ``` ## Outputs before this PR (note that `global_attention_mask` is not padded) ```python `model_inputs` without padding (slow tokenizer) {'input_ids': [[0, 100, 657, 3678, 2], [0, 100, 657, 3678, 8, 10017, 2]], 'attention_mask': [[1, 1, 1, 1, 1], [1, 1, 1, 1, 1, 1, 1]], 'global_attention_mask': [[1, 0, 0, 0, 0], [1, 0, 0, 0, 0, 0, 0]]} `model_inputs` without padding (fast tokenizer) {'input_ids': [[0, 100, 657, 3678, 2], [0, 100, 657, 3678, 8, 10017, 2]], 'attention_mask': [[1, 1, 1, 1, 1], [1, 1, 1, 1, 1, 1, 1]], 'global_attention_mask': [[1, 0, 0, 0, 0], [1, 0, 0, 0, 0, 0, 0]]} ============================== `model_inputs` with padding (slow tokenizer) {'input_ids': [[0, 100, 657, 3678, 2, 1, 1], [0, 100, 657, 3678, 8, 10017, 2]], 'attention_mask': [[1, 1, 1, 1, 1, 0, 0], [1, 1, 1, 1, 1, 1, 1]], 'global_attention_mask': [[1, 0, 0, 0, 0], [1, 0, 0, 0, 0, 0, 0]]} `model_inputs` with padding (fast tokenizer) {'input_ids': [[0, 100, 657, 3678, 2, 1, 1], [0, 100, 657, 3678, 8, 10017, 2]], 'attention_mask': [[1, 1, 1, 1, 1, 0, 0], [1, 1, 1, 1, 1, 1, 1]], 'global_attention_mask': [[1, 0, 0, 0, 0], [1, 0, 0, 0, 0, 0, 0]]} ``` ## Outputs with this PR ```python `model_inputs` without padding (slow tokenizer) {'input_ids': [[0, 100, 657, 3678, 2], [0, 100, 657, 3678, 8, 10017, 2]], 'attention_mask': [[1, 1, 1, 1, 1], [1, 1, 1, 1, 1, 1, 1]], 'global_attention_mask': [[1, 0, 0, 0, 0], [1, 0, 0, 0, 0, 0, 0]]} `model_inputs` without padding (fast tokenizer) {'input_ids': [[0, 100, 657, 3678, 2], [0, 100, 657, 3678, 8, 10017, 2]], 'attention_mask': [[1, 1, 1, 1, 1], [1, 1, 1, 1, 1, 1, 1]], 'global_attention_mask': [[1, 0, 0, 0, 0], [1, 0, 0, 0, 0, 0, 0]]} ============================== `model_inputs` with padding (slow tokenizer) {'input_ids': [[0, 100, 657, 3678, 2, 1, 1], [0, 100, 657, 3678, 8, 10017, 2]], 'attention_mask': [[1, 1, 1, 1, 1, 0, 0], [1, 1, 1, 1, 1, 1, 
1]], 'global_attention_mask': [[1, 0, 0, 0, 0, -1, -1], [1, 0, 0, 0, 0, 0, 0]]} `model_inputs` with padding (fast tokenizer) {'input_ids': [[0, 100, 657, 3678, 2, 1, 1], [0, 100, 657, 3678, 8, 10017, 2]], 'attention_mask': [[1, 1, 1, 1, 1, 0, 0], [1, 1, 1, 1, 1, 1, 1]], 'global_attention_mask': [[1, 0, 0, 0, 0, -1, -1], [1, 0, 0, 0, 0, 0, 0]]} ``` <|||||>- Call `super()._pad` then dealing with `global_attention_mask` (cc @sgugger ) - Add a comment about using `-1` instead of `0`.<|||||>Would wait a bit for @sgugger to check the part regarding `calling the super method` before merge.
transformers
15,939
closed
Fix LayoutLMv2 test
# What does this PR do? This PR fixes a failing LayoutLMv2 test, `test_torch_encode_plus_sent_to_model`. It also removes `requires_scatter`, because LayoutLMv2 doesn't depend on it.
03-04-2022 13:56:44
03-04-2022 13:56:44
The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_15939). All of your documentation changes will be reflected on that endpoint.
transformers
15,938
closed
Backprop Test for Freeze FlaxWav2Vec2 Feature Encoder
This PR correctly implements a back propagation test to verify the functionality of the `freeze_feature_encoder` argument added to the FlaxWav2Vec2 Model in #15873. It tests: 1. That the computed loss for the frozen feature encoder model and unfrozen model are **equal**. 2. That the gradients of the frozen feature encoder **differ** to those of the unfrozen feature encoder. 3. That the gradients of all other unfrozen layers remain **equal**.
03-04-2022 12:48:55
03-04-2022 12:48:55
The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_15938). All of your documentation changes will be reflected on that endpoint.<|||||>If @patil-suraj is happy with this test I'll merge!
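A toy illustration of the three checks listed in the PR above, with a two-parameter function standing in for FlaxWav2Vec2 and `stop_gradient` standing in for `freeze_feature_encoder`; the real test exercises the actual model and loss:

```python
import jax
import jax.numpy as jnp

params = {"encoder": jnp.ones(3), "head": jnp.ones(3)}
x = jnp.array([0.5, -1.0, 2.0])

def loss_fn(p, freeze_encoder=False):
    enc = jax.lax.stop_gradient(p["encoder"]) if freeze_encoder else p["encoder"]
    return jnp.sum(enc * x * p["head"])

loss_frozen, grads_frozen = jax.value_and_grad(lambda p: loss_fn(p, True))(params)
loss_unfrozen, grads_unfrozen = jax.value_and_grad(lambda p: loss_fn(p, False))(params)

assert jnp.allclose(loss_frozen, loss_unfrozen)                              # 1. losses are equal
assert not jnp.allclose(grads_frozen["encoder"], grads_unfrozen["encoder"])  # 2. frozen encoder grads differ (all zero)
assert jnp.allclose(grads_frozen["head"], grads_unfrozen["head"])            # 3. all other grads are unchanged
```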
transformers
15,937
closed
Add `ForInstanceSegmentation` models to `image-segmentation` pipelines
# What does this PR do? Add `ForInstanceSegmentation` models to `image-segmentation` pipelines - Requires: https://github.com/huggingface/transformers/pull/15936 - Requires: https://github.com/huggingface/transformers/pull/15934 Marking it as draft in the meantime.
03-04-2022 11:24:15
03-04-2022 11:24:15
The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_15937). All of your documentation changes will be reflected on that endpoint.
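A hedged sketch of what the PR above enables once merged; the checkpoint name is an assumption (any Hub checkpoint with a `MaskFormerForInstanceSegmentation` head should work), and the exact output keys are what the pipeline tests pin down:

```python
from transformers import pipeline

segmenter = pipeline("image-segmentation", model="facebook/maskformer-swin-base-coco")
results = segmenter("http://images.cocodataset.org/val2017/000000039769.jpg")
for r in results:
    # Each prediction carries a label, a confidence score and a binary mask.
    print(r["label"], r.get("score"))
```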
transformers
15,936
closed
Return MaskFormer outputs only when asked for.
# What does this PR do? Change the return output from `()` to `None` which seems more aligned with the rest of the library. Also `auxiliary_logits` seem optional and don't seem to be used by the feature extractor, so this PR makes them optional too. Note: I couldn't test the modeling tests, there seem to be no fast tests, and the slow tests are failing for reasons seemingly unrelated to this PR.
03-04-2022 10:34:08
03-04-2022 10:34:08
The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_15936). All of your documentation changes will be reflected on that endpoint.
transformers
15,935
closed
param: logging_dir does not work
## Environment info <!-- You can run the command `transformers-cli env` and copy-and-paste its output below. Don't forget to fill out the missing fields in that output! --> - `transformers` version:4.16.2 - Platform:linux - Python version:3.7.11 - PyTorch version (GPU?):1.10.2 GPU - Tensorflow version (GPU?):no tensorflow - Using GPU in script?:yes - Using distributed or parallel set-up in script?: ## Information Model I am using (Bert, XLNet ...):unicamp-dl/translation-en-pt-t5 The problem arises when using: * [ ] the official example scripts: (give details below) `from transformers import BertForSequenceClassification, Trainer, TrainingArguments model = BertForSequenceClassification.from_pretrained("bert-large-uncased") training_args = TrainingArguments( output_dir='./results', # output directory num_train_epochs=3, # total # of training epochs per_device_train_batch_size=16, # batch size per device during training per_device_eval_batch_size=64, # batch size for evaluation warmup_steps=500, # number of warmup steps for learning rate scheduler weight_decay=0.01, # strength of weight decay logging_dir='./logs', # directory for storing logs ) trainer = Trainer( model=model, # the instantiated 🤗 Transformers model to be trained args=training_args, # training arguments, defined above train_dataset=train_dataset, # training dataset eval_dataset=test_dataset # evaluation dataset )` * [ ] my own modified scripts: (give details below) ` model_checkpoint = "unicamp-dl/translation-en-pt-t5" model = AutoModelForSeq2SeqLM.from_pretrained(model_checkpoint) args = Seq2SeqTrainingArguments( output_dir=f"{model_name}-finetuned-{source_lang}-to-{target_lang}-{time_stamp}", overwrite_output_dir=True, evaluation_strategy="epoch", prediction_loss_only=False, learning_rate=2e-5, per_device_train_batch_size=batch_size, # batch size per device during training per_device_eval_batch_size=batch_size, # batch size for evaluation weight_decay=0.01, save_steps=3000, eval_steps=10000, logging_dir='./logs', logging_steps=500, save_total_limit=3, num_train_epochs=10, predict_with_generate=True, fp16=True, push_to_hub=False ) trainer = Seq2SeqTrainer( model, args, train_dataset=tokenized_datasets["train"], eval_dataset=tokenized_datasets["valid"], data_collator=data_collator, tokenizer=tokenizer, compute_metrics=compute_metrics ) trainer.train() ` I could not found the dir logs which should be found in same dir of the script above ## Expected behavior <!-- A clear and concise description of what you would expect to happen. --> ![image](https://user-images.githubusercontent.com/44344942/156742529-e062727f-c60c-4d22-8134-7325ee0bd29d.png) @sgugger thanks
03-04-2022 10:02:45
03-04-2022 10:02:45
How do you launch the script? In particular if you launch it from another directory, the logging dir will appear there.<|||||>thanks very much,I found it is because I launch the script with Environment without Tensorflow installed(by the way, I get no warning about this during the process, haha)<|||||>I have same problem. When Run with 1 GPU or no GPU environment, That option works fine. but with multi GPU it not works.<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread. Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored.
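A minimal sketch of the workaround implied by the discussion above, assuming TensorBoard logging is wanted: pass an absolute `logging_dir` so the location does not depend on the launch directory, and make the reporting backend explicit so a missing tensorboard/tensorboardX install surfaces as an error instead of logs silently never appearing:

```python
from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir="./results",
    logging_dir="/home/jobuser/logs",  # absolute path: independent of where the script is launched from
    logging_steps=500,
    report_to=["tensorboard"],         # requires the tensorboard (or tensorboardX) package to be installed
)
print(training_args.logging_dir)
```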
transformers
15,934
closed
Adding `MODEL_FOR_INSTANCE_SEGMENTATION_MAPPING`
# What does this PR do? Adding `MODEL_FOR_INSTANCE_SEGMENTATION_MAPPING` with MaskFormerForInstanceSegmentation in it. And `AutoModelForInstanceSegmentation`.
03-04-2022 09:46:23
03-04-2022 09:46:23
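A short usage sketch of the auto class the PR above introduces; the checkpoint name is an assumption, and MaskFormer is the only architecture registered in the new mapping at this point:

```python
from transformers import AutoModelForInstanceSegmentation

model = AutoModelForInstanceSegmentation.from_pretrained("facebook/maskformer-swin-base-coco")
print(type(model).__name__)  # MaskFormerForInstanceSegmentation
```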
transformers
15,933
closed
TypeError: meshgrid() got an unexpected keyword argument 'indexing'
when i tried to run the example given in ViLTModel below. I encountered problem"TypeError: meshgrid() got an unexpected keyword argument 'indexing'" `from transformers import ViltProcessor, ViltModel from PIL import Image import requests url = "http://images.cocodataset.org/val2017/000000039769.jpg" image = Image.open(requests.get(url, stream=True).raw) text = "hello world" processor = ViltProcessor.from_pretrained("dandelin/vilt-b32-mlm") model = ViltModel.from_pretrained("dandelin/vilt-b32-mlm") inputs = processor(image, text, return_tensors="pt") outputs = model(**inputs) last_hidden_states = outputs.last_hidden_state` - `transformers` version: 4.16.2 - Platform: Linux-4.18.0-193.el8.x86_64-x86_64-with-centos-8.2.2004-Core - Python version: 3.7.11 - PyTorch version (GPU?): 1.9.1+cu111 (True) - Tensorflow version (GPU?): not installed (NA) - Flax version (CPU?/GPU?/TPU?): not installed (NA) - Jax version: not installed - JaxLib version: not installed - Using GPU in script?: NA - Using distributed or parallel set-up in script?: NA
03-04-2022 06:51:03
03-04-2022 06:51:03
Hi, You may need to upgrade to PyTorch 1.10 for this to work.<|||||>Sorry i'm new in nlp, I changed the pytorch to 1.10 and it comes another problem. Some weights of the model checkpoint at dandelin/vilt-b32-mlm were not used when initializing ViltModel: ['mlm_score.transform.LayerNorm.bias', 'mlm_score.transform.dense.weight', 'mlm_score.bias', 'mlm_score.decoder.weight', 'mlm_score.transform.LayerNorm.weight', 'mlm_score.transform.dense.bias'] - This IS expected if you are initializing ViltModel from the checkpoint of a model trained on another task or with another architecture (e.g. initializing a BertForSequenceClassification model from a BertForPreTraining model). - This IS NOT expected if you are initializing ViltModel from the checkpoint of a model that you expect to be exactly identical (initializing a BertForSequenceClassification model from a BertForSequenceClassification model).<|||||>I have found a solution from the internet, thanks a lot.
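A minimal reproduction of the underlying incompatibility, independent of ViLT: `torch.meshgrid` only accepts the `indexing` keyword from PyTorch 1.10 onward, which is why upgrading fixes the error (the later warning about unused `mlm_score.*` weights is expected when loading the MLM checkpoint into the bare `ViltModel`):

```python
import torch

print(torch.__version__)  # needs to be >= 1.10 for the call below
x, y = torch.arange(3), torch.arange(4)
gx, gy = torch.meshgrid(x, y, indexing="ij")  # TypeError on PyTorch 1.9, works on 1.10+
print(gx.shape, gy.shape)                     # torch.Size([3, 4]) torch.Size([3, 4])
```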
transformers
15,932
closed
Update comments in class BatchEncoding
# What does this PR do? Update comments in class BatchEncoding ## Before submitting - [x] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case). - [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section?
03-04-2022 03:11:06
03-04-2022 03:11:06
_The documentation is not available anymore as the PR was closed or merged._
transformers
15,931
closed
Update training scripts docs
This PR updates the [Examples](https://huggingface.co/docs/transformers/examples) docs with examples for running a training script on distributed setups, with mixed precision, TPUs and Accelerate. It also adds examples for helpful options like using your own custom dataset, resume training from a checkpoint, and uploading to the Hub. The examples are centered around one task, but the user is expected to be able to generalize this guide to training scripts for other tasks as well. Feel free to let me know if I'm missing any major difference between the tasks that should be mentioned. I also created this guide as a new `mdx` file to be able to use the framework switcher, so we can remove `examples` from the toctree when we merge.
03-04-2022 00:07:54
03-04-2022 00:07:54
The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_15931). All of your documentation changes will be reflected on that endpoint.
transformers
15,930
open
issues in `generate()`
## A collection of issues for `generate()` method that we need to address (**I don't assign these issues to anyone yet. I might work on some of these later. The purpose here is to make these issues transparent to the community, and to serve as a TODO list so we won't forget.**) As an extension to the model equivalence tests across framework, I prepared a list of potential issues for `generate()` which gives different results across frameworks. - Currently only look at the type and shape, not the values inside the tensors - We might need to wait the current **`generate()` refactorization PR(s) merged** before starting address these issues. - Here is the [Colab notebook]( https://colab.research.google.com/drive/1294oyVDrwvnuw_P3QtFE_CZ7DoMdthyq?usp=sharing) which demonstrates the issues. Summary of issues (also included as comments in the above notebook) - Flax's `generate()` doesn't support `return_dict_in_generate` (gives errors), `output_scores`, `output_attentions`, while PT/TF accept them - PT/TF/Flax `BeamSearchEncoderDecoderOutput` outputs have different keys - `scores` have different types: PT -> `tuple` , TF/Flax -> `tensor` - `scores` have different shape: PT -> (15, 4, 50257), TF -> (1, 4, 50257), Flax -> (1,) - `sequences_scores` have different types: PT -> tensor , TF -> None , Flax -> no such attribute - `PT.sequences_scores` seems to be `Flax.scores` - TF's `sequences` has a shorter length by 1 than PT's `sequences`
03-03-2022 21:07:27
03-03-2022 21:07:27
This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread. Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored.
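For context, a sketch of the PyTorch call pattern the cross-framework comparison above is based on (a small seq2seq checkpoint chosen for convenience); the TF and Flax outputs are measured against the structure of this result:

```python
from transformers import AutoModelForSeq2SeqLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("t5-small")
model = AutoModelForSeq2SeqLM.from_pretrained("t5-small")

inputs = tokenizer("translate English to German: Hello, how are you?", return_tensors="pt")
out = model.generate(
    **inputs,
    num_beams=4,
    return_dict_in_generate=True,  # rejected by Flax's generate(), per the issue
    output_scores=True,
)
print(type(out).__name__)                     # BeamSearchEncoderDecoderOutput
print(out.sequences.shape)                    # (batch_size, generated_length)
print(len(out.scores), out.sequences_scores)  # tuple of per-step scores vs. final beam scores
```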
transformers
15,929
closed
Tests for MaskFormerFeatureExtractor's post_process*** methods
# What does this PR do? This PR adds more tests for `MaskFormerFeatureExtractor` `post_process***` methods
03-03-2022 19:36:46
03-03-2022 19:36:46
The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_15929). All of your documentation changes will be reflected on that endpoint.
transformers
15,928
closed
Fix #15898
Fixes #15898 ## Before submitting - [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case). - [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section? - [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link to it if that's the case. - [ ] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/master/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/master/docs#writing-source-documentation). - [ ] Did you write any new necessary tests? ## Who can review? @sgugger I guess it's obvious looking at the [glue example] (https://github.com/huggingface/transformers/blob/4cd7ed4b3b7360aef3a9fb16dfcc105001188717/examples/pytorch/text-classification/run_glue.py#L448) referenced in the issue, but just to be sure: is it guaranteed that the actual logits (final output of the models) are always placed in the first position? I looked at the models of `MODEL_FOR_CAUSAL_LM_MAPPING_NAMES` and it seems a convention. I'm a bit confused because there is a `past_index` arg in `Trainer` that indexes the output of the model, which made me question if the position of the logits is guaranteed, but I guess it's never 0 or there's any other reason I don't know of. You know better than me for sure.
03-03-2022 19:10:49
03-03-2022 19:10:49
The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_15928). All of your documentation changes will be reflected on that endpoint.<|||||>Yes, that's enforced in all the model outputs. The `past_index` is not going to temper with that :-) Thanks a lot for fixing!
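A small check of the convention being confirmed above, assuming a standard classification head and no labels passed (when labels are given, the loss occupies index 0 and the logits move to index 1):

```python
import torch
from transformers import AutoModelForSequenceClassification, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("distilbert-base-uncased")
model = AutoModelForSequenceClassification.from_pretrained("distilbert-base-uncased", num_labels=2)

outputs = model(**tokenizer("logits come first", return_tensors="pt"))
assert torch.equal(outputs.logits, outputs[0])  # index 0 is the logits tensor when no loss is returned
```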
transformers
15,927
closed
Privacy & Security: Network Contact on Every Model Load
I decided to add this after commenting on https://github.com/deepset-ai/haystack/issues/2118 ### Who can help @sgugger ## Information The problem arises when using: * [x] the official example scripts: (give details below) * [x] my own modified scripts: (give details below) The tasks I am working on is: * [x] an official GLUE/SQUaD task: (give the name) * [x] my own task or dataset: (give details below) ## To reproduce Steps to reproduce the behavior: 1. Use a tool such as wireshark to observe network activity, or set up an SSL-supporting proxy with a local certificate. 2. Load a cached model from huggingface.co, in a loop. 3. Observe the code query the network to update the model every load, even though it is cached. ## Expected behavior - When offline, network accesses for cached material should fail gracefully by default, easing and encouraging users to work in safer offline environments. - Cached material need only be refreshed daily at most, not every load. Otherwise the user's load-to-load behavior is broadcast to their network. - Access to the network for huggingface.co material should happen only when users request it. Otherwise, frequent automatic model updating can be leveraged by an adversary able to make a fake ssl certificate, to arbitrarily and frequently mutate the model the user is running. - The cache should store models as true git repositories. so that their integrity can be manually verified using the normal git tools. Ideally it would automatically perform this integriy verification, and detect when server material has changed. This is a multifaceted situation, and each part is valuable in its own right. I have memory and accuracy issues and may have already opened a similar issue to this, or stated something slightly false above. If so, I apologise. I wanted to take responsibility for my comments in the project linked at top.
03-03-2022 17:58:10
03-03-2022 17:58:10
cc @LysandreJik @Narsil This relates to things we talked about internally. I agree with the first point, and the second and third points deserve their own debate, so thank you for writing this issue! For the fourth point, the situation is a bit more complicated as every user would need to download the whole repo when using a model, even if they don't need all the files (for instance most downloaded models have weights in PyTorch, TF and Flax).<|||||>Thanks @xloem, thank you for writing this! As @sgugger said, we had some internal debate about this and are considering our options. Here are my initial thoughts (just trying to bring food for thought). > When offline, network accesses for cached material should fail gracefully by default, easing and encouraging users to work in safer offline environments. Yes, probably with a warning so that users are aware they might not be using the latest version. (There's a flag `TRANSFORMERS_OFFLINE=1` by the way to remove network access globally across the code base afaik). > Cached material need only be refreshed daily at most, not every load. Otherwise the user's load-to-load behavior is broadcast to their network. We should still be careful that some users might actually be testing the updates to their model that they make by pushing on the hub. IMO a global form of caching to bypass the network should be opt-in, not opt-out. And indeed it would be very nice to have. > Access to the network for huggingface.co material should happen only when users request it. Otherwise, frequent automatic model updating can be leveraged by an adversary able to make a fake ssl certificate, to arbitrarily and frequently mutate the model the user is running. Arguable, but it's also a nice thing when I upload a model while training to see the changes happening live. `AutoModel.from_pretrained("gpt2")` in my personal view is already asking for network (whereas `AutoModel.from_pretrained("./gpt2")` is not). An attacker triggering many updates on a model is an issue that goes beyond SSL IMO. Defending against such things would have to be explicitly enabled rather than be the default. > The cache should store models as true git repositories, so that their integrity can be manually verified using the normal git tools. Ideally it would automatically perform this integrity verification, and detect when server material has changed. Agree with @sgugger, as long as multiple files for weights are on the hub, doing a full git clone seems hard. There might be ways to consider using `GIT_LFS_SMUDGE=0` and download specifically some files. We also have to keep in mind that the repo itself might contain data totally unrelated to the model that users might want to ignore. Another thing to consider is that any change to the caching mechanism of `transformers` is going to lead to a full redownload for every user out there. It's doable, but not to be done lightly. One element in favor of "repo" based caching would be to reduce the number of HEAD calls during `from_pretrained`; currently there are a bit too many when things are in cache, and on a flaky network it adds up pretty fast. There could be ways to emulate that "repo" based caching without doing an actual git clone (which would enable finer grained control over what happens there, for the reasons mentioned above).<|||||>When you say an attacker triggering many updates on a model is an issue that goes beyond SSL, could you elaborate more on what you mean and why you might generally require users to enable a more secure setup, rather than having it as the default?
With transformers getting larger and larger, it seems the centralisation and frequent default network downloads for models could become a significant danger to nations. As someone who's worked with wireshark and implemented man in the middle attacks, I see network access as an opportunity for a network peer to mutate the data received, each request. It isn't a complex, impossible thing: the protocols are all public and people study them. This can be mitigated by making it nondefault or even emitting output when HEAD requests are made. I see that changing the default behavior could make things harder for people with existing setups that rely on the network behavior, but the plan could still be made for a future major release. As someone who's worked with git a lot, I don't understand well the concerns around the git clone. git-lfs is a separate system from git, and has support for download of individual files. git also has partial filter cloning now that can prevent download of unneeded git objects, although it isn't well documented in my experience. I agree that enabling all these inside python is an engineering challenge. The value of using git repositories is that it exposes the backend to the user and their administrators so they can perform their own audits and review changes provided by model updates. Just thoughts. Thanks for keeping this issue open. <|||||>we had an internal (?) discussion about changing the cached file layout to better map with `git` workflow, i.e. to be able to know if we have the latest version of a model just by doing one HTTP call. Does anyone remember where this discussion was?<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread. Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored.
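For reference, a sketch of the existing opt-in mitigations mentioned in the discussion above (the environment variable and the `local_files_only` argument both exist; the model name is only an example):

```python
import os

# must be set before transformers is imported to disable network access globally
os.environ["TRANSFORMERS_OFFLINE"] = "1"

from transformers import AutoModel

# per-call equivalent: only use files already present in the local cache
model = AutoModel.from_pretrained("bert-base-uncased", local_files_only=True)
```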
transformers
15,926
closed
Update doc test readme
# What does this PR do? <!-- Congratulations! You've made it this far! You're not quite done yet though. Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution. Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change. Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost. --> <!-- Remove if not applicable --> Fixes doc test readme ## Before submitting - [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case). - [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section? - [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link to it if that's the case. - [ ] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/master/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/master/docs#writing-source-documentation). - [ ] Did you write any new necessary tests? ## Who can review? Anyone in the community is free to review the PR once the tests have passed. Feel free to tag members/contributors who may be interested in your PR. <!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @ If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**. Please tag fewer than 3 people. Models: - albert, bert, xlm: @LysandreJik - blenderbot, bart, marian, pegasus, encoderdecoder, t5: @patrickvonplaten, @patil-suraj - longformer, reformer, transfoxl, xlnet: @patrickvonplaten - fsmt: @stas00 - funnel: @sgugger - gpt2: @patrickvonplaten, @LysandreJik - rag: @patrickvonplaten, @lhoestq - tensorflow: @LysandreJik Library: - benchmarks: @patrickvonplaten - deepspeed: @stas00 - ray/raytune: @richardliaw, @amogkam - text generation: @patrickvonplaten - tokenizers: @n1t0, @LysandreJik - trainer: @sgugger - pipelines: @LysandreJik Documentation: @sgugger HF projects: - datasets: [different repo](https://github.com/huggingface/datasets) - rust tokenizers: [different repo](https://github.com/huggingface/tokenizers) Examples: - maintained examples (not research project or legacy): @sgugger, @patil-suraj - research_projects/bert-loses-patience: @JetRunner - research_projects/distillation: @VictorSanh -->
03-03-2022 17:55:15
03-03-2022 17:55:15
The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_15926). All of your documentation changes will be reflected on that endpoint.<|||||>> Thanks, for some reason it passed for me for the xxxForMaskedImageModeling models without doing this trick There was another problem that we solved only here: https://github.com/huggingface/transformers/pull/15911 . Think rebasing to master should solve that :-)
transformers
15,925
closed
Non-unique `local` in toctree
The `local` field in the toctree is treated as an id; therefore, it has to be unique. Because of this, in https://huggingface.co/docs/transformers/multilingual, you cannot reach the second `multilingual` page. I will work on a doc-builder check that verifies locals are unique; [line here](https://github.com/huggingface/transformers/blob/master/docs/source/_toctree.yml#L32) <img width="300" alt="Screenshot 2022-03-03 at 18 08 52" src="https://user-images.githubusercontent.com/11827707/156615355-af17d9fc-0c8b-488c-ada3-77edc923bf15.png">
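A possible shape for that doc-builder check, sketched here with an assumed helper name and toctree layout (the actual implementation may differ):

```python
import collections
import yaml

def assert_unique_locals(toctree_path="docs/source/_toctree.yml"):
    with open(toctree_path) as f:
        tree = yaml.safe_load(f)

    counts = collections.Counter()

    def walk(entries):
        for entry in entries:
            if "local" in entry:
                counts[entry["local"]] += 1
            walk(entry.get("sections", []))

    walk(tree)
    duplicates = [local for local, count in counts.items() if count > 1]
    if duplicates:
        raise ValueError(f"`local` entries must be unique, found duplicates: {duplicates}")
```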
03-03-2022 17:11:44
03-03-2022 17:11:44
Thanks for this! I think we can actually remove the first `multilingual` page since it is outdated now. On a related note, can we also remove the `custom_datasets` [page](https://huggingface.co/docs/transformers/master/en/custom_datasets) since it has also been replaced by the fine-tune for downstream tasks section? Is this also why for section headers that share the same name, the second header cannot be reached (i.e., feature extractor [here](https://huggingface.co/docs/transformers/master/en/preprocessing#feature-extractor))?<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread. Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored.<|||||>@mishig25 Have you added something to error when entries are not unique?
transformers
15,924
closed
Re-enabling all fast pipeline tests.
# What does this PR do? <!-- Congratulations! You've made it this far! You're not quite done yet though. Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution. Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change. Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost. --> <!-- Remove if not applicable --> Fixes # (issue) ## Before submitting - [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case). - [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section? - [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link to it if that's the case. - [ ] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/master/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/master/docs#writing-source-documentation). - [ ] Did you write any new necessary tests? ## Who can review? Anyone in the community is free to review the PR once the tests have passed. Feel free to tag members/contributors who may be interested in your PR. <!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @ If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**. Please tag fewer than 3 people. Models: - albert, bert, xlm: @LysandreJik - blenderbot, bart, marian, pegasus, encoderdecoder, t5: @patrickvonplaten, @patil-suraj - longformer, reformer, transfoxl, xlnet: @patrickvonplaten - fsmt: @stas00 - funnel: @sgugger - gpt2: @patrickvonplaten, @LysandreJik - rag: @patrickvonplaten, @lhoestq - tensorflow: @LysandreJik Library: - benchmarks: @patrickvonplaten - deepspeed: @stas00 - ray/raytune: @richardliaw, @amogkam - text generation: @patrickvonplaten - tokenizers: @n1t0, @LysandreJik - trainer: @sgugger - pipelines: @LysandreJik Documentation: @sgugger HF projects: - datasets: [different repo](https://github.com/huggingface/datasets) - rust tokenizers: [different repo](https://github.com/huggingface/tokenizers) Examples: - maintained examples (not research project or legacy): @sgugger, @patil-suraj - research_projects/bert-loses-patience: @JetRunner - research_projects/distillation: @VictorSanh -->
03-03-2022 16:54:26
03-03-2022 16:54:26
The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_15924). All of your documentation changes will be reflected on that endpoint.
transformers
15,923
closed
Large change to enable MaskFormerForInstanceSegmentation
# What does this PR do? Changing the goal of this PR. I'll split this PR into more manageable chunks instead: - Adding the AutoDicts : https://github.com/huggingface/transformers/pull/15934 - Making modifications for FeatureExtractor https://github.com/huggingface/transformers/pull/15916 - Making modifications (breaking change) to MaskFormerForInstance (for the return outputs to be None instead of empty list) https://github.com/huggingface/transformers/pull/15936 - Finally a PR for the pipeline that depends on the previous 3 (which are not ordered themselves). https://github.com/huggingface/transformers/pull/15937 <!-- Congratulations! You've made it this far! You're not quite done yet though. Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution. Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change. Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost. --> <!-- Remove if not applicable --> Fixes # (issue) ## Before submitting - [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case). - [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section? - [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link to it if that's the case. - [ ] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/master/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/master/docs#writing-source-documentation). - [ ] Did you write any new necessary tests? ## Who can review? Anyone in the community is free to review the PR once the tests have passed. Feel free to tag members/contributors who may be interested in your PR. <!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @ If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**. Please tag fewer than 3 people. 
Models: - albert, bert, xlm: @LysandreJik - blenderbot, bart, marian, pegasus, encoderdecoder, t5: @patrickvonplaten, @patil-suraj - longformer, reformer, transfoxl, xlnet: @patrickvonplaten - fsmt: @stas00 - funnel: @sgugger - gpt2: @patrickvonplaten, @LysandreJik - rag: @patrickvonplaten, @lhoestq - tensorflow: @LysandreJik Library: - benchmarks: @patrickvonplaten - deepspeed: @stas00 - ray/raytune: @richardliaw, @amogkam - text generation: @patrickvonplaten - tokenizers: @n1t0, @LysandreJik - trainer: @sgugger - pipelines: @LysandreJik Documentation: @sgugger HF projects: - datasets: [different repo](https://github.com/huggingface/datasets) - rust tokenizers: [different repo](https://github.com/huggingface/tokenizers) Examples: - maintained examples (not research project or legacy): @sgugger, @patil-suraj - research_projects/bert-loses-patience: @JetRunner - research_projects/distillation: @VictorSanh -->
03-03-2022 16:29:14
03-03-2022 16:29:14
The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_15923). All of your documentation changes will be reflected on that endpoint.<|||||>closing in favor of https://github.com/huggingface/transformers/pull/15937
transformers
15,922
closed
Do a pull in case docs were updated during build
# What does this PR do? Now that building the documentation takes a long time, the doc-build repo may have changed by the time we try to push to it for a release (for instance, the v4.17.0 release doc update failed because the master doc was updated during its build). This PR stashes, pulls, and pops the stash to avoid that.
03-03-2022 15:31:16
03-03-2022 15:31:16
The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_15922). All of your documentation changes will be reflected on that endpoint.
transformers
15,921
closed
Simplify release utils
# What does this PR do? This removes parts of the release util script that are now obsolete.
03-03-2022 15:26:57
03-03-2022 15:26:57
_The documentation is not available anymore as the PR was closed or merged._
transformers
15,920
closed
Fix Embedding Module Bug in Flax Models
This PR fixes a widespread bug relating to the way in which Embedding Modules are defined for a number of different Flax Modules. In all of these instances, `embed_tokens` is defined as an optional `nn.Embed` attribute in the corresponding Flax Module: https://github.com/huggingface/transformers/blob/b693cbf99c5a180dde8b32ded2fb82ea735aab15/src/transformers/models/bart/modeling_flax_bart.py#L698-L701 When `embed_tokens` is specified as an `nn.Embed` module and passed as an argument to said Flax module, there are no issues. However, when `embed_tokens` is omitted from the arguments, it defaults to `None`. The `call` method in the Flax Module then attempts to override this attribute with an `nn.Embed` module: https://github.com/huggingface/transformers/blob/b693cbf99c5a180dde8b32ded2fb82ea735aab15/src/transformers/models/bart/modeling_flax_bart.py#L711-L716 Modifying a submodule's attributes after constructing it violates the _stateless_ design philosophy adopted by Flax. Doing so results in a [`SetAttributeFrozenModuleError`](https://flax.readthedocs.io/en/latest/flax.errors.html#flax.errors.SetAttributeFrozenModuleError). This PR addresses this issue by forcing `embed_tokens` to be a required argument for the Flax Module, thus avoiding the need to ever override the attribute. There is a requirement to keep the Embedding Module defined as `embed_tokens` in order to remain consistent with the analogous PyTorch models and facilitate the conversion of Flax models to PyTorch. For reference, the alternative workarounds are not feasible: 1. Defining a new `nn.Embed` module `embeddings` in `setup` and setting `name=embed_tokens` 2. Defining a new `nn.Embed` module `embeddings` in `setup` and passing the `embed_tokens` as parameters in `module.apply` For 1, it is not possible to define two modules with the same name. For 2, the module name differs from the PyTorch script, and so Flax to PyTorch conversion is not possible. Currently, the modified Flax Modules are only used in an Encoder-Decoder configuration, where the embeddings of the encoder are tied to the decoder. Hence, in all instances, `embed_tokens` is specified as an `nn.Embed` argument _within_ the Module definition, and thus cannot be partitioned. Since there are no standalone Encoder/Decoder models, this edge-case cannot yet be tested. Developing standalone Causal LMs in Flax is currently a WIP. Once complete, there will be a suite of models with which to run these edge-case tests.
03-03-2022 15:19:28
03-03-2022 15:19:28
The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_15920). All of your documentation changes will be reflected on that endpoint.<|||||>@sanchit-gandhi thanks for the PR - could you add a code snippet that shows how this design leads to an error? Not a big fan in general of adding `nn.Module` as class attributes of another `nn.Module` - so not sure if this PR is the correct way. @patil-suraj what do you think?<|||||>Of course @patrickvonplaten! Here's an excerpt of code that demonstrates the [`SetAttributeFrozenModuleError`](https://flax.readthedocs.io/en/latest/flax.errors.html#flax.errors.SetAttributeFrozenModuleError) described above. Here, `embed_tokens` is passed as an argument to the `encoder`, thus enabling it to be initialised correctly. Conversely, `embed_tokens` is omitted as an argument to the `decoder`, and so defaults to `None`. When the `decoder` is initialised, Flax attempts to override the model attribute `embed_tokens` that has already been constructed as a default argument (`None`). This causes the script to fail, as the module attributes are frozen after construction: ```python import flax.linen as nn from transformers import BartConfig import jax import jax.numpy as jnp from transformers.models.bart.modeling_flax_bart import FlaxBartEncoder, FlaxBartDecoder, FlaxBartPreTrainedModel class FlaxDummyBartModule(nn.Module): config: BartConfig dtype: jnp.dtype = jnp.float32 # the dtype of the computation def setup(self): self.embedding = nn.Embed( self.config.vocab_size, self.config.d_model, embedding_init=jax.nn.initializers.normal(self.config.init_std), ) self.encoder = FlaxBartEncoder(self.config, dtype=self.dtype, embed_tokens=self.embedding) self.decoder = FlaxBartDecoder(self.config, dtype=self.dtype) def __call__( self, input_ids, attention_mask, decoder_input_ids, decoder_attention_mask, position_ids, decoder_position_ids, output_attentions: bool = False, output_hidden_states: bool = False, return_dict: bool = True, deterministic: bool = True, ): encoder_outputs = self.encoder( input_ids=input_ids, attention_mask=attention_mask, position_ids=position_ids, output_attentions=output_attentions, output_hidden_states=output_hidden_states, return_dict=return_dict, deterministic=deterministic, ) decoder_outputs = self.decoder( input_ids=decoder_input_ids, attention_mask=decoder_attention_mask, position_ids=decoder_position_ids, encoder_hidden_states=encoder_outputs[0], encoder_attention_mask=attention_mask, output_attentions=output_attentions, output_hidden_states=output_hidden_states, return_dict=return_dict, deterministic=deterministic, ) return decoder_outputs class FlaxDummyBartModel(FlaxBartPreTrainedModel): config: BartConfig dtype: jnp.dtype = jnp.float32 # the dtype of the computation module_class = FlaxDummyBartModule model = FlaxDummyBartModel.from_pretrained('hf-internal-testing/tiny-random-bart', from_pt=True) ``` <details open> <summary> Output </summary> <br> ```python SetAttributeInModuleSetupError Traceback (most recent call last) Input In [4], in <cell line: 1>() ----> 1 FlaxDummyBartModel.from_pretrained('hf-internal-testing/tiny-random-bart', from_pt=True) File ~/transformers/src/transformers/modeling_flax_utils.py:550, in FlaxPreTrainedModel.from_pretrained(cls, pretrained_model_name_or_path, dtype, *model_args, **kwargs) 547 resolved_archive_file = None 549 # init random models --> 550 model = cls(config, *model_args, **model_kwargs) 552 if from_pt: 553 state = load_pytorch_checkpoint_in_flax_state_dict(model, 
resolved_archive_file) File ~/transformers/src/transformers/models/bart/modeling_flax_bart.py:933, in FlaxBartPreTrainedModel.__init__(self, config, input_shape, seed, dtype, **kwargs) 924 def __init__( 925 self, 926 config: BartConfig, (...) 930 **kwargs 931 ): 932 module = self.module_class(config=config, dtype=dtype, **kwargs) --> 933 super().__init__(config, module, input_shape=input_shape, seed=seed, dtype=dtype) File ~/transformers/src/transformers/modeling_flax_utils.py:116, in FlaxPreTrainedModel.__init__(self, config, module, input_shape, seed, dtype) 113 self.dtype = dtype 115 # randomly initialized parameters --> 116 random_params = self.init_weights(self.key, input_shape) 118 # save required_params as set 119 self._required_params = set(flatten_dict(unfreeze(random_params)).keys()) File ~/transformers/src/transformers/models/bart/modeling_flax_bart.py:963, in FlaxBartPreTrainedModel.init_weights(self, rng, input_shape) 953 module_init_outputs = self.module.init( 954 rngs, 955 input_ids, (...) 960 return_dict=False, 961 ) 962 else: --> 963 module_init_outputs = self.module.init( 964 rngs, 965 input_ids, 966 attention_mask, 967 decoder_input_ids, 968 decoder_attention_mask, 969 position_ids, 970 decoder_position_ids, 971 ) 972 return module_init_outputs["params"] [... skipping hidden 11 frame] Input In [2], in FlaxDummyBartModule.__call__(self, input_ids, attention_mask, decoder_input_ids, decoder_attention_mask, position_ids, decoder_position_ids, output_attentions, output_hidden_states, return_dict, deterministic) 15 def __call__( 16 self, 17 input_ids, (...) 26 deterministic: bool = True, 27 ): 28 encoder_outputs = self.encoder( 29 input_ids=input_ids, 30 attention_mask=attention_mask, (...) 35 deterministic=deterministic, 36 ) ---> 38 decoder_outputs = self.decoder( 39 input_ids=decoder_input_ids, 40 attention_mask=decoder_attention_mask, 41 position_ids=decoder_position_ids, 42 encoder_hidden_states=encoder_outputs[0], 43 encoder_attention_mask=attention_mask, 44 output_attentions=output_attentions, 45 output_hidden_states=output_hidden_states, 46 return_dict=return_dict, 47 deterministic=deterministic, 48 ) 50 return decoder_outputs [... skipping hidden 6 frame] File ~/transformers/src/transformers/models/bart/modeling_flax_bart.py:783, in FlaxBartDecoder.setup(self) 780 self.embed_scale = math.sqrt(self.config.d_model) if self.config.scale_embedding else 1.0 782 if self.embed_tokens is None: --> 783 self.embed_tokens = nn.Embed( 784 self.config.vocab_size, 785 embed_dim, 786 embedding_init=jax.nn.initializers.normal(self.config.init_std), 787 ) 789 # Bart is set up so that if padding_idx is specified then offset the embedding ids by 2 790 # and adjust num_embeddings appropriately. Other models don't have this hack 791 self.offset = 2 File ~/venv/lib/python3.8/site-packages/flax/linen/module.py:673, in Module.__setattr__(self, name, val) 671 if is_dataclass_attr: 672 if self._state.in_setup: --> 673 raise errors.SetAttributeInModuleSetupError() 674 object.__setattr__(self, name, val) 675 # Submodules are being defined and attached in setup() 676 else: SetAttributeInModuleSetupError: Module construction attributes are frozen. 
(https://flax.readthedocs.io/en/latest/flax.errors.html#flax.errors.SetAttributeInModuleSetupError) ``` </details> With regards to adding an `nn.Module` as a class attribute to another `nn.Module`, there are several pre-existing models that employ this design philosophy, including the FlaxBartModule itself: https://github.com/huggingface/transformers/blob/9932ee4b4bca9045d941af6687ef69eedcf68483/src/transformers/models/bart/modeling_flax_bart.py#L855-L862 I too agree that it is not the most elegant solution. However, defining `embed_tokens` as an `nn.Module` and passing it as an argument to another `nn.Module` does ensure consistency between the parameter naming in Flax and PyTorch. By extension, it facilitates model conversion between Flax and PyTorch, which is crucial. <|||||>I agree with this approach. Had an offline discussion about this with @sanchit-gandhi. Since flax module attributes are frozen, it's impossible to initialise and set embeddings if the embedding is defined as a module attribute. This is currently a bug in all FlaxBart like models. As demonstrated by the above example. And to keep naming consistency with PT, we cannot introduce a new name for these embeddings. I am not sure if there's any other way to avoid this. So I am okay with this PR. <|||||>Thanks for the code snippet @sanchit-gandhi. If you're ok with it @patil-suraj - it's good to go for me, I trust your judgement here! All encoder-decoder models pass the embedding weights from the higher level class to the encoder decoder, so I assume that there is no pretrained model that has encoder or decoder embedding weights directly saved in the dictionary, which is why this PR should have 0 breaking changes, right @patil-suraj ? <|||||>> this PR should have 0 breaking changes, right @patil-suraj ? @patrickvonplaten yes, you are right. Also just ran slow tests for BART to confirm this and they all pass.
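For context, a hedged sketch of the pattern after this PR, reusing the module names from the snippet above: the shared embedding is created once in the parent module and passed explicitly to both sub-modules, so `setup()` never has to override an attribute (exact signatures follow the PR description and are not verified here):

```python
import flax.linen as nn
import jax
import jax.numpy as jnp
from transformers import BartConfig
from transformers.models.bart.modeling_flax_bart import FlaxBartDecoder, FlaxBartEncoder

class FlaxDummyBartModule(nn.Module):
    config: BartConfig
    dtype: jnp.dtype = jnp.float32

    def setup(self):
        self.shared = nn.Embed(
            self.config.vocab_size,
            self.config.d_model,
            embedding_init=jax.nn.initializers.normal(self.config.init_std),
        )
        # `embed_tokens` is now a required argument, so both sub-modules receive it explicitly
        self.encoder = FlaxBartEncoder(self.config, dtype=self.dtype, embed_tokens=self.shared)
        self.decoder = FlaxBartDecoder(self.config, dtype=self.dtype, embed_tokens=self.shared)
```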
transformers
15,919
closed
camembert tokenizer
# Initialize CamemBERT tokenizer
tokenizer = CamembertTokenizer.from_pretrained('camembert-base', do_lower_case=True)

TypeError: 'NoneType' object is not callable
03-03-2022 14:57:58
03-03-2022 14:57:58
Make sure you install `sentencepiece`!<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread. Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored.
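A small sketch of the fix suggested above (the error typically means the sentencepiece backend is missing; depending on the `transformers` version, the tokenizer class then resolves to `None` and calling it raises the `TypeError`):

```python
# install the missing dependency first, e.g. `pip install sentencepiece`
from transformers import CamembertTokenizer

tokenizer = CamembertTokenizer.from_pretrained("camembert-base")
print(tokenizer.tokenize("Bonjour le monde"))
```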
transformers
15,918
closed
Do not change the output from tuple to list - to match PT's version
# What does this PR do? The current `TFPegasusDecoder` changes `all_self_attns` and `all_cross_attns` from `tuple` to `list` before returning the outputs: https://github.com/huggingface/transformers/blob/3c4fbc616f74120c3900d07c772b7d2d9c7a53dd/src/transformers/models/pegasus/modeling_tf_pegasus.py#L1061-L1065 The PyTorch `PegasusDecoder` returns them as tuples (i.e. no extra step to change them to `list`). This causes the (WIP) PT-TF equivalence test to fail. This PR fixes this, as well as 5 other models (+ template) with the same issue: - [x] bart - [x] blenderbot - [x] blenderbot_small - [x] marian - [x] mbart I think changing `list` to `tuple` is acceptable (i.e. not a [too] breaking change)? cc @sgugger for this aspect. TF: @gante @Rocketknight1 Bart, Pegasus: @patrickvonplaten
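A toy illustration of why the type change matters for the equivalence test (placeholder values, not the actual decoder code):

```python
pt_attentions = ("attn_layer_0", "attn_layer_1")    # the PyTorch decoder accumulates a tuple
tf_attentions = list(pt_attentions)                 # the TF decoder used to convert it to a list
print(type(pt_attentions) == type(tf_attentions))   # False -> structure mismatch in the PT-TF test
# after this PR, the TF decoder returns the tuple unchanged
```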
03-03-2022 14:39:53
03-03-2022 14:39:53
The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_15918). All of your documentation changes will be reflected on that endpoint.<|||||>I can't think of any general TF reason to prefer lists to tuples, so if the tests pass and nothing broke then LGTM!<|||||>Would like to hear from @patrickvonplaten when he has time, since he is more involved in these models.<|||||>Nice!
transformers
15,917
closed
Enabling MaskFormer in pipelines
# What does this PR do? Enabling MaskFormer in pipelines. ```python pipeline = pipeline( model=MaskFormerForInstanceSegmentation.from_pretrained(".."), feature_extractor=AutoFeatureExtractor.from_pretrained("..."), task="image-segmentation", ) ``` However `pipeline(model="...")` doesn't work yet: - We need to add `pipeline_tag` on the model hub for it to work - We need to add `MaskFormerForInstanceSegmentation` to `MODEL_FOR_INSTANCE_SEGMENTATION_MAPPING` for the automated tests on random models to work <!-- Congratulations! You've made it this far! You're not quite done yet though. Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution. Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change. Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost. --> <!-- Remove if not applicable --> Fixes # (issue) ## Before submitting - [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case). - [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section? - [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link to it if that's the case. - [ ] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/master/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/master/docs#writing-source-documentation). - [ ] Did you write any new necessary tests? ## Who can review? Anyone in the community is free to review the PR once the tests have passed. Feel free to tag members/contributors who may be interested in your PR. <!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @ If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**. Please tag fewer than 3 people. Models: - albert, bert, xlm: @LysandreJik - blenderbot, bart, marian, pegasus, encoderdecoder, t5: @patrickvonplaten, @patil-suraj - longformer, reformer, transfoxl, xlnet: @patrickvonplaten - fsmt: @stas00 - funnel: @sgugger - gpt2: @patrickvonplaten, @LysandreJik - rag: @patrickvonplaten, @lhoestq - tensorflow: @LysandreJik Library: - benchmarks: @patrickvonplaten - deepspeed: @stas00 - ray/raytune: @richardliaw, @amogkam - text generation: @patrickvonplaten - tokenizers: @n1t0, @LysandreJik - trainer: @sgugger - pipelines: @LysandreJik Documentation: @sgugger HF projects: - datasets: [different repo](https://github.com/huggingface/datasets) - rust tokenizers: [different repo](https://github.com/huggingface/tokenizers) Examples: - maintained examples (not research project or legacy): @sgugger, @patil-suraj - research_projects/bert-loses-patience: @JetRunner - research_projects/distillation: @VictorSanh -->
03-03-2022 14:16:08
03-03-2022 14:16:08
I'll go ahead and merge this, and create another PR for the dict.
transformers
15,916
closed
Minor fixes for MaskFormer
# What does this PR do? Fix a bug found by @Narsil in the `feature_extractor.post_process_panoptic_segmentation`
03-03-2022 14:11:45
03-03-2022 14:11:45
The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_15916). All of your documentation changes will be reflected on that endpoint.
transformers
15,915
closed
Maskformer
# What does this PR do? Fix a bug found by @Narsil in the `feature_extractor.post_process_panoptic_segmentation`
03-03-2022 14:04:19
03-03-2022 14:04:19
transformers
15,914
closed
query() of generator `max_length` being succeeded
## Environment info <!-- You can run the command `transformers-cli env` and copy-and-paste its output below. Don't forget to fill out the missing fields in that output! --> - `transformers` version: - Platform: Visual Studio Code, Windows 10, WSL 2 Bash - Python version: 3.8.8 - PyTorch CUDA: 10.2 ### Who can help Models: - GPT-2, GPT: @patrickvonplaten, @LysandreJik Library: - Pipelines: @Narsil --> ## Information Model I am using (GPT-2): The problem arises when using: * [X] the official example scripts: (give details below) * [] my own modified scripts: (give details below) The tasks I am working on is: * [] an official GLUE/SQUaD task: (give the name) * [X] my own task or dataset: (give details below) ## Expected behavior All outputs' lengths should be within `min_length` and `max_length`. --- Based on [SO post](https://stackoverflow.com/q/71338307/17840900). Goal: set `min_length` and `max_length` in Hugging Face 🤗 Transformers generator query. I've passed `50, 200` as these parameters. Yet, the lengths of my outputs are much higher... There's no runtime failure.
```python
from transformers import pipeline, set_seed

generator = pipeline('text-generation', model='gpt2')
set_seed(42)

def query(payload, multiple, min_char_len, max_char_len):
    print(min_char_len, max_char_len)
    list_dict = generator(payload, min_length=min_char_len, max_length=max_char_len, num_return_sequences=multiple)
    test = [d['generated_text'].split(payload)[1].strip() for d in list_dict]
    for t in test:
        print(len(t))
    return test

query('example', 1, 50, 200)
```
Output:
```
50 200
Setting `pad_token_id` to `eos_token_id`:50256 for open-end generation.
1015
```
03-03-2022 13:58:22
03-03-2022 13:58:22
Hi @danielbellhv, This is actually working as intended. `max_length` is expressed in number of tokens (which is not necessarily obvious, I admit). First of all I would recommend using `max_new_tokens=200`, which is the number of "additional" tokens and is easier for you to control. `max_length` still exists but it causes confusion regularly. (We don't have `min_new_tokens` yet, but I'll note the suggestion actually). Then what's a token? Models don't ingest the text one character at a time, but one `token` at a time. There are different algorithms to achieve this but basically "My name is Nicolas" gets transformed into ["my", " name", " is", " nic", "olas"] for instance, and each of those `tokens` has a number. So when you are generating tokens, they can themselves contain 1 or more characters (usually several, and almost any common word for instance). That's why you are seeing 1015 instead of your expected 200 (the tokens here have an average of 5 chars). Does this explain a bit better what is happening? Also this parameter is usually used to control the amount of time your model takes to run (the more tokens the more time, and it scales roughly linearly). Also thanks for reporting this, pipelines are aimed at people not knowing machine learning, and we certainly don't expect you to know what a token is. We're starting some thoughts on how to make all pipelines smoother. Cheers, have a great day!<|||||>Hi @Narsil. Ah yes; tokens explain the length of the outputs. However, when swapping out `max_length=max_char_len` for `max_new_tokens=max_char_len` - I got similar lengths. Is there such a parameter that controls the length in characters? I doubt it, since as you explained the model processes tokens and not characters. **Solution:** Rename `min_char_len, max_char_len` to `min_tokens, max_tokens` and simply reduce their values by ~1/4 or 1/5. Cheers<|||||>Yes @danielbellhv, as you said, currently there's no option to get `char` control over the generated text. We can definitely think about it, it's a rather large change which is not obvious, but definitely aligned with our vision of pipelines<|||||>@Narsil Yeah, it might be a huge undertaking just for that slight benefit. Users could just generate with `n tokens` and string split the first `n chars` instead
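A minimal sketch of the suggestion above, controlling length in tokens rather than characters (output length will still vary, since tokens map to a variable number of characters):

```python
from transformers import pipeline, set_seed

generator = pipeline("text-generation", model="gpt2")
set_seed(42)

# max_new_tokens bounds the number of *generated* tokens; at roughly 4-5 characters
# per token on average, 50 new tokens is in the ballpark of 200-250 characters
outputs = generator("example", max_new_tokens=50, num_return_sequences=1)
print(len(outputs[0]["generated_text"]))
```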
transformers
15,913
closed
Support CLIPTokenizerFast for CLIPProcessor
# What does this PR do? Fixes #15888 Support CLIPTokenizerFast for CLIPProcessor. Update CLIPProcessor test code for saving and loading CLIPProcessor with fast tokenizer. ## Before submitting - [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case). - [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section? - [x] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link to it if that's the case. - [x] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/master/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/master/docs#writing-source-documentation). - [x] Did you write any new necessary tests? ## Who can review? Anyone in the community is free to review the PR once the tests have passed. Feel free to tag members/contributors who may be interested in your PR. <!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @ If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**. Please tag fewer than 3 people. Models: - albert, bert, xlm: @LysandreJik - blenderbot, bart, marian, pegasus, encoderdecoder, t5: @patrickvonplaten, @patil-suraj - longformer, reformer, transfoxl, xlnet: @patrickvonplaten - fsmt: @stas00 - funnel: @sgugger - gpt2: @patrickvonplaten, @LysandreJik - rag: @patrickvonplaten, @lhoestq - tensorflow: @LysandreJik Library: - benchmarks: @patrickvonplaten - deepspeed: @stas00 - ray/raytune: @richardliaw, @amogkam - text generation: @patrickvonplaten - tokenizers: @n1t0, @LysandreJik - trainer: @sgugger - pipelines: @LysandreJik Documentation: @sgugger HF projects: - datasets: [different repo](https://github.com/huggingface/datasets) - rust tokenizers: [different repo](https://github.com/huggingface/tokenizers) Examples: - maintained examples (not research project or legacy): @sgugger, @patil-suraj - research_projects/bert-loses-patience: @JetRunner - research_projects/distillation: @VictorSanh -->
03-03-2022 13:04:08
03-03-2022 13:04:08
The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_15913). All of your documentation changes will be reflected on that endpoint.
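A hedged sketch of what this PR enables (whether the fast tokenizer is picked up by default depends on the final implementation; the checkpoint name is just the usual example):

```python
from transformers import CLIPProcessor, CLIPTokenizerFast

processor = CLIPProcessor.from_pretrained("openai/clip-vit-base-patch32")
# with this change, the processor can wrap the Rust-backed fast tokenizer
print(isinstance(processor.tokenizer, CLIPTokenizerFast))
```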
transformers
15,912
closed
[WIP]Resnet with variants
# What does this PR do? This PR shows how to include the changes required for the `resnet-d` variant, while also making it easy to choose one embedding type over another. Usage:
```python
# normal embeddings
config = ResNetConfig(embedding_type='basic')
# embeddings is replaced by 3 3x3 convs
config = ResNetConfig(embedding_type='deep')
# normal shortcut (stride=2) in a conv layer to downsample
config = ResNetConfig(shortcut_type='basic')
# resnet-d shortcut, avg pool to downsample (no stride = 2) in the conv
config = ResNetConfig(shortcut_type='avg_down')
# resnet-d like config
config = ResNetConfig(embedding_type='deep', shortcut_type='avg_down')
# use it as usual
model = ResNetForImageClassification(config)
```
For example, [mmlab](https://github.com/open-mmlab/mmdetection/blob/master/mmdet/models/backbones/resnet.py) allows creating different ResNet architectures similar to how this PR does, by allowing the user to pass `embedding_type` and `shortcut_type`.
03-03-2022 12:59:50
03-03-2022 12:59:50
The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_15912). All of your documentation changes will be reflected on that endpoint.<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread. Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored.
transformers
15,911
closed
[Doctests] Fix ignore bug and add more doc tests
# What does this PR do? <!-- Congratulations! You've made it this far! You're not quite done yet though. Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution. Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change. Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost. --> <!-- Remove if not applicable --> This PR finishes all doctests for the speech models and fixes a bug with the newly added doctest flag. ## Before submitting - [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case). - [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section? - [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link to it if that's the case. - [ ] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/master/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/master/docs#writing-source-documentation). - [ ] Did you write any new necessary tests? ## Who can review? Anyone in the community is free to review the PR once the tests have passed. Feel free to tag members/contributors who may be interested in your PR. <!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @ If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**. Please tag fewer than 3 people. Models: - albert, bert, xlm: @LysandreJik - blenderbot, bart, marian, pegasus, encoderdecoder, t5: @patrickvonplaten, @patil-suraj - longformer, reformer, transfoxl, xlnet: @patrickvonplaten - fsmt: @stas00 - funnel: @sgugger - gpt2: @patrickvonplaten, @LysandreJik - rag: @patrickvonplaten, @lhoestq - tensorflow: @LysandreJik Library: - benchmarks: @patrickvonplaten - deepspeed: @stas00 - ray/raytune: @richardliaw, @amogkam - text generation: @patrickvonplaten - tokenizers: @n1t0, @LysandreJik - trainer: @sgugger - pipelines: @LysandreJik Documentation: @sgugger HF projects: - datasets: [different repo](https://github.com/huggingface/datasets) - rust tokenizers: [different repo](https://github.com/huggingface/tokenizers) Examples: - maintained examples (not research project or legacy): @sgugger, @patil-suraj - research_projects/bert-loses-patience: @JetRunner - research_projects/distillation: @VictorSanh -->
03-03-2022 12:44:51
03-03-2022 12:44:51
transformers
15,910
closed
Flax XLM-RoBERTa
# 🌟 New model addition: Flax XLM-RoBERTa ## Model description The Flax version of RoBERTa and the PyTorch and TF XLM-RoBERTa have been there for a while, but some pieces were missing that prevented us from doing pre-training and other tasks on Flax XLM-RoBERTa. In this PR #15900, I am adding those parts that were necessary to support XLM-RoBERTa in Flax based on the existing XLM-RoBERTa and Flax RoBERTa implementations. ## Open source status * [x] the model implementation is available: it is based on the existing implementation of XLM-RoBERTa and Flax RoBERTa in HuggingFace. * [x] the model weights are available: not as Flax, but loading from PyTorch seems to work. * [x] who are the authors: @aconneau I think, but not sure who ported it to HF. Maybe @sgugger, @patrickvonplaten?
03-03-2022 12:09:35
03-03-2022 12:09:35
transformers
15,909
closed
[Tests] Add attentions_option to ModelTesterMixin
# What does this PR do? The library is called HuggingFace Transformers, I know... but we recently added non-Transformer based models that don't use attention (namely ConvNeXT and PoolFormer). Soon, we'll also have ResNet in the library (which will back Transformer-based models such as DETR). As these models don't use attention, they currently need to overwrite 3 tests: * `test_attention_outputs` * `test_retain_grad_hidden_states_attentions` * `test_model_outputs_equivalence` This PR adds a `has_attentions` attribute to `ModelTesterMixin` which can be set to False for such models. These tests will then still run properly, without taking attention into account.
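A hypothetical sketch of how a test class for an attention-free model could use the new attribute (class name and import path are illustrative, not the exact test files):

```python
import unittest

from tests.test_modeling_common import ModelTesterMixin  # path assumed

class ConvNextModelTest(ModelTesterMixin, unittest.TestCase):
    has_attentions = False  # the common tests then skip their attention-specific checks
```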
03-03-2022 12:04:52
03-03-2022 12:04:52
_The documentation is not available anymore as the PR was closed or merged._<|||||>You'll need to fix the failing tests as well.<|||||>Thanks for this work @NielsRogge ! I left a review comment (question) about `test_forward_signature`. And then a minor question here: https://github.com/huggingface/transformers/blob/114b62cc9927d85804540d46a6f85e57ba13ea51/tests/test_modeling_common.py#L1046-L1049 It is a bit strange (to me) to have `config.output_attentions = True` without any condition like `if self.has_attentions:`. WDYT?<|||||>@LysandreJik I've fixed the failing tests, failing test seems unrelated<|||||>Feel free to merge if every test passes after a rebase
transformers
15,908
closed
RFC -- TF: unpack model inputs through a decorator
# RFC This is an RFC concerning how model/layer inputs are handled in TF. The details are below and it includes a working demo with the proposed solution -- comments (and concerns) are deeply appreciated. cc @Rocketknight1 @LysandreJik @sgugger ## Motivation In all TF models and main layers, in their `call` function, we start by processing their inputs with `input_processing()` ([example](https://github.com/huggingface/transformers/blob/master/src/transformers/models/bert/modeling_tf_bert.py#L741)). This function is needed to handle multiple input formats, from a dictionary of tensors packed in the first argument of `call` to variables of diverse types passed through keyword arguments (or a combination of both), as well as to handle some control logic. The `input_processing()` function outputs a dictionary containing the processed function arguments, often held in the `inputs` variable. Sadly, it forces us to access all function arguments through the `input` dictionary throughout its corpus, which results in verbose and less clear code ([example](https://github.com/huggingface/transformers/blob/master/src/transformers/models/bert/modeling_tf_bert.py#L815)). Furthermore, it is not beginner-friendly -- it's not obvious at first how, why, and when we need to use the `input` dictionary, as opposed to the function arguments. ## Proposed solution The `inputs` dictionary is essentially a dictionary containing the function arguments and keyword arguments (plus potentially extra `**kwargs` that were unpacked in the process). If we were to call `call` again using these processed inputs, no input processing should be needed. The proposed solution is precisely that -- a decorator that runs `input_processing()` with whatever is passed to the function, sending its output to `call`. As a proof of concept, I've built the decorator and applied it to a single class in [this draft PR](https://github.com/huggingface/transformers/pull/15907) -- all associated tests pass. As you can see, the resulting code is shorter, clearer, and makes use of the expected function arguments throughout its corpus. It is also backward compatible. Cons: type hinting and decorators don't go along very well, if we were to rely more heavily on type hinting. ## Proposed plan for adoption If we agree to go forward with this solution, here's my proposed plan: 1. Open a PR with the decorator, including its application on an NLP model and some other non-NLP model. This would confirm that the decorator works fine for multiple modalities; 2. Applying the decorator is straightforward, so we could open an issue with the `Good First Issue` label. I'd keep track of collaborators, and give it a deadline (e.g. 1 month) after which I'd take over remaining models; 3. After all models are updated, move `input_processing()` so as to be an internal function of the decorator.
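To make the idea concrete, here is a small self-contained sketch (the decorator name and the argument-normalization logic are stand-ins; the real version would delegate to `input_processing()` and also handle dict/tuple-packed first arguments):

```python
import functools
import inspect

def unpack_inputs(call):
    sig = inspect.signature(call)

    @functools.wraps(call)
    def wrapper(self, *args, **kwargs):
        bound = sig.bind(self, *args, **kwargs)
        bound.apply_defaults()
        processed = dict(bound.arguments)  # stand-in for the `input_processing()` output
        processed.pop("self")
        return call(self, **processed)

    return wrapper

class DummyLayer:
    @unpack_inputs
    def call(self, input_ids=None, attention_mask=None, training=False):
        # arguments are used directly instead of going through an `inputs` dict
        return {"input_ids": input_ids, "attention_mask": attention_mask, "training": training}

print(DummyLayer().call([1, 2, 3], attention_mask=[1, 1, 1]))
```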
03-03-2022 11:34:50
03-03-2022 11:34:50
This is a really good idea, I think! The loss of type hinting might be an issue, though - I think our models are almost totally type-hinted in PT, though that's probably just to get them to compile with Torchscript and isn't strictly necessary for TF. Probably there's some way to salvage this, though - Py3.10 added [PEP 612](https://www.python.org/dev/peps/pep-0612/) which specifically addresses this. Unfortunately, we can't force people to update to Py3.10 quite yet, but maybe there's a path where we start with your proposal and switch to this one once it's acceptable to make Py3.10 the minimum version?<|||||>I do agree that every `call()` immediately calling `input_processing()` is currently quite ugly and confusing for newcomers, and some kind of fix would be extremely welcome, though.<|||||>The type hints are necessary for the documentation (see for instance [TFBertModel.call](https://huggingface.co/docs/transformers/model_doc/bert#transformers.TFBertModel.call) and the nice feature where you can hover) so I would be really sad to see them disappear. We also need to keep the docstring (for obvious reasons) and diverse attributes like `__module__`, `__name__`, `__qualname__` for the doc building. The good news is, I believe you can fix this some [`functools.wrap`](https://docs.python.org/3.6/library/functools.html#functools.wraps) dark magic :-)<|||||>@sgugger if the live docs are to be trusted, then the proposed decorator requires no doc-related changes 🎊 (but maybe I'm missing something 🤔 ) In the [draft PR](https://github.com/huggingface/transformers/pull/15907), I've applied the decorator to `TFBertModel.call`, and waited for the doc CI to run. The docs had no changes: [PR docs for TFBertModel.call](https://moon-ci-docs.huggingface.co/docs/transformers/pr_15907/en/model_doc/bert#transformers.TFBertModel.call) / [master docs for TFBertModel.call](https://huggingface.co/docs/transformers/model_doc/bert#transformers.TFBertModel.call)<|||||>So there is no problem with type hints then :-) The doc-builder relies on the signature for the formatting of params, so if it's rendered normally it means the signature did not change.<|||||>I can't believe I didn't begin my first comment with it, so belatedly: it's a great idea! I love how elegant it makes the code! Just played with it locally and I don't see any change in IDE, so definitely go for this! You can polish the PR to make it not-draft and your plan seems a good one afterward. No need to wait for @LysandreJik as I'm pretty confident he'll love it as much as I do :-) (and he can review the cleaned PR when he's back).
transformers
15,907
closed
TF: Unpack model inputs through a decorator
# What does this PR do? Unpacks TF model inputs through a decorator, improving code clarity. To be read with issue https://github.com/huggingface/transformers/issues/15908, which holds the description of the problem, the proposed solution, and future plan. Closes https://github.com/huggingface/transformers/issues/15908 How to review: start with `modeling_tf_utils.py`, and then check the changes in the models. I've run `RUN_SLOW=1 py.test -vv tests/model_name/test_modeling_tf_model_name.py` for all involved models, and the changes were applied to BERT (plus its copy-linked architectures) and Speech2Text, showing that it works for multiple modalities.
03-03-2022 10:49:53
03-03-2022 10:49:53
_The documentation is not available anymore as the PR was closed or merged._<|||||>As suggested by @sgugger in #15908, I've opened the PR :)<|||||>This looks great! Also pinging @patrickvonplaten for review as it's quite central<|||||>Thanks for running the tests on it. It is way more readable like that indeed.<|||||>Looks very nice! Thanks for cleaning this up<|||||>Looks very nice! Thanks for cleaning this up<|||||>It seems like everyone is in agreement :) @Rocketknight1 can you have a final look, and approve if you agree with the change?<|||||>> Is the plan to wait and see how things go with BERT, and assuming no problems to repeat those edits across all model classes? Yeah! I'm also going to open an issue with the `good first issue` tag and try to get contributors in to replicate the change over other models. After a month or so, I will update any missing model.
transformers
15,906
closed
[Fix link in pipeline doc]
# What does this PR do? Fixes link in pipeline doc.
03-03-2022 10:00:41
03-03-2022 10:00:41
The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_15906). All of your documentation changes will be reflected on that endpoint.
transformers
15,905
closed
Add vision models to doc tests
# What does this PR do? This PR adds all vision models to the doc tests (and fixes the code examples of `xxxForMaskedImageModeling` models). Thanks @patrickvonplaten for reviving these.
03-03-2022 08:12:44
03-03-2022 08:12:44
transformers
15,904
closed
Incomplete padding support for Funnel Transformer.
Looks like there's a tiny bug in the Funnel Transformer implementation. https://github.com/huggingface/transformers/blob/39249c9589f5eb9677807e74221e2eb3ea1b4a35/src/transformers/models/funnel/modeling_funnel.py#L783 Should be: ```python module.word_embeddings.weight.data[module.word_embeddings.padding_idx].zero_() ``` Though `pad_token_id` is not a field in `FunnelConfig`, `FunnelEmbeddings` does use the field if it's present in the config. I could make a PR to fix the line and add a default `config.pad_token_id = None` assignment?
03-03-2022 08:10:02
03-03-2022 08:10:02
This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread. Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored.
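As a sketch only (not the patch that eventually addressed this), the guard implied by the report above could look like the following; the helper name is made up and a plain `nn.Embedding` stands in for `FunnelEmbeddings`.

```python
import torch.nn as nn


def init_word_embeddings(module: nn.Embedding, std: float = 0.02) -> None:
    # Index `padding_idx` on the weight tensor itself, and skip the zeroing
    # entirely when no padding index is configured, which is the fix the
    # issue proposes for the linked line in modeling_funnel.py.
    module.weight.data.normal_(mean=0.0, std=std)
    if module.padding_idx is not None:
        module.weight.data[module.padding_idx].zero_()


emb = nn.Embedding(10, 4, padding_idx=0)
init_word_embeddings(emb)
print(emb.weight[0])  # the padding row is all zeros
```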
transformers
15,903
closed
Fix doc links in release utils
# What does this PR do? This PR fixes the link replacements in the release utils, since the links changed with the new doc front.
03-02-2022 22:51:45
03-02-2022 22:51:45
The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_15903). All of your documentation changes will be reflected on that endpoint.
transformers
15,902
open
[WIP] Add Fusion-in-Decoder
# What does this PR do? This PR adds the Fusion-in-Decoder model to the repository. Paper: https://arxiv.org/abs/2007.01282 Code: https://github.com/facebookresearch/FiD ## Who can review? Anyone in the community is free to review the PR once the tests have passed. @patil-suraj, @patrickvonplaten, @qqaatw
03-02-2022 19:50:46
03-02-2022 19:50:46
The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_15902). All of your documentation changes will be reflected on that endpoint.<|||||>In their [model](https://github.com/facebookresearch/FiD/blob/main/src/model.py) code, they are using `EncoderWrapper` and `CheckpointWrapper` on top of `T5ForConditionalGeneration`. The model loads without adding them too but then it gives warnings like: > Some weights of the model checkpoint at ../../../FiD/pretrained_models/nq_reader_base/ were not used when initializing T5ForConditionalGeneration: ['encoder.encoder.block.10.module.layer.0.SelfAttention.k.weight', 'encoder.encoder.block.3.module.layer.0.SelfAttention.q.weight', 'encoder.encoder.block.0.module.layer.0.layer_norm.weight', 'encoder.encoder.block.10.module.layer.0.SelfAttention.o.weight', 'encoder.encoder.block.4.module.layer.1.DenseReluDense.wi.weight', 'encoder.encoder.block.11.module.layer.0.SelfAttention.v.weight', 'encoder.encoder.block.11.module.layer.0.layer_norm.weight', 'encoder.encoder.block.1.module.layer.1.DenseReluDense.wi.weight', 'encoder.encoder.block.4.module.layer.1.layer_norm.weight', 'encoder.encoder.block.5.module.layer.0.SelfAttention.q.weight', 'encoder.encoder.block.6.module.layer.0.SelfAttention.v.weight', 'encoder.encoder.block.5.module.layer.0.SelfAttention.v.weight', 'encoder.encoder.block.1.module.layer.1.DenseReluDense.wo.weight', 'encoder.encoder.block.4.module.layer.0.SelfAttention.q.weight', 'encoder.encoder.block.1.module.layer.0.SelfAttention.v.weight', 'encoder.encoder.block.2.module.layer.0.SelfAttention.o.weight', 'encoder.encoder.block.8.module.layer.1.layer_norm.weight', 'encoder.encoder.block.7.module.layer.0.layer_norm.weight', 'encoder.encoder.block.6.module.layer.1.DenseReluDense.wi.weight', 'encoder.encoder.block.10.module.layer.0.SelfAttention.q.weight', 'encoder.encoder.block.4.module.layer.0.SelfAttention.k.weight', 'encoder.encoder.block.9.module.layer.1.layer_norm.weight', 'encoder.encoder.block.9.module.layer.0.SelfAttention.o.weight', 'encoder.encoder.block.3.module.layer.1.DenseReluDense.wo.weight', 'encoder.encoder.block.2.module.layer.0.layer_norm.weight', 'encoder.encoder.block.11.module.layer.0.SelfAttention.o.weight', 'encoder.encoder.block.10.module.layer.0.layer_norm.weight', 'encoder.encoder.block.4.module.layer.0.SelfAttention.o.weight', 'encoder.encoder.block.9.module.layer.1.DenseReluDense.wo.weight', 'encoder.encoder.block.5.module.layer.0.SelfAttention.k.weight', 'encoder.encoder.block.6.module.layer.0.SelfAttention.k.weight', 'encoder.encoder.block.4.module.layer.1.DenseReluDense.wo.weight', 'encoder.encoder.block.1.module.layer.0.SelfAttention.q.weight', 'encoder.encoder.block.0.module.layer.1.DenseReluDense.wi.weight', 'encoder.encoder.block.3.module.layer.0.layer_norm.weight', 'encoder.encoder.block.3.module.layer.1.DenseReluDense.wi.weight', 'encoder.encoder.block.0.module.layer.0.SelfAttention.k.weight', 'encoder.encoder.block.9.module.layer.0.layer_norm.weight', 'encoder.encoder.block.2.module.layer.0.SelfAttention.k.weight', 'encoder.encoder.block.2.module.layer.0.SelfAttention.v.weight', 'encoder.encoder.block.0.module.layer.1.layer_norm.weight', 'encoder.encoder.block.1.module.layer.0.layer_norm.weight', 'encoder.encoder.block.5.module.layer.0.SelfAttention.o.weight', 'encoder.encoder.block.11.module.layer.0.SelfAttention.q.weight', 'encoder.encoder.block.5.module.layer.1.DenseReluDense.wi.weight', 'encoder.encoder.block.2.module.layer.1.layer_norm.weight', 
'encoder.encoder.block.5.module.layer.0.layer_norm.weight', 'encoder.encoder.block.5.module.layer.1.DenseReluDense.wo.weight', 'encoder.encoder.block.3.module.layer.0.SelfAttention.k.weight', 'encoder.encoder.block.3.module.layer.1.layer_norm.weight', 'encoder.encoder.block.6.module.layer.0.layer_norm.weight', 'encoder.encoder.block.0.module.layer.0.SelfAttention.q.weight', 'encoder.encoder.block.8.module.layer.0.SelfAttention.q.weight', 'encoder.encoder.block.0.module.layer.0.SelfAttention.relative_attention_bias.weight', 'encoder.encoder.block.5.module.layer.1.layer_norm.weight', 'encoder.encoder.block.11.module.layer.1.DenseReluDense.wo.weight', 'encoder.encoder.block.1.module.layer.1.layer_norm.weight', 'encoder.encoder.block.6.module.layer.1.layer_norm.weight', 'encoder.encoder.block.7.module.layer.0.SelfAttention.k.weight', 'encoder.encoder.block.0.module.layer.0.SelfAttention.v.weight', 'encoder.encoder.block.10.module.layer.1.DenseReluDense.wo.weight', 'encoder.encoder.block.7.module.layer.0.SelfAttention.v.weight', 'encoder.encoder.block.3.module.layer.0.SelfAttention.v.weight', 'encoder.encoder.block.9.module.layer.1.DenseReluDense.wi.weight', 'encoder.encoder.block.10.module.layer.1.DenseReluDense.wi.weight', 'encoder.encoder.block.2.module.layer.1.DenseReluDense.wo.weight', 'encoder.encoder.block.0.module.layer.0.SelfAttention.o.weight', 'encoder.encoder.block.1.module.layer.0.SelfAttention.k.weight', 'encoder.encoder.block.4.module.layer.0.layer_norm.weight', 'encoder.encoder.block.2.module.layer.1.DenseReluDense.wi.weight', 'encoder.encoder.block.9.module.layer.0.SelfAttention.k.weight', 'encoder.encoder.final_layer_norm.weight', 'encoder.encoder.block.6.module.layer.1.DenseReluDense.wo.weight', 'encoder.encoder.block.6.module.layer.0.SelfAttention.q.weight', 'encoder.encoder.block.10.module.layer.0.SelfAttention.v.weight', 'encoder.encoder.block.11.module.layer.0.SelfAttention.k.weight', 'encoder.encoder.block.7.module.layer.0.SelfAttention.q.weight', 'encoder.encoder.block.7.module.layer.0.SelfAttention.o.weight', 'encoder.encoder.block.10.module.layer.1.layer_norm.weight', 'encoder.encoder.block.9.module.layer.0.SelfAttention.v.weight', 'encoder.encoder.block.3.module.layer.0.SelfAttention.o.weight', 'encoder.encoder.block.11.module.layer.1.layer_norm.weight', 'encoder.encoder.block.8.module.layer.0.SelfAttention.o.weight', 'encoder.encoder.block.4.module.layer.0.SelfAttention.v.weight', 'encoder.encoder.block.6.module.layer.0.SelfAttention.o.weight', 'encoder.encoder.block.7.module.layer.1.DenseReluDense.wi.weight', 'encoder.encoder.block.7.module.layer.1.DenseReluDense.wo.weight', 'encoder.encoder.block.7.module.layer.1.layer_norm.weight', 'encoder.encoder.block.0.module.layer.1.DenseReluDense.wo.weight', 'encoder.encoder.block.8.module.layer.1.DenseReluDense.wi.weight', 'encoder.encoder.block.1.module.layer.0.SelfAttention.o.weight', 'encoder.encoder.block.9.module.layer.0.SelfAttention.q.weight', 'encoder.encoder.block.8.module.layer.0.SelfAttention.v.weight', 'encoder.encoder.block.8.module.layer.0.SelfAttention.k.weight', 'encoder.encoder.block.8.module.layer.1.DenseReluDense.wo.weight', 'encoder.encoder.block.2.module.layer.0.SelfAttention.q.weight', 'encoder.encoder.block.8.module.layer.0.layer_norm.weight', 'encoder.encoder.embed_tokens.weight', 'encoder.encoder.block.11.module.layer.1.DenseReluDense.wi.weight'] This IS expected if you are initializing T5ForConditionalGeneration from the checkpoint of a model trained on another task or with another architecture 
(e.g. initializing a BertForSequenceClassification model from a BertForPreTraining model). This IS NOT expected if you are initializing T5ForConditionalGeneration from the checkpoint of a model that you expect to be exactly identical (initializing a BertForSequenceClassification model from a BertForSequenceClassification model). > Some weights of T5ForConditionalGeneration were not initialized from the model checkpoint at ../../../FiD/pretrained_models/nq_reader_base/ and are newly initialized: ['encoder.block.1.layer.0.SelfAttention.k.weight', 'encoder.block.11.layer.1.layer_norm.weight', 'encoder.block.11.layer.1.DenseReluDense.wo.weight', 'encoder.block.11.layer.1.DenseReluDense.wi.weight', 'encoder.block.4.layer.0.SelfAttention.v.weight', 'encoder.block.6.layer.1.DenseReluDense.wi.weight', 'encoder.block.2.layer.0.layer_norm.weight', 'encoder.block.5.layer.1.DenseReluDense.wo.weight', 'encoder.block.10.layer.0.SelfAttention.o.weight', 'encoder.block.11.layer.0.SelfAttention.q.weight', 'encoder.block.11.layer.0.layer_norm.weight', 'encoder.block.8.layer.0.layer_norm.weight', 'encoder.block.1.layer.0.SelfAttention.v.weight', 'encoder.final_layer_norm.weight', 'encoder.block.3.layer.1.layer_norm.weight', 'encoder.block.5.layer.0.layer_norm.weight', 'encoder.block.0.layer.1.DenseReluDense.wo.weight', 'encoder.block.2.layer.1.DenseReluDense.wi.weight', 'encoder.block.9.layer.1.layer_norm.weight', 'encoder.block.6.layer.1.layer_norm.weight', 'encoder.block.10.layer.1.DenseReluDense.wo.weight', 'encoder.block.2.layer.0.SelfAttention.o.weight', 'encoder.block.7.layer.1.DenseReluDense.wi.weight', 'encoder.block.5.layer.1.DenseReluDense.wi.weight', 'encoder.block.9.layer.0.SelfAttention.o.weight', 'encoder.block.10.layer.0.SelfAttention.v.weight', 'encoder.block.2.layer.0.SelfAttention.k.weight', 'encoder.block.7.layer.0.SelfAttention.q.weight', 'encoder.block.0.layer.1.layer_norm.weight', 'encoder.block.10.layer.0.SelfAttention.k.weight', 'encoder.block.7.layer.0.SelfAttention.v.weight', 'encoder.block.3.layer.0.SelfAttention.v.weight', 'encoder.block.4.layer.0.SelfAttention.k.weight', 'encoder.block.7.layer.0.SelfAttention.o.weight', 'encoder.block.1.layer.1.DenseReluDense.wi.weight', 'encoder.block.0.layer.1.DenseReluDense.wi.weight', 'encoder.block.0.layer.0.SelfAttention.v.weight', 'encoder.block.5.layer.1.layer_norm.weight', 'encoder.block.5.layer.0.SelfAttention.o.weight', 'encoder.block.11.layer.0.SelfAttention.k.weight', 'encoder.block.11.layer.0.SelfAttention.v.weight', 'encoder.block.9.layer.0.SelfAttention.q.weight', 'encoder.block.7.layer.1.DenseReluDense.wo.weight', 'encoder.block.5.layer.0.SelfAttention.q.weight', 'encoder.block.8.layer.1.DenseReluDense.wi.weight', 'encoder.block.4.layer.0.SelfAttention.o.weight', 'encoder.block.3.layer.0.SelfAttention.k.weight', 'encoder.block.9.layer.1.DenseReluDense.wo.weight', 'encoder.block.10.layer.0.layer_norm.weight', 'encoder.block.6.layer.1.DenseReluDense.wo.weight', 'encoder.block.8.layer.0.SelfAttention.v.weight', 'encoder.block.0.layer.0.SelfAttention.relative_attention_bias.weight', 'encoder.block.7.layer.0.SelfAttention.k.weight', 'encoder.block.6.layer.0.SelfAttention.q.weight', 'encoder.block.7.layer.0.layer_norm.weight', 'encoder.block.5.layer.0.SelfAttention.k.weight', 'encoder.block.11.layer.0.SelfAttention.o.weight', 'encoder.block.4.layer.0.SelfAttention.q.weight', 'encoder.block.7.layer.1.layer_norm.weight', 'encoder.block.2.layer.1.layer_norm.weight', 'encoder.block.5.layer.0.SelfAttention.v.weight', 
'encoder.block.3.layer.1.DenseReluDense.wi.weight', 'encoder.block.0.layer.0.layer_norm.weight', 'encoder.block.6.layer.0.SelfAttention.o.weight', 'encoder.block.3.layer.0.SelfAttention.o.weight', 'encoder.block.3.layer.0.layer_norm.weight', 'encoder.block.10.layer.1.DenseReluDense.wi.weight', 'encoder.block.4.layer.1.DenseReluDense.wo.weight', 'encoder.block.9.layer.0.SelfAttention.k.weight', 'encoder.block.2.layer.0.SelfAttention.v.weight', 'encoder.block.1.layer.0.layer_norm.weight', 'encoder.block.1.layer.0.SelfAttention.o.weight', 'encoder.block.2.layer.0.SelfAttention.q.weight', 'encoder.block.8.layer.1.DenseReluDense.wo.weight', 'encoder.block.2.layer.1.DenseReluDense.wo.weight', 'encoder.block.9.layer.1.DenseReluDense.wi.weight', 'encoder.block.6.layer.0.SelfAttention.v.weight', 'encoder.block.9.layer.0.layer_norm.weight', 'encoder.block.8.layer.0.SelfAttention.q.weight', 'encoder.block.1.layer.0.SelfAttention.q.weight', 'encoder.block.8.layer.0.SelfAttention.o.weight', 'encoder.block.10.layer.1.layer_norm.weight', 'encoder.block.0.layer.0.SelfAttention.o.weight', 'encoder.block.1.layer.1.layer_norm.weight', 'encoder.block.6.layer.0.layer_norm.weight', 'encoder.block.3.layer.1.DenseReluDense.wo.weight', 'encoder.block.8.layer.0.SelfAttention.k.weight', 'encoder.block.6.layer.0.SelfAttention.k.weight', 'encoder.block.1.layer.1.DenseReluDense.wo.weight', 'encoder.block.8.layer.1.layer_norm.weight', 'encoder.block.0.layer.0.SelfAttention.q.weight', 'encoder.block.9.layer.0.SelfAttention.v.weight', 'encoder.block.4.layer.0.layer_norm.weight', 'encoder.block.4.layer.1.DenseReluDense.wi.weight', 'encoder.block.4.layer.1.layer_norm.weight', 'encoder.block.10.layer.0.SelfAttention.q.weight', 'encoder.block.0.layer.0.SelfAttention.k.weight', 'encoder.block.3.layer.0.SelfAttention.q.weight'] You should probably TRAIN this model on a down-stream task to be able to use it for predictions and inference. Adding those modules on top of `T5ForConditionalGeneration` would be the right thing to do? If yes, then are there any existing examples I can look into to understand how to implement it?<|||||>Think we should rename the weigths here then -> @patil-suraj think you know best how to guide @bhavitvyamalik here
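One way to act on the "rename the weights" suggestion above is to rewrite the checkpoint keys before loading them into `T5ForConditionalGeneration`. This is a hedged sketch, not an agreed-upon conversion script: the file path is a placeholder and the prefixes to strip are inferred from the warning messages quoted above.

```python
import torch
from transformers import T5ForConditionalGeneration

# Path to the original FiD checkpoint (placeholder).
state_dict = torch.load("nq_reader_base/pytorch_model.bin", map_location="cpu")

renamed = {}
for key, value in state_dict.items():
    # FiD's EncoderWrapper/CheckpointWrapper add an extra "encoder." prefix and a
    # ".module" segment that plain T5 does not have, e.g.
    # "encoder.encoder.block.0.module.layer.0...." -> "encoder.block.0.layer.0...."
    new_key = key.replace("encoder.encoder.", "encoder.").replace(".module.", ".")
    renamed[new_key] = value

model = T5ForConditionalGeneration.from_pretrained("t5-base")
missing, unexpected = model.load_state_dict(renamed, strict=False)
print("still missing:", missing)
print("still unexpected:", unexpected)
```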
transformers
15,901
closed
tiny tweak to allow BatchEncoding.token_to_char when token doesn't correspond to chars
Tagging @n1t0 @thomwolf @sgugger but this PR should be extremely quick to review for anyone. # Problem `BatchEncoding.token_to_chars` is supposed to return the char spans in the original string; however, right now, for tokens such as "\<s>, \</s>, \<CLS>" that don't correspond to any chars in the original string, an error is raised `TypeError: type object argument after * must be an iterable, not NoneType`. Run the following snippet to replicate: ``` from transformers import AutoTokenizer, AutoModel model_name = "bert-base-uncased" tokenizer = AutoTokenizer.from_pretrained(model_name) model = AutoModel.from_pretrained(model_name) text = "He is an absolutely amazing software developer" tokenized_text = tokenizer(text) tokenized_text.token_to_chars(0) # 0 corresponds to <CLS> ``` # Fix The solution is to return `None` instead of raising an error for tokens not corresponding to any chars in the original string. ## P.S. I am lost as to why `run_tests_torch` failed for ` if [ -f test_list.txt ]; then python -m pytest -n 3 --dist=loadfile -s --make-reports=tests_torch $(cat test_list.txt) | tee tests_output.txt fi`. Some help would be appreciated.
03-02-2022 19:42:10
03-02-2022 19:42:10
_The documentation is not available anymore as the PR was closed or merged._<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread. Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored.<|||||>Can I follow up on this? Would love to get some feedback -- even if it's about the specific reasons why this PR is not up to standard. Thanks!<|||||>Sorry for taking so long to review, could you just rebase on `main` so that the test suite passes? cc @SaulLu for knowledge<|||||>Done! @sgugger @LysandreJik <|||||>Thanks again for your contribution!
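With the behaviour proposed in this PR, calling code can simply skip tokens that have no character span instead of catching a `TypeError`. A short usage sketch (the checkpoint is just an example):

```python
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
text = "He is an absolutely amazing software developer"
encoding = tokenizer(text)

for i, token_id in enumerate(encoding["input_ids"]):
    span = encoding.token_to_chars(i)
    if span is None:
        # Special tokens such as [CLS] and [SEP] map to no characters.
        print(f"token {i} ({tokenizer.convert_ids_to_tokens(token_id)}): no span")
    else:
        print(f"token {i}: {text[span.start:span.end]!r} at {span.start}:{span.end}")
```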
transformers
15,900
closed
Add missing support for Flax XLM-RoBERTa
# What does this PR do? The PyTorch and TF XLM-RoBERTa model and the Flax model for RoBERTa were added a while ago, but there were still a few missing pieces in order to be able to run pre-training and other tasks using Flax XLM-RoBERTa. This PR adds those parts. Fixes #15910. ## Before submitting - [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case). - [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section? - [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link to it if that's the case. - [x] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/master/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/master/docs#writing-source-documentation). - [x] Did you write any new necessary tests? ## Who can review? @LysandreJik, @patrickvonplaten
03-02-2022 17:02:26
03-02-2022 17:02:26
The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_15900). All of your documentation changes will be reflected on that endpoint.<|||||>Thanks a lot @versae !<|||||>I ran the test using the PyTorch weights and it seems to work. I'm also already training a model using a TPUv3-8 with the code in this PR and it seems to be doing fine :) <|||||>Thanks @versae ! Uploaded flax checkpoint for base model, and can confirm that the tests are passing. Merging!
transformers
15,899
closed
Missing tags on Docker Hub after version 4.9.1
## Environment info - `transformers` version: > 4.9.1 ### Who can help @mfuntowicz ? ## Information https://hub.docker.com/r/huggingface/transformers-pytorch-gpu/tags I was expecting 1 tag per transformers version on the Docker Hub, but the most recent tag (aside from `latest`) is for 4.9.1 and was pushed on July 26, 2021. ## To reproduce Steps to reproduce the behavior (and see existing tags): 1. Either visit https://hub.docker.com/r/huggingface/transformers-pytorch-gpu/tags and sort by Newest 2. Or run `wget -q https://registry.hub.docker.com/v1/repositories/huggingface/transformers-pytorch-gpu/tags -O - | sed -e 's/[][]//g' -e 's/"//g' -e 's/ //g' | tr '}' '\n' | awk -F: '{print $3}'` Current tags available: ``` latest 2.10.0 2.11.0 2.6.0 2.7.0 2.8.0 2.9.0 2.9.1 3.0.0 3.0.1 3.0.2 3.1.0 3.2.0 3.3.0 3.3.1 3.4.0 3.5.0 3.5.1 4 4.0.0 4.0.1 4.1.0 4.1.1 4.2.0 4.2.1 4.2.2 4.3.0 4.3.1 4.3.2 4.3.3 4.4.0 4.4.1 4.4.2 4.5.0 4.5.1 4.6.1 4.7.0 4.8.0 4.8.1 4.8.2 4.9.0 4.9.1 ``` ## Expected behavior Each time an image is pushed with the `latest` tag, it could also be tagged with the current version number, like it was before version 4.9.1. (I was expecting to see 4.16.2 for example)
03-02-2022 16:54:44
03-02-2022 16:54:44
cc'ing @mfuntowicz (among others)<|||||>Hey! We indeed stopped pushing images a while back, but we've started doing so again a few weeks ago. It makes sense to push images with the versions. Will try to make some time and do so.<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread. Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored.
transformers
15,898
closed
Error using evaluation in run_clm.py
## Environment info <!-- You can run the command `transformers-cli env` and copy-and-paste its output below. Don't forget to fill out the missing fields in that output! --> - `transformers` version: 4.17.0.dev0 - Platform: Linux-4.18.0-193.el8.x86_64-x86_64-with-glibc2.10 - Python version: 3.8.3 - PyTorch version (GPU?): 1.10.2+cu113 (True) - Tensorflow version (GPU?): not installed (NA) - Flax version (CPU?/GPU?/TPU?): not installed (NA) - Jax version: not installed - JaxLib version: not installed - Using GPU in script?: Yes - Using distributed or parallel set-up in script?: Yes ### Who can help @stas00, @patil-suraj <!-- Your issue will be replied to more quickly if you can figure out the right person to tag with @ If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**. Please tag fewer than 3 people. Models: - ALBERT, BERT, XLM, DeBERTa, DeBERTa-v2, ELECTRA, MobileBert, SqueezeBert: @LysandreJik - T5, BART, Marian, Pegasus, EncoderDecoder: @patrickvonplaten - Blenderbot, MBART: @patil-suraj - Longformer, Reformer, TransfoXL, XLNet, FNet, BigBird: @patrickvonplaten - FSMT: @stas00 - Funnel: @sgugger - GPT-2, GPT: @patrickvonplaten, @LysandreJik - RAG, DPR: @patrickvonplaten, @lhoestq - TensorFlow: @Rocketknight1 - JAX/Flax: @patil-suraj - TAPAS, LayoutLM, LayoutLMv2, LUKE, ViT, BEiT, DEiT, DETR, CANINE: @NielsRogge - GPT-Neo, GPT-J, CLIP: @patil-suraj - Wav2Vec2, HuBERT, SpeechEncoderDecoder, UniSpeech, UniSpeechSAT, SEW, SEW-D, Speech2Text: @patrickvonplaten, @anton-l If the model isn't in the list, ping @LysandreJik who will redirect you to the correct contributor. Library: - Benchmarks: @patrickvonplaten - Deepspeed: @stas00 - Ray/raytune: @richardliaw, @amogkam - Text generation: @patrickvonplaten @narsil - Tokenizers: @SaulLu - Trainer: @sgugger - Pipelines: @Narsil - Speech: @patrickvonplaten, @anton-l - Vision: @NielsRogge, @sgugger Documentation: @sgugger Model hub: - for issues with a model, report at https://discuss.huggingface.co/ and tag the model's creator. HF projects: - datasets: [different repo](https://github.com/huggingface/datasets) - rust tokenizers: [different repo](https://github.com/huggingface/tokenizers) Examples: - maintained examples (not research project or legacy): @sgugger, @patil-suraj For research projetcs, please ping the contributor directly. For example, on the following projects: - research_projects/bert-loses-patience: @JetRunner - research_projects/distillation: @VictorSanh --> ## Information Model I am using: GPT-J-6B Running on GPU cluster with 10 x NVIDIA A100 40G The problem arises when using: * [x] the official example scripts: (give details below) * [ ] my own modified scripts: (give details below) The tasks I am working on is: * [ ] an official GLUE/SQUaD task: (give the name) * [x] my own task or dataset: (give details below) Purpose is to fine-tune GPT-J for generating smart contract code. ## To reproduce Steps to reproduce the behavior: Run example training script `transformers/examples/pytorch/language-modeling/run_clm.py`. 
HF launch script: ```bash deepspeed --num_gpus=10 ./examples/pytorch/language-modeling/run_clm.py \ --deepspeed ./ds_config.json \ --model_name_or_path EleutherAI/gpt-j-6B \ --run_name gpt-j \ --dataset_name andstor/smart_contracts \ --dataset_config_name andstor--smart_contracts \ --output_dir ./finetuned \ --save_steps 100 \ --report_to all \ --logging_first_step \ --logging_steps 5 \ --evaluation_strategy steps \ --eval_steps 5 \ --block_size 1024 \ --do_train \ --do_eval \ --fp16 true \ --num_train_epochs 2 \ --gradient_accumulation_steps 2 \ --eval_accumulation_steps 2 \ --per_device_train_batch_size 1 \ --per_device_eval_batch_size 1 ``` DeepSpeed config: ```json { "fp16": { "enabled": "auto", "loss_scale": 0, "loss_scale_window": 1000, "initial_scale_power": 16, "hysteresis": 2, "min_loss_scale": 1 }, "optimizer": { "type": "AdamW", "params": { "lr": "auto", "betas": "auto", "eps": "auto", "weight_decay": "auto" } }, "scheduler": { "type": "WarmupLR", "params": { "warmup_min_lr": "auto", "warmup_max_lr": "auto", "warmup_num_steps": "auto" } }, "zero_optimization": { "stage": 2, "allgather_partitions": true, "allgather_bucket_size": 2e8, "overlap_comm": true, "reduce_scatter": true, "reduce_bucket_size": 2e8, "contiguous_gradients": true, "cpu_offload": true }, "gradient_accumulation_steps": "auto", "gradient_clipping": "auto", "train_batch_size": "auto", "train_micro_batch_size_per_gpu": "auto", "wall_clock_breakdown": false } ``` Fails with this error: ```python-traceback Traceback (most recent call last): File "./examples/pytorch/language-modeling/run_clm.py", line 541, in <module> main() File "./examples/pytorch/language-modeling/run_clm.py", line 489, in main train_result = trainer.train(resume_from_checkpoint=checkpoint) File "/cluster/home/andstorh/transformers/.venv/lib/python3.8/site-packages/transformers/trainer.py", line 1473, in train self._maybe_log_save_evaluate(tr_loss, model, trial, epoch, ignore_keys_for_eval) File "/cluster/home/andstorh/transformers/.venv/lib/python3.8/site-packages/transformers/trainer.py", line 1600, in _maybe_log_save_evaluate metrics = self.evaluate(ignore_keys=ignore_keys_for_eval) File "/cluster/home/andstorh/transformers/.venv/lib/python3.8/site-packages/transformers/trainer.py", line 2255, in evaluate output = eval_loop( File "/cluster/home/andstorh/transformers/.venv/lib/python3.8/site-packages/transformers/trainer.py", line 2446, in evaluation_loop logits = self.preprocess_logits_for_metrics(logits, labels) File "./examples/pytorch/language-modeling/run_clm.py", line 457, in preprocess_logits_for_metrics return logits.argmax(dim=-1) AttributeError: 'tuple' object has no attribute 'argmax' ``` The model runs fine without evaluation turned on. <!-- If you have code snippets, error messages, stack traces please provide them here as well. Important! Use code tags to correctly format your code. See https://help.github.com/en/github/writing-on-github/creating-and-highlighting-code-blocks#syntax-highlighting Do not use screenshots, as they are hard to read and (more importantly) don't allow others to copy-and-paste your code.--> ## Expected behavior The example script should run without producing an error.
03-02-2022 16:23:37
03-02-2022 16:23:37
Most likely this PR https://github.com/huggingface/transformers/pull/15473 introduced the breakage, since this code didn't exist before: https://github.com/huggingface/transformers/blob/130b987880a9b1ade5c76dc1413c12c8924fda50/examples/pytorch/language-modeling/run_clm.py#L456-L457 Most likely the new code has been merged w/o a test exercising this particular code path and your use case triggered the issue. pinging the author: @davidleonfdez and the reviewer: @sgugger to unblock you, @andstor, until this is sorted out please switch to a commit before that PR, that is: ``` git clone https://github.com/huggingface/transformers cd transformers git checkout 4f5faaf04407d4 ```<|||||>Indeed, the problem comes from the model returning more than one logit (it has `use_cache` set to `True` in its config) which we didn't anticipate in that PR. I will send a fix when I have time.<|||||>Thanks ❤️ Switch to the commit before that PR did do the trick 👌<|||||>> Indeed, the problem comes from the model returning more than one logit (it has `use_cache` set to `True` in its config) which we didn't anticipate in that PR. I will send a fix when I have time. Sorry, maybe I wasn't as careful with the examples as I should have been 😞. I've just learned about `past_key_values`. I had tested the example with GPT2, whose config has `keys_to_ignore_at_inference = ["past_key_values"]`, so it doesn't return a tuple. I can try to fix it. <|||||>@davidleonfdez If you want to work on a fix, look at how the `compute_metrics` in the `run_glue` script is defined. I believe you just need to add a similar test as [this one](https://github.com/huggingface/transformers/blob/4cd7ed4b3b7360aef3a9fb16dfcc105001188717/examples/pytorch/text-classification/run_glue.py#L448) at the beginning of the `preprocess_logits_for_metrics` function for the case where the model returns more than one logit.<|||||>> @davidleonfdez If you want to work on a fix, look at how the `compute_metrics` in the `run_glue` script is defined. I believe you just need to add a similar test as [this one](https://github.com/huggingface/transformers/blob/4cd7ed4b3b7360aef3a9fb16dfcc105001188717/examples/pytorch/text-classification/run_glue.py#L448) at the beginning of the `preprocess_logits_for_metrics` function for the case where the model returns more than one logit. Thanks!
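Following the pointer to the `run_glue` pattern above, the eventual fix presumably amounts to guarding against tuple outputs before taking the argmax; a sketch, not the merged patch:

```python
def preprocess_logits_for_metrics(logits, labels):
    if isinstance(logits, tuple):
        # Models configured with `use_cache=True` (such as GPT-J here) return
        # (logits, past_key_values, ...); keep only the logits tensor.
        logits = logits[0]
    return logits.argmax(dim=-1)
```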
transformers
15,897
closed
Update readme with how to train offline and fix BPE command
# What does this PR do? Just some minor updates to the README with how to do offline training and fixed the BPE command to allow for easy copy and paste. @lvwerra
03-02-2022 16:20:12
03-02-2022 16:20:12
_The documentation is not available anymore as the PR was closed or merged._<|||||>The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_15897). All of your documentation changes will be reflected on that endpoint.<|||||>@lvwerra thanks for the edits, it reads a lot better!<|||||>Awesome, thanks for integrating the changes. There seems to be one comment still open? Do you disagree there or should we integrate it as well?<|||||>@lvwerra whoops, completely missed that comment 😅 just added it 😃<|||||>@ncoop57 Is it possible that you just closed the comment without applying it? 😃 <|||||>> @ncoop57 Is it possible that you just closed the comment without applying it? 😃 🤦 this is why I should not manage PRs from the GitHub app 😅. Okay @lvwerra , I think I committed all the changes 🤞<|||||>Looks good - feel free to merge it when you are ready and all the checks have passed. 🚀
transformers
15,896
closed
Fix a TF Vision Encoder Decoder test
# What does this PR do? Simple fix for the following CI failure: ``` tests.vision_encoder_decoder.test_modeling_tf_vision_encoder_decoder.TFViT2GPT2EncoderDecoderModelTest.test_pt_tf_equivalence ``` This is due to a PyTorch device issue (`pt_inputs` need to be on the correct device). Tested on a GCP GPU VM - it passes now. @patrickvonplaten @sgugger
03-02-2022 16:19:52
03-02-2022 16:19:52
The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_15896). All of your documentation changes will be reflected on that endpoint.<|||||>Probably wait for @patrickvonplaten too, but once ready, should I merge this PR myself? (or should it still be one of you to do so?)<|||||>You can merge your own PRs as long as you have the approvals you want :-)
transformers
15,895
closed
Fix SegformerForImageClassification
# What does this PR do? After #15889 and removing the deprecated `reshape_last_stage` attribute for 6 checkpoints on the hub, this PR makes sure `SegformerForImageClassification` properly uses the features from the backbone in order to compute the image class.
03-02-2022 16:03:04
03-02-2022 16:03:04
The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_15895). All of your documentation changes will be reflected on that endpoint.
transformers
15,894
closed
testing github pr
# What does this PR do?
03-02-2022 15:32:19
03-02-2022 15:32:19
The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_15894). All of your documentation changes will be reflected on that endpoint.
transformers
15,893
closed
Updating the slow tests:
Linked to https://github.com/huggingface/transformers/pull/15826
03-02-2022 15:23:40
03-02-2022 15:23:40
The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_15893). All of your documentation changes will be reflected on that endpoint.<|||||>@NielsRogge can you also take a look here?<|||||>Not up to date enough here with Vision to give a good review here
transformers
15,892
closed
[XGLM] run sampling test on CPU to be deterministic
# What does this PR do? This PR makes `test_xglm_sample` run on CPU to make it deterministic and pass.
03-02-2022 15:22:07
03-02-2022 15:22:07
Think you can rebase to master to solve the failing test
transformers
15,891
closed
Update delete-dev-doc job to match build-dev-doc
# What does this PR do? This PR fixes the `delete-dev-doc` job to use the same runner as `build-dev-doc`. It also fixes the `build-dev-doc` job failures that happen if the `doc-build-dev` repo got an update while the doc was being built, and avoids doing the stash when it's not necessary (which was the bug #15882 was trying to fix). In passing, it makes the commit messages a little bit better (the shas pushed during the dev doc updates are the shas of the merges, not the commits, and in the delete job, the variable picked always returned merge).
03-02-2022 15:21:07
03-02-2022 15:21:07
The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_15891). All of your documentation changes will be reflected on that endpoint.<|||||>Were you able to fix them? I was trying to fix the `delete_doc_job` as well in https://github.com/huggingface/transformers/pull/15894/files with no success https://github.com/huggingface/transformers/pull/15894/files#r817899504<|||||>Working on it :-)
transformers
15,890
closed
The tests were not updated after the addition of `torch.diag`
in the scoring (which is more correct) # What does this PR do? Fixes 2 slow tests.
03-02-2022 15:13:13
03-02-2022 15:13:13
The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_15890). All of your documentation changes will be reflected on that endpoint.<|||||>Not up to date enough here with Vision to give a good review here
transformers
15,889
closed
[SegFormer] Add deprecation warning
# What does this PR do? This PR takes a lighter approach compared to #15748. Rather than removing the `reshape_last_stage` argument right away, this PR instead tells users that this argument is deprecated and will soon be removed. After a few weeks, we can then merge #15748.
03-02-2022 14:25:42
03-02-2022 14:25:42
The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_15889). All of your documentation changes will be reflected on that endpoint.
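A deprecation warning of the kind described in this PR is typically just a `FutureWarning` raised when the old argument is still passed. The class name and message below are illustrative only, not the code that was merged:

```python
import warnings


class SegformerConfigSketch:
    def __init__(self, reshape_last_stage=None, **kwargs):
        if reshape_last_stage is not None:
            warnings.warn(
                "`reshape_last_stage` is deprecated and will be removed in a future version.",
                FutureWarning,
            )
        # Keep honouring the old value for now so existing configs still load.
        self.reshape_last_stage = True if reshape_last_stage is None else reshape_last_stage
```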
transformers
15,888
closed
CLIPProcessor with CLIPTokenizerFast
# 🚀 Feature request The current `CLIPProcessor` doesn't support `CLIPTokenizerFast`; it requires `CLIPTokenizer`. In my view, there is no reason not to support `CLIPTokenizerFast` in `CLIPProcessor`. ## Motivation <!-- Please outline the motivation for the proposal. Is your feature request related to a problem? e.g., I'm always frustrated when [...]. If this is related to another GitHub issue, please link here too. --> ## Your contribution https://github.com/huggingface/transformers/blob/v4.16.2/src/transformers/models/clip/processing_clip.py#L23 It should be easy by modifying the code linked above. I think I can contribute.
03-02-2022 11:48:52
03-02-2022 11:48:52
Hey @cosmoquester ! The `CLIPTokenizerFast` was not used in the processor because there was an issue with it which is now fixed, cf #15067 So yes, we can now support `CLIPTokenizerFast` for `CLIPProcessor`. Feel free to open a PR!
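As a quick illustration of what the requested support enables, here is a sketch of building the processor from the fast tokenizer (whether this runs without error depends on the fix discussed above):

```python
from transformers import CLIPFeatureExtractor, CLIPProcessor, CLIPTokenizerFast

tokenizer = CLIPTokenizerFast.from_pretrained("openai/clip-vit-base-patch32")
feature_extractor = CLIPFeatureExtractor.from_pretrained("openai/clip-vit-base-patch32")

# Build the processor from the fast tokenizer instead of the slow CLIPTokenizer.
processor = CLIPProcessor(feature_extractor=feature_extractor, tokenizer=tokenizer)

inputs = processor(text=["a photo of a cat"], return_tensors="pt", padding=True)
print(inputs.keys())
```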
transformers
15,887
closed
Fix Bug in FlaxWav2Vec2 Slow Test
This PR fixes a small bug in the `test_inference_ctc_robust_batched` FlaxWav2Vec2 slow test. Currently, the input processor is run with `return_tensors="pt"` and `truncation=True`: https://github.com/huggingface/transformers/blob/05c237ea94e08786abbac6c6185cfdfa262a8c53/tests/wav2vec2/test_modeling_flax_wav2vec2.py#L387 However, as outlined in `feature_extraction_sequence_utils.py`, when setting `truncation=True`, one must **also** specify `max_length`: https://github.com/huggingface/transformers/blob/6e57a56987ff201747f5f01bbce3ed2c0fda1910/src/transformers/feature_extraction_sequence_utils.py#L326-L327 The modifications switch to `return_tensors="np"` and remove the `truncation` flag. The first change returns the native tensor type for Flax (`np` as opposed to `pt`), and the second change aligns the test with its PyTorch counterpart: https://github.com/huggingface/transformers/blob/6e57a56987ff201747f5f01bbce3ed2c0fda1910/tests/wav2vec2/test_modeling_wav2vec2.py#L1170
03-02-2022 10:41:04
03-02-2022 10:41:04
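For context, the corrected call pattern described in this PR boils down to requesting NumPy tensors and relying on padding rather than truncation without a length. A rough sketch; the checkpoint name, the `from_pt` flag and the dummy audio are assumptions for illustration:

```python
import numpy as np
from transformers import FlaxWav2Vec2ForCTC, Wav2Vec2Processor

checkpoint = "facebook/wav2vec2-large-960h-lv60-self"
processor = Wav2Vec2Processor.from_pretrained(checkpoint)
model = FlaxWav2Vec2ForCTC.from_pretrained(checkpoint, from_pt=True)  # from_pt requires torch installed

# Two dummy utterances of different lengths; `padding=True` makes them batchable
# and `return_tensors="np"` matches the Flax model's native tensor type.
speech = [np.random.randn(16000).astype(np.float32), np.random.randn(24000).astype(np.float32)]
inputs = processor(speech, sampling_rate=16000, return_tensors="np", padding=True)

logits = model(inputs.input_values, attention_mask=inputs.attention_mask).logits
print(processor.batch_decode(np.argmax(logits, axis=-1)))
```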
transformers
15,886
closed
Update CLIPFeatureExtractor to convert PIL image to RGB
# What does this PR do? <!-- - Converts PIL image to `RGB` if processed by CLIPFeatureExtractor. --> Currently, PIL images in `RGBA` format throw an error when being processed by `CLIPFeatureExtractor`. `CLIPFeatureExtractor.normalize()` throws the following error: `.../sentence-transformer/lib/python3.9/site-packages/transformers/image_utils.py", line 185, in normalize return (image - mean) / std ValueError: operands could not be broadcast together with shapes (4,224,224) (3,) ` The original [CLIP model preprocesses PIL images](https://github.com/openai/CLIP/blob/main/clip/clip.py#L74) by converting all PIL images into `RGB` format. ## Who can review? Anyone in the community is free to review the PR once the tests have passed. Feel free to tag members/contributors who may be interested in your PR.
03-02-2022 10:09:54
03-02-2022 10:09:54
The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_15886). All of your documentation changes will be reflected on that endpoint.<|||||>@patil-suraj @sgugger any idea why the `Build dev documentation / build_and_package` check failed? based on the error log, I initially suspected that master branch is out of sync (i.e. there are new commits), however, I just verified that it is up to date 🤔<|||||> You can rebase to master to solve the failing test.<|||||>@patil-suraj I checkout my branch and ran `git rebase master`, it returns `Current branch convert-pil-image-to-rgb-clip-model is up to date.` <|||||>You should rebase with `huggingface/transformers`, for example if your remote is called `upstream` you can run the following commands to rebase. ```bash git fetch upstream git rebase upstream/master ``` and then push.<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread. Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored.<|||||>Hey @hengkuanwee , this slipped through the cracks. Do you maybe want to open new PR for this ?<|||||>hi @patil-suraj, sure i'll open a new PR for this!
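Until a fix along the lines of this PR is merged, a simple workaround is to convert images to RGB before handing them to the feature extractor; a sketch, with a placeholder file name:

```python
from PIL import Image
from transformers import CLIPFeatureExtractor

feature_extractor = CLIPFeatureExtractor.from_pretrained("openai/clip-vit-base-patch32")

image = Image.open("example.png")  # may be RGBA, LA, P, ...
if image.mode != "RGB":
    # Mirrors what the original CLIP preprocessing does before normalization.
    image = image.convert("RGB")

inputs = feature_extractor(images=image, return_tensors="pt")
print(inputs["pixel_values"].shape)  # torch.Size([1, 3, 224, 224])
```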
transformers
15,885
closed
After vocabulary extension the tokenizer keeps on running.
I am training a simple binary classification model using Hugging face models using pytorch. Bert PyTorch HuggingFace. `keyword_lst` has `20k` new token which I add to tokenizer. `I take mean of old tokenizer to update new tokenizers.` I am training this model for `4,00,000 data points.` Here is the code: ``` tok_orig = tr.RobertaTokenizer.from_pretrained("../models/unitary_roberta/tokenizer") tokenizer = tr.RobertaTokenizer.from_pretrained("../models/unitary_roberta/tokenizer") tokenizer.add_tokens(keyword_lst) # do tokenization train_encodings = tokenizer(train_texts, truncation=True, padding=True, max_length=512, return_tensors="pt") val_encodings = tokenizer(val_texts, truncation=True, padding=True, max_length=512, return_tensors="pt") # make datasets train_data = HateDataset(train_encodings, train_labels) val_data = HateDataset(val_encodings, val_labels) # load model model = tr.RobertaForSequenceClassification.from_pretrained("../models/unitary_roberta/model", num_labels=2) # add embedding params for new vocab words model.resize_token_embeddings(len(tokenizer)) weights = model.roberta.embeddings.word_embeddings.weight # initialize new embedding weights as mean of original tokens with torch.no_grad(): emb = [] for i in range(len(keyword_lst)): word = keyword_lst[i] # first & last tokens are just string start/end; don't keep tok_ids = tok_orig(word)["input_ids"][1:-1] tok_weights = weights[tok_ids] # average over tokens in original tokenization weight_mean = torch.mean(tok_weights, axis=0) emb.append(weight_mean) weights[-len(keyword_lst):,:] = torch.vstack(emb).requires_grad_() ``` The tokenizer keeps on running. :(
03-02-2022 09:24:05
03-02-2022 09:24:05
Hi @pratikchhapolika ! The best place to ask this question would be to use the [forum](https://discuss.huggingface.co/). We use issues for bug reports and feature requests.<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread. Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored.
transformers
15,884
closed
Fix tiny typo in docs
# What does this PR do? Just fixes a tiny typo in the docs. ## Before submitting - [x] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
03-02-2022 08:55:11
03-02-2022 08:55:11
The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_15884). All of your documentation changes will be reflected on that endpoint.
transformers
15,883
closed
Failed to find input with name: attention_mask in the model input def list
There's something wrong while using ONNX Runtime for the `TFDistilBertForSequenceClassification` model. It throws the following error while using the TensorFlow model, `Failed to find input with the name: attention_mask in the model input def list`. However, if I only pass `input_ids`, then it works. For Pytorch models, both the `input_ids` & `attention_mask` works. ### Steps to reproduce the error: - Finetune a `TFDistilBertForSequenceClassification` model. - Export the model using `transformers.convert_graph_to_onnx.convert()`. - Use ONNX Runtime to load the model and create an inference session. - Pass a `dict` of `input_ids` and `attention_mask` to `session.run()`. It does not accept the `attention_mask` during inference, this is what the error says. But for PyTorch models, it works completely fine. ### Here's the code snippet that I've used. ``` from transformers.convert_graph_to_onnx import convert from pathlib import Path import onnxruntime model_ckpt = "model/" onnx_model_path = Path("onnx/model.onnx") convert(framework="tf", model=model_ckpt, tokenizer=tokenizer, output=onnx_model_path, opset=12, pipeline_name="text-classification") sess = onnxruntime.InferenceSession("onnx/model.quant.onnx") inputs = {'attention_mask': [[1, 1, 1, 1, 1, 1, 1, 1, 1]], 'input_ids': [[101, 2129, 2052, 2017, 2360, 4875, 1999, 3059, 102]]} logits_onnx = onnx_model.run(None, inputs)[0] logits_onnx.shape ``` ### It throws the following error: ``` --------------------------------------------------------------------------- RuntimeError Traceback (most recent call last) <ipython-input-79-722e62f57d27> in <module> ----> 1 logits_onnx = onnx_model.run(None, inputs)[0] 2 logits_onnx.shape ~/.local/lib/python3.8/site-packages/onnxruntime/capi/onnxruntime_inference_collection.py in run(self, output_names, input_feed, run_options) 190 output_names = [output.name for output in self._outputs_meta] 191 try: --> 192 return self._sess.run(output_names, input_feed, run_options) 193 except C.EPFail as err: 194 if self._enable_fallback: RuntimeError: Failed to find input with name: attention_mask in the model input def list ``` Also, is it important to pass 'attention_mask' during the inference time? How does it impact the results?
03-02-2022 05:53:36
03-02-2022 05:53:36
cc @lewtun <|||||>Hey @imVParashar thanks for raising the issue. We recently had to fix the (deprecated) TensorFlow exporter in #15856, so if you install from `master` the following should work: ```python from pathlib import Path import numpy as np import onnxruntime from transformers import AutoTokenizer from transformers.convert_graph_to_onnx import convert model_ckpt = "distilbert-base-uncased-finetuned-sst-2-english" onnx_model_path = Path("onnx/model.onnx") tokenizer = AutoTokenizer.from_pretrained(model_ckpt) convert( framework="tf", model=model_ckpt, tokenizer=tokenizer, output=onnx_model_path, opset=12, pipeline_name="text-classification", ) sess = onnxruntime.InferenceSession("onnx/model.onnx") inputs = tokenizer("Running TensorFlow ONNX", return_tensors="tf") # ONNX Runtime expects NumPy arrays inputs = {k: v.numpy() for k, v in inputs.items()} logits_onnx = sess.run(None, inputs)[0] logits_onnx.shape ``` **Warning:** As far as I know, ONNX models that are exported from TensorFlow / Keras only work for a _fixed input shape_. Although my example above works, you'll find it throws an error if you change the sequence length / batch size. I need to dig deeper into this issue, but if it's a problem for you, I suggest loading your TensorFlow model in PyTorch via the `from_tf=True` argument in the `from_pretrained()` method. If you save and export that model, you should be able to provide dynamic shapes via the `transformers.onnx` exporter. Incidentally, the `convert_graph_to_onnx` module is deprecated and so the recommended way to do this is via the `transformers.onnx` module, e.g. ``` python -m transformers.onnx --model=distilbert-base-uncased-finetuned-sst-2-english --feature=sequence-classification onnx/ ``` <|||||>Thank you so much. The approach you mentioned, worked for me. :)
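To illustrate the recommended `transformers.onnx` path mentioned in the reply above, here is a minimal sketch of running the exported sequence-classification graph with ONNX Runtime. It assumes the CLI command quoted above has already produced `onnx/model.onnx`; the checkpoint name is the one used in the thread.

```python
# Minimal sketch: run the model exported with
#   python -m transformers.onnx --model=distilbert-base-uncased-finetuned-sst-2-english \
#       --feature=sequence-classification onnx/
# Assumes onnx/model.onnx exists (produced by the command above).
import onnxruntime
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("distilbert-base-uncased-finetuned-sst-2-english")
session = onnxruntime.InferenceSession("onnx/model.onnx")

# The exported graph takes both input_ids and attention_mask as int64 NumPy arrays.
inputs = tokenizer("Running ONNX Runtime with attention_mask", return_tensors="np")
logits = session.run(None, dict(inputs))[0]
print(logits.shape)  # (1, 2) for this binary sentiment model
```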
transformers
15,882
closed
Remove stash for now
Remove stash application to prevent failures on master.
03-02-2022 03:25:18
03-02-2022 03:25:18
The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_15882). All of your documentation changes will be reflected on that endpoint.
transformers
15,881
closed
fix deepspeed tests
Making deepspeed tests work again after the big tests structure reshuffle. @LysandreJik - we can't have `tests/deepspeed/__init__.py` as then it conflicts with the actual `deepspeed` package, please see: https://github.com/huggingface/transformers/pull/15725#issuecomment-1056027398 Additionally I don't think we should turn tests into packages, as it'd cause all kinds of problems. Instead, I think we should extract any package-worthy code that is re-used by multiple tests and put it under `src/transformers` like `testing_utils.py`. Please let me know if the failing doc builder is a false alarm. Thanks.
03-02-2022 01:21:00
03-02-2022 01:21:00
The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_15881). All of your documentation changes will be reflected on that endpoint.
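As a side note on why `tests/deepspeed/__init__.py` clashes with the installed package, here is a small, self-contained illustration (not code from this PR, and `mylib` is a made-up name standing in for `deepspeed`) of how a local package can shadow a library once its parent directory lands on `sys.path`:

```python
# Hypothetical illustration of package shadowing; "mylib" stands in for deepspeed.
import pathlib
import sys
import tempfile

tmp = pathlib.Path(tempfile.mkdtemp())
(tmp / "mylib").mkdir()
(tmp / "mylib" / "__init__.py").write_text("SHADOW = True\n")

# Test runners often prepend test directories to sys.path, which is what would
# make a tests/deepspeed/__init__.py win over the installed deepspeed package.
sys.path.insert(0, str(tmp))

import mylib  # resolves to the local shadow package, not any installed library

print(mylib.__file__)  # points inside the temporary directory
print(mylib.SHADOW)    # True
```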
transformers
15,880
closed
Broken link error when using the CLIPTokenizerFast class
## Environment info - `transformers` version: 4.17.0.dev0 - Platform: Linux-5.13.0-30-generic-x86_64-with-glibc2.10 - Python version: 3.8.12 - PyTorch version (GPU?): 1.11.0.dev20220116 (True) - Tensorflow version (GPU?): not installed (NA) - Flax version (CPU?/GPU?/TPU?): not installed (NA) - Jax version: not installed - JaxLib version: not installed - Using GPU in script?: <fill in> ### Who can help Models: @LysandreJik The model is clip-vit-base-patch16 Library: - Tokenizers: @SaulLu ## Information I am suddenly getting the error ```requests.exceptions.HTTPError: 502 Server Error: Bad Gateway for url: https://huggingface.co/openai/clip-vit-base-patch16/resolve/main/vocab.json``` when trying to use the CLIP tokenizer. ## To reproduce Steps to reproduce the behavior: 1. Just execute `tokenizer = CLIPTokenizerFast.from_pretrained("openai/clip-vit-base-patch16")` To get the error message: ``` File "/home/evolvingfungus/miniforge3/envs/tali/lib/python3.8/site-packages/transformers/tokenization_utils_base.py", line 1763, in from_pretrained raise err File "/home/evolvingfungus/miniforge3/envs/tali/lib/python3.8/site-packages/transformers/tokenization_utils_base.py", line 1724, in from_pretrained resolved_vocab_files[file_id] = cached_path( File "/home/evolvingfungus/miniforge3/envs/tali/lib/python3.8/site-packages/transformers/file_utils.py", line 1921, in cached_path output_path = get_from_cache( File "/home/evolvingfungus/miniforge3/envs/tali/lib/python3.8/site-packages/transformers/file_utils.py", line 2125, in get_from_cache _raise_for_status(r) File "/home/evolvingfungus/miniforge3/envs/tali/lib/python3.8/site-packages/transformers/file_utils.py", line 2052, in _raise_for_status request.raise_for_status() File "/home/evolvingfungus/miniforge3/envs/tali/lib/python3.8/site-packages/requests/models.py", line 960, in raise_for_status raise HTTPError(http_error_msg, response=self) requests.exceptions.HTTPError: 502 Server Error: Bad Gateway for url: https://huggingface.co/openai/clip-vit-base-patch16/resolve/main/vocab.json ``` ## Expected behavior Normally the snippet should download the CLIP text encoder's vocab and form a tokenizer.
03-01-2022 23:16:05
03-01-2022 23:16:05
@AntreasAntoniou , I'm so sorry you had this problem. Part of the Hub was briefly unavailable on March 1st around 11pm UTC. I just tested again on my side and your snippet works fine. :smile: <|||||>Yeah, it works fine now. Thanks for the response! :)
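Since the failure above was a transient Hub outage, one generic way to make scripts more robust to short-lived 5xx errors is a retry loop around `from_pretrained`. This is only a sketch of that pattern, not an official recommendation; the function name and retry parameters are made up for illustration:

```python
# Sketch: retry a Hub download a few times with exponential backoff.
import time

import requests
from transformers import CLIPTokenizerFast


def load_tokenizer_with_retry(name, retries=3, wait=5.0):
    for attempt in range(retries):
        try:
            return CLIPTokenizerFast.from_pretrained(name)
        except requests.exceptions.HTTPError as err:
            # Only retry on server-side errors such as the 502 in the report above.
            status = err.response.status_code if err.response is not None else None
            if status is None or status < 500 or attempt == retries - 1:
                raise
            time.sleep(wait * (2 ** attempt))


tokenizer = load_tokenizer_with_retry("openai/clip-vit-base-patch16")
```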
transformers
15,879
closed
[Bart] Fix implementation note doc
# What does this PR do? Fixes #15559 ## Before submitting - [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case). - [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section? - [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link to it if that's the case. - [ ] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/master/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/master/docs#writing-source-documentation). - [ ] Did you write any new necessary tests? ## Who can review? Anyone in the community is free to review the PR once the tests have passed. Feel free to tag members/contributors who may be interested in your PR.
03-01-2022 22:56:35
03-01-2022 22:56:35
The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_15879). All of your documentation changes will be reflected on that endpoint.<|||||>Please rebase on master to remove the dev documentation failure.
transformers
15,878
closed
label_attention_mask in Bart for conditional sequence generation and other seq2seq models.
I find it very counterintuitive that the Bart class for conditional generation doesn't take a `label_attention_mask` as an input, and that people are left to manually convert their padding tokens to values of -100. Many new users are likely getting bitten by this, and will be in the future; it's a pain to re-implement the same conversion every time when it could simply be part of the function; and it breaks symmetry with the inputs: why is a mask needed for the inputs but not for the labels? The behavior could be changed without breaking current code. `labels` could check whether its argument is a dict, behave as it normally does if it's not, and look for `input_ids` and `attention_mask` if it is; since passing a dict currently just fails, existing code would not be affected.
03-01-2022 20:39:19
03-01-2022 20:39:19
Hi @JulesGM ! There is already an argument for that. Every seq2seq model has the `decoder_attention_mask` argument with which you can specify the attention mask for the `decoder_input_ids` that are fed to the decoder.<|||||>Thanks. "For translation and summarization training, decoder_input_ids should be provided. If no decoder_input_ids is provided, the model will create this tensor by shifting the input_ids to the right for denoising pre-training following the paper." Is that how it works even if labels are provided? The labels and the (shifted) decoder inputs are the same 99.99% of the time. Do I still need to feed a label value, and does it need to have -100 masks, even if I feed decoder inputs and a decoder attention mask? Do I need to shift the decoder input values myself?<|||||>Hi @JulesGM ! The docstring is a bit misleading, it should not shift `input_ids` even for denoising pre-training. I will fix that. > Is that how it works even if labels are provided? No, if `labels` are passed then `decoder_input_ids` are created by shifting the `labels`. > The labels and the (shifted) decoder inputs are the same 99.99% of the time. They should be different, i.e. if `labels` look like this: `<s> this is a sent </s>`, then `decoder_input_ids` will look like this after shifting: `</s> <s> this is a sent`. > Do I still need to feed a label value, and does it need to have -100 masks, even if I feed decoder inputs and a decoder attention mask? For training, `labels` are required; `decoder_input_ids` are created from `labels` as I explained above. And no, adding -100 is not strictly necessary, passing `decoder_attention_mask` should also work. Also, for general questions like these, I would recommend using the [forum](https://discuss.huggingface.co/) as we use issues for bug reports and feature requests. Thanks!<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread. Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored.
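To make the explanation above concrete, here is a small sketch of the two options mentioned in the thread for BART-style training: masking padding in `labels` with -100, or passing `decoder_attention_mask` and letting the model build `decoder_input_ids` by shifting the labels. The `shift_tokens_right` helper shown at the end is the one used internally by the BART implementation; everything else is illustrative only.

```python
# Sketch of labels vs. decoder_attention_mask handling for seq2seq training.
from transformers import BartForConditionalGeneration, BartTokenizer
from transformers.models.bart.modeling_bart import shift_tokens_right

tokenizer = BartTokenizer.from_pretrained("facebook/bart-base")
model = BartForConditionalGeneration.from_pretrained("facebook/bart-base")

source = tokenizer(["summarize me please"], return_tensors="pt", padding="max_length", max_length=16)
target = tokenizer(["a summary"], return_tensors="pt", padding="max_length", max_length=8)

# Option 1: replace padding in the labels by -100 so those positions are ignored by the loss.
labels = target.input_ids.clone()
labels[labels == tokenizer.pad_token_id] = -100
out1 = model(input_ids=source.input_ids, attention_mask=source.attention_mask, labels=labels)

# Option 2: keep the labels as-is and pass the decoder attention mask instead.
out2 = model(
    input_ids=source.input_ids,
    attention_mask=source.attention_mask,
    labels=target.input_ids,
    decoder_attention_mask=target.attention_mask,
)

# In both cases the decoder inputs are created internally by shifting the labels right:
decoder_input_ids = shift_tokens_right(
    target.input_ids, model.config.pad_token_id, model.config.decoder_start_token_id
)
```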
transformers
15,877
closed
Updates in Trainer to support new features in SM Model Parallel library
# What does this PR do? - Calls state dict on rdp_rank==0 to fix a hang in SMP when using tensor parallelism. - Reorders the wrapping of the optimizer to happen after the wrapping of the model, which is a requirement for the newer version of SMP. - Uses the right ranks for data loading when using TP, based on whether we are using prescaled batch or not. Prescaled batch is a new construct in SMP. SMP does tensor parallelism within a DP group, which means it shuffles around the samples read by all ranks so that the rank which owns the shard of a layer/parameter can see the relevant portion of all samples across the data parallel group. To reduce the communication before layer execution, and to support cases where we can't use a large batch size (i.e. all ranks can't get their own sample), we recommend using prescaled batch, where all ranks in the same rdp group read the same batch of data. Hence this usage affects the number of examples seen. From a data loading perspective, the number of ranks which read different data changes between the prescaled and non-prescaled cases. World size and process index in transformers seem to correspond to the degree of data parallelism in the job, hence this change. More details here https://sagemaker.readthedocs.io/en/stable/api/training/smd_model_parallel_general.html#prescaled-batch Merges https://github.com/huggingface/transformers/pull/15804, https://github.com/huggingface/transformers/pull/15796/, https://github.com/huggingface/transformers/pull/15811/ ## Before submitting - [x] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case). - [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section? - [x] Was discussed in a Slack channel - [x] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/master/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/master/docs#writing-source-documentation). - [ ] Did you write any new necessary tests? ## Who can review? Anyone in the community is free to review the PR once the tests have passed. Feel free to tag members/contributors who may be interested in your PR.
03-01-2022 19:53:28
03-01-2022 19:53:28
The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_15877). All of your documentation changes will be reflected on that endpoint.<|||||>Lot of TF tests seem to fail, and there are a couple of strange errors. Could you take a look? <|||||>Thanks for your PR! Please rebase on master to remove the dev documentation failure.<|||||>Done<|||||>Thanks again for your PR!
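The ordering change described in this PR can be pictured with the following sketch. It assumes the SageMaker model parallelism API (`smdistributed.modelparallel.torch`, only available inside SMP-enabled SageMaker jobs) and is an illustration of the wrapping order, not the actual Trainer code:

```python
# Sketch only: wrap the model before the optimizer, as required by newer SMP versions.
import torch
import smdistributed.modelparallel.torch as smp  # assumption: running inside an SMP job

smp.init()

model = torch.nn.Linear(16, 16)
model = smp.DistributedModel(model)              # 1) wrap the model first

optimizer = torch.optim.AdamW(model.parameters(), lr=5e-5)
optimizer = smp.DistributedOptimizer(optimizer)  # 2) then wrap the optimizer

# With tensor parallelism, gather the state dict only on rdp_rank() == 0 to avoid
# the hang mentioned above (rdp_rank is the reduced-data-parallel rank in SMP).
if smp.rdp_rank() == 0:
    state_dict = model.state_dict()
```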
transformers
15,876
closed
Passing in num_labels to ConvNextForImageClassification.from_pretrained raises size mismatch error
## Environment info - `transformers` version: 4.17.0.dev0 - Platform: Linux-5.4.144+-x86_64-with-Ubuntu-18.04-bionic - Python version: 3.7.12 - PyTorch version (GPU?): 1.10.0+cu111 (False) - Tensorflow version (GPU?): 2.8.0 (False) - Flax version (CPU?/GPU?/TPU?): not installed (NA) - Jax version: not installed - JaxLib version: not installed - Using GPU in script?: No - Using distributed or parallel set-up in script?: No ### Who can help @NielsRogge, @sgugger ## Information Model I am using (Bert, XLNet ...): `ConvNext` The problem arises when using: * [ ] the official example scripts: (give details below) * [x] my own modified scripts: (give details below) The tasks I am working on is: * [ ] an official GLUE/SQUaD task: (give the name) * [x] my own task or dataset: (give details below) ## To reproduce Steps to reproduce the behavior: 1. Initiating a `ConvNextForImageClassification.from_pretrained` call fails when passing in `num_labels`. The same steps work successfully for `ViTForImageClassification`. 2. When passing in a custom number of labels, a size mismatch error is raised. <!-- If you have code snippets, error messages, stack traces please provide them here as well. Important! Use code tags to correctly format your code.
See https://help.github.com/en/github/writing-on-github/creating-and-highlighting-code-blocks#syntax-highlighting Do not use screenshots, as they are hard to read and (more importantly) don't allow others to copy-and-paste your code.--> ```python # define labels for import image classification problem id2label = {0:"doggie", 1: "cat"} label2id = {v:k for k,v in id2label.items()} # This works fine from transformers import ViTForImageClassification, ViTFeatureExtractor feature_extractor = ViTFeatureExtractor.from_pretrained("google/vit-base-patch16-224-in21k") model = ViTForImageClassification.from_pretrained('google/vit-base-patch16-224-in21k', num_labels=len(id2label), id2label=id2label, label2id=label2id) # this failes model = ConvNextForImageClassification.from_pretrained("facebook/convnext-tiny-224", num_labels=len(id2label), id2label=id2label, label2id=label2id) ``` This raises a `RunTimeError`: ```python --------------------------------------------------------------------------- RuntimeError Traceback (most recent call last) <ipython-input-22-9991914f5786> in <module>() 2 num_labels=len(id2label), 3 id2label=id2label, ----> 4 label2id=label2id) 5 1 frames /usr/local/lib/python3.7/dist-packages/transformers/modeling_utils.py in from_pretrained(cls, pretrained_model_name_or_path, *model_args, **kwargs) 1535 pretrained_model_name_or_path, 1536 ignore_mismatched_sizes=ignore_mismatched_sizes, -> 1537 _fast_init=_fast_init, 1538 ) 1539 /usr/local/lib/python3.7/dist-packages/transformers/modeling_utils.py in _load_state_dict_into_model(cls, model, state_dict, pretrained_model_name_or_path, ignore_mismatched_sizes, _fast_init) 1688 if len(error_msgs) > 0: 1689 error_msg = "\n\t".join(error_msgs) -> 1690 raise RuntimeError(f"Error(s) in loading state_dict for {model.__class__.__name__}:\n\t{error_msg}") 1691 1692 if len(unexpected_keys) > 0: RuntimeError: Error(s) in loading state_dict for ConvNextForImageClassification: size mismatch for classifier.weight: copying a param with shape torch.Size([1000, 768]) from checkpoint, the shape in current model is torch.Size([2, 768]). size mismatch for classifier.bias: copying a param with shape torch.Size([1000]) from checkpoint, the shape in current model is torch.Size([2]). ``` [Colab notebook](https://colab.research.google.com/drive/1jIX49dOIPKOO7z8XH86DnbZhJn89KDRA?usp=sharing) reproducing the above error. ## Expected behavior I expected `**Model**ForImageClassification` to behave fairly consistently across model types i.e. I can swap out `VIT` for `ConvNext` and the behaviour will be similar. I haven't dug into the source of this carefully but if it's not just me misunderstanding the docs I'm happy to try and help track down this source of this. As a side note I'm very happy to see the growth of vision models in transformers!
03-01-2022 19:51:36
03-01-2022 19:51:36
This is because the checkpoint you are using has 1,000 labels. To ignore the pretrained weights of the classifier, you have to pass along `ignore_mismatched_sizes=True`. You don't need this for other models if the pretrained model does not have a classifier head.<|||||>> This is because the checkpoint you are using has 1,000 labels. To ignore the pretrained weights of the classifier, you have to pass along `ignore_mismatched_sizes=True`. > > You don't need this for other models if the pretrained model does not have a classifier head. Thanks for clarifying @sgugger 🤗 Will close this now since it's not a bug (sorry should have checked this one on the forums)
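For reference, the fix suggested above looks like the following sketch, which keeps the two-label setup from the report:

```python
# Sketch: replace the 1,000-class head of the pretrained checkpoint with a 2-class head.
from transformers import ConvNextForImageClassification

id2label = {0: "doggie", 1: "cat"}
label2id = {v: k for k, v in id2label.items()}

model = ConvNextForImageClassification.from_pretrained(
    "facebook/convnext-tiny-224",
    num_labels=len(id2label),
    id2label=id2label,
    label2id=label2id,
    ignore_mismatched_sizes=True,  # drop the pretrained classifier weights that do not match
)
```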
transformers
15,875
closed
Add ONNX support for Blenderbot and BlenderbotSmall
# What does this PR do? This PR adds support for exporting the `blenderbot` and `blenderbot-small` model types to ONNX. For the `blenderbot` case, I found that I had to adapt the `_generate_dummy_examples()` functions that we implemented for `bart` to exclude the number of encoder layers when we create the `past_key_values` inputs. Without this change, the `past_key_values` shapes of the exported and reference models disagree. I am not 100% sure why this isn't needed for `blenderbot-small` (and other models copied from `bart`), but that's a separate discussion :) Closes #15757
03-01-2022 17:15:26
03-01-2022 17:15:26
The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_15875). All of your documentation changes will be reflected on that endpoint.<|||||>How's the work going for this PR?<|||||>> How's the work going for this PR? The export for `BlenderbotSmall` works well, but `Blenderbot` has some quirks in the way we generate dummy inputs that I'm still trying to iron out. I'll be tackling it again later this week<|||||>_The documentation is not available anymore as the PR was closed or merged._<|||||>@Darth-Carrotpie feel free to test the `blenderbot` export in this PR. For now you'll need to roll your own decoding (e.g. beam search), but we're currently working making this simple in our `optimum` [library](https://github.com/huggingface/optimum)<|||||>Gently pinging @LysandreJik or @sgugger for a final review 🤗 <|||||>> @Darth-Carrotpie feel free to test the `blenderbot` export in this PR. For now you'll need to roll your own decoding (e.g. beam search), but we're currently working making this simple in our `optimum` [library](https://github.com/huggingface/optimum) Hey, it's first time for me trying to test a PR and obviously I'm stuck on an error. <details> <summary> import tr2 as transformers </summary> ```--------------------------------------------------------------------------- ImportError Traceback (most recent call last) /tmp/ipykernel_33/486683697.py in <module> 1 #del transformers ----> 2 import tr2 as transformers /kaggle/working/tr2/__init__.py in <module> 28 29 # Check the dependencies satisfy the minimal versions required. ---> 30 from . import dependency_versions_check 31 from .utils import ( 32 _LazyModule, /kaggle/working/tr2/dependency_versions_check.py in <module> 15 16 from .dependency_versions_table import deps ---> 17 from .utils.versions import require_version, require_version_core 18 19 /kaggle/working/tr2/utils/__init__.py in <module> 42 to_py_obj, 43 ) ---> 44 from .hub import ( 45 CLOUDFRONT_DISTRIB_PREFIX, 46 DISABLE_TELEMETRY, /kaggle/working/tr2/utils/hub.py in <module> 39 from huggingface_hub import HfFolder, Repository, create_repo, list_repo_files, whoami 40 from requests.exceptions import HTTPError ---> 41 from transformers.utils.logging import tqdm 42 43 from . import __version__, logging ImportError: cannot import name 'tqdm' from 'transformers.utils.logging' (/opt/conda/lib/python3.7/site-packages/transformers/utils/logging.py) ``` </details> If you could point me to the right direction on what's the best practice to test 🤗 transformers, it'd be much appreciated. Attempt can be seen in [Kaggle notebook](https://www.kaggle.com/code/danieliusv/blenderbot1-0-test-and-export). ## Import transformers PR to test blenderbot export 1. clone transformers package from https://github.com/huggingface/transformers 2. check out to pr which adds blenderbot support for onnx https://github.com/huggingface/transformers/pull/15875 via commands: - git fetch origin pull/15875/head - git checkout -b pullrequest FETCH_HEAD 3. zip it and upload as dataset 4. import... 5. see error....<|||||>Hey @Darth-Carrotpie I think the simplest way to test a branch is to install as follows: ``` pip install git+https://github.com/huggingface/transformers.git@add-blenderbot-onnx#egg=transformers[onnx] ``` In any case, all this is getting merged to `main` so you can also just run: ``` pip install git+https://github.com/huggingface/transformers.git#egg=transformers[onnx] ```
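Once this PR is in the installed version, the export itself reduces to the standard `transformers.onnx` command that the thread mentions. The sketch below drives that CLI from Python and inspects the resulting graph; the checkpoint is an example and, as noted above, decoding (e.g. beam search) still has to be rolled by hand until `optimum` covers it.

```python
# Sketch: export a BlenderbotSmall checkpoint with the transformers.onnx CLI and load it.
import subprocess

import onnxruntime

subprocess.run(
    [
        "python", "-m", "transformers.onnx",
        "--model=facebook/blenderbot_small-90M",  # example checkpoint
        "--feature=seq2seq-lm",
        "onnx/",
    ],
    check=True,
)

session = onnxruntime.InferenceSession("onnx/model.onnx")
print([inp.name for inp in session.get_inputs()])  # inspect the expected inputs
```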
transformers
15,874
closed
Bump up doc node version to 16
# What does this PR do? Fixed the doc-build issue we discussed by bumping node up from 14 to 16. ## Before submitting - [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case). - [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section? - [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link to it if that's the case. - [ ] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/master/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/master/docs#writing-source-documentation). - [ ] Did you write any new necessary tests? ## Who can review? Anyone in the community is free to review the PR once the tests have passed. Feel free to tag members/contributors who may be interested in your PR.
03-01-2022 17:02:58
03-01-2022 17:02:58
@LysandreJik this should fix the CI issue in https://github.com/huggingface/transformers/pull/15710<|||||>The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_15874). All of your documentation changes will be reflected on that endpoint.
transformers
15,873
closed
Freeze FlaxWav2Vec2 Feature Encoder
This PR enables the FlaxWav2Vec2 Feature Encoder to be _frozen_. Gradient flow through the Feature Encoder during forward- or reverse-mode automatic differentiation is blocked with a single call to `jax.lax.stop_gradient`, which holds the Feature Encoder parameters fixed. Freezing the Feature Encoder is required for the majority of FlaxWav2Vec2 fine-tuning set-ups, making this PR an important addition to the FlaxWav2Vec2Model.
03-01-2022 16:54:48
03-01-2022 16:54:48
The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_15873). All of your documentation changes will be reflected on that endpoint.<|||||>As discussed offline - the test didn't check what it was supposed to check. Bad review from my part. We should open a new PR here to fix it cc @patil-suraj (sorry should have put you as a reviewer as well)
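The mechanism described above boils down to wrapping the sub-module's output in `jax.lax.stop_gradient`. Here is a tiny, self-contained sketch (not the Wav2Vec2 code itself) showing how that blocks gradients to the "frozen" parameters:

```python
# Minimal sketch of freezing one stage of a model with jax.lax.stop_gradient.
import jax
import jax.numpy as jnp

params = {"encoder_w": jnp.ones((3, 3)), "head_w": jnp.ones((3, 1))}


def forward(params, x, freeze_encoder=False):
    features = x @ params["encoder_w"]
    if freeze_encoder:
        # Gradients do not flow back through this point, so encoder_w stays fixed.
        features = jax.lax.stop_gradient(features)
    return jnp.sum(features @ params["head_w"])


x = jnp.ones((2, 3))
grads = jax.grad(forward)(params, x, freeze_encoder=True)
print(grads["encoder_w"])  # all zeros: the encoder receives no gradient
print(grads["head_w"])     # non-zero: the head is still trained
```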
transformers
15,872
closed
Scatter should run on CUDA
Updates the docker images so that torch scatter runs on CUDA.
03-01-2022 16:40:29
03-01-2022 16:40:29
The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_15872). All of your documentation changes will be reflected on that endpoint.
transformers
15,871
closed
How to go about utilizing MBART for conditional generation with beam search in ONNXRuntime with TensorRT/CUDA
Hi HuggingFace team, Last December I looked into exporting `MBartForConditionalGeneration` from `"facebook/mbart-large-50-many-to-one-mmt"` for the purpose of multilingual machine translation. Originally I followed the approach as described in this [BART + beam search example](https://github.com/huggingface/transformers/tree/master/examples/research_projects/onnx/summarization), extending the example to support MBART and overriding the max 2GB model size. While this approach worked for `CPUExecutionProvider` in ORT sessions, it did not actually improve runtime, nor did it work for `TensorRT` or `CUDA` execution providers (out of cuda memory and dynamic shape inference failure). Today I saw [this issue ](https://github.com/huggingface/transformers/issues/15716) and exported `MBartForConditionalGeneration` with `python -m transformers.onnx --model=facebook/mbart-large-50-many-to-one-mmt --feature seq2seq-lm-with-past --atol=5e-5 onnx/`. While this worked for exporting to ONNX (passing all validation checks), I couldn't run an actual ORT session due to input dimensionality mismatch (past keys encoder/decoder missing for `seq2seq-lm-with-past`, `decoder_inputs_ids` and `decoder_attention_mask` missing for `seq2seq-lm`). I could use some clarification as to whether this is the implementation I am looking for (does the latter ONNX export support `.generate()` through beam search or should I refocus my attempts at the BART + beam search modification). In case the newer command line ONNX export implementation is what I require, which feature head would be the correct head for the ConditionalGeneration many-to-one-mmt MBART head (`seq2seq-lm` or `seq2seq-lm-with-past`) and where can I find the additional inputs that I need for the model to run `.generate()` in an ORT session? The BART beam search implementation I mentioned earlier required `input_ids`, `attention_mask`, `num_beams`, `max_length` and `decoder_start_token_id`. The required inputs for the newer conversion are a bit more confusing to me. I assume @lewtun would be the person to ask for help here but I appreciate any pointers! ## Environment info - `transformers` version: 4.16.2 - Platform: Linux-5.4.0-96-generic-x86_64-with-glibc2.10 - Python version: 3.8.5 - PyTorch version (GPU?): 1.10.1+cu113 (True) - Tensorflow version (GPU?): not installed (NA) - Flax version (CPU?/GPU?/TPU?): not installed (NA) - Jax version: not installed - JaxLib version: not installed - Using GPU in script?: yes - Using distributed or parallel set-up in script?: no
03-01-2022 16:31:08
03-01-2022 16:31:08
I was also adjusting the BART export to work for mBART in a translation setting. There you need to be able to set the `decoder_start_token_id` dynamically. I noticed that the ONNX conversion example you link to (thank you!) is not complete. It is missing the `dynamic_axes` configuration for the `attention_mask`. It needs to be adjusted to: ``` torch.onnx.export( bart_script_model, ( inputs["input_ids"], inputs["attention_mask"], num_beams, max_length, decoder_start_token_id, ), onnx_file_path, opset_version=14, input_names=[ "input_ids", "attention_mask", "num_beams", "max_length", "decoder_start_token_id", ], output_names=["output_ids"], dynamic_axes={ "input_ids": {0: "batch", 1: "seq"}, "attention_mask": {0: "batch", 1: "seq"}, "output_ids": {0: "batch", 1: "seq_out"}, }, example_outputs=output_ids, ) ``` Hope this helps!<|||||>Thanks for jumping in with the tip @HaukurPall! Did this approach allow you to not only export the mBART model to ONNX but also run it (with increased speed) on CUDA/TensorRT execution providers? Moreover, I assume you made a custom version of `convert.py`. where you overrode the `torch.onnx.export()` call with the snippet you posted above, is this correct? Thanks in advance!<|||||>Thanks for opening an issue @JeroendenBoef! Pinging @lewtun, @mfuntowicz, should this issue be moved to optimum?<|||||>Thanks for the ping! Yes, I think it would make sense to move this issue to the `optimum` repo :) @JeroendenBoef we currently have a PR in `optimum` that will enable simple inference / text-generation for ONNX Runtime: https://github.com/huggingface/optimum/pull/113 Once that is completed, I think it should address most of the points raised in this issue!<|||||>Hey @JeroendenBoef. No, I was not able to get the inference working efficiently on the CUDA execution providers. I even attempted to use the IOBindings (as suggested by the ONNX team) but was not successful. I have put this endeavour aside until there is better support for autoregressive inference. If you still want to try this, there is a different approach to exporting the models presented in https://github.com/Ki6an/fastT5/. This is for T5, but some of the code has been adjusted to work for mBART, see issue: https://github.com/Ki6an/fastT5/issues/7. I did not try to running that model on CUDA as it would require some work getting the IOBindings correct/efficient.<|||||>Thanks for the reply and the pointer to the new PR on `optimum` @lewtun. I will close this issue for now and keep an eye out for the new developments regarding seq2seq models on `optimum`. I feel like [this](https://github.com/huggingface/optimum/issues/55) issue already covers the core of this issue, save for maybe the problems with actually achieving improved inference speed with the exported model, so unless this is preferred for documentation, I would not open a new issue on `optimum`. Thanks for the detailed response @HaukurPall, this saves me some headaches and time :). I was already afraid there would not be an improved performance but now I have confirmation that I should also postpone my efforts on this until there is a better approach in place for ORT seq2seq models.
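For completeness, running the exported beam-search graph with ONNX Runtime then amounts to feeding the five inputs listed above as int64 NumPy values. This is a sketch under the assumption that the export with the adjusted `dynamic_axes` succeeded; `onnx/mbart_beam_search.onnx` is a hypothetical file name, and the exact scalar shapes may need tweaking depending on how the model was traced.

```python
# Sketch: run a BART/mBART beam-search graph exported as in the snippet above.
import numpy as np
import onnxruntime
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("facebook/mbart-large-50-many-to-one-mmt")
session = onnxruntime.InferenceSession("onnx/mbart_beam_search.onnx")  # hypothetical path

encoded = tokenizer("Guten Morgen", return_tensors="np")
feeds = {
    "input_ids": encoded["input_ids"].astype(np.int64),
    "attention_mask": encoded["attention_mask"].astype(np.int64),
    "num_beams": np.array(4, dtype=np.int64),
    "max_length": np.array(64, dtype=np.int64),
    "decoder_start_token_id": np.array(tokenizer.lang_code_to_id["en_XX"], dtype=np.int64),
}
output_ids = session.run(None, feeds)[0]
print(tokenizer.batch_decode(output_ids, skip_special_tokens=True))
```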
transformers
15,870
closed
TF: Update QA example
# What does this PR do? Another example update, this time for question answering. Removes the dummy loss (we can use the internal loss) and the custom dataset preparation. @Rocketknight1 in this example, the evaluation dataset has some custom handling. Do you think it is worth diving into?
03-01-2022 16:26:13
03-01-2022 16:26:13
The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_15870). All of your documentation changes will be reflected on that endpoint.<|||||>@gante If you mean the bit that's in the PT example too, then we probably have to keep that! (And you don't need to change it for the refactor)
transformers
15,869
closed
Mismatch between beam search score transition probabilities and beam sequence scores
## Environment info <!-- You can run the command `transformers-cli env` and copy-and-paste its output below. Don't forget to fill out the missing fields in that output! --> - `transformers` version: 4.16.2 - Platform: Linux-5.13.0-30-generic-x86_64-with-glibc2.31 - Python version: 3.9.7 - PyTorch version (GPU?): 1.10.0 (True) - Tensorflow version (GPU?): not installed (NA) - Flax version (CPU?/GPU?/TPU?): not installed (NA) - Jax version: not installed - JaxLib version: not installed - Using GPU in script?: no - Using distributed or parallel set-up in script?: no ### Who can help I'm tagging @patrickvonplaten because he has recently worked on this #14654 ## Information The model is a __T5__. I'm using its conditional generation properties and testing beam search decoding outputs: The problem arises when using: * [ ] the official example scripts: (give details below) * [x] my own modified scripts: (give details below) The tasks I am working on is: * [ ] an official GLUE/SQUaD task: (give the name) * [x] my own task or dataset: (give details below) Purpose: Create two simple dummy events and test whether the transition probability scores are the same as the probability sequence scores. The goal is to understand what the sequence scores represent (are they unnormalized by length? normalized by length?). From the PR #14654 discussion, I had the impression that it would be enough to sum the transition probabilities but these do not seem to match. Would you please help me understand? ## To reproduce Steps to reproduce the behavior: 1. Create two simple dummy input examples. 2. Encode them using a t5 model (`model`) and generate the output scores using beam search. 3. Obtain transition scores through the use of `model.compute_transition_beam_scores` 4. Sum the results of 3 and compare with the ones returned with the `BeamSearchEncoderDecoderOutput.sequence_scores`. ```python import transformers model_name = "t5-small" encoding_hyperparameters = { "padding": "max_length", "max_length": 512, "truncation": True, "add_special_tokens": True, "return_attention_mask": True, "return_tensors": "pt", } tokenizer = transformers.T5TokenizerFast.from_pretrained(model_name) model = transformers.T5ForConditionalGeneration.from_pretrained(model_name) EXAMPLE = ["question: How are you? \n context: I had a long day, she said. I am so exhausted.", "question: What did the fox do? 
\n context: The fox jumped over the fence into a very green lawn."] BEAM_SEARCH_KWARGS = { "num_beams": 4, "do_sample": False, "num_return_sequences": 1, } # Encode inputs inputs_ids = tokenizer(EXAMPLE, **encoding_hyperparameters) # Generate using beam search beamsearch_results = model.generate( input_ids=inputs_ids["input_ids"], attention_mask=inputs_ids["attention_mask"], max_length=10, return_dict_in_generate=True, output_scores=True, # the id of the token to force as the last generated token when max_length is reached forced_eos_token_id=tokenizer.eos_token_id, **BEAM_SEARCH_KWARGS ) trs_bs = model.compute_transition_beam_scores( sequences=beamsearch_results.sequences, scores=beamsearch_results.scores, beam_indices=beamsearch_results.beam_indices ) print("Summ:", torch.sum(trs_bs, dim=1), "Expected:", beamsearch_results.sequences_scores) print("Sum/length:", torch.sum(trs_bs, dim=1)/beamsearch_results.beam_indices.shape[-1], "Expected:", beamsearch_results.sequences_scores) # output # Sum: tensor([-1.5411, -0.3851]) Expected: tensor([-0.1712, -0.0428]) # Sum/length: tensor([-0.1712, -0.0428]) Expected: tensor([-0.1712, -0.0428]) ``` From the example above I deduced that in order to obtain the same scores as those computed in the `sequences_scores` it would suffice to divide by the length of the sentences. In this case, it seems to work nicely because both sequences have the same length: ```python # output of beamsearch_results.sequences tensor([[ 0, 27, 141, 3, 9, 307, 239, 6, 255, 1], [ 0, 3, 16287, 147, 8, 8227, 139, 3, 9, 1]]) ``` So I tried a different example, that would cause the beamsearch_results.sequences to be different: ```python # Example 2 # The only change to the script above is the example, where we modify the first sequence in the batch EXAMPLE = ["question: Is this yes or no question? \n context: It is yes", "question: What did the fox do? \n context: The fox jumped over the fence into a very green lawn."] # ... print("Sum:", torch.sum(trs_bs, dim=1), "Expected:", beamsearch_results.sequences_scores) print("Sum/length:", torch.sum(trs_bs, dim=1)/beamsearch_results.beam_indices.shape[-1], "Expected:", beamsearch_results.sequences_scores) print("Sum/rel_length:", torch.sum(trs_bs, dim=1) / torch.sum(trs_bs != 0, dim=1), "Expected:", beamsearch_results.sequences_scores) # outputs # Sum: tensor([-0.0770, -0.3851]) Expected: tensor([-0.0385, -0.0428]) # Sum/length: tensor([-0.0086, -0.0428]) Expected: tensor([-0.0385, -0.0428]) # Sum/rel_length: tensor([-0.0385, -0.0481]) Expected: tensor([-0.0385, -0.0428]) ``` The output of `beamsearch_results.sequences` for the above example is: ```python tensor([[ 0, 4273, 1, 0, 0, 0, 0, 0, 0, 0], [ 0, 3, 16287, 147, 8, 8227, 139, 3, 9, 1]]) ``` The difference from `Sum/length` to `Sum/rel_length` is that in the former I divide by the maximum length of the generated sentences, whereas the previous is divided by the number of non-zero transition probabilities. We can see that for the latter case, (i.e., when dividing by the relative length) only the first example score is matched to the original `beamsearch_results.sequences_scores`). Will you please help me better understand the computation of these probabilities and their connection with the sequence_scores? In particular, are the individual scores returned by the `compute_transition_beam_scores` length-normalized ? Do these individual scores aim to represent the joint probability or are they representing the individual probabilities? 
Are we supposed to consider the initial padding token when computing the scores? Thanks in advance for your time!
03-01-2022 16:20:06
03-01-2022 16:20:06
This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread. Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored.<|||||>Hey @PastelBelem8, Sorry to answer so late. We've added some tests to make sure the transition probabilities work correctly. Could you take a look at this answer: https://github.com/huggingface/transformers/issues/16413#issuecomment-1088907369 and see whether it applies to your use case?<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread. Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored.<|||||>I find the code above from @PastelBelem8 works if I set the length_penalty to 0. However, if I change the prompt so that the model produces a completion that has fewer tokens than the max_length, then the sequences_scores and the output of compute_transition_beam_scores are very different again. @patrickvonplaten, any thoughts on what might be going on there? Thanks in advance for your help! See the code in this colab: https://colab.research.google.com/drive/1KAc_Mk2k8qiiqKgXfAcggJWfaBEvvzox#scrollTo=6in7zwm7Dqxf<|||||>Thanks for the script @hacobe, I can reproduce the problem. It looks like an error in Transformers. Investigating now<|||||>Hey @hacobe, The problem is actually much more difficult than I thought. It'll need a bigger refactor of the beam scorer. I keep you updated!<|||||>Got it. Thanks for looking into it!<|||||>@PastelBelem8 @hacobe @patrickvonplaten I am confues in some things(I also work on T5 model in some Vision-Language task): 1. Do you find that the first token id in every beam of beamsearch_results.sequences always equal to zero?And I find that length of every beam of beamsearch_results.sequences seems alway equal to length of scores tuple plus one. 2. I noticed these questions because I am recently woking on RINFORCE algorithm using CIDEr reward.And I think most people want to get the transition probability may be also want to use RINFORCE algorithm, but I think we don't need the probability of every token in every time step, what I need it's the probability of the sequence,which can be formalize to P(W) = P(Wt|Wt-1,Wt-2,...,W1)P(Wt-2|Wt-3...Wt-1)P(W1);And I think the last indice of return scores tuple could represent the probablity of the sequence.I don't know whether I miss something or not?And I don't konw can I calculate the gradient of P(W) (which attaind from last indice of return scores )using standard backpropagation algorithm? 3. And right now I am working on an old version transformer which doesn't support the newest function 'compute_transition_beam_scores', are ther any method to avoid upgrade the whole transformer, but I can also use the function 'compute_transition_beam_scores'? I appreciate it very much if anyone can give me any advice!!!<|||||>@superhero-7 the first token id in every beam search is always 0 because the model introduces a pad token for every possible continuation of the string you give as input to the generate method.
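For readers landing here later: in more recent transformers releases the relationship is that `sequences_scores` equals the sum of the per-step transition scores divided by `output_length ** length_penalty` (so with `length_penalty=0` the plain sum matches, as observed in the linked notebook). The sketch below uses the newer `compute_transition_scores` API rather than the 4.16-era `compute_transition_beam_scores`, since the beam scorer of that era still had the inconsistency discussed above; treat it as a sketch against newer versions.

```python
# Sketch against recent transformers releases (compute_transition_scores API).
import numpy as np
from transformers import AutoModelForSeq2SeqLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("t5-small")
model = AutoModelForSeq2SeqLM.from_pretrained("t5-small")

inputs = tokenizer(["translate English to German: how are you?"], return_tensors="pt")
outputs = model.generate(
    **inputs, max_new_tokens=10, num_beams=4, return_dict_in_generate=True, output_scores=True
)

transition_scores = model.compute_transition_scores(
    outputs.sequences, outputs.scores, outputs.beam_indices, normalize_logits=False
)

# Tokens that were actually generated have log-probabilities < 0.
output_length = np.sum(transition_scores.numpy() < 0, axis=1)
length_penalty = model.generation_config.length_penalty
reconstructed = transition_scores.sum(axis=1) / (output_length ** length_penalty)
print(reconstructed, outputs.sequences_scores)
```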
transformers
15,868
closed
TF: Update multiple choice example
# What does this PR do? Updates the TF example for multiple choice. This example update is not as clean as the previous ones -- despite the deletion of a custom dataset function, it required the addition of a custom data collator ([just like in its PT counterpart](https://github.com/huggingface/transformers/blob/master/examples/pytorch/multiple-choice/run_swag.py#L150)).
03-01-2022 15:15:14
03-01-2022 15:15:14
The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_15868). All of your documentation changes will be reflected on that endpoint.
transformers
15,867
closed
IndexError: index out of range in self
## Environment info
- `transformers` version: 4.15.0
- Platform: Linux-5.10.90+-x86_64-with-debian-bullseye-sid
- Python version: 3.7.12
- PyTorch version (GPU?): 1.9.1+cpu (False)
- Tensorflow version (GPU?): 2.6.2 (False)
- Flax version (CPU?/GPU?/TPU?): 0.4.0 (cpu)
- Jax version: 0.2.28
- JaxLib version: 0.1.76
- Using GPU in script?: No
- Using distributed or parallel set-up in script?: No

### Who can help
@patrickvonplaten, @LysandreJik

## Information
Model I am using: `EncoderDecoderModel` with `bert-base-uncased` for both encoder and decoder. I am trying to reimplement this awesome [article](https://huggingface.co/blog/warm-starting-encoder-decoder) by @patrickvonplaten. I get an `IndexError: index out of range in self` when I try to train it on a Kaggle CPU.

The problem arises when using:
* [x] the official example scripts: (give details below)
* [ ] my own modified scripts: (give details below)

The tasks I am working on is:
* [ ] an official GLUE/SQUaD task: (give the name)
* [x] my own task or dataset: (give details below)

I'm working on an abstractive text summarization task using this [dataset](https://www.kaggle.com/gowrishankarp/newspaper-text-summarization-cnn-dailymail), which is a Kaggle version of the CNN/DailyMail dataset.

## To reproduce

```python
import os
import numpy as np
import pandas as pd

import torch
import torch.nn.functional as F
from torch.utils.data import Dataset

from transformers import BertTokenizerFast
from transformers import EncoderDecoderModel
from transformers import Seq2SeqTrainer, Seq2SeqTrainingArguments

from torch.utils.data import TensorDataset, DataLoader, RandomSampler, SequentialSampler

os.environ["TOKENIZERS_PARALLELISM"] = "false"
os.environ["WANDB_DISABLED"] = "true"
os.environ["CUDA_LAUNCH_BLOCKING"] = "1"


def load_tokenizer(save_path='./bert.tokenizer'):
    if not os.path.exists(save_path):
        tokenizer = BertTokenizerFast.from_pretrained('bert-base-uncased')
        tokenizer.save_pretrained(save_path)
        return tokenizer
    return BertTokenizerFast.from_pretrained(save_path)


tokenizer = load_tokenizer()


def light_load_csv(path, cols, nrows=None, chunksize=1000):
    df = pd.read_csv(path, nrows=nrows, usecols=cols, chunksize=chunksize)
    xdf = pd.DataFrame(columns=cols)
    for chunk in df:
        xdf = pd.concat([xdf, chunk])
    return xdf


class AbsSummary(Dataset):
    def __init__(self, data_path, xcol, ycol, tokenizer, xmax=512, ymax=128, nrows=None):
        self.df = light_load_csv(data_path, [xcol, ycol], nrows=nrows)
        self.xcol = xcol
        self.ycol = ycol
        self.xmax = xmax
        self.ymax = ymax
        self.tokenizer = tokenizer

    def encode_str(self, s, lim):
        return self.tokenizer.encode_plus(
            s, max_length=lim, truncation=True, padding='max_length', return_tensors='pt'
        )

    def __len__(self):
        return self.df.shape[0]

    def __getitem__(self, idx):
        x = self.encode_str(self.df.loc[idx, self.xcol], self.xmax)
        y = self.encode_str(self.df.loc[idx, self.ycol], self.ymax)

        inp = torch.tensor([
            [torch.tensor(-100) if token == self.tokenizer.pad_token_id else token for token in label]
            for label in y['input_ids']
        ])
        y['input_ids'] = inp

        return {
            'input_ids': x['input_ids'][0],
            'attention_mask': x['attention_mask'][0],
            'decoder_input_ids': y['input_ids'][0],
            'decoder_attention_mask': y['attention_mask'][0]
        }


device = torch.device('cuda' if torch.cuda.is_available() else 'cpu')

train_file = '/kaggle/input/newspaper-text-summarization-cnn-dailymail/cnn_dailymail/train.csv'
valid_file = '/kaggle/input/newspaper-text-summarization-cnn-dailymail/cnn_dailymail/validation.csv'
xcol, ycol = 'article', 'highlights'

train_data = AbsSummary(train_file, xcol, ycol, tokenizer, nrows=1000)
valid_data = AbsSummary(valid_file, xcol, ycol, tokenizer, nrows=100)

model = EncoderDecoderModel.from_encoder_decoder_pretrained(
    'bert-base-uncased', 'bert-base-uncased'
)

model.config.decoder_start_token_id = tokenizer.cls_token_id
model.config.eos_token_id = tokenizer.sep_token_id
model.config.pad_token_id = tokenizer.pad_token_id
model.config.vocab_size = model.encoder.config.vocab_size

model.config.max_length = 142
model.config.min_length = 56
model.config.no_repeat_ngram_size = 3
model.config.early_stopping = True
model.config.length_penalty = 2.0
model.config.num_beams = 4

model.to(device)

training_args = Seq2SeqTrainingArguments(
    predict_with_generate=True,
    evaluation_strategy="steps",
    per_device_train_batch_size=8,
    per_device_eval_batch_size=8,
    fp16=False,
    output_dir="./",
    logging_steps=1,
    save_steps=1,
    eval_steps=1,
    # logging_steps=1000,
    # save_steps=500,
    # eval_steps=7500,
    # warmup_steps=2000,
    # save_total_limit=3,
)

trainer = Seq2SeqTrainer(
    model=model,
    tokenizer=tokenizer,
    args=training_args,
    train_dataset=train_data,
    eval_dataset=valid_data,
)

trainer.train()
```

Steps to reproduce the behavior:
1. Load the [dataset](https://www.kaggle.com/gowrishankarp/newspaper-text-summarization-cnn-dailymail)
2. Select Accelerators as None
3. Copy/paste the code above and run

## Expected behavior
I thought the model was supposed to start training but this happens instead:

```
---------------------------------------------------------------------------
IndexError                                Traceback (most recent call last)
/tmp/ipykernel_33/857740956.py in <module>
    112     eval_dataset=valid_data,
    113 )
--> 114 trainer.train()

/opt/conda/lib/python3.7/site-packages/transformers/trainer.py in train(self, resume_from_checkpoint, trial, ignore_keys_for_eval, **kwargs)
   1330                         tr_loss_step = self.training_step(model, inputs)
   1331                 else:
-> 1332                     tr_loss_step = self.training_step(model, inputs)
   1333
   1334                 if (

/opt/conda/lib/python3.7/site-packages/transformers/trainer.py in training_step(self, model, inputs)
   1889
   1890         with self.autocast_smart_context_manager():
-> 1891             loss = self.compute_loss(model, inputs)
   1892
   1893         if self.args.n_gpu > 1:

/opt/conda/lib/python3.7/site-packages/transformers/trainer.py in compute_loss(self, model, inputs, return_outputs)
   1921         else:
   1922             labels = None
-> 1923         outputs = model(**inputs)
   1924         # Save past state if it exists
   1925         # TODO: this needs to be fixed and made cleaner later.

/opt/conda/lib/python3.7/site-packages/torch/nn/modules/module.py in _call_impl(self, *input, **kwargs)
   1049         if not (self._backward_hooks or self._forward_hooks or self._forward_pre_hooks or _global_backward_hooks
   1050                 or _global_forward_hooks or _global_forward_pre_hooks):
-> 1051             return forward_call(*input, **kwargs)
   1052         # Do not call functions when jit is used
   1053         full_backward_hooks, non_full_backward_hooks = [], []

/opt/conda/lib/python3.7/site-packages/transformers/models/encoder_decoder/modeling_encoder_decoder.py in forward(self, input_ids, attention_mask, decoder_input_ids, decoder_attention_mask, encoder_outputs, past_key_values, inputs_embeds, decoder_inputs_embeds, labels, use_cache, output_attentions, output_hidden_states, return_dict, **kwargs)
    516             past_key_values=past_key_values,
    517             return_dict=return_dict,
--> 518             **kwargs_decoder,
    519         )
    520

/opt/conda/lib/python3.7/site-packages/torch/nn/modules/module.py in _call_impl(self, *input, **kwargs)
   1049         if not (self._backward_hooks or self._forward_hooks or self._forward_pre_hooks or _global_backward_hooks
   1050                 or _global_forward_hooks or _global_forward_pre_hooks):
-> 1051             return forward_call(*input, **kwargs)
   1052         # Do not call functions when jit is used
   1053         full_backward_hooks, non_full_backward_hooks = [], []

/opt/conda/lib/python3.7/site-packages/transformers/models/bert/modeling_bert.py in forward(self, input_ids, attention_mask, token_type_ids, position_ids, head_mask, inputs_embeds, encoder_hidden_states, encoder_attention_mask, labels, past_key_values, use_cache, output_attentions, output_hidden_states, return_dict)
   1237             output_attentions=output_attentions,
   1238             output_hidden_states=output_hidden_states,
-> 1239             return_dict=return_dict,
   1240         )
   1241

/opt/conda/lib/python3.7/site-packages/torch/nn/modules/module.py in _call_impl(self, *input, **kwargs)
   1049         if not (self._backward_hooks or self._forward_hooks or self._forward_pre_hooks or _global_backward_hooks
   1050                 or _global_forward_hooks or _global_forward_pre_hooks):
-> 1051             return forward_call(*input, **kwargs)
   1052         # Do not call functions when jit is used
   1053         full_backward_hooks, non_full_backward_hooks = [], []

/opt/conda/lib/python3.7/site-packages/transformers/models/bert/modeling_bert.py in forward(self, input_ids, attention_mask, token_type_ids, position_ids, head_mask, inputs_embeds, encoder_hidden_states, encoder_attention_mask, past_key_values, use_cache, output_attentions, output_hidden_states, return_dict)
    992             token_type_ids=token_type_ids,
    993             inputs_embeds=inputs_embeds,
--> 994             past_key_values_length=past_key_values_length,
    995         )
    996         encoder_outputs = self.encoder(

/opt/conda/lib/python3.7/site-packages/torch/nn/modules/module.py in _call_impl(self, *input, **kwargs)
   1049         if not (self._backward_hooks or self._forward_hooks or self._forward_pre_hooks or _global_backward_hooks
   1050                 or _global_forward_hooks or _global_forward_pre_hooks):
-> 1051             return forward_call(*input, **kwargs)
   1052         # Do not call functions when jit is used
   1053         full_backward_hooks, non_full_backward_hooks = [], []

/opt/conda/lib/python3.7/site-packages/transformers/models/bert/modeling_bert.py in forward(self, input_ids, token_type_ids, position_ids, inputs_embeds, past_key_values_length)
    212
    213         if inputs_embeds is None:
--> 214             inputs_embeds = self.word_embeddings(input_ids)
    215         token_type_embeddings = self.token_type_embeddings(token_type_ids)
    216

/opt/conda/lib/python3.7/site-packages/torch/nn/modules/module.py in _call_impl(self, *input, **kwargs)
   1049         if not (self._backward_hooks or self._forward_hooks or self._forward_pre_hooks or _global_backward_hooks
   1050                 or _global_forward_hooks or _global_forward_pre_hooks):
-> 1051             return forward_call(*input, **kwargs)
   1052         # Do not call functions when jit is used
   1053         full_backward_hooks, non_full_backward_hooks = [], []

/opt/conda/lib/python3.7/site-packages/torch/nn/modules/sparse.py in forward(self, input)
    158         return F.embedding(
    159             input, self.weight, self.padding_idx, self.max_norm,
--> 160             self.norm_type, self.scale_grad_by_freq, self.sparse)
    161
    162     def extra_repr(self) -> str:

/opt/conda/lib/python3.7/site-packages/torch/nn/functional.py in embedding(input, weight, padding_idx, max_norm, norm_type, scale_grad_by_freq, sparse)
   2041         # remove once script supports set_grad_enabled
   2042         _no_grad_embedding_renorm_(weight, input, max_norm, norm_type)
-> 2043     return torch.embedding(weight, input, padding_idx, scale_grad_by_freq, sparse)
   2044
   2045

IndexError: index out of range in self
```
03-01-2022 14:50:33
03-01-2022 14:50:33
Hey @asceznyk, It looks like a word id is passed to the model which doesn't exist. One possible issue could be that you are passing `-100` values to the decoder as `decoder_input_ids`. Note that only the labels should be masked out with `-100` not the `decoder_input_ids`.<|||||>Hey @patrickvonplaten thanks for the early reply, I'll make the necessary changes and let you know if there are any further issues<|||||>Thanks @patrickvonplaten it worked! I modifed the code to labels and it trains. But the issue is that now it reaches low training and validation loss, but when I test it on a training example it gives weird outputs ``` loading configuration file ./checkpoint-300/config.json Model config EncoderDecoderConfig { "architectures": [ "EncoderDecoderModel" ], "decoder": { "_name_or_path": "bert-base-uncased", "add_cross_attention": true, "architectures": [ "BertForMaskedLM" ], "attention_probs_dropout_prob": 0.1, "bad_words_ids": null, "bos_token_id": null, "chunk_size_feed_forward": 0, "classifier_dropout": null, "cross_attention_hidden_size": null, "decoder_start_token_id": null, "diversity_penalty": 0.0, "do_sample": false, "early_stopping": false, "encoder_no_repeat_ngram_size": 0, "eos_token_id": null, "finetuning_task": null, "forced_bos_token_id": null, "forced_eos_token_id": null, "gradient_checkpointing": false, "hidden_act": "gelu", "hidden_dropout_prob": 0.1, "hidden_size": 768, "id2label": { "0": "LABEL_0", "1": "LABEL_1" }, "initializer_range": 0.02, "intermediate_size": 3072, "is_decoder": true, "is_encoder_decoder": false, "label2id": { "LABEL_0": 0, "LABEL_1": 1 }, "layer_norm_eps": 1e-12, "length_penalty": 1.0, "max_length": 20, "max_position_embeddings": 512, "min_length": 0, "model_type": "bert", "no_repeat_ngram_size": 0, "num_attention_heads": 12, "num_beam_groups": 1, "num_beams": 1, "num_hidden_layers": 12, "num_return_sequences": 1, "output_attentions": false, "output_hidden_states": false, "output_scores": false, "pad_token_id": 0, "position_embedding_type": "absolute", "prefix": null, "problem_type": null, "pruned_heads": {}, "remove_invalid_values": false, "repetition_penalty": 1.0, "return_dict": true, "return_dict_in_generate": false, "sep_token_id": null, "task_specific_params": null, "temperature": 1.0, "tie_encoder_decoder": false, "tie_word_embeddings": true, "tokenizer_class": null, "top_k": 50, "top_p": 1.0, "torch_dtype": null, "torchscript": false, "transformers_version": "4.15.0", "type_vocab_size": 2, "use_bfloat16": false, "use_cache": true, "vocab_size": 30522 }, "decoder_start_token_id": 101, "early_stopping": true, "encoder": { "_name_or_path": "bert-base-uncased", "add_cross_attention": false, "architectures": [ "BertForMaskedLM" ], "attention_probs_dropout_prob": 0.1, "bad_words_ids": null, "bos_token_id": null, "chunk_size_feed_forward": 0, "classifier_dropout": null, "cross_attention_hidden_size": null, "decoder_start_token_id": null, "diversity_penalty": 0.0, "do_sample": false, "early_stopping": false, "encoder_no_repeat_ngram_size": 0, "eos_token_id": null, "finetuning_task": null, "forced_bos_token_id": null, "forced_eos_token_id": null, "gradient_checkpointing": false, "hidden_act": "gelu", "hidden_dropout_prob": 0.1, "hidden_size": 768, "id2label": { "0": "LABEL_0", "1": "LABEL_1" }, "initializer_range": 0.02, "intermediate_size": 3072, "is_decoder": false, "is_encoder_decoder": false, "label2id": { "LABEL_0": 0, "LABEL_1": 1 }, "layer_norm_eps": 1e-12, "length_penalty": 1.0, "max_length": 20, "max_position_embeddings": 512, 
"min_length": 0, "model_type": "bert", "no_repeat_ngram_size": 0, "num_attention_heads": 12, "num_beam_groups": 1, "num_beams": 1, "num_hidden_layers": 12, "num_return_sequences": 1, "output_attentions": false, "output_hidden_states": false, "output_scores": false, "pad_token_id": 0, "position_embedding_type": "absolute", "prefix": null, "problem_type": null, "pruned_heads": {}, "remove_invalid_values": false, "repetition_penalty": 1.0, "return_dict": true, "return_dict_in_generate": false, "sep_token_id": null, "task_specific_params": null, "temperature": 1.0, "tie_encoder_decoder": false, "tie_word_embeddings": true, "tokenizer_class": null, "top_k": 50, "top_p": 1.0, "torch_dtype": null, "torchscript": false, "transformers_version": "4.15.0", "type_vocab_size": 2, "use_bfloat16": false, "use_cache": true, "vocab_size": 30522 }, "eos_token_id": 102, "is_encoder_decoder": true, "length_penalty": 2.0, "max_length": 142, "min_length": 56, "model_type": "encoder-decoder", "no_repeat_ngram_size": 3, "num_beams": 4, "pad_token_id": 0, "torch_dtype": "float32", "transformers_version": null, "vocab_size": 30522 } loading weights file ./checkpoint-300/pytorch_model.bin All model checkpoint weights were used when initializing EncoderDecoderModel. All the weights of EncoderDecoderModel were initialized from the model checkpoint at ./checkpoint-300. If your task is similar to the task the model of the checkpoint was trained on, you can already use EncoderDecoderModel for predictions without further training. text: By . Associated Press . PUBLISHED: . 14:11 EST, 25 October 2013 . | . UPDATED: . 15:36 EST, 25 October 2013 . The bishop of the Fargo Catholic Diocese in North Dakota has exposed potentially hundreds of church members in Fargo, Grand Forks and Jamestown to the hepatitis A virus in late September and early October. The state Health Department has issued an advisory of exposure for anyone who attended five churches and took communion. Bishop John Folda (pictured) of the Fargo Catholic Diocese in North Dakota has exposed potentially hundreds of church members in Fargo, Grand Forks and Jamestown to the hepatitis A . State Immunization Program Manager Molly Howell says the risk is low, but officials feel it's important to alert people to the possible exposure. The diocese announced on Monday that Bishop John Folda is taking time off after being diagnosed with hepatitis A. The diocese says he contracted the infection through contaminated food while attending a conference for newly ordained bishops in Italy last month. Symptoms of hepatitis A include fever, tiredness, loss of appetite, nausea and abdominal discomfort. Fargo Catholic Diocese in North Dakota (pictured) is where the bishop is located . ==================== summary: ...???!!! " " " ( ( ( [ [......... ] ] ] ) ) ) ] ] [ [ [ | | | ] ]...... [ [ ] ]'''] ] } } } ] ] | | [ [ < < < > > > < < | | > > ] ] ` ` ` | | ` ` < < 〈 〈 〉 〉 〉 〈 〈 〈 | | < < ` ` 〈 〈 「 「 「 〈 〈 < < 「 「 」 」 」 「 「 « « « 〈 〈 」 」 〉 〉 」 」 } } 〉 〉 | | 〈 〈 immortals immortals immortals immortal immortal immortal mortal mortal mortal immortal immortal ``` I think it must be a bug from my side, maybe the dataset is not working as expected<|||||>On the earlier comment, I am training on the first 1000 examples of the Dataset
transformers
15,866
closed
Remove attention from padding
# What does this PR do?
This PR removes all attention from padding tokens, where before there was a minimal amount. This promotes reproducibility: one should get the same results every time, for any task, regardless of the batch size (which is currently not the case).

## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/master/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/master/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?

## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag members/contributors who may be interested in your PR.

@LysandreJik
03-01-2022 12:37:56
03-01-2022 12:37:56
The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_15866). All of your documentation changes will be reflected on that endpoint.<|||||>Thank you for your PR! See the issue here: https://github.com/huggingface/transformers/issues/14859 Ideally we would want all models to behave similarly, I believe @ydshieh wanted to work on it.<|||||>Thank you @nicola-decao for this PR! These large negative values for the attention mask are one of the things we plan to clean up in our next steps. In particular:
- we will need the same value for any specific model across frameworks (PyTorch, TensorFlow, and Flax)
- we also prefer to use the same value among all models

Also, using `-float("inf")` is theoretically good, but it seems there are some disadvantages in practice. (I need to investigate this part a bit more to be sure.) Because of the above considerations, I prefer not to merge this PR, and our team will work on this soon. I would like to hear from @patrickvonplaten about this decision.<|||||>@nicola-decao As explained in my previous comment, I am going to close this PR without merging. Thank you again for your effort and for opening this PR, very appreciated ❤️. Don't hesitate if you have more comments.<|||||>@ydshieh @patrickvonplaten yeah I understand! Although we should try to address this at some point, since it is hard to debug or inspect models when the results are not the same depending on padding. It makes a tiny difference (i.e. `exp(-1e9)` is very small, so it is OK), but people still need to use `torch.isclose` instead of equality since the outputs are not exactly the same.<|||||>@nicola-decao Do you mean you want to have `equality` instead of using `torch.isclose`?<|||||>> @nicola-decao Do you mean you want to have `equality` instead of using `torch.isclose`?

@ydshieh Correct! Right now, if you use different padding lengths, the results might differ for the same input, which is not ideal for deterministic / reproducible results.
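To illustrate the `torch.isclose` point from the discussion above, here is a small, hedged check one can run (the exact numbers depend on hardware and library versions): comparing hidden states for the same sentence with and without extra padding typically shows tiny numerical differences, so exact equality is too strict.

```python
import torch
from transformers import BertModel, BertTokenizerFast

tokenizer = BertTokenizerFast.from_pretrained("bert-base-uncased")
model = BertModel.from_pretrained("bert-base-uncased").eval()

plain = tokenizer("hello world", return_tensors="pt")
padded = tokenizer("hello world", padding="max_length", max_length=16, return_tensors="pt")

with torch.no_grad():
    out_plain = model(**plain).last_hidden_state
    out_padded = model(**padded).last_hidden_state[:, : out_plain.shape[1]]

# The padded positions are masked with a large negative value, not -inf,
# and the different tensor shapes change the reduction order, so the two
# results are usually only *approximately* equal.
print(torch.equal(out_plain, out_padded))                 # often False
print(torch.allclose(out_plain, out_padded, atol=1e-5))   # True in practice
```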
transformers
15,865
closed
use python 3.7 for flax self-push tests
# What does this PR do?
Use Python 3.7 for flax self-push tests. This PR also removes the `kenlm` installation since it was causing some issues and is actually not required for the flax tests.
03-01-2022 11:10:15
03-01-2022 11:10:15
The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_15865). All of your documentation changes will be reflected on that endpoint.<|||||>Thanks for working on it! Is the `apt` installation necessary when you already specified the Python version you wanted in the yaml?<|||||>> Is the apt installation necessary when you already specified the Python version you wanted in the yaml?

Aah, good catch! No, it's not. It was only necessary for `kenlm`, which isn't required anymore.
transformers
15,864
closed
DebertaForSequenceClassification loss computation
# What does this PR do? This PR changes the way the loss is being computed for `DebertaForSequenceClassification` and `DebertaV2ForSequenceClassification` to make it match what is being done for other bert-like models. The current way of doing it seems overly complicated and I do not see why the behaviour should be different from other bert-like models, but I might be missing something here... @LysandreJik @sgugger @patrickvonplaten @BigBird01 what do you think?
03-01-2022 10:32:25
03-01-2022 10:32:25
The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_15864). All of your documentation changes will be reflected on that endpoint.<|||||>No this is a breaking change. We didn't touch the way the loss was computed when adding support for `problem_type` for all models to avoid such a breaking change and we should leave it as is, at least until the next major version IMO. At the very least, if we do break things there should be a flag to enable back the older behavior.<|||||>Additionally to what @sgugger said, DeBERTa was implemented to respect the paper's implementation which gave SOTA results on sequence classification tasks. Not all transformer models work the best with the same heads, so I personally think it's perfectly fine to have models have different heads to perform the same task, if those heads have been identified as the best for the task at hand. What is very important is that it still has the same output as other sequence classification models, but that is the case with the `SequenceClassifierOutput`.<|||||>Ok thank you for the explanation, closing this!
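For context on the `problem_type` behaviour mentioned above, this is a minimal sketch of the loss switch used by most `*ForSequenceClassification` heads (it is not the DeBERTa-specific head discussed in this PR, and the helper name is hypothetical):

```python
import torch
from torch import nn
from types import SimpleNamespace

def sequence_classification_loss(logits, labels, config):
    # Mirrors the usual problem_type logic of BERT-like classification heads.
    if config.problem_type is None:
        if config.num_labels == 1:
            config.problem_type = "regression"
        elif labels.dtype in (torch.long, torch.int):
            config.problem_type = "single_label_classification"
        else:
            config.problem_type = "multi_label_classification"

    if config.problem_type == "regression":
        return nn.MSELoss()(logits.squeeze(), labels.squeeze())
    if config.problem_type == "single_label_classification":
        return nn.CrossEntropyLoss()(logits.view(-1, config.num_labels), labels.view(-1))
    return nn.BCEWithLogitsLoss()(logits, labels)

# Toy usage with a stand-in config object
cfg = SimpleNamespace(problem_type=None, num_labels=3)
logits = torch.randn(4, 3)
labels = torch.randint(0, 3, (4,))
print(sequence_classification_loss(logits, labels, cfg))
```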
transformers
15,863
closed
Adding timestamps for CTC with LM in ASR pipeline.
# What does this PR do?
- Adds `word` to `return_timestamps` for `ctc_with_lm` models.
- Moves the `offset` -> `timestamp` logic so it is used for both code paths.

Fixes # (issue)

## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/master/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/master/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?

## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag members/contributors who may be interested in your PR.
03-01-2022 09:34:02
03-01-2022 09:34:02
The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_15863). All of your documentation changes will be reflected on that endpoint.<|||||>Perfect.<|||||>Need to rebase the PR to remove the dev documentation failure. Think then we're good to go
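A hedged usage sketch of the feature this PR adds: word-level timestamps from the ASR pipeline for a CTC + language model checkpoint. The checkpoint name and audio path below are only examples, and an ffmpeg install is assumed for decoding the file.

```python
from transformers import pipeline

# Any Wav2Vec2 checkpoint shipped with a pyctcdecode language model should work here.
asr = pipeline(
    "automatic-speech-recognition",
    model="patrickvonplaten/wav2vec2-base-100h-with-lm",
)

output = asr("path/to/audio.flac", return_timestamps="word")
print(output["text"])
for chunk in output["chunks"]:
    # Each chunk carries a word and its (start, end) time in seconds.
    print(chunk["text"], chunk["timestamp"])
```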
transformers
15,862
closed
Adding license file to some of the BERT models
Hi, I am interested in using “[distilbert-base-german-cased](https://huggingface.co/distilbert-base-german-cased)”, “[bert-base-uncased](https://huggingface.co/bert-base-uncased)”, and “[distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased)” models for industrial applications. To be able to download these models from Hugging Face and use them in applications, the stated “apache-2.0” license in the readme files (https://huggingface.co/distilbert-base-german-cased/blob/main/README.md) is unfortunately not sufficient and an explicit license text file (i.e., [text for the Apache-2.0 license](https://opensource.org/licenses/Apache-2.0)) with copyright information is required to avoid ambiguity. I would like to know if you could add a license file with copyright information to these models on Hugging Face. Thank you.
03-01-2022 07:08:20
03-01-2022 07:08:20
Hi @VictorSanh, I would like to know if a license file can be added to the above mentioned models. Since you have proposed the Distilbert models, I was just wondering if you could maybe provide a license file for the Ditsilbert models? Thank you.<|||||>Interesting question regarding lincense of models on the hub cc @osanseviero @julien-c<|||||>Yes we can add the full-text version of the licenses to those 3 models, I see no objection to doing it. Curious though, could you expand on > unfortunately not sufficient and an explicit license text file [...] with copyright information is required to avoid ambiguity. Is this a legal requirement from your company? Would you have more details to share? Thanks! cc @annatrdj <|||||>Thanks a lot @julien-c for the reply. As stated in the [Apache 2.0 license §4](https://opensource.org/licenses/Apache-2.0), recipients must get a copy of the license and therefore, we require a copy of the license for the models. I appreciate your effort and would be grateful if you could add the license files to these models. Thank you.<|||||>> Yes we can add the full-text version of the licenses to those 3 models, I see no objection to doing it. sgtm!<|||||>Hi @julien-c, I would like to know if there is any update on the license files? Thank you.<|||||>Just a quick heads up that we are working on a Hub feature which is going to simplify this. Please stay tuned 🙏 <|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread. Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored.<|||||>Hello @julien-c, is there any update on the this topic? Thanks.<|||||>this is still in progress but hoping to have an update on this in the coming month<|||||>OK we now have a Hub Pull request feature which is a good use case for this. Please check the PR to add the full-text of the license to [`distilbert-base-uncased`](https://huggingface.co/distilbert-base-uncased/discussions/1) you can suggest PRs to add the full-text to other models as well, but FYI we'll also work on a UX feature to easily get to the full text of the license from the license tag of any repo + a capability to download it programmatically. It should comply with the text and the spirit of the licenses. In the meantime feel free to suggest other PRs if necessary<|||||>Thanks for the update. I appreciate your effort. This will be very useful. Then I make a PR for the other model that I mentioned in the post earlier.
transformers
15,861
closed
Unable to run Speech2Text example in documentation
## Environment info
- `transformers` version: 4.16.2
- Platform: Linux-5.4.0-94-generic-x86_64-with-glibc2.10
- Python version: 3.8.8
- PyTorch version (GPU?): 1.10.2+cu102 (True)
- Tensorflow version (GPU?): not installed (NA)
- Flax version (CPU?/GPU?/TPU?): not installed (NA)
- Jax version: not installed
- JaxLib version: not installed
- Using GPU in script?: no
- Using distributed or parallel set-up in script?: no

### Who can help
@patrickvonplaten, @anton-l

## Information
Model I am using: Speech2Text

The problem arises when using:
* [x] the official example scripts: (give details below)
* [ ] my own modified scripts: (give details below)

The tasks I am working on is:
* [ ] an official GLUE/SQUaD task: (give the name)
* [ ] my own task or dataset: (give details below)

## To reproduce
Steps to reproduce the behavior:

Run the example in the documentation [here](https://huggingface.co/docs/transformers/model_doc/speech_to_text#transformers.Speech2TextModel)

```python
from transformers import Speech2TextTokenizer, Speech2TextModel
import torch

tokenizer = Speech2TextTokenizer.from_pretrained("facebook/s2t-small-librispeech-asr")
model = Speech2TextModel.from_pretrained("facebook/s2t-small-librispeech-asr")

inputs = tokenizer("Hello, my dog is cute", return_tensors="pt")
outputs = model(**inputs)

last_hidden_states = outputs.last_hidden_state
```

Spits out error:

```
Traceback (most recent call last):
  File "test.py", line 8, in <module>
    outputs = model(**inputs)
  File "/opt/conda/lib/python3.8/site-packages/torch/nn/modules/module.py", line 1102, in _call_impl
    return forward_call(*input, **kwargs)
TypeError: forward() got an unexpected keyword argument 'input_ids'
```

Another [official example](https://huggingface.co/facebook/s2t-small-librispeech-asr) also fails:

```python
import torch
from transformers import Speech2TextProcessor, Speech2TextForConditionalGeneration
from datasets import load_dataset
import soundfile as sf

model = Speech2TextForConditionalGeneration.from_pretrained("facebook/s2t-small-librispeech-asr")
processor = Speech2TextProcessor.from_pretrained("facebook/s2t-small-librispeech-asr")

def map_to_array(batch):
    speech, _ = sf.read(batch["file"])
    batch["speech"] = speech
    return batch

ds = load_dataset(
    "patrickvonplaten/librispeech_asr_dummy", "clean", split="validation"
)
ds = ds.map(map_to_array)

input_features = processor(
    ds["speech"][0], sampling_rate=16_000, return_tensors="pt"
).input_features  # Batch size 1
generated_ids = model.generate(input_ids=input_features)

transcription = processor.batch_decode(generated_ids)
```

spits out:

```
Traceback (most recent call last):
  File "t5.py", line 26, in <module>
    generated_ids = model.generate(input_ids=input_features)
  File "/opt/conda/lib/python3.8/site-packages/torch/autograd/grad_mode.py", line 28, in decorate_context
    return func(*args, **kwargs)
  File "/opt/conda/lib/python3.8/site-packages/transformers/generation_utils.py", line 1088, in generate
    model_kwargs = self._prepare_encoder_decoder_kwargs_for_generation(
  File "/opt/conda/lib/python3.8/site-packages/transformers/generation_utils.py", line 507, in _prepare_encoder_decoder_kwargs_for_generation
    model_kwargs["encoder_outputs"]: ModelOutput = encoder(**encoder_kwargs)
  File "/opt/conda/lib/python3.8/site-packages/torch/nn/modules/module.py", line 1102, in _call_impl
    return forward_call(*input, **kwargs)
TypeError: forward() got an unexpected keyword argument 'input_ids'
```

## Expected behavior
No error is thrown
03-01-2022 06:34:49
03-01-2022 06:34:49
@anton-l - we should put this on our TODO for the doc tests<|||||>`input_ids` should be replaced with `inputs` - I'll try to update the docs asap<|||||>Should be fixed here: #15911 and will be part of the new release.
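Continuing from the second snippet in the issue, the fix mentioned above is simply to stop passing the speech features as `input_ids`. A sketch, with the rest of the script unchanged:

```python
# `model`, `processor` and `input_features` come from the snippet in the issue above.
generated_ids = model.generate(inputs=input_features)  # or simply model.generate(input_features)
transcription = processor.batch_decode(generated_ids, skip_special_tokens=True)
print(transcription)
```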
transformers
15,860
closed
Add PT + TF automatic builds
Adds docker containers for both PyTorch and TensorFlow on which the pipelines will run. Solves the dependencies errors for PyTorch and TensorFlow pipelines.
02-28-2022 21:29:04
02-28-2022 21:29:04
The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_15860). All of your documentation changes will be reflected on that endpoint.
transformers
15,859
closed
set python version to 3.7 for flax tests
# What does this PR do? set python version to 3.7 for self-push flax tests.
02-28-2022 20:15:01
02-28-2022 20:15:01
The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_15859). All of your documentation changes will be reflected on that endpoint.
transformers
15,858
closed
set python version to 3.7 for flax tests
# What does this PR do? Set python version to 3.7 for self-push flax tests.
02-28-2022 20:08:09
02-28-2022 20:08:09
The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_15858). All of your documentation changes will be reflected on that endpoint.
transformers
15,857
closed
Supporting multiple evaluation datasets in `Trainer` and `Seq2seqTrainer`
# 🚀 Feature request
Support for evaluating on multiple validation datasets when using the `Trainer` class.

## Motivation
This is a common use case in research. Imagine that you train your model on some data, and you have validation data coming from two distinct distributions. You want to compute two sets of metrics: one for the validation dataset with the same distribution as the training data, and one for the validation dataset that comes from the other, known distribution.

## Your contribution
- Happy to submit an example with my own code (assuming the research makes sense) so that others see how this can be achieved in practice.
- Could update relevant posts on the huggingface forum so that other users requiring this feature can see how it can be done.

## Things I have tried
Inspired by this post [here](https://discuss.huggingface.co/t/evaluating-your-model-on-more-than-one-dataset/1544) and @sgugger's solution, I set out to see if implementing the `on_evaluate` callback is possible, but I can't figure out how to get the other validation datasets to it - the callback can only access the trainer init arguments/state, but none of these objects can be passed additional data loaders. How can this be approached instead?

My current solution is not clean, but may work:

1. Use `setattr` to add an attribute to the trainer after init, call it `additional_eval_datasets`
2. Override the `_maybe_log_save_evaluate` method as follows:
    - Call the `Trainer` superclass method first to do what the trainer would normally do
    - loop through the additional datasets, calling `Trainer.evaluate` for each dataset with appropriate inputs

This is a bit hacky, as I should not be overriding a private method but override `evaluate` instead. However, the implementation would be more concise this way. Please give feedback on any undesirable side effects that this may lead to - I did read through the `Trainer` source code and did not spot any pitfalls! Of course, this approach can be adapted in order to support evaluation on multiple datasets natively in the `Trainer`.
02-28-2022 18:50:19
02-28-2022 18:50:19
We could support several evaluation datasets inside the `Trainer` natively. I think the easiest would be to:
- accept a list of datasets for the `eval_dataset` at init
- have a new boolean `TrainingArguments` named `multiple_eval_dataset` that would tell the `Trainer` that it has several evaluation datasets (since it won't be able to tell the difference between one or several datasets: it could very well receive a list for a regular evaluation dataset).

Would you like to work on a PR for this?<|||||>Hi @sgugger, Yes, that would be a nice solution. Happy to try and prototype this in my own time (weekends, overwhelmed PhD student here) and see if we can put together something for one of the future releases. Just to make sure I understand the design principle: first we make the above changes and then change the logic inside `_maybe_log_save_evaluate` to deal with multiple datasets? This requires some assumptions (e.g., report the metrics to hp search from evaluation on the dataset at index `0`, or some user-specified index, or an average over specified indices). These would also have to be defined as additional arguments to `TrainingArguments` so that we know what to report to the hyperparameter search. The alternative is that we just make the changes you mentioned, which would allow the user to write the `on_evaluation` callback - an example of how that callback looks could be in the docs. Which is the preferred option?<|||||>The idea would be to loop over the datasets when calling the `evaluate` function in `_maybe_log_save_evaluate`, all natively inside the `Trainer`. We can also have another `TrainingArguments` for the names of the datasets, that's a good idea, which would default to just 0, 1, etc...<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread. Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored.<|||||>not stale. I'll pick this up soon enough! <|||||>I'll put the `WIP` label so that it is not closed!<|||||>Hi! I also need this feature and I have a hacky implementation to make it work (based on @sgugger's suggestions). How is the status on your side @alexcoca? Just to know if it's worth polishing it up on my side and making a PR.<|||||>@bmichele, really nice! I've been busy with conference deadlines so progress has been slow. How about you open a PR and I can also help make some contributions/suggestions so we get the job done?<|||||>Is there an update on this? :)
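Until this lands natively, a rough sketch of the user-side workaround described in this thread could look like the following (all names here are hypothetical, and the hyperparameter-search reporting question above is left untouched):

```python
from transformers import Trainer

class MultiEvalTrainer(Trainer):
    def __init__(self, *args, extra_eval_datasets=None, **kwargs):
        super().__init__(*args, **kwargs)
        # e.g. {"in_domain": dataset_a, "shifted": dataset_b}
        self.extra_eval_datasets = extra_eval_datasets or {}

    def evaluate(self, eval_dataset=None, ignore_keys=None, metric_key_prefix="eval"):
        # Regular evaluation on the dataset passed at init.
        metrics = super().evaluate(eval_dataset, ignore_keys, metric_key_prefix)
        # One extra evaluation per additional dataset, each with its own metric prefix.
        for name, dataset in self.extra_eval_datasets.items():
            metrics.update(
                super().evaluate(
                    dataset, ignore_keys, metric_key_prefix=f"{metric_key_prefix}_{name}"
                )
            )
        return metrics
```

Since `_maybe_log_save_evaluate` calls `self.evaluate`, this approach avoids overriding any private method.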
transformers
15,856
closed
Fix (deprecated) ONNX exporter to account for new tf2onnx API
# What does this PR do? This PR fixes an issue with the deprecated `convert_graph_to_onnx` module, where exporting TensorFlow models to ONNX failed due to changes in the `tf2onnx` API. In particular, the failing tests in question were: * `test_onnx.py::OnnxExportTestCase::test_export_tensorflow` * `test_onnx.py::OnnxExportTestCase::test_quantize_tf` With this fix, the whole `test_onnx.py` test suite now passes when I run: ``` RUN_SLOW=1 pytest tests/onnx/test_onnx.py ```
02-28-2022 17:13:05
02-28-2022 17:13:05
The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_15856). All of your documentation changes will be reflected on that endpoint.
transformers
15,855
closed
Update TF LM examples
# What does this PR do? As part of the objective to bring TF examples up to speed, this PR updates the LM examples with the internal loss computations and the `to_tf_dataset()` functionality, which shaves off significant custom code.
02-28-2022 16:32:11
02-28-2022 16:32:11
The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_15855). All of your documentation changes will be reflected on that endpoint.
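For readers who want the gist of the pattern this PR switches to, here is a hedged, minimal sketch (the dataset, checkpoint, and column names are only examples, and a real script would group texts or mask padded labels rather than pad naively):

```python
import tensorflow as tf
from datasets import load_dataset
from transformers import AutoTokenizer, TFAutoModelForCausalLM, DefaultDataCollator

tokenizer = AutoTokenizer.from_pretrained("distilgpt2")
tokenizer.pad_token = tokenizer.eos_token  # GPT-2 has no pad token by default
model = TFAutoModelForCausalLM.from_pretrained("distilgpt2")

raw = load_dataset("wikitext", "wikitext-2-raw-v1", split="train[:1%]")

def tokenize(batch):
    out = tokenizer(batch["text"], truncation=True, max_length=128, padding="max_length")
    out["labels"] = out["input_ids"].copy()
    return out

tokenized = raw.map(tokenize, batched=True, remove_columns=["text"])

tf_dataset = tokenized.to_tf_dataset(
    columns=["input_ids", "attention_mask", "labels"],
    shuffle=True,
    batch_size=8,
    collate_fn=DefaultDataCollator(return_tensors="tf"),
)

# No explicit loss passed to compile(): Keras falls back to the model's
# internal causal language modeling loss computed from the "labels" key.
model.compile(optimizer=tf.keras.optimizers.Adam(3e-5))
model.fit(tf_dataset, epochs=1)
```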
transformers
15,854
closed
Add time stamps for wav2vec2 with lm
# What does this PR do?
This PR adds word offsets analogous to https://github.com/huggingface/transformers/pull/15687. `pyctcdecode` only returns timestamps for words, not characters, so only those can be returned for now.

## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/master/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/master/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?

## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag members/contributors who may be interested in your PR.
02-28-2022 16:09:41
02-28-2022 16:09:41
The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_15854). All of your documentation changes will be reflected on that endpoint.<|||||>@sgugger - if you could take a quick look here this would be great :-)
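A hedged sketch of what the new offsets enable, loosely based on the usage this PR adds (the checkpoint and dataset names are just examples):

```python
import torch
from datasets import load_dataset
from transformers import Wav2Vec2ForCTC, Wav2Vec2ProcessorWithLM

checkpoint = "patrickvonplaten/wav2vec2-base-100h-with-lm"
processor = Wav2Vec2ProcessorWithLM.from_pretrained(checkpoint)
model = Wav2Vec2ForCTC.from_pretrained(checkpoint)

ds = load_dataset("patrickvonplaten/librispeech_asr_dummy", "clean", split="validation")
inputs = processor(ds[0]["audio"]["array"], sampling_rate=16_000, return_tensors="pt")

with torch.no_grad():
    logits = model(**inputs).logits

output = processor.batch_decode(logits.numpy(), output_word_offsets=True)

# Offsets are in logit frames; for Wav2Vec2 one frame covers roughly 20 ms of audio.
time_per_frame = model.config.inputs_to_logits_ratio / 16_000
for word in output.word_offsets[0]:
    print(
        word["word"],
        round(word["start_offset"] * time_per_frame, 2),
        round(word["end_offset"] * time_per_frame, 2),
    )
```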
transformers
15,853
closed
Add TF benchmark examples
# What does this PR do? As part of the objective to refresh TF examples, this one adds actual TF benchmarks in the examples. Also showcases the power of XLA ;)
02-28-2022 14:57:00
02-28-2022 14:57:00
The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_15853). All of your documentation changes will be reflected on that endpoint.<|||||>I see that this PR (https://github.com/huggingface/transformers/pull/15848) exists -- should I just delete the benchmark example instead?<|||||>Ah, and I didn't see that we're deprecating the benchmarks either. Maybe we shouldn't merge this!<|||||>Cc @patrickvonplaten <|||||>@patrickvonplaten It's inconsistent with deprecating the benchmarks, so is your approval a sign we should un-deprecate them?<|||||>That's fine by me! :-)<|||||>Cool -- I'm closing the PR then and will open a new one as @patrickvonplaten suggested (moving both benchmarks to the research projects).
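As a rough, hedged illustration of the XLA point made in the PR description (the model choice and timing setup are arbitrary, and this is not the example script itself):

```python
import time
import tensorflow as tf
from transformers import AutoTokenizer, TFAutoModel

tokenizer = AutoTokenizer.from_pretrained("distilbert-base-uncased")
model = TFAutoModel.from_pretrained("distilbert-base-uncased")

inputs = tokenizer(["Hello world!"] * 8, padding="max_length", max_length=64, return_tensors="tf")
xla_model = tf.function(model, jit_compile=True)  # XLA-compiled forward pass

def avg_time(fn, n=10):
    fn(**inputs)  # warm-up (includes XLA compilation for the compiled variant)
    start = time.perf_counter()
    for _ in range(n):
        fn(**inputs)
    return (time.perf_counter() - start) / n

print("eager:", avg_time(model))
print("xla:  ", avg_time(xla_model))
```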
transformers
15,852
closed
Add TF generate sample tests with all logit processors
# What does this PR do? This PR adds a GPT2 TF generate sample test with all logit processors. It is a requirement for the TF generate sample refactor (https://github.com/huggingface/transformers/pull/15793), which is part of the TF generate refactor.
02-28-2022 13:32:34
02-28-2022 13:32:34
The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_15852). All of your documentation changes will be reflected on that endpoint.<|||||>Also, code quality is failing, be sure to `make style`!<|||||>> Is the plan to add a single specific test for each logit processor, as they're converted? Yeah, in the other PR (master doesn't have them yet)<|||||>Perfect! Sorry to be a bit annoying here - could you also add one random test for T5? I know that this is not a very common use case for encoder-decoder settings, but the encoder-decoder setting is significantly different from decoder-only so that it would be good to be sure that nothing is broken there before the refactor. Think you can take more or less the exact same input that you used for GPT2 (the output might not be that sensible, but anyways good to test encoder-decoder in sample mode).<|||||>@patrickvonplaten now with a T5 generate sample test. I took the liberty to find a combination of inputs where the output of one is very unstable (the first one), and the output of the other is pretty stable (the second one). It may help us distinguish the case where the new generate is completely different from the case where there are minor numerical differences.
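For reference, the kind of seeded sampling call these tests exercise looks roughly like this (the prompt, seed, and generation parameters below are arbitrary):

```python
import tensorflow as tf
from transformers import AutoTokenizer, TFAutoModelForCausalLM

tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = TFAutoModelForCausalLM.from_pretrained("gpt2")

input_ids = tokenizer("Today is a nice day and", return_tensors="tf").input_ids

tf.random.set_seed(0)  # sampling is stochastic; fixing the seed helps reproducibility
output = model.generate(
    input_ids,
    do_sample=True,
    max_length=30,
    top_k=50,
    top_p=0.9,
    temperature=0.8,
    no_repeat_ngram_size=2,
)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```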
transformers
15,851
closed
[vision] Add problem_type support
# What does this PR do? * add `problem_type` support to vision backbones that weren't supporting this before * make sure all vision backbones are tested for this
02-28-2022 13:24:44
02-28-2022 13:24:44
The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_15851). All of your documentation changes will be reflected on that endpoint.
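An example of the kind of usage this enables, assuming ViT is among the heads covered by the PR (the checkpoint, label count, and tensors below are placeholders):

```python
import torch
from transformers import ViTForImageClassification

model = ViTForImageClassification.from_pretrained(
    "google/vit-base-patch16-224-in21k",
    num_labels=3,
    problem_type="multi_label_classification",
)

pixel_values = torch.randn(2, 3, 224, 224)                 # stand-in batch of images
labels = torch.tensor([[1.0, 0.0, 1.0], [0.0, 1.0, 0.0]])  # multi-hot float labels

outputs = model(pixel_values=pixel_values, labels=labels)
print(outputs.loss)  # BCEWithLogitsLoss under multi_label_classification
```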
transformers
15,850
closed
No such file or directory: '../datasets/RE/${DATA}/${SPLIT}/cached_train_BertTokenizer_128_sst-2.lock'
[@wonjininfo](https://github.com/wonjininfo)

Platform: Google Colab
Python version: 3.7
Transformers version: 3.0.0

Installations that were done:
```
pip install transformers==3.0.0
```

I'm trying to use Biobert from [this github repository](https://github.com/dmis-lab/biobert-pytorch), and I followed the below steps given in the repo to finetune it on Relation Extraction task.

1. ``./download.sh`` to download all the datasets
2. ``./preprocess.sh`` to preprocess for RE
3.
```
%env SAVE_DIR=./output
%env DATA="GAD"
%env SPLIT="1"
%env DATA_DIR=../datasets/RE/${DATA}/${SPLIT}
%env ENTITY=${DATA}-${SPLIT}
%env MAX_LENGTH=128
%env BATCH_SIZE=32
%env NUM_EPOCHS=3
%env SAVE_STEPS=1000
%env SEED=1
```
4.
```
!python run_re.py \
    --task_name SST-2 \
    --config_name bert-base-cased \
    --data_dir ${DATA_DIR} \
    --model_name_or_path dmis-lab/biobert-base-cased-v1.1 \
    --max_seq_length ${MAX_LENGTH} \
    --num_train_epochs ${NUM_EPOCHS} \
    --per_device_train_batch_size ${BATCH_SIZE} \
    --save_steps ${SAVE_STEPS} \
    --seed ${SEED} \
    --do_train \
    --do_predict \
    --learning_rate 5e-5 \
    --output_dir ${SAVE_DIR}/${ENTITY} \
    --overwrite_output_dir
```

When I run step 4, this is the error that I get:

```
02/28/2022 07:27:25 - INFO - transformers.training_args - PyTorch: setting up devices
02/28/2022 07:27:25 - WARNING - __main__ - Process rank: -1, device: cpu, n_gpu: 0, distributed training: False, 16-bits training: False
02/28/2022 07:27:25 - INFO - transformers.configuration_utils - loading configuration file https://s3.amazonaws.com/models.huggingface.co/bert/dmis-lab/biobert-base-cased-v1.1/config.json from cache at /root/.cache/torch/transformers/efc161c68d589c7960ab5463ed06a47d75d1ec73b2c31938de0ff797f76892dd.d7a3af18ce3a2ab7c0f48f04dc8daff45ed9a3ed333b9e9a79d012a0dedf87a6
02/28/2022 07:27:25 - INFO - transformers.configuration_utils - Model config BertConfig {
  "attention_probs_dropout_prob": 0.1,
  "gradient_checkpointing": false,
  "hidden_act": "gelu",
  "hidden_dropout_prob": 0.1,
  "hidden_size": 768,
  "initializer_range": 0.02,
  "intermediate_size": 3072,
  "layer_norm_eps": 1e-12,
  "max_position_embeddings": 512,
  "model_type": "bert",
  "num_attention_heads": 12,
  "num_hidden_layers": 12,
  "pad_token_id": 0,
  "type_vocab_size": 2,
  "vocab_size": 28996
}

02/28/2022 07:27:25 - INFO - transformers.tokenization_utils_base - Model name 'dmis-lab/biobert-base-cased-v1.1' not found in model shortcut name list (bert-base-uncased, bert-large-uncased, bert-base-cased, bert-large-cased, bert-base-multilingual-uncased, bert-base-multilingual-cased, bert-base-chinese, bert-base-german-cased, bert-large-uncased-whole-word-masking, bert-large-cased-whole-word-masking, bert-large-uncased-whole-word-masking-finetuned-squad, bert-large-cased-whole-word-masking-finetuned-squad, bert-base-cased-finetuned-mrpc, bert-base-german-dbmdz-cased, bert-base-german-dbmdz-uncased, TurkuNLP/bert-base-finnish-cased-v1, TurkuNLP/bert-base-finnish-uncased-v1, wietsedv/bert-base-dutch-cased). Assuming 'dmis-lab/biobert-base-cased-v1.1' is a path, a model identifier, or url to a directory containing tokenizer files.
02/28/2022 07:27:29 - INFO - transformers.tokenization_utils_base - loading file https://s3.amazonaws.com/models.huggingface.co/bert/dmis-lab/biobert-base-cased-v1.1/vocab.txt from cache at /root/.cache/torch/transformers/a6d2d795bddbd9841e0ccd4a2f51c5b412116fda79488f6ffed7979e7ea9ef36.e13dbb970cb325137104fb2e5f36fe865f27746c6b526f6352861b1980eb80b1
02/28/2022 07:27:29 - INFO - transformers.tokenization_utils_base - loading file https://s3.amazonaws.com/models.huggingface.co/bert/dmis-lab/biobert-base-cased-v1.1/added_tokens.json from cache at None
02/28/2022 07:27:29 - INFO - transformers.tokenization_utils_base - loading file https://s3.amazonaws.com/models.huggingface.co/bert/dmis-lab/biobert-base-cased-v1.1/special_tokens_map.json from cache at None
02/28/2022 07:27:29 - INFO - transformers.tokenization_utils_base - loading file https://s3.amazonaws.com/models.huggingface.co/bert/dmis-lab/biobert-base-cased-v1.1/tokenizer_config.json from cache at None
02/28/2022 07:27:29 - INFO - transformers.tokenization_utils_base - loading file https://s3.amazonaws.com/models.huggingface.co/bert/dmis-lab/biobert-base-cased-v1.1/tokenizer.json from cache at None
Traceback (most recent call last):
  File "run_re.py", line 259, in <module>
    main()
  File "run_re.py", line 125, in main
    GlueDataset(data_args, tokenizer=tokenizer, cache_dir=model_args.cache_dir) if training_args.do_train else None
  File "/usr/local/lib/python3.7/dist-packages/transformers/data/datasets/glue.py", line 106, in __init__
    with FileLock(lock_path):
  File "/usr/local/lib/python3.7/dist-packages/filelock/_api.py", line 214, in __enter__
    self.acquire()
  File "/usr/local/lib/python3.7/dist-packages/filelock/_api.py", line 170, in acquire
    self._acquire()
  File "/usr/local/lib/python3.7/dist-packages/filelock/_unix.py", line 35, in _acquire
    fd = os.open(self._lock_file, open_mode)
FileNotFoundError: [Errno 2] No such file or directory: '../datasets/RE/${DATA}/${SPLIT}/cached_train_BertTokenizer_128_sst-2.lock'
```

What is causing this error? Could someone please help me get rid of it?
02-28-2022 13:06:19
02-28-2022 13:06:19
From your error `No such file or directory: '../datasets/RE/${DATA}/${SPLIT}/cached_train_BertTokenizer_128_sst-2.lock'` , it seems like the script input literally takes `${DATA}` and `${SPLIT}` instead of the values of the environment variables. <|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread. Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored.<|||||>I am having the same issue. Does anyone know how to resolve it?
transformers
15,849
closed
Rename semantic segmentation outputs
# What does this PR do? This PR renames `SemanticSegmentationModelOutput` to `SemanticSegmentationOutput`, because of simplicity. We also have `SequenceClassifierOutput` and `TokenClassifierOutput` for instance, so the "model" in the name is not really useful.
02-28-2022 10:53:27
02-28-2022 10:53:27
The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_15849). All of your documentation changes will be reflected on that endpoint.<|||||>`Classifier` is a model name, whereas `Segmentation` is not. We could go for `SemanticSegmenter` if you dislike `SemanticSegmentationModelOutput`, but `SemanticSegmentationOutput` does not match the other model output names.<|||||>Agree with @sgugger , similarly to `SemanticSegmentationModelOutput`, I've added a `ImageClassificationModelOutput` (in resnet pr for now) following the same naming convention<|||||>Should we resurrect this? We went for `ImageClassifierOutput`, maybe we can go for `SemanticSegmenterOutput` as @sgugger proposed<|||||>@FrancescoSaverioZuppichini updated the PR accordingly
transformers
15,848
closed
[Benchmark tools] Deprecate all
# What does this PR do?
As discussed internally this PR deprecates all of HF's benchmark tools. Instead of deleting the docs I put a warning note at the top.

## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/master/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/master/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?

## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag members/contributors who may be interested in your PR.
02-28-2022 10:23:54
02-28-2022 10:23:54
The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_15848). All of your documentation changes will be reflected on that endpoint.