Dataset schema (column: type, observed range):
repo: string, 1 distinct value
number: int64, 1 to 25.3k
state: string, 2 distinct values
title: string, 1 to 487 characters
body: string, 0 to 234k characters
created_at: string, 19 characters
closed_at: string, 19 characters
comments: string, 0 to 293k characters
transformers
15,243
closed
Update pipelines.mdx
fix few spelling mistakes # What does this PR do? <!-- Congratulations! You've made it this far! You're not quite done yet though. Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution. Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change. Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost. --> <!-- Remove if not applicable --> Fixes # (issue) ## Before submitting - [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case). - [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section? - [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link to it if that's the case. - [ ] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/master/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/master/docs#writing-source-documentation). - [ ] Did you write any new necessary tests? ## Who can review? Anyone in the community is free to review the PR once the tests have passed. Feel free to tag members/contributors who may be interested in your PR. <!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @ If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**. Please tag fewer than 3 people. Models: - albert, bert, xlm: @LysandreJik - blenderbot, bart, marian, pegasus, encoderdecoder, t5: @patrickvonplaten, @patil-suraj - longformer, reformer, transfoxl, xlnet: @patrickvonplaten - fsmt: @stas00 - funnel: @sgugger - gpt2: @patrickvonplaten, @LysandreJik - rag: @patrickvonplaten, @lhoestq - tensorflow: @LysandreJik Library: - benchmarks: @patrickvonplaten - deepspeed: @stas00 - ray/raytune: @richardliaw, @amogkam - text generation: @patrickvonplaten - tokenizers: @n1t0, @LysandreJik - trainer: @sgugger - pipelines: @LysandreJik Documentation: @sgugger HF projects: - datasets: [different repo](https://github.com/huggingface/datasets) - rust tokenizers: [different repo](https://github.com/huggingface/tokenizers) Examples: - maintained examples (not research project or legacy): @sgugger, @patil-suraj - research_projects/bert-loses-patience: @JetRunner - research_projects/distillation: @VictorSanh --> @Narsil
01-20-2022 09:39:10
01-20-2022 09:39:10
Thank you for your PR! The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_15243). All of your documentation changes will be reflected on that endpoint.<|||||>Feel free to merge.<|||||>Great job merging this PR! the documentation will now be removed from the staging environment.
transformers
15,242
closed
[ViTMAE] Add image pretraining script
# What does this PR do? This PR adds an example script to a new folder, "image-pretraining". It allows the user to pre-train a Vision Transformer (ViT) using the [MAE](https://arxiv.org/abs/2111.06377) method. More specifically, one can pre-train a `ViTMAEForPreTraining` with the script. After pre-training, one can easily load the weights into a `ViTForImageClassification`, obtaining SOTA accuracy. 🥇 Some things that require some feedback: - regarding providing a custom dataset: this should be updated in the README. The original script uses torchvision's handy ImageFolder as seen [here](https://github.com/facebookresearch/mae/blob/6a2ba402291005b003a70e99f7c87d1a2c376b0d/main_pretrain.py#L128). Can we have something similar with `datasets`? ([#2830](https://github.com/huggingface/datasets/pull/2830)) - hyperparameters: here are the ones used in the original paper: <img width="521" alt="Schermafbeelding 2022-01-20 om 10 22 07" src="https://user-images.githubusercontent.com/48327001/150310271-5d3afc1b-43a9-46be-bc9b-d7a53c9c65de.png"> => I'd like to mimic these as closely as possible.
01-20-2022 09:23:47
01-20-2022 09:23:47
_The documentation is not available anymore as the PR was closed or merged._<|||||>We do have a speech-pretraining folder in the examples directory, which also only works for a single model (`Wav2Vec2ForPreTraining`).<|||||>Do what you want then :-)
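A note on the workflow described in PR #15242 above: after pre-training a `ViTMAEForPreTraining` with the script, the encoder weights can be loaded into a `ViTForImageClassification` for fine-tuning. Below is a minimal sketch of that hand-off; the checkpoint path and `num_labels` are placeholders, not values from the PR.

```python
from transformers import ViTForImageClassification, ViTMAEForPreTraining

checkpoint_dir = "./vit-mae-pretrained"  # placeholder: output_dir of the pretraining run

# Sanity-check that the checkpoint still loads as an MAE pre-training model.
mae_model = ViTMAEForPreTraining.from_pretrained(checkpoint_dir)

# Re-load the shared encoder weights into a classification model; the MAE decoder
# is discarded and the classification head is newly initialized (expect a warning).
classifier = ViTForImageClassification.from_pretrained(
    checkpoint_dir,
    num_labels=1000,  # placeholder: number of classes in the downstream dataset
)
```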
transformers
15,241
closed
Updating InfNanLogitsprocess to also remove negative infinity.
# What does this PR do? <!-- Congratulations! You've made it this far! You're not quite done yet though. Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution. Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change. Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost. --> <!-- Remove if not applicable --> Fixes #15169 ## Before submitting - [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case). - [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section? - [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link to it if that's the case. - [ ] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/master/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/master/docs#writing-source-documentation). - [ ] Did you write any new necessary tests? ## Who can review? Anyone in the community is free to review the PR once the tests have passed. Feel free to tag members/contributors who may be interested in your PR. <!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @ If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**. Please tag fewer than 3 people. Models: - albert, bert, xlm: @LysandreJik - blenderbot, bart, marian, pegasus, encoderdecoder, t5: @patrickvonplaten, @patil-suraj - longformer, reformer, transfoxl, xlnet: @patrickvonplaten - fsmt: @stas00 - funnel: @sgugger - gpt2: @patrickvonplaten, @LysandreJik - rag: @patrickvonplaten, @lhoestq - tensorflow: @LysandreJik Library: - benchmarks: @patrickvonplaten - deepspeed: @stas00 - ray/raytune: @richardliaw, @amogkam - text generation: @patrickvonplaten - tokenizers: @n1t0, @LysandreJik - trainer: @sgugger - pipelines: @LysandreJik Documentation: @sgugger HF projects: - datasets: [different repo](https://github.com/huggingface/datasets) - rust tokenizers: [different repo](https://github.com/huggingface/tokenizers) Examples: - maintained examples (not research project or legacy): @sgugger, @patil-suraj - research_projects/bert-loses-patience: @JetRunner - research_projects/distillation: @VictorSanh -->
01-20-2022 08:51:31
01-20-2022 08:51:31
_The documentation is not available anymore as the PR was closed or merged._<|||||>I'm not really sure that this is the reason of the problem. Usually there is nothing wrong with `float("-inf")` values for PyTorch's softmax or sampling function. I've debugged: ```python from transformers import T5Tokenizer, T5ForConditionalGeneration, T5Config lm_model = 't5-small' model = T5ForConditionalGeneration.from_pretrained(lm_model) tokenizer = T5Tokenizer.from_pretrained(lm_model) def restrict_decode_vocab(batch_idx, prefix_beam): if len(prefix_beam)==3: restricted_vocab = tokenizer(' ', return_tensors="pt")['input_ids'].tolist() else: restricted_vocab = tokenizer('<extra_id_0> cute dog <extra_id_1> the <pad>', return_tensors="pt")['input_ids'].tolist() return restricted_vocab source = ['The <extra_id_0> walks in <extra_id_1> park .'] source_encoding = tokenizer(source[:], padding='longest', return_tensors="pt") input_ids, attention_mask = source_encoding['input_ids'], source_encoding['attention_mask'] decoded_beams = model.generate(input_ids=input_ids, attention_mask=attention_mask, do_sample=True, num_beams=2, prefix_allowed_tokens_fn=restrict_decode_vocab, min_length=4, max_length=4, remove_invalid_values=True) print(decoded_beams) ``` and the reason for the failure seems to be that after 2 generation steps the logits are all `nan`.<|||||>Yes, that's because of the `float(-inf)` the second step, everything is `float(-inf)` before the softmax, and then everything become NaN after the softmax. (The exp sum being -inf, it's understandable)<|||||>Yeah, but it should be all `-inf`, this essentially means that no token should be sampled<|||||>So IMO the problem here is that the user added a `prefix_allowed_tokens_fn` that sets all values to `-inf`.<|||||>Closing, the bug is that the `scores` should never become only `float(-inf)`. So earlier than this.
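The exchange in #15241 above turns on what happens when every score at a generation step is `float("-inf")`. Here is a minimal PyTorch sketch of just that arithmetic (not the library's logits processor itself):

```python
import torch

# One timestep whose logits have all been masked to -inf,
# e.g. by an over-restrictive prefix_allowed_tokens_fn.
scores = torch.full((1, 10), float("-inf"))

probs = torch.softmax(scores, dim=-1)
print(probs)
# exp(-inf) is 0 for every entry, so the normalizer is 0 and 0/0 yields NaN,
# matching the all-NaN logits observed after the second generation step.
assert torch.isnan(probs).all()
```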
transformers
15,240
closed
[Fix doc example] missing import
# What does this PR do? Add `from transformers import EncoderDecoderModel, TFEncoderDecoderModel` in a doc example.
01-20-2022 08:46:05
01-20-2022 08:46:05
Thank you for your PR! The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_15240). All of your documentation changes will be reflected on that endpoint.<|||||>Great job merging this PR! the documentation will now be removed from the staging environment.
transformers
15,239
closed
Update README.md
# What does this PR do? Adds an OVHcloud tutorial URL for the Robust Speech Challenge
01-20-2022 08:46:04
01-20-2022 08:46:04
Thank you for your PR! The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_15239). All of your documentation changes will be reflected on that endpoint.<|||||>Great job merging this PR! the documentation will now be removed from the staging environment.
transformers
15,238
closed
Fix a bug that QuestionAnsweringPipeline ignores max_seq_len parameter
# What does this PR do? Fix a bug that QuestionAnsweringPipeline ignores max_seq_len parameter <!-- Congratulations! You've made it this far! You're not quite done yet though. Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution. Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change. Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost. --> <!-- Remove if not applicable --> ## Before submitting - [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case). - [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section? - [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link to it if that's the case. - [ ] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/master/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/master/docs#writing-source-documentation). - [ ] Did you write any new necessary tests? ## Who can review? Anyone in the community is free to review the PR once the tests have passed. Feel free to tag members/contributors who may be interested in your PR. <!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @ If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**. Please tag fewer than 3 people. Models: - albert, bert, xlm: @LysandreJik - blenderbot, bart, marian, pegasus, encoderdecoder, t5: @patrickvonplaten, @patil-suraj - longformer, reformer, transfoxl, xlnet: @patrickvonplaten - fsmt: @stas00 - funnel: @sgugger - gpt2: @patrickvonplaten, @LysandreJik - rag: @patrickvonplaten, @lhoestq - tensorflow: @LysandreJik Library: - benchmarks: @patrickvonplaten - deepspeed: @stas00 - ray/raytune: @richardliaw, @amogkam - text generation: @patrickvonplaten - tokenizers: @n1t0, @LysandreJik - trainer: @sgugger - pipelines: @LysandreJik Documentation: @sgugger HF projects: - datasets: [different repo](https://github.com/huggingface/datasets) - rust tokenizers: [different repo](https://github.com/huggingface/tokenizers) Examples: - maintained examples (not research project or legacy): @sgugger, @patil-suraj - research_projects/bert-loses-patience: @JetRunner - research_projects/distillation: @VictorSanh --> @LysandreJik
01-20-2022 08:03:58
01-20-2022 08:03:58
_The documentation is not available anymore as the PR was closed or merged._<|||||>Merging now since this PR is relatively trivial and was running for long enough, sorry about the review time.
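For context on the fix in PR #15238 above, `max_seq_len` (together with `doc_stride`) is passed at pipeline call time. A small sketch of that usage, assuming the post-fix behavior; the model name and parameter values are only illustrative:

```python
from transformers import pipeline

qa = pipeline("question-answering", model="distilbert-base-cased-distilled-squad")

result = qa(
    question="Which parameter was being ignored?",
    context="The bug report says QuestionAnsweringPipeline ignored the max_seq_len "
    "parameter when splitting long contexts into overlapping chunks.",
    max_seq_len=384,  # maximum length of each question + context chunk
    doc_stride=128,   # overlap between consecutive chunks
)
print(result["answer"], result["score"])
```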
transformers
15,237
closed
why training bigbird-512 model is much slower than bert-512?
## Environment info <!-- You can run the command `transformers-cli env` and copy-and-paste its output below. Don't forget to fill out the missing fields in that output! --> - `transformers` version: 4.15.0 - Platform: - Python version: 3.6.8 - PyTorch version (GPU?): 1.10.1+cu102 - Tensorflow version (GPU?): - Using GPU in script?: one V100-16GB GPU - Using distributed or parallel set-up in script?: no ### Who can help @patrickvonplaten Model I am using (Bert, XLNet ...): BigBird The problem arises when using: * [ ] the official example scripts: (give details below) * [*] my own modified scripts: (give details below) The tasks I am working on is: * [ ] an official GLUE/SQUaD task: (give the name) * [*] my own task or dataset: (give details below) ## To reproduce Steps to reproduce the behavior: I compared the training time of a BigBird model and a BERT model, both at a sequence length of 512, using the officially recommended pipeline (config -> MLM model -> dataset -> data_collator -> training_args -> trainer). I find that the BigBird model trains much more slowly than the BERT model, even at the same sequence length. Here are the detailed parameters: num_train_epochs=1 learning_rate = 5e-5 warmup_steps = 0 save_strategy='no' gradient_checkpointing=False, seed=222 gradient_accumulation_steps=1 max_length = 512 the total number of samples is 200_000 1. For BERT-512, I can set the batch size up to 13 (anything bigger raises an OOM error); the total training time is 2:04:18. 2. For BigBird-512, I reduce block_size to 32 to meet the requirement (5 + 2*num_random_blocks)*block_size < 512, and can only set the batch size up to 11 (anything bigger raises an OOM error; this value is smaller than BERT's, and I don't know the reason. Is the total number of BigBird parameters larger than BERT's?). The total training time is about 4:45:23, which is much slower than BERT-512. Is anything wrong? Looking forward to your response, thanks in advance.
01-20-2022 05:39:25
01-20-2022 05:39:25
If your maximum sequence length is just 512, it does indeed not make much sense to use BigBird and BERT, AlBERT, or DeBERTa make much more sense. BigBird should be used for models that require sequence lengths > 2048<|||||>The main promise of BigBird is not that it is faster than BERT for seq_len < 512, but rather that it can handle sequence lengths of up to 16K tokens<|||||>Got it. Thanks very much.
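The answer in #15237 above is that BigBird's block-sparse attention only pays off well beyond 512 tokens; for short sequences, full attention (or plain BERT) is the better fit. Below is a sketch of the two BigBird attention modes plus a rough parameter-count comparison with BERT, assuming the default base-sized configs (random weights, no downloads):

```python
from transformers import BertConfig, BertForMaskedLM, BigBirdConfig, BigBirdForMaskedLM

# Full (dense) attention: the sensible choice for sequence lengths around 512.
bigbird_full = BigBirdForMaskedLM(BigBirdConfig(attention_type="original_full"))

# Block-sparse attention: only worthwhile for long inputs; the usable length must
# satisfy (5 + 2 * num_random_blocks) * block_size < seq_len, which is why the
# issue author had to shrink block_size to 32 at seq_len = 512.
bigbird_sparse = BigBirdForMaskedLM(
    BigBirdConfig(attention_type="block_sparse", block_size=64, num_random_blocks=3)
)

bert = BertForMaskedLM(BertConfig())

for name, model in [("bert", bert), ("bigbird_full", bigbird_full), ("bigbird_sparse", bigbird_sparse)]:
    n_params = sum(p.numel() for p in model.parameters())
    print(f"{name}: {n_params / 1e6:.1f}M parameters")
```

Part of why a base-sized BigBird carries more parameters than a base-sized BERT is its larger default vocabulary and its 4096-position embedding table, which is consistent with the smaller batch size the issue author could fit at the same sequence length.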
transformers
15,236
closed
wav2vec2_with_lm
I followed the example and ran KenLM on a Chinese test set. It works, but no matter how I adjust the language model weight, the result stays the same. This is my debugging code: ```python # dict vocab_file = os.path.join(model_path, 'vocab.json') with open(vocab_file, 'r', encoding="utf-8") as f: vocab_dict = json.load(f) labels = list(vocab_dict.keys()) # kenlm decoder decoder = build_ctcdecoder( labels, kenlm_model_path=kenlm_path, alpha=alpha, beta=beta ) # processor with LM processor_with_lm = Wav2Vec2ProcessorWithLM( feature_extractor=processor.feature_extractor, tokenizer=processor.tokenizer, decoder=decoder ) # decode with LM text_tmp = processor_with_lm.batch_decode(logits.numpy()).text[0].split('<s>') text_lm = ''.join(text_tmp) print("lm text: {}".format(text_lm)) ``` How should I adjust alpha and beta?
01-20-2022 02:56:46
01-20-2022 02:56:46
You can run a hyperparameter search by inferencing a test set with different alpha and beta values. But if you have a small language model there might not be a difference. Maybe use a larger language model or larger test set.<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread. Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored.<|||||>> I refer to the example and run kenlm on the Chinese test set, it is ok ,but the language model weight, no matter how I adjust it, the result remains the same, this is my debugging code > > # dict > ``` > vocabfile = os.path.join(model_path,'vocab.json') > with open(vocabfile,'r',encoding="utf-8") as f: > vocab_dict = json.load(f) > > labels = list(vocab_dict.keys()) > # kenlm deocder > > decoder = build_ctcdecoder( > labels, > kenlm_model_path=kenlm_path, > alpha = alpha, > beta = beta > ) > # processor_with_lm > > processor_with_lm = Wav2Vec2ProcessorWithLM( > feature_extractor=processor.feature_extractor, > tokenizer=processor.tokenizer, > decoder=decoder > ) > > # decode with lm > text_tmp = processor_with_lm.batch_decode(logits.numpy()).text[0].split('<s>') > text_lm = ''.join(text_tmp) > print ("lm text:{}".format(text_lm)) > ``` > > i adjust alpha beta , how can i do ? Check your kenlm model is char-level or word-level. To wav2vec2_with_lm, it bases `pyctcdecode`, which only support word-level kenlm. If you need to use char-level language model, you need to tune codes of `pyctcdecode`.
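Following the first reply in #15236 above, `alpha` and `beta` are usually tuned by decoding a held-out set with several values and scoring each run. A rough grid-search sketch under the same setup as the issue; `labels`, `kenlm_path`, and the `(logits, reference)` pairs are assumed to exist already, and `jiwer` is just one possible WER implementation:

```python
import itertools

from jiwer import wer                     # assumed available for word error rate
from pyctcdecode import build_ctcdecoder  # same builder used in the issue


def grid_search(labels, kenlm_path, eval_pairs, alphas=(0.3, 0.5, 0.7), betas=(0.5, 1.0, 1.5)):
    """eval_pairs: list of (logits array of shape [time, vocab], reference transcript)."""
    best = None
    for alpha, beta in itertools.product(alphas, betas):
        decoder = build_ctcdecoder(labels, kenlm_model_path=kenlm_path, alpha=alpha, beta=beta)
        hypotheses = [decoder.decode(logits) for logits, _ in eval_pairs]
        references = [ref for _, ref in eval_pairs]
        score = wer(references, hypotheses)
        if best is None or score < best[0]:
            best = (score, alpha, beta)
    return best  # (best_wer, alpha, beta)
```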
transformers
15,235
closed
Specify providers explicitly in ORT session initialization
# What does this PR do? Since ORT 1.9, you are required to explicitly set the providers parameter when instantiating InferenceSession. For example, onnxruntime.InferenceSession(..., providers=['CUDAExecutionProvider', 'CPUExecutionProvider'], ...) <!-- Congratulations! You've made it this far! You're not quite done yet though. Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution. Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change. Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost. --> <!-- Remove if not applicable --> Fixes # (issue) ## Before submitting - [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case). - [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section? - [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link to it if that's the case. - [ ] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/master/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/master/docs#writing-source-documentation). - [ ] Did you write any new necessary tests? ## Who can review? Anyone in the community is free to review the PR once the tests have passed. Feel free to tag members/contributors who may be interested in your PR. <!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @ If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**. Please tag fewer than 3 people. Models: - albert, bert, xlm: @LysandreJik - blenderbot, bart, marian, pegasus, encoderdecoder, t5: @patrickvonplaten, @patil-suraj - longformer, reformer, transfoxl, xlnet: @patrickvonplaten - fsmt: @stas00 - funnel: @sgugger - gpt2: @patrickvonplaten, @LysandreJik - rag: @patrickvonplaten, @lhoestq - tensorflow: @LysandreJik Library: - benchmarks: @patrickvonplaten - deepspeed: @stas00 - ray/raytune: @richardliaw, @amogkam - text generation: @patrickvonplaten - tokenizers: @n1t0, @LysandreJik - trainer: @sgugger - pipelines: @LysandreJik Documentation: @sgugger HF projects: - datasets: [different repo](https://github.com/huggingface/datasets) - rust tokenizers: [different repo](https://github.com/huggingface/tokenizers) Examples: - maintained examples (not research project or legacy): @sgugger, @patil-suraj - research_projects/bert-loses-patience: @JetRunner - research_projects/distillation: @VictorSanh -->
01-20-2022 01:27:04
01-20-2022 01:27:04
_The documentation is not available anymore as the PR was closed or merged._
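PR #15235 above concerns the ORT >= 1.9 requirement to pass `providers` explicitly when creating a session. A short sketch of the pattern, with a placeholder model path; it falls back to CPU when the CUDA build of onnxruntime is not installed:

```python
import onnxruntime as ort

model_path = "model.onnx"  # placeholder: path to an exported ONNX model

# Keep only the providers actually available in this onnxruntime build.
available = ort.get_available_providers()
providers = [p for p in ("CUDAExecutionProvider", "CPUExecutionProvider") if p in available]

session = ort.InferenceSession(model_path, providers=providers)
print("running on:", session.get_providers())
```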
transformers
15,234
closed
Fixes tf_default_data_collator sometimes guessing the wrong dtype for labels
Fixes issues reported by @philschmid in [this notebook](https://colab.research.google.com/drive/13nMZPJpgJpqzdl2e5t4cnV26m_l2xwlw?usp=sharing) with `labels` occasionally ending up as `tf.float32` instead of `tf.int64`. The underlying cause was that the `dtype`-checking code failed when the label for a single example was a NumPy scalar: scalars made the `isinstance(..., np.ndarray)` check return `False`. Adding an `isinstance(..., np.generic)` check as well correctly catches NumPy scalars in addition to arrays.
01-19-2022 18:09:03
01-19-2022 18:09:03
Thank you for your PR! The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_15234). All of your documentation changes will be reflected on that endpoint.<|||||>Great job merging this PR! the documentation will now be removed from the staging environment.
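The root cause in PR #15234 above is that NumPy scalars are not instances of `np.ndarray`. A tiny standalone sketch of the check before and after the fix (not the collator code itself):

```python
import numpy as np

label = np.int64(3)  # a single example's label often arrives as a NumPy scalar

print(isinstance(label, np.ndarray))                # False: the old check missed scalars
print(isinstance(label, (np.ndarray, np.generic)))  # True: np.generic covers NumPy scalars
print(np.issubdtype(np.asarray(label).dtype, np.integer))  # integer labels -> keep tf.int64
```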
transformers
15,233
closed
Adapt Common Voice Talk Title and Abstract
# What does this PR do? <!-- Congratulations! You've made it this far! You're not quite done yet though. Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution. Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change. Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost. --> <!-- Remove if not applicable --> Fixes # (issue) ## Before submitting - [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case). - [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section? - [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link to it if that's the case. - [ ] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/master/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/master/docs#writing-source-documentation). - [ ] Did you write any new necessary tests? ## Who can review? Anyone in the community is free to review the PR once the tests have passed. Feel free to tag members/contributors who may be interested in your PR. <!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @ If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**. Please tag fewer than 3 people. Models: - albert, bert, xlm: @LysandreJik - blenderbot, bart, marian, pegasus, encoderdecoder, t5: @patrickvonplaten, @patil-suraj - longformer, reformer, transfoxl, xlnet: @patrickvonplaten - fsmt: @stas00 - funnel: @sgugger - gpt2: @patrickvonplaten, @LysandreJik - rag: @patrickvonplaten, @lhoestq - tensorflow: @LysandreJik Library: - benchmarks: @patrickvonplaten - deepspeed: @stas00 - ray/raytune: @richardliaw, @amogkam - text generation: @patrickvonplaten - tokenizers: @n1t0, @LysandreJik - trainer: @sgugger - pipelines: @LysandreJik Documentation: @sgugger HF projects: - datasets: [different repo](https://github.com/huggingface/datasets) - rust tokenizers: [different repo](https://github.com/huggingface/tokenizers) Examples: - maintained examples (not research project or legacy): @sgugger, @patil-suraj - research_projects/bert-loses-patience: @JetRunner - research_projects/distillation: @VictorSanh -->
01-19-2022 17:00:44
01-19-2022 17:00:44
Thank you for your PR! The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_15233). All of your documentation changes will be reflected on that endpoint.<|||||>Great job merging this PR! the documentation will now be removed from the staging environment.
transformers
15,232
closed
Wav2Vec2ForPreTraining doc example has None loss
## Environment info <!-- You can run the command `transformers-cli env` and copy-and-paste its output below. Don't forget to fill out the missing fields in that output! --> - `transformers` version: 4.15.0 - Platform: Linux-5.11.0-46-generic-x86_64-with-glibc2.31 - Python version: 3.9.7 - PyTorch version (GPU?): 1.10.1 (True) - Tensorflow version (GPU?): 2.7.0 (False) - Flax version (CPU?/GPU?/TPU?): not installed (NA) - Jax version: not installed - JaxLib version: not installed - Using GPU in script?: Yes - Using distributed or parallel set-up in script?: No ### Who can help Models: - Wav2Vec2, HuBERT, SpeechEncoderDecoder, UniSpeech, UniSpeechSAT, SEW, SEW-D, Speech2Text: @patrickvonplaten, @anton-l Documentation: @sgugger ## Information I'm trying to additionally pretrain Wav2Vec2.0 model on my dataset. [In the docs](https://huggingface.co/docs/transformers/model_doc/wav2vec2#transformers.Wav2Vec2ForPreTraining) you have an example for running the `Wav2Vec2ForPreTraining`: ```python import torch from transformers import Wav2Vec2FeatureExtractor, Wav2Vec2ForPreTraining from transformers.models.wav2vec2.modeling_wav2vec2 import _compute_mask_indices from datasets import load_dataset import soundfile as sf feature_extractor = Wav2Vec2FeatureExtractor.from_pretrained("patrickvonplaten/wav2vec2-base") model = Wav2Vec2ForPreTraining.from_pretrained("patrickvonplaten/wav2vec2-base") def map_to_array(batch): speech, _ = sf.read(batch["file"]) batch["speech"] = speech return batch ds = load_dataset("hf-internal-testing/librispeech_asr_dummy", "clean", split="validation") ds = ds.map(map_to_array) input_values = feature_extractor(ds["speech"][0], return_tensors="pt").input_values # Batch size 1 # compute masked indices batch_size, raw_sequence_length = input_values.shape sequence_length = model._get_feat_extract_output_lengths(raw_sequence_length) mask_time_indices = _compute_mask_indices((batch_size, sequence_length), mask_prob=0.2, mask_length=2) with torch.no_grad(): outputs = model(input_values, mask_time_indices=mask_time_indices) # compute cosine similarity between predicted (=projected_states) and target (=projected_quantized_states) cosine_sim = torch.cosine_similarity( outputs.projected_states, outputs.projected_quantized_states, dim=-1 ) # show that cosine similarity is much higher than random assert cosine_sim[mask_time_indices].mean() > 0.5 # for contrastive loss training model should be put into train mode model.train() loss = model(input_values, mask_time_indices=mask_time_indices).loss ``` If you print the `loss` that you get in the end, you will get `None`. This happens because [in the definition](https://github.com/huggingface/transformers/blob/f4b7420dfe419fe653908f091976517635a119e6/src/transformers/models/wav2vec2/modeling_wav2vec2.py#L1511) you have to pass `sampled_negative_indices` in order to get not `None` loss. ## To reproduce Steps to reproduce the behavior: 1. Run the above code 2. `print(loss)` in the end ## Expected behavior Expected to have some example on how to get the actual loss and train the model.
01-19-2022 16:48:07
01-19-2022 16:48:07
Hi, Thanks for spotting. Feel free to open a PR to fix this :) <|||||>Hi As I am a beginner, I would love to contribute to this Good First Issue, can anyone guide me. ThankYou<|||||>Hi, The docs of `Wav2Vec2ForPretraining` can be found in `modeling_wav2vec2.py`, [here](https://github.com/huggingface/transformers/blob/c85547af2b69f9082bcd7bac97092b1d162f3fdc/src/transformers/models/wav2vec2/modeling_wav2vec2.py#L1469-L1502).<|||||>Hi, I am a beginner, and given that the issue has been inactive for some time, I'll like to take this and try fixing it.<|||||>The following changes to the code, mainly adding the `sampled_negative_indices` as mentioned by the original issue, work, and allow to print the loss ( run on google colab ). The sampled_negative_indices can be calculated from the `_sample_negative_indices` function from `transformers.models.wav2vec2.modeling_wav2vec2` Also I added a .item() at the end of `sequence_length = model._get_feat_extract_output_lengths(raw_sequence_length).item()` because otherwise it would be a tensor, and would give an error in `_sample_negative_indices` ![image](https://user-images.githubusercontent.com/43698245/171943954-dceb4f2c-4e45-40b0-9803-6b7f20d000ee.png) Does this change to the documentation seem fine? I am just a beginner so please let me know if I need to provide more information/context<|||||>Adding the code here ```python import torch from transformers import Wav2Vec2FeatureExtractor, Wav2Vec2ForPreTraining from transformers.models.wav2vec2.modeling_wav2vec2 import _compute_mask_indices, _sample_negative_indices from datasets import load_dataset import soundfile as sf feature_extractor = Wav2Vec2FeatureExtractor.from_pretrained("patrickvonplaten/wav2vec2-base") model = Wav2Vec2ForPreTraining.from_pretrained("patrickvonplaten/wav2vec2-base") ds = load_dataset("hf-internal-testing/librispeech_asr_dummy", "clean", split="validation") input_values = feature_extractor(ds[0]["audio"]["array"], return_tensors="pt").input_values # Batch size 1 # compute masked indices batch_size, raw_sequence_length = input_values.shape sequence_length = model._get_feat_extract_output_lengths(raw_sequence_length).item() mask_time_indices = _compute_mask_indices((batch_size, sequence_length), mask_prob=0.2, mask_length=2) sampled_negative_indices = _sample_negative_indices((batch_size, sequence_length), model.config.num_negatives, mask_time_indices) mask_time_indices = torch.tensor(mask_time_indices, device=input_values.device, dtype=torch.long) sampled_negative_indices = torch.tensor(sampled_negative_indices, device=input_values.device, dtype=torch.long) with torch.no_grad(): outputs = model(input_values, mask_time_indices=mask_time_indices) # compute cosine similarity between predicted (=projected_states) and target (=projected_quantized_states) cosine_sim = torch.cosine_similarity(outputs.projected_states, outputs.projected_quantized_states, dim=-1) # show that cosine similarity is much higher than random print(cosine_sim[mask_time_indices.to(torch.bool)].mean() > 0.5) # tensor(True) # for contrastive loss training model should be put into train mode model = model.train() loss = model(input_values, mask_time_indices=mask_time_indices, sampled_negative_indices=sampled_negative_indices).loss print(loss) ```<|||||>Hey @Sorrow321 @ayushtues could you upgrade your transformers version to the newest version and try again? 
I cannot reproduce the error on `main`<|||||>Hi, @patrickvonplaten I am still getting None if we don't pass sampled_negative_indices to the model (as in the original code given by @Sorrow321) while passing it prints the actual loss. I am using transformers 4.20.0 dev.0, built from source on the latest main branch My code: ```python import torch import transformers from transformers import Wav2Vec2FeatureExtractor, Wav2Vec2ForPreTraining from transformers.models.wav2vec2.modeling_wav2vec2 import ( _compute_mask_indices, _sample_negative_indices, ) from datasets import load_dataset import soundfile as sf feature_extractor = Wav2Vec2FeatureExtractor.from_pretrained( "patrickvonplaten/wav2vec2-base" ) model = Wav2Vec2ForPreTraining.from_pretrained("patrickvonplaten/wav2vec2-base") ds = load_dataset( "hf-internal-testing/librispeech_asr_dummy", "clean", split="validation" ) input_values = feature_extractor( ds[0]["audio"]["array"], return_tensors="pt" ).input_values # Batch size 1 # compute masked indices batch_size, raw_sequence_length = input_values.shape sequence_length = model._get_feat_extract_output_lengths(raw_sequence_length).item() mask_time_indices = _compute_mask_indices( (batch_size, sequence_length), mask_prob=0.2, mask_length=2 ) sampled_negative_indices = _sample_negative_indices( (batch_size, sequence_length), model.config.num_negatives, mask_time_indices ) mask_time_indices = torch.tensor( mask_time_indices, device=input_values.device, dtype=torch.long ) sampled_negative_indices = torch.tensor( sampled_negative_indices, device=input_values.device, dtype=torch.long ) with torch.no_grad(): outputs = model(input_values, mask_time_indices=mask_time_indices) # compute cosine similarity between predicted (=projected_states) and target (=projected_quantized_states) cosine_sim = torch.cosine_similarity( outputs.projected_states, outputs.projected_quantized_states, dim=-1 ) # show that cosine similarity is much higher than random print(cosine_sim[mask_time_indices.to(torch.bool)].mean() > 0.5) # tensor(True) # for contrastive loss training model should be put into train mode model = model.train() loss = model( input_values, mask_time_indices=mask_time_indices, # sampled_negative_indices=sampled_negative_indices, ).loss print("transformers version") print(transformers.__version__) print("Loss without sampled_negative_indices") print(loss) loss = model( input_values, mask_time_indices=mask_time_indices, sampled_negative_indices=sampled_negative_indices, ).loss print("Loss with sampled_negative_indices") print(loss) ``` Output screenshot ![image](https://user-images.githubusercontent.com/43698245/173180102-ca1a5359-886c-4942-808b-aa5f56e7b047.png) <|||||>Hey @ayushtues, We need to pass the `sampled_negative_indices` to get a loss - it's more or less equivalent to the "labels" in training<|||||>Hey @ayushtues are you still working on this? I would be interested in taking this issue up if not.<|||||>@pramodith by all means, feel free to look into this issue!<|||||>Hey @patrickvonplaten, since no one seems to be working on this issue at the moment I'd love to address it if that's okay!<|||||>@patrickvonplaten I just opened a PR to address this issue based on @ayushtues's suggestions. On a side note I am fairly new to open source software contribution. Any feedback is very welcome and if there is any issue with this PR I'd be happy to address it.
transformers
15,231
closed
Fix PR number
Updates the reference to the environment variable to respect Github Actions' format.
01-19-2022 15:50:14
01-19-2022 15:50:14
Thank you for your PR! The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_${PR_NUMBER}). All of your documentation changes will be reflected on that endpoint.<|||||>Thank you for your PR! The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_15231). All of your documentation changes will be reflected on that endpoint.<|||||>Thank you for your PR. The documentation will now be removed from the staging environment - feel free to reopen this PR to recreate it.<|||||>Great job merging this PR! the documentation will now be removed from the staging environment.
transformers
15,230
closed
Adds missing module_specs for usages of _LazyModule
# What does this PR do? This PR adds a missing `__spec__` to the `_LazyModule` instances of all models. When a module is missing a spec it could not be imported via `importlib.import_module`, which is what [torchmetrics does](https://github.com/PyTorchLightning/metrics/blob/master/torchmetrics/utilities/imports.py#L114), which is why I've found it in the first place. <!-- Congratulations! You've made it this far! You're not quite done yet though. Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution. Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change. Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost. --> <!-- Remove if not applicable --> Related to #13321 (pr) Fixes #15212 (issue) ## Before submitting - [x] ~This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).~ - [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section? - [x] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link to it if that's the case. - [x] ~Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/master/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/master/docs#writing-source-documentation).~ - [x] Did you write any new necessary tests? ## Who can review? Anyone in the community is free to review the PR once the tests have passed. Feel free to tag members/contributors who may be interested in your PR. <!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @ If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**. Please tag fewer than 3 people. Models: - albert, bert, xlm: @LysandreJik - blenderbot, bart, marian, pegasus, encoderdecoder, t5: @patrickvonplaten, @patil-suraj - longformer, reformer, transfoxl, xlnet: @patrickvonplaten - fsmt: @stas00 - funnel: @sgugger - gpt2: @patrickvonplaten, @LysandreJik - rag: @patrickvonplaten, @lhoestq - tensorflow: @LysandreJik Library: - benchmarks: @patrickvonplaten - deepspeed: @stas00 - ray/raytune: @richardliaw, @amogkam - text generation: @patrickvonplaten - tokenizers: @n1t0, @LysandreJik - trainer: @sgugger - pipelines: @LysandreJik Documentation: @sgugger HF projects: - datasets: [different repo](https://github.com/huggingface/datasets) - rust tokenizers: [different repo](https://github.com/huggingface/tokenizers) Examples: - maintained examples (not research project or legacy): @sgugger, @patil-suraj - research_projects/bert-loses-patience: @JetRunner - research_projects/distillation: @VictorSanh -->
01-19-2022 15:46:50
01-19-2022 15:46:50
_The documentation is not available anymore as the PR was closed or merged._<|||||>😍 exactly what i need<|||||>I have just moved the test. I also have a local commit lying around that adds the module_spec to all occurences of the LazyModule, but I am not that sure on where to put the tests for that. Looking at the `test_modeling_{{cookiecutter.lowercase_modelname}}.py` template I could add the test to both possibilities of the `{{cookiecutter.camelcase_modelname}}ModelTester` -- but I am not sure on how to import the model beforehand, since I am not that sure on how the Lazy Modules are working exactly. Can I just add a `import transformers.models.{{cookiecutter.lowercase_modelname}}` to all model test files, even outside of `is_torch_available` or would this break things?<|||||>Oh that's great! If you want to include that commit in this PR and rename it that would be great. Regarding tests, I don't think we need more than the test for the auto submodule and the test of transformers (since it's the same thing everywhere). To answer your question however, it does not need to be under a `is_torch_available()`. To have the PR be perfectly complete, could you also add the change to the modeling template (if you don't have it in your commit already) as well as fix the test for transformers spec to be in a `unittest.TestCase`?<|||||>Okay, I figured that with the tests, but I wasn't sure. In theory it wouldn't hurt, but yeah, it's probably okay to test the LazyModule once. From my side this would now be mergeable, unless I made an error somewhere! Edit: We *could* also get rid of the `Optional`-Hint for the `module_spec` to prevent those issues from arising later again (or at least the None-default).<|||||>Thank you so much for the wide fix! With the templates properly fixed, I don't think we will get new inits without the `module_spec` (if someone adds another new submodule, they will copy an existing init), so I think we're good.
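For context on PR #15230 above: availability checks of the kind torchmetrics uses lean on `importlib`, which in turn needs a module's `__spec__`. Below is a minimal sketch of such a check (the helper name is made up here); before the fix, a lazily-built submodule with `__spec__ = None` could make it fail.

```python
import importlib
import importlib.util


def module_available(module_path: str) -> bool:
    """Return True if e.g. 'transformers.models.bert' can be found and imported."""
    try:
        if importlib.util.find_spec(module_path) is None:
            return False
        importlib.import_module(module_path)
        return True
    except (ImportError, AttributeError, ValueError):
        # find_spec raises ValueError("<name>.__spec__ is None") when the module is
        # already imported but carries no spec, which is what this PR fixes for
        # transformers' lazily-initialized submodules.
        return False


print(module_available("transformers.models.bert"))
```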
transformers
15,229
closed
Is there any script for running RACE dataset?
# 🚀 Feature request examples/multiple-choice/run_swag.py is the script for running the SWAG dataset. The README says that tweaking preprocess_function() should make it work for the RACE dataset, but I still cannot figure it out from that little information. Is there any guidance on how to run RACE?
01-19-2022 15:34:34
01-19-2022 15:34:34
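To make the question in #15229 above concrete, here is one possible shape of a `preprocess_function` for RACE, modeled on run_swag.py. It assumes the Hub `race` dataset's `article`, `question`, `options`, and `answer` fields and an ordinary tokenizer; treat it as a sketch, not an official recipe.

```python
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")


def preprocess_function(examples):
    # RACE has one article, one question, four options, and a letter answer per example.
    first_sentences = [[article] * 4 for article in examples["article"]]
    second_sentences = [
        [f"{question} {option}" for option in options]
        for question, options in zip(examples["question"], examples["options"])
    ]
    labels = [ord(answer) - ord("A") for answer in examples["answer"]]  # "A".."D" -> 0..3

    # Flatten, tokenize, then regroup into blocks of 4 choices, as run_swag.py does.
    flat_first = sum(first_sentences, [])
    flat_second = sum(second_sentences, [])
    tokenized = tokenizer(flat_first, flat_second, truncation=True, max_length=512)
    batch = {k: [v[i : i + 4] for i in range(0, len(v), 4)] for k, v in tokenized.items()}
    batch["labels"] = labels
    return batch
```

This would be applied with something like `load_dataset("race", "all").map(preprocess_function, batched=True)`, keeping the rest of run_swag.py (the multiple-choice model and data collator) unchanged.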
transformers
15,228
closed
Fix typo in BERT tokenization file
# What does this PR do? <!-- Congratulations! You've made it this far! You're not quite done yet though. Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution. Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change. Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost. --> <!-- Remove if not applicable --> Fixes # (issue) ## Before submitting - [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case). - [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section? - [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link to it if that's the case. - [ ] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/master/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/master/docs#writing-source-documentation). - [ ] Did you write any new necessary tests? ## Who can review? @sgugger Anyone in the community is free to review the PR once the tests have passed. Feel free to tag members/contributors who may be interested in your PR. <!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @ If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**. Please tag fewer than 3 people. Models: - albert, bert, xlm: @LysandreJik - blenderbot, bart, marian, pegasus, encoderdecoder, t5: @patrickvonplaten, @patil-suraj - longformer, reformer, transfoxl, xlnet: @patrickvonplaten - fsmt: @stas00 - funnel: @sgugger - gpt2: @patrickvonplaten, @LysandreJik - rag: @patrickvonplaten, @lhoestq - tensorflow: @LysandreJik Library: - benchmarks: @patrickvonplaten - deepspeed: @stas00 - ray/raytune: @richardliaw, @amogkam - text generation: @patrickvonplaten - tokenizers: @n1t0, @LysandreJik - trainer: @sgugger - pipelines: @LysandreJik Documentation: @sgugger HF projects: - datasets: [different repo](https://github.com/huggingface/datasets) - rust tokenizers: [different repo](https://github.com/huggingface/tokenizers) Examples: - maintained examples (not research project or legacy): @sgugger, @patil-suraj - research_projects/bert-loses-patience: @JetRunner - research_projects/distillation: @VictorSanh -->
01-19-2022 14:39:18
01-19-2022 14:39:18
Thank you for your PR! The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_15228). All of your documentation changes will be reflected on that endpoint.<|||||>I was actually going to work on that :-) Thanks for fixing! You can see the doc after your fix [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_15228/en/model_doc/bert#transformers.BertTokenizerFast), whereas before had a problem as seen [here](https://huggingface.co/docs/transformers/master/en/model_doc/bert#transformers.BertTokenizerFast) (we just need to fix the link displayed in the comment above).<|||||>New bot looks great :)<|||||>Great job merging this PR! the documentation will now be removed from the staging environment.
transformers
15,227
closed
[Speech Event] Fix speech event readme
# What does this PR do?
01-19-2022 14:24:28
01-19-2022 14:24:28
Thank you for your PR! The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_$PR_NUMBER). All of your documentation changes will be reflected on that endpoint.<|||||>Thank you!<|||||>Great job merging this PR! the documentation will now be removed from the staging environment.
transformers
15,226
closed
Correct Speech Event Readme
# What does this PR do? <!-- Congratulations! You've made it this far! You're not quite done yet though. Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution. Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change. Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost. --> <!-- Remove if not applicable --> Correct YouTube links ## Before submitting - [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case). - [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section? - [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link to it if that's the case. - [ ] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/master/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/master/docs#writing-source-documentation). - [ ] Did you write any new necessary tests? ## Who can review? Anyone in the community is free to review the PR once the tests have passed. Feel free to tag members/contributors who may be interested in your PR. <!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @ If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**. Please tag fewer than 3 people. Models: - albert, bert, xlm: @LysandreJik - blenderbot, bart, marian, pegasus, encoderdecoder, t5: @patrickvonplaten, @patil-suraj - longformer, reformer, transfoxl, xlnet: @patrickvonplaten - fsmt: @stas00 - funnel: @sgugger - gpt2: @patrickvonplaten, @LysandreJik - rag: @patrickvonplaten, @lhoestq - tensorflow: @LysandreJik Library: - benchmarks: @patrickvonplaten - deepspeed: @stas00 - ray/raytune: @richardliaw, @amogkam - text generation: @patrickvonplaten - tokenizers: @n1t0, @LysandreJik - trainer: @sgugger - pipelines: @LysandreJik Documentation: @sgugger HF projects: - datasets: [different repo](https://github.com/huggingface/datasets) - rust tokenizers: [different repo](https://github.com/huggingface/tokenizers) Examples: - maintained examples (not research project or legacy): @sgugger, @patil-suraj - research_projects/bert-loses-patience: @JetRunner - research_projects/distillation: @VictorSanh -->
01-19-2022 14:22:11
01-19-2022 14:22:11
Thank you for your PR! The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_$PR_NUMBER). All of your documentation changes will be reflected on that endpoint.<|||||>Great job merging this PR! the documentation will now be removed from the staging environment.
transformers
15,225
closed
[Fix doc example] TFFunnelTokenizer' is not defined
# What does this PR do? `TFFunnelTokenizer` should be changed to `FunnelTokenizer`.
01-19-2022 13:37:30
01-19-2022 13:37:30
transformers
15,224
closed
Copy of the custom modeling file when saving a model
Hi, When using the dynamic code loading feature #13467 , the custom modeling file (eg: modeling.py) isn't copied into the folder when you save the model. In this case, you have to manually copy the file to the folder to be able to reload the model from the checkpoint. Code example: ``` from transformers import AutoModel, AutoTokenizer # Load a model from the hub with a custom modeling file model = AutoModel.from_pretrained("ccdv/lsg-base-4096", trust_remote_code=True) model.save_pretrained("saved_model") # Fail to reload since the file isn't in the folder model = AutoModel.from_pretrained("saved_model", trust_remote_code=True) ``` Any plan to fix this ? Thank you
01-19-2022 12:01:16
01-19-2022 12:01:16
Yes, there are plans to couple this we the `AutoXxx.register` API so that when you register a new architecture, the custom code is copied to the right places when doing `save_pretrained`. I'm hoping to get to this by the end of the week.<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread. Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored.<|||||>This should have been addressed by #15379, see the description of this PR or the new [doc page](https://huggingface.co/docs/transformers/master/en/custom_models) for more information!<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread. Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored.
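The resolution referenced above (#15379 and the custom-models doc page) revolves around registering custom classes so that `save_pretrained` also saves their defining code. A rough sketch of that flow, assuming the classes live in their own `modeling.py` file; the class names below are placeholders, not the API of the `ccdv/lsg-base-4096` repo.

```python
import torch.nn as nn
from transformers import PretrainedConfig, PreTrainedModel


class MyConfig(PretrainedConfig):
    model_type = "my-model"

    def __init__(self, hidden_size=64, **kwargs):
        super().__init__(**kwargs)
        self.hidden_size = hidden_size


class MyModel(PreTrainedModel):
    config_class = MyConfig

    def __init__(self, config):
        super().__init__(config)
        self.layer = nn.Linear(config.hidden_size, config.hidden_size)

    def forward(self, x):
        return self.layer(x)


# Registering the classes lets save_pretrained record (and copy) the defining code,
# so the saved folder can be reloaded with trust_remote_code=True.
MyConfig.register_for_auto_class()
MyModel.register_for_auto_class("AutoModel")

model = MyModel(MyConfig())
model.save_pretrained("saved_model")
```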
transformers
15,223
closed
where is the 4.16.0dev??
I'm running the run_mlm.py script. It contains this line: `# Will error if the minimal version of Transformers is not installed. Remove at your own risks.` followed by `check_min_version("4.16.0.dev0")`. But where is that version? I can't find it via pip, nor on GitHub.
01-19-2022 11:41:04
01-19-2022 11:41:04
Hey! That's the current `master` version :) https://github.com/huggingface/transformers/blob/80f72960913ab6682451c33dfa8035ef0c932128/setup.py#L354<|||||>You can either clone the repo and install it locally, or install it with the following: ``` pip install -U git+https://github.com/huggingface/transformers ```<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread. Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored.
transformers
15,222
closed
Is it possible to support Wav2Vec in ZeroShotClassificationPipeline?
Similar to the very helpful NLI-based zero-shot classification pipeline using a ModelForSequenceClassification, it would be great to have zero-shot on audio data. Pass wav file(s) with candidate labels to the pipeline and get a prediction. Is this at all on the roadmap?
01-19-2022 11:17:53
01-19-2022 11:17:53
Maybe of interest to @Narsil and @patrickvonplaten<|||||>Pretty cool idea! @mabu-dev do you know whether there is a research paper on it? For NLI zero-shot we have this paper: https://huggingface.co/facebook/bart-large-mnli#nli-based-zero-shot-text-classification<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread. Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored.
transformers
15,221
closed
[ViTMAE] Various fixes
# What does this PR do? This PR: - adds a link to a demo notebook for Facebook's MAE. - fixes the links to the docs of various Vision Transformer-based models in ViT's doc page. - fixes the code examples for MAE. - adds MAE to the `AutoFeatureExtractor` API.
01-19-2022 11:04:39
01-19-2022 11:04:39
Great job merging this PR! the documentation will now be removed from the staging environment.
transformers
15,220
closed
Wav2vec code not working with kenlm n-gram
I followed all the steps listed in https://discuss.huggingface.co/t/how-to-create-wav2vec2-with-language-model/12703/7 to create the LM, but it seems I am getting no output at all. I tried to replicate all the steps listed for Hindi and failed. I also tried to run the Spanish repo pointed out by @patrickvonplaten, but even after executing the [file](https://huggingface.co/patrickvonplaten/wav2vec2-large-xlsr-53-spanish-with-lm/blob/main/run_ngram_wav2vec2.py), I was not able to run the code. ![Screenshot from 2022-01-19 15-58-57](https://user-images.githubusercontent.com/30959215/150115123-0caac09d-d68a-4e11-aff3-d244f362ceac.png) Can we please start a discussion on the potential issues here? The community week is approaching soon, and language-model decoding could be a game changer.
01-19-2022 10:46:36
01-19-2022 10:46:36
Hey @harveenchadha, Could you try to instead follow the steps of the official blog post: https://huggingface.co/blog/wav2vec2-with-ngram . It's more up-to-date and contains more in-detail information.<|||||>Hi, I created a processor with LM using my own data in Urdu Language. When decoding, I am getting the following error. ValueError: Input logits of size 412, but vocabulary is size 46 Do I also need to train the model with this processor? <|||||>@ypirkani, It seems like the alphabet of the LM and the vocab of the model doesn't match. Could you upload all files (kenLM, alphabet.json, vocab.json, pt_model.bin, ...) to a HF Hub repo so that we can take a look together? :-)<|||||>https://huggingface.co/yapak1994/LM_Test_1<|||||>I Have uploaded some of the relevant filles om the above repository.<|||||>Hey @ypirkani, The file structure doesn't look correct. Could you please try to follow the guide here: https://huggingface.co/blog/wav2vec2-with-ngram#4-combine-an-n-gram-with-wav2vec2 to create a pyctcdecode beam search decoder with your `.arpa` kenLM and your model? BTW, we usually don't upload the model as a zipped file as this way it cannot be loaded with `transformers`. Hope this helps<|||||>Hi Patrick, Thanks a lot for your reply. I am still facing the same issue and I created the decoded as per the guidelines on the link you provided. Let me point out a few things which I am following. I am fine-tuning on my own data of Urdu Language. My training System is as follows: ```python tokenizer = Wav2Vec2CTCTokenizer("./vocab.json", unk_token="[UNK]", pad_token="[PAD]", word_delimiter_token="|", bos_token="<s>", eos_token="</s>", do_lower_case=False ) feature_extractor = Wav2Vec2FeatureExtractor(feature_size=1, sampling_rate=16000, padding_value=0.0, do_normalize=True, return_attention_mask=False) processor = Wav2Vec2Processor(feature_extractor=feature_extractor, tokenizer=tokenizer) processor.save_pretrained("Processor/wav2vec2-base-Urdu") class DataCollatorCTCWithPadding: processor: Wav2Vec2Processor padding: Union[bool, str] = True max_length: Optional[int] = None max_length_labels: Optional[int] = None pad_to_multiple_of: Optional[int] = None pad_to_multiple_of_labels: Optional[int] = None def __call__(self, features: List[Dict[str, Union[List[int], torch.Tensor]]]) -> Dict[str, torch.Tensor]: # split inputs and labels since they have to be of different lenghts and need # different padding methods input_features = [{"input_values": feature["input_values"]} for feature in features] label_features = [{"input_ids": feature["labels"]} for feature in features] batch = self.processor.pad( input_features, padding=self.padding, max_length=self.max_length, pad_to_multiple_of=self.pad_to_multiple_of, return_tensors="pt", ) with self.processor.as_target_processor(): labels_batch = self.processor.pad( label_features, padding=self.padding, max_length=self.max_length_labels, pad_to_multiple_of=self.pad_to_multiple_of_labels, return_tensors="pt", ) # replace padding with -100 to ignore loss correctly labels = labels_batch["input_ids"].masked_fill(labels_batch.attention_mask.ne(1), -100) batch["labels"] = labels return batch data_collator = DataCollatorCTCWithPadding(processor=processor, padding=True) model = Wav2Vec2ForCTC.from_pretrained( "facebook/wav2vec2-base", ctc_loss_reduction="mean", pad_token_id=processor.tokenizer.pad_token_id, vocab_size=len(processor.tokenizer) ) training_args = TrainingArguments( output_dir="model_Urdu/wav2vec2-xlsr-Urdu", group_by_length=True, per_device_train_batch_size=4, 
evaluation_strategy="steps", num_train_epochs=5, fp16=True, gradient_checkpointing=True, save_steps=500, eval_steps=500, logging_steps=500, learning_rate=1e-4, weight_decay=0.005, warmup_steps=1000, save_total_limit=12, ) trainer = Trainer( model=model, data_collator=data_collator, args=training_args, compute_metrics=compute_metrics, train_dataset=data["train"], eval_dataset=data["test"], tokenizer=processor.feature_extractor, ) trainer.train() ``` The training goes smoothly and I am getting a WER of 40%. To create a Language model using KenLM, I am using the same cleaned text which I used in the above process. After creating the LM, I corrected it to include both eos and bos symbols. Now in order to combine LM with my previous processor, I am doing the following: ```py processor = Wav2Vec2Processor.from_pretrained("Processor/wav2vec2-base-Urdu") vocab_dict = processor.tokenizer.get_vocab() sorted_vocab_dict = {k.lower(): v for k, v in sorted(vocab_dict.items(), key=lambda item: item[1])} decoder = build_ctcdecoder( labels=list(sorted_vocab_dict.keys()), kenlm_model_path="Urdu.arpa", ) processor_with_lm = Wav2Vec2ProcessorWithLM( feature_extractor=processor.feature_extractor, tokenizer=processor.tokenizer, decoder=decoder ) ` ``` Now vocab_dict of processor and processor_with_lm is the same i.e. has the same alphabets. Now when I try to decode using Processor_with_lm it is giving me an error: `ValueError: Input logits of size 412, but vocabulary is size 46` The main, issue is that vocab.json on non LM processor has been derived from my the same text which I have used to create LM.<|||||>Hey @ypirkani, Sadly, the model is still in zipped format on the Hub here: https://huggingface.co/yapak1994/LM_Test_1/tree/main . Could you please not zip any files and upload them as standalone files to the repo? E.g. the following command should work: ```py from transformers import AutoModelForCTC, AutoProcessor model = AutoModelForCTC.from_pretrained("yapak1994/LM_Test_1") processor = AutoProcessor.from_pretrained("yapak1994/LM_Test_1") ``` but it currently doesn't<|||||>I am making my arpa file from the same text I am using for training the model and generating the processor. can Be downloaded from here https://huggingface.co/yapak1994/Urdu_ASR_wav2vec2_base_Model/blob/main/5gram_correct_1.arpa for inference code is follows, ``` import soundfile as sf import torch from datasets import load_dataset from transformers import Wav2Vec2ForCTC, Wav2Vec2Processor import os from transformers import AutoProcessor, AutoModelForCTC processor = AutoProcessor.from_pretrained("yapak1994/Urdu_ASR_wav2vec2_base_Model") model = AutoModelForCTC.from_pretrained("yapak1994/Urdu_ASR_wav2vec2_base_Model")` path = "urdu/" #print(os.listdir(path)) for file in os.listdir(path): # load audio audio_input, sample_rate = sf.read(path + file) # pad input values and return pt tensor input_values = processor(audio_input, sampling_rate=sample_rate, return_tensors="pt").input_values # INFERENCE # retrieve logits & take argmax logits = model(input_values).logits predicted_ids = torch.argmax(logits, dim=-1) # transcribe transcription = processor.decode(predicted_ids[0]) # print(transcription) print(transcription + " (" + file + ")" ) ``` This code is working correctly and giving me output. 
Now When I use merge this model with my LM.arpa ``` from transformers import AutoProcessor, AutoModelForCTC processor = AutoProcessor.from_pretrained("yapak1994/Urdu_ASR_wav2vec2_base_Model") model = AutoModelForCTC.from_pretrained("yapak1994/Urdu_ASR_wav2vec2_base_Model")` from transformers import Wav2Vec2ProcessorWithLM decoder = build_ctcdecoder( labels=list(sorted_vocab_dict.keys()), kenlm_model_path="yapak1994/Urdu_ASR_wav2vec2_base_Model/5gram_correct_1.arpa ", ) processor_lm = Wav2Vec2ProcessorWithLM(feature_extractor=feature_extractor, tokenizer=tokenizer , decoder=decoder)` ``` Now when I try to repeat the same code for previous inference, I get error. `ValueError: Input logits of size 586, but vocabulary is size 44` ``` for file in os.listdir(path): # load audio audio_input, sample_rate = sf.read(path + file) # pad input values and return pt tensor input_values = processor(audio_input, sampling_rate=sample_rate, return_tensors="pt").input_values # INFERENCE # retrieve logits & take argmax logits = model(input_values).logits predicted_ids = torch.argmax(logits, dim=-1) # transcribe transcription = processor_lm.decode(predicted_ids[0]) # print(transcription) print(transcription + " (" + file + ")" ) ``` <|||||>Secondly, How to I push the Wav2Vec2ProcessorWithLM with Language_model to HF repo ? <|||||>Hey @ypirkani, I've fixed the language model structure of your repo so that you can now correctly load it with `AutoProcessor`. Doing the following works as expected: ```python import numpy as np from transformers import pipeline asr = pipeline("automatic-speech-recognition", model="yapak1994/Urdu_ASR_wav2vec2_base_Model") asr(np.array(16_000 * [0.0])) ``` <|||||>For the future, please read through this blog post to understand how to correctly integrate the LM with Wav2Vec2: https://huggingface.co/blog/wav2vec2-with-ngram<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread. Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored.
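As a reference for others hitting the `Input logits of size ..., but vocabulary is size ...` error above: it appears when the argmax token ids are passed to the LM-boosted processor, which expects the raw logits instead. A minimal sketch, assuming the repo from this thread loads correctly with `AutoProcessor`:

```python
import numpy as np
import torch
from transformers import AutoModelForCTC, AutoProcessor

model_id = "yapak1994/Urdu_ASR_wav2vec2_base_Model"  # repo discussed above
processor = AutoProcessor.from_pretrained(model_id)  # loads Wav2Vec2ProcessorWithLM
model = AutoModelForCTC.from_pretrained(model_id)

audio = np.zeros(16_000, dtype=np.float32)  # replace with a real 16 kHz waveform
inputs = processor(audio, sampling_rate=16_000, return_tensors="pt")

with torch.no_grad():
    logits = model(inputs.input_values).logits

# Pass the full logits matrix, not torch.argmax(logits, dim=-1):
# pyctcdecode needs the per-frame distribution over the whole vocabulary.
transcription = processor.batch_decode(logits.numpy()).text[0]
print(transcription)
```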
transformers
15,219
closed
Make smart chunking (long files) work on ASR ctc_with_lm
# What does this PR do? <!-- Congratulations! You've made it this far! You're not quite done yet though. Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution. Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change. Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost. --> <!-- Remove if not applicable --> Fixes # (issue) ## Before submitting - [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case). - [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section? - [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link to it if that's the case. - [ ] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/master/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/master/docs#writing-source-documentation). - [ ] Did you write any new necessary tests? ## Who can review? Anyone in the community is free to review the PR once the tests have passed. Feel free to tag members/contributors who may be interested in your PR. <!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @ If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**. Please tag fewer than 3 people. Models: - albert, bert, xlm: @LysandreJik - blenderbot, bart, marian, pegasus, encoderdecoder, t5: @patrickvonplaten, @patil-suraj - longformer, reformer, transfoxl, xlnet: @patrickvonplaten - fsmt: @stas00 - funnel: @sgugger - gpt2: @patrickvonplaten, @LysandreJik - rag: @patrickvonplaten, @lhoestq - tensorflow: @LysandreJik Library: - benchmarks: @patrickvonplaten - deepspeed: @stas00 - ray/raytune: @richardliaw, @amogkam - text generation: @patrickvonplaten - tokenizers: @n1t0, @LysandreJik - trainer: @sgugger - pipelines: @LysandreJik Documentation: @sgugger HF projects: - datasets: [different repo](https://github.com/huggingface/datasets) - rust tokenizers: [different repo](https://github.com/huggingface/tokenizers) Examples: - maintained examples (not research project or legacy): @sgugger, @patil-suraj - research_projects/bert-loses-patience: @JetRunner - research_projects/distillation: @VictorSanh -->
01-19-2022 10:17:00
01-19-2022 10:17:00
Great job merging this PR! the documentation will now be removed from the staging environment.
transformers
15,218
closed
Does MobileBertForPreTraining mean IB-BERT pre-training?
https://github.com/huggingface/transformers/blob/05fa1a7ac17bb7aa07b9e0c1e138ecb31a28bbfe/src/transformers/models/mobilebert/modeling_mobilebert.py#L902 Since MobileBERT (the student model) is distilled from IB-BERT (the teacher model), does MobileBertForPreTraining correspond to pre-training IB-BERT?
01-19-2022 08:34:55
01-19-2022 08:34:55
This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread. Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored.
transformers
15,217
closed
[ONNX] transformer.onnx exporting fails for longformer
## Environment info - `transformers` version: 4.16.0.dev0 - Platform: Linux-5.11.0-1024-gcp-x86_64-with-debian-bullseye-sid - Python version: 3.7.11 - PyTorch version (GPU?): 1.10.1+cu102 (True) - Tensorflow version (GPU?): 2.7.0 (True) - Flax version (CPU?/GPU?/TPU?): not installed (NA) - Jax version: not installed - JaxLib version: not installed - Using GPU in script?: Using transformer.onnx - Using distributed or parallel set-up in script?: No Models: - Longformer: @patrickvonplaten ## Information The same error occurs for - allenai/longformer-base-4096 - valhalla/longformer-base-4096-finetuned-squadv1 - allenai/longformer-large-4096 ## To reproduce Steps to reproduce the behavior: ``` python -m transformers.onnx --model=allenai/longformer-base-4096 ./longformer-base-4096 ``` ### Error Code `RuntimeError: 0INTERNAL ASSERT FAILED at "../torch/csrc/jit/ir/alias_analysis.cpp":584, please report a bug to PyTorch. We don't have an op for aten::constant_pad_nd but it isn't a special case. Argument types: Tensor, int[], bool, `
01-19-2022 08:02:06
01-19-2022 08:02:06
Hi @RaedShabbir thank you for reporting this bug! It seems that this one slipped through our unit tests and it's not currently possible to export Longformer due to a limitation with the ops supported in JIT. We plan to remove Longformer from the list of supported architectures for ONNX until a long-term solution can be found. In the meantime, if you'd like to investigate which part of the Longformer implementation is causing the error, then please feel free to comment here on what you find!<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread. Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored.<|||||>Working on it <|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread. Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored.
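For anyone who wants to localize the failing op, below is a minimal sketch that reproduces the tracing failure outside the `transformers.onnx` CLI; the opset version, output name and dynamic axes are arbitrary choices here.

```python
import torch
from transformers import AutoModel, AutoTokenizer

model_id = "allenai/longformer-base-4096"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModel.from_pretrained(model_id)
model.config.return_dict = False  # export tuples instead of ModelOutput objects
model.eval()

dummy = tokenizer("Hello world", return_tensors="pt")

# In affected versions this raises the same `aten::constant_pad_nd`
# assertion reported above for the CLI export.
torch.onnx.export(
    model,
    (dummy["input_ids"], dummy["attention_mask"]),
    "longformer-base-4096.onnx",
    input_names=["input_ids", "attention_mask"],
    output_names=["last_hidden_state"],
    dynamic_axes={
        "input_ids": {0: "batch", 1: "sequence"},
        "attention_mask": {0: "batch", 1: "sequence"},
    },
    opset_version=12,
)
```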
transformers
15,216
closed
DebertaForMaskedLM cannot load the parameters in the MLM head from microsoft/deberta-base
my env: ``` transformers 4.15.0 ``` ``` from transformers import AutoModelForMaskedLM model = AutoModelForMaskedLM.from_pretrained("microsoft/deberta-base") ``` The warning goes like: ``` Some weights of the model checkpoint at microsoft/deberta-base were not used when initializing DebertaForMaskedLM: ['deberta.embeddings.position_embeddings.weight', 'lm_predictions.lm_head.dense.bias', 'lm_predictions.lm_head.dense.weight', 'lm_predictions.lm_head.bias', 'lm_predictions.lm_head.LayerNorm.weight', 'lm_predictions.lm_head.LayerNorm.bias'] - This IS expected if you are initializing DebertaForMaskedLM from the checkpoint of a model trained on another task or with another architecture (e.g. initializing a BertForSequenceClassification model from a BertForPreTraining model). - This IS NOT expected if you are initializing DebertaForMaskedLM from the checkpoint of a model that you expect to be exactly identical (initializing a BertForSequenceClassification model from a BertForSequenceClassification model). Some weights of DebertaForMaskedLM were not initialized from the model checkpoint at /home/tfangaa/Downloads/ptlm/deberta-base/ and are newly initialized: ['cls.predictions.bias', 'cls.predictions.transform.dense.weight', 'cls.predictions.transform.LayerNorm.weight', 'cls.predictions.transform.dense.bias', 'cls.predictions.decoder.weight', 'cls.predictions.transform.LayerNorm.bias'] You should probably TRAIN this model on a down-stream task to be able to use it for predictions and inference. ``` It seems that the checkpoints provided by `microsoft/deberta-base` doesn't possess the weights needed for the MLM head, so that DebertaForMaskedLM cannot be directly used for masked token prediction. Is this a bug of the DebertaForMaskedLM class or the checkpoints provided by Microsoft? Thanks!
01-19-2022 07:54:15
01-19-2022 07:54:15
Pinging @BigBird01 on the issue<|||||>Does someone know the answer?<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread. Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored.<|||||>I see the same problem, and this is true not only for base but also for base-v3, so I would say it is an error in the code.
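For anyone debugging this, a quick sketch that surfaces the same information as the warning in programmatic form, so you can see exactly which checkpoint weights go unused and which head weights get a fresh random init:

```python
from transformers import AutoModelForMaskedLM

model, loading_info = AutoModelForMaskedLM.from_pretrained(
    "microsoft/deberta-base", output_loading_info=True
)

# Weights present in the checkpoint but not used by DebertaForMaskedLM
print(loading_info["unexpected_keys"])
# Weights the MLM head needs but that were randomly initialized
print(loading_info["missing_keys"])
```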
transformers
15,215
closed
ALL YOUR BASE ARE BELONG TO 504.
<img width="1223" alt="image" src="https://user-images.githubusercontent.com/30382262/150070377-edb319a5-7c28-4478-ba8e-123aebc46499.png">
01-19-2022 05:36:34
01-19-2022 05:36:34
![image](https://user-images.githubusercontent.com/48780754/150070713-002d6e62-f87f-4d8e-b380-7963413953b8.png) <|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread. Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored.
transformers
15,214
closed
Add support for BERT SequenceClassification conversion to ONNX
# 🚀 Feature request Currently, the `SequenceClassification` pipeline is not supported for converting BERT-based models to ONNX. Support for this pipeline would be great. ## Motivation As part of a recent internship, I needed to deploy a `SequenceClassification` `SciBERT` model by first converting it to ONNX and quantizing it. I wrote a script that did this, and so thought I should contribute this to the repo. ## Your contribution I'd be happy to make a PR, but I haven't contributed before so I thought I'd make an issue first to check that there were no problems I hadn't seen before diving in. For example: * Is this something that is already being worked on? (I couldn't see an open PR or issue related to it, but maybe there's other work I don't know about) * Are there other factors that would complicate this work? (It was pretty simple to quantize the individual models I was working with, but I can imagine there being more difficulties in implementing it generally)
01-18-2022 23:34:23
01-18-2022 23:34:23
I'm pretty sure `xxxForSequenceClassification` models are supported. You can read the updated docs [here](https://huggingface.co/docs/transformers/master/en/serialization). For instance, you can convert one as follows: ``` python -m transformers.onnx --model=distilbert-base-uncased-finetuned-sst-2-english \ --feature=sequence-classification onnx/ ```<|||||>Ah you're right, I must have somehow had an older version, really sorry about that! Next time I'll try to make a first issue that is actually an issue.
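Since the original motivation also mentioned quantizing the exported model, here is a rough sketch of that step, assuming `onnxruntime` is installed and the CLI export above wrote `onnx/model.onnx`:

```python
from onnxruntime.quantization import QuantType, quantize_dynamic

# Dynamic (weight-only) int8 quantization of the exported
# sequence-classification model.
quantize_dynamic(
    model_input="onnx/model.onnx",
    model_output="onnx/model-quantized.onnx",
    weight_type=QuantType.QInt8,
)
```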
transformers
15,213
closed
[WIP] [doc] performance/scalability revamp
@lvwerra and I are working on a massive performance/scalability docs revamp: So the rough plan is to make custom plans for each of the combinations `[inference|training] * [1 gpu|many gpus|cpu]` so that it's very easy for the user to follow the instructions that are specific to their needs. So the proposed doc layout is: * performance.mdx (main entry point) * perf_infer.mdx - perf_infer_cpu.mdx - perf_infer_gpu_many.mdx - perf_infer_gpu_one.mdx * perf_train.mdx - perf_train_gpu_many.mdx - perf_train_gpu_one.mdx * scalability.mdx (rename from parallelism.mdx) (XXX: to do) See the PR's changes for a rough layout of the content. One big question is this: At the moment everything is pytorch-centric, as we don't have any info on tf/flax. Down the road we will either inject tf/flax-specific instructions into the current docs, or perhaps it'd be better to have dedicated docs for pt/tf/flax. It'd help a lot to decide ahead of time to avoid document renaming and potentially breaking links. If we plan to have these PT-specific perhaps let's embed `_pt` in the filenames? @lvwerra
01-18-2022 22:38:28
01-18-2022 22:38:28
Hi @stas00 Thanks for shaping the structure - this is looking great! I have been thinking about this a bit more and have two main comments: - Not sure if you copy-pasted the subsections or if they are supposed to be like that. There is a lot of redundancy and I think if we e.g. explain mixed precision schemes in the single GPU section we don't need to explain it again in the multi-GPU section. And I would suggest to the reader that one should first look at the single GPU section as the methods carry over to the multi-GPU case. - Performance vs. scalability: the only parallelism strategy that is currently natively supported with the `Trainer` and `accelerate` is data parallelism, with the exception of ZeRO with DeepSpeed. What do you think about outsourcing the theoretical parallelism parts to a blog post and only keeping the aspects in the docs that can be used natively in `transformers`? I think we could then also merge `perf_train_gpu_many.mdx` and `scalability.mdx` into one section where we highlight for each technique whether it helps performance or scalability. In terms of implementation, what do you think about tackling the sections with the most need first? I'd expect single GPU training and CPU inference to be the most widely used settings. Followed by multi-GPU training and GPU inference (AFAICT for many companies GPUs in prod are still out of reach). Finally, multi-GPU inference which is probably only needed for a few companies with huge models and fast response requirements, right? 
Need your input: - please review `docs/source/perf_train_gpu_one.mdx` and let me know if this feels good or whether a different approach should be taken. As you can see since there is going to be a lot of duplication I'm offloading all the why's to `performance.mdx` and only leaving the how-to information specific to each situation/scenario. I'm of course open to changes in course, but let's sort out one doc and then replicate the same structure to the other 4 docs. - I don't know anything about `accelerate` so if @sgugger / @lvwerra you could fill in the gaps that would be great. (marked with `XXX: Sylvain/Leandro`) I'd say **for now please ignore any spelling or grammar or any minor details as it's likely that many of these sections will be re-written completely so please don't waste your time on premature editing**. We will do it at the very end when y'all happy with the first doc. Thank you! p.s. Also how do we make the doc builder add its link as this PR was created before that feature was added? I could start a new PR if it's easier since there was no commentary to preserve yet. I think the ability to read the rendered doc will help a lot to help us produce better documentation. <|||||>Hi @stas00 thanks for starting to work on this! First, I am not sure about the doc builder - maybe it is indeed easiest to just create a new PR. A few high level comments on the current structure: - Since `performance.mdx` is the main entry point I think it's main purpose should just be to give an overview of all the subsections and maybe a guide how to read them. - My understanding was that we would essentially move the guide added in #15119 from `performance.mdx` to the `perf_train_gpu_one.mdx` section and properly merge it with the material that was already there and is currently appended at the end of the document. - Regarding the separation of why and how: I am actually not in favour of that because I think it is easier for a user if both are together. Explain a concept and directly show how it is done, otherwise there will be a lot of switching between the two. What do you think? If you'd like I can have a stab at it.<|||||>> Hi @stas00 thanks for starting to work on this! > > First, I am not sure about the doc builder - maybe it is indeed easiest to just create a new PR. > > A few high level comments on the current structure: > > * Since `performance.mdx` is the main entry point I think it's main purpose should just be to give an overview of all the subsections and maybe a guide how to read them. Indeed, we can move the indepth reference to another file and refer to it instead. For now it's just easier to xref to it. > * My understanding was that we would essentially move the guide added in [add model scaling section #15119](https://github.com/huggingface/transformers/pull/15119) from `performance.mdx` to the `perf_train_gpu_one.mdx` section and properly merge it with the material that was already there and is currently appended at the end of the document. Indeed, something like that. I was just trying to use a few sections to lay out a possible structure before we fill the gaps in. > * Regarding the separation of why and how: I am actually not in favour of that because I think it is easier for a user if both are together. Explain a concept and directly show how it is done, otherwise there will be a lot of switching between the two. 
Which means that many sections will be duplicated at least 5 times and in some cases more than that - as you can see I have 2 identical sub-sections for several entries since those impact both speed and memory, but the explanations are slightly different. > What do you think? If you'd like I can have a stab at it. Sure, please feel free to shift things around and propose a different approach. Let me know if I prefer that I open a new PR first, but then I will need to integrate Sylvain's suggestions and I'm a bit too busy with BigScience at the moment. so it's your call. I can of course integrate them in the new PR as well, I won't forget.<|||||>Awesome, thanks for clarifying - it is sometimes hard to read the intentions :) If you could open a new PR that would be great and I can work on it a bit next week and also integrate Sylvain's comments (or make notes). Thanks!<|||||>The PR has been moved to https://github.com/huggingface/transformers/pull/15723 Will address all the suggestions so far in the new PR.
transformers
15,212
closed
ValueError: transformers.models.auto.__spec__ is None. causing import errors
## Environment info <!-- You can run the command `transformers-cli env` and copy-and-paste its output below. Don't forget to fill out the missing fields in that output! --> - `transformers` version: 4.15.0 - Platform: Colaboratory - Python version: 3.7.12 ### Who can help @LysandreJik <!-- Your issue will be replied to more quickly if you can figure out the right person to tag with @ If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**. Please tag fewer than 3 people. Models: - ALBERT, BERT, XLM, DeBERTa, DeBERTa-v2, ELECTRA, MobileBert, SqueezeBert: @LysandreJik - T5, BART, Marian, Pegasus, EncoderDecoder: @patrickvonplaten - Blenderbot, MBART: @patil-suraj - Longformer, Reformer, TransfoXL, XLNet, FNet, BigBird: @patrickvonplaten - FSMT: @stas00 - Funnel: @sgugger - GPT-2, GPT: @patrickvonplaten, @LysandreJik - RAG, DPR: @patrickvonplaten, @lhoestq - TensorFlow: @Rocketknight1 - JAX/Flax: @patil-suraj - TAPAS, LayoutLM, LayoutLMv2, LUKE, ViT, BEiT, DEiT, DETR, CANINE: @NielsRogge - GPT-Neo, GPT-J, CLIP: @patil-suraj - Wav2Vec2, HuBERT, SpeechEncoderDecoder, UniSpeech, UniSpeechSAT, SEW, SEW-D, Speech2Text: @patrickvonplaten, @anton-l If the model isn't in the list, ping @LysandreJik who will redirect you to the correct contributor. Library: - Benchmarks: @patrickvonplaten - Deepspeed: @stas00 - Ray/raytune: @richardliaw, @amogkam - Text generation: @patrickvonplaten @narsil - Tokenizers: @SaulLu - Trainer: @sgugger - Pipelines: @Narsil - Speech: @patrickvonplaten, @anton-l - Vision: @NielsRogge, @sgugger Documentation: @sgugger Model hub: - for issues with a model, report at https://discuss.huggingface.co/ and tag the model's creator. HF projects: - datasets: [different repo](https://github.com/huggingface/datasets) - rust tokenizers: [different repo](https://github.com/huggingface/tokenizers) Examples: - maintained examples (not research project or legacy): @sgugger, @patil-suraj For research projetcs, please ping the contributor directly. For example, on the following projects: - research_projects/bert-loses-patience: @JetRunner - research_projects/distillation: @VictorSanh --> ## Information Hello, this code was working last week but today I am getting a 'ValueError: transformers.models.auto.__spec__ is None' error which is causing errors when trying to import other Libraries. I noted a similar issue #12904 but this has been resolved and closed last year. 
Model I am using (Bert, XLNet ...): BERT The problem arises when using: Transformers My code: ```python # Import all libraries import pandas as pd import numpy as np import re # Huggingface transformers import transformers from transformers import BertModel,BertTokenizer,AdamW, get_linear_schedule_with_warmup print(transformers.__version__) print(transformers.models.auto.__spec__) import torch from torch import nn from torch.utils.data import DataLoader,Dataset,RandomSampler, SequentialSampler import pytorch_lightning as pl from pytorch_lightning.callbacks import ModelCheckpoint from sklearn.model_selection import train_test_split from sklearn.metrics import accuracy_score, precision_recall_fscore_support import seaborn as sns from pylab import rcParams import matplotlib.pyplot as plt from matplotlib import rc %matplotlib inline RANDOM_SEED = 42 np.random.seed(RANDOM_SEED) torch.manual_seed(RANDOM_SEED) import os os.environ["CUDA_VISIBLE_DEVICES"]="0" device = torch.device("cpu") ``` The Output: ```python 4.15.0 None --------------------------------------------------------------------------- ValueError Traceback (most recent call last) <ipython-input-10-3e575e7ee253> in <module>() 12 from torch import nn 13 from torch.utils.data import DataLoader,Dataset,RandomSampler, SequentialSampler ---> 14 import pytorch_lightning as pl 15 from pytorch_lightning.callbacks import ModelCheckpoint 16 10 frames /usr/local/lib/python3.7/dist-packages/pytorch_lightning/__init__.py in <module>() 18 _PROJECT_ROOT = os.path.dirname(_PACKAGE_ROOT) 19 ---> 20 from pytorch_lightning.callbacks import Callback # noqa: E402 21 from pytorch_lightning.core import LightningDataModule, LightningModule # noqa: E402 22 from pytorch_lightning.trainer import Trainer # noqa: E402 /usr/local/lib/python3.7/dist-packages/pytorch_lightning/callbacks/__init__.py in <module>() 12 # See the License for the specific language governing permissions and 13 # limitations under the License. ---> 14 from pytorch_lightning.callbacks.base import Callback 15 from pytorch_lightning.callbacks.device_stats_monitor import DeviceStatsMonitor 16 from pytorch_lightning.callbacks.early_stopping import EarlyStopping /usr/local/lib/python3.7/dist-packages/pytorch_lightning/callbacks/base.py in <module>() 24 25 import pytorch_lightning as pl ---> 26 from pytorch_lightning.utilities.types import STEP_OUTPUT 27 28 /usr/local/lib/python3.7/dist-packages/pytorch_lightning/utilities/types.py in <module>() 23 from torch.optim.lr_scheduler import _LRScheduler, ReduceLROnPlateau 24 from torch.utils.data import DataLoader ---> 25 from torchmetrics import Metric 26 27 _NUMBER = Union[int, float] /usr/local/lib/python3.7/dist-packages/torchmetrics/__init__.py in <module>() 12 _PROJECT_ROOT = os.path.dirname(_PACKAGE_ROOT) 13 ---> 14 from torchmetrics import functional # noqa: E402 15 from torchmetrics.aggregation import CatMetric, MaxMetric, MeanMetric, MinMetric, SumMetric # noqa: E402 16 from torchmetrics.audio import ( # noqa: E402 /usr/local/lib/python3.7/dist-packages/torchmetrics/functional/__init__.py in <module>() 12 # See the License for the specific language governing permissions and 13 # limitations under the License. 
---> 14 from torchmetrics.functional.audio.pit import permutation_invariant_training, pit, pit_permutate 15 from torchmetrics.functional.audio.sdr import scale_invariant_signal_distortion_ratio, sdr, signal_distortion_ratio 16 from torchmetrics.functional.audio.si_sdr import si_sdr /usr/local/lib/python3.7/dist-packages/torchmetrics/functional/audio/__init__.py in <module>() 12 # See the License for the specific language governing permissions and 13 # limitations under the License. ---> 14 from torchmetrics.functional.audio.pit import permutation_invariant_training, pit, pit_permutate # noqa: F401 15 from torchmetrics.functional.audio.sdr import ( # noqa: F401 16 scale_invariant_signal_distortion_ratio, /usr/local/lib/python3.7/dist-packages/torchmetrics/functional/audio/pit.py in <module>() 22 from torchmetrics.utilities import _future_warning 23 from torchmetrics.utilities.checks import _check_same_shape ---> 24 from torchmetrics.utilities.imports import _SCIPY_AVAILABLE 25 26 # _ps_dict: cache of permutations /usr/local/lib/python3.7/dist-packages/torchmetrics/utilities/imports.py in <module>() 90 _TQDM_AVAILABLE: bool = _module_available("tqdm") 91 _TRANSFORMERS_AVAILABLE: bool = _module_available("transformers") ---> 92 _TRANSFORMERS_AUTO_AVAILABLE = _module_available("transformers.models.auto") 93 _PESQ_AVAILABLE: bool = _module_available("pesq") 94 _SACREBLEU_AVAILABLE: bool = _module_available("sacrebleu") /usr/local/lib/python3.7/dist-packages/torchmetrics/utilities/imports.py in _module_available(module_path) 34 """ 35 try: ---> 36 return find_spec(module_path) is not None 37 except AttributeError: 38 # Python 3.6 /usr/lib/python3.7/importlib/util.py in find_spec(name, package) 112 else: 113 if spec is None: --> 114 raise ValueError('{}.__spec__ is None'.format(name)) 115 return spec 116 ValueError: transformers.models.auto.__spec__ is None ``` ## To reproduce Steps to reproduce the behavior: ```python import transformers print(transformers.__version__) print(transformers.models.auto.__spec__) 4.15.0 None ``` <!-- If you have code snippets, error messages, stack traces please provide them here as well. Important! Use code tags to correctly format your code. See https://help.github.com/en/github/writing-on-github/creating-and-highlighting-code-blocks#syntax-highlighting Do not use screenshots, as they are hard to read and (more importantly) don't allow others to copy-and-paste your code.--> ## Expected behavior This code ran perfectly and all Libraries were imported last week. I made no changes to this code since but it produced the above error today.
01-18-2022 18:28:46
01-18-2022 18:28:46
Having the same issue too! Exact same reproducible code + transformers version.<|||||>Same issue with `transformers 4.10.0, 4.10.2` and `pytorch-lightning 1.3.5`<|||||>I have the same issue :(<|||||>Installing `torchmetrics==0.6.2` helped in my case.<|||||>Seems like [this PR](https://github.com/huggingface/transformers/pull/13321) needs to be copied for [this line of code](https://github.com/huggingface/transformers/blob/master/src/transformers/models/auto/__init__.py#L251).<|||||>> Installing `torchmetrics==0.6.2` helped in my case. On G.Colab I had the same issue with `transformers==4.15.0` and `pytorch_lightning==1.5.7` installed. Solved by installing `torchmetrics==0.6.2` . Thank you
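For completeness, the check that `torchmetrics` performs can be reproduced directly, which should make it easy to verify whether a given `transformers`/`torchmetrics` combination is affected (the explicit submodule import below mimics what happens before `pytorch_lightning` is imported in the original script):

```python
import importlib.util

import transformers.models.auto  # noqa: F401  (the submodule torchmetrics probes)

try:
    spec = importlib.util.find_spec("transformers.models.auto")
    print("Not affected; spec found:", spec is not None)
except ValueError as err:
    # Affected: the same error torchmetrics hits in its _module_available helper.
    print("Affected:", err)
```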
transformers
15,211
closed
Add FastTokenizer to REALM
# What does this PR do? This is a follow-up PR of https://github.com/huggingface/transformers/pull/13292#discussion_r786162541 and https://github.com/huggingface/transformers/pull/13292#discussion_r780749863. This PR removes the `BertTokenizer` abstraction and adds a `FastTokenizer` to REALM. <!-- Remove if not applicable --> Fixes # (issue) ## Before submitting - [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case). - [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section? - [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link to it if that's the case. - [ ] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/master/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/master/docs#writing-source-documentation). - [ ] Did you write any new necessary tests? ## Who can review? @patrickvonplaten @patil-suraj @sgugger Anyone in the community is free to review the PR once the tests have passed. Feel free to tag members/contributors who may be interested in your PR. <!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @ If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**. Please tag fewer than 3 people. Models: - albert, bert, xlm: @LysandreJik - blenderbot, bart, marian, pegasus, encoderdecoder, t5: @patrickvonplaten, @patil-suraj - longformer, reformer, transfoxl, xlnet: @patrickvonplaten - fsmt: @stas00 - funnel: @sgugger - gpt2: @patrickvonplaten, @LysandreJik - rag: @patrickvonplaten, @lhoestq - tensorflow: @LysandreJik Library: - benchmarks: @patrickvonplaten - deepspeed: @stas00 - ray/raytune: @richardliaw, @amogkam - text generation: @patrickvonplaten - tokenizers: @n1t0, @LysandreJik - trainer: @sgugger - pipelines: @LysandreJik Documentation: @sgugger HF projects: - datasets: [different repo](https://github.com/huggingface/datasets) - rust tokenizers: [different repo](https://github.com/huggingface/tokenizers) Examples: - maintained examples (not research project or legacy): @sgugger, @patil-suraj - research_projects/bert-loses-patience: @JetRunner - research_projects/distillation: @VictorSanh -->
01-18-2022 18:00:40
01-18-2022 18:00:40
transformers
15,210
closed
Build dev documentation
null
01-18-2022 16:39:35
01-18-2022 16:39:35
Thank you for your PR. The documentation will now be removed from the staging environment - feel free to reopen this PR to recreate it.<|||||>Thank you for your PR! The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_15110). All of your documentation changes will be reflected on that endpoint.<|||||>Thank you for your PR. The documentation will now be removed from the staging environment - feel free to reopen this PR to recreate it.<|||||>Thank you for your PR! The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_15110). All of your documentation changes will be reflected on that endpoint.<|||||>Thank you for your PR. The documentation will now be removed from the staging environment - feel free to reopen this PR to recreate it.<|||||>Thank you for your PR! The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_$PR_NUMBER). All of your documentation changes will be reflected on that endpoint.<|||||>Thank you for your PR. The documentation will now be removed from the staging environment - feel free to reopen this PR to recreate it.<|||||>Thank you for your PR! The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_$PR_NUMBER). All of your documentation changes will be reflected on that endpoint.<|||||>Great job merging this PR! the documentation will now be removed from the staging environment.
transformers
15,209
closed
Build dev documentation
null
01-18-2022 15:59:11
01-18-2022 15:59:11
transformers
15,208
closed
Build dev documentation
Builds the dev documentation
01-18-2022 15:47:57
01-18-2022 15:47:57
transformers
15,207
closed
Rename compute_loss in TF models
This PR renames the `compute_loss` method on our models to `hf_compute_loss`, as Keras has just added a `compute_loss` method to its base `Model` class that causes lots of conflicts. Draft PR for now, since this will probably break something!
01-18-2022 15:17:32
01-18-2022 15:17:32
transformers
15,206
closed
Different generations with the NLP model MarianMT of HuggingFace
I'm trying to use the NLP model " MarianMT" from HuggingFace, and I want to use my own implementation of decoding (ex : Greedy Decoding). I compared my implementatin to Hugging Face implementation. (model.generate()) I implemented it; I checked the code 100 times and I don't know why I have different text generation. Can you please help me on that ? My implemntation : ``` # Hugging face from transformers import MarianMTModel, MarianTokenizer import torch import torch.nn.functional as F import numpy as np device = torch.device("cuda" if torch.cuda.is_available() else "cpu") tokenizer = MarianTokenizer.from_pretrained('Helsinki-NLP/opus-mt-en-de') model = MarianMTModel.from_pretrained('Helsinki-NLP/opus-mt-en-de').to(device) vocab_size = tokenizer.vocab_size max_t = 15 done = [False ] # To indicate if the generation is finished (we reached EOS token) # My implementation of Greedy Search source = [ 'While that might be a somewhat morbid thought, I think it has some really profound implications that are worth exploring.'] encoded = tokenizer.prepare_seq2seq_batch(source, return_tensors='pt').to(device) generatd_tokens = torch.tensor([[58100] ]).to(device) for t in range(1, max_t-1) : model_output = model(**{'input_ids':encoded["input_ids"], "attention_mask" :encoded['attention_mask'],"decoder_input_ids":generatd_tokens})['logits'].detach()[:,-1,:].reshape(1, vocab_size ) distrib = F.softmax(model_output, dim =-1).reshape(1, vocab_size ) distrib = torch.sort(distrib, dim =-1,descending=True ) distrib_idx = distrib.indices distrib_values = distrib.values next_token = torch.tensor([[distrib_idx[0][0].item()] ]).to(device) next_token = torch.tensor(1 - np.array(done)*1).view(-1,1).to(device) * next_token generatd_tokens = torch.cat([generatd_tokens, next_token ], dim =-1) if (done[0] == False and next_token[0][0].item() == model.config.eos_token_id) or (t == max_t - 1) : done[0] = True # If all sentences are generated, we exit the loop if all(done) : break gen_sentences = tokenizer.batch_decode(generatd_tokens, skip_special_tokens=True) print(gen_sentences) ``` *Output : ['Obwohl das ein etwas morbider Gedanke sein könnte, denke']* # Hugging Face Implementation ``` translated_greedy = model.generate(input_ids = encoded['input_ids'].to(device) ,max_length=max_t+2,do_sample = False) translated_greedy_sen = tokenizer.batch_decode(translated_greedy, skip_special_tokens=True) print(translated_greedy_sen) ``` *Output : ['Das mag zwar ein etwas morbider Gedanke sein, aber ich denke']* Thanks in advance
01-18-2022 15:17:05
01-18-2022 15:17:05
For the record, 58100 is the pad_id of the model.<|||||>Hello, thanks for opening an issue! We try to keep the GitHub issues for bugs/feature requests. Could you ask your question on the [forum](https://discuss.huggingface.co) instead? Thanks!<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread. Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored.
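One thing worth ruling out when comparing a hand-written greedy loop against `model.generate()`: `generate()` falls back to generation defaults stored in the checkpoint's config (for example `num_beams` or `bad_words_ids`), so `do_sample=False` alone does not guarantee greedy search. Below is a sketch that pins the relevant settings explicitly, reusing the variables from the snippet above; whether this particular checkpoint actually overrides these defaults is an assumption to verify against its config.json.

```python
# Force pure greedy decoding regardless of the config defaults.
translated = model.generate(
    input_ids=encoded["input_ids"].to(device),
    attention_mask=encoded["attention_mask"].to(device),
    max_length=max_t + 2,
    do_sample=False,
    num_beams=1,              # disable beam search
    no_repeat_ngram_size=0,   # disable n-gram blocking
    length_penalty=1.0,
)
print(tokenizer.batch_decode(translated, skip_special_tokens=True))
```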
transformers
15,205
closed
Remove dependency to quiet Dependabot
# What does this PR do? We are spammed by Dependabot alerts on this dependency, and there doesn't seem to be a version with a fix, so I'm suggesting removing it entirely.
01-18-2022 14:39:05
01-18-2022 14:39:05
transformers
15,204
closed
Ignore empty subfolders when identifying submodules
# What does this PR do? As point out by @NielsRogge, when someone has an empty subfolder with just pycache in them (after checking out a branch for instance), the new check in `check_inits` will fail. This PR addresses that.
01-18-2022 14:35:23
01-18-2022 14:35:23
Thanks for fixing!
transformers
15,203
closed
How can I add a new token?
null
01-18-2022 14:25:58
01-18-2022 14:25:58
Hi, Can you please ask this question on our [forum](https://discuss.huggingface.co/) rather than here? We'd like to keep Github issues for bugs/feature requests. Thanks!
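For reference, the usual recipe looks like the sketch below; the checkpoint and token strings are just examples:

```python
from transformers import AutoModelForMaskedLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
model = AutoModelForMaskedLM.from_pretrained("bert-base-uncased")

# Add the new tokens to the tokenizer's vocabulary.
num_added = tokenizer.add_tokens(["new_token1", "new_token2"])
print(f"Added {num_added} tokens")

# Grow the embedding matrix to match the new vocabulary size.
model.resize_token_embeddings(len(tokenizer))
```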
transformers
15,202
closed
Copies and docstring styling
# What does this PR do? This PR makes the `check_copies` script a little bit more resilient by making sure it applies the styling on the docstrings when making/checking the copies. This will fix issues like the one encountered in #15079
01-18-2022 14:11:29
01-18-2022 14:11:29
transformers
15,201
closed
[MBartTokenizer] remove dep on xlm-roberta tokenizer
# What does this PR do? This PR makes `MBartTokenizer` and `MBartTokenizerFast` standalone and removes dependency on `XLMRobertaTokenizer`
01-18-2022 13:43:55
01-18-2022 13:43:55
transformers
15,200
closed
[ASR pipeline] correct with lm pipeline
# What does this PR do? <!-- Congratulations! You've made it this far! You're not quite done yet though. Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution. Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change. Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost. --> <!-- Remove if not applicable --> Fixes # (issue) ## Before submitting - [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case). - [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section? - [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link to it if that's the case. - [ ] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/master/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/master/docs#writing-source-documentation). - [ ] Did you write any new necessary tests? ## Who can review? Anyone in the community is free to review the PR once the tests have passed. Feel free to tag members/contributors who may be interested in your PR. <!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @ If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**. Please tag fewer than 3 people. Models: - albert, bert, xlm: @LysandreJik - blenderbot, bart, marian, pegasus, encoderdecoder, t5: @patrickvonplaten, @patil-suraj - longformer, reformer, transfoxl, xlnet: @patrickvonplaten - fsmt: @stas00 - funnel: @sgugger - gpt2: @patrickvonplaten, @LysandreJik - rag: @patrickvonplaten, @lhoestq - tensorflow: @LysandreJik Library: - benchmarks: @patrickvonplaten - deepspeed: @stas00 - ray/raytune: @richardliaw, @amogkam - text generation: @patrickvonplaten - tokenizers: @n1t0, @LysandreJik - trainer: @sgugger - pipelines: @LysandreJik Documentation: @sgugger HF projects: - datasets: [different repo](https://github.com/huggingface/datasets) - rust tokenizers: [different repo](https://github.com/huggingface/tokenizers) Examples: - maintained examples (not research project or legacy): @sgugger, @patil-suraj - research_projects/bert-loses-patience: @JetRunner - research_projects/distillation: @VictorSanh -->
01-18-2022 13:37:28
01-18-2022 13:37:28
transformers
15,199
closed
GPT2: masked_bias should be sufficiently small instead of -1e4
See also #9594 My package build `transformers==4.12.3`. Ideally for a causal language model, we should set it as -inf. Current setting -1e4 is not **small** enough. In my case, the model would learn to lower the attention weight of left context such that it could get the information of right context, which is illegal behavior. Codes at https://github.com/huggingface/transformers/blob/531336bbfd2a97cf800f610d971d6ec0a1578752/src/transformers/models/gpt2/modeling_gpt2.py#L205-L206 In my experiment, I print the `attn_weights` of the first layer, there're many weights smaller than -1e4, and the attention score is no longer lower triangle, a.k.a., not causal. ``` tensor([[[[-16012.8271, -10000.0000, -10000.0000, -10000.0000, -10000.0000, -10000.0000, -10000.0000], [ 10511.5508, 5376.9136, -10000.0000, -10000.0000, -10000.0000, -10000.0000, -10000.0000], [-24461.1621, -13598.0195, -20461.9980, -10000.0000, -10000.0000, -10000.0000, -10000.0000], [-16113.5645, -10553.2480, -13133.3857, 147.0204, -10000.0000, -10000.0000, -10000.0000], [-20674.4199, -2806.1458, -15805.4199, -4623.0571, 12110.2920, -10000.0000, -10000.0000], [-24806.1738, -14453.9990, -20414.9160, -8439.6494, -2858.4604, -17795.5215, -10000.0000], [ -5393.4922, -10265.1875, -6821.3672, 3551.4062, -20622.5879, -3372.0925, 10511.3047]], ```
01-18-2022 12:55:48
01-18-2022 12:55:48
Hey @maxwellzh, this is a similar conversation to what was said here: https://github.com/huggingface/transformers/issues/14859 Thanks for raising this issue!<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread. Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored.
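For context, later versions of the library moved towards masking with the smallest finite value of the attention dtype instead of a hard-coded -1e4. A standalone illustration of the idea (this is a sketch, not a patch against modeling_gpt2.py):

```python
import torch

# Toy attention scores in half precision, where -1e4 is not "small enough".
attn_weights = torch.randn(1, 1, 7, 7).half()
causal_mask = torch.tril(torch.ones(7, 7, dtype=torch.bool))

# Smallest finite value representable in the current dtype
# (-65504.0 for float16) instead of a fixed -1e4.
mask_value = torch.finfo(attn_weights.dtype).min
attn_weights = attn_weights.masked_fill(~causal_mask, mask_value)

probs = torch.softmax(attn_weights.float(), dim=-1)  # upcast for a stable softmax
print(probs[0, 0])  # lower-triangular pattern: no attention mass on future positions
```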
transformers
15,198
closed
[examples/Flax] add a section about GPUs
# What does this PR do? This PR adds a section in the flax readme on how to run the example scripts on GPU and links JAX's GPU installation guide.
01-18-2022 11:23:43
01-18-2022 11:23:43
transformers
15,197
closed
Question about fine-tuning BeitForSemanticSegmentation model
The documentation says that the logits shape will be (batch_size, num_labels, height/4, width/4). I assume the logits are the output masks of the model (since I'm doing segmentation). How do I convert this shape (height/4, width/4) back to the original image's shape, i.e. the shape before the image was resized to (height, width)? -> # logits are of shape (batch_size, num_labels, height/4, width/4) I realized that the input image is resized to (height, width) by the BeitFeatureExtractor object, where height and width are the constant values defined in the BeitFeatureExtractor's config. This means the output shapes are not the original image's shape but rather resized-shape/4.
01-18-2022 11:09:02
01-18-2022 11:09:02
Hi, You can take a look at this notebook: https://github.com/NielsRogge/Transformers-Tutorials/blob/master/SegFormer/Fine_tune_SegFormer_on_custom_dataset.ipynb It shows how to fine-tune SegFormer on a custom dataset (BeiT is equivalent), and it also outputs logits of shape (batch_size, num_labels, height/4, width/4). One interpolates these to the original size of the image. Note that we are in the process of determining generic outputs for semantic segmentation models, so it might be that in the future, the logits will automatically have the same size as the original `pixel_values`. <|||||>Thank you.
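For reference, a minimal sketch of the upsampling step described above, following the linked notebook; the checkpoint name and image path are placeholders:

```python
import torch
from PIL import Image
from transformers import BeitFeatureExtractor, BeitForSemanticSegmentation

checkpoint = "microsoft/beit-base-finetuned-ade-640-640"
feature_extractor = BeitFeatureExtractor.from_pretrained(checkpoint)
model = BeitForSemanticSegmentation.from_pretrained(checkpoint)

image = Image.open("example.jpg")  # placeholder path
inputs = feature_extractor(images=image, return_tensors="pt")

with torch.no_grad():
    logits = model(**inputs).logits  # (batch, num_labels, resized_h/4, resized_w/4)

# Upsample back to the original image resolution, then take the per-pixel argmax.
upsampled = torch.nn.functional.interpolate(
    logits, size=image.size[::-1], mode="bilinear", align_corners=False
)
segmentation_map = upsampled.argmax(dim=1)[0]  # (original_height, original_width)
```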
transformers
15,196
closed
Wav2Vec2ForCTC fine-tuning best practices
I have started to train models based on [this tutorial](https://huggingface.co/blog/fine-tune-wav2vec2-english) (thanks to @patrickvonplaten) and so far everything works. > Note: The model I am fine-tuning here is the [`facebook/wav2vec-base`](https://huggingface.co/facebook/wav2vec2-base) model as I am targeting mobile devices. However, there are still a few details that I am missing here. First off, I noticed that most experiments stagnate at around 30% WER after which the model does not seem to improve anymore. The data is a 2000 hour English dataset on which I get around 10% WER when using my own (streaming) Transducer implementation. I would not expect Wav2Vec2 to perform worse than that. I am not sure whether the tutorial tells the full story or is just a basic example so I have a few questions on how to improve the model. I'm asking these questions because at this point I am just not sure what I have to write myself and what is already provided by Huggingface and can be dealt with by simply changing a config-object. ### Q: Should we use subwords instead of characters in the vocabulary? Subwords appear to work great for Transformer-models but I am not sure what the best practice is here with a CTC model. ### Q: Can we/should we apply augmentation techniques (SpecAugment, time-stretching, etc.) during fine-tuning Since the model is pre-trained, I am not sure whether it's a good idea to apply certain augmentation-techniques to the inputs during fine-tuning. Is there a recommendation how we should do this? ### Q: Should inputs bucketed together in batches by sequence length? The tutorial appears to batch together audio-samples as they come. However, sometimes bucketing these examples by sequence length may improve results or at least accelerate the experiment. Is there a way we can bucket samples or is this somehow unnecessary?
01-18-2022 11:06:43
01-18-2022 11:06:43
Hey @stefan-falk, To answer your questions above: > Q: Should we use subwords instead of characters in the vocabulary? > Subwords appear to work great for Transformer-models but I am not sure what the best practice is here with a CTC model. You can try, but I don't recommend to do it. The input to output ration in Wav2Vec2 is made to work very well with characters. > Q: Can we/should we apply augmentation techniques (SpecAugment, time-stretching, etc.) during fine-tuning > Since the model is pre-trained, I am not sure whether it's a good idea to apply certain augmentation-techniques to the inputs during fine-tuning. Is there a recommendation how we should do this? In my experience SpecAugment and other techniques help quite a bit to improve performance > Q: Should inputs bucketed together in batches by sequence length? > The tutorial appears to batch together audio-samples as they come. However, sometimes bucketing these examples by sequence length may improve results or at least accelerate the experiment. Is there a way we can bucket samples or is this somehow unnecessary? Yes, you can use `--group_by_length` as is done in all examples usually. See: https://huggingface.co/patrickvonplaten/wav2vec2-base-timit-fine-tuned/blob/main/run.sh#L24 for https://huggingface.co/patrickvonplaten/wav2vec2-base-timit-fine-tuned<|||||>@patrickvonplaten thanks a lot! :) > You can try, but I don't recommend to do it. The input to output ration in Wav2Vec2 is made to work very well with characters. Okay, I'll stick to that. Just a curios question in addition.. when it comes to Chinese we have a pretty large vocabulary. Do you know how we can deal with that or are we just going to have to use a ~5k vocab? > In my experience SpecAugment and other techniques help quite a bit to improve performance Definitely. My problem here is that since I am using the pre-trained model `facebook/wav2vec-base` I don't know whether or not I can deviate here. I don't know in detail how this model was exposed to augmentation - there's a chance that deviating from their methods could degrade performance I guess? > Yes, you can use --group_by_length as is done in all examples usually. Ah, thank you. I am not running `run_speech_recognition_ctc.py` though as I was following your tutorial and I just realized that `group_by_length` was already set to `True`. I just don't understand then how this works in combination with `per_device_train_batch_size`.<|||||>The batch_size simply corresponds to the number of audio inputs (not #number of audio inputs * #number of loss tokens) so with ``--group_by_length`` we group audio inputs of similar length (= #number of loss tokens) together.<|||||>Regarding Chinese, that's a good question actually! Maybe @voidful has an idea here?<|||||>> Regarding Chinese, that's a good question actually! Maybe @voidful has an idea here? If you are working in large vocabulary size model like Chinese, you often encounter the problem of homophones. the acoustic model actually lacks the ability to deal with this. Therefore, it is recommended to use the language model to adjust the results, which will greatly improve.<|||||>@patrickvonplaten I see. Thanks for that info. But does that mean that the number of samples (audio samples) stays constant no matter how long the input is? So I get 20x 5 seconds as well as 20x 20 seconds in a batch? The way Tensorflow bucketing works is we can actually vary the number of samples in a batch. 
With the Transducer I have buckets which are 300ms apart and the longer the maximum length is, the fewer samples will be in a bucket. In that case I am actually not setting "number of audio samples" per se. Instead, I pass the number of frames (features) I want to see in each batch. So.. the number of frames and therefore the total duration in each batch stays constant. I've just tried to set `group_by_length=True` and added the field `length` to each example: ```python yield dict(input_values=input_values, labels=labels, length=input_values.size) ``` However, the data collator still receives samples which vary quite a lot in duration (1sec to 13sec e.g.). This doesn't seem to have an effect - am I missing something out? @voidful I see, thank you. Right now I kind of have memory issues with Chinese because of the larger vocabulary 😓 <|||||>@patrickvonplaten Since I've already created this issue here: It's just a minor thing but in the tutorial we're setting the `ctc_loss_function` to `"mean"`: ```python model = Wav2Vec2ForCTC.from_pretrained( "facebook/wav2vec2-base", ctc_loss_reduction="mean", pad_token_id=processor.tokenizer.pad_token_id, ) ``` Now, in version 4.12.5 we cannot set it like this (same for `pad_token_id`). Instead this argument appears to have moved to the `Wav2Vec2Config` where its default value is `"sum"` however. Could this have any effect on training/performance whether we set sum or mean here? I'm just looking for ways to improve the training. And one last thing: Regarding augmentation, _how_ could I implement augmentation e.g. SpecAugment? The `Wav2Vec2FeatureExtractor` expects 1-dimensional raw audio data and the model appears to extract features from that signal. I'm sorry if this is a rather basic question that I should know the answer to but I couldn't make out where to do this except in a preprocessing step on the raw audio.<|||||>1. We are still using "mean" as the reduction method as can be seen here: https://github.com/huggingface/transformers/blob/6beae766eeaa0a5656d2a69df849041bfacf4851/examples/pytorch/speech-recognition/run_speech_recognition_ctc.py#L125 2. A type of SpecAugment is done automatically inside the model as shown here: https://github.com/huggingface/transformers/blob/6beae766eeaa0a5656d2a69df849041bfacf4851/src/transformers/models/wav2vec2/modeling_wav2vec2.py#L1284 . By masking time frames with learned vectors you are essentially augmenting the audio data<|||||>And regarding the bucketing - yeah the audio lengths might very well vary, but this is usually not a problem. The idea of `group_by_length` is to reduce padding by grouping similar length-ed audio samples together to avoid padding as much as possible.<|||||>> @patrickvonplaten I see. Thanks for that info. But does that mean that the number of samples (audio samples) stays constant no matter how long the input is? So I get 20x 5 seconds as well as 20x 20 seconds in a batch? > > The way Tensorflow bucketing works is we can actually vary the number of samples in a batch. With the Transducer I have buckets which are 300ms apart and the longer the maximum length is, the fewer samples will be in a bucket. > > In that case I am actually not setting "number of audio samples" per se. Instead, I pass the number of frames (features) I want to see in each batch. So.. the number of frames and therefore the total duration in each batch stays constant. 
> > I've just tried to set `group_by_length=True` and added the field `length` to each example: > > ```python > yield dict(input_values=input_values, labels=labels, length=input_values.size) > ``` > > However, the data collator still receives samples which vary quite a lot in duration (1sec to 13sec e.g.). This doesn't seem to have an effect - am I missing something out? > > @voidful I see, thank you. Right now I kind of have memory issues with Chinese because of the larger vocabulary 😓 Is that happen on evaluation stage? I actually trained on 56 language on 2 3090 and around 200GB memory machine before. I am using a large vocabulary on all language, and all seems to be fine.<|||||>> @patrickvonplaten Since I've already created this issue here: It's just a minor thing but in the tutorial we're setting the `ctc_loss_function` to `"mean"`: > > ```python > model = Wav2Vec2ForCTC.from_pretrained( > "facebook/wav2vec2-base", > ctc_loss_reduction="mean", > pad_token_id=processor.tokenizer.pad_token_id, > ) > ``` > > Now, in version 4.12.5 we cannot set it like this (same for `pad_token_id`). Instead this argument appears to have moved to the `Wav2Vec2Config` where its default value is `"sum"` however. > > Could this have any effect on training/performance whether we set sum or mean here? I'm just looking for ways to improve the training. > > And one last thing: Regarding augmentation, _how_ could I implement augmentation e.g. SpecAugment? The `Wav2Vec2FeatureExtractor` expects 1-dimensional raw audio data and the model appears to extract features from that signal. I'm sorry if this is a rather basic question that I should know the answer to but I couldn't make out where to do this except in a preprocessing step on the raw audio. Here is some tips regarding to the trainer with large vocabulary size: https://discuss.huggingface.co/t/problems-and-solution-on-trainer/11498 taking argmax of ctc result on evaluation stage can largely reduce the memory usage XD <|||||>@patrickvonplaten > We are still using "mean" as the reduction method as can be seen here: I see, but it looks like the `Wav2Vec2Config`, based on the tutorial, sets the loss-reduction. I've tested it and it looks like I have to set it here. However, the result in the end does not seem to vary. Looks like it does not matter whether we use sum or mean here. > A type of SpecAugment is done automatically inside the model as shown here I see. Thank you. Here I wonder how much we can vary from the original pre-training. I see that per default `mask_feature_prob` is set to `0.0` in `Wav2Vec2Config`. Cranking that up might hurt the quality since the encoder is frozen? But: My question was a bit more general. I was wondering how/where I could implement other methods like time-warping (see [SpecAugment](https://arxiv.org/pdf/1904.08779.pdf)) or noise-mixing etc and whether or not this is a good idea due to the frozen encoder. > And regarding the bucketing - yeah the audio lengths might very well vary, but this is usually not a problem. The idea of group_by_length is to reduce padding by grouping similar length-ed audio samples together to avoid padding as much as possible. In my batches they varied too much which told me something is not working. It looks like bucketing/grouping does not work if the dataset provided is a `torch.utils.data.IterableDataset`. There's a [branch](https://github.com/huggingface/transformers/blob/c4d1fd77fa52d72b66207c6002d3ec13cc36dca8/src/transformers/trainer.py#L646:L662) which appears to prevent this in such a case. 
This here is never reached: https://github.com/huggingface/transformers/blob/c4d1fd77fa52d72b66207c6002d3ec13cc36dca8/src/transformers/trainer.py#L576 A warning after https://github.com/huggingface/transformers/blob/c4d1fd77fa52d72b66207c6002d3ec13cc36dca8/src/transformers/trainer.py#L646 could be an option here. --- @voidful > Is that happen on evaluation stage? I actually trained on 56 language on 2 3090 and around 200GB memory machine before. I am using a large vocabulary on all language, and all seems to be fine. I have 4x 1080 Ti (each 11 GB VRAM). A dirty-generated vocabulary for Chines has around 3500 different symbols and I am not able to train this model. Not even with `batch_size = 1`. Thank you for the link. I'll take a look :)<|||||>Hey @stefan-falk, Sorry I think we're getting a bit too many issues / questions thrown together here. Could you try to open a new issue for the `group_by_length` and maybe cc @sgugger and/or @stas00 ? Regarding the config, I mean you can freely choose whatever parameter you would like, but in the official examples we set it to `"mean"` as stated above - see: https://github.com/huggingface/transformers/blob/6beae766eeaa0a5656d2a69df849041bfacf4851/examples/pytorch/speech-recognition/run_speech_recognition_ctc.py#L536 I don't understand: > Cranking that up might hurt the quality since the encoder is frozen? Using a value > 0.0 means that we are masking time frames. It's similar to dropout and will help the model generalize better. Finally, if you're looking for custom implementations of data augmentation, please use the forum instead: https://discuss.huggingface.co/<|||||>Grouping cannot work with an iterable dataset as we can't select elements by indices, this is not a bug, just something that isn't possible :-) Also note that this whole thread should be on the [forums](https://discuss.huggingface.co/) in my opinion as it's not reporting a bug or discussing a new feature, but asking general questions about training. The wider community would benefit from having it there :-)<|||||>This issue got a bit out of hand, I agree. Sorry about that. 😓 @patrickvonplaten Regarding `group_by_length` I think I got it working. It wasn't just apparent to me that I need to implement `datasets.Dataset` here. Regarding augmentation: My fear is, since it's a pre-trained model, that the quality might get worse if I don't use the same parameters during fine-tuning. Since the model was never trained on masked frequencies, this might throw the encoder off and delivers worse results. I know I can test this via experiments but I thought maybe I am lucky and you guys can warn me about not doing that :) @sgugger well, given a buffer, it is possible to implement it without the need of indices. I don't know for sure but I think this is how Tensorflow's [`bucket_by_sequence_length`](https://www.tensorflow.org/api_docs/python/tf/data/Dataset#bucket_by_sequence_length) is implemented. Regarding the forums, I was not very lucky there getting answers, which is why I tried it on github: - https://discuss.huggingface.co/t/constantly-running-out-of-memory-fine-tuning-wav2vec2/12976 - https://discuss.huggingface.co/t/need-help-training-speech2text-from-scratch/12306 - https://discuss.huggingface.co/t/is-there-a-complete-speech2text-example/12234 Anyway, thanks a lot guys! I've learned a thing or two here and I'll try to see how far I can get with that knowledge. I'll keep you updated. From my side, we can close this issue. 
:)<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread. Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored.
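For future reference, a minimal sketch of the `group_by_length` setup discussed in this thread. The paths and sizes below are placeholders, and it assumes a map-style `datasets.Dataset` with a `length` column (as noted above, grouping does not work with an `IterableDataset`):

```python
from datasets import Dataset
from transformers import TrainingArguments

# Toy map-style dataset with a "length" column, which the Trainer uses for grouping.
train_dataset = Dataset.from_dict(
    {"input_values": [[0.0] * 16000, [0.0] * 48000], "labels": [[1, 2], [3, 4]]}
)
train_dataset = train_dataset.map(
    lambda batch: {"length": [len(x) for x in batch["input_values"]]}, batched=True
)

training_args = TrainingArguments(
    output_dir="./wav2vec2-ctc",      # hypothetical output path
    group_by_length=True,
    length_column_name="length",      # the Trainer's default column name for grouping
    per_device_train_batch_size=8,
)
```

With this in place, the sampler groups samples of similar length into the same batch, which reduces padding; it does not make the total audio duration per batch constant.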
transformers
15,195
closed
Tensorboard missing values
## Environment info - `transformers` version: 4.12.5 - Platform: Ubuntu 18 - Python version: 3.8 ### Who can help Models: - Wav2Vec2 @patrickvonplaten @anton-l ## Information Model I am using is Wav2Vec2ForCTC and it is regarding the eval/loss in Tensorboard. For some (most) of my experiments I'd notice missing values for the eval/loss: ![image](https://user-images.githubusercontent.com/43335432/149923148-3b82e453-2447-4ba3-b8ff-6143827f1a81.png) ![image](https://user-images.githubusercontent.com/43335432/149923447-7ccb5bd8-2250-43c0-9ee1-2516bd183b6e.png) I am not doing anything special and I am essentially following the basic fine-tuning examples. Any idea why I am seeing this?
01-18-2022 10:49:28
01-18-2022 10:49:28
That's strange @severo do you maybe have any idea?<|||||>BTW, @stefan-falk feel free to share a link to your training repo here if the 30% WER seems to high for you. There are some hidden tricks on how to evaluate and train wav2vec2-base, *e.g.* https://github.com/pytorch/fairseq/issues/3227<|||||>No idea. @stefan-falk would you be able to share some of these tensorboard trace files (`*.tfevents.*`)?<|||||>@severo I've added the files as I found them in my logging-directory: [logs.zip](https://github.com/huggingface/transformers/files/7895914/logs.zip) @patrickvonplaten I am using the same method as in your tutorial with the small change of using my own implementation for the error rates: ```python def compute_metrics(processor, pred): pred_logits = pred.predictions pred_ids = np.argmax(pred_logits, axis=-1) pred.label_ids[pred.label_ids == -100] = processor.tokenizer.pad_token_id pred_str = processor.batch_decode(pred_ids) # we do not want to group tokens when computing the metrics label_str = processor.batch_decode(pred.label_ids, group_tokens=False) wer = error_rate(targets=label_str, predictions=pred_str, tokens="words") cer = error_rate(targets=label_str, predictions=pred_str, tokens="characters") return {"wer": wer, "cer": cer} ``` The reason is because `load_metric("wer")` uses `jiwer`, a library that constantly breaks on macOS somehow, so I replaced it. Debugging this method tells me that all decoded labels (`label_str`) appear to be fine: ![image](https://user-images.githubusercontent.com/43335432/150109982-9e6b00db-ba5e-4492-995e-6d42213000a9.png) <|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread. Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored.<|||||>Unstale<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread. Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored.
transformers
15,194
closed
`is_ctc` needs to be updated to `self.type == "ctc"`.
# What does this PR do? <!-- Congratulations! You've made it this far! You're not quite done yet though. Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution. Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change. Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost. --> <!-- Remove if not applicable --> Fixes # (issue) ## Before submitting - [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case). - [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section? - [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link to it if that's the case. - [ ] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/master/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/master/docs#writing-source-documentation). - [ ] Did you write any new necessary tests? ## Who can review? Anyone in the community is free to review the PR once the tests have passed. Feel free to tag members/contributors who may be interested in your PR. <!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @ If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**. Please tag fewer than 3 people. Models: - albert, bert, xlm: @LysandreJik - blenderbot, bart, marian, pegasus, encoderdecoder, t5: @patrickvonplaten, @patil-suraj - longformer, reformer, transfoxl, xlnet: @patrickvonplaten - fsmt: @stas00 - funnel: @sgugger - gpt2: @patrickvonplaten, @LysandreJik - rag: @patrickvonplaten, @lhoestq - tensorflow: @LysandreJik Library: - benchmarks: @patrickvonplaten - deepspeed: @stas00 - ray/raytune: @richardliaw, @amogkam - text generation: @patrickvonplaten - tokenizers: @n1t0, @LysandreJik - trainer: @sgugger - pipelines: @LysandreJik Documentation: @sgugger HF projects: - datasets: [different repo](https://github.com/huggingface/datasets) - rust tokenizers: [different repo](https://github.com/huggingface/tokenizers) Examples: - maintained examples (not research project or legacy): @sgugger, @patil-suraj - research_projects/bert-loses-patience: @JetRunner - research_projects/distillation: @VictorSanh -->
01-18-2022 10:37:24
01-18-2022 10:37:24
transformers
15,193
closed
M2M100 support for ONNX export
# What does this PR do? Enable ONNX export for M2M100 models for the following tasks: - default - default-with-past - seq2seq-lm - seq2seq-lm-with-past Related issue: #15060
01-18-2022 10:04:22
01-18-2022 10:04:22
The export for `seq2seq-lm` and `seq2seq-lm-with-past` is failing due to ``` RuntimeError: Exporting model exceed maximum protobuf size of 2GB. Please call torch.onnx.export without setting use_external_data_format parameter. ``` The `OnnxConfig.use_external_data_format` finds that the size is ok... working on it! Also the export works for `valhalla/m2m100_tiny_random`, which is very small, meaning that the export is working, so it is related to the model size.<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread. Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored.<|||||>The try / except change aims at pointing the user to a more recent torch version to (possibly) be able to export the model, as the protobuf issue seems to be solved starting from torch 1.10 (when not specifying the deprecated arguments).
transformers
15,192
closed
Question about fine-tuning with vision transformer (VIT)
Thanks for sharing this great work! I have a simple question about fine-tuning with ViT. Is there any option for selecting which weights are trainable during the fine-tuning phase? Or can I only train all weights?
01-18-2022 04:49:13
01-18-2022 04:49:13
Hi, You can easily disable parameters which you don't want to train with requires_grad=False: ``` from transformers import ViTForImageClassification model = ViTForImageClassification.from_pretrained("google/vit-base-patch16-224-in21k") for name, param in model.named_parameters(): if name == "...": param.requires_grad = False ```<|||||>Thank you for your fast and kind response! I will try it!
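A common variant of the snippet above (a sketch, not the only option) is to freeze the entire ViT backbone and fine-tune only the classification head:

```python
from transformers import ViTForImageClassification

model = ViTForImageClassification.from_pretrained("google/vit-base-patch16-224-in21k")

# Freeze every parameter of the ViT encoder; only `model.classifier` stays trainable.
for param in model.vit.parameters():
    param.requires_grad = False
```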
transformers
15,191
closed
can't set gradient_checkpointing=True when using the DistributedDataParallel mode
## Environment info <!-- You can run the command `transformers-cli env` and copy-and-paste its output below. Don't forget to fill out the missing fields in that output! --> - `transformers` version: 4.15.0 - Platform: 2GPU-V100 with 16GB - Python version: 3.6.8 - PyTorch version (GPU?): 1.10.1+cu102 - Tensorflow version (GPU?): - Using GPU in script?: Yes - Using distributed or parallel set-up in script?: ### Who can help @sgugger I used the official documentation to pretrain a BigBird MLM model in DP mode, and the trainer works well (but when the seq_length is very big, ~4096, training takes a lot of time), so I would like to try DDP mode. However, I ran into some problems: 1. Do I need to change some code in the original main.py? I read the documentation of "training_args.py", so I only changed "ddp_find_unused_parameters=False" to "ddp_find_unused_parameters=True", but this raised an error about "gradient_checkpointing". I then set "gradient_checkpointing=False" (this parameter is set to True in DP mode) and ran the code from the command line with "torchrun --nproc_per_node=2 main.py", and it works well. 2. The "gradient_checkpointing" feature is very important, I think. If I set "gradient_checkpointing=False", the batch size (per_device_train_batch_size) can only be set to 1 (otherwise OOM), so does this feature conflict with DDP mode? Sorry, I don't know the details about this feature. Thanks.
01-18-2022 03:57:06
01-18-2022 03:57:06
You need to have `ddp_find_unused_parameters=False` when using `gradient_checkpointing=True`.<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread. Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored.
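A minimal sketch of that combination (all other values below are placeholders):

```python
from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir="./bigbird-mlm",          # hypothetical output path
    per_device_train_batch_size=2,
    gradient_checkpointing=True,         # trades compute for memory, allowing larger batches
    ddp_find_unused_parameters=False,    # must stay False when gradient checkpointing is enabled
)
```

The script can then be launched as before with `torchrun --nproc_per_node=2 main.py`.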
transformers
15,190
closed
`AutoConfig` doesn't support `gpt-neox`
@patil-suraj ## Environment info <!-- You can run the command `transformers-cli env` and copy-and-paste its output below. Don't forget to fill out the missing fields in that output! --> - `transformers` version: v4.11.3 - Platform: Macbook (CPU) - Python version: 3.8.10 - PyTorch version (GPU?): 1.10.0(CPU) - Tensorflow version (GPU?): I don't use it - Using GPU in script?: no - Using distributed or parallel set-up in script?: no ### Who can help <!-- Your issue will be replied to more quickly if you can figure out the right person to tag with @ If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**. Please tag fewer than 3 people. Models: - ALBERT, BERT, XLM, DeBERTa, DeBERTa-v2, ELECTRA, MobileBert, SqueezeBert: @LysandreJik - T5, BART, Marian, Pegasus, EncoderDecoder: @patrickvonplaten - Blenderbot, MBART: @patil-suraj - Longformer, Reformer, TransfoXL, XLNet, FNet, BigBird: @patrickvonplaten - FSMT: @stas00 - Funnel: @sgugger - GPT-2, GPT: @patrickvonplaten, @LysandreJik - RAG, DPR: @patrickvonplaten, @lhoestq - TensorFlow: @Rocketknight1 - JAX/Flax: @patil-suraj - TAPAS, LayoutLM, LayoutLMv2, LUKE, ViT, BEiT, DEiT, DETR, CANINE: @NielsRogge - GPT-Neo, GPT-J, CLIP: @patil-suraj - Wav2Vec2, HuBERT, SpeechEncoderDecoder, UniSpeech, UniSpeechSAT, SEW, SEW-D, Speech2Text: @patrickvonplaten, @anton-l If the model isn't in the list, ping @LysandreJik who will redirect you to the correct contributor. Library: - Benchmarks: @patrickvonplaten - Deepspeed: @stas00 - Ray/raytune: @richardliaw, @amogkam - Text generation: @patrickvonplaten @narsil - Tokenizers: @SaulLu - Trainer: @sgugger - Pipelines: @Narsil - Speech: @patrickvonplaten, @anton-l - Vision: @NielsRogge, @sgugger Documentation: @sgugger Model hub: - for issues with a model, report at https://discuss.huggingface.co/ and tag the model's creator. HF projects: - datasets: [different repo](https://github.com/huggingface/datasets) - rust tokenizers: [different repo](https://github.com/huggingface/tokenizers) Examples: - maintained examples (not research project or legacy): @sgugger, @patil-suraj For research projetcs, please ping the contributor directly. 
For example, on the following projects: - research_projects/bert-loses-patience: @JetRunner - research_projects/distillation: @VictorSanh --> ## Information Model I am using (Bert, XLNet ...): GPT-NeoX The problem arises when using: * [ ] the official example scripts: (give details below) * [x] my own modified scripts: (give details below) The tasks I am working on is: * [ ] an official GLUE/SQUaD task: (give the name) * [x] my own task or dataset: (give details below) ## To reproduce ```python3 from transformers import AutoConfig config = AutoConfig.from_pretrained(PRIVATE_GPT_NEOX_MODEL_PATH) ``` ``` --------------------------------------------------------------------------- KeyError Traceback (most recent call last) <ipython-input-6-7344d2d8cc3c> in <module> 1 from transformers import AutoConfig ----> 2 config = AutoConfig.from_pretrained(GPT_NEOX_1B_MODEL) /opt/anaconda3/lib/python3.8/site-packages/transformers/models/auto/configuration_auto.py in from_pretrained(cls, pretrained_model_name_or_path, **kwargs) 527 config_dict, _ = PretrainedConfig.get_config_dict(pretrained_model_name_or_path, **kwargs) 528 if "model_type" in config_dict: --> 529 config_class = CONFIG_MAPPING[config_dict["model_type"]] 530 return config_class.from_dict(config_dict, **kwargs) 531 else: /opt/anaconda3/lib/python3.8/site-packages/transformers/models/auto/configuration_auto.py in __getitem__(self, key) 276 def __getitem__(self, key): 277 if key not in self._mapping: --> 278 raise KeyError(key) 279 value = self._mapping[key] 280 module_name = model_type_to_module_name(key) KeyError: 'gpt-neox' ``` <!-- If you have code snippets, error messages, stack traces please provide them here as well. Important! Use code tags to correctly format your code. See https://help.github.com/en/github/writing-on-github/creating-and-highlighting-code-blocks#syntax-highlighting Do not use screenshots, as they are hard to read and (more importantly) don't allow others to copy-and-paste your code.--> ## Expected behavior The reason why I need to load gpt-neox from Auto is because my code will be capable to load multiple kinds of models. To achieve it, I use `AutoConfig.from_pretrained(model_path)` to identify `model_type` before loading the model. My expectation would be supporting `gpt-neox` in `AutoConfig` or providing other nicer ways. Thank you in advance!
01-18-2022 00:35:22
01-18-2022 00:35:22
@patil-suraj I found the problem in my code. Because I defined the `gpt-neox` model by my own custom class, of course, `transformers` couldn't recognize it! My apologies! I will close this issue.
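For anyone hitting the same `KeyError` with a genuinely custom model: recent `transformers` releases allow registering custom classes with the auto classes so that `AutoConfig.from_pretrained` can resolve them. A sketch (the class below is a hypothetical stand-in for a private GPT-NeoX implementation, and the register API may not exist in 4.11.x):

```python
from transformers import AutoConfig, PretrainedConfig

class MyGPTNeoXConfig(PretrainedConfig):
    model_type = "gpt-neox"

# After registration, a config.json with "model_type": "gpt-neox" resolves to MyGPTNeoXConfig.
AutoConfig.register("gpt-neox", MyGPTNeoXConfig)
```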
transformers
15,189
closed
run_t5_mlm_flax.py Multi GPU Training
## Environment info <!-- You can run the command `transformers-cli env` and copy-and-paste its output below. Don't forget to fill out the missing fields in that output! --> - `transformers` version: 4.15.0 - Platform: Ubuntu - Python version: Python 3.9.9 - PyTorch version (GPU?): 1.10.1 ### Who can help @patil-suraj @patrickvonplaten <!-- Your issue will be replied to more quickly if you can figure out the right person to tag with @ If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**. Please tag fewer than 3 people. Models: - ALBERT, BERT, XLM, DeBERTa, DeBERTa-v2, ELECTRA, MobileBert, SqueezeBert: @LysandreJik - T5, BART, Marian, Pegasus, EncoderDecoder: @patrickvonplaten - Blenderbot, MBART: @patil-suraj - Longformer, Reformer, TransfoXL, XLNet, FNet, BigBird: @patrickvonplaten - FSMT: @stas00 - Funnel: @sgugger - GPT-2, GPT: @patrickvonplaten, @LysandreJik - RAG, DPR: @patrickvonplaten, @lhoestq - TensorFlow: @Rocketknight1 - JAX/Flax: @patil-suraj - TAPAS, LayoutLM, LayoutLMv2, LUKE, ViT, BEiT, DEiT, DETR, CANINE: @NielsRogge - GPT-Neo, GPT-J, CLIP: @patil-suraj - Wav2Vec2, HuBERT, SpeechEncoderDecoder, UniSpeech, UniSpeechSAT, SEW, SEW-D, Speech2Text: @patrickvonplaten, @anton-l If the model isn't in the list, ping @LysandreJik who will redirect you to the correct contributor. Library: - Benchmarks: @patrickvonplaten - Deepspeed: @stas00 - Ray/raytune: @richardliaw, @amogkam - Text generation: @patrickvonplaten @narsil - Tokenizers: @SaulLu - Trainer: @sgugger - Pipelines: @Narsil - Speech: @patrickvonplaten, @anton-l - Vision: @NielsRogge, @sgugger Documentation: @sgugger Model hub: - for issues with a model, report at https://discuss.huggingface.co/ and tag the model's creator. HF projects: - datasets: [different repo](https://github.com/huggingface/datasets) - rust tokenizers: [different repo](https://github.com/huggingface/tokenizers) Examples: - maintained examples (not research project or legacy): @sgugger, @patil-suraj For research projetcs, please ping the contributor directly. For example, on the following projects: - research_projects/bert-loses-patience: @JetRunner - research_projects/distillation: @VictorSanh --> ## Information Model I am using (T5): I am currently trying to pretrain T5 from scratch on a custom dataset using the [`run_t5_mlm_flax.py`](https://github.com/huggingface/transformers/blob/master/examples/flax/language-modeling/run_t5_mlm_flax.py) on multiple gpus but when I log the GPU count using `jax.local_device_count()` and `jax.local_device_count()` it returns 1. Does this mean the script is training with just one GPU and if so how can I make It distributed? Thank you! The tasks I am working on is: * [ ] Pretraining T5 from scratch on a custom dataset <!-- If you have code snippets, error messages, stack traces please provide them here as well. Important! Use code tags to correctly format your code. See https://help.github.com/en/github/writing-on-github/creating-and-highlighting-code-blocks#syntax-highlighting Do not use screenshots, as they are hard to read and (more importantly) don't allow others to copy-and-paste your code.--> ## Expected behavior <!-- A clear and concise description of what you would expect to happen. -->
01-17-2022 23:46:19
01-17-2022 23:46:19
Hey @ToluClassics, Could you please open an issue on https://github.com/google/jax as your problem does not seem to be related to Transformers<|||||>@ToluClassics @patrickvonplaten @patil-suraj @sgugger I also have the same issue with the exact same script. How is it resolved finally? How can I make run_t5_mlm_flax.py use GPU instead of CPU? This is how I run it: ``` python run_t5_mlm_flax.py \ --output_dir="models/my_mini_trial-t5-base" \ --model_type="t5" \ --config_name="models/my_mini_trial-t5-base" \ --tokenizer_name="models/my_mini_trial-t5-base" \ --dataset_name="oscar" \ --dataset_config_name="unshuffled_deduplicated_no" \ --max_seq_length="512" \ --per_device_train_batch_size="32" \ --per_device_eval_batch_size="32" \ --adafactor \ --learning_rate="0.005" \ --weight_decay="0.001" \ --warmup_steps="2000" \ --overwrite_output_dir \ --logging_steps="500" \ --save_steps="10000" \ --eval_steps="2500" ``` And I confirm that JAX recognize the GPU: ``` In [1]: import jax ...: ...: print("Number of available GPUs:", jax.device_count()) ...: print("Default GPU:", jax.default_backend()) Number of available GPUs: 1 Default GPU: gpu ``` G.V.
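For anyone hitting the same symptom, a quick sanity check (a sketch): if the CPU-only `jaxlib` wheel is installed, the default backend reports `cpu` and the Flax script silently trains on CPU, in which case the CUDA-enabled JAX build needs to be installed instead.

```python
import jax

# With a CUDA-enabled jaxlib this should report "gpu" and list the visible GPUs;
# a CPU-only install reports "cpu" even though GPUs are present on the machine.
print(jax.default_backend())
print(jax.local_device_count())
print(jax.devices())
```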
transformers
15,188
closed
Mark bad tokenizers version
# What does this PR do? The last release of tokenizers breaks many tests; this makes sure the bad version is not installed.
01-17-2022 19:54:55
01-17-2022 19:54:55
Errors come from Hub connection errors, so merging.
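For context, excluding a broken release in `setup.py` looks roughly like this; the version numbers below are hypothetical placeholders, not the exact pin added by this PR:

```python
# setup.py (sketch)
install_requires = [
    "tokenizers>=0.10.1,!=0.11.2",  # hypothetical: exclude the broken release explicitly
]
```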
transformers
15,187
closed
[Fix doc example] TFRagModel
# What does this PR do? Fix some doc examples in TFRagModel, e.g. missing/wrong import, wrong class name, missing real checkpoint name, etc. ## Who can review?
01-17-2022 19:47:25
01-17-2022 19:47:25
transformers
15,186
closed
Error when code examples are improperly closed
# What does this PR do? When an MDX file has a code sample that is not properly closed, the doc styler ends up removing part of that file. This PR addresses that problem by raising an error before the file is overwritten, so the user knows what's wrong and can fix it.
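The check is conceptually simple; a simplified sketch of the idea (not the actual implementation in this PR) is to count the triple-backtick fences and raise before the file is touched if one is left open:

```python
def check_code_samples_closed(mdx_text: str, filename: str) -> None:
    # An odd number of ``` fences means a code sample was never closed.
    fence_count = sum(1 for line in mdx_text.splitlines() if line.strip().startswith("```"))
    if fence_count % 2 != 0:
        raise ValueError(
            f"{filename} has a code sample that is never closed; fix it before running the doc styler."
        )
```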
01-17-2022 19:21:03
01-17-2022 19:21:03
transformers
15,185
closed
Longformer Set Global Attention Layers Non-Trainable
# 🚀 Feature request Make it possible to set the TensorFlow Longformer model's global attention layers to non-trainable. ## Motivation During training, global attention is not used, giving zero gradients for all global attention weights. Setting the global attention layers to non-trainable would decrease training time and reduce memory consumption.
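A possible interim sketch with a custom training step, assuming TFLongformer's global-attention projections carry "global" in their variable names (e.g. `query_global`/`key_global`/`value_global` — an assumption about the naming, and the loss below is a pure placeholder):

```python
import tensorflow as tf
from transformers import TFLongformerModel

model = TFLongformerModel.from_pretrained("allenai/longformer-base-4096")
optimizer = tf.keras.optimizers.Adam(1e-5)

# Exclude the global-attention projections from the gradient update.
train_vars = [v for v in model.trainable_variables if "global" not in v.name]

def train_step(input_ids, attention_mask):
    with tf.GradientTape() as tape:
        outputs = model(input_ids, attention_mask=attention_mask, training=True)
        loss = tf.reduce_mean(outputs.last_hidden_state ** 2)  # placeholder loss for illustration
    grads = tape.gradient(loss, train_vars)
    # Some variables (e.g. the pooler) may get no gradient from this placeholder loss.
    grads_and_vars = [(g, v) for g, v in zip(grads, train_vars) if g is not None]
    optimizer.apply_gradients(grads_and_vars)
    return loss
```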
01-17-2022 17:41:29
01-17-2022 17:41:29
Hi @MarkWijkhuizen! Thank you for your input. If you have the time, I'd like to encourage you to open a PR with the change. Otherwise, no worries, let us know (and, if possible, give us more pointers -- e.g., a reference to the paragraph in the paper where they mention this part of the training regime) 🤗 <|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread. Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored.
transformers
15,184
closed
[doc] new MoE paper
add new paper
01-17-2022 17:01:08
01-17-2022 17:01:08
transformers
15,183
closed
ValueError When Padding Long Sequences To Small Max Length (Pytorch and Tensorflow)
## Environment info <!-- You can run the command `transformers-cli env` and copy-and-paste its output below. Don't forget to fill out the missing fields in that output! --> - `transformers` version: 4.15.0 - Platform: Linux-5.4.144+-x86_64-with-Ubuntu-18.04-bionic - Python version: 3.7.12 - PyTorch version (GPU?): 1.10.0+cu111 (False) - Tensorflow version (GPU?): 2.7.0 (False) - Flax version (CPU?/GPU?/TPU?): not installed (NA) - Jax version: not installed - JaxLib version: not installed - Using GPU in script?: NO - Using distributed or parallel set-up in script?: NO ### Who can help - @Rocketknight1 - @SaulLu - @sgugger ## Information I am following along with the HuggingFace Transformers course. At the [end of Chapter 2](https://huggingface.co/course/chapter2/6?fw=tf), there is information concerning how to use **`AutoTokenizer`** to perform certain tasks (tokenized single/multiple sequences, pad, truncate, convert to other libraries). The problem arises when using: * [the official example colab](https://colab.research.google.com/github/huggingface/notebooks/blob/master/course/chapter2/section6_tf.ipynb): * This code executes flawlessly because at no point do we ever have the following conditions: * **`return_tensors="pt" or return_tensors="tf"`** * **`padding="max_length"`** * **`max_length=<a number smaller than the number of tokens in the longest sequence>`** * [my self contained example](https://colab.research.google.com/drive/1_Zt5I-pBAIIaiJRL9-JTg4BuOWR8a8LY?usp=sharing): * I simply coded a simple self-contained example to show the error and when it appears * I am 99% sure this has to do with the handling of ragged tensors. * As when you pass a **`max_length`** value smaller than the longest sequence, the default behaviour is to leave this sequence alone (no truncation). * This can result in tensors of different lengths. * This causes problems as these tensors cannot be batched together. **Similar errors can also occur when performing truncation and passing other combinations of arguments.** <br> The tasks I am working on is: * I am replicating the entire HuggingFace course for Kaggle consumption (I'm doing both Pytorch and Tensorflow versions) ## To reproduce Steps to reproduce the behavior: 1. Open the [**self-contained example**](https://colab.research.google.com/drive/1_Zt5I-pBAIIaiJRL9-JTg4BuOWR8a8LY?usp=sharing) 2. Run the code cells ## Code Snippet **Code That Works** ```python tokenizer(["I've been waiting for a HuggingFace course my whole life.", "So have I!"], padding='max_length', max_length=8) ``` **Code That Breaks** ```python tokenizer(["I've been waiting for a HuggingFace course my whole life.", "So have I!"], return_tensors="tf", padding='max_length', max_length=8) ``` **Two main errors:** ```python ValueError: Can't convert non-rectangular Python sequence to Tensor. ``` ```python ValueError: Unable to create tensor, you should probably activate truncation and/or padding with 'padding=True' 'truncation=True' to have batched tensors with the same length. ``` **Example of how to handle this situation** ```python tokenizer(["I've been waiting for a HuggingFace course my whole life.", "So have I!"], return_tensors="tf", truncation=True, padding='max_length', max_length=8) ``` ## Expected behavior I'm not 100% sure what the desired functionality is in this case. However, I believe that throwing an error is probably not the correct action. Also, the error message does not make 100% sense. 
I would think that delivering a warning and coercing the argument **`truncation`** to be **`True`** would be an acceptable option. If this is of interest, I can create a pull request for this feature. **Thanks in advance!**
01-17-2022 16:11:09
01-17-2022 16:11:09
I'm not sure where the bug is: you want to pad sequences to a length lower than the maximum length of your two sentences. The tokenizer does that but you end up with two sentences of different length (the smallest one has been padded to 8 tokens and the longest one is longer) so it's not possible to batch them together. You need to activate truncation to truncate the longer sentence.<|||||>@sgugger - I 100% understand where you are coming from, however, why would this functionality be supported for **`return_tensors="np"`** and **`return_tensors=None`**. And if you really do want different functionality/outputs like this (for different arguments), then the error message should be clearer (as the functionality is breaking on the **`return_tensors`** argument). It is not always obvious the length of a sequence in terms of tokens, especially considering subword tokenizers. Take the following example of two seven word (**`.split()`**) sentences ```Python # Returns 9 input ids print(tokenizer("This is a sequence with seven words")) # Returns 10 input ids print(tokenizer("I am also a sequence, seven words")) ``` --- Keep in mind I'm literally just following the HuggingFace course with **`return_tensors="pt"`** and **`return_tensors="tf"`** and I got a plethora of errors for the section on padding and truncation. If you'd like to close this as expected functionality that's your prerogative. I took the time to put this all together because I thought it would help make someone else's experience better, as mine was not ideal.<|||||>> why would this functionality be supported for return_tensors="np" and return_tensors=None Because you can have lists of lists of different lengths or build a numpy array where the objects are lists of different lengths. You can't however build a tensor with rows of different lengths. As for the error message, it has to be kept generic enough to cover every situation where you'd get an error. It does tell you explicitly that: - you can't buld a tensor with two rows of different lengths - you should activation truncation/padding We're open to suggestions on how to make it better, but I don't see what was missing to help you debug the error.<|||||>I compiled my suggestions above already. However, to reiterate: * I would default to **`truncation=True`** and **`padding=True`** * Or at the least issue a warning and fallback to that state (given a situation that results in the problem) Also, > build a numpy array where the objects are lists of different lengths. Is this really useful? I've never known numpy arrays where the objects are lists to be useful. It feels very forced. If you were to do this, why wouldn't you fall back to returning ragged tensors for TF and PT? Or, more in-line, why wouldn't you issue the same error that you do for TF and PT for NumPy?<|||||>I'm going to go ahead and close this. I'm not sure the struggle here is worth the effort. This felt wrong to me and I wanted to communicate that, but from your responses, I can see that my difficulty is outside the norm. Thanks for the help.<|||||>You're probably right that we should also raise an error for NumPy in this situation. We rely on the fact the call to `np.ndarray` just works but that's a bit lazy. cc @LysandreJik what do you think?<|||||>Hey @darien-schettler, thank you for your issue and helping us understand what didn't feel natural! I agree with you both that the NumPy counterpart should raise an error in this situation. 
I think raising an error here is fine given that it is explicitly mentioning what must be done, and that truncating, as a potential loss of information, should be opt-in even when a max length and padding are defined.
transformers
15,182
closed
Fix incorrect max_seq_length bug in pytorch training scripts
# What does this PR do? Fixes #15181 Input sequences are not correctly truncated to the correct length in the `run_mlm.py` script. Even if a model can support large sequence lengths (such as 1024 or 2048), the sequences are truncated to the tokenizer's default `model_max_length`. This bug is described [here](https://github.com/huggingface/transformers/issues/15181). This pull request fixes this bug by comparing the `max_seq_length` passed by the user to the maximum sequence length supported by the model (`config.max_position_embeddings`), and sets `max_seq_length = min(data_args.max_seq_length, config.max_position_embeddings)` If this is the correct fix, I can make the same change in other pytorch training scripts such as [run_plm.py](https://github.com/huggingface/transformers/blob/9a2dabae7002258e41419491c73dd43ad61b5de7/examples/pytorch/language-modeling/run_plm.py#L363), [run_qa.py](https://github.com/huggingface/transformers/blob/9a2dabae7002258e41419491c73dd43ad61b5de7/examples/pytorch/question-answering/run_qa.py#L333), etc. ## Who can review? @NielsRogge @LysandreJik @sgugger
01-17-2022 15:40:19
01-17-2022 15:40:19
As said on the linked issue, this should not be changed.
transformers
15,181
closed
Bug with max_seq_length argument in training scripts
Hello, I am trying to train Nystromformer for Masked Language Modeling with a sequence length of 1024 using the [run_mlm.py](https://github.com/huggingface/transformers/blob/master/examples/pytorch/language-modeling/run_mlm.py) script. As, I understand it, the `max_seq_length` argument of the script controls the maximum number of input tokens passed to the model. And this should be, less than or equal to the model's `max_position_embeddings`. So, I set `--max_seq_length 1024`, however I get: `01/17/2022 08:58:11 - WARNING - __main__ - The max_seq_length passed (1024) is larger than the maximum length for themodel (512). Using max_seq_length=512.` This implies that sequences greater than 512 tokens are not being passed to the model, although the model can handle sequences of upto 1024 tokens (I double-checked that the model has `max_position_embeddings=1024`). ### To reproduce 1. Clone the transformers repository 2. In the examples/pytorch/language-modeling directory, run `python run_mlm.py --model_type nystromformer --dataset_name wikitext --dataset_config_name wikitext-2-raw-v1 --per_device_train_batch_size 6 --num_train_epochs 1 --do_train --do_eval --tokenizer_name albert-base-v1 --output_dir ./nystrom --max_seq_length 1024 --config_overrides="max_position_embeddings=1024"` ### Potential fix The `max_seq_length` variable in the script controls the size of the input sequences. [Here](https://github.com/huggingface/transformers/blob/9a2dabae7002258e41419491c73dd43ad61b5de7/examples/pytorch/language-modeling/run_mlm.py#L385), it is set to `min(data_args.max_seq_length, tokenizer.model_max_length)`. However, this will always evaluate to `tokenizer.model_max_length` (if `max_seq_length` is greater than the tokenizer's default `model_max_length`). So, although `--max_seq_length 1024` is passed to the script, the input sequences are truncated to 512 (the default `model_max_length` for tokenizers such as BERT, RoBERTa and ALBERT). We could fix this by replacing `tokenizer.model_max_length` with `config.max_position_embeddings` in this [block](https://github.com/huggingface/transformers/blob/0167edc8549b8c8b01fcc669d0da7ea41903b244/examples/pytorch/language-modeling/run_mlm.py#L379-L385). This way, the user is warned if the model cannot support the provided `max_seq_length`, and the sequences are truncated to `min(data_args.max_seq_length, config.max_position_embeddings)`. I've implemented this change in the pull request below. ### Who can help @NielsRogge @LysandreJik @sgugger
01-17-2022 15:39:29
01-17-2022 15:39:29
No, the tokenizer used should be manually fixed if its maximum length is incorrect, but we shouldn't change that part of the script. This is not a bug, but a desired feature, as the corresponding pretrained model will not work with a greater maximum length. You can always change the example script (which are just examples) if you want a different behavior.<|||||>I would argue that it would be beneficial (especially for training models from scratch) if `max_seq_length` is set to `min(data_args.max_seq_length, config.max_position_embeddings)`. Consider two cases: 1. The script is being used with a pre-trained model. As an example, consider a pre-trained Nystromformer model with maximum sequence length of 512. Here, I must truncate the inputs to 512 (since this is the maximum length handled by the model). Even if I pass `--max_seq_length 1024` with a pre-trained model with maximum length 512, the modified script will warn the user and set `max_seq_length = min(data_args.max_seq_length, config.max_position_embeddings)`. In this case, `max_seq_length` is set to `config.max_position_embeddings` (512), which is correct. 2. On the other hand, if I wish to train a model from scratch, I would want the tokenizer to produce sequences that are not greater than the model's `max_position_embeddings`. For example, if I want to train Nystromformer for a maximum length of 1024 (which is what I was trying to do originally) from scratch, I would also want the tokenizer to generate input sequences that are at most 1024 in length. However, with the current training script, I am unable to do so. The current script sets the maximum length of the tokenizer to `min(data_args.max_seq_length, tokenizer.model_max_length)`. In this case, it is set to 512, although I want to train my model on sequences of length 1024. The current script does not allow the tokenizer to produce large input sequences even if the model is compatible with them. In this case, if we `max_seq_length` is set to `min(data_args.max_seq_length, config.max_position_embeddings)`, the tokenizer will instead truncate the input sequences to the maximum length allowed by the model (1024).
transformers
15,180
closed
Fix deprecation warnings for int div
Co-authored-by: mgoldey <[email protected]> # What does this PR do? This PR fixes the int div warnings by using the new API in PyTorch for versions >= 1.8 all the while keeping backward compatibility with older versions. Supersedes #14577
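The pattern is roughly the following (a sketch of the idea, not the exact helper introduced by this PR):

```python
import torch
from packaging import version

def floor_div(a: torch.Tensor, b) -> torch.Tensor:
    # torch >= 1.8 provides rounding_mode and deprecates plain integer division on tensors.
    if version.parse(torch.__version__) >= version.parse("1.8.0"):
        return torch.div(a, b, rounding_mode="floor")
    return a // b
```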
01-17-2022 14:08:14
01-17-2022 14:08:14
transformers
15,179
closed
Unclear documentation regarding sequence length for all-MiniLM-L6-v2
On [the model card for MiniLM-L6](https://huggingface.co/sentence-transformers/all-MiniLM-L6-v2) it says > By default, input text longer than 256 word pieces is truncated. However, looking at the model and tokenizer ```model = AutoModel.from_pretrained("sentence-transformers/all-MiniLM-L6-v2")``` It seems to have a sequence length of 512 in its embeddings, and also have this set in the tokenizer as the max length. The huggingface tokenizer picks up on this higher max length, and does not truncate to 256, although using the SentenceTransformer package does this in its tokenization step. I am not sure whether this is merely a documentation oversight, or a sign of some deeper problem. Version: transformers 4.15.0
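A quick way to see the mismatch and to reproduce the 256-token behaviour manually (a workaround sketch, not an official recommendation):

```python
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("sentence-transformers/all-MiniLM-L6-v2")
print(tokenizer.model_max_length)  # reports 512, while the model card documents 256

# Truncate explicitly to match what the SentenceTransformer wrapper does at tokenization time.
inputs = tokenizer("some very long text " * 200, truncation=True, max_length=256, return_tensors="pt")
print(inputs["input_ids"].shape)   # (1, 256)
```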
01-17-2022 13:53:32
01-17-2022 13:53:32
This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread. Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored.
transformers
15,178
closed
Fix dtype issue in TF BART
Fixes an issue in BART caused by implicit casting of Python ints to tf.int32, when other inputs are tf.int64. We usually get away with this, but tf.where() requires both inputs to have the same dtype.
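A minimal illustration of the failure mode this fixes (not the actual BART code): a plain Python int is treated as tf.int32, and tf.where refuses to mix it with tf.int64 inputs, so the literal has to be cast explicitly.

```python
import tensorflow as tf

input_ids = tf.constant([[5, -100, 7]], dtype=tf.int64)
pad_token_id = 1  # a plain Python int, implicitly tf.int32

# Casting the scalar to the tensor's dtype keeps tf.where happy for both int32 and int64 inputs.
labels = tf.where(
    input_ids == -100,
    tf.cast(pad_token_id, input_ids.dtype),
    input_ids,
)
print(labels)
```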
01-17-2022 13:13:06
01-17-2022 13:13:06
transformers
15,177
closed
processing texts longer than 512 tokens with token-classification pipeline
# 🚀 Feature request Currently, the token-classification pipeline truncates input texts longer than 512 tokens. It would be great if the pipeline could process texts of any length. ## Motivation This issue is a result of this feature request: huggingface/hub-docs#11 As suggested, I am tagging @Narsil here. ## Your contribution For my [punctuation prediction model](https://huggingface.co/oliverguhr/fullstop-punctuation-multilang-large), I wrote inference code for this task. This code segments a longer text into 512-token chunks with some overlap and reconstructs the final output string. The code, however, is rather complex and tailored to my punctuation prediction use case, so I am not sure if this is useful here.
01-17-2022 12:58:19
01-17-2022 12:58:19
Hi @oliverguhr , So this specific pipelines truncates by default, and changing that would mean a breaking change. The problem with "chunking" on this pipeline is that models usually need context before and after a token to get proper context to get a useful classification. That means, than in order to achieve this, the pipeline should be able chunk into multiple parts of text, with overlap (usually called stride, at least in vision, but we borrowed that name in pipelines too). Run the `token-classification` on all the parts, and then resolve conflicts to give a final output. Two things to have in mind: - Conflict resolution is the tricky part and I don't think we can find a win always solution. It means that it's likely that the final result won't be as good as if we could have sent the full text to the model. Consider the Example: ``` [The best city in the world is New York, because of Central Park] -------------------------------------- ---------------------------------------- ``` Both parts "see" New York, but one said it's a location while the other says it's nothing with greater confidence for instance. How would you resolve this ? What if New and York get different predictions for both parts. There's already a relatively complex resolution for tokens in the single case, this would make it more complex (that's ok). - This can never be a default, always an opt-in. The pipelines' goal is to abstract away everything dealing with Tensors and logits, and manipulate simpler objects like strings and dicts. However, models being limited in range **is** a core limitation of those models (some models have an extremely long range) that we don't want to hide away (or at least not by default) especially if the result is not going to be the best one in all situations. Currently at least, we want users to know when there's a limitation and explicitly activate long range fallbacks. The only exception to the rule is `question-answering` currently.<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread. Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored.<|||||>I've been looking for a similar feature to this, and also written custom, non-pipeline code to handle it so far. I can't afford to throw away text past the max length, so use the stride as you describe to include additional context. This would be really handy to have included in the pipeline, given that the functionality to wrap overflowing tokens is in the tokenizer. I agree that conflict resolution is the difficult part here, and will add complexity. I'd see it as similar to the way entities are aggregated from tokens. The pipeline would allow strategies to be passed, where this may use strategies such as "greatest confidence", "prefer entities", "prefer first" etc. (all pretty naive strategies, I know). This would let the user decide how truncation is, or isn't applied, with a flag to discard anything past the model max length (or some overriding max length). I've been trying to implement this in a TokenClassificationPipeline subclass recently. If this truncation/overflowing were to be included in the pipeline, where would it go? Would it be part of the `preprocess` function? Or does it also need to be included in the `collate_fn` of the base? 
The `overflow_to_sample_mapping` key currently breaks the existing `collate_fn` as its shape is different, so something needs to change there.<|||||>> If this truncation/overflowing were to be included in the pipeline, where would it go? Would it be part of the preprocess function? Hi @jammie19, any help would be super welcome here ! You would need to switch from `Pipeline` to `ChunkPipeline` (changes assumption of 1input-1 model pass, to 1 input-n model pass, while preserving batching, collation etc..). Codewise this change is not too hard (going for `return` to `yield`, you can check `ZeroShotClassificationPipeline` for inspiration. That should take care of the `collate_fn` for you (it's in `pipelines.pt_utils`) and `uncollate_fn` if you will too (since batching doesn't have to align with the number of items your input is creating). So preprocess would handle the chunking/overflowing+stride mecanisms, leveraging the tokenizer, but the post-process would be in charge of bringing the pieces together, `AutomaticSpeechRecognition` is another `ChunkPipeline` which has more postprocessing logic to stitch outputs. Overall without actually digging into this work, it's hard to give you 100% confident advice on how to structure this. Aim for little code change and consistent with what's being done in the 2 other pipelines I have pointed out and it should be a good starting point. Forcing to use `stride` on this to prevent handling conflicts at the boundaries, would certainly help (A bit similar to what's being done here I would say: https://huggingface.co/blog/asr-chunking) If you start working on this, don't hesitate to share your work early and ping me or others to receive feedback on the direction.<|||||>Thanks for the advice and links. I see, I think that's where I've been going wrong. Ultimately, I want to be able to: - make one call to `tokenize`, passing the entire batch at once - Then apply batching before passing inputs to the `forward` function - Then postprocess all inputs at once, such that the `overflow_to_sample_mapping` values are all correct and batch-independent (unaligned, as you say) From the sounds of things, that's where the `collate_fn` and `uncollate_fn` functions come in. Does that seem like a sensible workflow?<|||||>> make one call to tokenize, passing the entire batch at once Then apply batching before passing inputs to the forward function Then postprocess all inputs at once, such that the overflow_to_sample_mapping values are all correct and batch-independent (unaligned, as you say) This is exacly what `ChunkPipeline` does for you.. for essentially free. You need to yield items in the preprocess instead of of returning a whole batch, and add `is_last` key (so the unbatcher knows when to stop). `is_last` just needs to be True for the last item of the overflowing tokens. `_forward` will see the batch, but no modification is needed in this function, since it's the class itself that will handle the `collate_fn` again for free for you. The batch_size might be very different than the number of items `preprocess` sent, it's very orthogonal. So you could have a large input that gets split into 100 chunks, and still process them with `batch_size=64` if that's the capacity limit of your GPU let's say. The difference between `batch_size` and chunk_number is essential and often overlooked :) `postprocess` will see all the items at once (the same number of items `preprocess` sent. <|||||>Great, that sounds perfect then (I should have looked at that before, it's been... 
difficult trying to hack that into a standard `Pipeline` subclass). If I come up with anything useful and reusable, I'll open a PR<|||||>> Great, that sounds perfect then (I should have looked at that before, it's been... difficult trying to hack that into a standard Pipeline subclass). Perfectly understandable `ChunkPipeline` are relatively new, and mostly introduced after a big refactor, only to enable batching back onto `ZeroShot` and `QuestionAnswering`. It's not super advertised, since implementation is definitely more involved.<|||||>Hi all, I have come to a similar issue. My setup is I get the text of an image caption with several panels on it and I must separate the panels. My approach so far is using NER to classify tokens as 'O' or 'START-PANEL'. We use a BERT-based model but a substantial part of the captions we have are >512 tokens. I would be glad to give a try to build a kind of LongTextTokenClassificationPipeline following your comments if this has not yet been tried. After experimenting with our model, in most of our cases `stride` would not be needed since the model keeps a high performance, but I could try to add something like that if the experience of buildiing this pipeline would be successful and useful for anyone <|||||>> We use a BERT-based model but a substantial part of the captions we have are >512 tokens. I would be glad to give a try to build a kind of LongTextTokenClassificationPipeline following your comments if this has not yet been tried. That would be awesome ! If you want this to be merged in `transformers` there is one key aspect, is that the chunking MUST be opt-in (because it will affect performance on models different from yours so we cannot do it without the users activating the option). If you want to be easier, you could implement it exactly as you want and share it, and we could check what/how we can merge back afterwards.<|||||>Thanks for the answer :) I am following an approach that is basically using the `TokenClassificationPipeline` and modify it to be used with `ChunkPipeline` instead of `Pipeline`. So far things seem to be working. I am using the `stride` option for generalization. Will deal with how to aggregate the data when arriving to the `postprocess` part, but in the `preprocess` it does not generate a big issue. I am currently in the `preprocess` method. My current approach is to first use `_get_sentence_chunks` to estimate the total amount of chunks a sentence will be split on. The next step is to divide the model inputs into these chunks. Here I am struggling a bit on what would be the most efficient way. My thought was to set up `return_tensors=None` in the call to the tokenizer. This would give me lists that could be then sliced into the different parts of the sentence. Then, I need to add [CLS] and [END] to each chunk. The other options would be to return the tensors from the tokenizer and then generate a function that slides the tensors and concatenates them with [CLS] and [END], doing it for each framework. Then I would do a function (still not implemented here) that takes care of converting each list into a vector based on `self.framework` to make the pipeline general. Does this sound like a proper approach to you? <|||||>Sorry for the radio silence @Narsil - I actually implemented this back in March, but due to issues with my employer's open source policy, couldn't publish the changes. These issues have been remedied, and I've got a branch with this token classification pipeline and associated tests. 
I've implemented the pipeline, but had to migrate the tests from pytest to unittest, so these tests don't all pass in the HF suite yet. I can open a (draft) PR at this stage to give you a chance to comment? I'll try to finish those other test fixes in the next few days and publish the actual PR<|||||>Sure, please publish! Better to start the discussion early. I don't think we rely heavily on `pytest`, but I could be wrong.<|||||>I've gotten around to updating the test cases on the PR; they're passing now, so I've published it. Let me know if it needs changes or explanation.<|||||>Any updates on this topic? Having a mechanism to handle more than 512 tokens for token classification inference is certainly a useful feature to have.<|||||>I've got a [PR open](https://github.com/huggingface/transformers/pull/19735) which should solve this, but it hasn't been reviewed yet. I've also had some issues with the CI tasks that I need someone from the HF team to help with.
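For readers following this thread, here is a minimal sketch of the `ChunkPipeline` approach described above. It is an illustrative assumption, not the pipeline that was actually contributed: the class name `LongTextTokenClassificationPipeline`, the argmax-based postprocessing, and the omission of stride/overlap resolution are simplifications, and it assumes a fast tokenizer so that `return_overflowing_tokens=True` yields one encoding row per chunk.

```python
from transformers.pipelines.base import ChunkPipeline


class LongTextTokenClassificationPipeline(ChunkPipeline):
    """Hypothetical sketch: token classification on texts longer than the model's max length."""

    def _sanitize_parameters(self, stride=None, **kwargs):
        preprocess_params = {}
        if stride is not None:
            preprocess_params["stride"] = stride
        return preprocess_params, {}, {}

    def preprocess(self, text, stride=0):
        # Let the tokenizer do the chunking: each overflowing window becomes one model pass.
        encodings = self.tokenizer(
            text,
            truncation=True,
            return_overflowing_tokens=True,
            stride=stride,
            return_tensors=self.framework,
        )
        num_chunks = len(encodings["input_ids"])
        for i in range(num_chunks):
            model_inputs = {
                k: v[i : i + 1] for k, v in encodings.items() if k != "overflow_to_sample_mapping"
            }
            # `is_last` tells the pipeline machinery when one original input has been fully consumed.
            yield {"is_last": i == num_chunks - 1, **model_inputs}

    def _forward(self, model_inputs):
        is_last = model_inputs.pop("is_last")
        outputs = self.model(**model_inputs)
        return {"is_last": is_last, "logits": outputs.logits, **model_inputs}

    def postprocess(self, all_outputs):
        # `all_outputs` holds every chunk of one original input; stitching the per-chunk
        # predictions back together (stride/overlap resolution) is omitted in this sketch.
        return [output["logits"].argmax(-1) for output in all_outputs]
```

Instantiation would presumably look like `LongTextTokenClassificationPipeline(model=model, tokenizer=tokenizer)`; batching across chunks is then handled by the `ChunkPipeline` base class, as discussed above.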
transformers
15,176
closed
Bug in seq2seq fine tuning section of Automatic Speech Recognition Examples
## Environment info - `transformers` version: 4.16.0.dev0 - Platform: Linux-5.4.144+-x86_64-with-Ubuntu-18.04-bionic - Python version: 3.7.12 - PyTorch version (GPU?): 1.10.0+cu111 (True) - Tensorflow version (GPU?): 2.7.0 (True) - Flax version (CPU?/GPU?/TPU?): not installed (NA) - Jax version: not installed - JaxLib version: not installed - Using GPU in script?: Yes - Using distributed or parallel set-up in script?: No ### Who can help @patrickvonplaten , @anton-l ## Information Model I am using (Bert, XLNet ...): Wave2vec2-2-BART fine tuning The problem arises when using: * [ ] the official example scripts: In the code snippet from this section : https://github.com/huggingface/transformers/tree/master/examples/pytorch/speech-recognition#sequence-to-sequence In this line : ```model = SpeechEncoderDecoderModel.from_encoder_decoder_pretrained(encoder_id, decoder_id, encoder_add_adapter=True, encoder_feat_proj_dropout=0.0, encoder_layerdrop=0.0, max_length=200, num_beams=5)``` the arguments ```encoder_add_adapter=True, encoder_feat_proj_dropout=0.0, encoder_layerdrop=0.0``` throws an error with the traceback: ```TypeError: __init__() got an unexpected keyword argument 'add_adapter'``` When I remove those 3 arguments, it works and trains properly. Per the Readme, these arguments are added because`**by default a single output vector of wav2vec2-base has a receptive field of ca. 25ms (cf. with section 4.2 of the official Wav2Vec2 paper), which represents a little less a single character. BART on the other hand makes use of a sentence-piece tokenizer as an input processor so that a single hidden vector of bart-base represents ca. 4 characters. To better align the output of Wav2Vec2 and BART's hidden vectors for the cross-attention mechanism, we further subsample Wav2Vec2's output by a factor of 8 by adding a convolution-based adapter.**' It seems the accuracy would take a hit on removing those arguments. However, adding them raises exceptions. Could you please look into this? @patrickvonplaten
01-17-2022 11:30:36
01-17-2022 11:30:36
Thanks for reporting this. It should be fixed soon: https://github.com/huggingface/transformers/pull/15056. I'll make sure to speed up the process now - sorry for being so late on this.<|||||>No worries! Thanks for the prompt resolution.
transformers
15,175
closed
Compute loss independent from decoder for TF EncDec models (as #14139)
# What does this PR do? Apply the same change as in #14139 to TF Encoder-Decoder-like models. (As discussed in [this comment in 14148](https://github.com/huggingface/transformers/pull/14148#discussion_r780361117)) @NielsRogge & @patrickvonplaten know the context better; @gante @Rocketknight1 for TF
01-17-2022 08:50:49
01-17-2022 08:50:49
transformers
15,174
closed
Fix handling of multiple stopping criteria
If multiple stopping criteria are used, the current logic will break.
01-17-2022 06:47:20
01-17-2022 06:47:20
@christiancosgrove Do you mind sharing the bug you are trying to fix ? What you are trying to fix should be handled here: https://github.com/huggingface/transformers/blob/master/src/transformers/generation_stopping_criteria.py#L113<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread. Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored.
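As a point of reference for this PR discussion, the sketch below shows how multiple criteria are meant to combine: `StoppingCriteriaList` stops generation as soon as any of its members fires. The `StopOnToken` class is a hypothetical example, and the exact call/return conventions can differ across `transformers` versions.

```python
import torch
from transformers import MaxTimeCriteria, StoppingCriteria, StoppingCriteriaList


class StopOnToken(StoppingCriteria):
    """Hypothetical criterion: stop once every sequence in the batch ends with `token_id`."""

    def __init__(self, token_id):
        self.token_id = token_id

    def __call__(self, input_ids, scores, **kwargs):
        return bool((input_ids[:, -1] == self.token_id).all())


# Two criteria combined: time budget OR a specific final token.
criteria = StoppingCriteriaList([MaxTimeCriteria(max_time=60.0), StopOnToken(token_id=2)])

fake_ids = torch.tensor([[5, 7, 2]])   # last generated token is 2
fake_scores = torch.zeros(1, 10)
print(criteria(fake_ids, fake_scores))  # truthy: StopOnToken fired, so generation would stop

# In generation, the list would be passed as `model.generate(..., stopping_criteria=criteria)`.
```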
transformers
15,173
closed
Add class LayoutLMv2ForRelationExtraction
# What does this PR do? Add class LayoutLMv2ForRelationExtraction, which is in https://github.com/microsoft/unilm/tree/master/layoutlmft
01-17-2022 03:32:24
01-17-2022 03:32:24
I've [implemented](https://github.com/woflowinc/transformers/tree/add-layoutlmv2-re) the requested changes, but I'm not sure how to contribute to this PR from here.<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread. Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored.<|||||>Hi @quasimik, could you open up a clean PR such that we can add it?
transformers
15,172
closed
Fix encoder-decoder models when labels is passed
# What does this PR do? Fix encoder-decoder models when labels is passed and `return_dict` is `False`. ## Details This line https://github.com/huggingface/transformers/blob/669e3c50c98ad5b506555a551d2ecbf72ceb3c99/src/transformers/models/encoder_decoder/modeling_encoder_decoder.py#L526 fails when `labels` is passed to `EncoderDecoderModel` + `return_dict = False` Since we don't pass `labels` to decoder, `decoder_outputs` won't return `loss`. When `return_dict` is `False`, `logits` will be the first element in the returned tuple `decoder_outputs`. ## The issue ``` import os, tempfile from transformers import EncoderDecoderModel, BertConfig, AutoTokenizer, BertModel, BertLMHeadModel config = BertConfig(num_hidden_layers=2, hidden_size=4, num_attention_heads=2, intermediate_size=4) enc = BertModel(config) dec = BertLMHeadModel(config) with tempfile.TemporaryDirectory() as tmpdir: enc.save_pretrained(os.path.join(tmpdir, "enc")) dec.save_pretrained(os.path.join(tmpdir, "dec")) enc_dec = EncoderDecoderModel.from_encoder_decoder_pretrained( os.path.join(tmpdir, "enc"), os.path.join(tmpdir, "dec") ) enc_dec.config.pad_token_id = 0 enc_dec.config.decoder_start_token_id = 1 tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased") enc_text = "i love cats" dec_text = "my cat is cute" enc_inputs = tokenizer(enc_text, return_tensors="pt") dec_inputs = tokenizer(dec_text, return_tensors="pt") # This fails inputs = { "input_ids": enc_inputs["input_ids"], "labels": dec_inputs["input_ids"] } outputs = enc_dec(**inputs, return_dict=False) ``` This gives the error ``` loss = loss_fct(logits.reshape(-1, self.decoder.config.vocab_size), labels.view(-1)) AttributeError: 'tuple' object has no attribute 'reshape' ``` ## Who can review? @NielsRogge
01-16-2022 13:28:33
01-16-2022 13:28:33
@NielsRogge Could you take a look whenever you have some time? Or tag another person if you think that's more appropriate. Thanks!<|||||>@NielsRogge - would be amazing if you could wait a day or so here so that I can quickly take a look as well before merging :-)
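A standalone illustration of the guard this PR introduces may help readers: with `return_dict=False` the decoder returns a plain tuple, so the loss computation has to index into it rather than read `.logits`. The helper below is a hypothetical sketch, not the library code.

```python
import torch
from torch.nn import CrossEntropyLoss

vocab_size = 10
logits = torch.randn(2, 5, vocab_size)                 # (batch, sequence_length, vocab_size)
labels = torch.randint(0, vocab_size, (2, 5))


def compute_loss(decoder_outputs, return_dict):
    # With return_dict=False the decoder hands back a tuple, so `.logits` does not exist;
    # the logits are simply the first element of that tuple.
    decoder_logits = decoder_outputs.logits if return_dict else decoder_outputs[0]
    return CrossEntropyLoss()(decoder_logits.reshape(-1, vocab_size), labels.view(-1))


print(compute_loss((logits,), return_dict=False))      # the tuple case no longer raises
```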
transformers
15,171
closed
RuntimeError: expected scalar type Half but found Float
## Environment info - `transformers` version: 4.4.0 - Platform: Linux-4.4.0-142-generic-x86_64-with-glibc2.10 - Python version: 3.8.5 - PyTorch version (GPU?): 1.9.0 (True) - Tensorflow version (GPU?): 2.6.0 (False) - Using GPU in script?: <fill in> - Using distributed or parallel set-up in script?: <fill in> ## Models: - BERT ## Library: - Deepspeed: 0.5.9 ## Error: ``` Traceback (most recent call last): File "run_glue_sparse.py", line 569, in <module> main() File "run_glue_sparse.py", line 509, in main trainer.train() File "/remote-home/zyyin/anaconda3/envs/grad/lib/python3.8/site-packages/transformers/trainer.py", line 1105, in train self._maybe_log_save_evaluate(tr_loss, model, trial, epoch) File "/remote-home/zyyin/anaconda3/envs/grad/lib/python3.8/site-packages/transformers/trainer.py", line 1198, in _maybe_log_save_evaluate metrics = self.evaluate() File "/remote-home/zyyin/anaconda3/envs/grad/lib/python3.8/site-packages/transformers/trainer.py", line 1667, in evaluate output = self.prediction_loop( File "/remote-home/zyyin/anaconda3/envs/grad/lib/python3.8/site-packages/transformers/trainer.py", line 1800, in prediction_loop loss, logits, labels = self.prediction_step(model, inputs, prediction_loss_only, ignore_keys=ignore_keys) File "/remote-home/zyyin/anaconda3/envs/grad/lib/python3.8/site-packages/transformers/trainer.py", line 1914, in prediction_step loss, outputs = self.compute_loss(model, inputs, return_outputs=True) File "/remote-home/zyyin/anaconda3/envs/grad/lib/python3.8/site-packages/transformers/trainer.py", line 1475, in compute_loss outputs = model(**inputs) File "/remote-home/zyyin/anaconda3/envs/grad/lib/python3.8/site-packages/torch/nn/modules/module.py", line 1051, in _call_impl return forward_call(*input, **kwargs) File "/remote-home/zyyin/Experiment/glue/pretraining/sparse_modeling.py", line 1166, in forward outputs = self.bert( File "/remote-home/zyyin/anaconda3/envs/grad/lib/python3.8/site-packages/torch/nn/modules/module.py", line 1051, in _call_impl return forward_call(*input, **kwargs) File "/remote-home/zyyin/Experiment/glue/pretraining/sparse_modeling.py", line 822, in forward encoder_output = self.encoder( File "/remote-home/zyyin/anaconda3/envs/grad/lib/python3.8/site-packages/torch/nn/modules/module.py", line 1051, in _call_impl return forward_call(*input, **kwargs) File "/remote-home/zyyin/Experiment/glue/pretraining/sparse_modeling.py", line 580, in forward layer_out = layer_module( File "/remote-home/zyyin/anaconda3/envs/grad/lib/python3.8/site-packages/torch/nn/modules/module.py", line 1051, in _call_impl return forward_call(*input, **kwargs) File "/remote-home/zyyin/Experiment/glue/pretraining/sparse_modeling.py", line 455, in forward self_attn_out = self.attention(pre_attn_input, attention_mask) File "/remote-home/zyyin/anaconda3/envs/grad/lib/python3.8/site-packages/torch/nn/modules/module.py", line 1051, in _call_impl return forward_call(*input, **kwargs) File "/remote-home/zyyin/Experiment/glue/pretraining/sparse_modeling.py", line 387, in forward context_layer = self.self(input_tensor, key_padding_mask) File "/remote-home/zyyin/anaconda3/envs/grad/lib/python3.8/site-packages/torch/nn/modules/module.py", line 1051, in _call_impl return forward_call(*input, **kwargs) File "/remote-home/zyyin/anaconda3/envs/grad/lib/python3.8/site-packages/deepspeed/ops/sparse_attention/bert_sparse_self_attention.py", line 66, in forward mixed_query_layer = self.query(hidden_states) File 
"/remote-home/zyyin/anaconda3/envs/grad/lib/python3.8/site-packages/torch/nn/modules/module.py", line 1051, in _call_impl return forward_call(*input, **kwargs) File "/remote-home/zyyin/anaconda3/envs/grad/lib/python3.8/site-packages/torch/nn/modules/linear.py", line 96, in forward return F.linear(input, self.weight, self.bias) File "/remote-home/zyyin/anaconda3/envs/grad/lib/python3.8/site-packages/torch/nn/functional.py", line 1847, in linear return torch._C._nn.linear(input, weight, bias) RuntimeError: expected scalar type Half but found Float ``` command line: ``` CUDA_VISIBLE_DEVICES=2 python run_glue_sparse.py \ --model_name_or_path ./outputs/cola/checkpoint-1 \ --task_name cola \ --max_seq_length 128 \ --run_name cola \ --output_dir ./outputs/cola \ --overwrite_output_dir \ --do_train \ --do_eval \ --do_predict \ --fp16 \ --fp16_full_eval \ --evaluation_strategy steps \ --eval_steps 1 \ --save_strategy no \ --save_steps 1 \ --per_device_train_batch_size 1 \ --gradient_accumulation_steps 1 \ --per_device_eval_batch_size 1 \ --learning_rate 2e-5 \ --weight_decay 0.01 \ --max_grad_norm 1.0 \ --num_train_epochs 5 \ --lr_scheduler_type polynomial \ --warmup_steps 0 ``` It's running well when model is training, and this error appears in the eval phase. It is ok if I give only --do_eval/--do_train command. Thank you very much! @sgugger
01-16-2022 13:17:24
01-16-2022 13:17:24
What is `run_glue_sparse.py`?<|||||>> What is `run_glue_sparse.py`? Hello, we just implement our Sparse Attention by deepspeed in run_glue.py. You can just think it as run_glue.py.<|||||>Without knowing the model you are using and the script you are using, we can't reproduce the bug and cannot help.<|||||>> Without knowing the model you are using and the script you are using, we can't reproduce the bug and cannot help. Hello, this is my code. Thanks! ``` # coding=utf-8 # Copyright 2021 Intel Corporation. All rights reserved. # DeepSpeed note, code taken from commit 3d59216cec89a363649b4fe3d15295ba936ced0f # https://github.com/NVIDIA/DeepLearningExamples/blob/master/PyTorch/LanguageModeling/BERT/modeling.py # Deepspeed code taken from commit: 35b4582486fe096a5c669b6ca35a3d5c6a1ec56b # https://github.com/microsoft/DeepSpeedExamples/tree/master/bing_bert # RMS Norm taken from: https://github.com/EleutherAI/gpt-neox/blob/main/megatron/model/norms.py # # Copyright 2018 The Google AI Language Team Authors and The HugginFace Inc. team. # Copyright (c) 2018, NVIDIA CORPORATION. All rights reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. """PyTorch BERT model.""" from __future__ import absolute_import, division, print_function, unicode_literals import copy import logging import math import os import sys import torch import torch.nn.functional as F import torch.nn.init as init from torch import nn from torch.nn import CrossEntropyLoss, Module from torch.nn.modules.loss import MSELoss from torch.nn.parameter import Parameter from torch.utils import checkpoint from transformers import BertConfig, PreTrainedModel from transformers.modeling_outputs import SequenceClassifierOutput, QuestionAnsweringModelOutput,MaskedLMOutput from transformers.models.bart.modeling_bart import BartForConditionalGeneration import deepspeed.ops.sparse_attention as deepspeed_sparse_attn logger = logging.getLogger(__name__) def get_deepspeed_config(args): if hasattr(args, "deepspeed_config") and args.deepspeed_config: from deepspeed import DeepSpeedConfig return DeepSpeedConfig(None, param_dict=args.ds_config) else: raise RuntimeError("deepspeed_config is not found in args.") @torch.jit.script def f_gelu(x): return F.gelu(x) # @torch.jit.script # def f_gelu(x): # pdtype = x.dtype # x = x.float() # y = x * 0.5 * (1.0 + torch.erf(x / math.sqrt(2.0))) # return y.to(pdtype) # @torch.jit.script def bias_gelu(bias, y): x = bias + y return F.gelu(x) # def bias_gelu(bias, y): # x = bias + y # return x * 0.5 * (1.0 + torch.erf(x / 1.41421)) @torch.jit.script def bias_relu(bias, y): x = bias + y return F.relu(x) # @torch.jit.script # def bias_gelu(bias, y): # x = bias + y # return x * 0.5 * (1.0 + torch.erf(x / 1.41421)) @torch.jit.script def bias_tanh(bias, y): x = bias + y return torch.tanh(x) def gelu(x): """Implementation of the gelu activation function. 
For information: OpenAI GPT's gelu is slightly different (and gives slightly different results): 0.5 * x * (1 + torch.tanh(math.sqrt(2 / math.pi) * (x + 0.044715 * torch.pow(x, 3)))) Also see https://arxiv.org/abs/1606.08415 """ return f_gelu(x) def swish(x): return x * torch.sigmoid(x) ACT2FN = {"gelu": F.gelu, "relu": F.relu, "swish": swish, "tanh": F.tanh} class LinearActivation(Module): r"""Fused Linear and activation Module.""" __constants__ = ["bias"] def __init__(self, in_features, out_features, act="gelu", bias=True): super(LinearActivation, self).__init__() self.in_features = in_features self.out_features = out_features self.fused_gelu = False self.fused_tanh = False self.fused_relu = False if isinstance(act, str) or (sys.version_info[0] == 2 and isinstance(act, unicode)): if bias and act == "gelu": self.fused_gelu = True elif bias and act == "tanh": self.fused_tanh = True elif bias and act == "relu": self.fused_relu = True else: self.act_fn = ACT2FN[act] else: self.act_fn = act self.weight = Parameter(torch.Tensor(out_features, in_features)) if bias: self.bias = Parameter(torch.Tensor(out_features)) else: self.register_parameter("bias", None) self.reset_parameters() def reset_parameters(self): init.kaiming_uniform_(self.weight, a=math.sqrt(5)) if self.bias is not None: fan_in, _ = init._calculate_fan_in_and_fan_out(self.weight) bound = 1 / math.sqrt(fan_in) init.uniform_(self.bias, -bound, bound) def forward(self, inp): if self.fused_gelu: return bias_gelu(self.bias, F.linear(inp, self.weight, None)) elif self.fused_tanh: return bias_tanh(self.bias, F.linear(inp, self.weight, None)) elif self.fused_relu: return bias_relu(self.bias, F.linear(inp, self.weight, None)) else: return self.act_fn(F.linear(inp, self.weight, self.bias)) def extra_repr(self): return "in_features={}, out_features={}, bias={}".format( self.in_features, self.out_features, self.bias is not None ) class RegularLinearActivation(Module): """Regular Linear activation module with""" def __init__(self, in_features, out_features, act="gelu"): super(RegularLinearActivation, self).__init__() self.dense = nn.Linear(in_features, out_features) if isinstance(act, str) or (sys.version_info[0] == 2 and isinstance(act, unicode)): self.act = ACT2FN[act] def forward(self, hidden_states): return self.act(self.dense(hidden_states)) def get_apex_layer_norm(): try: import apex # apex.amp.register_half_function(apex.normalization.fused_layer_norm, 'FusedLayerNorm') import apex.normalization # apex.amp.register_float_function(apex.normalization.FusedLayerNorm, 'forward') return apex.normalization.FusedLayerNorm except ImportError: raise Exception(f"Layer norm of type apex is not available, apex not installed.") class RMSNorm(torch.nn.Module): def __init__(self, dim, p=-1.0, eps=1e-8, bias=False): """ Root Mean Square Layer Normalization :param dim: model size :param p: partial RMSNorm, valid value [0, 1], default -1.0 (disabled) :param eps: epsilon value, default 1e-8 :param bias: whether use bias term for RMSNorm, disabled by default because RMSNorm doesn't enforce re-centering invariance. 
""" super(RMSNorm, self).__init__() self.eps = eps self.d = dim self.p = p self.bias = bias self.scale = torch.nn.Parameter(torch.ones(dim)) self.register_parameter("scale", self.scale) if self.bias: self.offset = torch.nn.Parameter(torch.zeros(dim)) self.register_parameter("offset", self.offset) def forward(self, x): if self.p < 0.0 or self.p > 1.0: norm_x = x.norm(2, dim=-1, keepdim=True) d_x = self.d else: partial_size = int(self.d * self.p) partial_x, _ = torch.split(x, [partial_size, self.d - partial_size], dim=-1) norm_x = partial_x.norm(2, dim=-1, keepdim=True) d_x = partial_size rms_x = norm_x * d_x ** (-1.0 / 2) x_normed = x / (rms_x + self.eps) if self.bias: return self.scale * x_normed + self.offset return self.scale * x_normed LAYER_NORM_TYPES = {"pytorch": nn.LayerNorm, "apex": get_apex_layer_norm(), "rms_norm": RMSNorm} def get_layer_norm_type(config): if config.layer_norm_type in LAYER_NORM_TYPES: return LAYER_NORM_TYPES[config.layer_norm_type] else: raise Exception(f"Layer norm of type {config.layer_norm_type} is not available.") class BertEmbeddings(nn.Module): """Construct the embeddings from word, position and token_type embeddings.""" def __init__(self, config): super(BertEmbeddings, self).__init__() self.config = config self.word_embeddings = nn.Embedding(config.vocab_size, config.hidden_size) self.position_embeddings = nn.Embedding(config.max_position_embeddings, config.hidden_size) self.token_type_embeddings = nn.Embedding(config.type_vocab_size, config.hidden_size) # self.LayerNorm is not snake-cased to stick with TensorFlow model variable name and be able to load # any TensorFlow checkpoint file BertLayerNorm = get_layer_norm_type(config) self.LayerNorm = BertLayerNorm(config.hidden_size, eps=1e-12) self.dropout = nn.Dropout(config.hidden_dropout_prob) def forward(self, input_ids, token_type_ids=None): seq_length = input_ids.size(1) position_ids = torch.arange(seq_length, dtype=torch.long, device=input_ids.device) position_ids = position_ids.unsqueeze(0).expand_as(input_ids) if token_type_ids is None: token_type_ids = torch.zeros_like(input_ids) words_embeddings = self.word_embeddings(input_ids) position_embeddings = self.position_embeddings(position_ids) token_type_embeddings = self.token_type_embeddings(token_type_ids) embeddings = words_embeddings + position_embeddings + token_type_embeddings if self.config.useLN: embeddings = self.LayerNorm(embeddings) embeddings = self.dropout(embeddings) return embeddings class BertSelfOutput(nn.Module): def __init__(self, config): super(BertSelfOutput, self).__init__() self.dense = nn.Linear(config.hidden_size, config.hidden_size) self.dense.bert_output_layer = True self.dropout = nn.Dropout(config.hidden_dropout_prob) def forward(self, hidden_states, input_tensor): hidden_states = self.dense(hidden_states) hidden_states = self.dropout(hidden_states) return hidden_states class BertAttention(nn.Module): def __init__(self, config): super(BertAttention, self).__init__() import deepspeed.ops.sparse_attention as deepspeed_sparse_attn sparsity_config = deepspeed_sparse_attn.BigBirdSparsityConfig(config.num_attention_heads, 16) self.self = deepspeed_sparse_attn.BertSparseSelfAttention(config, sparsity_config) self.output = BertSelfOutput(config) def forward(self, input_tensor, attention_mask): # key_padding_mask_mode is "add" by default in deepspeed.ops.sparse_attention.BertSparseSelfAttention key_padding_mask = attention_mask.squeeze(1).squeeze(1) # (bsz, seq_len) # key_padding_mask[key_padding_mask == -10000.] 
= -float('inf') # key_padding_mask = key_padding_mask.half() dtype = input_tensor.dtype input_tensor = input_tensor.half() context_layer = self.self(input_tensor, key_padding_mask) attention_output = self.output(context_layer, input_tensor) attention_probs = None output = ( attention_output, attention_probs, ) return output class BertIntermediate(nn.Module): def __init__(self, config): super(BertIntermediate, self).__init__() if config.fused_linear_layer: linear_layer = LinearActivation else: linear_layer = RegularLinearActivation self.dense_act = linear_layer( config.hidden_size, config.intermediate_size, act=config.hidden_act ) def forward(self, hidden_states): hidden_states = self.dense_act(hidden_states) return hidden_states class BertOutput(nn.Module): def __init__(self, config): super(BertOutput, self).__init__() self.dense = nn.Linear(config.intermediate_size, config.hidden_size) self.dense.bert_output_layer = True self.dropout = nn.Dropout(config.hidden_dropout_prob) def forward(self, hidden_states): hidden_states = self.dense(hidden_states) hidden_states = self.dropout(hidden_states) return hidden_states class BertLayer(nn.Module): def __init__(self, config): super(BertLayer, self).__init__() self.attention = BertAttention(config) self.config = config BertLayerNorm = get_layer_norm_type(config) self.PreAttentionLayerNorm = BertLayerNorm(config.hidden_size, eps=1e-12) self.PostAttentionLayerNorm = BertLayerNorm(config.hidden_size, eps=1e-12) self.intermediate = BertIntermediate(config) self.output = BertOutput(config) def maybe_layer_norm(self, hidden_states, layer_norm, current_ln_mode): if self.config.useLN and self.config.encoder_ln_mode in current_ln_mode: return layer_norm(hidden_states) else: return hidden_states def forward(self, hidden_states, attention_mask, action=1, keep_prob=1.0): attention_probs = None intermediate_input = None if action == 0: intermediate_input = hidden_states else: pre_attn_input = self.maybe_layer_norm( hidden_states, self.PreAttentionLayerNorm, "pre-ln" ) self_attn_out = self.attention(pre_attn_input, attention_mask) attention_output, attention_probs = self_attn_out attention_output = attention_output * 1 / keep_prob intermediate_input = hidden_states + attention_output intermediate_input = self.maybe_layer_norm( intermediate_input, self.PreAttentionLayerNorm, "post-ln" ) if action == 0: layer_output = intermediate_input else: intermediate_pre_ffn = self.maybe_layer_norm( intermediate_input, self.PostAttentionLayerNorm, "pre-ln" ) intermediate_output = self.intermediate(intermediate_pre_ffn) layer_output = self.output(intermediate_output) layer_output = layer_output * 1 / keep_prob layer_output = layer_output + intermediate_input layer_output = self.maybe_layer_norm( layer_output, self.PostAttentionLayerNorm, "post-ln" ) output = ( layer_output, attention_probs, ) return output class BertEncoder(nn.Module): def __init__(self, config, args): super(BertEncoder, self).__init__() self.config = config BertLayerNorm = get_layer_norm_type(config) self.FinalLayerNorm = BertLayerNorm(config.hidden_size, eps=1e-12) self.is_transformer_kernel = ( hasattr(args, "deepspeed_transformer_kernel") and args.deepspeed_transformer_kernel ) if hasattr(args, "deepspeed_transformer_kernel") and args.deepspeed_transformer_kernel: from deepspeed import DeepSpeedTransformerConfig, DeepSpeedTransformerLayer ds_config = get_deepspeed_config(args) has_huggingface = hasattr(args, "huggingface") ds_transformer_config = DeepSpeedTransformerConfig( 
batch_size=ds_config.train_micro_batch_size_per_gpu, hidden_size=config.hidden_size, intermediate_size=config.intermediate_size, heads=config.num_attention_heads, attn_dropout_ratio=config.attention_probs_dropout_prob, hidden_dropout_ratio=config.hidden_dropout_prob, num_hidden_layers=config.num_hidden_layers, initializer_range=config.initializer_range, local_rank=args.local_rank if hasattr(args, "local_rank") else -1, seed=args.seed, fp16=ds_config.fp16_enabled, pre_layer_norm=True if "pre-ln" in config.encoder_ln_mode else False, normalize_invertible=args.normalize_invertible, gelu_checkpoint=args.gelu_checkpoint, adjust_init_range=True, attn_dropout_checkpoint=args.attention_dropout_checkpoint, stochastic_mode=args.stochastic_mode, huggingface=has_huggingface, training=self.training, ) self.layer = nn.ModuleList( [ copy.deepcopy(DeepSpeedTransformerLayer(ds_transformer_config)) for _ in range(config.num_hidden_layers) ] ) else: layer = BertLayer(config) self.layer = nn.ModuleList( [copy.deepcopy(layer) for _ in range(self.config.num_hidden_layers)] ) def add_attention(self, all_attentions, attention_probs): if attention_probs is not None: all_attentions.append(attention_probs) return all_attentions def forward( self, hidden_states, attention_mask, output_all_encoded_layers=True, checkpoint_activations=False, output_attentions=False, ): all_encoder_layers = [] all_attentions = [] def custom(start, end): def custom_forward(*inputs): layers = self.layer[start:end] x_ = inputs[0] for layer in layers: x_ = layer(x_, inputs[1]) return x_ return custom_forward if checkpoint_activations: l = 0 num_layers = len(self.layer) chunk_length = math.ceil(math.sqrt(num_layers)) while l < num_layers: hidden_states = checkpoint.checkpoint( custom(l, l + chunk_length), hidden_states, attention_mask * 1 ) l += chunk_length # decoder layers else: for layer_module in self.layer: if self.is_transformer_kernel: # using Deepspeed Transformer kernel hidden_states = layer_module(hidden_states, attention_mask) else: layer_out = layer_module( hidden_states, attention_mask, ) hidden_states, attention_probs = layer_out # get all attention_probs from layers if output_attentions: all_attentions = self.add_attention(all_attentions, attention_probs) if output_all_encoded_layers: all_encoder_layers.append(hidden_states) if not output_all_encoded_layers or checkpoint_activations: if self.config.useLN and self.config.encoder_ln_mode in "pre-ln": hidden_states = self.FinalLayerNorm(hidden_states) all_encoder_layers.append(hidden_states) outputs = (all_encoder_layers,) if output_attentions: outputs += (all_attentions,) return outputs class BertPooler(nn.Module): def __init__(self, config): super(BertPooler, self).__init__() if config.fused_linear_layer: linear_layer = LinearActivation else: linear_layer = RegularLinearActivation self.dense_act = linear_layer(config.hidden_size, config.hidden_size, act="tanh") def forward(self, hidden_states): # We "pool" the model by simply taking the hidden state corresponding # to the first token. 
first_token_tensor = hidden_states[:, 0] pooled_output = self.dense_act(first_token_tensor) return pooled_output class BertPredictionHeadTransform(nn.Module): def __init__(self, config): super(BertPredictionHeadTransform, self).__init__() self.config = config if config.fused_linear_layer: linear_layer = LinearActivation else: linear_layer = RegularLinearActivation self.dense_act = linear_layer(config.hidden_size, config.hidden_size, act=config.hidden_act) BertLayerNorm = get_layer_norm_type(config) self.LayerNorm = BertLayerNorm(config.hidden_size, eps=1e-12) def forward(self, hidden_states): hidden_states = self.dense_act(hidden_states) if self.config.useLN: hidden_states = self.LayerNorm(hidden_states) return hidden_states class BertLMPredictionHead(nn.Module): def __init__(self, config, bert_model_embedding_weights): super(BertLMPredictionHead, self).__init__() self.transform = BertPredictionHeadTransform(config) self.decoder = nn.Linear( bert_model_embedding_weights.size(1), bert_model_embedding_weights.size(0), bias=False ) self.decoder.weight = bert_model_embedding_weights self.bias = nn.Parameter(torch.zeros(bert_model_embedding_weights.size(0))) self.sparse_predict = config.sparse_mask_prediction if not config.sparse_mask_prediction: self.decoder.bias = self.bias def forward(self, hidden_states, masked_token_indexes): if self.sparse_predict: if masked_token_indexes is not None: hidden_states = hidden_states.view(-1, hidden_states.shape[-1])[ masked_token_indexes ] hidden_states = self.transform(hidden_states) hidden_states = self.decoder(hidden_states) if not self.sparse_predict: hidden_states = torch.index_select( hidden_states.view(-1, hidden_states.shape[-1]), 0, masked_token_indexes ) return hidden_states class BertOnlyMLMHead(nn.Module): def __init__(self, config, bert_model_embedding_weights): super(BertOnlyMLMHead, self).__init__() self.predictions = BertLMPredictionHead(config, bert_model_embedding_weights) def forward(self, sequence_output, masked_token_indexes=None): prediction_scores = self.predictions(sequence_output, masked_token_indexes) return prediction_scores class BertOnlyNSPHead(nn.Module): def __init__(self, config): super(BertOnlyNSPHead, self).__init__() self.seq_relationship = nn.Linear(config.hidden_size, 2) def forward(self, pooled_output): seq_relationship_score = self.seq_relationship(pooled_output) return seq_relationship_score class BertPreTrainingHeads(nn.Module): def __init__(self, config, bert_model_embedding_weights): super(BertPreTrainingHeads, self).__init__() self.predictions = BertLMPredictionHead(config, bert_model_embedding_weights) self.seq_relationship = nn.Linear(config.hidden_size, 2) def forward(self, sequence_output, pooled_output, masked_token_indexes=None): prediction_scores = self.predictions(sequence_output, masked_token_indexes) seq_relationship_score = self.seq_relationship(pooled_output) return prediction_scores, seq_relationship_score class BertPreTrainedModel(PreTrainedModel): """ An abstract class to handle weights initialization and a simple interface for downloading and loading pretrained models. 
""" config_class = BertConfig base_model_prefix = "bert" authorized_missing_keys = [r"position_ids"] def __init__(self, config, *inputs, **kwargs): super().__init__(config, inputs=inputs, kwargs=kwargs) print(inputs) print(kwargs) def _init_weights(self, module): """Initialize the weights""" if isinstance(module, (nn.Linear, nn.Embedding)): # Slightly different from the TF version which uses truncated_normal for initialization # cf https://github.com/pytorch/pytorch/pull/5617 module.weight.data.normal_(mean=0.0, std=self.config.initializer_range) elif isinstance(module, nn.LayerNorm): module.bias.data.zero_() module.weight.data.fill_(1.0) if isinstance(module, nn.Linear) and module.bias is not None: module.bias.data.zero_() class BertModel(BertPreTrainedModel): """BERT model ("Bidirectional Embedding Representations from a Transformer"). Params: config: a BertConfig class instance with the configuration to build a new model Inputs: `input_ids`: a torch.LongTensor of shape [batch_size, sequence_length] with the word token indices in the vocabulary(see the tokens preprocessing logic in the scripts `extract_features.py`, `run_classifier.py` and `run_squad.py`) `token_type_ids`: an optional torch.LongTensor of shape [batch_size, sequence_length] with the token types indices selected in [0, 1]. Type 0 corresponds to a `sentence A` and type 1 corresponds to a `sentence B` token (see BERT paper for more details). `attention_mask`: an optional torch.LongTensor of shape [batch_size, sequence_length] with indices selected in [0, 1]. It's a mask to be used if the input sequence length is smaller than the max input sequence length in the current batch. It's the mask that we typically use for attention when a batch has varying length sentences. `output_all_encoded_layers`: boolean which controls the content of the `encoded_layers` output as described below. Default: `True`. Outputs: Tuple of (encoded_layers, pooled_output) `encoded_layers`: controled by `output_all_encoded_layers` argument: - `output_all_encoded_layers=True`: outputs a list of the full sequences of encoded-hidden-states at the end of each attention block (i.e. 12 full sequences for BERT-base, 24 for BERT-large), each encoded-hidden-state is a torch.FloatTensor of size [batch_size, sequence_length, hidden_size], - `output_all_encoded_layers=False`: outputs only the full sequence of hidden-states corresponding to the last attention block of shape [batch_size, sequence_length, hidden_size], `pooled_output`: a torch.FloatTensor of size [batch_size, hidden_size] which is the output of a classifier pretrained on top of the hidden state associated to the first character of the input (`CLS`) to train on the Next-Sentence task (see BERT's paper). 
Example usage: ```python # Already been converted into WordPiece token ids input_ids = torch.LongTensor([[31, 51, 99], [15, 5, 0]]) input_mask = torch.LongTensor([[1, 1, 1], [1, 1, 0]]) token_type_ids = torch.LongTensor([[0, 0, 1], [0, 1, 0]]) config = modeling.BertConfig(vocab_size_or_config_json_file=32000, hidden_size=768, num_hidden_layers=12, num_attention_heads=12, intermediate_size=3072) model = modeling.BertModel(config=config) all_encoder_layers, pooled_output = model(input_ids, token_type_ids, input_mask) ``` """ def __init__(self, config, args=None): super(BertModel, self).__init__(config) self.embeddings = BertEmbeddings(config) # set pad_token_id that is used for sparse attention padding self.pad_token_id = ( config.pad_token_id if hasattr(config, "pad_token_id") and config.pad_token_id is not None else 0 ) self.encoder = BertEncoder(config, args) self.pooler = BertPooler(config) logger.info("Init BERT pretrain model") def get_input_embeddings(self): return self.embeddings.word_embeddings def set_input_embeddings(self, value): self.embeddings.word_embeddings = value def forward( self, input_ids, token_type_ids=None, attention_mask=None, output_all_encoded_layers=True, checkpoint_activations=False, output_attentions=False, ): if attention_mask is None: attention_mask = torch.ones_like(input_ids) if token_type_ids is None: token_type_ids = torch.zeros_like(input_ids) # We create a 3D attention mask from a 2D tensor mask. # Sizes are [batch_size, 1, 1, to_seq_length] # So we can broadcast to [batch_size, num_heads, from_seq_length, to_seq_length] # this attention mask is more simple than the triangular masking of causal attention # used in OpenAI GPT, we just need to prepare the broadcast dimension here. extended_attention_mask = attention_mask.unsqueeze(1).unsqueeze(2) # Since attention_mask is 1.0 for positions we want to attend and 0.0 for # masked positions, this operation will create a tensor which is 0.0 for # positions we want to attend and -10000.0 for masked positions. # Since we are adding it to the raw scores before the softmax, this is # effectively the same as removing these entirely. extended_attention_mask = extended_attention_mask.to( dtype=self.embeddings.word_embeddings.weight.dtype # should be of same dtype ) # fp16 compatibility extended_attention_mask = (1.0 - extended_attention_mask) * -10000.0 embedding_output = self.embeddings(input_ids, token_type_ids) encoder_output = self.encoder( embedding_output, extended_attention_mask, output_all_encoded_layers=output_all_encoded_layers, checkpoint_activations=checkpoint_activations, output_attentions=output_attentions, ) encoded_layers = encoder_output[0] sequence_output = encoded_layers[-1] pooled_output = self.pooler(sequence_output) if not output_all_encoded_layers: encoded_layers = encoded_layers[-1] output = ( encoded_layers, pooled_output, ) if output_attentions: output += (encoder_output[-1],) return output class BertForPreTraining(BertPreTrainedModel): """BERT model with pre-training heads. This module comprises the BERT model followed by the two pre-training heads: - the masked language modeling head, and - the next sentence classification head. Params: config: a BertConfig class instance with the configuration to build a new model. 
Inputs: `input_ids`: a torch.LongTensor of shape [batch_size, sequence_length] with the word token indices in the vocabulary(see the tokens preprocessing logic in the scripts `extract_features.py`, `run_classifier.py` and `run_squad.py`) `token_type_ids`: an optional torch.LongTensor of shape [batch_size, sequence_length] with the token types indices selected in [0, 1]. Type 0 corresponds to a `sentence A` and type 1 corresponds to a `sentence B` token (see BERT paper for more details). `attention_mask`: an optional torch.LongTensor of shape [batch_size, sequence_length] with indices selected in [0, 1]. It's a mask to be used if the input sequence length is smaller than the max input sequence length in the current batch. It's the mask that we typically use for attention when a batch has varying length sentences. `masked_lm_labels`: optional masked language modeling labels: torch.LongTensor of shape [batch_size, sequence_length] with indices selected in [-1, 0, ..., vocab_size]. All labels set to -1 are ignored (masked), the loss is only computed for the labels set in [0, ..., vocab_size] `next_sentence_label`: optional next sentence classification loss: torch.LongTensor of shape [batch_size] with indices selected in [0, 1]. 0 => next sentence is the continuation, 1 => next sentence is a random sentence. Outputs: if `masked_lm_labels` and `next_sentence_label` are not `None`: Outputs the total_loss which is the sum of the masked language modeling loss and the next sentence classification loss. if `masked_lm_labels` or `next_sentence_label` is `None`: Outputs a tuple comprising - the masked language modeling logits of shape [batch_size, sequence_length, vocab_size], and - the next sentence classification logits of shape [batch_size, 2]. Example usage: ```python # Already been converted into WordPiece token ids input_ids = torch.LongTensor([[31, 51, 99], [15, 5, 0]]) input_mask = torch.LongTensor([[1, 1, 1], [1, 1, 0]]) token_type_ids = torch.LongTensor([[0, 0, 1], [0, 1, 0]]) config = BertConfig(vocab_size_or_config_json_file=32000, hidden_size=768, num_hidden_layers=12, num_attention_heads=12, intermediate_size=3072) model = BertForPreTraining(config) masked_lm_logits_scores, seq_relationship_logits = model(input_ids, token_type_ids, input_mask) ``` """ def __init__(self, config, args): super(BertForPreTraining, self).__init__(config) self.bert = BertModel(config, args) self.cls = BertPreTrainingHeads(config, self.bert.embeddings.word_embeddings.weight) self.apply(self.init_bert_weights) def forward(self, batch): input_ids = batch[1] token_type_ids = batch[3] attention_mask = batch[2] masked_lm_labels = batch[5] next_sentence_label = batch[4] checkpoint_activations = False sequence_output, pooled_output = self.bert( input_ids, token_type_ids, attention_mask, output_all_encoded_layers=False, checkpoint_activations=checkpoint_activations, ) if masked_lm_labels is not None and next_sentence_label is not None: # filter out all masked labels. 
masked_token_indexes = torch.nonzero( (masked_lm_labels + 1).view(-1), as_tuple=False ).view(-1) prediction_scores, seq_relationship_score = self.cls( sequence_output, pooled_output, masked_token_indexes ) target = torch.index_select(masked_lm_labels.view(-1), 0, masked_token_indexes) loss_fct = CrossEntropyLoss(ignore_index=-1) masked_lm_loss = loss_fct(prediction_scores.view(-1, self.config.vocab_size), target) next_sentence_loss = loss_fct( seq_relationship_score.view(-1, 2), next_sentence_label.view(-1) ) total_loss = masked_lm_loss + next_sentence_loss return total_loss else: prediction_scores, seq_relationship_score = self.cls(sequence_output, pooled_output) return prediction_scores, seq_relationship_score class BertLMHeadModel(BertPreTrainedModel): """BERT model with the masked language modeling head. This module comprises the BERT model followed by the masked language modeling head. Params: config: a BertConfig class instance with the configuration to build a new model. Inputs: `input_ids`: a torch.LongTensor of shape [batch_size, sequence_length] with the word token indices in the vocabulary(see the tokens preprocessing logic in the scripts `extract_features.py`, `run_classifier.py` and `run_squad.py`) `token_type_ids`: an optional torch.LongTensor of shape [batch_size, sequence_length] with the token types indices selected in [0, 1]. Type 0 corresponds to a `sentence A` and type 1 corresponds to a `sentence B` token (see BERT paper for more details). `attention_mask`: an optional torch.LongTensor of shape [batch_size, sequence_length] with indices selected in [0, 1]. It's a mask to be used if the input sequence length is smaller than the max input sequence length in the current batch. It's the mask that we typically use for attention when a batch has varying length sentences. `masked_lm_labels`: masked language modeling labels: torch.LongTensor of shape [batch_size, sequence_length] with indices selected in [-1, 0, ..., vocab_size]. All labels set to -1 are ignored (masked), the loss is only computed for the labels set in [0, ..., vocab_size] Outputs: if `masked_lm_labels` is not `None`: Outputs the masked language modeling loss. if `masked_lm_labels` is `None`: Outputs the masked language modeling logits of shape [batch_size, sequence_length, vocab_size]. 
Example usage: python # Already been converted into WordPiece token ids input_ids = torch.LongTensor([[31, 51, 99], [15, 5, 0]]) input_mask = torch.LongTensor([[1, 1, 1], [1, 1, 0]]) token_type_ids = torch.LongTensor([[0, 0, 1], [0, 1, 0]]) config = BertConfig(vocab_size_or_config_json_file=32000, hidden_size=768, num_hidden_layers=12, num_attention_heads=12, intermediate_size=3072) model = BertMultiHead(config) masked_lm_logits_scores = model(input_ids, token_type_ids, input_mask) """ def __init__(self, config, *model_args, **model_kwargs): super(BertLMHeadModel, self).__init__(config) args = model_args self.bert = BertModel(config, args) self.cls = BertOnlyMLMHead(config, self.bert.embeddings.word_embeddings.weight) self.init_weights() def forward(self, batch, output_attentions=False): input_ids = batch[1] token_type_ids = batch[3] attention_mask = batch[2] masked_lm_labels = batch[4] checkpoint_activations = False bert_output = self.bert( input_ids, token_type_ids, attention_mask, output_all_encoded_layers=False, checkpoint_activations=checkpoint_activations, ) sequence_output = bert_output[0] if masked_lm_labels is None: prediction_scores = self.cls(sequence_output) return prediction_scores masked_token_indexes = torch.nonzero((masked_lm_labels + 1).view(-1), as_tuple=False).view( -1 ) prediction_scores = self.cls(sequence_output, masked_token_indexes) if masked_lm_labels is not None: loss_fct = CrossEntropyLoss(ignore_index=-1) target = torch.index_select(masked_lm_labels.view(-1), 0, masked_token_indexes) masked_lm_loss = loss_fct(prediction_scores.view(-1, self.config.vocab_size), target) outputs = (masked_lm_loss,) if output_attentions: outputs += (bert_output[-1],) return outputs else: return prediction_scores class BertForNextSentencePrediction(BertPreTrainedModel): """BERT model with next sentence prediction head. This module comprises the BERT model followed by the next sentence classification head. Params: config: a BertConfig class instance with the configuration to build a new model. Inputs: `input_ids`: a torch.LongTensor of shape [batch_size, sequence_length] with the word token indices in the vocabulary(see the tokens preprocessing logic in the scripts `extract_features.py`, `run_classifier.py` and `run_squad.py`) `token_type_ids`: an optional torch.LongTensor of shape [batch_size, sequence_length] with the token types indices selected in [0, 1]. Type 0 corresponds to a `sentence A` and type 1 corresponds to a `sentence B` token (see BERT paper for more details). `attention_mask`: an optional torch.LongTensor of shape [batch_size, sequence_length] with indices selected in [0, 1]. It's a mask to be used if the input sequence length is smaller than the max input sequence length in the current batch. It's the mask that we typically use for attention when a batch has varying length sentences. `next_sentence_label`: next sentence classification loss: torch.LongTensor of shape [batch_size] with indices selected in [0, 1]. 0 => next sentence is the continuation, 1 => next sentence is a random sentence. Outputs: if `next_sentence_label` is not `None`: Outputs the total_loss which is the sum of the masked language modeling loss and the next sentence classification loss. if `next_sentence_label` is `None`: Outputs the next sentence classification logits of shape [batch_size, 2]. 
Example usage: ```python # Already been converted into WordPiece token ids input_ids = torch.LongTensor([[31, 51, 99], [15, 5, 0]]) input_mask = torch.LongTensor([[1, 1, 1], [1, 1, 0]]) token_type_ids = torch.LongTensor([[0, 0, 1], [0, 1, 0]]) config = BertConfig(vocab_size_or_config_json_file=32000, hidden_size=768, num_hidden_layers=12, num_attention_heads=12, intermediate_size=3072) model = BertForNextSentencePrediction(config) seq_relationship_logits = model(input_ids, token_type_ids, input_mask) ``` """ def __init__(self, config, args): super(BertForNextSentencePrediction, self).__init__(config) self.bert = BertModel(config) self.cls = BertOnlyNSPHead(config) self.apply(self.init_bert_weights) def forward( self, input_ids, token_type_ids=None, attention_mask=None, next_sentence_label=None, checkpoint_activations=False, ): _, pooled_output = self.bert( input_ids, token_type_ids, attention_mask, output_all_encoded_layers=False ) seq_relationship_score = self.cls(pooled_output) if next_sentence_label is not None: loss_fct = CrossEntropyLoss(ignore_index=-1) next_sentence_loss = loss_fct( seq_relationship_score.view(-1, 2), next_sentence_label.view(-1) ) return next_sentence_loss else: return seq_relationship_score class BertForSequenceClassification(BertPreTrainedModel): """BERT model for classification. This module is composed of the BERT model with a linear layer on top of the pooled output. Params: `config`: a BertConfig class instance with the configuration to build a new model. `num_labels`: the number of classes for the classifier. Default = 2. Inputs: `input_ids`: a torch.LongTensor of shape [batch_size, sequence_length] with the word token indices in the vocabulary(see the tokens preprocessing logic in the scripts `extract_features.py`, `run_classifier.py` and `run_squad.py`) `token_type_ids`: an optional torch.LongTensor of shape [batch_size, sequence_length] with the token types indices selected in [0, 1]. Type 0 corresponds to a `sentence A` and type 1 corresponds to a `sentence B` token (see BERT paper for more details). `attention_mask`: an optional torch.LongTensor of shape [batch_size, sequence_length] with indices selected in [0, 1]. It's a mask to be used if the input sequence length is smaller than the max input sequence length in the current batch. It's the mask that we typically use for attention when a batch has varying length sentences. `labels`: labels for the classification output: torch.LongTensor of shape [batch_size] with indices selected in [0, ..., num_labels]. Outputs: if `labels` is not `None`: Outputs the CrossEntropy classification loss of the output with the labels. if `labels` is `None`: Outputs the classification logits of shape [batch_size, num_labels]. 
Example usage: ```python # Already been converted into WordPiece token ids input_ids = torch.LongTensor([[31, 51, 99], [15, 5, 0]]) input_mask = torch.LongTensor([[1, 1, 1], [1, 1, 0]]) token_type_ids = torch.LongTensor([[0, 0, 1], [0, 1, 0]]) config = BertConfig(vocab_size_or_config_json_file=32000, hidden_size=768, num_hidden_layers=12, num_attention_heads=12, intermediate_size=3072) num_labels = 2 model = BertForSequenceClassification(config, num_labels) logits = model(input_ids, token_type_ids, input_mask) ``` """ def __init__(self, config, args=None): super(BertForSequenceClassification, self).__init__(config) self.num_labels = config.num_labels self.bert = BertModel(config, args) self.dropout = nn.Dropout(config.hidden_dropout_prob) self.classifier = nn.Linear(config.hidden_size, self.num_labels) self.init_weights() def forward( self, input_ids, token_type_ids=None, attention_mask=None, labels=None, checkpoint_activations=False, **kwargs, ): outputs = self.bert( input_ids, token_type_ids, attention_mask, output_all_encoded_layers=False, checkpoint_activations=checkpoint_activations, ) pooled_output = outputs[1] pooled_output = self.dropout(pooled_output) logits = self.classifier(pooled_output) loss = None if labels is not None: if self.num_labels == 1: # We are doing regression loss_fct = MSELoss() loss = loss_fct(logits.view(-1), labels.view(-1)) else: loss_fct = CrossEntropyLoss() loss = loss_fct(logits.view(-1, self.num_labels), labels.view(-1)) return SequenceClassifierOutput( loss=loss, logits=logits, hidden_states=None, attentions=None, ) class BertForQuestionAnswering(BertPreTrainedModel): _keys_to_ignore_on_load_unexpected = [r"pooler"] def __init__(self, config): super().__init__(config) self.num_labels = config.num_labels self.bert = BertModel(config) self.qa_outputs = nn.Linear(config.hidden_size, config.num_labels) self.init_weights() def forward( self, input_ids=None, attention_mask=None, token_type_ids=None, position_ids=None, head_mask=None, inputs_embeds=None, start_positions=None, end_positions=None, output_attentions=None, output_hidden_states=None, return_dict=None, ): r""" start_positions (:obj:`torch.LongTensor` of shape :obj:`(batch_size,)`, `optional`): Labels for position (index) of the start of the labelled span for computing the token classification loss. Positions are clamped to the length of the sequence (:obj:`sequence_length`). Position outside of the sequence are not taken into account for computing the loss. end_positions (:obj:`torch.LongTensor` of shape :obj:`(batch_size,)`, `optional`): Labels for position (index) of the end of the labelled span for computing the token classification loss. Positions are clamped to the length of the sequence (:obj:`sequence_length`). Position outside of the sequence are not taken into account for computing the loss. 
""" return_dict = return_dict if return_dict is not None else self.config.use_return_dict outputs = self.bert( input_ids, token_type_ids, attention_mask, output_all_encoded_layers=False, ) sequence_output = outputs[0] logits = self.qa_outputs(sequence_output) start_logits, end_logits = logits.split(1, dim=-1) start_logits = start_logits.squeeze(-1).contiguous() end_logits = end_logits.squeeze(-1).contiguous() total_loss = None if start_positions is not None and end_positions is not None: # If we are on multi-GPU, split add a dimension if len(start_positions.size()) > 1: start_positions = start_positions.squeeze(-1) if len(end_positions.size()) > 1: end_positions = end_positions.squeeze(-1) # sometimes the start/end positions are outside our model inputs, we ignore these terms ignored_index = start_logits.size(1) start_positions = start_positions.clamp(0, ignored_index) end_positions = end_positions.clamp(0, ignored_index) loss_fct = CrossEntropyLoss(ignore_index=ignored_index) start_loss = loss_fct(start_logits, start_positions) end_loss = loss_fct(end_logits, end_positions) total_loss = (start_loss + end_loss) / 2 if not return_dict: output = (start_logits, end_logits) + outputs[2:] return ((total_loss,) + output) if total_loss is not None else output return QuestionAnsweringModelOutput( loss=total_loss, start_logits=start_logits, end_logits=end_logits, hidden_states=None, attentions=None, ) class BertForMaskedLM(BertPreTrainedModel): def __init__(self, config): super().__init__(config) if config.is_decoder: logger.warning( "If you want to use `BertForMaskedLM` make sure `config.is_decoder=False` for " "bi-directional self-attention." ) self.bert = BertModel(config) self.cls = BertOnlyMLMHead(config,self.bert.embeddings.word_embeddings.weight) self.init_weights() def get_output_embeddings(self): return self.cls.predictions.decoder def set_output_embeddings(self, new_embeddings): self.cls.predictions.decoder = new_embeddings def forward( self, input_ids=None, attention_mask=None, token_type_ids=None, position_ids=None, head_mask=None, inputs_embeds=None, encoder_hidden_states=None, encoder_attention_mask=None, labels=None, output_attentions=None, output_hidden_states=None, checkpoint_activations=False, return_dict=None, ): r""" labels (:obj:`torch.LongTensor` of shape :obj:`(batch_size, sequence_length)`, `optional`): Labels for computing the masked language modeling loss. 
Indices should be in ``[-100, 0, ..., config.vocab_size]`` (see ``input_ids`` docstring) Tokens with indices set to ``-100`` are ignored (masked), the loss is only computed for the tokens with labels in ``[0, ..., config.vocab_size]`` """ return_dict = return_dict if return_dict is not None else self.config.use_return_dict outputs = self.bert( input_ids, token_type_ids, attention_mask, output_all_encoded_layers=False, checkpoint_activations=checkpoint_activations, ) sequence_output = outputs[0] prediction_scores = self.cls(sequence_output) masked_lm_loss = None if labels is not None: loss_fct = CrossEntropyLoss() # -100 index = padding token masked_lm_loss = loss_fct(prediction_scores.view(-1, self.config.vocab_size), labels.view(-1)) if not return_dict: output = (prediction_scores,) + outputs[2:] return ((masked_lm_loss,) + output) if masked_lm_loss is not None else output return MaskedLMOutput( loss=masked_lm_loss, logits=prediction_scores, hidden_states=None, attentions=None, ) def prepare_inputs_for_generation(self, input_ids, attention_mask=None, **model_kwargs): input_shape = input_ids.shape effective_batch_size = input_shape[0] # add a dummy token assert self.config.pad_token_id is not None, "The PAD token should be defined for generation" attention_mask = torch.cat([attention_mask, attention_mask.new_zeros((attention_mask.shape[0], 1))], dim=-1) dummy_token = torch.full( (effective_batch_size, 1), self.config.pad_token_id, dtype=torch.long, device=input_ids.device ) input_ids = torch.cat([input_ids, dummy_token], dim=1) return {"input_ids": input_ids, "attention_mask": attention_mask} ######################################################################### ``` <|||||>We run 'run_glue.py' using the modified 'BertForSequenceClassification(BertPreTrainedModel'.<|||||>Thank you. Looks like you are redefining every object so the error has nothing to do with the Transformers library.<|||||>> Thank you. Looks like you are redefining every object so the error has nothing to do with the Transformers library. OK Thx!
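For context on this class of error, the snippet below is a generic illustration (not the user's model) of its usual cause: a module whose weights are in float16, as with `--fp16_full_eval`, being fed float32 activations. Whether half-precision matmuls run on CPU depends on the PyTorch build, so a GPU is used when available.

```python
import torch
import torch.nn as nn

device = "cuda" if torch.cuda.is_available() else "cpu"

layer = nn.Linear(8, 8).to(device).half()       # weights in float16, as under fp16_full_eval
x = torch.randn(2, 8, device=device)            # activations accidentally left in float32

# `layer(x)` would raise "expected scalar type Half but found Float";
# casting the input to the layer's weight dtype resolves the mismatch.
out = layer(x.to(layer.weight.dtype))
print(out.dtype)  # torch.float16
```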
transformers
15,170
closed
Using the accelerate MLM example still results in CUDA out of memory
Hi, I am using [this example](https://github.com/huggingface/transformers/blob/master/examples/pytorch/language-modeling/run_mlm_no_trainer.py) and instructions on [this discussion](https://discuss.huggingface.co/t/how-to-use-specified-gpus-with-accelerator-to-train-the-model/10967) to adapt the code from [run_summarization.py](https://github.com/huggingface/transformers/blob/master/examples/pytorch/summarization/run_summarization.py), and train (fine-tune) a summarization BART model (specifically model_name="sshleifer/distilbart-xsum-12-6", which I got from the HF model repo). I have one GPU and my batch size is 8. My training data sample size is 15k. However, as soon as the training starts, I get the following error: `RuntimeError: CUDA out of memory. Tried to allocate 512.00 MiB (GPU 0; 14.76 GiB total capacity; 13.49 GiB already allocated; 419.75 MiB free; 13.54 GiB reserved in total by PyTorch)` My full code can be found [here](https://github.com/tanyaroosta/test/blob/master/accl_trainer.py). Other relevant parameters: cuda available: True current device: 0 device count: 1 device name: Tesla T4 Allocated: 1.1 GB accelerator device: cuda ***** Running training ***** Num examples = 15909 Num Epochs = 3 Instantaneous batch size per device = 8 Total train batch size (w. parallel, distributed & accumulation) = 8 Gradient Accumulation steps = 1 Total optimization steps = 5967 I would appreciate any help, as I am at a loss for what else to try to get this training to go through. I am using a SageMaker instance of ml.g4dn.16xlarge. Thanks for your help.
01-16-2022 05:16:57
01-16-2022 05:16:57
Hi, Can you try out this: https://github.com/rentruewang/koila Let me know if it resolves your issue.<|||||>koila didn't work for me. I basically had to use a p3.24 instance to get it to work with a batch size of 1.
transformers
15,169
closed
model.generate with prefix_allowed_tokens_fn throws RuntimeError: probability tensor contains either `inf`, `nan` or element < 0
## Environment info <!-- You can run the command `transformers-cli env` and copy-and-paste its output below. Don't forget to fill out the missing fields in that output! --> - `transformers` version: 4.15.0 - Platform: Linux-5.4.0-90-generic-x86_64-with-debian-bullseye-sid - Python version: 3.7.12 - PyTorch version (GPU?): 1.10.0+cu102 (True) - Tensorflow version (GPU?): 2.7.0 (True) - Flax version (CPU?/GPU?/TPU?): not installed (NA) - Jax version: not installed - JaxLib version: not installed - Using GPU in script?: No - Using distributed or parallel set-up in script?: No ### Who can help @patrickvonplaten @narsil <!-- Your issue will be replied to more quickly if you can figure out the right person to tag with @ If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**. Please tag fewer than 3 people. Models: - ALBERT, BERT, XLM, DeBERTa, DeBERTa-v2, ELECTRA, MobileBert, SqueezeBert: @LysandreJik - T5, BART, Marian, Pegasus, EncoderDecoder: @patrickvonplaten - Blenderbot, MBART: @patil-suraj - Longformer, Reformer, TransfoXL, XLNet, FNet, BigBird: @patrickvonplaten - FSMT: @stas00 - Funnel: @sgugger - GPT-2, GPT: @patrickvonplaten, @LysandreJik - RAG, DPR: @patrickvonplaten, @lhoestq - TensorFlow: @Rocketknight1 - JAX/Flax: @patil-suraj - TAPAS, LayoutLM, LayoutLMv2, LUKE, ViT, BEiT, DEiT, DETR, CANINE: @NielsRogge - GPT-Neo, GPT-J, CLIP: @patil-suraj - Wav2Vec2, HuBERT, SpeechEncoderDecoder, UniSpeech, UniSpeechSAT, SEW, SEW-D, Speech2Text: @patrickvonplaten, @anton-l If the model isn't in the list, ping @LysandreJik who will redirect you to the correct contributor. Library: - Benchmarks: @patrickvonplaten - Deepspeed: @stas00 - Ray/raytune: @richardliaw, @amogkam - Text generation: @patrickvonplaten @narsil - Tokenizers: @SaulLu - Trainer: @sgugger - Pipelines: @Narsil - Speech: @patrickvonplaten, @anton-l - Vision: @NielsRogge, @sgugger Documentation: @sgugger Model hub: - for issues with a model, report at https://discuss.huggingface.co/ and tag the model's creator. HF projects: - datasets: [different repo](https://github.com/huggingface/datasets) - rust tokenizers: [different repo](https://github.com/huggingface/tokenizers) Examples: - maintained examples (not research project or legacy): @sgugger, @patil-suraj For research projetcs, please ping the contributor directly. For example, on the following projects: - research_projects/bert-loses-patience: @JetRunner - research_projects/distillation: @VictorSanh --> ## Information Model I am using T5ForConditionalGeneration: The problem arises when using my own modified scripts: Script to reproduce error is mentioned below. The tasks I am working on is my own task or dataset: The task requires conditional generation from T5, in such a way, that the output vocabulary is restricted to a small set. ## To reproduce 1. Run the following script to reproduce the behaviour. 
```python from transformers import T5Tokenizer, T5ForConditionalGeneration, T5Config lm_model = 't5-small' model = T5ForConditionalGeneration.from_pretrained(lm_model) tokenizer = T5Tokenizer.from_pretrained(lm_model) def restrict_decode_vocab(batch_idx, prefix_beam): if len(prefix_beam)==3: restricted_vocab = tokenizer(' ', return_tensors="pt")['input_ids'].tolist() else: restricted_vocab = tokenizer('<extra_id_0> cute dog <extra_id_1> the <pad>', return_tensors="pt")['input_ids'].tolist() return restricted_vocab source = ['The <extra_id_0> walks in <extra_id_1> park .'] source_encoding = tokenizer(source[:], padding='longest', return_tensors="pt") input_ids, attention_mask = source_encoding['input_ids'], source_encoding['attention_mask'] decoded_beams = model.generate(input_ids=input_ids, attention_mask=attention_mask, do_sample=True, num_beams=2, prefix_allowed_tokens_fn=restrict_decode_vocab, min_length=4, max_length=4, remove_invalid_values=True) print(decoded_beams) ``` 2. Above script produces the following stack trace. ``` /home/jsingh319/uploaded_venvs/venv-koala-torch-1.10-python-3.7.12/lib/python3.7/site-packages/transformers/generation_utils.py:2259: UserWarning: __floordiv__ is deprecated, and its behavior will change in a future version of pytorch. It currently rounds toward 0 (like the 'trunc' function NOT 'floor'). This results in incorrect rounding for negative values. To keep the current behavior, use torch.div(a, b, rounding_mode='trunc'), or for actual floor division, use torch.div(a, b, rounding_mode='floor'). next_indices = next_tokens // vocab_size Traceback (most recent call last): File "reproduce_error.py", line 17, in <module> decoded_beams = model.generate(input_ids=input_ids, attention_mask=attention_mask, do_sample=True, num_beams=2, prefix_allowed_tokens_fn=restrict_decode_vocab, min_length=4, max_length=4, remove_invalid_values=True) File "/home/jsingh319/uploaded_venvs/venv-koala-torch-1.10-python-3.7.12/lib/python3.7/site-packages/torch/autograd/grad_mode.py", line 28, in decorate_context return func(*args, **kwargs) File "/home/jsingh319/uploaded_venvs/venv-koala-torch-1.10-python-3.7.12/lib/python3.7/site-packages/transformers/generation_utils.py", line 1220, in generate **model_kwargs, File "/home/jsingh319/uploaded_venvs/venv-koala-torch-1.10-python-3.7.12/lib/python3.7/site-packages/transformers/generation_utils.py", line 2253, in beam_sample next_tokens = torch.multinomial(probs, num_samples=2 * num_beams) RuntimeError: probability tensor contains either `inf`, `nan` or element < 0 ``` <!-- If you have code snippets, error messages, stack traces please provide them here as well. Important! Use code tags to correctly format your code. See https://help.github.com/en/github/writing-on-github/creating-and-highlighting-code-blocks#syntax-highlighting Do not use screenshots, as they are hard to read and (more importantly) don't allow others to copy-and-paste your code.--> ## Expected behavior No error. <!-- A clear and concise description of what you would expect to happen. --> ## Possible solution The __call__ function for class "InfNanRemoveLogitsProcessor" should include the following statement before returning "scores". ``` scores[scores == float("-inf")] = torch.finfo(scores.dtype).min ```
01-16-2022 03:40:27
01-16-2022 03:40:27
@patrickvonplaten Pinging you to get your input on this. It seems `-inf` values are explicitly set by `prefix_allowed_tokens_fn` and `remove_invalid_values` **doesn't** remove `float(-inf)` specifically. However, the script does seem to fail currently. I added a PR containing the "fix" to accelerate things along, but given everything is ingrained in tests and other logits processors actively use `float(-inf)`, I am not sure this is the desired behavior. Other options I consider viable: - Stop using `float(-inf)` directly and use `torch.finfo(scores.dtype).min` instead (we don't introduce infinities anymore so should solve it) - Change `float(-inf)` only before using `torch.multinomial`.<|||||>I don't think that this is related in any way to the `InfNanRemoveLogitsProcessor` processor. IMO, the reason for the error here is that in the 3rd generation step, **all** values of `next_token_scores` are set to `-inf` (I think) due to the `prefix_allowed_tokens_fn` that you've added. This is not a bug IMO with `transformers`, but with the `prefix_allowed_tokens_fn` function, as it should not set all values to `-inf`. A tip from my side @iamjanvijay would be to do the following. Create the `PrefixConstrainedLogitsProcessor` object with your function and just play around with it locally (what happens at generation step 3). I think you'll see then that it sets all values to `-inf` at some point, which it shouldn't do<|||||>@patrickvonplaten @Narsil Thanks for your response. I was trying to check why this is happening. I found that if the restricted_vocab at any generation step only includes "\</s\>" (end-of-sentence token) this error occurs. In other cases, the script doesn't encounter such an error. I'll try to check whether all the elements at that generation step are set to -inf.<|||||>I'll close my PR in the meantime. We can reopen it if needed, but I tend to agree with @patrickvonplaten that having everything `float(-inf)` can be considered a bug already.<|||||>> @patrickvonplaten @Narsil Thanks for your response. I was trying to check why this is happening. I found that if the restricted_vocab at any generation step only includes "</s>" (end-of-sentence token) this error occurs. In other cases, the script doesn't encounter such an error. I'll try to check whether all the elements at that generation step are set to -inf. Have you found what was causing the issue by any chance @iamjanvijay? I'm encountering the same issue while I'm using the generate function with BART, but I'm not using any prefix_allowed_tokens, and this error usually happens when I've been training the model for a while. Like @iamjanvijay said, I suspect something to do with cases where some tokens are masked or filtered, but I haven't really figured out where/why it's happening. I'd appreciate any pointers. <|||||>@mindojune - could you maybe open a new issue, as it's not related to `prefix_allowed_tokens_fn`?<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread. Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored.<|||||>@mindojune Hi, I am facing the same problem as you, and this error usually happens after I have trained the model for a while. And I am also using BART. Do you have any idea why this is happening or how to fix this error? 
Thank you a lot.<|||||>Hey @hongyuntw, the reason is that BART forces the second token to be this id https://huggingface.co/facebook/bart-large/blob/main/config.json#L27 . However, if you additionally use something like `prefix_allowed_tokens_fn` which might also not allow this id: https://huggingface.co/facebook/bart-large/blob/main/config.json#L27 => then all ids are set to `-inf`, in which case the model cannot generate anything anymore. To solve this I would probably set this config: https://huggingface.co/facebook/bart-large/blob/main/config.json#L27 to None<|||||>Was running into similar issues when using `prefix_allowed_tokens_fn` in tandem with beam-search multinomial sampling, and realized the `top_k` and `top_p` args were sometimes preventing all the allowed tokens from being used, as they were outside those two tops. `no_repeat_ngram_size` can have a similar effect. Consider removing `top_k` and `top_p` if only allowing certain tokens is more important.<|||||>I also have this problem, but I don't know how to fix it.<|||||>Same here, I am also running into this issue; has there been any resolution for this?<|||||>I found another way to avoid this problem. I was applying MiniGPT-4 on a 3090; when I used the v0 version of the weights, the problem happened. To resolve it, I tried many PyTorch versions, but it didn't work. Finally, I used the new v1.1 model version and the problem went away, so I think the problem is related to the model. For MiniGPT-4, model decoding is related to FastChat [here](https://github.com/lm-sys/FastChat/blob/main/docs/vicuna_weights_version.md)
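A minimal sketch of the workaround described above (dropping BART's `forced_bos_token_id` so the constraint function cannot zero out every logit). The checkpoint, prompt, and always-allowed token list are illustrative choices, not taken from the thread:

```python
from transformers import BartForConditionalGeneration, BartTokenizer

model = BartForConditionalGeneration.from_pretrained("facebook/bart-large")
tokenizer = BartTokenizer.from_pretrained("facebook/bart-large")

# BART checkpoints force the second generated token via `forced_bos_token_id`.
# If a custom `prefix_allowed_tokens_fn` never allows that id, every logit can
# end up at -inf and sampling raises the "probability tensor" error.
model.config.forced_bos_token_id = None  # workaround suggested in the thread

def allow_some_tokens(batch_id, input_ids):
    # Illustrative constraint: always return a non-empty list of token ids.
    return [tokenizer.eos_token_id, tokenizer.pad_token_id]

inputs = tokenizer("Hello world", return_tensors="pt")
out = model.generate(**inputs, num_beams=2, do_sample=True,
                     prefix_allowed_tokens_fn=allow_some_tokens, max_length=8)
print(tokenizer.batch_decode(out, skip_special_tokens=True))
```

The key point is only that the allowed-token set must never exclude every token the other constraints (forced ids, `min_length`, `top_k`/`top_p`, etc.) leave available.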
transformers
15,168
closed
Add Model Interpretability Module
# 🚀 Feature request What about adding an interpretability module, for example by integrating Captum with Hugging Face? There are a lot of difficulties when trying to use Captum with Hugging Face transformer models.
01-15-2022 14:19:23
01-15-2022 14:19:23
This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread. Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored.
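For context on what such an integration would need to smooth over, here is a rough sketch of wiring Captum's layer attributions to a Hugging Face model by hand. The checkpoint, target class, and wrapper function are all illustrative assumptions, not an official API:

```python
import torch
from captum.attr import LayerIntegratedGradients
from transformers import AutoModelForSequenceClassification, AutoTokenizer

name = "distilbert-base-uncased-finetuned-sst-2-english"  # illustrative checkpoint
tokenizer = AutoTokenizer.from_pretrained(name)
model = AutoModelForSequenceClassification.from_pretrained(name).eval()

def forward_func(input_ids, attention_mask):
    # Captum expects a scalar score per example, so we pick one class logit.
    return model(input_ids=input_ids, attention_mask=attention_mask).logits[:, 1]

enc = tokenizer("a great movie", return_tensors="pt")
baseline = torch.full_like(enc["input_ids"], tokenizer.pad_token_id)

lig = LayerIntegratedGradients(forward_func, model.distilbert.embeddings)
attributions = lig.attribute(
    inputs=enc["input_ids"],
    baselines=baseline,
    additional_forward_args=(enc["attention_mask"],),
)
print(attributions.sum(dim=-1))  # per-token attribution scores
```

Much of the friction the request mentions comes from exactly this glue code: picking the right layer, building baselines, and adapting the model's keyword-argument call signature to Captum's positional `forward_func`.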
transformers
15,167
closed
Enable tqdm toggling
# What does this PR do? Fixes https://github.com/huggingface/transformers/issues/14889. ## Before submitting - [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case). - [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section? - [x] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link to it if that's the case. - [ ] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/master/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/master/docs#writing-source-documentation). - [x] Did you write any new necessary tests? ## Who can review? Anyone in the community is free to review the PR once the tests have passed. Feel free to tag members/contributors who may be interested in your PR. @stas00, @LysandreJik
01-15-2022 00:44:22
01-15-2022 00:44:22
1. In `datasets`, `tqdm_utils.py` functions are exposed at the upper level via imports in `__init__.py` ([link](https://github.com/huggingface/datasets/blob/ccc28d4141006924af85c2d940d82d71da4c6ec0/src/datasets/utils/__init__.py#L41)). It seems that this is not the case with utility functions in `transformers`. Do we want to keep functions like `transformers.utils.set_progress_bar_enabled` hidden at its current level? 2. Should toggling `set_progress_bar_enabled` affect the entire API, for instance the behavior of the `Trainer` class? This would mean replacing all occurrences of `from tqdm.auto import tqdm` with `from .utils.tqdm_utils import tqdm`. Thanks in advance!<|||||>> 1. In `datasets`, `tqdm_utils.py` functions are exposed at the upper level via imports in `__init__.py` ([link](https://github.com/huggingface/datasets/blob/ccc28d4141006924af85c2d940d82d71da4c6ec0/src/datasets/utils/__init__.py#L41)). It seems that this is not the case with utility functions in `transformers`. Do we want to keep functions like `transformers.utils.set_progress_bar_enabled` hidden at its current level? I don't think there is a need for that. This will be very rarely used. > 2. Should toggling `set_progress_bar_enabled` affect the entire API, for instance the behavior of the `Trainer` class? This would mean replacing all occurrences of `from tqdm.auto import tqdm` with `from .utils.tqdm_utils import tqdm`. No. Turning it off is an advanced use case, so we shouldn't turn tqdm off anywhere ourselves, since the toggle has a very specific goal. And those who don't want it can now turn it off at will. <|||||>@stas00 Thanks for the review. I've updated the test and the docs/docstrings. Let me know if there is anything else that needs to be done!<|||||>Thanks everyone for the comments! I've distilled the discussion into a to-do list, which I'll update as I go. Please let me know if anything seems missing or out of place. - [x] Use `AutoConfig` to make the test framework-agnostic - [x] Move the test into `test_logging.py` instead of creating a separate new file - [x] Create `enable_progress_bar` and `disable_progress_bar` - [x] Create an [issue](https://github.com/huggingface/datasets/issues/3586) on `datasets` side to unify API/de-deprecate `disable_progress_bar` - [x] Move docstring to comment - [x] Merge `tqdm_utils.py` into `logging.py` - [x] Remove `set_progress_bar_enabled()`
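For reference, a short usage sketch of the API as it is described in the to-do list above (the names are assumed to have landed in `transformers.utils.logging` as discussed; check your installed version):

```python
from transformers import AutoModel
from transformers.utils import logging

logging.disable_progress_bar()   # silence the tqdm bars transformers draws, e.g. during downloads
model = AutoModel.from_pretrained("bert-base-uncased")
logging.enable_progress_bar()    # restore the default behaviour
```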
transformers
15,166
open
Add FastSpeech2
# 🌟 New model addition ## Model description FastSpeech2 is a TTS model that outputs mel-spectrograms given some input text. From the [paper](https://arxiv.org/abs/2006.04558) abstract: > Non-autoregressive text to speech (TTS) models such as FastSpeech can synthesize speech significantly faster than previous autoregressive models with comparable quality. The training of FastSpeech model relies on an autoregressive teacher model for duration prediction (to provide more information as input) and knowledge distillation (to simplify the data distribution in output), which can ease the one-to-many mapping problem (i.e., multiple speech variations correspond to the same text) in TTS. However, FastSpeech has several disadvantages: 1) the teacher-student distillation pipeline is complicated and time-consuming, 2) the duration extracted from the teacher model is not accurate enough, and the target mel-spectrograms distilled from teacher model suffer from information loss due to data simplification, both of which limit the voice quality. In this paper, we propose FastSpeech 2, which addresses the issues in FastSpeech and better solves the one-to-many mapping problem in TTS by 1) directly training the model with ground-truth target instead of the simplified output from teacher, and 2) introducing more variation information of speech (e.g., pitch, energy and more accurate duration) as conditional inputs. Specifically, we extract duration, pitch and energy from speech waveform and directly take them as conditional inputs in training and use predicted values in inference. We further design FastSpeech 2s, which is the first attempt to directly generate speech waveform from text in parallel, enjoying the benefit of fully end-to-end inference. Experimental results show that 1) FastSpeech 2 achieves a 3x training speed-up over FastSpeech, and FastSpeech 2s enjoys even faster inference speed; 2) FastSpeech 2 and 2s outperform FastSpeech in voice quality, and FastSpeech 2 can even surpass autoregressive models. Audio samples are available at [this https URL](https://speechresearch.github.io/fastspeech2/). ## Open source status * [x] the model implementation is available * [x] the model weights are available * [x] who are the authors: @RayeRen The authors have not open-sourced their code implementation. However, the first author replied to an email inquiry and pointed me to the official implementation of DiffSinger, which includes FastSpeech2 code. This is likely the closest original implementation we can access. * [DiffSinger](https://github.com/MoonInTheRiver/DiffSinger/tree/master/modules/fastspeech) LJ Speech model weights are available [here](https://drive.google.com/file/d/1Zp45YjKkkv5vQSA7woHIqEggfyLqQdqs/view). Other notable unofficial implementations include: * [ming024](https://github.com/ming024/FastSpeech2) * [ESPmet](https://espnet.github.io/espnet/_modules/espnet2/tts/fastspeech2/fastspeech2.html) * [TensorFlowTTS](https://github.com/TensorSpeech/TensorFlowTTS/blob/master/tensorflow_tts/models/fastspeech2.py) ## Additional Context This issue is a revisiting of https://github.com/huggingface/transformers/pull/11135. cc @anton-l @patrickvonplaten
01-14-2022 21:53:58
01-14-2022 21:53:58
This thread is a summary of some details discussed in today's meeting with Patrick and Anton. - Ideally, use weights from [DiffSinger](https://github.com/MoonInTheRiver/DiffSinger), as it is the best approximation of the original implementation currently available - Check inference results - Compare with [fairseq's implementation](https://github.com/pytorch/fairseq/blob/main/fairseq/models/text_to_speech/fastspeech2.py) to see if there are architectural differences - (Tentative) Use non-neural vocoding algorithms (e.g. Griffin-Lim) or find a light-weight, clean library for optional dependency Below are some relevant observations made thus far. - Inference seems to work with DiffSinger's weights ([listen to LJ introduce HF in style](https://github.com/huggingface/transformers/files/7899982/out.zip)) - There are a total of 6 versions of the paper on [arXiv](https://arxiv.org/abs/2006.04558v6). In early versions of the paper, F0 is used, whereas the final version uses continuous wavelet transforms. To the best of my knowledge, all open source implementations of FastSpeech2 follow the early version. DiffSinger's checkpoint/configuration, however, uses CWT. Therefore, there are minor architectural differences between other implementations, including fairseq's, and DiffSinger's (even with checkpoint conversion, some keys relating to CWT would be missing). This is also noted in [ming024's repo](https://github.com/ming024/FastSpeech2). > This implementation is more similar to version 1, which uses F0 values as the pitch features. On the other hand, pitch spectrograms extracted by continuous wavelet transform are used as the pitch features in the later versions. My personal vote would be to use DiffSinger's weights and code given that it reflects the most up-to-date version of the paper. What do you think @patrickvonplaten @anton-l?<|||||>Sorry to answer that late here @jaketae. @anton-l, could you take over the guidance of the PR here? :-) I'm drowning a bit in GitHub issues at the moment :sweat_smile: My 2 cents, if the inference works well with DiffSinger's weights - let's go for this code as the official fastspeech2 code.<|||||>@patrickvonplaten, @jaketae and I agreed to take a brief break until the speech sprint ends, so that we can make a more informed decision about which version of the model to implement first (or focus on a more simple architecture altogether) <|||||>@patrickvonplaten, thank you for looping back to this issue! Adding on top of what Anton said, I might try to port more low hanging fruit, such as FastPitch or TransformerTTS in the meantime. I'll be coordinating with Anton throughout. Thank you both!<|||||>@patrickvonplaten @anton-l @jaketae I would be interested in helping with this, as I am super genuinely interested in Speech Synthesis and its been a hobby of mine. I am eventually looking to return back to (Local / Serverless) Machine Learning Engineering from Mobile Software Development in the payments domain. I was mainly working in the Mobile Machine Learning Space before DL was a thing. I worked on feature extraction from gyroscope accelerometer motion signals. I am a bit rusty however, since more recently. I have been pushed to take a leadership role. Anyways talk is cheap, delivery means more to me. I have gotten Fast Pitch 1.1 to work on custom dataset and fine tuned it on this popular actress. ``` Here's a sample of this https://vocaroo.com/14E4KeW0ymXI yeah we could be living in a "her" based universe in 10 years? 
powered by hugging face =) In case you missed this movie https://en.wikipedia.org/wiki/Her_(film) ``` I will be looking to apply to the wild card position or the pytorch engineer position once I get about 4 weeks of leetcode practice and read and memorize a few things in the pytorch book. Let me know how I can help, thanks!<|||||>Hey @ArEnSc, thank you for your interest in the FastSpeech2 integration into HF transformers! Glad to see someone who is also interested in TTS. At the moment, the integration is FastSpeech2 using weights from fairseq. I also briefly considered FastPitch, but FS2 is the priority at the moment. If you would like to contribute FastPitch, please feel free to do so! The FastSpeech2 PR is WIP at the moment, but when it matures it will likely introduce a number of TTS-specific APIs that you might want to reference in your own work with FastPitch. If you have any questions, ideas, or feedback, they are always welcome. I'm not involved in hiring, but I believe the door is always open. Feel free to apply, and I wish you the best of luck!<|||||>@jaketae I'll take a look at that FastSpeech2 PR. I am going through the Hugging Face course to get an idea of the API surface right now; it seems straightforward, I believe. I'll get some of the groundwork started and then wait for the TTS-specific APIs to mature =)
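On the tentative "non-neural vocoding (e.g. Griffin-Lim)" option mentioned earlier in this thread, here is a rough sketch of turning a predicted mel-spectrogram into audio with librosa's Griffin-Lim inversion. The file name and the STFT/mel parameters are placeholders and would have to match whatever the FastSpeech2 checkpoint was trained with:

```python
import numpy as np
import librosa
import soundfile as sf

# `mel` is assumed to be an (n_mels, frames) mel power spectrogram, e.g. the
# de-normalized output of a FastSpeech2-style acoustic model.
mel = np.load("mel.npy")  # placeholder input

audio = librosa.feature.inverse.mel_to_audio(
    mel,
    sr=22050,
    n_fft=1024,
    hop_length=256,
    win_length=1024,
    n_iter=60,  # Griffin-Lim iterations
)
sf.write("out.wav", audio, 22050)
```

A Griffin-Lim fallback like this keeps the model card dependency-light, at the cost of noticeably lower audio quality than a neural vocoder.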
transformers
15,165
closed
Update tutorial docs
This is a first draft of tutorial docs for the `pipeline`, `AutoClasses`, and how to preprocess data. Main updates include: - How to use the `pipeline` for image and vision tasks. - Highlight the `AutoFeatureExtractor` and `AutoProcessor` for image, vision and multimodal tasks. - Add tutorials for how to preprocess audio, vision, and multimodal data. I'd also really appreciate it if @patrickvonplaten or @anton-l could take a look at the audio tutorial, and if @NielsRogge could take a look at the vision and multimodal tutorials (thank you so much for your notebooks! ❤️ ). Once we have a concept guide section, I think we should move this [section](https://huggingface.co/docs/transformers/master/en/preprocessing#everything-you-always-wanted-to-know-about-padding-and-truncation) about padding/truncation there as well as the summary pages about the tasks and models. The preprocess document is also quite long now, so please let me know if you'd like me to create a separate page for each modality. I think this would make it easier for users to read and find what they're looking for.
01-14-2022 18:48:16
01-14-2022 18:48:16
Thanks for the feedback y'all! I've also made a few additional changes: - Rewrite multimodal tutorial with a different dataset (LJ Speech) because we can't use TIMIT. - Add output image for preprocessed image so users can see what it looks like. Let me know if there is anything else you'd like to see, otherwise I think we are ready to merge!
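As a quick illustration of the vision coverage described in this PR (the checkpoint and image URL below are arbitrary examples chosen here, not necessarily the ones used in the tutorials):

```python
import requests
from PIL import Image
from transformers import pipeline, AutoFeatureExtractor

url = "http://images.cocodataset.org/val2017/000000039769.jpg"

# pipeline route
classifier = pipeline("image-classification", model="google/vit-base-patch16-224")
print(classifier(url)[:2])

# preprocessing route with an AutoFeatureExtractor
image = Image.open(requests.get(url, stream=True).raw)
feature_extractor = AutoFeatureExtractor.from_pretrained("google/vit-base-patch16-224")
inputs = feature_extractor(images=image, return_tensors="pt")
print(inputs["pixel_values"].shape)  # e.g. (1, 3, 224, 224)
```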
transformers
15,164
closed
[WIP] add ctc speech streaming
# What does this PR do? <!-- Congratulations! You've made it this far! You're not quite done yet though. Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution. Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change. Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost. --> <!-- Remove if not applicable --> Fixes # (issue) ## Before submitting - [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case). - [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section? - [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link to it if that's the case. - [ ] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/master/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/master/docs#writing-source-documentation). - [ ] Did you write any new necessary tests? ## Who can review? Anyone in the community is free to review the PR once the tests have passed. Feel free to tag members/contributors who may be interested in your PR. <!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @ If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**. Please tag fewer than 3 people. Models: - albert, bert, xlm: @LysandreJik - blenderbot, bart, marian, pegasus, encoderdecoder, t5: @patrickvonplaten, @patil-suraj - longformer, reformer, transfoxl, xlnet: @patrickvonplaten - fsmt: @stas00 - funnel: @sgugger - gpt2: @patrickvonplaten, @LysandreJik - rag: @patrickvonplaten, @lhoestq - tensorflow: @LysandreJik Library: - benchmarks: @patrickvonplaten - deepspeed: @stas00 - ray/raytune: @richardliaw, @amogkam - text generation: @patrickvonplaten - tokenizers: @n1t0, @LysandreJik - trainer: @sgugger - pipelines: @LysandreJik Documentation: @sgugger HF projects: - datasets: [different repo](https://github.com/huggingface/datasets) - rust tokenizers: [different repo](https://github.com/huggingface/tokenizers) Examples: - maintained examples (not research project or legacy): @sgugger, @patil-suraj - research_projects/bert-loses-patience: @JetRunner - research_projects/distillation: @VictorSanh -->
01-14-2022 18:33:58
01-14-2022 18:33:58
What's missing here? Anything you could use some support with?<|||||>Superseded by https://github.com/huggingface/transformers/pull/15309
transformers
15,163
closed
[Speech models] Disable non-existing chunking in tests
# What does this PR do? <!-- Congratulations! You've made it this far! You're not quite done yet though. Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution. Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change. Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost. --> <!-- Remove if not applicable --> Speech models don't have feed-forward chunking implemented so it makes no sense to test for it. ## Before submitting - [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case). - [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section? - [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link to it if that's the case. - [ ] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/master/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/master/docs#writing-source-documentation). - [ ] Did you write any new necessary tests? ## Who can review? Anyone in the community is free to review the PR once the tests have passed. Feel free to tag members/contributors who may be interested in your PR. <!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @ If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**. Please tag fewer than 3 people. Models: - albert, bert, xlm: @LysandreJik - blenderbot, bart, marian, pegasus, encoderdecoder, t5: @patrickvonplaten, @patil-suraj - longformer, reformer, transfoxl, xlnet: @patrickvonplaten - fsmt: @stas00 - funnel: @sgugger - gpt2: @patrickvonplaten, @LysandreJik - rag: @patrickvonplaten, @lhoestq - tensorflow: @LysandreJik Library: - benchmarks: @patrickvonplaten - deepspeed: @stas00 - ray/raytune: @richardliaw, @amogkam - text generation: @patrickvonplaten - tokenizers: @n1t0, @LysandreJik - trainer: @sgugger - pipelines: @LysandreJik Documentation: @sgugger HF projects: - datasets: [different repo](https://github.com/huggingface/datasets) - rust tokenizers: [different repo](https://github.com/huggingface/tokenizers) Examples: - maintained examples (not research project or legacy): @sgugger, @patil-suraj - research_projects/bert-loses-patience: @JetRunner - research_projects/distillation: @VictorSanh -->
01-14-2022 17:28:04
01-14-2022 17:28:04
transformers
15,162
closed
Update from keras2onnx to tf2onnx
# What does this PR do? [keras2onnx](https://github.com/onnx/keras-onnx) has been dead for a while, with development having moved to [tf2onnx](https://github.com/onnx/tensorflow-onnx/tree/6c417b4210285208ba7165068318542d9e3d9656). It is the successor and is also backwards-compatible. In fact, our tests were failing right at the import level (i.e. `import keras2onnx` fails with recent `keras` versions). These changes fix the tests for TF+ONNX. Example of a test command that gets fixed with this PR: ```RUN_SLOW=1 pytest tests/test_modeling_tf_bert.py::TFBertModelTest::test_onnx_runtime_optimize```
01-14-2022 17:01:54
01-14-2022 17:01:54
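For reference, the keras2onnx conversion call the old tests relied on is roughly replaced by tf2onnx's Keras converter. A hedged sketch of that replacement (the model choice, input signature, and opset are illustrative; the exact test wiring lives in the PR itself):

```python
import tensorflow as tf
import tf2onnx
from transformers import TFBertModel

model = TFBertModel.from_pretrained("bert-base-cased")

# tf2onnx needs an explicit input signature for subclassed Keras models.
spec = (
    tf.TensorSpec((None, None), tf.int32, name="input_ids"),
    tf.TensorSpec((None, None), tf.int32, name="attention_mask"),
)
onnx_model, _ = tf2onnx.convert.from_keras(
    model, input_signature=spec, opset=13, output_path="bert.onnx"
)
```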
transformers
15,161
closed
Add support for encoder outputs in `model.generate` for Flax models
# 🚀 Feature request <!-- A clear and concise description of the feature proposal. Please provide a link to the paper and code in case they exist. --> Add support for encoder outputs in `model.generate` for Flax models (similar to PyTorch models). ## Motivation <!-- Please outline the motivation for the proposal. Is your feature request related to a problem? e.g., I'm always frustrated when [...]. If this is related to another GitHub issue, please link here too. --> With JAX, we need to pass a seed for each generation iteration. So when we want different samples, we may need to run the same function multiple times. For Encoder/Decoder models such as Bart, we could calculate the encoder outputs once and then directly pass them to the decoder, which could potentially save half of the computation (when the encoder and decoder have the same size). ## Your contribution <!-- Is there any way that you could help, e.g. by submitting a PR? Make sure to read the CONTRIBUTING.MD readme: https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md --> Not sure of the best way to implement it, but it seems that they typically come from `model._prepare_encoder_decoder_kwargs_for_generation` @patil-suraj @patrickvonplaten
01-14-2022 15:54:05
01-14-2022 15:54:05
Hey @borisdayma, Aren't we already doing this here: https://github.com/huggingface/transformers/blob/b8810847d0576e3c142854ad3b8a607ecd3df291/src/transformers/generation_flax_utils.py#L140 ? <|||||>If you do: ```python model.generate(…) model.generate(…) #same params ``` You will get two calls to `model.encode(…)`. You will call `model.generate(…)` several times if you want, for example, a bunch of predictions (dalle-mini case) and then sort them with another model (CLIP scoring). In that case the encoder part does not depend on the key, so we could cache it and allow passing it as an argument to `model.generate(encoder_outputs = manually_cached_value)`. I think it's just about checking if `encoder_outputs` is part of `model_kwargs` in `_prepare_encoder_decoder_kwargs_for_generation`.<|||||>I see! Yeah you're right, it should be as easy as this. We can add an `encoder_outputs` argument to the `generate()` function and then only call `encode()` if it's not present (that's what we're doing in PyTorch as well). Would you like to open a PR about it maybe? :-)
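A sketch of the usage being proposed here, with the caveat that the `encoder_outputs` argument to `generate()` is exactly the feature requested in this issue and may not exist in older versions. The checkpoint and prompt are illustrative:

```python
import jax
from transformers import FlaxBartForConditionalGeneration, BartTokenizer

model = FlaxBartForConditionalGeneration.from_pretrained("facebook/bart-base")
tokenizer = BartTokenizer.from_pretrained("facebook/bart-base")
inputs = tokenizer("UN Chief says there is no military solution", return_tensors="np")

# Run the encoder once and reuse its outputs for every sampled generation.
encoder_outputs = model.encode(**inputs)

for seed in range(4):
    key = jax.random.PRNGKey(seed)
    out = model.generate(
        inputs["input_ids"],
        attention_mask=inputs["attention_mask"],
        encoder_outputs=encoder_outputs,  # proposed argument from this issue
        do_sample=True,
        prng_key=key,
    )
    print(tokenizer.batch_decode(out.sequences, skip_special_tokens=True))
```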
transformers
15,160
closed
Fix typo in test_configuration_common.py
# What does this PR do? Fixes a small typo in test_configuration_common.py
01-14-2022 13:43:55
01-14-2022 13:43:55
transformers
15,159
closed
Fix RuntimeError on generation_utils.py
This PR fixes the runtime error below in generation_utils.py: ``` RuntimeError: Integer division of tensors using div or / is no longer supported, and in a future release div will perform true division as in Python 3. Use true_divide or floor_divide (// in Python) instead. ```
01-14-2022 12:57:58
01-14-2022 12:57:58
Think https://github.com/huggingface/transformers/pull/15498 supersedes this PR a bit as it corrects all occurrences of PyTorch's integer division. Hope it's ok if we go for https://github.com/huggingface/transformers/pull/15498 instead :sweat_smile:
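For context, the integer-division deprecation these PRs address boils down to the pattern below (variable names follow the warning quoted in issue #15169; this is a sketch of the idea, not the exact patch):

```python
import torch

next_tokens = torch.tensor([7, 123, 456])
vocab_size = 50

# Old pattern: integer tensor division with `/` or `//`, which newer PyTorch
# versions warn about or reject outright:
# next_indices = next_tokens // vocab_size

# Replacement with an explicit rounding mode:
next_indices = torch.div(next_tokens, vocab_size, rounding_mode="floor")
next_token_ids = next_tokens % vocab_size
print(next_indices, next_token_ids)
```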
transformers
15,158
closed
fix BertTokenizerFast `tokenize_chinese_chars` arg
# What does this PR do? <!-- Congratulations! You've made it this far! You're not quite done yet though. Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution. Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change. Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost. --> <!-- Remove if not applicable --> Fixes #15156 This PR proposes a change that allows to adapt a bert fast tokenizer according to the `tokenize_chinese_chars` value given during the initialization (this requires to fix the `backend_tokenizer`). This change affects the following tokenizers: - `BertTokenizerFast` - `DistilBertTokenizerFast` - `DPRContextEncoderTokenizerFast` - `DPRQuestionEncoderTokenizerFast` - `DPRReaderTokenizerFast` - `SqueezeBertTokenizerFast` ## Before submitting - [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case). - [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section? - [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link to it if that's the case. - [ ] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/master/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/master/docs#writing-source-documentation). - [x] Did you write any new necessary tests? ## Who can review? Anyone in the community is free to review the PR once the tests have passed. I would love to have your review @LysandreJik or @sgugger <!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @ If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**. Please tag fewer than 3 people. Models: - albert, bert, xlm: @LysandreJik - blenderbot, bart, marian, pegasus, encoderdecoder, t5: @patrickvonplaten, @patil-suraj - longformer, reformer, transfoxl, xlnet: @patrickvonplaten - fsmt: @stas00 - funnel: @sgugger - gpt2: @patrickvonplaten, @LysandreJik - rag: @patrickvonplaten, @lhoestq - tensorflow: @LysandreJik Library: - benchmarks: @patrickvonplaten - deepspeed: @stas00 - ray/raytune: @richardliaw, @amogkam - text generation: @patrickvonplaten - tokenizers: @n1t0, @LysandreJik - trainer: @sgugger - pipelines: @LysandreJik Documentation: @sgugger HF projects: - datasets: [different repo](https://github.com/huggingface/datasets) - rust tokenizers: [different repo](https://github.com/huggingface/tokenizers) Examples: - maintained examples (not research project or legacy): @sgugger, @patil-suraj - research_projects/bert-loses-patience: @JetRunner - research_projects/distillation: @VictorSanh -->
01-14-2022 12:19:38
01-14-2022 12:19:38
Very nice :)
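Roughly, the kind of adjustment this PR automates can also be done by hand on the fast tokenizer's backend normalizer. A sketch using the `tokenizers` library; the other normalizer flags below are set to BERT-uncased defaults as an assumption and should mirror the checkpoint you load:

```python
from tokenizers import normalizers
from transformers import BertTokenizerFast

tokenizer = BertTokenizerFast.from_pretrained("bert-base-uncased")

# Rebuild the backend BertNormalizer so Chinese characters are no longer
# surrounded by spaces (i.e. no longer split into single-character tokens)
# before WordPiece runs.
tokenizer.backend_tokenizer.normalizer = normalizers.BertNormalizer(
    clean_text=True,
    handle_chinese_chars=False,  # the `tokenize_chinese_chars=False` behaviour
    strip_accents=None,
    lowercase=True,
)
print(tokenizer.tokenize("的人有"))  # expected per issue #15156: ['的', '##人', '##有']
```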
transformers
15,157
closed
Fix RuntimeError on generation_utils.py
This PR fixes the runtime error below in generation_utils.py: ``` RuntimeError: Integer division of tensors using div or / is no longer supported, and in a future release div will perform true division as in Python 3. Use true_divide or floor_divide (// in Python) instead. ```
01-14-2022 12:15:36
01-14-2022 12:15:36
transformers
15,156
closed
the `tokenize_chinese_chars` argument is not always taken into account with the fast version of the bert tokenizer
## Environment info <!-- You can run the command `transformers-cli env` and copy-and-paste its output below. Don't forget to fill out the missing fields in that output! --> - `transformers` version: 4.16.0.dev0 - Platform: Linux-5.11.0-46-generic-x86_64-with-glibc2.17 - Python version: 3.8.12 - PyTorch version (GPU?): 1.10.1+cu102 (False) - Tensorflow version (GPU?): 2.7.0 (False) - Flax version (CPU?/GPU?/TPU?): 0.3.6 (cpu) - Jax version: 0.2.26 - JaxLib version: 0.1.75 - Using GPU in script?: no - Using distributed or parallel set-up in script?: no ### Who can help <!-- Your issue will be replied to more quickly if you can figure out the right person to tag with @ If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**. Please tag fewer than 3 people. Models: - ALBERT, BERT, XLM, DeBERTa, DeBERTa-v2, ELECTRA, MobileBert, SqueezeBert: @LysandreJik - T5, BART, Marian, Pegasus, EncoderDecoder: @patrickvonplaten - Blenderbot, MBART: @patil-suraj - Longformer, Reformer, TransfoXL, XLNet, FNet, BigBird: @patrickvonplaten - FSMT: @stas00 - Funnel: @sgugger - GPT-2, GPT: @patrickvonplaten, @LysandreJik - RAG, DPR: @patrickvonplaten, @lhoestq - TensorFlow: @Rocketknight1 - JAX/Flax: @patil-suraj - TAPAS, LayoutLM, LayoutLMv2, LUKE, ViT, BEiT, DEiT, DETR, CANINE: @NielsRogge - GPT-Neo, GPT-J, CLIP: @patil-suraj - Wav2Vec2, HuBERT, SpeechEncoderDecoder, UniSpeech, UniSpeechSAT, SEW, SEW-D, Speech2Text: @patrickvonplaten, @anton-l If the model isn't in the list, ping @LysandreJik who will redirect you to the correct contributor. Library: - Benchmarks: @patrickvonplaten - Deepspeed: @stas00 - Ray/raytune: @richardliaw, @amogkam - Text generation: @patrickvonplaten @narsil - Tokenizers: @SaulLu - Trainer: @sgugger - Pipelines: @Narsil - Speech: @patrickvonplaten, @anton-l - Vision: @NielsRogge, @sgugger Documentation: @sgugger Model hub: - for issues with a model, report at https://discuss.huggingface.co/ and tag the model's creator. HF projects: - datasets: [different repo](https://github.com/huggingface/datasets) - rust tokenizers: [different repo](https://github.com/huggingface/tokenizers) Examples: - maintained examples (not research project or legacy): @sgugger, @patil-suraj For research projetcs, please ping the contributor directly. 
For example, on the following projects: - research_projects/bert-loses-patience: @JetRunner - research_projects/distillation: @VictorSanh --> ## Information Model I am using (Bert, XLNet ...): The problem arises when using: * [x] the official example scripts: (give details below) * [ ] my own modified scripts: (give details below) The tasks I am working on is: * [ ] an official GLUE/SQUaD task: (give the name) * [ ] my own task or dataset: (give details below) ## To reproduce Steps to reproduce the behavior: ```python from transformers import BertTokenizer, BertTokenizerFast list_of_commun_chinese_char = ["的", "人", "有"] text = "".join(list_of_commun_chinese_char) print(text) # 的人有 model_name = "bert-base-uncased" tokenizer_slow = BertTokenizer.from_pretrained(model_name, tokenize_chinese_chars=False) tokenizer_slow.tokenize(text) # ['的', '##人', '##有'] tokenizer_slow = BertTokenizer.from_pretrained(model_name, tokenize_chinese_chars=True) tokenizer_slow.tokenize(text) # ['的', '人', '有'] tokenizer_fast = BertTokenizerFast.from_pretrained(model_name, tokenize_chinese_chars=False) tokenizer_fast.tokenize(text) # ['的', '人', '有'] tokenizer_fast = BertTokenizerFast.from_pretrained(model_name, tokenize_chinese_chars=True) tokenizer_fast.tokenize(text) # ['的', '人', '有'] ``` <!-- If you have code snippets, error messages, stack traces please provide them here as well. Important! Use code tags to correctly format your code. See https://help.github.com/en/github/writing-on-github/creating-and-highlighting-code-blocks#syntax-highlighting Do not use screenshots, as they are hard to read and (more importantly) don't allow others to copy-and-paste your code.--> ## Expected behavior If the user indicates `tokenize_chinese_chars=False` when he initializes a fast bert tokenizer, we expect that this characteristic is reflected on the tokenizer. In other words, in the previous example, we expect that: ```python tokenizer_fast = BertTokenizerFast.from_pretrained(model_name, tokenize_chinese_chars=False) tokenizer_fast.tokenize(text) # ['的', '##人', '##有'] ```
01-14-2022 12:12:46
01-14-2022 12:12:46
transformers
15,155
closed
[Robust Speech Event] Add guides
# What does this PR do? <!-- Congratulations! You've made it this far! You're not quite done yet though. Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution. Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change. Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost. --> <!-- Remove if not applicable --> Fixes # (issue) ## Before submitting - [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case). - [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section? - [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link to it if that's the case. - [ ] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/master/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/master/docs#writing-source-documentation). - [ ] Did you write any new necessary tests? ## Who can review? Anyone in the community is free to review the PR once the tests have passed. Feel free to tag members/contributors who may be interested in your PR. <!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @ If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**. Please tag fewer than 3 people. Models: - albert, bert, xlm: @LysandreJik - blenderbot, bart, marian, pegasus, encoderdecoder, t5: @patrickvonplaten, @patil-suraj - longformer, reformer, transfoxl, xlnet: @patrickvonplaten - fsmt: @stas00 - funnel: @sgugger - gpt2: @patrickvonplaten, @LysandreJik - rag: @patrickvonplaten, @lhoestq - tensorflow: @LysandreJik Library: - benchmarks: @patrickvonplaten - deepspeed: @stas00 - ray/raytune: @richardliaw, @amogkam - text generation: @patrickvonplaten - tokenizers: @n1t0, @LysandreJik - trainer: @sgugger - pipelines: @LysandreJik Documentation: @sgugger HF projects: - datasets: [different repo](https://github.com/huggingface/datasets) - rust tokenizers: [different repo](https://github.com/huggingface/tokenizers) Examples: - maintained examples (not research project or legacy): @sgugger, @patil-suraj - research_projects/bert-loses-patience: @JetRunner - research_projects/distillation: @VictorSanh -->
01-14-2022 11:51:05
01-14-2022 11:51:05
transformers
15,154
closed
Fixing flaky test (hopefully).
# What does this PR do? Fixed flaky test: ``` RUN_PIPELINE_TESTS=1 pytest -sv tests/test_pipelines_question_answering.py::QAPipelineTests::test_pt_XLNetConfig_XLNetForQuestionAnsweringSimple_XLNetTokenizer_nofeature_extractor ``` This was a real bug (contrary to `p_mask` invalid padding, which was linked to a bad rebase). Basically, the qa pipeline can handle batching, but since there are so many elements being passed around everywhere in that pipeline, some get padded (tensors), some not (everything else), leading to issues down the line. It was silent for most pipelines since padding is on the right (no issues for valid offsets), but XLNet has padding_side on the left, meaning the offsets were wrong: `input_ids = [pad pad, h, i]` and `offsets = [(0, 1), (1, 2)]` <!-- Congratulations! You've made it this far! You're not quite done yet though. Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution. Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change. Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost. --> <!-- Remove if not applicable --> ## Before submitting - [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case). - [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section? - [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link to it if that's the case. - [ ] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/master/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/master/docs#writing-source-documentation). - [ ] Did you write any new necessary tests? ## Who can review? Anyone in the community is free to review the PR once the tests have passed. Feel free to tag members/contributors who may be interested in your PR. <!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @ If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**. Please tag fewer than 3 people. 
Models: - albert, bert, xlm: @LysandreJik - blenderbot, bart, marian, pegasus, encoderdecoder, t5: @patrickvonplaten, @patil-suraj - longformer, reformer, transfoxl, xlnet: @patrickvonplaten - fsmt: @stas00 - funnel: @sgugger - gpt2: @patrickvonplaten, @LysandreJik - rag: @patrickvonplaten, @lhoestq - tensorflow: @LysandreJik Library: - benchmarks: @patrickvonplaten - deepspeed: @stas00 - ray/raytune: @richardliaw, @amogkam - text generation: @patrickvonplaten - tokenizers: @n1t0, @LysandreJik - trainer: @sgugger - pipelines: @LysandreJik Documentation: @sgugger HF projects: - datasets: [different repo](https://github.com/huggingface/datasets) - rust tokenizers: [different repo](https://github.com/huggingface/tokenizers) Examples: - maintained examples (not research project or legacy): @sgugger, @patil-suraj - research_projects/bert-loses-patience: @JetRunner - research_projects/distillation: @VictorSanh -->
01-14-2022 10:32:00
01-14-2022 10:32:00
transformers
15,153
closed
It would be better to add a function to train additional tokens for the pre-trained tokenizer, esp. for languages like Chinese.
https://github.com/huggingface/transformers/blob/96881729ce83cfc8e5fa04c903ee4296ad17cfbb/src/transformers/models/bert/tokenization_bert.py#L117 Lately, I have been using BERT to train an NER model for Chinese. I found that many Chinese characters in my data cannot be tokenized by the BERT model; literally, they are tokenized to [PAD] and cannot be given a definite word embedding vector. So it would be better to add a function to the tokenization class that can train new tokens and extend the old tokenizer. It would be useful. I am trying it.
01-14-2022 08:46:37
01-14-2022 08:46:37
it use the source code for generating bert tokenizer , i am finding the src.<|||||>Have you tried with the `bert-base-chinese` checkpoint? Also cc @SaulLu :)<|||||>> Have you tried with the `bert-base-chinese` checkpoint? > > Also cc @SaulLu :) Sure , i have tryed some examle is 锶 is not in bert_vocab() . i have use a dummy solution ``` tokenizer = AutoTokenizer.from_pretrained("ckiplab/bert-base-chinese-ner") model = AutoModelForTokenClassification.from_pretrained("ckiplab/bert-base-chinese-ner") def dummy_way_to_find_all_new_tokens_to_bert_tokenizer(fp,tokenizer): def _is_chinese_char(cp): """Checks whether CP is the codepoint of a CJK character.""" # This defines a "chinese character" as anything in the CJK Unicode block: # https://en.wikipedia.org/wiki/CJK_Unified_Ideographs_(Unicode_block) # # Note that the CJK Unicode block is NOT all Japanese and Korean characters, # despite its name. The modern Korean Hangul alphabet is a different block, # as is Japanese Hiragana and Katakana. Those alphabets are used to write # space-separated words, so they are not treated specially and handled # like the all of the other languages. if ( (cp >= 0x4E00 and cp <= 0x9FFF) or (cp >= 0x3400 and cp <= 0x4DBF) # or (cp >= 0x20000 and cp <= 0x2A6DF) # or (cp >= 0x2A700 and cp <= 0x2B73F) # or (cp >= 0x2B740 and cp <= 0x2B81F) # or (cp >= 0x2B820 and cp <= 0x2CEAF) # or (cp >= 0xF900 and cp <= 0xFAFF) or (cp >= 0x2F800 and cp <= 0x2FA1F) # ): # return True return False out=[] with open(fp,encoding='utf-8') as f: for i in f: for j in i: if _is_chinese_char(ord(j)) and j not in tokenizer.get_vocab(): out.append(j) return out print(1) tokenizer.add_tokens(dummy_way_to_find_all_new_tokens_to_bert_tokenizer('data/train1.txt',tokenizer)) #只能写一行,写一次. model.resize_token_embeddings(len(tokenizer)) ``` <|||||>@zhangbo2008 , your technique to add tokens to a tokenizer already trained because you want to fine-tune a model using this tokenizer seems very good to me. However, I'm not sure I understand your request / demand / question in this issue. :relaxed: <|||||>> @zhangbo2008 , your technique to add tokens to a tokenizer already trained because you want to fine-tune a model using this tokenizer seems very good to me. > > However, I'm not sure I understand your request / demand / question in this issue. ☺️ yes, you get the point . but I think my method is dummy. there is a better way to solve the problem, because in some language the token after trained is not one character ,like English . so it's a better way to get the bigger vocab by fix the bpe algorithm i think. but i haven't find some bpe algorithm code .<|||||>Thank you, I understand your request a little better! As you may have seen, `transformers` does not directly implement the algorithms that allow to train a tokenizer (and thus get the vocabulary, merge rules, etc). A method like `train_new_from_iterator`, actually uses a feature of the [`tokenizers` library ](https://huggingface.co/docs/tokenizers/python/latest/) (which has the advantage to be much faster as the library is coded in RUST). I opened [an issue](https://github.com/huggingface/tokenizers/issues/879) in this library to see if we can imagine to implement this kind of feature in tokenizers, it seems quite feasible for tokenization algorithms like BPE (for which [an issue](https://github.com/huggingface/tokenizers/issues/839) was already opened previously) but harder with tokenization algorithms like Unigram. 
It is therefore probably better to advance on this subject in the library tokenizers :relaxed: <|||||>hello,I wonder whether we can utilize the unused tokens in the tokenizer,because many tokenizer have saved many unused tokens,but I dont know how to implement it !Could anyone tell me how to to it<|||||>i am trying to read and learn the rust code in tokenizers library. rust is difficult to use . It is hard to config for VS code to run rust code . perhaps there is no IDE like pycharm for RUST. <|||||>I already give a solution by python. you can check this. https://github.com/zhangbo2008/bpe_algorithm_can_finetune_tokenizer<|||||>here is the example, definetely you should download py_bpe from upeer url. I change some code from others.. ``` import tqdm from py_bpe import BpeTokenizer from pathlib import Path savepath = Path("penguin_of_doom.vocab") corpus = """ hi every1 im new!!!!!!! *holds up spork* my name is katy but u can call me t3h PeNgU1N oF d00m!!!!!!!! lol…as u can see im very random!!!! thats why i came here, 2 meet random ppl like me ^_^… im 13 years old (im mature 4 my age tho!!) i like 2 watch invader zim w/ my girlfreind (im bi if u dont like it deal w/it) its our favorite tv show!!! bcuz its SOOOO random!!!! shes random 2 of course but i want 2 meet more random ppl =) like they say the more the merrier!!!! lol…neways i hope 2 make alot of freinds here so give me lots of commentses!!!! DOOOOOMMMM!!!!!!!!!!!!!!!! <--- me bein random again ^_^ hehe…toodles!!!!! love and waffles, t3h PeNgU1N oF d00m """ learn_bpe_args = dict( vocab_size=1000, pairable_chars="a-zA-Z0-9", ) bpet = BpeTokenizer.from_corpus(corpus, savepath, learn_bpe_args=learn_bpe_args) unk_char = "%" tokens = bpet.tokenize("t3h PeNgU1N oF d00m"+unk_char) print(tokens) finetune_corpus='''hi every1 im new sssdlaj ssdsajlfk ssdsafjkl的斯拉克福建烤老鼠大解放路卡啥的''' token_before_finetune=bpet.encode(finetune_corpus) print(token_before_finetune)#[22, 22, 22, 25, 23, 18, 0, 12, 22, 22, 123, 18, 0, 23, 28, 33, 12, 22, 22, 123, 220, 0, 33, 23, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0] print('we see there are too many zero means unk') #==========================adding code for extension: finetune a new tokenizer the_factor_of_new_added_token_divided_unk_number=1.5 # we set this factor because, we expand our tokenizer, so the new corpus must be many unk with old tokenizer, our new_tokenizer length of need a new_added_token, so we set a factor, if the factor is higher, we have more new tokenizer. the factor must be bigger than 1.0 new_tokenizer=bpet.finetune_tokenizer(finetune_corpus,the_factor_of_new_added_token_divided_unk_number) token_after_finetune=new_tokenizer.encode(finetune_corpus) print(token_after_finetune)#[239, 240, 244, 223, 12, 239, 123, 241, 246, 33, 12, 239, 123, 220, 223, 254, 224, 225, 226, 227, 228, 229, 230, 231, 232, 233, 234, 235, 236, 237, 238, 224] print("we see we have no unk for the token_after_finetune") ```<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread. Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored.
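For future readers of this thread: the `train_new_from_iterator` route mentioned above does not require touching the Rust code at all, since `transformers` exposes it on any fast tokenizer. A minimal sketch might look like the following; the checkpoint, file path and vocabulary size are placeholders to adapt to your own data, and note that this retrains the vocabulary from your corpus rather than extending the original one:

```python
from transformers import AutoTokenizer

old_tokenizer = AutoTokenizer.from_pretrained("bert-base-chinese")


def corpus_iterator(path="data/train1.txt", batch_size=1000):
    # Yield batches of raw text lines from the training file.
    with open(path, encoding="utf-8") as f:
        batch = []
        for line in f:
            batch.append(line)
            if len(batch) == batch_size:
                yield batch
                batch = []
        if batch:
            yield batch


# Train a fresh vocabulary on the new corpus, reusing the old tokenizer's
# configuration (normalizer, pre-tokenizer, special tokens, etc.).
new_tokenizer = old_tokenizer.train_new_from_iterator(corpus_iterator(), vocab_size=30000)
new_tokenizer.save_pretrained("my-chinese-tokenizer")
```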
transformers
15,152
closed
[Fix doc example] UniSpeechSatForPreTraining
# What does this PR do? In `UniSpeechSatForPreTraining`, this doc example fails ``` >>> feature_extractor = UniSpeechSatFeatureEncoder.from_pretrained("patrickvonplaten/unispeech_sat-base") >>> model = UniSpeechSatForPreTraining.from_pretrained("patrickvonplaten/unispeech_sat-base") ``` (can't find this checkpoint on the Hub) This PR changes it to ``` >>> feature_extractor = Wav2Vec2FeatureExtractor.from_pretrained("microsoft/unispeech-sat-base") >>> model = UniSpeechSatForPreTraining.from_pretrained("microsoft/unispeech-sat-base") ``` ## Who can review @patrickvonplaten
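For context, a quick way to sanity-check the corrected pair is simply to load both objects from the Hub (network access required; this is only a loading check, not a full pretraining example):

```python
from transformers import UniSpeechSatForPreTraining, Wav2Vec2FeatureExtractor

# Both of these should now resolve to an existing checkpoint on the Hub.
feature_extractor = Wav2Vec2FeatureExtractor.from_pretrained("microsoft/unispeech-sat-base")
model = UniSpeechSatForPreTraining.from_pretrained("microsoft/unispeech-sat-base")
print(type(feature_extractor).__name__, model.config.model_type)
```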
01-14-2022 08:31:50
01-14-2022 08:31:50
transformers
15,151
closed
Build dev doc
null
01-14-2022 07:50:45
01-14-2022 07:50:45
Thank you for your PR. The documentation will now be removed from the staging environment - feel free to reopen this PR to recreate it.<|||||>Thank you for your PR. The documentation will now be removed from the staging environment - feel free to reopen this PR to recreate it.<|||||>Thank you for your PR! The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_15151). All of your documentation changes will be reflected on that endpoint.<|||||>Thank you for your PR. The documentation will now be removed from the staging environment - feel free to reopen this PR to recreate it.<|||||>Closing a PR puts the message above ^ merging a PR does too, without the invitation to reopen the PR<|||||>Thank you for your PR! The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_15110).\n\nAll of your documentation changes will be reflected on that endpoint.<|||||>I'll remove the `\n\n` but other than that, does the message look good to you?<|||||>Closing
transformers
15,150
closed
[summarization example] better error message
https://github.com/huggingface/transformers/pull/15125 introduced a new required arg. This PR improves the error message so that it is easier to act on; the original message is not actionable unless one knows the script intimately. @sgugger
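For illustration only (the names below are made up and are not the exact ones in the script), the kind of actionable check being discussed boils down to telling the user which flag to pass, why, and only when it actually matters:

```python
def check_lang_arg(tokenizer, lang):
    """Raise an actionable error when a multilingual tokenizer is used without --lang."""
    # Illustrative list; the real script keeps its own list of multilingual tokenizers.
    multilingual = ("MBartTokenizer", "MBartTokenizerFast", "MBart50Tokenizer", "MBart50TokenizerFast")
    if tokenizer.__class__.__name__ in multilingual and lang is None:
        raise ValueError(
            f"{tokenizer.__class__.__name__} is a multilingual tokenizer: please pass --lang "
            "(e.g. --lang ro_RO) so the summarization script knows which language codes to set."
        )
```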
01-13-2022 21:53:37
01-13-2022 21:53:37
Wait, this is an optional argument that only needs to be set if MBart is used as a model. There is an assert later on in the script, so this test should just be removed from the postinit.<|||||>you're saying it was invalid logic in the first place in the original PR, correct? Need more tests then since the issue wasn't caught by normal live tests?<|||||>There is a [test](https://github.com/huggingface/transformers/blob/1eb40338ac41a2ffbb2137216de3bb63bb739aad/examples/pytorch/test_examples.py#L350) that catches the problem, it's just marked as slow.<|||||>ah, right, I forgot about the slow ones! thank you, Sylvain. So closing this as it is no longer relevant due to revert in https://github.com/huggingface/transformers/commit/96881729ce83cfc8e5fa04c903ee4296ad17cfbb
transformers
15,149
closed
[deepspeed tests] fix summarization
https://github.com/huggingface/transformers/pull/15125 has broken deepspeed tests by introducing a new required flag in summarization examples. This PR adapts to this change.
01-13-2022 21:39:13
01-13-2022 21:39:13
(rushed to merge as it unbreaks Deepspeed's live CI that uses our tests)
transformers
15,148
closed
Better dummies
# What does this PR do? This PR reworks how the dummy objects are built by using a metaclass, and makes sure that any classmethod used on them raises the appropriate error message. Fixes the bad error message when trying to do something like: ```python FlaxVisionEncoderDecoderModel.from_encoder_decoder_pretrained("vit", "gpt2") ``` with Flax not installed.
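For readers who want the gist, here is a rough sketch of the metaclass trick (illustrative only; the names and details in the actual PR may differ). The idea is to intercept every attribute access on the dummy class, so that classmethods such as `from_encoder_decoder_pretrained` hit the same friendly error as instantiation:

```python
class DummyObject(type):
    # Metaclass for dummy classes: any non-private attribute access on the class
    # (including classmethods) raises an actionable ImportError.
    def __getattribute__(cls, key):
        if key.startswith("_"):
            return super().__getattribute__(key)
        raise ImportError(
            f"{cls.__name__} requires the {cls._backends} backend(s), which are not installed."
        )


class FlaxVisionEncoderDecoderModel(metaclass=DummyObject):
    _backends = ["flax"]

    def __init__(self, *args, **kwargs):
        raise ImportError(f"{self.__class__.__name__} requires the {self._backends} backend(s).")


# FlaxVisionEncoderDecoderModel.from_encoder_decoder_pretrained("vit", "gpt2")
# now raises an ImportError pointing at the missing backend instead of a
# confusing AttributeError.
```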
01-13-2022 21:12:15
01-13-2022 21:12:15
Yes, it will raise the import error for any attribute, but at the same time the object should just not be used if the right frameworks are not installed.<|||||>The failure is unrelated, so merging.
transformers
15,147
closed
[doc] performance: Efficient Software Prebuilds
This PR adds a section on the PyTorch NGC Docker images for those who don't want to waste time figuring out their own builds. @sgugger
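For example, grabbing one of the monthly prebuilt images could look roughly like the following; the tag is only an example and changes every month (check the NGC catalog for the current one), and `--gpus all` assumes the NVIDIA container toolkit is installed:

```
docker pull nvcr.io/nvidia/pytorch:22.01-py3
docker run --gpus all -it --rm -v "$PWD":/workspace nvcr.io/nvidia/pytorch:22.01-py3
# then, inside the container:
pip install transformers datasets
```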
01-13-2022 20:07:17
01-13-2022 20:07:17
transformers
15,146
closed
Add TF glu activation function
# What does this PR do? Bite-sized PR that adds the GLU activation function for TF, as defined in the original paper. It is a requirement for a few speech models, like `Speech2Text`. We can see that it matches PyTorch's implementation ```python import numpy as np import torch from transformers.activations_tf import glu a = np.asarray([[1., 2., 3., 4.], [5., 6., 7., 8.]]) assert np.allclose(torch.nn.functional.glu(torch.tensor(a), dim=0).numpy(), glu(a, axis=0).numpy()) assert np.allclose(torch.nn.functional.glu(torch.tensor(a), dim=1).numpy(), glu(a, axis=1).numpy()) assert np.allclose(torch.nn.functional.glu(torch.tensor(a)).numpy(), glu(a).numpy()) ``` After I get one model using it fully operational, I'm going to attempt to add it to Keras 😎
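For reference, a minimal implementation along these lines (the code actually added in the PR may differ in details such as input validation) is simply a split followed by a sigmoid gate:

```python
import tensorflow as tf


def glu(x, axis=-1):
    """Gated Linear Unit: split `x` into two halves `a` and `b` along `axis`
    and return `a * sigmoid(b)`, as in the original GLU paper."""
    a, b = tf.split(x, 2, axis=axis)
    return a * tf.math.sigmoid(b)
```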
01-13-2022 20:04:26
01-13-2022 20:04:26
A little late but looks good to me!
transformers
15,145
closed
Is it possible to use ONNX for summarisation with BART yet?
Hi there, I've been trying out the latest iteration of ONNX support for BART because it looks like it now supports past key values which I think means I could generate summaries from an ONNX version of my BART models. I seem to be able to get the model converting nicely as you can see here ```python import numpy as np from transformers import AutoTokenizer from onnxruntime import InferenceSession model_ckpt = "/home/ubuntu/article-summarisation/checkpoints/bart/2021_08_20_15_08_00/model" onnx_path = "/home/ubuntu/article-summarisation/checkpoints/bart/2021_08_20_15_08_00/onnx" tokenizer = AutoTokenizer.from_pretrained("facebook/bart-large-cnn") tokenizer.save_pretrained(model_ckpt) !python -m transformers.onnx --model={model_ckpt} --feature="seq2seq-lm-with-past" {onnx_path} --atol 1e-04 ``` ``` Using framework PyTorch: 1.10.1+cu102 Overriding 1 configuration item(s) - use_cache -> True /home/ubuntu/.pyenv/versions/3.9.9/envs/py399/lib/python3.9/site-packages/torch/onnx/utils.py:90: UserWarning: 'enable_onnx_checker' is deprecated and ignored. It will be removed in the next PyTorch release. To proceed despite ONNX checker failures, catch torch.onnx.ONNXCheckerError. warnings.warn("'enable_onnx_checker' is deprecated and ignored. It will be removed in " /home/ubuntu/.pyenv/versions/3.9.9/envs/py399/lib/python3.9/site-packages/torch/onnx/utils.py:103: UserWarning: `use_external_data_format' is deprecated and ignored. Will be removed in next PyTorch release. The code will work as it is False if models are not larger than 2GB, Otherwise set to False because of size limits imposed by Protocol Buffers. warnings.warn("`use_external_data_format' is deprecated and ignored. Will be removed in next " /home/ubuntu/.pyenv/versions/3.9.9/envs/py399/lib/python3.9/site-packages/transformers/models/bart/modeling_bart.py:217: TracerWarning: Converting a tensor to a Python boolean might cause the trace to be incorrect. We can't record the data flow of Python values, so this value will be treated as a constant in the future. This means that the trace might not generalize to other inputs! if attn_weights.size() != (bsz * self.num_heads, tgt_len, src_len): /home/ubuntu/.pyenv/versions/3.9.9/envs/py399/lib/python3.9/site-packages/transformers/models/bart/modeling_bart.py:223: TracerWarning: Converting a tensor to a Python boolean might cause the trace to be incorrect. We can't record the data flow of Python values, so this value will be treated as a constant in the future. This means that the trace might not generalize to other inputs! if attention_mask.size() != (bsz, 1, tgt_len, src_len): /home/ubuntu/.pyenv/versions/3.9.9/envs/py399/lib/python3.9/site-packages/transformers/models/bart/modeling_bart.py:254: TracerWarning: Converting a tensor to a Python boolean might cause the trace to be incorrect. We can't record the data flow of Python values, so this value will be treated as a constant in the future. This means that the trace might not generalize to other inputs! if attn_output.size() != (bsz * self.num_heads, tgt_len, self.head_dim): /home/ubuntu/.pyenv/versions/3.9.9/envs/py399/lib/python3.9/site-packages/transformers/models/bart/modeling_bart.py:889: TracerWarning: Converting a tensor to a Python boolean might cause the trace to be incorrect. We can't record the data flow of Python values, so this value will be treated as a constant in the future. This means that the trace might not generalize to other inputs! 
if input_shape[-1] > 1: /home/ubuntu/.pyenv/versions/3.9.9/envs/py399/lib/python3.9/site-packages/transformers/models/bart/modeling_bart.py:88: TracerWarning: Converting a tensor to a Python boolean might cause the trace to be incorrect. We can't record the data flow of Python values, so this value will be treated as a constant in the future. This means that the trace might not generalize to other inputs! if past_key_values_length > 0: Validating ONNX model... -[✓] ONNX model output names match reference model ({'present.10.decoder.key', 'present.11.decoder.value', 'present.6.decoder.value', 'present.1.decoder.value', 'present.11.decoder.key', 'present.11.encoder.key', 'present.11.encoder.value', 'present.10.encoder.value', 'present.9.decoder.value', 'present.9.encoder.value', 'present.6.encoder.key', 'present.3.encoder.value', 'present.3.decoder.key', 'logits', 'present.4.decoder.value', 'present.4.encoder.value', 'present.8.encoder.value', 'present.7.decoder.key', 'present.7.encoder.value', 'present.0.encoder.key', 'present.5.decoder.key', 'present.9.encoder.key', 'present.2.decoder.key', 'present.10.encoder.key', 'present.3.decoder.value', 'present.1.encoder.value', 'present.10.decoder.value', 'present.9.decoder.key', 'present.0.encoder.value', 'present.2.decoder.value', 'present.5.decoder.value', 'present.8.encoder.key', 'present.5.encoder.value', 'present.0.decoder.value', 'present.5.encoder.key', 'present.7.encoder.key', 'present.2.encoder.value', 'present.6.decoder.key', 'present.7.decoder.value', 'present.4.decoder.key', 'present.1.encoder.key', 'present.2.encoder.key', 'present.0.decoder.key', 'present.1.decoder.key', 'present.3.encoder.key', 'present.4.encoder.key', 'present.8.decoder.key', 'present.8.decoder.value', 'present.6.encoder.value'}) - Validating ONNX Model output "logits": -[✓] (2, 2, 50265) matches (2, 2, 50265) -[✓] all values close (atol: 0.0001) - Validating ONNX Model output "present.0.decoder.key": -[✓] (2, 16, 7, 64) matches (2, 16, 7, 64) -[✓] all values close (atol: 0.0001) -[✓] all values close (atol: 0.0001) - Validating ONNX Model output "present.11.encoder.value": -[✓] (2, 16, 8, 64) matches (2, 16, 8, 64) -[✓] all values close (atol: 0.0001) All good, model saved at: /home/ubuntu/article-summarisation/checkpoints/bart/2021_08_20_15_08_00/onnx/model.onnx ``` But it all falls apart when I run anything ```python ort_session = InferenceSession(f"{onnx_path}/model.onnx") inputs = tokenizer("My name is Henry", return_tensors="pt") onnx_outputs = ort_session.run(None, dict(inputs)) ``` ``` --------------------------------------------------------------------------- ValueError Traceback (most recent call last) Input In [5], in <module> ----> 1 onnx_outputs = ort_session.run(None, dict(inputs)) File ~/.pyenv/versions/3.9.9/envs/py399/lib/python3.9/site-packages/onnxruntime/capi/onnxruntime_inference_collection.py:188, in Session.run(self, output_names, input_feed, run_options) 186 # the graph may have optional inputs used to override initializers. allow for that. 187 if num_inputs < num_required_inputs: --> 188 raise ValueError("Model requires {} inputs. Input Feed contains {}".format(num_required_inputs, num_inputs)) 189 if not output_names: 190 output_names = [output.name for output in self._outputs_meta] ValueError: Model requires 52 inputs. Input Feed contains 2 ``` Am I missing something, or is this still beyond the scope of what's possible? 
I've spent the day playing around with different approaches, which throw up different errors, but this feels like the closest I've got to success.
01-13-2022 19:38:28
01-13-2022 19:38:28
I haven't run the script, but reading off of your code, is there a reason why you put `None` in `onnx_outputs = ort_session.run(None, dict(inputs))`? My understanding is that you would need something like `outputs = ort_session.run(["last_hidden_state"], dict(inputs))` instead, where the string key is obtained from `BartOnnxConfig(BartConfig()).outputs.keys()`.<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread. Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored.<|||||>@HenryDashwood, check whether all of the required inputs are actually provided. Since you exported the model with the seq2seq-lm-with-past feature, you also need to pass the decoder inputs and the past key/value tensors in the input feed.
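To make the last comment more concrete, here is a rough illustration of where the 52 inputs come from and how to inspect them. The model path, decoder start token and shapes below are illustrative, and the exact tensor names depend on the export, so listing `get_inputs()` first is the safest move:

```python
import numpy as np
from onnxruntime import InferenceSession
from transformers import AutoTokenizer

sess = InferenceSession("onnx/model.onnx")  # placeholder path to the exported model
input_names = [i.name for i in sess.get_inputs()]
print(input_names)
# Typically: input_ids, attention_mask, decoder_input_ids, decoder_attention_mask,
# plus past_key_values.{layer}.{decoder,encoder}.{key,value} -> 4 + 12 * 4 = 52 tensors.

tokenizer = AutoTokenizer.from_pretrained("facebook/bart-large-cnn")
enc = tokenizer("My name is Henry", return_tensors="np")
feed = {
    "input_ids": enc["input_ids"].astype(np.int64),
    "attention_mask": enc["attention_mask"].astype(np.int64),
    # During generation, step t feeds only the latest decoder token together with
    # the past key/values returned by step t-1.
    "decoder_input_ids": np.array([[tokenizer.eos_token_id]], dtype=np.int64),
    "decoder_attention_mask": np.ones((1, 1), dtype=np.int64),
}
# A "-with-past" graph also expects the past tensors themselves, which do not exist
# on the very first step; a common pattern is to export the plain "seq2seq-lm"
# feature for step 0 and the "-with-past" variant for the following steps.
```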
transformers
15,144
closed
Make sure all submodules are properly registered
# What does this PR do? We have seen some mysterious bugs in Python 3.8 linked to the lazy init in Transformers, where some of the submodules are not properly registered as such because they don't appear in the `_import_structure` dict. This PR fixes that and adds a check to make sure newly added modules are always added to the init.
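To illustrate the idea of the added check (a rough sketch with made-up names, not the actual script in the repo): walk the package directory and flag any submodule that never shows up in the lazy-init import structure.

```python
import pkgutil


def find_unregistered_submodules(package_path, import_structure):
    """Return submodules found on disk under `package_path` that are missing from
    `import_structure`, the dict driving the lazy __init__."""
    registered = set(import_structure.keys())
    missing = []
    for mod in pkgutil.iter_modules([package_path]):
        if not mod.name.startswith("_") and mod.name not in registered:
            missing.append(mod.name)
    return missing
```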
01-13-2022 18:28:38
01-13-2022 18:28:38