| column | type | values / lengths |
| --- | --- | --- |
| repo | stringclasses | 1 value |
| number | int64 | 1 to 25.3k |
| state | stringclasses | 2 values |
| title | stringlengths | 1 to 487 |
| body | stringlengths | 0 to 234k |
| created_at | stringlengths | 19 to 19 |
| closed_at | stringlengths | 19 to 19 |
| comments | stringlengths | 0 to 293k |
transformers
13,128
closed
Fix missing `seq_len` in `electra` model when `inputs_embeds` is used.
## Before submitting - [x] This PR Fixes a small bug discussed in #[13122](https://github.com/huggingface/transformers/issues/13122) - [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case). - [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section? - [x] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link to it if that's the case. - [ ] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/master/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/master/docs#writing-source-documentation). - [ ] Did you write any new necessary tests? ## Who can review? Anyone in the community is free to review the PR once the tests have passed. Feel free to tag members/contributors who may be interested in your PR. blenderbot, bart, marian, pegasus, encoderdecoder, t5: @patrickvonplaten, @patil-suraj, @NielsRogge
08-14-2021 22:09:04
08-14-2021 22:09:04
LGTM! Did you verify that it works now?<|||||>Thanks a lot for the PR @sararb !
transformers
13,127
closed
RuntimeError: Error(s) in loading state_dict for BeitForImageClassification: size mismatch for classifier.weight
## Environment info <!-- You can run the command `transformers-cli env` and copy-and-paste its output below. Don't forget to fill out the missing fields in that output! --> Hi, I am trying to run BeitForImageClassification with a custom dataset for a binary classification problem in Google Colab and got the following "RuntimeError: Error(s) in loading state_dict for BeitForImageClassification: size mismatch for classifier.weight and classifier.bias". It seems the last layer doesn't match the binary output; instead it maps to the 1000 classes of the ImageNet-trained checkpoint. Any suggestion on how to fix it? - `transformers` version: 4.10.0 - Platform: Google Colab Models: - nielsr/beit-base-patch16-224 ## To reproduce Steps to reproduce the behavior: Based on https://huggingface.co/nielsr/beit-base-patch16-224. 1. Run the following code ``` feature_extractor = BeitFeatureExtractor.from_pretrained('nielsr/beit-base-patch16-224') model = BeitForImageClassification.from_pretrained('nielsr/beit-base-patch16-224', num_labels=2, label2id=label2id, id2label=id2label) ``` <!-- If you have code snippets, error messages, stack traces please provide them here as well. Important! Use code tags to correctly format your code. See https://help.github.com/en/github/writing-on-github/creating-and-highlighting-code-blocks#syntax-highlighting Do not use screenshots, as they are hard to read and (more importantly) don't allow others to copy-and-paste your code.--> ## Expected behavior <!-- A clear and concise description of what you would expect to happen. --> ``` RuntimeError: Error(s) in loading state_dict for BeitForImageClassification: size mismatch for classifier.weight: copying a param with shape torch.Size([1000, 768]) from checkpoint, the shape in current model is torch.Size([2, 768]). size mismatch for classifier.bias: copying a param with shape torch.Size([1000]) from checkpoint, the shape in current model is torch.Size([2]). ```
08-14-2021 17:00:45
08-14-2021 17:00:45
Hi, Thanks to #12664, it's now possible to load a fine-tuned checkpoint and replace the head which has a different number of classes, by setting `ignore_mismatched_sizes` to `True` when calling the `from_pretrained` method, like so: ``` from transformers import BeitForImageClassification model = BeitForImageClassification.from_pretrained('microsoft/beit-base-patch16-224', num_labels=2, ignore_mismatched_sizes=True) ``` This prints the warning: ``` Some weights of BeitForImageClassification were not initialized from the model checkpoint at microsoft/beit-base-patch16-224 and are newly initialized because the shapes did not match: - classifier.weight: found shape torch.Size([1000, 768]) in the checkpoint and torch.Size([2, 768]) in the model instantiated - classifier.bias: found shape torch.Size([1000]) in the checkpoint and torch.Size([2]) in the model instantiated You should probably TRAIN this model on a down-stream task to be able to use it for predictions and inference. ``` From that PR, I see that only in `modeling_flax_utils.py` users get an error message that says "use ignore_mismatched_sizes if you really want to load this checkpoint inside this model." in case not all keys match. Not sure why this suggestion is not printed for PyTorch models. cc @sgugger <|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread. Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored.<|||||>size mismatch for model.classifier.weight: copying a param with shape torch.Size([555, 2208]) from checkpoint, the shape in current model is torch.Size([563, 2208]). size mismatch for model.classifier.bias: copying a param with shape torch.Size([555]) from checkpoint, the shape in current model is torch.Size([563]).
transformers
13,126
closed
torch.jit.trace quantized bigbird leads to 0INTERNAL ASSERT FAILED runtime error
Attempt to torch jit trace and save a quantized bigbird model leads to 0INTERNAL ASSERT FAILED runtime error. I also ran the same code for BERT and RoBERTa (see `example.ipynb`) but did not encounter the same issue and was able to trace the quantized models for both respectively. ## To Reproduce Steps to reproduce the behavior: 1. Git clone this [repo](https://github.com/matthiaslmz/quantized_bigbird_issue) 2. Run `example.ipynb` ### Stacktrace: ``` --------------------------------------------------------------------------- RuntimeError Traceback (most recent call last) <ipython-input-4-1dfdd2340788> in <module> 4 ) 5 ----> 6 traced_model = torch.jit.trace(model, (input_ids, attention_mask)) 7 torch.jit.save(traced_model, "traced_bigbird.pt") /opt/conda/lib/python3.7/site-packages/torch/jit/_trace.py in trace(func, example_inputs, optimize, check_trace, check_inputs, check_tolerance, strict, _force_outplace, _module_class, _compilation_unit) 742 strict, 743 _force_outplace, --> 744 _module_class, 745 ) 746 /opt/conda/lib/python3.7/site-packages/torch/jit/_trace.py in trace_module(mod, inputs, optimize, check_trace, check_inputs, check_tolerance, strict, _force_outplace, _module_class, _compilation_unit) 957 strict, 958 _force_outplace, --> 959 argument_names, 960 ) 961 check_trace_method = module._c._get_method(method_name) RuntimeError: 0INTERNAL ASSERT FAILED at "/pytorch/torch/csrc/jit/ir/alias_analysis.cpp":532, please report a bug to PyTorch. We don't have an op for aten::constant_pad_nd but it isn't a special case. Argument types: Tensor, int[], bool, ``` ## Expected behavior Quantized bigbird should be able to be saved. ## Environment - PyTorch Version: 1.9.0+cu111 - Transformers Version: 4.9.1 - OS: Debian GNU/Linux 10 (buster) - Python version: 3.7.9 - CUDA/cuDNN version: 11.0.194 - GPU models and configuration: NVIDIA Tesla V100 16GB ### Who can help @patrickvonplaten
08-14-2021 02:18:55
08-14-2021 02:18:55
Uff trying to `torch.jit(...)` our most complex model BigBird won't be easy I think :-/ Sadly I won't find time to dig deeper into this as it will require a lot of work :-/ Could you maybe try to go with `Longformer` for now?<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread. Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored.<|||||>@patrickvonplaten Sadly this error is also raised for me when actually using Longformer.<|||||>In Longformer, looks like this bug comes from this single line: https://github.com/huggingface/transformers/blob/ee6674d45030161d8d60533b7d469a727d492113/src/transformers/models/longformer/modeling_longformer.py#L1573 ``` attention_mask = nn.functional.pad( attention_mask, (0, padding_len), value=False # <-- should be 0 ) # no attention on the padding tokens ``` `nn.functional.pad` expects a number, not a boolean, as `value`. In BigBird, the same bug is here: https://github.com/huggingface/transformers/blob/ee6674d45030161d8d60533b7d469a727d492113/src/transformers/models/big_bird/modeling_big_bird.py#L2252 <|||||>Great catch @dadamson - if you want feel free to open a PR for it :-)
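A minimal sketch of the padding call discussed above, using a toy attention mask (the real call lives in `modeling_longformer.py` / `modeling_big_bird.py`); it only illustrates that passing an integer `value` keeps the mask numerically identical to the `value=False` version while avoiding the boolean that trips up tracing:

```python
import torch
import torch.nn as nn

# Toy stand-in for the attention mask padded inside Longformer/BigBird (shapes are illustrative).
attention_mask = torch.ones(1, 10, dtype=torch.long)
padding_len = 6

# Proposed fix from the thread: pass a number instead of a boolean to nn.functional.pad.
padded = nn.functional.pad(attention_mask, (0, padding_len), value=0)
print(padded.shape)  # torch.Size([1, 16]) -- padded positions are 0, i.e. not attended to
```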
transformers
13,125
closed
type object 'AutoModelForSequenceClassification' has no attribute 'from_config'
I'm using Transformers version 4.4.2 and have been getting a "type object 'AutoModelForSequenceClassification' has no attribute 'from_config'" error. Here is my code snippet. I went through the documentation and the syntax seems to be correct. Your help is very much appreciated. from transformers import AutoConfig, AutoTokenizer, AutoModel, AutoModelForSequenceClassification, Trainer, TrainingArguments tokenizer = AutoTokenizer.from_pretrained("distilbert-base-uncased") config = AutoConfig.from_pretrained('distilbert-base-uncased', num_labels=2)
08-14-2021 01:31:32
08-14-2021 01:31:32
Hello! I just tried the following code snippet in both `v4.9.2` and `v4.4.2` and both seem to work: ```py from transformers import AutoConfig, AutoTokenizer, AutoModel, AutoModelForSequenceClassification, Trainer, TrainingArguments tokenizer = AutoTokenizer.from_pretrained("distilbert-base-uncased") config = AutoConfig.from_pretrained('distilbert-base-uncased', num_labels=2) AutoModelForSequenceClassification.from_config(config) ``` <|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread. Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored.
transformers
13,124
closed
You must login to the Hugging Face hub on this computer by typing `transformers-cli login` and entering your credentials to use `use_auth_token=True`. Alternatively, you can pass your own token as the `use_auth_token` argument in the translation notebook.
I'm trying to run the following but gives me this error. I made an account and login but am not sure about `transformers-cli login`. any help would be appreciated. ![image](https://user-images.githubusercontent.com/32965166/129429037-5909f09d-4bfd-4dd7-870e-eb07e32b34bf.png) ## Environment info <!-- You can run the command `transformers-cli env` and copy-and-paste its output below. Don't forget to fill out the missing fields in that output! --> - `transformers` version: - Platform: - Python version: - PyTorch version (GPU?): - Tensorflow version (GPU?): - Using GPU in script?: - Using distributed or parallel set-up in script?: ### Who can help <!-- Your issue will be replied to more quickly if you can figure out the right person to tag with @ If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**. Please tag fewer than 3 people. Models: - albert, bert, xlm: @LysandreJik - blenderbot, bart, marian, pegasus, encoderdecoder, t5: @patrickvonplaten, @patil-suraj - longformer, reformer, transfoxl, xlnet: @patrickvonplaten - fsmt: @stas00 - funnel: @sgugger - gpt2: @patrickvonplaten, @LysandreJik - rag: @patrickvonplaten, @lhoestq - tensorflow: @Rocketknight1 Library: - benchmarks: @patrickvonplaten - deepspeed: @stas00 - ray/raytune: @richardliaw, @amogkam - text generation: @patrickvonplaten - tokenizers: @LysandreJik - trainer: @sgugger - pipelines: @LysandreJik Documentation: @sgugger Model hub: - for issues with a model report at https://discuss.huggingface.co/ and tag the model's creator. HF projects: - datasets: [different repo](https://github.com/huggingface/datasets) - rust tokenizers: [different repo](https://github.com/huggingface/tokenizers) Examples: - maintained examples (not research project or legacy): @sgugger, @patil-suraj - research_projects/bert-loses-patience: @JetRunner - research_projects/distillation: @VictorSanh --> ## Information Model I am using (Bert, XLNet ...): The problem arises when using: * [ ] the official example scripts: (give details below) * [ ] my own modified scripts: (give details below) The tasks I am working on is: * [ ] an official GLUE/SQUaD task: (give the name) * [ ] my own task or dataset: (give details below) ## To reproduce Steps to reproduce the behavior: 1. 2. 3. <!-- If you have code snippets, error messages, stack traces please provide them here as well. Important! Use code tags to correctly format your code. See https://help.github.com/en/github/writing-on-github/creating-and-highlighting-code-blocks#syntax-highlighting Do not use screenshots, as they are hard to read and (more importantly) don't allow others to copy-and-paste your code.--> ## Expected behavior <!-- A clear and concise description of what you would expect to happen. -->
08-14-2021 00:46:39
08-14-2021 00:46:39
In order to be able to push the model to the hub after training, make sure to follow these steps: Add the following arguments to `TrainingArguments`: ``` push_to_hub=True, push_to_hub_model_id="name of your model" # optional, will default to the name of your output directory push_to_hub_organization="name of the organization to which to upload the model" # optional push_to_hub_token="your authentication token" ``` => your authentication token can be obtained by typing `!huggingface-cli login` in Colab/in a terminal to get your authentication token stored in local cache. Actually, you don't need to pass the `push_to_hub_token` argument, as it will default to the token in the cache folder as stated in the [docs](https://huggingface.co/transformers/main_classes/trainer.html#trainingarguments). Also, make sure git LFS is installed, as this is required to upload your model to the hub. In Colab, you can do this as follows: ``` !sudo apt-get install git-lfs !git config --global user.email "your email address" # Tip: using the same email as for your huggingface.co account will link your commits to your profile !git config --global user.name "your username" ```<|||||>Thank you!
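A minimal sketch pulling the arguments above together, assuming a standard `Trainer` fine-tuning setup (model, dataset, and repository names are placeholders):

```python
from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir="my-finetuned-model",             # also the default Hub repo name
    push_to_hub=True,
    push_to_hub_model_id="my-finetuned-model",   # optional
    push_to_hub_organization="my-organization",  # optional
    # push_to_hub_token is optional: it defaults to the token cached by `huggingface-cli login`
)

# trainer = Trainer(model=model, args=training_args, train_dataset=train_dataset, ...)
# trainer.train()
# trainer.push_to_hub()
```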
transformers
13,123
closed
Value error while running run_glue.py example with gpt2
## Environment info <!-- You can run the command `transformers-cli env` and copy-and-paste its output below. Don't forget to fill out the missing fields in that output! --> - `transformers` version: 4.7.0 - Platform: Linux - Python version: 3.8.1 - PyTorch version (GPU?): - Tensorflow version (GPU?): 2.3 , GPU : yes - Using GPU in script?: yes - Using distributed or parallel set-up in script?:no Tagging people: @patrickvonplaten, @LysandreJik, @sgugger, @patil-suraj ## Information Model I am using (Bert, XLNet ...): GPT2 The problem arises when using: * [ ] the official example scripts: tensorflow/run_glue.py The tasks I am working on is: * [ ] an official GLUE/SQUaD task: (give the name) GLUE (MNLI/SST2) ## Error ValueError: Dimension size must be evenly divisible by 192 but is 8 for '{{node sparse_categorical_crossentropy_2/Reshape_2}} = Reshape[T=DT_FLOAT, Tshape=DT_INT32](sparse_categorical_crossentropy_2/SparseSoftmaxCrossEntropyWithLogits/SparseSoftmaxCrossEntropyWithLogits, sparse_categorical_crossentropy_2/strided_slice_1)' with input shapes: [8], [4] and with input tensors computed as partial shapes: input[1] = [2,8,12,?]. ## To reproduce python run_glue.py --model_name_or_path gpt2 --task_name mnli --do_train --do_eval --do_predict --output_dir ./output ## Expected behavior Successfully complete training
08-13-2021 23:01:24
08-13-2021 23:01:24
Hello! Is this the exact command you're using? I tried to reproduce but I'm getting an error with the pad token which is not defined in the GPT-2 tokenizer. Did you tweak your GPT-2 tokenizer in order to add a padding token?<|||||>That's the exact command I am running. The only change I did (see below) was to comment out clipnorm, to fix the error "ValueError: Gradient clipping in the optimizer (by setting clipnorm or clipvalue) is currently unsupported when using a distribution strategy." - clipnorm=training_args.max_grad_norm, + #clipnorm=training_args.max_grad_norm,<|||||>@LysandreJik any luck reproducing the error?<|||||>When I set the dataset_mode to constant_batch, I see the following error. Any idea why the logits output dimension is (batch_size, sequence_length, num_labels) and not (batch_size, num_labels) ? ValueError: Shape mismatch: The shape of labels (received (8, 1)) should equal the shape of logits except for the last dimension (received (8, 128, 3)).<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread. Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored.
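For reference, the usual way to give GPT-2 a padding token before running a classification script is to reuse its EOS token — a sketch of that tweak, not something `run_glue.py` does by itself:

```python
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("gpt2")
tokenizer.pad_token = tokenizer.eos_token  # GPT-2 ships without a pad token

batch = tokenizer(
    ["one premise", "a slightly longer second premise"],
    padding="max_length",
    max_length=16,
    truncation=True,
)
print(batch["input_ids"])  # the shorter sequence is padded with the EOS id (50256)

# If a *ForSequenceClassification head is used, the model config needs the same id:
# model.config.pad_token_id = tokenizer.pad_token_id
```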
transformers
13,122
closed
Electra raises UnboundLocalError: local variable 'seq_length' referenced before assignment when inputs are pre-computed embeddings
## Environment info <!-- You can run the command `transformers-cli env` and copy-and-paste its output below. Don't forget to fill out the missing fields in that output! --> -`transformers` version: 4.9.2 - Platform: Linux-4.15.0-15-generic-x86_64-with-glibc2.10 - Python version: 3.8.8 - PyTorch version (GPU?): 1.8.1+cu102 (True) - Tensorflow version (GPU?): 2.6.0 (True) - Flax version (CPU?/GPU?/TPU?): not installed (NA) - Jax version: not installed - JaxLib version: not installed - Using GPU in script?: yes - Using distributed or parallel set-up in script?: no ### Who can help Models: - blenderbot, bart, marian, pegasus, encoderdecoder, t5: @patrickvonplaten, @patil-suraj ## Information Model I am using (ELECTRA): The problem arises when using: * [ ] my own modified scripts: (give details below) I am pre-training the ELECTRA model for session-based recommendation task and directly feeding the inputs embeddings instead of their ids. ## To reproduce Steps to reproduce the behavior: 1. Load ELECTRA model from config : ``` transformers.MODEL_MAPPING[transformers.ElectraConfig(hidden_size=d_model, embedding_size=d_model, num_hidden_layers=n_layer, num_attention_heads=n_head,...)] ``` 2. Apply the model to pre-computed embeddings : ``` model(inputs_embeds=inputs) ``` 3. The error raised is : ``` def forward( self, input_ids=None, attention_mask=None, token_type_ids=None, position_ids=None, head_mask=None, inputs_embeds=None, output_attentions=None, output_hidden_states=None, return_dict=None, ): output_attentions = output_attentions if output_attentions is not None else self.config.output_attentions output_hidden_states = ( output_hidden_states if output_hidden_states is not None else self.config.output_hidden_states ) return_dict = return_dict if return_dict is not None else self.config.use_return_dict if input_ids is not None and inputs_embeds is not None: raise ValueError("You cannot specify both input_ids and inputs_embeds at the same time") elif input_ids is not None: input_shape = input_ids.size() batch_size, seq_length = input_shape elif inputs_embeds is not None: input_shape = inputs_embeds.size()[:-1] else: raise ValueError("You have to specify either input_ids or inputs_embeds") device = input_ids.device if input_ids is not None else inputs_embeds.device if attention_mask is None: attention_mask = torch.ones(input_shape, device=device) if token_type_ids is None: if hasattr(self.embeddings, "token_type_ids"): > buffered_token_type_ids = self.embeddings.token_type_ids[:, :seq_length] E UnboundLocalError: local variable 'seq_length' referenced before assignment /opt/conda/lib/python3.8/site-packages/transformers/models/electra/modeling_electra.py:869: UnboundLocalError ``` ## Expected behavior - The seq_len value should also be computed when inputs are pre-computed embeddings instead of raw ids. <!-- A clear and concise description of what you would expect to happen. -->
08-13-2021 22:28:39
08-13-2021 22:28:39
That's indeed a small bug. It can be fixed as follows: ```diff if input_ids is not None and inputs_embeds is not None: raise ValueError("You cannot specify both input_ids and inputs_embeds at the same time") elif input_ids is not None: input_shape = input_ids.size() - batch_size, seq_length = input_shape elif inputs_embeds is not None: input_shape = inputs_embeds.size()[:-1] else: raise ValueError("You have to specify either input_ids or inputs_embeds") + batch_size, seq_length = input_shape device = input_ids.device if input_ids is not None else inputs_embeds.device ``` Btw, I love Github's abilities to showcase this haha. Mind opening a PR to fix this?<|||||>Sure, I opened a PR #13128. Thank you for your reply ! <|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread. Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored.
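A small repro sketch for the bug above, using a tiny randomly-initialized `ElectraModel` (all sizes are illustrative); before the fix in #13128 this raised the `UnboundLocalError`, and with the fix it runs:

```python
import torch
from transformers import ElectraConfig, ElectraModel

config = ElectraConfig(
    hidden_size=64, embedding_size=64, num_hidden_layers=2,
    num_attention_heads=2, intermediate_size=128,
)
model = ElectraModel(config)

inputs_embeds = torch.randn(4, 20, 64)        # (batch_size, seq_length, embedding_size)
outputs = model(inputs_embeds=inputs_embeds)  # failed with UnboundLocalError before the fix
print(outputs.last_hidden_state.shape)        # torch.Size([4, 20, 64])
```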
transformers
13,121
closed
AutoModel KeyError: 'layoutlmv2'
## Environment info - `transformers` version: 4.9.2 - Platform: Linux-5.4.104+-x86_64-with-Ubuntu-18.04-bionic - Python version: 3.7.11 - PyTorch version (GPU?): 1.9.0+cu102 (False) - Tensorflow version (GPU?): 2.5.0 (False) - Flax version (CPU?/GPU?/TPU?): not installed (NA) - Jax version: not installed - JaxLib version: not installed - Using GPU in script?: no - Using distributed or parallel set-up in script?: no ## Information I am trying to run layoutlmv2. When I run the code from documentation: ``` from transformers import AutoTokenizer, AutoModel tokenizer = AutoTokenizer.from_pretrained("microsoft/layoutlmv2-base-uncased") ``` I get the below error: > KeyError Traceback (most recent call last) > <ipython-input-7-457d9de7bf01> in <module>() > 1 from transformers import AutoTokenizer, AutoModel > 2 > ----> 3 tokenizer = AutoTokenizer.from_pretrained("microsoft/layoutlmv2-base-uncased") > 4 > 5 model = AutoModel.from_pretrained("microsoft/layoutlmv2-base-uncased") > > /usr/local/lib/python3.7/dist-packages/transformers/models/auto/tokenization_auto.py in from_pretrained(cls, pretrained_model_name_or_path, *inputs, **kwargs) > 532 if config_tokenizer_class is None: > 533 if not isinstance(config, PretrainedConfig): > --> 534 config = AutoConfig.from_pretrained(pretrained_model_name_or_path, **kwargs) > 535 config_tokenizer_class = config.tokenizer_class > 536 > > /usr/local/lib/python3.7/dist-packages/transformers/models/auto/configuration_auto.py in from_pretrained(cls, pretrained_model_name_or_path, **kwargs) > 450 config_dict, _ = PretrainedConfig.get_config_dict(pretrained_model_name_or_path, **kwargs) > 451 if "model_type" in config_dict: > --> 452 config_class = CONFIG_MAPPING[config_dict["model_type"]] > 453 return config_class.from_dict(config_dict, **kwargs) > 454 else: > > KeyError: 'layoutlmv2'
08-13-2021 18:04:33
08-13-2021 18:04:33
@NielsRogge <|||||>Hello @nurgel! LayoutLM v2 is not merged yet so it isn't available in the latest version. You can follow the development here https://github.com/huggingface/transformers/pull/12604
transformers
13,120
closed
Deberta_v2 tf
# What does this PR do? Deberta-v2 TF <!-- Congratulations! You've made it this far! You're not quite done yet though. Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution. Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change. Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost. --> <!-- Remove if not applicable --> Fixes # (issue) ## Before submitting - [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case). - [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section? - [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link to it if that's the case. - [ ] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/master/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/master/docs#writing-source-documentation). - [x] Did you write any new necessary tests? ## Who can review? Anyone in the community is free to review the PR once the tests have passed. Feel free to tag members/contributors who may be interested in your PR. @LysandreJik @patrickvonplaten <!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @ If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**. Please tag fewer than 3 people. Models: - albert, bert, xlm: @LysandreJik - blenderbot, bart, marian, pegasus, encoderdecoder, t5: @patrickvonplaten, @patil-suraj - longformer, reformer, transfoxl, xlnet: @patrickvonplaten - fsmt: @stas00 - funnel: @sgugger - gpt2: @patrickvonplaten, @LysandreJik - rag: @patrickvonplaten, @lhoestq - tensorflow: @LysandreJik Library: - benchmarks: @patrickvonplaten - deepspeed: @stas00 - ray/raytune: @richardliaw, @amogkam - text generation: @patrickvonplaten - tokenizers: @n1t0, @LysandreJik - trainer: @sgugger - pipelines: @LysandreJik Documentation: @sgugger HF projects: - datasets: [different repo](https://github.com/huggingface/datasets) - rust tokenizers: [different repo](https://github.com/huggingface/tokenizers) Examples: - maintained examples (not research project or legacy): @sgugger, @patil-suraj - research_projects/bert-loses-patience: @JetRunner - research_projects/distillation: @VictorSanh -->
08-13-2021 16:16:37
08-13-2021 16:16:37
@Rocketknight1 https://github.com/huggingface/transformers/pull/12972#discussion_r684418611 gather function fails while running `run_glue.py` from examples ![Screenshot 2021-08-13 at 9 55 59 PM](https://user-images.githubusercontent.com/17096858/129390299-22e28c85-3b17-4fbe-9408-03d2ea163fd1.png) If i replace the gather function with experimental NumPy take_along_axis works - https://gist.github.com/kamalkraj/73ad5fa2b84de7e201e05464e11a4fec <|||||>Hi @kamalkraj, do you know what shape the inputs are to the gather/take_along_axis? I'm going to try to construct a small test case that fails for my gather function but not for take_along_axis. If you can find a simple test case that fails, feel free to send that too so I can fix the function!<|||||>Hi @Rocketknight1 I have tried few tests for `torch.gather ` when you initially shared the function. notebook link- https://colab.research.google.com/drive/1ujI6zKTuuryAO2Nfw9U1ZftyZyC4VUVS?usp=sharing<|||||>In all of those cases, it looks like the TF `torch_gather` function gets the same results as the actual `torch.gather`, right? Is there a difference?<|||||>No. TF `torch_gather` function gets the same output as `torch.gather`. Actually, in runtime, this branch never gets called https://github.com/huggingface/transformers/blob/e2f07c01e93611fbd96f85204c9a2129bc81862b/src/transformers/models/deberta_v2/modeling_deberta_v2.py#L766-L771 because both query_layer and key_layer are of the same size https://github.com/huggingface/transformers/blob/e2f07c01e93611fbd96f85204c9a2129bc81862b/src/transformers/models/deberta_v2/modeling_deberta_v2.py#L571-L572 <|||||>Hi @BigBird01, I was going through `deberta-v2` implementation inside huggingface and as per my understanding, for `deberta-v2` the below branch will be never executed. https://github.com/huggingface/transformers/blob/e2f07c01e93611fbd96f85204c9a2129bc81862b/src/transformers/models/deberta_v2/modeling_deberta_v2.py#L766 Because query_layer and key_layer shapes are -> `[batch_size * num_attention_heads, sequence_length, attention_head_size] ` the above condition may be needed for `deberta`. But Huggingface has separate implementation for `deberta` and `deberta-v2` if my assumption is correct we can remove those never executed control flow branches from the `deberta-v2` code. <|||||>Yes. We can remove it to make the code clear. Thanks! Pengcheng From: Kamal Raj ***@***.***> Sent: Monday, August 16, 2021 1:03 PM To: huggingface/transformers ***@***.***> Cc: Pengcheng He ***@***.***>; Mention ***@***.***> Subject: Re: [huggingface/transformers] Deberta_v2 tf (#13120) Hi @BigBird01<https://nam06.safelinks.protection.outlook.com/?url=https%3A%2F%2Fgithub.com%2FBigBird01&data=04%7C01%7CPengcheng.H%40microsoft.com%7C5d5abaf3549d4964849008d960f0deb8%7C72f988bf86f141af91ab2d7cd011db47%7C1%7C0%7C637647409915898630%7CUnknown%7CTWFpbGZsb3d8eyJWIjoiMC4wLjAwMDAiLCJQIjoiV2luMzIiLCJBTiI6Ik1haWwiLCJXVCI6Mn0%3D%7C1000&sdata=lc6grZfUDwbI8XqIUK4JLTj3W%2F2evr6AkgrG2N27TeY%3D&reserved=0>, I was going through deberta-v2 implementation inside huggingface and as per my understanding, for deberta-v2 the below branch will be never executed. 
https://github.com/huggingface/transformers/blob/e2f07c01e93611fbd96f85204c9a2129bc81862b/src/transformers/models/deberta_v2/modeling_deberta_v2.py#L766 Because query_layer and key_layer shapes are -> [batch_size * num_attention_heads, sequence_length, attention_head_size] the above condition may be needed for deberta. Huggingface has separate implementation for deberta and deberta-v2 if my assumption is correct we can remove those never executed control flow branches from the deberta-v2 code.
<|||||>> @Rocketknight1 > [#12972 (comment)](https://github.com/huggingface/transformers/pull/12972#discussion_r684418611) > gather function fails while running `run_glue.py` from examples > ![Screenshot 2021-08-13 at 9 55 59 PM](https://user-images.githubusercontent.com/17096858/129390299-22e28c85-3b17-4fbe-9408-03d2ea163fd1.png) > > If i replace the gather function with experimental NumPy take_along_axis works - https://gist.github.com/kamalkraj/73ad5fa2b84de7e201e05464e11a4fec Hi @kamalkraj, can you share the exact glue task / command you used? I still can't reproduce the bug - I tried this: ``` python run_glue.py --model_name_or_path kamalkraj/deberta-v2-xlarge --task_name mnli --do_train --do_eval --do_predict --output_dir output ``` This seemed to work fine with `torch_gather`.<|||||>@Rocketknight1 the issue is solved with this commit https://github.com/huggingface/transformers/pull/13120/commits/90c122dedf95e6f4d1ff4395b08783f851e6eb02 . `torch_gather` function under those `if` condition was creating the issue. I removed those conditions as it was unnecessary . You can see the discussion https://github.com/huggingface/transformers/pull/13120#issuecomment-899782332 I also opened another pull request to remove from PyTorch model also. https://github.com/huggingface/transformers/pull/13145<|||||>Hi @Rocketknight1 , https://github.com/huggingface/transformers/pull/13145 is merged to master. Now the TF implementation is the same as the torch Implementation. and runs without any issues<|||||>Hi @patrickvonplaten , thanks for the review. committed changes. <|||||>Hi @LysandreJik, committed changes.<|||||>Is this code compatible with model.fit?
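A sketch of the workaround mentioned in the thread — replacing the last-axis gather with `tf.experimental.numpy.take_along_axis` (assumes TF >= 2.4; the tensors are illustrative):

```python
import numpy as np
import tensorflow as tf

x = tf.constant(np.arange(12, dtype=np.float32).reshape(3, 4))
index = tf.constant([[0, 1], [2, 3], [1, 0]])

# Equivalent of torch.gather(x, dim=-1, index=index) for this last-axis case.
gathered = tf.experimental.numpy.take_along_axis(x, index, axis=-1)
print(gathered.numpy())  # [[0. 1.] [6. 7.] [9. 8.]]
```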
transformers
13,119
closed
Optimizes ByT5 tokenizer
# What does this PR do? - Removes unused logic (actual special tokens are handled by super class <!-- Congratulations! You've made it this far! You're not quite done yet though. Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution. Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change. Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost. --> <!-- Remove if not applicable --> Fixes #12884 ## Before submitting - [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case). - [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section? - [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link to it if that's the case. - [ ] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/master/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/master/docs#writing-source-documentation). - [ ] Did you write any new necessary tests? ## Who can review? Anyone in the community is free to review the PR once the tests have passed. Feel free to tag members/contributors who may be interested in your PR. <!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @ If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**. Please tag fewer than 3 people. Models: - albert, bert, xlm: @LysandreJik - blenderbot, bart, marian, pegasus, encoderdecoder, t5: @patrickvonplaten, @patil-suraj - longformer, reformer, transfoxl, xlnet: @patrickvonplaten - fsmt: @stas00 - funnel: @sgugger - gpt2: @patrickvonplaten, @LysandreJik - rag: @patrickvonplaten, @lhoestq - tensorflow: @LysandreJik Library: - benchmarks: @patrickvonplaten - deepspeed: @stas00 - ray/raytune: @richardliaw, @amogkam - text generation: @patrickvonplaten - tokenizers: @n1t0, @LysandreJik - trainer: @sgugger - pipelines: @LysandreJik Documentation: @sgugger HF projects: - datasets: [different repo](https://github.com/huggingface/datasets) - rust tokenizers: [different repo](https://github.com/huggingface/tokenizers) Examples: - maintained examples (not research project or legacy): @sgugger, @patil-suraj - research_projects/bert-loses-patience: @JetRunner - research_projects/distillation: @VictorSanh -->
08-13-2021 15:40:00
08-13-2021 15:40:00
There's also a way to speed it up even for 100+ special tokens (by using a single cut pass instead of 100 with byt5) but as mentionned in the issue it's more involved and side effects harder to apprehend.<|||||>With this updated version, I am now getting issues with encoding characters which require multiple bytes, e.g. "€" gets tokenized as [8367], where it should be [229, 133, 175]. <|||||>cc @Narsil - I had a similar problem as @gggg8000 previously. Are you sure the optimized ByT5 tokenizer correctly takes single characters that are made of multiple unicode bytes into account?<|||||>Oups, I imagined those were covered in tests so I didn't: The fix is here: https://github.com/huggingface/transformers/pull/13447
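A small check for the multi-byte regression mentioned above, assuming the usual ByT5 convention that byte `b` maps to id `b + 3` (ids 0–2 are reserved for pad/eos/unk):

```python
from transformers import ByT5Tokenizer

tokenizer = ByT5Tokenizer.from_pretrained("google/byt5-small")

ids = tokenizer("€").input_ids
print(ids)                                   # expected: [229, 133, 175, 1] (three UTF-8 bytes + </s>)
print([b + 3 for b in "€".encode("utf-8")])  # [229, 133, 175]
```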
transformers
13,118
closed
Fix frameworks table so it's alphabetical
# What does this PR do? This is a minor PR to make the frameworks table alphabetical <!-- Congratulations! You've made it this far! You're not quite done yet though. Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution. Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change. Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost. --> <!-- Remove if not applicable --> Fixes # (issue) ## Before submitting - [x] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case). - [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section? - [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link to it if that's the case. - [ ] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/master/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/master/docs#writing-source-documentation). - [ ] Did you write any new necessary tests? ## Who can review? Anyone in the community is free to review the PR once the tests have passed. Feel free to tag members/contributors who may be interested in your PR. <!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @ If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**. Please tag fewer than 3 people. Models: - albert, bert, xlm: @LysandreJik - blenderbot, bart, marian, pegasus, encoderdecoder, t5: @patrickvonplaten, @patil-suraj - longformer, reformer, transfoxl, xlnet: @patrickvonplaten - fsmt: @stas00 - funnel: @sgugger - gpt2: @patrickvonplaten, @LysandreJik - rag: @patrickvonplaten, @lhoestq - tensorflow: @LysandreJik Library: - benchmarks: @patrickvonplaten - deepspeed: @stas00 - ray/raytune: @richardliaw, @amogkam - text generation: @patrickvonplaten - tokenizers: @n1t0, @LysandreJik - trainer: @sgugger - pipelines: @LysandreJik Documentation: @sgugger HF projects: - datasets: [different repo](https://github.com/huggingface/datasets) - rust tokenizers: [different repo](https://github.com/huggingface/tokenizers) Examples: - maintained examples (not research project or legacy): @sgugger, @patil-suraj - research_projects/bert-loses-patience: @JetRunner - research_projects/distillation: @VictorSanh -->
08-13-2021 15:35:36
08-13-2021 15:35:36
Thanks for the PR! Could you run `make fix-copies` to fix the code quality issue?<|||||>Thanks, I didn't realize there was a script to automatically generate the table. I changed the `sort` call so there is no difference between uppercase and lowercase, hence removing lowercase models from the end of the list. This creates few other diffs, so please let me know if this is ok.
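The case-insensitive ordering described above boils down to sorting with a lowercased key — a toy illustration (the actual table is produced by a utility script in the repo, and `model_names` here is just an example list):

```python
model_names = ["BERT", "mBART", "XLNet", "mT5", "DeBERTa"]

print(sorted(model_names))                 # ['BERT', 'DeBERTa', 'XLNet', 'mBART', 'mT5'] -- lowercase names pushed to the end
print(sorted(model_names, key=str.lower))  # ['BERT', 'DeBERTa', 'mBART', 'mT5', 'XLNet']
```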
transformers
13,117
closed
Can we directly replace gpt2LMHeadModel with BertLMHeadModel to see bert's performance? #7
I have code for GPT2LMHeadModel which runs well, and I want to test my code on BertLMHeadModel. But when I directly replace GPT2LMHeadModel with BertLMHeadModel and replace GPT2Tokenizer with BertTokenizer, the ppl remains at 1 (BertLMHeadModel predicts exactly the same as the labels). So can anyone help me: is there any difference in the input format between GPT2LMHeadModel and BertLMHeadModel? Thanks so much!
08-13-2021 15:03:56
08-13-2021 15:03:56
Hello, thanks for opening an issue! We try to keep the github issues for bugs/feature requests. Could you ask your question on the [forum](https://discuss.huggingface.co) instead? Thanks!<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread. Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored.
transformers
13,116
open
Problem about using mBART50 for Russian to Chinese translation
## Environment info <!-- You can run the command `transformers-cli env` and copy-and-paste its output below. Don't forget to fill out the missing fields in that output! --> - `transformers` version: 4.9.1 - Platform: ubuntu 18.04 - Python version: 3.6.9. - PyTorch version (GPU?): 1.8.0 - Tensorflow version (GPU?): 2.3.0 - Using GPU in script?: yes - Using distributed or parallel set-up in script?: no ### Who can help mbart-large-50-many-to-many-mmt:@LysandreJik @patrickvonplaten ## Information Model I am using: mbart-large-50-many-to-many-mmt The problem arises when using: * my own modified scripts: (give details below) We originally wanted to do a Russian-Chinese translation task, but our translation results showed a lot of English. We used a script to test. ## To reproduce Steps to reproduce the behavior: 1.The code is as follow: ```python from transformers import MBartForConditionalGeneration, MBart50TokenizerFast text_list = ['Это позволит облегчить транспортировку грузов для Китая и Германии.', 'Россия останется одним из лидеров, возможности для наращивания экспорта есть.', 'Это позволит оптимизировать торговые отношения.'] src_lang = 'ru_RU' tgt_lang = 'zh_CN' model = MBartForConditionalGeneration.from_pretrained("facebook/mbart-large-50-many-to-many-mmt") tokenizer = MBart50TokenizerFast.from_pretrained("facebook/mbart-large-50-many-to-many-mmt") for text in text_list: tokenizer.src_lang = src_lang encoded_hi = tokenizer(text, return_tensors="pt") generated_tokens = model.generate(**encoded_hi, forced_bos_token_id=tokenizer.lang_code_to_id[tgt_lang]) translated = tokenizer.batch_decode(generated_tokens, skip_special_tokens=True) print(translated) ``` when the src_lang was ‘ru_RU’ and the tgt_lang was ‘zh_CN’, the results were: ``` ['This will facilitate the transport of goods for China and Germany.'] ['Russia will remain one of the leaders, there are opportunities to increase export.'] ['This will allow to optimize trade relations.'] ``` <!-- If you have code snippets, error messages, stack traces please provide them here as well. Important! Use code tags to correctly format your code. See https://help.github.com/en/github/writing-on-github/creating-and-highlighting-code-blocks#syntax-highlighting Do not use screenshots, as they are hard to read and (more importantly) don't allow others to copy-and-paste your code.--> ## Expected behavior <!-- A clear and concise description of what you would expect to happen. --> We wanted to obtain a set of Chinese translations. Here are the Chinese translations for reference. ``` ['这将使中国和德国更容易运输货物。'] ['俄罗斯仍将是一个领导者,有机会增加出口。'] ['这将有助于改善贸易关系。'] ```
08-13-2021 13:58:34
08-13-2021 13:58:34
Same problem, please have a look~ @patil-suraj<|||||>Yes I've seen similar issues with mBART50 returning random sentences as output. Related issues are #12104 and #12958 <|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread. Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored.<|||||>I don't think this was fixed<|||||>@patil-suraj - pinging again here. Would be great to take some action here soon<|||||>Sorry about being super slow here. Going to take look at it this week. First step is to do the same generation with the original model [here](https://github.com/pytorch/fairseq/tree/main/examples/multilingual), the setup is very complicated. Will do it and post the instructions here as well. If the generations match then the issue is with the model itself. <|||||>is there a translation from English to Chinese ? Or From Chinese to English ?
transformers
13,114
closed
Migrating conversational pipeline tests to new testing format
# What does this PR do? Moving the cnversational pipeline tests to new format. <!-- Congratulations! You've made it this far! You're not quite done yet though. Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution. Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change. Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost. --> <!-- Remove if not applicable --> Fixes # (issue) ## Before submitting - [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case). - [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section? - [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link to it if that's the case. - [ ] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/master/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/master/docs#writing-source-documentation). - [ ] Did you write any new necessary tests? ## Who can review? Anyone in the community is free to review the PR once the tests have passed. Feel free to tag members/contributors who may be interested in your PR. <!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @ If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**. Please tag fewer than 3 people. Models: - albert, bert, xlm: @LysandreJik - blenderbot, bart, marian, pegasus, encoderdecoder, t5: @patrickvonplaten, @patil-suraj - longformer, reformer, transfoxl, xlnet: @patrickvonplaten - fsmt: @stas00 - funnel: @sgugger - gpt2: @patrickvonplaten, @LysandreJik - rag: @patrickvonplaten, @lhoestq - tensorflow: @LysandreJik Library: - benchmarks: @patrickvonplaten - deepspeed: @stas00 - ray/raytune: @richardliaw, @amogkam - text generation: @patrickvonplaten - tokenizers: @n1t0, @LysandreJik - trainer: @sgugger - pipelines: @LysandreJik Documentation: @sgugger HF projects: - datasets: [different repo](https://github.com/huggingface/datasets) - rust tokenizers: [different repo](https://github.com/huggingface/tokenizers) Examples: - maintained examples (not research project or legacy): @sgugger, @patil-suraj - research_projects/bert-loses-patience: @JetRunner - research_projects/distillation: @VictorSanh -->
08-13-2021 10:18:48
08-13-2021 10:18:48
Friendly ping @LysandreJik @sgugger
transformers
13,113
closed
Fix CircleCI nightly tests
# What does this PR do? The pipelines TF job was not set up properly, so it failed in the nightlies.
08-13-2021 06:49:30
08-13-2021 06:49:30
transformers
13,115
closed
typeerror: textinputsequence must be str
## Describe the bug I use dataset.map() to encode the data, but get this problem. # I use the code to transfer data to local csv files,.As i use colab, local files are more convenient. dataset = load_dataset(path='glue', name='mnli') keys = ['train', 'validation_matched','validation_mismatched'] for k in keys: result = [] for record in dataset[k]: c1, c2, c3 = record['premise'], record['hypothesis'], record['label'] if c1 and c2 and c3 in {0,1,2}: result.append(c1,c2,c3)) result = pd.DataFrame(result, columns=['premise','hypothesis','label']) result.to_csv('mnli_'+k+'.csv',index=False) # then I process data like this ,and get the issue. tokenizer = AutoTokenizer.from_pretrained(MODEL_NAME) def encode(batch): return tokenizer(batch['premise'], batch['hypothesis'], max_length=MAXLEN, padding='max_length', truncation=True ) train_dict = load_dataset('csv', data_files=train_data_path) train_dataset = train_dict['train'] train_dataset = train_dataset.map(encode, batched=True) ## Expected results encode the data successfully. ## Actual results TypeError Traceback (most recent call last) <ipython-input-19-00acc2cded49> in <module>() 5 val_dataset = val_dict['train'] 6 ----> 7 train_dataset = train_dataset.map(encode, batched=True) 8 val_dataset = val_dataset.map(encode, batched=True) 9 /usr/local/lib/python3.7/dist-packages/datasets/arrow_dataset.py in map(self, function, with_indices, input_columns, batched, batch_size, drop_last_batch, remove_columns, keep_in_memory, load_from_cache_file, cache_file_name, writer_batch_size, features, disable_nullable, fn_kwargs, num_proc, suffix_template, new_fingerprint, desc) 1680 new_fingerprint=new_fingerprint, 1681 disable_tqdm=disable_tqdm, -> 1682 desc=desc, 1683 ) 1684 else: /usr/local/lib/python3.7/dist-packages/datasets/arrow_dataset.py in wrapper(*args, **kwargs) 183 } 184 # apply actual function --> 185 out: Union["Dataset", "DatasetDict"] = func(self, *args, **kwargs) 186 datasets: List["Dataset"] = list(out.values()) if isinstance(out, dict) else [out] 187 # re-apply format to the output /usr/local/lib/python3.7/dist-packages/datasets/fingerprint.py in wrapper(*args, **kwargs) 395 # Call actual function 396 --> 397 out = func(self, *args, **kwargs) 398 399 # Update fingerprint of in-place transforms + update in-place history of transforms /usr/local/lib/python3.7/dist-packages/datasets/arrow_dataset.py in _map_single(self, function, with_indices, input_columns, batched, batch_size, drop_last_batch, remove_columns, keep_in_memory, load_from_cache_file, cache_file_name, writer_batch_size, features, disable_nullable, fn_kwargs, new_fingerprint, rank, offset, disable_tqdm, desc) 2018 indices, 2019 check_same_num_examples=len(input_dataset.list_indexes()) > 0, -> 2020 offset=offset, 2021 ) 2022 except NumExamplesMismatch: /usr/local/lib/python3.7/dist-packages/datasets/arrow_dataset.py in apply_function_on_filtered_inputs(inputs, indices, check_same_num_examples, offset) 1904 effective_indices = [i + offset for i in indices] if isinstance(indices, list) else indices + offset 1905 processed_inputs = ( -> 1906 function(*fn_args, effective_indices, **fn_kwargs) if with_indices else function(*fn_args, **fn_kwargs) 1907 ) 1908 if update_data is None: <ipython-input-11-3dad555201d4> in encode(batch) 6 max_length=MAXLEN, 7 padding='max_length', ----> 8 truncation=True 9 ) /usr/local/lib/python3.7/dist-packages/transformers/tokenization_utils_base.py in __call__(self, text, text_pair, add_special_tokens, padding, truncation, max_length, stride, 
is_split_into_words, pad_to_multiple_of, return_tensors, return_token_type_ids, return_attention_mask, return_overflowing_tokens, return_special_tokens_mask, return_offsets_mapping, return_length, verbose, **kwargs) 2383 return_length=return_length, 2384 verbose=verbose, -> 2385 **kwargs, 2386 ) 2387 else: /usr/local/lib/python3.7/dist-packages/transformers/tokenization_utils_base.py in batch_encode_plus(self, batch_text_or_text_pairs, add_special_tokens, padding, truncation, max_length, stride, is_split_into_words, pad_to_multiple_of, return_tensors, return_token_type_ids, return_attention_mask, return_overflowing_tokens, return_special_tokens_mask, return_offsets_mapping, return_length, verbose, **kwargs) 2568 return_length=return_length, 2569 verbose=verbose, -> 2570 **kwargs, 2571 ) 2572 /usr/local/lib/python3.7/dist-packages/transformers/tokenization_utils_fast.py in _batch_encode_plus(self, batch_text_or_text_pairs, add_special_tokens, padding_strategy, truncation_strategy, max_length, stride, is_split_into_words, pad_to_multiple_of, return_tensors, return_token_type_ids, return_attention_mask, return_overflowing_tokens, return_special_tokens_mask, return_offsets_mapping, return_length, verbose) 406 batch_text_or_text_pairs, 407 add_special_tokens=add_special_tokens, --> 408 is_pretokenized=is_split_into_words, 409 ) 410 TypeError: TextInputSequence must be str ## Environment info <!-- You can run the command `datasets-cli env` and copy-and-paste its output below. --> - `datasets` version:1.11.0 - Platform:colab - Python version:3.7 - PyArrow version: @lhoestq
08-13-2021 06:08:53
08-13-2021 06:08:53
By the way, the same code works when I process the XNLI dataset.<|||||>Hi @justwangqian, I think your issue is with the `transformers` library. I guess you should update it, but I prefer transferring your issue to them, so that they can keep the record. Feel free to reopen an issue in `datasets` if there is finally a bug here. :)<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread. Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored.
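A hedged sketch of one frequent cause of `TextInputSequence must be str` — `None`/NaN cells surviving the CSV round-trip — with a guard that filters them out before `map`. This is an assumption about the data above, not a confirmed diagnosis; the file path is a placeholder and `encode()` refers to the function defined in the issue:

```python
from datasets import load_dataset

train_dict = load_dataset("csv", data_files="mnli_train.csv")  # placeholder path from the issue
train_dataset = train_dict["train"]

# Drop rows whose text columns are not plain strings (e.g. None/NaN after the CSV export).
train_dataset = train_dataset.filter(
    lambda x: isinstance(x["premise"], str) and isinstance(x["hypothesis"], str)
)

# train_dataset = train_dataset.map(encode, batched=True)  # encode() as defined in the issue
```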
transformers
13,112
closed
modified roberta source code
# What does this PR do? <!-- Congratulations! You've made it this far! You're not quite done yet though. Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution. Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change. Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost. --> <!-- Remove if not applicable --> Fixes # (issue) ## Before submitting - [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case). - [ #] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section? - [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link to it if that's the case. - [ ] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/master/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/master/docs#writing-source-documentation). - [ ] Did you write any new necessary tests? ## Who can review? Anyone in the community is free to review the PR once the tests have passed. Feel free to tag members/contributors who may be interested in your PR. <!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @ If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**. Please tag fewer than 3 people. Models: - albert, bert, xlm: @LysandreJik - blenderbot, bart, marian, pegasus, encoderdecoder, t5: @patrickvonplaten, @patil-suraj - longformer, reformer, transfoxl, xlnet: @patrickvonplaten - fsmt: @stas00 - funnel: @sgugger - gpt2: @patrickvonplaten, @LysandreJik - rag: @patrickvonplaten, @lhoestq - tensorflow: @LysandreJik Library: - benchmarks: @patrickvonplaten - deepspeed: @stas00 - ray/raytune: @richardliaw, @amogkam - text generation: @patrickvonplaten - tokenizers: @n1t0, @LysandreJik - trainer: @sgugger - pipelines: @LysandreJik Documentation: @sgugger HF projects: - datasets: [different repo](https://github.com/huggingface/datasets) - rust tokenizers: [different repo](https://github.com/huggingface/tokenizers) Examples: - maintained examples (not research project or legacy): @sgugger, @patil-suraj - research_projects/bert-loses-patience: @JetRunner - research_projects/distillation: @VictorSanh -->
08-12-2021 18:47:13
08-12-2021 18:47:13
transformers
13,111
closed
`ModelError` when calling SageMaker Endpoint for prediction using the official notebooks
## Environment info
- `transformers` version: 4.6.1
- Platform: Linux-4.14.232-123.381.amzn1.x86_64-x86_64-with-glibc2.10
- Python version: 3.7.10
- PyTorch version (GPU?): 1.8.1+cu102 (True)
- Tensorflow version (GPU?): not installed (NA)
- Using GPU in script?: No
- Using distributed or parallel set-up in script?: No

### Who can help
@sgugger @patil-suraj

## Information
I have trained and saved a BertForSequenceClassification model to S3. I then used [this notebook](https://github.com/huggingface/notebooks/blob/master/sagemaker/10_deploy_model_from_s3/deploy_transformer_model_from_s3.ipynb) to deploy the model to SageMaker Endpoints. I ran:

```python
from sagemaker.huggingface import HuggingFaceModel
import sagemaker

role = sagemaker.get_execution_role()

# create Hugging Face Model Class
huggingface_model = HuggingFaceModel(
    model_data="s3://XXXXXXXXXXXX/model.tar.gz",  # path to your trained sagemaker model
    role=role,                                    # iam role with permissions to create an Endpoint
    transformers_version="4.6",                   # transformers version used
    pytorch_version="1.7",                        # pytorch version used
    py_version="py36",                            # python version of the DLC
    env={'HF_TASK': 'text-classification'}
)

predictor = huggingface_model.deploy(
    initial_instance_count=1,
    instance_type="ml.m5.xlarge"
)
```

But when I use the provided snippet:

```python
# example request, you always need to define "inputs"
data = {
    "inputs": "The new Hugging Face SageMaker DLC makes it super easy to deploy models in production. I love it!"
}

# request
predictor.predict(data)
```

I get the following error:

```bash
ModelError: An error occurred (ModelError) when calling the InvokeEndpoint operation: Received client error (400) from model with message "{
  "code": 400,
  "type": "InternalServerException",
  "message": "Can\u0027t load config for \u0027/.sagemaker/mms/models/model\u0027. Make sure that:\n\n- \u0027/.sagemaker/mms/models/model\u0027 is a correct model identifier listed on \u0027https://huggingface.co/models\u0027\n\n- or \u0027/.sagemaker/mms/models/model\u0027 is the correct path to a directory containing a config.json file\n\n"
}
". See https://us-east-2.console.aws.amazon.com/cloudwatch/home?region=us-east-2#logEventViewer:group=/aws/sagemaker/Endpoints/huggingface-pytorch-inference-XXXXXXXX in account XXXXXXXXXX for more information.
```

The tasks I am working on is:
* [ ] an official GLUE/SQUaD task: (give the name)
* [ ] my own task or dataset: (give details below)
08-12-2021 18:39:07
08-12-2021 18:39:07
cc @philschmid <|||||>Hey @xegulon, How have you created your `model.tar.gz` and what does it contain? It looks like the file structure of it is wrong and the inference toolkit cannot find the `config.json` and `pytorch_model.bin`. You can take a look [here](https://huggingface.co/docs/sagemaker/inference#creating-a-model-artifact-modeltargz-for-deployment) at how to properly create a `model.tar.gz`.<|||||>Here are the contents of `model.tar.gz`: ![image](https://user-images.githubusercontent.com/74178038/129331926-90568924-9cae-4566-8102-2cd23c3d239b.png) I used the `save_pretrained` method on the model and tokenizer to get that. P.S.: after re-checking, I noticed the `transformers` version is `4.9.2`<|||||>How have you created this archive? And are you sure the structure is not the one below?

```bash
model.tar.gz
  directory
    pytorch_model.bin
```

Could you try creating the archive with the following steps?

1. cd into the artifact directory and create a tar file

```bash
cd {repository}
tar zcvf model.tar.gz *
```

The repository should be the directory where your artifacts are stored.

2. Upload model.tar.gz to S3

```bash
aws s3 cp model.tar.gz <s3://{my-s3-path}>
```

After that, you can use the S3 URI as `model_data`.<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread. Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored.
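For readers who prefer doing the packaging step in Python rather than with `tar`, a minimal sketch (the local directory name is hypothetical) that keeps every file at the root of the archive, which is what the inference toolkit expects:

```python
import tarfile
from pathlib import Path

# Hypothetical directory produced by save_pretrained (config.json, pytorch_model.bin, tokenizer files).
model_dir = Path("my-model")

with tarfile.open("model.tar.gz", "w:gz") as tar:
    for f in model_dir.iterdir():
        # arcname=f.name keeps the files at the archive root instead of nesting them in a sub-directory.
        tar.add(f, arcname=f.name)
```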
transformers
13,110
closed
adding modified roberta
# What does this PR do? <!-- Congratulations! You've made it this far! You're not quite done yet though. Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution. Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change. Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost. --> <!-- Remove if not applicable --> Fixes # (issue) ## Before submitting - [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case). - [ #] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section? - [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link to it if that's the case. - [ ] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/master/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/master/docs#writing-source-documentation). - [ ] Did you write any new necessary tests? ## Who can review? Anyone in the community is free to review the PR once the tests have passed. Feel free to tag members/contributors who may be interested in your PR. <!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @ If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**. Please tag fewer than 3 people. Models: - albert, bert, xlm: @LysandreJik - blenderbot, bart, marian, pegasus, encoderdecoder, t5: @patrickvonplaten, @patil-suraj - longformer, reformer, transfoxl, xlnet: @patrickvonplaten - fsmt: @stas00 - funnel: @sgugger - gpt2: @patrickvonplaten, @LysandreJik - rag: @patrickvonplaten, @lhoestq - tensorflow: @LysandreJik Library: - benchmarks: @patrickvonplaten - deepspeed: @stas00 - ray/raytune: @richardliaw, @amogkam - text generation: @patrickvonplaten - tokenizers: @n1t0, @LysandreJik - trainer: @sgugger - pipelines: @LysandreJik Documentation: @sgugger HF projects: - datasets: [different repo](https://github.com/huggingface/datasets) - rust tokenizers: [different repo](https://github.com/huggingface/tokenizers) Examples: - maintained examples (not research project or legacy): @sgugger, @patil-suraj - research_projects/bert-loses-patience: @JetRunner - research_projects/distillation: @VictorSanh -->
08-12-2021 18:28:54
08-12-2021 18:28:54
transformers
13,109
closed
Fix flax gpt2 hidden states
# What does this PR do? Fixes #13102 ## Before submitting - [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section? - [x] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? [ inconsistency of the last element in hidden_states between PyTorch/Flax GPT2(Neo) #13102 ](https://github.com/huggingface/transformers/issues/13102#issuecomment-897687182) ## Who can review? @patrickvonplaten @patil-suraj
08-12-2021 17:11:17
08-12-2021 17:11:17
Hi @patil-suraj, thank you for the suggestions. There are, however, some issues with

```python
if output_hidden_states:
    all_hidden_states = outputs[1] + (hidden_states,)
    outputs = (hidden_states, all_hidden_states) + outputs[2:]
else:
    outputs = (hidden_states,) + outputs[1:]
```

because this will change `outputs` to a `tuple` even if it was previously a `FlaxBaseModelOutput`, and this causes problems at the end (i.e. if `return_dict=True`):

```python
return FlaxBaseModelOutput(
    last_hidden_state=hidden_states,
    hidden_states=outputs.all_hidden_states,
    attentions=outputs.attentions,
)
```

Do you have a good solution to address this while keeping your suggestions?<|||||>Ahh, yeah, you're right! I wanted to avoid multiple if/else conditions, but it seems we will need to add one either way. I could see two options:

- we have already stored `all_hidden_states`, so we could store `all_attentions` using

```python
all_attentions = outputs[-1] if output_attentions else None
```

and then use that in the output class

- another option is, `FlaxGPT2BlockCollection` is only used internally, so we could also just always return `outputs` (including `None` values) as a `tuple`. So in `FlaxGPT2Module`, we could do

```python
if output_hidden_states:
    all_hidden_states = outputs[1] + (hidden_states,)
    outputs = (hidden_states, all_hidden_states) + outputs[2:]
else:
    outputs = (hidden_states,) + outputs[1:]

if not return_dict:
    return tuple(v for v in outputs if v is not None)

return FlaxBaseModelOutput(
    last_hidden_state=hidden_states,
    hidden_states=all_hidden_states,
    attentions=outputs[-1],
)
```
<|||||>@patil-suraj I went for option 2, with a slight change: `hidden_states=all_hidden_states,` -> `hidden_states=outputs[1],` (`all_hidden_states` is not always defined).
transformers
13,108
closed
Multi Lang Marian Translator not working (opus_mt_mul_en)
When attempting to use the [opus_mt_mul_en](https://huggingface.co/Helsinki-NLP/opus-mt-mul-en) model, no translations are generated. Based on [this issue](https://github.com/JohnSnowLabs/spark-nlp/issues/2472) on the SparkNLP repo, this has been happening for a while, but perhaps never raised here. I'm currently accessing the model through SparkNLP on an Amazon EMR cluster (release 5.30.0). Spark version 2.4.5, SparkNLP version 3.1.0.. The same issue occurs when using SparkNLP 2.7.0. Code to reproduce the issue: ```python import os ! apt-get update -qq > /dev/null # Install java ! apt-get install -y openjdk-8-jdk-headless -qq > /dev/null os.environ["JAVA_HOME"] = "/usr/lib/jvm/java-8-openjdk-amd64" os.environ["PATH"] = os.environ["JAVA_HOME"] + "/bin:" + os.environ["PATH"] ! pip install pyspark==2.4.5 spark-nlp==2.7.0 from sparknlp.annotator import * from sparknlp.common import * from sparknlp.base import * from pyspark.sql import SparkSession from pyspark.ml import Pipeline from pyspark.sql.functions import array_contains from pyspark.ml import Pipeline, PipelineModel import sparknlp from sparknlp.annotator import * from sparknlp.pretrained import PretrainedPipeline spark = sparknlp.start() documentAssembler = DocumentAssembler()\ .setInputCol("text")\ .setOutputCol("document") sentencerDL = SentenceDetectorDLModel.pretrained("sentence_detector_dl", "xx")\ .setInputCols(["document"])\ .setOutputCol("sentences") marian = MarianTransformer.pretrained("opus_mt_mul_en", "xx")\ .setInputCols(["sentences"])\ .setOutputCol("translation") marian_pipeline = Pipeline(stages=[documentAssembler, sentencerDL, marian]) sdf = spark.createDataFrame([[">>deu<< Hallo wie geht es dir Ich bin hubert aus Deutschland"], [">>fra<< Wikipédia est un projet d'encyclopédie collective en ligne, universelle, multilingue et fonctionnant sur le principe du wiki. Ce projet vise à offrir un contenu librement réutilisable, objectif et vérifiable, que chacun peut modifier et améliorer."]]).toDF("text") m_fit = marian_pipeline.fit(sdf ) res_Df = m_fit.transform(sdf) res_Df.select('translation').show(truncate=False) ``` I've tried using the `setLangId` method instead of putting the tags inline with the input text, with the same result.
08-12-2021 16:58:55
08-12-2021 16:58:55
Hey @rp13g10, We are not really familiar with the `sparkNLP` repo... from the issue I assume that the following is the error from our side:

```python
# opus-mt-mul-en
# opus-mt-en-mul
from transformers import MarianMTModel, MarianTokenizer

model_name = 'Helsinki-NLP/opus-mt-mul-en'
tokenizer = MarianTokenizer.from_pretrained(model_name)
tokenizer.supported_language_codes  # this returns nothing
```

E.g. the tokenizer should return some supported language codes. @patil-suraj have you already taken a look at multi-lingual Marian models? Also gently pinging the Marian OG @sshleifer - should we update https://huggingface.co/Helsinki-NLP/opus-mt-mul-en/blob/main/tokenizer_config.json analogously to https://huggingface.co/Helsinki-NLP/opus-mt-en-ROMANCE/blob/main/tokenizer_config.json ?<|||||>This is still an issue as far as I can tell, and it would be cool if it was fixed. :)<|||||>Gently pinging @patil-suraj here - do you have an opinion on this?<|||||>Not 100% sure, but language codes are required when there are multiple target languages, as for such models we need to prepend the target language code to the source text. The `opus-mt-mul-en` models translate multiple languages to English, so we do not need to insert any language codes, as you can see from this example:

```python
from transformers import MarianMTModel, MarianTokenizer

model_name = 'Helsinki-NLP/opus-mt-mul-en'
tokenizer = MarianTokenizer.from_pretrained(model_name)
model = MarianMTModel.from_pretrained(model_name)

texts = [
    "c'est une phrase en anglais que nous voulons traduire en français",  # French
    "Isto deve ir para o português.",  # Portuguese
    "Y esto al español",  # Spanish
]

inputs = tokenizer(texts, return_tensors="pt", padding=True)
gen_ids = model.generate(**inputs)
tokenizer.batch_decode(gen_ids, skip_special_tokens=True)
# ['is a phrase in English that we want to translate into French', 'This has to go to Portugal.', 'And this is in Spanish']
```

So it seems there is no issue with the model. And if you look at `opus-mt-en-mul` or `opus-mt-en-ROMANCE`, where there are multiple target languages, they do return a non-empty `supported_language_codes` list.<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread. Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored.
transformers
13,107
closed
[WIP] Add TFSpeech2Text
# What does this PR do? This PR adds TFSpeech2Text. The issue that requested it was recently closed due to inactivity so I don't think it is being worked on currently. If this is an incorrect assessment, feel free to let me know and I will close this. <!-- Congratulations! You've made it this far! You're not quite done yet though. Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution. Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change. Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost. --> <!-- Remove if not applicable --> Fixes # (issue) ## Before submitting - [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case). - [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section? - [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link to it if that's the case. - [x] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/master/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/master/docs#writing-source-documentation). - [ ] Did you write any new necessary tests? ## Who can review? @patil-suraj Anyone in the community is free to review the PR once the tests have passed. Feel free to tag members/contributors who may be interested in your PR. <!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @ If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**. Please tag fewer than 3 people. Models: - albert, bert, xlm: @LysandreJik - blenderbot, bart, marian, pegasus, encoderdecoder, t5: @patrickvonplaten, @patil-suraj - longformer, reformer, transfoxl, xlnet: @patrickvonplaten - fsmt: @stas00 - funnel: @sgugger - gpt2: @patrickvonplaten, @LysandreJik - rag: @patrickvonplaten, @lhoestq - tensorflow: @LysandreJik Library: - benchmarks: @patrickvonplaten - deepspeed: @stas00 - ray/raytune: @richardliaw, @amogkam - text generation: @patrickvonplaten - tokenizers: @n1t0, @LysandreJik - trainer: @sgugger - pipelines: @LysandreJik Documentation: @sgugger HF projects: - datasets: [different repo](https://github.com/huggingface/datasets) - rust tokenizers: [different repo](https://github.com/huggingface/tokenizers) Examples: - maintained examples (not research project or legacy): @sgugger, @patil-suraj - research_projects/bert-loses-patience: @JetRunner - research_projects/distillation: @VictorSanh -->
08-12-2021 16:27:04
08-12-2021 16:27:04
Gently pinging @stancld to see if he's still working on this :) <|||||>@patil-suraj I'm having a little trouble with the generate tests. Should I write a custom generate function for this model in the modeling file, or modify the base generate function to accommodate this model?<|||||>Sorry to only reply now. What is the issue with `generate`? Ideally `generate` should work, so let's try to avoid adding a custom `generate`.<|||||>This [assert](https://github.com/huggingface/transformers/blob/596bb85f2fabde6c5611cfa2664ddb357e228ec7/src/transformers/generation_tf_utils.py#L624) is where the issue starts, and it is probably down to how I handle `input_ids`. Since `input_ids` is replaced by `input_features`, all of the generate functions pop the `input_ids` key. Moreover, `input_features` has 3 dimensions, whereas most other encoder-decoder models take 2-dimensional inputs. I'm going to try to wrap this up today, so I'll figure out how to fix the issue without a custom `generate`.
transformers
13,106
closed
Fix VisualBERT docs
# What does this PR do? This PR fixes VisualBERT docs and adds demo link. Please let me know in case of any issues. Reviewers @LysandreJik @patil-suraj
08-12-2021 14:50:13
08-12-2021 14:50:13
transformers
13,105
closed
TF/Numpy variants for all DataCollator classes
This is a draft PR again - I've written an example of what a TF variant of one of our data collators would look like. If we're happy with this format, it should be easy to expand it to support Numpy/JAX as well, and to do the same for other data collators, and I'll probably add most of the other data collators to this PR before merging it. Let me know what you think!
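For context, the API that this line of work eventually converged on exposes the framework choice through a `return_tensors` argument on the collators (this reflects recent `transformers` releases, not necessarily the draft attached to this PR). A hedged sketch:

```python
from transformers import AutoTokenizer, DataCollatorWithPadding

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")

# return_tensors="tf" yields tf.Tensor batches; "np" and "pt" are also available.
collator = DataCollatorWithPadding(tokenizer, return_tensors="tf")
batch = collator([tokenizer("hello world"), tokenizer("a longer example sentence")])
print({k: v.shape for k, v in batch.items()})
```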
08-12-2021 14:31:13
08-12-2021 14:31:13
More updates done - please note that tests will fail until all of the data collators are updated, because I removed the top-level imports. I definitely won't be merging this until that's done, don't worry!<|||||>All the classes are in! Thank you to @aromans and @sdwalker62, whose PR #12199 I cannibalized for MLM and its variants. Next step is finishing tests and making sure all of this actually works.<|||||>Hi @aromans and @sdwalker62, we're ready to merge now. I just realized I'll need your Github no-reply e-mail addresses to add you though - see the docs [here](https://docs.github.com/en/github/committing-changes-to-your-project/creating-and-editing-commits/creating-a-commit-with-multiple-authors#required-co-author-information). <|||||>[email protected]<|||||>Thanks!<|||||>[email protected]<|||||>It's in, and all authors have been properly credited! If you want to delete the messages with your e-mails (in case of spambot harvesting), feel free.
transformers
13,104
closed
Fix VisualBERT docs
# What does this PR do? This PR fixes VisualBERT docs. Please let me know in case of any remaining issues. Reviewers @patil-suraj @LysandreJik
08-12-2021 14:30:18
08-12-2021 14:30:18
transformers
13,103
closed
Ci last fix
# What does this PR do? The GPU/multi-GPU tests for the cuda extensions failed on the last commit on master because there is nothing to report if no tests were run. Changing the condition from always to failure (we don't want to report anything if there is no failure anyway) fixes that.
08-12-2021 14:19:37
08-12-2021 14:19:37
transformers
13,102
closed
inconsistency of the last element in hidden_states between PyTorch/Flax GPT2(Neo)
### Who can help
@patrickvonplaten @patil-suraj

## Information
The current Flax version of GPT2/GPTNeo gives different results for the last element in `hidden_states` if `output_hidden_states=True`.

This difference comes from the following fact: in Flax GPT2 (and GPTNeo similarly), `all_hidden_states` is prepared in `FlaxGPT2BlockCollection`, which has no layer norm layer (`ln_f`), therefore the last hidden state is added before applying layer normalization. In PyTorch/TF GPT2, it is prepared in `GPT2Model` or `TFGPT2MainLayer`, which contain the `ln_f` layer, and the last hidden state is added after applying layer normalization.

This could be fixed by updating the outputs in `FlaxGPT2Module.__call__` (if it's worth the change), something like:

```python
hidden_states = outputs[0]
hidden_states = self.ln_f(hidden_states)

all_hidden_states = None
if output_hidden_states:
    if not return_dict:
        all_hidden_states = outputs[1]
    else:
        all_hidden_states = outputs.hidden_states
    all_hidden_states = all_hidden_states[:-1] + (hidden_states,)

if not return_dict:
    if all_hidden_states:
        return (hidden_states, all_hidden_states) + outputs[2:]
    else:
        return (hidden_states,) + outputs[1:]

return FlaxBaseModelOutputWithPastAndCrossAttentions(
    last_hidden_state=hidden_states,
    hidden_states=all_hidden_states,
    attentions=outputs.attentions,
    cross_attentions=outputs.cross_attentions,
)
```

### Related places in the source code

PyTorch GPT2
https://github.com/huggingface/transformers/blob/773d386041b2761204dcc67b316904d8d5b412da/src/transformers/models/gpt2/modeling_gpt2.py#L820

```python
hidden_states = self.ln_f(hidden_states)
...
# Add last hidden state
if output_hidden_states:
    all_hidden_states = all_hidden_states + (hidden_states,)
```

TensorFlow GPT2
https://github.com/huggingface/transformers/blob/773d386041b2761204dcc67b316904d8d5b412da/src/transformers/models/gpt2/modeling_tf_gpt2.py#L397

```python
hidden_states = self.ln_f(hidden_states)
...
# Add last hidden state
if inputs["output_hidden_states"]:
    all_hidden_states = all_hidden_states + (hidden_states,)
```

Flax GPT2
https://github.com/huggingface/transformers/blob/773d386041b2761204dcc67b316904d8d5b412da/src/transformers/models/gpt2/modeling_flax_gpt2.py#L461

```python
# In `FlaxGPT2BlockCollection`, which has no `ln_f` (that only exists in `FlaxGPT2Module`)
if output_hidden_states:
    all_hidden_states += (hidden_states,)
```
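A quick way to observe the discrepancy described above (a hedged sketch; it assumes the `gpt2` checkpoint with both PyTorch and Flax weights is available):

```python
import numpy as np
import torch
from transformers import FlaxGPT2Model, GPT2Model, GPT2Tokenizer

tokenizer = GPT2Tokenizer.from_pretrained("gpt2")
pt_model = GPT2Model.from_pretrained("gpt2")
flax_model = FlaxGPT2Model.from_pretrained("gpt2")

pt_inputs = tokenizer("Hello world", return_tensors="pt")
np_inputs = tokenizer("Hello world", return_tensors="np")

with torch.no_grad():
    pt_out = pt_model(**pt_inputs, output_hidden_states=True)
flax_out = flax_model(**np_inputs, output_hidden_states=True)

# The final outputs agree (both are taken after ln_f)...
print(np.allclose(pt_out.last_hidden_state.numpy(), np.asarray(flax_out.last_hidden_state), atol=1e-3))
# ...but the last entry of hidden_states does not, because Flax records it before ln_f.
print(np.allclose(pt_out.hidden_states[-1].numpy(), np.asarray(flax_out.hidden_states[-1]), atol=1e-3))
```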
08-12-2021 14:17:02
08-12-2021 14:17:02
Hey @ydshieh, That's a great catch! And we should definitely correct this. The way to go here, in my opinion, is to remove the second

```python
if output_hidden_states:
    all_hidden_states += (hidden_states,)
```

from `FlaxGPT2BlockCollection` and move it to `FlaxGPT2Module`, as you've suggested I think. Would you be interested in opening a PR for this? :-)<|||||>Hi, yes, I can open a PR for this. But just to be sure:

> my opinion is to remove the second

do you mean that, in `FlaxGPT2BlockCollection`, we should (if specified) return the tuple containing all the hidden states EXCEPT the last one, and add the last one in `FlaxGPT2Module`? I am OK with it - it is just slightly different from what I wrote originally (still add the last one in `FlaxGPT2BlockCollection`, but update it later).<|||||>Yes, I think we should add all hidden states EXCEPT the last one. This class is never used externally without `FlaxGPT2Module`, so it's safe to do IMO. Adding it once is the better option instead of adding it and updating it later IMO.<|||||>Hi, while working on a PR for this, it seems there is another bug in `FlaxGPT2BlockCollection`. Near the end of its call method,

```python
outputs = (hidden_states,)

if not return_dict:
    return tuple(v for v in outputs if v is not None)
```

it should be

```python
outputs = (hidden_states, all_hidden_states, all_attentions)
```

I think. Otherwise, we never get `all_hidden_states` / `all_attentions` in the tuple. (FlaxBartModel does this the right way.) I am going to include a fix for this in the same PR. Is that ok for you?<|||||>That's a great catch!

> I am going to include a fix for this in the same PR. Is that ok for you?

Yes! Would be great if you fix it in the same PR.
transformers
13,101
closed
[To Show] Required changes for general multi-modal models
# What does this PR do? <!-- Congratulations! You've made it this far! You're not quite done yet though. Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution. Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change. Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost. --> <!-- Remove if not applicable --> Fixes # (issue) ## Before submitting - [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case). - [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section? - [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link to it if that's the case. - [ ] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/master/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/master/docs#writing-source-documentation). - [ ] Did you write any new necessary tests? ## Who can review? Anyone in the community is free to review the PR once the tests have passed. Feel free to tag members/contributors who may be interested in your PR. <!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @ If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**. Please tag fewer than 3 people. Models: - albert, bert, xlm: @LysandreJik - blenderbot, bart, marian, pegasus, encoderdecoder, t5: @patrickvonplaten, @patil-suraj - longformer, reformer, transfoxl, xlnet: @patrickvonplaten - fsmt: @stas00 - funnel: @sgugger - gpt2: @patrickvonplaten, @LysandreJik - rag: @patrickvonplaten, @lhoestq - tensorflow: @LysandreJik Library: - benchmarks: @patrickvonplaten - deepspeed: @stas00 - ray/raytune: @richardliaw, @amogkam - text generation: @patrickvonplaten - tokenizers: @n1t0, @LysandreJik - trainer: @sgugger - pipelines: @LysandreJik Documentation: @sgugger HF projects: - datasets: [different repo](https://github.com/huggingface/datasets) - rust tokenizers: [different repo](https://github.com/huggingface/tokenizers) Examples: - maintained examples (not research project or legacy): @sgugger, @patil-suraj - research_projects/bert-loses-patience: @JetRunner - research_projects/distillation: @VictorSanh -->
08-12-2021 12:35:23
08-12-2021 12:35:23
transformers
13,100
closed
Rely on huggingface_hub for common tools
# What does this PR do? This PR removes the `hf_api` module from Transformers to rely on the one in `huggingface_hub`. It also fully deprecates the `transformers-cli` command lines relying on it (such as `login`, `whoami`, `logout`). In passing, when `model_list` was used, this PR switches to the new version, `list_models`. cc @julien-c
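For reference, a hedged sketch of the replacement call (the exact signature and return type of `list_models` depend on the installed `huggingface_hub` version):

```python
from huggingface_hub import list_models

# Equivalent in spirit to the old `model_list`, now served by huggingface_hub.
for model in list(list_models(search="distilbert"))[:3]:
    print(model.modelId)
```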
08-12-2021 12:28:24
08-12-2021 12:28:24
Awesome, wanted to do this for quite some time, thanks!
transformers
13,099
closed
[FlaxCLIP] allow passing params to image and text feature methods
# What does this PR do? Allows passing `params` to `get_text_features` and `get_image_features` methods. This is needed when we use transformations like `pmap/pjit` where we need to pass replicated or sharded params to functions.
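A hedged sketch of the motivating use case (inputs must carry a leading device axis when using `pmap`, e.g. via a sharding helper, which is omitted here):

```python
import jax
from flax.jax_utils import replicate
from transformers import FlaxCLIPModel

model = FlaxCLIPModel.from_pretrained("openai/clip-vit-base-patch32")
replicated_params = replicate(model.params)  # one copy of the weights per device

@jax.pmap
def embed_texts(input_ids, attention_mask, params):
    # Without the `params` argument added in this PR, pmap'd code could not
    # hand the replicated parameters to this method.
    return model.get_text_features(input_ids, attention_mask, params=params)
```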
08-12-2021 12:09:08
08-12-2021 12:09:08
transformers
13,098
closed
Fix Flax params dtype
# What does this PR do?

The `dtype` argument in flax models is used ambiguously. This argument is actually supposed to specify the `dtype` of computation and not the `dtype` of model parameters. But in some models/modules, it's passed to kernel initializers, which causes the `kernel` parameters to be initialized with that `dtype`.

This causes the following issues:
- in flax models, we don't pass `bias_init` to `Dense` layers since the default value is as expected by our models. So if we pass `dtype=jnp.bfloat16` it's only passed to `kernel_init`, so for a dense layer the kernel params are in `bfloat16` while the `bias` params are in `fp32`
- this also causes issues with saving and loading models as explained in #12534

This PR corrects the usage of `dtype` in flax models and adds `to_bf16`, `to_fp16` and `to_fp32` methods in `FlaxPreTrainedModel`. These methods accept any arbitrary params tree and change its `dtype`. So users can keep certain params in bf16 and others in fp32 however they like, by just passing the right parameters to these methods. To allow keeping only certain params in half-precision, the `to_bf16` method accepts a mask that specifies which params to keep in `bf16` and which in `fp32`. For example

```python
import jax
import jax.numpy as jnp
from flax.core.frozen_dict import freeze, unfreeze
from flax.traverse_util import flatten_dict, unflatten_dict
from transformers import FlaxBertModel, BertConfig

config = BertConfig(num_hidden_layers=1)
model = FlaxBertModel(config, dtype=jnp.dtype("bfloat16"))

# keep layer norm in fp32
def mask_fn(params):
    flat_params = flatten_dict(params)
    flat_mask = {
        path: (path[-2:] != ("LayerNorm", "bias") and path[-2:] != ("LayerNorm", "scale"))
        for path in flat_params
    }
    return unflatten_dict(flat_mask)

mask = mask_fn(model.params)
params = model.to_bf16(model.params, mask)

jax.eval_shape(lambda x: x, freeze(params))  # view the dtypes
```

- This PR also fixes an issue in some models where the `dtype` was never passed to some modules, so those modules were always doing computation in fp32 even if the user passed a `bf16` or `fp16` dtype.
- This should now help enable mixed-precision training in flax models as we can keep the params and computation `dtype` separate.

---

🚨 **BREAKING CHANGE** 🚨

**Note that this will be a breaking change since the meaning of `dtype` is now changed: it's only used to specify the data type of computation and does not influence the data type of model parameters.**
08-12-2021 11:29:05
08-12-2021 11:29:05
I like the design and I think it follows jax-design quite nicely (similar to how optax optimizers mask certain weights: https://github.com/huggingface/transformers/blob/1fec32adc6a4840123d5ec5ff5cf419c02342b5a/examples/flax/summarization/run_summarization_flax.py#L588) This PR will necessarly have some breaking changes as after it loading a model with `dtype=bfloat16` won't convert the weights into bfloat16 anymore, so we should announce it well. Also it would be great if @avital could maybe quickly give his opinion on the API here<|||||>Hi folks, sorry for the radio silence, I'm back now. @jheek has thought carefully about the exact meaning of dtypes in Flax modules so I'd like to hear his take on this confusion. <|||||>I think masking is the right approach here. The right dtype is very context dependent. During inference half precision is almost always better while during training it's pretty much never worth it. And then of course there is fine-tuning where the masked weights are basically in inference mode. The mask based API captures this complexity really well. <|||||>just noticed a sneaky bug in some flax models, the `dtype` is never passed to some modules, for example here in bart, https://github.com/huggingface/transformers/blob/3fbb55c75779824aacfc43067f0892674a9cfbc6/src/transformers/models/bart/modeling_flax_bart.py#L400-L405 attention never receives `dtype`, so it’s always in `fp32` even if the user passed `bf16` . same with T5 here https://github.com/huggingface/transformers/blob/3fbb55c75779824aacfc43067f0892674a9cfbc6/src/transformers/models/t5/modeling_flax_t5.py#L1368 @patrickvonplaten @sgugger I propose we make `dtype` required for all modules except user-facing once? So all main model classes will have a default type (`fp32`) but for all other submodules make it required to avoid such bugs.<|||||>And I think the flax template has to be adapted here as well<|||||>Hey @patrickvonplaten ! - added a couple more tests as you suggested - updated the templates - ran tests on both GPU and TPU and they pass Would be awesome if you could take quick final look :) <|||||>Thanks for finishing the PR!<|||||>Think we just need to update the Flax templates now and we're good to go :-)
transformers
13,097
closed
Reactivate test fetchers on scheduled tests with proper git install
# What does this PR do?

This PR reactivates the test fetcher on the scheduled jobs, now that we have debugged the root of the issue: the PyTorch docker image does not contain a recent `git` version, which in turn does not work properly with GitHub Actions, so we need to:
- install it manually
- **then** check out the repo after.
08-12-2021 09:33:31
08-12-2021 09:33:31
transformers
13,096
closed
Optimize Token Classification models for TPU
As per the XLA [documentation](https://github.com/pytorch/xla/blob/master/TROUBLESHOOTING.md#known-performance-caveats) XLA cannot handle masked indexing well. So token classification models for BERT and others use an implementation based on `torch.where`. This implementation works well on TPU. ALBERT, ELECTRA and LayoutLM token classification model uses the masked indexing which causes performance issues on TPU. This PR fixes this issue by following the BERT implementation. Relevant code in [BERT](https://github.com/huggingface/transformers/blob/c4e1586db8ef6b4102016fc5cb038940fde45325/src/transformers/models/bert/modeling_bert.py#L1741) # What does this PR do? <!-- Congratulations! You've made it this far! You're not quite done yet though. Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution. Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change. Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost. --> <!-- Remove if not applicable --> Fixes # (issue) ## Before submitting - [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case). - [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section? - [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link to it if that's the case. - [ ] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/master/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/master/docs#writing-source-documentation). - [ ] Did you write any new necessary tests? ## Who can review? Anyone in the community is free to review the PR once the tests have passed. Feel free to tag members/contributors who may be interested in your PR. <!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @ If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**. Please tag fewer than 3 people. 
Models: - albert, bert, xlm: @LysandreJik - blenderbot, bart, marian, pegasus, encoderdecoder, t5: @patrickvonplaten, @patil-suraj - longformer, reformer, transfoxl, xlnet: @patrickvonplaten - fsmt: @stas00 - funnel: @sgugger - gpt2: @patrickvonplaten, @LysandreJik - rag: @patrickvonplaten, @lhoestq - tensorflow: @LysandreJik Library: - benchmarks: @patrickvonplaten - deepspeed: @stas00 - ray/raytune: @richardliaw, @amogkam - text generation: @patrickvonplaten - tokenizers: @n1t0, @LysandreJik - trainer: @sgugger - pipelines: @LysandreJik Documentation: @sgugger HF projects: - datasets: [different repo](https://github.com/huggingface/datasets) - rust tokenizers: [different repo](https://github.com/huggingface/tokenizers) Examples: - maintained examples (not research project or legacy): @sgugger, @patil-suraj - research_projects/bert-loses-patience: @JetRunner - research_projects/distillation: @VictorSanh -->
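For reference, a sketch of the `torch.where`-based loss computation this PR mirrors (adapted from the BERT implementation linked above; the function and variable names here are illustrative):

```python
import torch
from torch.nn import CrossEntropyLoss

def token_classification_loss(logits, labels, attention_mask, num_labels):
    loss_fct = CrossEntropyLoss()
    # Keep tensor shapes static: replace labels at padded positions with the ignore
    # index instead of boolean-indexing the tensors, which XLA handles poorly.
    active_loss = attention_mask.view(-1) == 1
    active_logits = logits.view(-1, num_labels)
    active_labels = torch.where(
        active_loss, labels.view(-1), torch.tensor(loss_fct.ignore_index).type_as(labels)
    )
    return loss_fct(active_logits, active_labels)
```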
08-12-2021 08:25:23
08-12-2021 08:25:23
Hello, thank you for your PR! Would you happen to have performance results we can take a look at to see the improvement your PR offers?<|||||>Sure. I found this issue while running experiments for our paper. I cannot make that code public yet. I will make a colab notebook illustrating the issue.<|||||>Hi. Sorry for being so late. I have prepared a colab notebook showing the speedup [here](https://colab.research.google.com/drive/1Y9f3BWkTeQS7lFJSGXjcCULURYAJ1NZw?usp=sharing). With this PR we can improve ALBERT Token Classification model training time from 27.5 minutes to 3minutes.<|||||>Thank you for sharing, I have requested access to the doc<|||||>I have updated the permission. <|||||>Thank you, this looks good! @sgugger, @mfuntowicz, I think you're the most experienced with torch operations - do you have some feedback for this PR?<|||||>Sure. I will run the code on GPU too :)<|||||>Here is the [colab notebook](https://colab.research.google.com/drive/1OTBfHNt-ZGqFlz1kFvFEq-oCBRYFEuLw?usp=sharing) comparing the execution of the original implementation and the patched implementation on GPU. As expected, there is no performance degradation on GPU.<|||||>Thanks for checking! The failing test has been fixed on master so this is good to merge for me.<|||||>Thanks for your work @ibraheem-moosa, super nice addition!
transformers
13,095
closed
Memory accumulation when using hybrid clip script
## Environment info - `transformers` version: 4.9.1 - Platform: Linux-5.4.0-1043-gcp-x86_64-with-glibc2.29 - Python version: 3.8.10 - PyTorch version (GPU?): 1.9.0+cpu (False) - Tensorflow version (GPU?): 2.5.0 (False) - Flax version (CPU?/GPU?/TPU?): 0.3.4 (cpu) - Jax version: 0.2.18 - JaxLib version: 0.1.69 - Using GPU in script?: no - Using distributed or parallel set-up in script?: yes, TPU v3-8 ### Who can help @patil-suraj ## Information Model I am using (Bert, XLNet ...): BERT + ViT The problem arises when using: * [ ] the official example scripts: (give details below) * [x] my own modified scripts: (give details below) The tasks I am working on is: * [ ] an official GLUE/SQUaD task: (give the name) * [x] my own task or dataset: (give details below) I'm currently working on pretraining CLIP for Indonesian using scripts that are based on the [Hybrid CLIP](https://github.com/huggingface/transformers/tree/master/examples/research_projects/jax-projects/hybrid_clip) example. I've run the code using the COCO dataset and it ran with no problem and managed to produce a working model, however I didn't track this run with wandb so I'm not sure how the system looked like with COCO. Right now I'm trying to train a model on a larger dataset (~12M image-text pairs). However it seems that the memory keeps accumulating as seen on the graph below. During one of my runs it (not tracked) it eventually crashed after ~7 hours which is why I noticed this. After rerunning and tracking it (terminated - [here](https://wandb.ai/galuh/clip-indonesian/runs/2qb4zp6v?workspace=user-galuh) is the wandb run) turns out it's probably because of the memory: ![image](https://user-images.githubusercontent.com/10180442/129117152-cd9cb507-532a-482f-bdfb-a99178f2ccb8.png) Looks like the sharp jumps happened during evaluation. Changes on the script compared to the hybrid clip example: - add wandb logging - save training, evaluation metrics, and checkpoints in steps instead of epochs -> at first I thought the memory accumulation was due to the training and evaluation metrics, however the issue still persists despite having logged and cleared my training metrics with a 200 step interval <note: if it's OK to add to the existing script, maybe I can make a separate issue to add logging and saving by steps and make a PR for that?> - use adafactor instead of adamw ## To reproduce Steps to reproduce the behavior: 1. Run the code in [this folder](https://github.com/galuhsahid/clip-indonesian/tree/main/hybrid_clip) by running `run_training.sh`, with any large image-text dataset that is prepared as instructed in the [readme of the examples folder](https://github.com/huggingface/transformers/tree/master/examples/research_projects/jax-projects/hybrid_clip) (jsonl file). ## Expected behavior Memory stays roughly constant throughout
08-11-2021 23:45:06
08-11-2021 23:45:06
Thanks a lot for posting the detailed issue. I'm not exactly sure about this, but could you try disabling `persistent_workers` in `DataLoader`, it was causing some issues for another team. > if it's OK to add to the existing script, maybe I can make a separate issue to add logging and saving by steps and make a PR for that? yes, that would be great! Feel free to open a PR for that :)<|||||>Along with what @patil-suraj suggested, you should also bring down your `num_workers` to 16 or 32 instead of 96. That should keep your memory in check if you're planning to train your model for 2-3 days.<|||||>@bhavitvyamalik Yup, for the run I linked in the original thread I've used `num_workers` = 16. I have used 96 earlier which crashed the run almost immediately with ~12M images. Decreasing the `num_workers` helped a lot indeed, although the memory is still seeing an increasing trend. @patil-suraj Thank you for the suggestion - disabling `persistent_workers` seems to work, this is how the graph looks like after I disabled it ([wandb run](https://wandb.ai/galuh/clip-indonesian/runs/33dqdxtd/system?workspace=user-galuh)): ![image](https://user-images.githubusercontent.com/10180442/129300348-6b7a7ae0-d9bb-4c85-9c15-ec7ef5c330ed.png) It's still increasing but at a much slower pace than when `persistent_workers` was enabled and somehow drops again much later. Not sure if this is expected though. (If it's not expected - this might be somehow related to this issue https://github.com/pytorch/pytorch/issues/13246. I've tried one of the solutions (converting the `examples` list in the DataLoader into np.array) but I'm still seeing the same increasing trend) Also sure would be happy to open a PR later! Thank you<|||||>Looking at the PyTorch issue it does seem related to the dataloader. In the `ImageTextDataset` in `run_hybrid_clip.py`, all examples, captions, image_paths are stored in python lists https://github.com/huggingface/transformers/blob/bda1cb02360fc9d290636cfcb6bcbeb4a18484ce/examples/research_projects/jax-projects/hybrid_clip/run_hybrid_clip.py#L220-L228 As suggested in that issue, could try storing the examples in a zero-copy object?<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread. Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored.
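Following up on the zero-copy suggestion above, a hedged sketch (the class and fields are simplified, not the exact ones in `run_hybrid_clip.py`): storing the strings in fixed-dtype NumPy arrays instead of Python lists avoids the copy-on-read memory growth caused by refcount updates in forked DataLoader workers (pytorch/pytorch#13246).

```python
import numpy as np
from torch.utils.data import Dataset

class ImageTextDataset(Dataset):
    def __init__(self, captions, image_paths):
        # np.array of str gives a single contiguous buffer that stays read-only across workers.
        self.captions = np.array(captions)
        self.image_paths = np.array(image_paths)

    def __len__(self):
        return len(self.captions)

    def __getitem__(self, idx):
        return str(self.image_paths[idx]), str(self.captions[idx])
```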
transformers
13,094
closed
Improve type checker performance
# What does this PR do? conditionally declare `TOKENIZER_MAPPING_NAMES` within a `if TYPE_CHECKING` block so that type checkers don't need to evaluate the RHS of the assignment. this improves performance of the pylance/pyright type checkers ```Python from transformers import AutoTokenizer tokenizer = AutoTokenizer.from_pretrained("gpt2") ``` from 12 seconds, down to 2.5 seconds <!-- Congratulations! You've made it this far! You're not quite done yet though. Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution. Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change. Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost. --> <!-- Remove if not applicable --> Fixes # (issue) ## Before submitting - [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case). - [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section? - [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link to it if that's the case. - [ ] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/master/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/master/docs#writing-source-documentation). - [ ] Did you write any new necessary tests? ## Who can review? Anyone in the community is free to review the PR once the tests have passed. Feel free to tag members/contributors who may be interested in your PR. <!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @ If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**. Please tag fewer than 3 people. Models: - albert, bert, xlm: @LysandreJik - blenderbot, bart, marian, pegasus, encoderdecoder, t5: @patrickvonplaten, @patil-suraj - longformer, reformer, transfoxl, xlnet: @patrickvonplaten - fsmt: @stas00 - funnel: @sgugger - gpt2: @patrickvonplaten, @LysandreJik - rag: @patrickvonplaten, @lhoestq - tensorflow: @LysandreJik Library: - benchmarks: @patrickvonplaten - deepspeed: @stas00 - ray/raytune: @richardliaw, @amogkam - text generation: @patrickvonplaten - tokenizers: @n1t0, @LysandreJik - trainer: @sgugger - pipelines: @LysandreJik Documentation: @sgugger HF projects: - datasets: [different repo](https://github.com/huggingface/datasets) - rust tokenizers: [different repo](https://github.com/huggingface/tokenizers) Examples: - maintained examples (not research project or legacy): @sgugger, @patil-suraj - research_projects/bert-loses-patience: @JetRunner - research_projects/distillation: @VictorSanh -->
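A minimal, hypothetical illustration of the pattern (the actual change in `transformers` may be shaped differently): the type checker only sees a cheap annotation, while the runtime still builds the full mapping.

```python
from collections import OrderedDict
from typing import TYPE_CHECKING

if TYPE_CHECKING:
    # Annotation-only declaration: pyright/pylance stop here and never evaluate the big literal.
    TOKENIZER_MAPPING_NAMES: "OrderedDict[str, tuple]"
else:
    TOKENIZER_MAPPING_NAMES = OrderedDict(
        [
            ("bert", ("BertTokenizer", "BertTokenizerFast")),
            ("gpt2", ("GPT2Tokenizer", "GPT2TokenizerFast")),
            # ... hundreds more entries in the real file
        ]
    )
```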
08-11-2021 22:04:57
08-11-2021 22:04:57
Last failure is not linked to this PR and has been fixed on master already, so we're good to go, thanks again!
transformers
13,093
closed
AttributeError: module 'sacrebleu' has no attribute 'DEFAULT_TOKENIZER' in translation.ipynb notebook
I'm trying to use translation.ipynb notebook. I'm getting below error: ![image](https://user-images.githubusercontent.com/32965166/129086137-d5a1daca-ad95-4e8f-98a5-01f1618de791.png) ## Environment info <!-- You can run the command `transformers-cli env` and copy-and-paste its output below. Don't forget to fill out the missing fields in that output! --> - `transformers` version: 4.9.2 - Platform: - Python version: - PyTorch version (GPU?): - Tensorflow version (GPU?): - Using GPU in script?: - Using distributed or parallel set-up in script?: ### Who can help <!-- Your issue will be replied to more quickly if you can figure out the right person to tag with @ If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**. Please tag fewer than 3 people. Models: - albert, bert, xlm: @LysandreJik - blenderbot, bart, marian, pegasus, encoderdecoder, t5: @patrickvonplaten, @patil-suraj - longformer, reformer, transfoxl, xlnet: @patrickvonplaten - fsmt: @stas00 - funnel: @sgugger - gpt2: @patrickvonplaten, @LysandreJik - rag: @patrickvonplaten, @lhoestq - tensorflow: @Rocketknight1 Library: - benchmarks: @patrickvonplaten - deepspeed: @stas00 - ray/raytune: @richardliaw, @amogkam - text generation: @patrickvonplaten - tokenizers: @LysandreJik - trainer: @sgugger - pipelines: @LysandreJik Documentation: @sgugger Model hub: - for issues with a model report at https://discuss.huggingface.co/ and tag the model's creator. HF projects: - datasets: [different repo](https://github.com/huggingface/datasets) - rust tokenizers: [different repo](https://github.com/huggingface/tokenizers) Examples: - maintained examples (not research project or legacy): @sgugger, @patil-suraj - research_projects/bert-loses-patience: @JetRunner - research_projects/distillation: @VictorSanh --> ## Information Model I am using (Bert, XLNet ...): The problem arises when using: * [ ] the official example scripts: (give details below) * [ ] my own modified scripts: (give details below) The tasks I am working on is: * [ ] an official GLUE/SQUaD task: (give the name) * [ ] my own task or dataset: (give details below) ## To reproduce Steps to reproduce the behavior: 1. 2. 3. <!-- If you have code snippets, error messages, stack traces please provide them here as well. Important! Use code tags to correctly format your code. See https://help.github.com/en/github/writing-on-github/creating-and-highlighting-code-blocks#syntax-highlighting Do not use screenshots, as they are hard to read and (more importantly) don't allow others to copy-and-paste your code.--> ## Expected behavior <!-- A clear and concise description of what you would expect to happen. -->
08-11-2021 18:50:31
08-11-2021 18:50:31
Hello! A new version of sacrebleu was released with breaking changes. Could you reinstall sacrebleu at version 1.5.1 to see if it runs? cc @sgugger<|||||>Installing version 1.5.1 solves the problem. Thanks!<|||||>Removed sacrebleu 2.0.0 and installed 1.5.1. It works!<|||||>I used 1.5.1 but it failed.
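A hedged sanity-check snippet for the workaround above (pin sacrebleu below 2.0, where `DEFAULT_TOKENIZER` was removed):

```python
import sacrebleu

print(sacrebleu.__version__)        # should start with "1." for this notebook
print(sacrebleu.DEFAULT_TOKENIZER)  # exists on 1.x (e.g. "13a"), removed in 2.0
```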
transformers
13,092
closed
[Benchmark]
# 🖥 Benchmarking `transformers` ## Benchmark Which part of `transformers` did you benchmark? ## Set-up What did you run your benchmarks on? Please include details, such as: CPU, GPU? If using multiple GPUs, which parallelization did you use? ## Results Put your results here!
08-11-2021 16:16:33
08-11-2021 16:16:33
This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread. Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored.
transformers
13,091
closed
Install git
Add git to the installation instructions for pytorch-based images which do not have git installed.
08-11-2021 15:44:35
08-11-2021 15:44:35
Just added it in the TensorFlow tests that are commented out for now as I'm afraid we will forget otherwise when we uncomment them. Thanks a lot for adding this!
transformers
13,090
closed
[Flax/JAX] Run jitted tests at every commit
# What does this PR do? <!-- Congratulations! You've made it this far! You're not quite done yet though. Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution. Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change. Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost. --> <!-- Remove if not applicable --> Thanks to @sgugger's PR to run only tests that are affected by the code change, we can re-enable jitted Flax/JAX tests at every commit in my opinion. A jitted Flax/JAX test takes between 20 seconds and 5 minutes per model (only BigBird takes 5 minutes, the second longest test takes 1min), so a total of around 20 minutes (only when files affected all Flax models are pushed). If just a single Flax model is changed the tests will take a minute or so, see: https://circle-production-customer-artifacts.s3.amazonaws.com/picard/forks/5bdabdd888af1f000130874a/226201271/6113ef4861c2ff26950fd762-0-build/artifacts/~/transformers/reports/tests_flax_durations.txt?X-Amz-Algorithm=AWS4-HMAC-SHA256&X-Amz-Date=20210811T154534Z&X-Amz-SignedHeaders=host&X-Amz-Expires=60&X-Amz-Credential=AKIAJR3Q6CR467H7Z55A%2F20210811%2Fus-east-1%2Fs3%2Faws4_request&X-Amz-Signature=1eae0d3553c5ebf6daac2dfa0db9af4dde96e7fe1630bab6d16e442eb8832cb7 @sgugger I think it's fine to run the jitted Flax/JAX tests now everytime thanks to your PR :-) ## Before submitting - [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case). - [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section? - [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link to it if that's the case. - [ ] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/master/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/master/docs#writing-source-documentation). - [ ] Did you write any new necessary tests? ## Who can review? Anyone in the community is free to review the PR once the tests have passed. Feel free to tag members/contributors who may be interested in your PR. <!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @ If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**. Please tag fewer than 3 people. 
Models: - albert, bert, xlm: @LysandreJik - blenderbot, bart, marian, pegasus, encoderdecoder, t5: @patrickvonplaten, @patil-suraj - longformer, reformer, transfoxl, xlnet: @patrickvonplaten - fsmt: @stas00 - funnel: @sgugger - gpt2: @patrickvonplaten, @LysandreJik - rag: @patrickvonplaten, @lhoestq - tensorflow: @LysandreJik Library: - benchmarks: @patrickvonplaten - deepspeed: @stas00 - ray/raytune: @richardliaw, @amogkam - text generation: @patrickvonplaten - tokenizers: @n1t0, @LysandreJik - trainer: @sgugger - pipelines: @LysandreJik Documentation: @sgugger HF projects: - datasets: [different repo](https://github.com/huggingface/datasets) - rust tokenizers: [different repo](https://github.com/huggingface/tokenizers) Examples: - maintained examples (not research project or legacy): @sgugger, @patil-suraj - research_projects/bert-loses-patience: @JetRunner - research_projects/distillation: @VictorSanh -->
08-11-2021 15:39:46
08-11-2021 15:39:46
@sgugger - I'll wait to merge this PR until "efficient testing" is rolled out for the self-push GitHub action<|||||>Efficient tests are now rolled out on `self-push` -> merging the PR
transformers
13,089
closed
can"t connect ther online datasets.the issue:ConnectionError: Couldn't reach https://raw.githubusercontent.com/huggingface/datasets/1.7.0/datasets/glue/glue.py
## Environment info <!-- You can run the command `transformers-cli env` and copy-and-paste its output below. Don't forget to fill out the missing fields in that output! --> - `transformers` version:4.6.1 - Platform: - Python version:3.6 - PyTorch version (GPU?): - Tensorflow version (GPU?): - Using GPU in script?: - Using distributed or parallel set-up in script?: ### Who can help <!-- Your issue will be replied to more quickly if you can figure out the right person to tag with @ If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**. Please tag fewer than 3 people. Models: - albert, bert, xlm: @LysandreJik - blenderbot, bart, marian, pegasus, encoderdecoder, t5: @patrickvonplaten, @patil-suraj - longformer, reformer, transfoxl, xlnet: @patrickvonplaten - fsmt: @stas00 - funnel: @sgugger - gpt2: @patrickvonplaten, @LysandreJik - rag: @patrickvonplaten, @lhoestq - tensorflow: @Rocketknight1 Library: - benchmarks: @patrickvonplaten - deepspeed: @stas00 - ray/raytune: @richardliaw, @amogkam - text generation: @patrickvonplaten - tokenizers: @LysandreJik - trainer: @sgugger - pipelines: @LysandreJik Documentation: @sgugger Model hub: - for issues with a model report at https://discuss.huggingface.co/ and tag the model's creator. HF projects: - datasets: [different repo](https://github.com/huggingface/datasets) - rust tokenizers: [different repo](https://github.com/huggingface/tokenizers) Examples: - maintained examples (not research project or legacy):run_glue.py @sgugger, @patil-suraj - research_projects/bert-loses-patience: @JetRunner - research_projects/distillation: @VictorSanh --> ## Information Model I am using (Bert, XLNet ...): The problem arises when using: * [ ] the official example scripts: (give details below) * [x] my own modified scripts: (give details below) The tasks I am working on is: * [ ] an official GLUE/SQUaD task: (give the name) * [x] my own task or dataset: (give details below) ## To reproduce Steps to reproduce the behavior: 1. 2. 3. <!-- If you have code snippets, error messages, stack traces please provide them here as well. Important! Use code tags to correctly format your code. See https://help.github.com/en/github/writing-on-github/creating-and-highlighting-code-blocks#syntax-highlighting Do not use screenshots, as they are hard to read and (more importantly) don't allow others to copy-and-paste your code.--> ## Expected behavior <!-- A clear and concise description of what you would expect to happen. -->
08-11-2021 15:30:20
08-11-2021 15:30:20
Hi, Can you create a Colab/code example to reproduce the issue?<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread. Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored.
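Since the issue body is empty, here is a minimal sketch of the kind of reproduction being asked for above. The call is assumed, not taken from the reporter: loading GLUE fetches the dataset script from raw.githubusercontent.com, so it raises a ConnectionError when that host is unreachable.

```python
# Assumed minimal reproduction: requires network access to raw.githubusercontent.com.
# Running it once on a machine with access caches the script and data locally.
from datasets import load_dataset

dataset = load_dataset("glue", "mrpc")
print(dataset["train"][0])
```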
transformers
13,088
closed
Doctests job
Add a doctests job that runs on a daily basis. It currently goes through two files which are cleaned up for doctests (see `documentation_tests.txt`). As files get cleaned up, they should be added to that file to enable the tests. Once all files are cleaned, this logic can be removed.
08-11-2021 15:13:12
08-11-2021 15:13:12
transformers
13,087
closed
Fix classifier dropout in AlbertForMultipleChoice
Classification head of AlbertForMultipleChoice uses `hidden_dropout_prob` instead of `classifier_dropout_prob`. This is not desirable as we cannot change classifer head dropout probability without changing the dropout probabilities of the whole model. As shown in the paper Albert performance is hurt by dropout. So we should be able to change classifier head probability without changing internal dropout of Albert. Also I wonder if changing the internal dropout of a pretrained model is a good idea or not. Also I have seen similar issue in Bert and Roberta multiple choice models. I wonder if this is a conscious choice or an unintended bug. This PR fixes this behaviour for Albert. # What does this PR do? <!-- Congratulations! You've made it this far! You're not quite done yet though. Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution. Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change. Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost. --> <!-- Remove if not applicable --> Fixes # (issue) ## Before submitting - [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case). - [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section? - [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link to it if that's the case. - [ ] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/master/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/master/docs#writing-source-documentation). - [ ] Did you write any new necessary tests? ## Who can review? Anyone in the community is free to review the PR once the tests have passed. Feel free to tag members/contributors who may be interested in your PR. <!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @ If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**. Please tag fewer than 3 people. 
Models: - albert, bert, xlm: @LysandreJik - blenderbot, bart, marian, pegasus, encoderdecoder, t5: @patrickvonplaten, @patil-suraj - longformer, reformer, transfoxl, xlnet: @patrickvonplaten - fsmt: @stas00 - funnel: @sgugger - gpt2: @patrickvonplaten, @LysandreJik - rag: @patrickvonplaten, @lhoestq - tensorflow: @LysandreJik Library: - benchmarks: @patrickvonplaten - deepspeed: @stas00 - ray/raytune: @richardliaw, @amogkam - text generation: @patrickvonplaten - tokenizers: @n1t0, @LysandreJik - trainer: @sgugger - pipelines: @LysandreJik Documentation: @sgugger HF projects: - datasets: [different repo](https://github.com/huggingface/datasets) - rust tokenizers: [different repo](https://github.com/huggingface/tokenizers) Examples: - maintained examples (not research project or legacy): @sgugger, @patil-suraj - research_projects/bert-loses-patience: @JetRunner - research_projects/distillation: @VictorSanh -->
08-11-2021 14:48:56
08-11-2021 14:48:56
Hello! Indeed, thank you for fixing it! We'll gladly welcome PRs that update other models that have this issue.
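For readers hitting the same limitation, a short sketch of what the fix enables. The 0.2 value is purely illustrative:

```python
# With the fix, the multiple-choice head reads classifier_dropout_prob from the config,
# so the head dropout can be changed without touching the encoder's hidden_dropout_prob.
from transformers import AlbertConfig, AlbertForMultipleChoice

config = AlbertConfig.from_pretrained("albert-base-v2", classifier_dropout_prob=0.2)
model = AlbertForMultipleChoice.from_pretrained("albert-base-v2", config=config)

print(model.config.classifier_dropout_prob)  # 0.2 -> used only by the classification head
print(model.config.hidden_dropout_prob)      # unchanged encoder dropout
```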
transformers
13,086
closed
Missing `lm_head` parameter in FlaxGPT2LMHeadModel.params
## Environment info - `transformers` version: 4.10.0.dev0 - Platform: Linux-5.11.0-25-generic-x86_64-with-glibc2.29 - Python version: 3.8.10 - PyTorch version (GPU?): 1.9.0+cpu (False) - Tensorflow version (GPU?): 2.5.1 (False) - Flax version (CPU?/GPU?/TPU?): 0.3.4 (cpu) - Jax version: 0.2.16 - JaxLib version: 0.1.68 - Using GPU in script?: No - Using distributed or parallel set-up in script?: No ### Who can help @patrickvonplaten ## Information Model I am using: `FlaxGPT2LMHeadModel` ## To reproduce Steps to reproduce the behavior: This code snippet ``` from transformers import FlaxGPT2LMHeadModel model = FlaxGPT2LMHeadModel.from_pretrained('gpt2') {k for k in model.params} ``` gives ``` {'transformer'} ``` ## Expected behavior I expect the output will be ``` {'transformer', 'lm_head'} ``` because `FlaxGPT2LMHeadModule` has `self.lm_head` as in the code snippet below ``` class FlaxGPT2LMHeadModule(nn.Module): def setup(self): self.transformer = FlaxGPT2Module(self.config, dtype=self.dtype) self.lm_head = nn.Dense(...) ```
08-11-2021 14:36:31
08-11-2021 14:36:31
Hey @ydshieh, Thanks for the issue! The reason why `lm_head` is missing in the parameters is that the input and output embeddings are tied for `gpt2`, which is the default case if not stated otherwise in the specific config -> see: https://github.com/huggingface/transformers/blob/c71f73f438c7848b7d86af5258e886f03ba45f1e/src/transformers/configuration_utils.py#L227 As a consequence the jax models run through this line of code: https://github.com/huggingface/transformers/blob/c71f73f438c7848b7d86af5258e886f03ba45f1e/src/transformers/models/gpt2/modeling_flax_gpt2.py#L590 which means that the `lm_head` weights are never used; **instead**, the `shared_kernel = self.transformer.variables["params"]["wte"]["embedding"].T` weights are passed through the `lm_head` module. When the parameters were first created, jax therefore did not trace through an uninitialized `self.lm_head` but just applied existing weights to the `lm_head`, which then didn't create any weights for `lm_head`. => In short, one can remember that weights are only created if the control flow goes through the `flax.linen.Module.__call__(...)` method. If the control flow just goes through a `flax.linen.Module.apply(...)` method with given weights, then the model does not expect any weights for this module and will never create them. <|||||>Hi, @patrickvonplaten , thank you for this explanation. Now I feel more sure about the code below (copied from `modeling_hybrid_clip.py`) for the recent work on `FlaxEncoderDecoderModel` ``` class FlaxEncoderDecoderModel(FlaxPreTrainedModel): @classmethod def from_encoder_decoder_pretrained( ... # init model model = cls(config, dtype=dtype, **kwargs) model.params["encoder"] = encoder.params model.params["decoder"] = decoder.params return model ```<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread. Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored.
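A quick way to see the tying described above in practice; the shapes in the comments are those of the public `gpt2` checkpoint:

```python
# Because gpt2 ties input and output embeddings (config.tie_word_embeddings is True),
# the word-embedding matrix below is the only set of weights behind the LM head:
# its transpose is applied as the lm_head kernel, so no "lm_head" entry exists in params.
from transformers import FlaxGPT2LMHeadModel

model = FlaxGPT2LMHeadModel.from_pretrained("gpt2")
print(model.config.tie_word_embeddings)                       # True
print(model.params["transformer"]["wte"]["embedding"].shape)  # (50257, 768)
```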
transformers
13,085
closed
Proper import for unittest.mock.patch
# What does this PR do? Import from `unittest.mock` to avoid errors.
08-11-2021 14:08:16
08-11-2021 14:08:16
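For context on the PR above, the pattern it switches to is roughly the following. This is a generic sketch, not the actual diff:

```python
# `import unittest` alone does not guarantee that the `unittest.mock` submodule is
# loaded, so `unittest.mock.patch` can raise AttributeError depending on what else
# was imported first. Importing the submodule explicitly avoids that.
from unittest.mock import patch

with patch("os.getcwd", return_value="/tmp"):
    import os
    print(os.getcwd())  # "/tmp" inside the patched context
```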
transformers
13,084
closed
やからん
# 🚀 Feature request <!-- A clear and concise description of the feature proposal. Please provide a link to the paper and code in case they exist. --> ## Motivation <!-- Please outline the motivation for the proposal. Is your feature request related to a problem? e.g., I'm always frustrated when [...]. If this is related to another GitHub issue, please link here too. --> ## Your contribution <!-- Is there any way that you could help, e.g. by submitting a PR? Make sure to read the CONTRIBUTING.MD readme: https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md -->
08-11-2021 13:18:12
08-11-2021 13:18:12
This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread. Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored.
transformers
13,082
closed
I modified the https://github.com/huggingface/transformers/blob/master/examples/language-modeling/run_language_modeling.py script a few days ago for training ELECTRA from scratch. But there were some problems (maybe bugs) I had to solve for this task.
I modified the https://github.com/huggingface/transformers/blob/master/examples/language-modeling/run_language_modeling.py script a few days ago for training ELECTRA from scratch. But there were some problems (maybe bugs) I had to solve for this task. Currently I'm setting up a clean running version for training an ELECTRA language model from scratch with an additional document classification head based on the script. _Originally posted by @miketrimmel in https://github.com/huggingface/transformers/issues/4425#issuecomment-630715171_
08-11-2021 12:02:15
08-11-2021 12:02:15
location is currently not available...please share the exact location<|||||>Here's the folder containing the language modeling scripts: https://github.com/huggingface/transformers/tree/master/examples/pytorch/language-modeling<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread. Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored.
transformers
13,081
closed
location is currently not available... please share the exact location - Detailed Explanation
Detailed Explanation https://mlcom.github.io/Create-Language-Model/ _Originally posted by @mlcom in https://github.com/huggingface/transformers/issues/4425#issuecomment-774689668_
08-11-2021 12:01:32
08-11-2021 12:01:32
location is currently not available...please share the exact location<|||||>@apkbala107 mlcom.github.io<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread. Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored.
transformers
13,080
closed
[Vision Transformers] examples and pretraining
# 🚀 Feature request <!-- A clear and concise description of the feature proposal. Please provide a link to the paper and code in case they exist. --> Example scripts for fine-tuning and pretraining CLIP and BEiT models. ## Motivation <!-- Please outline the motivation for the proposal. Is your feature request related to a problem? e.g., I'm always frustrated when [...]. If this is related to another GitHub issue, please link here too. --> At the moment I am training ViT and DeiT for several tasks and thought it would be interesting to compare with CLIP or a from-scratch pretrained BEiT model in a self-supervised way. I saw there are already examples for training CLIP using transformers and Flax; I'm not sure if there is a specific reason why they are not already in this repo. ## Your contribution <!-- Is there any way that you could help, e.g. by submitting a PR? Make sure to read the CONTRIBUTING.MD readme: https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md --> If there is someone out there who could give me advice, I would be interested in adding the BEiT pretraining script for PyTorch. Long term, I definitely want to add comparisons between ViT and from-scratch trained BEiT for several tasks, especially low-resource tasks, to the hub.
08-11-2021 11:53:55
08-11-2021 11:53:55
This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread. Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored.
transformers
13,079
closed
fix: keep separate hypothesis for different beam group
# What does this PR do? It fixes an issue in the diverse beam search implementation. <br> # What was the issue? By definition, diverse beam search is a variant of beam search that groups beams and introduces variance between the groups. After successful decoding, we select beams from each group, thus giving diverse solutions. For example, with beam size 3 and group size 3, we will have 1 beam in each group. And finally, if we want the top 3 suggestions, we should select the top suggestion from each group. This condition was violated in the current implementation, thus giving similar generated sequences in some cases. Paper for reference: [Diverse Beam Search](https://arxiv.org/pdf/1610.02424.pdf) ## Who can review? @patrickvonplaten Please review the changes
08-11-2021 11:12:24
08-11-2021 11:12:24
@ayushtiku5 could you maybe take a look here? :-)<|||||>@patrickvonplaten @ayushtiku5 doesn't seem to be available, could you please take a look? Regarding the test failure, it is related to the change itself: there are some assertions on the structure of the beam hypothesis array which need to be changed. Once the change seems fine to you, I can make the tests pass.<|||||>@patrickvonplaten If you need some examples I can try to find them. I tried to debug this issue on my custom trained weights; it would be hard to find an example on public models.<|||||>Thanks for pinging me again on this @ayubSubhaniya! I finally took some time to look a bit deeper into the PR - IMO it's not really a bug, but just a matter of how to define "beam_per_group". If `num_beams` is thought of as the total number of beams for generation, then `num_beams // num_beam_groups` is the number of beams per group. It is questionable whether `num_beams` in this case should represent the overall number of beams across all groups or the number *per* group - if I understand correctly, in your opinion it should be per group. I understand this point, but the problem is that making this change now is a big backwards-compatibility-breaking change: imagine all the users using group beam search in their pipelines who would all of a sudden get different results because the meaning of `num_beams` changed. So, I'd prefer not to merge this PR, as IMO it doesn't really fix a bug but just re-interprets the meaning of `num_beams`, which is a very public API<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread. Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored.
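For readers unfamiliar with the API being discussed, this is how group (diverse) beam search is invoked today; under the current semantics, `num_beams` is the total across all groups. The model and sentence below are only placeholders:

```python
# num_beams counts all beams across groups, so 6 beams / 3 groups = 2 beams per group.
# The PR above argued that each returned sequence should come from a different group.
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

tokenizer = AutoTokenizer.from_pretrained("t5-small")
model = AutoModelForSeq2SeqLM.from_pretrained("t5-small")

inputs = tokenizer("translate English to German: The house is wonderful.", return_tensors="pt")
outputs = model.generate(
    **inputs,
    num_beams=6,
    num_beam_groups=3,
    diversity_penalty=1.0,
    num_return_sequences=3,
)
print(tokenizer.batch_decode(outputs, skip_special_tokens=True))
```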
transformers
13,078
closed
[Doctest] Setup, quicktour and task_summary
# What does this PR do? This PR starts the work of re-enabling the doc tests and makes sure our documentation uses the latest version of the APIs. For the doctest setup, it registers the options we will need for `doctest` in the setup.cfg. For the quicktour and task_summary, it tweaks all results to match the output of the code so the doctests pass for those two files, and removes the use of deprecated APIs (AutoModelWithLMHead) as well as favoring the call method of the tokenizers over the encode method.
08-11-2021 08:57:24
08-11-2021 08:57:24
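As a reminder of what the doctest runner in the PR above checks, a minimal standalone example (not taken from the documentation itself):

```python
# doctest compares the text after each `>>>` prompt with the actual REPL output, which
# is why the results quoted in quicktour/task_summary had to be updated to match exactly.
def greet(name):
    """
    >>> greet("world")
    'Hello, world!'
    """
    return f"Hello, {name}!"

if __name__ == "__main__":
    import doctest
    doctest.testmod(verbose=False)
```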
transformers
13,077
closed
Add MultiBERTs conversion script
# What does this PR do? This PR adds a MultiBERTs checkpoint conversion script to the BERT model. This PR closes #13069. Currently the issue is that some weights required in HuggingFace `BertForPreTraining` are not present in the classifier layer. The `cls` keys present in `BertForPreTraining`: ```python cls.predictions.bias cls.predictions.transform.dense.weight cls.predictions.transform.dense.bias cls.predictions.transform.LayerNorm.weight cls.predictions.transform.LayerNorm.bias cls.predictions.decoder.weight cls.predictions.decoder.bias cls.seq_relationship.weight cls.seq_relationship.bias ``` Name splits starting with `cls` in the MultiBERTs checkpoint `seed_0.zip`: ```python ['cls', 'predictions', 'output_bias'] ['cls', 'predictions', 'transform', 'LayerNorm', 'bias'] ['cls', 'predictions', 'transform', 'LayerNorm', 'kernel'] ['cls', 'predictions', 'transform', 'dense', 'bias'] ['cls', 'predictions', 'transform', 'dense', 'kernel'] ['cls', 'seq_relationship', 'output_bias'] ['cls', 'seq_relationship', 'output_weights'] ``` The following weights are not present: ```python cls.predictions.decoder.weight cls.predictions.decoder.bias ``` How do I handle this? EDIT: ------ I checked the original BERT checkpoints present in the table [here](https://github.com/google-research/bert); the same issue happens there as well. EDIT 2: ------ The BERT conversion script also skips the final layers (pre-training). Specifically, this script does not handle MLM/NSP heads.
08-11-2021 07:58:54
08-11-2021 07:58:54
Thanks for working on this. Where are the MultiBERT checkpoints in the README you link to? Nvm, found them here: https://github.com/google-research/language/tree/master/language/multiberts And question out of interest: was it not possible to use the existing conversion script?<|||||>@NielsRogge Thanks for taking a look at this. The existing scripts do not consider NLP/MLM heads - which are probably randomly initialized for downstream models, including pre-training, MLM and NSP. I feel, for MultiBERTs, we might want the heads too? I'm not 100% sure of this requirement. Maybe @yanaiela can share the exact requirement he has in mind if that is not the case? What do you think?<|||||>So, although in my specific use case I don't need the MLM heads, it would be nice to have. Also, I don't think that this is what this PR is doing, but I think it would be beneficial to integrate such function into the standard model loading. There's an increasing interest of researchers in studying the intermediate steps of models, and there's more and more publications that release these checkpoints. It would be good to be able to load these checkpoint seamlessly through the regular api, rather than converting each checkpoint on its own.<|||||>@yanaiela Are you saying that the checkpoints should be available on the Hub for easy use? For example: ```python from transformers import BertModel model = BertModel.from_pretrained('multiberts-seed-0') ``` ? Or, add a custom definition in the modeling file which does something like: ```python from transformers import BertModel model = BertModel.from_pretrained_original_tf_url(<URL>) ```<|||||>Well, the first option would be super convenient (btw, it should also contain the checkpoint step).<|||||>I think this conversion script will be needed in order to convert. I can put these models up on the hub after that so the first option can be used.<|||||>ah sure, but I'm not sure if the best option would be to upload all of them to the hub? It may accumulate to a lot of storage (relatively) with all the checkpoints, so maybe a good option would be to integrate the conversion when calling the `from_pretrained` function locally. What do you think?<|||||>@yanaiela If that is the requirement, then is there an issue with downloading the checkpoint, using conversion script and then using the model? I could add another method which downloads from the URL if not present in cache and returns the model. For example ```python from transformers.models.bert.convert_original_multiberts_tf2_checkpoint_to_pytorch import get_pretrained_multiberts_checkpoint seed_0_model = get_pretrained_multiberts_checkpoint(seed=0, force_download=True) ``` <|||||>Ah I mainly was commenting on the comment of putting models on the hub. Integrating the script sounds like a good idea though!<|||||>:P Okay. Not sure which is the best way to go, wdyt @NielsRogge?<|||||>Hi, > ah sure, but I'm not sure if the best option would be to upload all of them to the hub? Storage is not a problem on the hub. All MultiBERT checkpoints can be uploaded there. In that way, people can do the following: ``` from transformers import BertModel model = BertModel.from_pretrained("google/multibert-...") ``` Regarding the conversion script, I wonder whether we could update the existing conversion script in `modeling_bert.py` to also include the MLM and NSP heads, such that we don't need to add a new one. 
> The following weights are not present: > cls.predictions.decoder.weight > cls.predictions.decoder.bias => is that because MultiBERT checkpoints use weight tying (i.e. same embedding layer at the input and output)?<|||||>@NielsRogge Sorry I didn't check this earlier. You are right, they [use the input embeddings](https://github.com/google-research/bert/blob/eedf5716ce1268e56f0a50264a88cafad334ac61/run_pretraining.py#L140) to the [`get_masked_lm_output` method](https://github.com/google-research/bert/blob/eedf5716ce1268e56f0a50264a88cafad334ac61/run_pretraining.py#L257). Should I edit the bert conversion script with an option to include MLM/NSP heads? <|||||>@NielsRogge I checked. The current `convert_bert_original_tf_checkpoint_to_pytorch` script does exactly what I'm doing ;-; Sorry, I was looking at the `tf2` checkpoint script earlier. :/ My bad. Should I start pushing the multiberts checkpoints to the hub, then?<|||||>Yes, you can upload all checkpoints to the hub (under the "Google" namespace), and then close this PR.<|||||>Thanks a lot for working on this @gchhablani!<|||||>I have pushed all final checkpoints to the hub as `multiberts-seed-x` where x ranges from 0 to 24. For intermediate, I'm thinking something like `multiberts-seed-x-10k` for the 10k-th checkpoint. Does this sound okay?<|||||>Yes, that's fine!<|||||>--- language: en tags: - exbert - multiberts license: apache-2.0 datasets: - bookcorpus - wikipedia --- # MultiBERTs Seed 0 (uncased) Seed 0 pretrained BERT model on English language using a masked language modeling (MLM) objective. It was introduced in [this paper](https://arxiv.org/pdf/2106.16163.pdf) and first released in [this repository](https://github.com/google-research/language/tree/master/language/multiberts). This model is uncased: it does not make a difference between english and English. Disclaimer: The team releasing MultiBERTs did not write a model card for this model so this model card has been written by [gchhablani](https://hf.co/gchhablani). ## Model description MultiBERTs models are transformers model pretrained on a large corpus of English data in a self-supervised fashion. This means it was pretrained on the raw texts only, with no humans labelling them in any way (which is why it can use lots of publicly available data) with an automatic process to generate inputs and labels from those texts. More precisely, it was pretrained with two objectives: - Masked language modeling (MLM): taking a sentence, the model randomly masks 15% of the words in the input then run the entire masked sentence through the model and has to predict the masked words. This is different from traditional recurrent neural networks (RNNs) that usually see the words one after the other, or from autoregressive models like GPT which internally mask the future tokens. It allows the model to learn a bidirectional representation of the sentence. - Next sentence prediction (NSP): the models concatenates two masked sentences as inputs during pretraining. Sometimes they correspond to sentences that were next to each other in the original text, sometimes not. The model then has to predict if the two sentences were following each other or not. This way, the model learns an inner representation of the English language that can then be used to extract features useful for downstream tasks: if you have a dataset of labeled sentences for instance, you can train a standard classifier using the features produced by the BERT model as inputs. 
## Intended uses & limitations You can use the raw model for either masked language modeling or next sentence prediction, but it's mostly intended to be fine-tuned on a downstream task. See the [model hub](https://huggingface.co/models?filter=bert) to look for fine-tuned versions on a task that interests you. Note that this model is primarily aimed at being fine-tuned on tasks that use the whole sentence (potentially masked) to make decisions, such as sequence classification, token classification or question answering. For tasks such as text generation you should look at model like GPT2. ### How to use Here is how to use this model to get the features of a given text in PyTorch: ```python from transformers import BertTokenizer, BertModel tokenizer = BertTokenizer.from_pretrained('multiberts-seed-0') model = BertModel.from_pretrained("bert-base-uncased") text = "Replace me by any text you'd like." encoded_input = tokenizer(text, return_tensors='pt') output = model(**encoded_input) ``` ### Limitations and bias Even if the training data used for this model could be characterized as fairly neutral, this model can have biased predictions. This bias will also affect all fine-tuned versions of this model. For an understanding of bias of this particular checkpoint, please try out this checkpoint with the snippet present in the [Limitation and bias section](https://huggingface.co/bert-base-uncased#limitations-and-bias) of the [bert-base-uncased](https://huggingface.co/bert-base-uncased) checkpoint. ## Training data The MultiBERTs models were pretrained on [BookCorpus](https://yknzhu.wixsite.com/mbweb), a dataset consisting of 11,038 unpublished books and [English Wikipedia](https://en.wikipedia.org/wiki/English_Wikipedia) (excluding lists, tables and headers). ## Training procedure ### Preprocessing The texts are lowercased and tokenized using WordPiece and a vocabulary size of 30,000. The inputs of the model are then of the form: ``` [CLS] Sentence A [SEP] Sentence B [SEP] ``` With probability 0.5, sentence A and sentence B correspond to two consecutive sentences in the original corpus and in the other cases, it's another random sentence in the corpus. Note that what is considered a sentence here is a consecutive span of text usually longer than a single sentence. The only constrain is that the result with the two "sentences" has a combined length of less than 512 tokens. The details of the masking procedure for each sentence are the following: - 15% of the tokens are masked. - In 80% of the cases, the masked tokens are replaced by `[MASK]`. - In 10% of the cases, the masked tokens are replaced by a random token (different) from the one they replace. - In the 10% remaining cases, the masked tokens are left as is. ### Pretraining The model was trained on 16 Cloud TPU v2 chips for two million steps with a batch size of 256. The sequence length was set to 512 throughout. The optimizer used is Adam with a learning rate of 1e-4, \\(\beta_{1} = 0.9\\) and \\(\beta_{2} = 0.999\\), a weight decay of 0.01, learning rate warmup for 10,000 steps and linear decay of the learning rate after. 
### BibTeX entry and citation info ```bibtex @article{DBLP:journals/corr/abs-2106-16163, author = {Thibault Sellam and Steve Yadlowsky and Jason Wei and Naomi Saphra and Alexander D'Amour and Tal Linzen and Jasmijn Bastings and Iulia Turc and Jacob Eisenstein and Dipanjan Das and Ian Tenney and Ellie Pavlick}, title = {The MultiBERTs: {BERT} Reproductions for Robustness Analysis}, journal = {CoRR}, volume = {abs/2106.16163}, year = {2021}, url = {https://arxiv.org/abs/2106.16163}, eprinttype = {arXiv}, eprint = {2106.16163}, timestamp = {Mon, 05 Jul 2021 15:15:50 +0200}, biburl = {https://dblp.org/rec/journals/corr/abs-2106-16163.bib}, bibsource = {dblp computer science bibliography, https://dblp.org} } ``` <a href="https://huggingface.co/exbert/?model=multiberts"> <img width="300px" src="https://cdn-media.huggingface.co/exbert/button.png"> </a> Adding this README to all, will just replace `seed-0` with `seed-x` and `Seed 0` with `Seed X`.<|||||>That looks great @gchhablani !<|||||>@NielsRogge @patrickvonplaten I have added all the MultiBERTs checkpoints to `google` organization on the hub. @yanaiela You can access them like: ```python from transformers import BertModel model = BertModel.from_pretrained('google/multiberts-seed-0') intermediate_checkpoint = BertModel.from_pretrained('google/multiberts-seed-0-20k') ``` Please let me know in case of any issues!<|||||>Awesome, thanks! btw, based on the documentation, it seems like the tokenizer uses the `bert-base-uncased` tokenizer. Is there a reason not to allow the same name with the tokenization?<|||||>Looks good to me!<|||||>@gchhablani - just to follow-up here...did you manage to correctly convert the mulit-bert checkpoints with an existing conversion script (that also took into account MLM and NSP)? In this case, I might have merged the PR too quickly :sweat_smile: <|||||>Hi @patrickvonplaten Sorry, I didn't check this PR was merged. Yes, I had updated the script. However, we don't need this script as the `load_tf_checkpoint` method works fine for the conversion. Should I create another PR to remove the script? Or should I revert the merge? I used the following script for pushing intermediate checkpoints: [MultiBERTs Pushing Script](https://gist.github.com/gchhablani/070d41ec7b02a0b3b0429d04cadee557) and a similar one for the final checkpoints. We don't need a new conversion script. <|||||>@yanaiela No, there's no reason why we cannot add the tokenizer files as well to the checkpoints. I can do that if needed. Wdyt @patrickvonplaten @NielsRogge?<|||||>Sure it would be nice to add the tokenizer files as well!<|||||>> Hi @patrickvonplaten Sorry, I didn't check this PR was merged. Yes, I had updated the script. > > However, we don't need this script as the `load_tf_checkpoint` method works fine for the conversion. Should I create another PR to remove the script? Or should I revert the merge? > > I used the following script for pushing intermediate checkpoints: [MultiBERTs Pushing Script](https://gist.github.com/gchhablani/070d41ec7b02a0b3b0429d04cadee557) and a similar one for the final checkpoints. We don't need a new conversion script. Yeah it would be great if you could open a new PR to delete the conversion file in this case then :-) Thanks a lot!<|||||>@patrickvonplaten @yanaiela I have added the tokenizer files to the checkpoints and updated the model card accordingly. Please let me know if you find any issues.<|||||>It works great btw. Well done, and thanks!
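To illustrate the weight-tying point raised in this thread (why `cls.predictions.decoder.weight` has no separate entry in the TF checkpoint), here is a small sketch using the checkpoint name quoted above:

```python
# The MLM decoder shares its weight matrix with the input word embeddings, so TF
# checkpoints only need to store the embedding table plus the output bias.
from transformers import BertForPreTraining

model = BertForPreTraining.from_pretrained("google/multiberts-seed-0")
tied = model.get_input_embeddings().weight is model.get_output_embeddings().weight
print(tied)  # True
```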
transformers
13,076
closed
respect dtype of the model when instantiating is not working
## Environment info <!-- You can run the command `transformers-cli env` and copy-and-paste its output below. Don't forget to fill out the missing fields in that output! --> - `transformers` version: 4.9.2 - Platform: Linux-4.18.0-25-generic-x86_64-with-glibc2.10 - Python version: 3.8.5 - PyTorch version (GPU?): 1.8.0a0+52ea372 (True) - Tensorflow version (GPU?): not installed (NA) - Flax version (CPU?/GPU?/TPU?): not installed (NA) - Jax version: not installed - JaxLib version: not installed - Using GPU in script?: <fill in> - Using distributed or parallel set-up in script?: No ### Who can help @stas00 as he is the writer of the [#12316](https://github.com/huggingface/transformers/pull/12316) <!-- Your issue will be replied to more quickly if you can figure out the right person to tag with @ If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**. Please tag fewer than 3 people. Models: - albert, bert, xlm: @LysandreJik - blenderbot, bart, marian, pegasus, encoderdecoder, t5: @patrickvonplaten, @patil-suraj - longformer, reformer, transfoxl, xlnet: @patrickvonplaten - fsmt: @stas00 - funnel: @sgugger - gpt2: @patrickvonplaten, @LysandreJik - rag: @patrickvonplaten, @lhoestq - tensorflow: @Rocketknight1 Library: - benchmarks: @patrickvonplaten - deepspeed: @stas00 - ray/raytune: @richardliaw, @amogkam - text generation: @patrickvonplaten - tokenizers: @LysandreJik - trainer: @sgugger - pipelines: @LysandreJik Documentation: @sgugger Model hub: - for issues with a model report at https://discuss.huggingface.co/ and tag the model's creator. HF projects: - datasets: [different repo](https://github.com/huggingface/datasets) - rust tokenizers: [different repo](https://github.com/huggingface/tokenizers) Examples: - maintained examples (not research project or legacy): @sgugger, @patil-suraj - research_projects/bert-loses-patience: @JetRunner - research_projects/distillation: @VictorSanh --> ## Information Model I am using (Bert, XLNet ...): The problem arises when using: * [ ] the official example scripts: (give details below) * [ ] my own modified scripts: (give details below) The tasks I am working on is: * [ ] an official GLUE/SQUaD task: (give the name) * [ ] my own task or dataset: (give details below) ## To reproduce First case: ```python from transformers import AutoModel AutoModel.from_pretrained("my_path", torch_dtype=torch.float16) ``` The above code results in ```python /opt/conda/envs/ml/lib/python3.7/site-packages/transformers/models/auto/auto_factory.py in from_pretrained(cls, pretrained_model_name_or_path, *model_args, **kwargs) [40/1573] 377 if not isinstance(config, PretrainedConfig): 378 config, kwargs = AutoConfig.from_pretrained( --> 379 pretrained_model_name_or_path, return_unused_kwargs=True, **kwargs 380 ) 381 /opt/conda/envs/ml/lib/python3.7/site-packages/transformers/models/auto/configuration_auto.py in from_pretrained(cls, pretrained_model_name_or_path, **kwargs) 451 if "model_type" in config_dict: 452 config_class = CONFIG_MAPPING[config_dict["model_type"]] --> 453 return config_class.from_dict(config_dict, **kwargs) 454 else: 455 # Fallback: use pattern matching on the string. 
/opt/conda/envs/ml/lib/python3.7/site-packages/transformers/configuration_utils.py in from_dict(cls, config_dict, **kwargs) 579 kwargs.pop(key, None) 580 --> 581 logger.info(f"Model config {config}") 582 if return_unused_kwargs: 583 return config, kwargs /opt/conda/envs/ml/lib/python3.7/site-packages/transformers/configuration_utils.py in __repr__(self) 611 612 def __repr__(self): --> 613 return f"{self.__class__.__name__} {self.to_json_string()}" 614 615 def to_diff_dict(self) -> Dict[str, Any]: /opt/conda/envs/ml/lib/python3.7/site-packages/transformers/configuration_utils.py in to_json_string(self, use_diff) 675 else: 676 config_dict = self.to_dict() --> 677 return json.dumps(config_dict, indent=2, sort_keys=True) + "\n" 678 679 def to_json_file(self, json_file_path: Union[str, os.PathLike], use_diff: bool = True): /opt/conda/envs/ml/lib/python3.7/json/__init__.py in dumps(obj, skipkeys, ensure_ascii, check_circular, allow_nan, cls, indent, separators, default, sort_keys, **kw) 236 check_circular=check_circular, allow_nan=allow_nan, indent=indent, 237 separators=separators, default=default, sort_keys=sort_keys, --> 238 **kw).encode(obj) 239 240 /opt/conda/envs/ml/lib/python3.7/json/encoder.py in encode(self, o) 199 chunks = self.iterencode(o, _one_shot=True) 200 if not isinstance(chunks, (list, tuple)): --> 201 chunks = list(chunks) 202 return ''.join(chunks) 203 /opt/conda/envs/ml/lib/python3.7/json/encoder.py in _iterencode(o, _current_indent_level) 429 yield from _iterencode_list(o, _current_indent_level) 430 elif isinstance(o, dict): --> 431 yield from _iterencode_dict(o, _current_indent_level) 432 else: 433 if markers is not None: /opt/conda/envs/ml/lib/python3.7/json/encoder.py in _iterencode_dict(dct, _current_indent_level) 403 else: 404 chunks = _iterencode(value, _current_indent_level) --> 405 yield from chunks 406 if newline_indent is not None: 407 _current_indent_level -= 1 /opt/conda/envs/ml/lib/python3.7/json/encoder.py in _iterencode(o, _current_indent_level) 436 raise ValueError("Circular reference detected") 437 markers[markerid] = o --> 438 o = _default(o) 439 yield from _iterencode(o, _current_indent_level) 440 if markers is not None: /opt/conda/envs/ml/lib/python3.7/json/encoder.py in default(self, o) 177 178 """ --> 179 raise TypeError(f'Object of type {o.__class__.__name__} ' 180 f'is not JSON serializable') 181 TypeError: Object of type dtype is not JSON serializable ``` Second case: ```python m = GPT2LMHeadModel.from_pretrained(model_path, torch_dtype_auto_detect=True) ``` yields the following error. ```python /opt/conda/envs/ml/lib/python3.7/site-packages/transformers/modeling_utils.py in from_pretrained(cls, pretrained_model_name_or_path, *model_args, **kwargs) 1319 else: 1320 with no_init_weights(_enable=_fast_init): -> 1321 model = cls(config, *model_args, **model_kwargs) 1322 1323 if from_pt: TypeError: __init__() got an unexpected keyword argument 'torch_dtype_auto_detect' ``` <!-- If you have code snippets, error messages, stack traces please provide them here as well. Important! Use code tags to correctly format your code. See https://help.github.com/en/github/writing-on-github/creating-and-highlighting-code-blocks#syntax-highlighting Do not use screenshots, as they are hard to read and (more importantly) don't allow others to copy-and-paste your code.--> ## Expected behavior First case Regarding the first case, setting torch_dtype works with AutoModel as well as specific model classes. Can this be fixed? 
It would be convenient for me if we could use a "torch_dtype" key-value pair in config.json, which [is not supported in the current version](https://github.com/huggingface/transformers/pull/12316/commits/368c71c0978e0d2f731cec72daea2a5a687e7b97). Second case: Shouldn't the second case run without any errors? <!-- A clear and concise description of what you would expect to happen. -->
08-11-2021 05:45:41
08-11-2021 05:45:41
Thank you for the great report, @hwijeen I'm able to reproduce both problems: ``` python -c "from transformers import GPT2LMHeadModel; GPT2LMHeadModel.from_pretrained('sshleifer/tiny-gpt2', torch_dtype_auto_detect=True)" python -c "import torch; from transformers import AutoModel; AutoModel.from_pretrained('sshleifer/tiny-gpt2', torch_dtype=torch.float16)" ``` Once I get a chance I will work on it and we will sort it out.<|||||>ok, where did you find `torch_dtype_auto_detect`? The documented syntax is: `torch_dtype='auto'` for auto detection. Perhaps you were looking at the original proposal discussion before the API was selected? This works just fine: ``` python -c "from transformers import AutoModel; AutoModel.from_pretrained('sshleifer/tiny-gpt2', torch_dtype='auto')" ```<|||||>Oh, I see. `torch_dtype` is the right keyword. But setting it "auto" does not seem to work: `python -c "from transformers import AutoModel; m=AutoModel.from_pretrained('sshleifer/tiny-gpt2', torch_dtype='auto');print(m.dtype)"` # This gives torch.float32. Just for a sanity check, I tried loading my own model whose weight is float16 and the result was the same. `python -c "from transformers import AutoModel; m=AutoModel.from_pretrained(my_path, torch_dtype='auto');print(m.dtype)"` # This gives torch.float32! It seems that `torch_dtype='auto'` is not working as expected? <|||||>why do you think it's float16? the auto-detector checks the first entry: ``` $ wget https://huggingface.co/sshleifer/tiny-gpt2/resolve/main/pytorch_model.bin $ python -c "import torch; sd=torch.load('pytorch_model.bin'); print(next(iter(sd.values())).dtype)" torch.float32 ``` but we can look at all of them: ``` python -c "import torch; sd=torch.load('pytorch_model.bin'); print([v.dtype for v in sd.values()])" [torch.float32, torch.float32, torch.float32, torch.float32, torch.uint8, torch.float32, torch.float32, torch.float32, torch.float32, torch.float32, torch.float32, torch.float32, torch.float32, torch.float32, torch.float32, torch.float32, torch.float32, torch.float32, torch.uint8, torch.float32, torch.float32, torch.float32, torch.float32, torch.float32, torch.float32, torch.float32, torch.float32, torch.float32, torch.float32, torch.float32, torch.float32, torch.float32, torch.float32] ``` Also I think Sam was making many test models `half()`, perhaps just not this one? Try it on other of his tiny test models? You can see the test that saves as fp16 and then auto-detects it to be fp16: https://github.com/huggingface/transformers/blob/c89180a9de1fc2e98654812fd1c233c3bc6a8d43/tests/test_modeling_common.py#L1687-L1692<|||||>I was not sure whether `sshleifer/tiny-gpt2` uses float16 or not, and that's why I tried with my own model (megatronLM) which (mostly) has float16. 
``` python -c "import torch; sd=torch.load('pytorch_model.bin'); print([v.dtype for v in sd.values()])" [torch.float16, torch.float16, torch.float16, torch.float16, torch.uint8, torch.float32, torch.float16, torch.float16, torch.float16, torch.float16, torch.float16, torch.float16, torch.float16, torch.float16, torch.float16, torch.float16, torch.float16, torch.float16, torch.uint8, torch.float32, torch.float16, torch.float16, torch.float16, torch.float16, torch.float16, torch.float16, torch.float16, torch.float16, torch.float16, torch.float16, torch.float16, torch.float16, torch.uint8, torch.float32, torch.float16, torch.float16, torch.float16, torch.float16, torch.float16, torch.float16, torch.float16, torch.float16, torch.float16, torch.float16, torch.float16, torch.float16, torch.uint8, torch.float32, torch.float16, torch.float16, torch.float16, torch.float16, torch.float16, torch.float16, torch.float16, torch.float16, torch.float16, torch.float16, torch.float16, torch.float16, torch.uint8, torch.float32, torch.float16, torch.float16, torch.float16, torch.float16, torch.float16, torch.float16, torch.float16, torch.float16, torch.float16, torch.float16, torch.float16, torch.float16, torch.uint8, torch.float32, torch.float16, torch.float16, torch.float16, torch.float16, torch.float16, torch.float16, torch.float16, torch.float16, torch.float16, torch.float16, torch.float16, torch.float16, torch.uint8, torch.float32, torch.float16, torch.float16, torch.float16, torch.float16, torch.float16, torch.float16, torch.float16, torch.float16, torch.float16, torch.float16, torch.float16, torch.float16, torch.uint8, torch.float32, torch.float16, torch.float16, torch.float16, torch.float16, torch.float16, torch.float16, torch.float16, torch.float16, torch.float16, torch.float16, torch.float16, torch.float16, torch.uint8, torch.float32, torch.float16, torch.float16, torch.float16, torch.float16, torch.float16, torch.float16, torch.float16, torch.float16, torch.float16, torch.float16, torch.float16, torch.float16, torch.uint8, torch.float32, torch.float16, torch.float16, torch.float16, torch.float16, torch.float16, torch.float16, torch.float16, torch.float16, torch.float16, torch.float16, torch.float16, torch.float16, torch.uint8, torch.float32, torch.float16, torch.float16, torch.float16, torch.float16, torch.float16, torch.float16, torch.float16, torch.float16, torch.float16, torch.float16, torch.float16, torch.float16, torch.uint8, torch.float32, torch.float16, torch.float16, torch.float16, torch.float16, torch.float16, torch.float16, torch.float16, torch.float16, torch.float16, torch.float16, torch.float16, torch.float16, torch.float16] ``` I tried to load this model with two ways, and only one yields the correct result: ```python # load correctly with specific model class GPT2LMHeadModel.from_pretrained(".", torch_dtype="auto").dtype torch.float16 # but fails with AutoModelForCausalLM AutoModelForCausalLM.from_pretrained(".", torch_dtype="auto").dtype torch.float32 ``` The test cases you linked seem to be using specific model classes, so perhaps this is AutoModel's fault?<|||||>Yes, clearly `AutoModel` goes through a different path and needs to be better tested and fixed. > I tried with my own model (megatronLM) which (mostly) has float16. The question is what to do with models that have mixed dtypes - typically a model is either fp16 or fp32. I can see how a custom buffer may be of fp32 while the params are in fp16. Could you explain your situation and how mixed is your model? 
<|||||>I am using [Megatron-LM](https://github.com/NVIDIA/Megatron-LM) by Nvidia. As you may know, this code trains a billion scale language model using various parallelism techniques. One thing to note is that this library does not rely on apex amp to achieve mixed precision training, and it has a complicated and self-contained code to deal with fp16 -- so I would say that models with various data types are not a usual case and is not a higher priority. But the `AutoModel` problem shown above looks like an urgent issue to me.. Are you planning to work on this in the near future? (I would also be happy to look into the problem if you could share some hints.)<|||||>> But the AutoModel problem shown above looks like an urgent issue to me Which of the AutoModel problems are you referring to? If it's the pickle issue, then one needs some kind of `to_json` workaround for the `torch.dtype` class. It should be easy to just comment out that code as well, if it gets in the way and it's urgent as you say. Until it's resolved. By all means if you can solve it, it'd be super helpful. If it's the auto-detection failing because it checks the first key entry, then before solving it, as suggested we need to discuss what to do if the model has mixed dtypes. I suppose with just fp16/fp32 it obviously should be auto=fp32, but now we are going to have other types like bf16, so hardcoding is going to be an issue. I'm going to be offline for the next 3 days and can follow up next on Friday.<|||||>> I am using Megatron-LM by Nvidia. As you may know, this code trains a billion scale language model using various parallelism techniques. One thing to note is that this library does not rely on apex amp to achieve mixed precision training, and it has a complicated and self-contained code to deal with fp16 -- so I would say that models with various data types are not a usual case and is not a higher priority. Running on the official checkpoint: ``` wget --content-disposition https://api.ngc.nvidia.com/v2/models/nvidia/megatron_bert_345m/versions/v0.1_uncased/zip -O checkpoint.zip python3 /hf/transformers-master/src/transformers/models/megatron_bert/convert_megatron_bert_checkpoint.py checkpoint.zip python -c "from transformers import MegatronBertForMaskedLM; m = MegatronBertForMaskedLM.from_pretrained('.'); d = {p.dtype():1 for p in m.parameters() }; print(d.keys())" ``` prints: `dict_keys([torch.float32])` so there are only fp32 keys in that official checkpoint. But that's just that checkpoint. Which keys do you get when you run the quick check from above (last line of code with `from_pretrained('.')` adjusted to point to your model. <|||||>Ah, of course, the above test is wrong, because it relies on transformers, which by default loads in fp32, need to recode to do it based on the checkpoint. here you go: ``` python -c "import torch; sd=torch.load('pytorch_model.bin'); d = {p.dtype:1 for p in sd.values() }; print(d.keys())" dict_keys([torch.float16]) ``` so it's all fp16. not mixed. but again, this is just this checkpoint.<|||||>> so there are only fp32 keys in that official checkpoint. But that's just that checkpoint. When I opened the official checkpoint with `torch.load`, it seems like it mostly has float16. 
```
wget --content-disposition https://api.ngc.nvidia.com/v2/models/nvidia/megatron_bert_345m/versions/v0.1_uncased/zip -O checkpoint.zip
unzip checkpoint.zip
python -c "import torch; sd = torch.load('model_optim_rng.pt', map_location='cpu'); print([v.dtype for v in sd['model']['language_model']['transformer'].values()])"
[torch.float16, torch.float16, torch.float16, ... (the same torch.float16 entry repeated for every transformer parameter) ..., torch.float16]
```
In my case, I get a mixture of `float32`, `float16`, `uint8`. Most of the params are `float16`, with masked_bias being `float32` and bias being `uint8`. I am not 100% sure, but I guess this has to do with a Megatron version issue.<|||||>As you pointed out, dealing with mixed data types is complicated and needs further discussion. On the other hand, I think `AutoModel`'s pickle issue is orthogonal to this, and I will look into it when I have time (perhaps this weekend) and get back to you if I find a solution :) > If it's the pickle issue, then one needs some kind of to_json workaround for the torch.dtype class. It should be easy to just comment out that code as well if it gets in the way and it's as urgent as you say, until it's resolved. Thanks for the quick workaround! <|||||>Right, so my 2nd attempt was potentially wrong too, since the original checkpoint went through a conversion and I guess it could have ignored the original dtypes and made everything fp16. However, doing it the right way by inspecting the original, and based on your code: ``` python -c "import torch; sd=torch.load('release/mp_rank_00/model_optim_rng.pt'); d = {p.dtype: 1 for p in sd['model']['language_model']['transformer'].values()}; print(d.keys())" dict_keys([torch.float16]) ``` it is still fp16 (for this checkpoint). Perhaps when the model is mixed, `from_pretrained()` should assert and tell the user to choose one? The problem is not `transformers` but torch, which loads the weights under a fixed dtype. Unless we change the dtype context for each key, perhaps? <|||||>> As you pointed out, dealing with mixed data types is complicated and needs further discussion. Perhaps let's open a new Issue that focuses just on this separate problem and please tag me, sgugger and LysandreJik on it. Thank you! You can use the above one-liner to show us the mixed keys your model contains, and then it'd be easier to understand what's going on. <|||||>> Right, so my 2nd attempt was potentially wrong too, since the original checkpoint went through a conversion and I guess it could have ignored the original dtypes and made everything fp16. Oh, I double-checked and confirmed that the weights in the Megatron-LM checkpoint are all in fp16. It was the [conversion script](https://github.com/huggingface/transformers/blob/master/src/transformers/models/megatron_gpt2/convert_megatron_gpt2_checkpoint.py) that made the checkpoint have mixed data types.
Specifically, [this line](https://github.com/huggingface/transformers/blob/91ff480e2693f36b11aaebc4e9cc79e4e3c049da/src/transformers/models/megatron_gpt2/convert_megatron_gpt2_checkpoint.py#L160) produces `uint8`, and another line produces `float32`. I'll open a new issue to address this. So at least in my case, my model does not have mixed data types -- are there any cases where data types are mixed? If not, I think a new issue is not necessary?<|||||>> So at least in my case, my model does not have mixed data types -- are there any cases where data types are mixed? If not, I think a new issue is not necessary? I asked the same question when working on the original feature, and those who followed up said they didn't think they saw such cases. I can only think of a registered buffer, which can be of whatever dtype and be different from the weights. That said, perhaps down the road we should check that indeed all the weights have the same dtype, so we don't accidentally set a dtype that is not like the rest. But let's worry about it if it becomes a problem.
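As a side note for anyone hitting the `torch.dtype`-in-config serialization problem discussed above, here is a minimal sketch of the kind of `to_json` workaround that was mentioned. The helper names are made up for illustration; this is not the library's actual implementation:

```python
import torch

# Hypothetical helpers: store a torch.dtype as a plain string so a config dict
# stays JSON/pickle friendly, and turn it back into a torch.dtype when loading.
def dtype_to_str(dtype: torch.dtype) -> str:
    return str(dtype).replace("torch.", "")  # torch.float16 -> "float16"

def str_to_dtype(name: str) -> torch.dtype:
    dtype = getattr(torch, name, None)
    if not isinstance(dtype, torch.dtype):
        raise ValueError(f"Unknown torch dtype string: {name}")
    return dtype

config_dict = {"torch_dtype": dtype_to_str(torch.float16)}  # serializable
assert str_to_dtype(config_dict["torch_dtype"]) is torch.float16
```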
transformers
13,075
closed
Custom Seq2Seq translation model training exits with error
## Environment info <!-- You can run the command `transformers-cli env` and copy-and-paste its output below. Don't forget to fill out the missing fields in that output! --> - `transformers` version: 4.9.0.dev0 - Platform: Linux-4.14.238-182.422.amzn2.x86_64-x86_64-with-Ubuntu-18.04-bionic - Python version: 3.6.9 - PyTorch version (GPU?): 1.9.0+cu102 (True) - Tensorflow version (GPU?): 2.5.0 (False) - Flax version (CPU?/GPU?/TPU?): not installed (NA) - Jax version: not installed - JaxLib version: not installed - Using GPU in script?: Yes - Using distributed or parallel set-up in script?: Yes, using deepspeed ### Who can help <!-- Your issue will be replied to more quickly if you can figure out the right person to tag with @ If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**. Please tag fewer than 3 people. Models: - albert, bert, xlm: @LysandreJik - blenderbot, bart, marian, pegasus, encoderdecoder, t5: @patrickvonplaten, @patil-suraj - longformer, reformer, transfoxl, xlnet: @patrickvonplaten - fsmt: @stas00 - funnel: @sgugger - gpt2: @patrickvonplaten, @LysandreJik - rag: @patrickvonplaten, @lhoestq - tensorflow: @Rocketknight1 Library: - benchmarks: @patrickvonplaten - deepspeed: @stas00 - ray/raytune: @richardliaw, @amogkam - text generation: @patrickvonplaten - tokenizers: @LysandreJik - trainer: @sgugger - pipelines: @LysandreJik Documentation: @sgugger Model hub: - for issues with a model report at https://discuss.huggingface.co/ and tag the model's creator. HF projects: - datasets: [different repo](https://github.com/huggingface/datasets) - rust tokenizers: [different repo](https://github.com/huggingface/tokenizers) Examples: - maintained examples (not research project or legacy): @sgugger, @patil-suraj - research_projects/bert-loses-patience: @JetRunner - research_projects/distillation: @VictorSanh --> @patrickvonplaten @sgugger ## Information Model I am using (Bert, XLNet ...): The problem arises when using: * [ ] the official example scripts: (give details below) * [ X] my own modified scripts: (give details below) The tasks I am working on is: * [ ] an official GLUE/SQUaD task: (give the name) * [ X] my own task or dataset: (give details below) I am training a custom language translation model with pre-trained Roberta as the source model and custom-trained GPT-Neo as the target model. The training process quickly exits with an error and the error stack trace is pasted below. The custom language translation model has been developed based on the HF example: https://huggingface.co/patrickvonplaten/roberta2roberta-cnn_dailymail-fp16 ## To reproduce Steps to reproduce the behavior: 1. Run an HF Trainer with deepspeed (see below deepspeed script) 2. The Trainer process exits with error (see below stack trace) 3. <!-- If you have code snippets, error messages, stack traces please provide them here as well. Important! Use code tags to correctly format your code. 
See https://help.github.com/en/github/writing-on-github/creating-and-highlighting-code-blocks#syntax-highlighting Do not use screenshots, as they are hard to read and (more importantly) don't allow others to copy-and-paste your code.--> ### Error Stack Trace The same traceback is raised by each of the four distributed processes:
```
Traceback (most recent call last):
  File "src/phase3/language_model/luna_llayla_translator_model_trainer.py", line 303, in <module>
    tr_model.train(model_dir=model_output_dir, epochs=epochs)
  File "src/phase3/language_model/luna_llayla_translator_model_trainer.py", line 194, in train
    train_results = trainer.train()
  File "/usr/local/lib/python3.6/dist-packages/transformers/trainer.py", line 1267, in train
    tr_loss += self.training_step(model, inputs)
  File "/usr/local/lib/python3.6/dist-packages/transformers/trainer.py", line 1760, in training_step
    loss = self.compute_loss(model, inputs)
  File "/usr/local/lib/python3.6/dist-packages/transformers/trainer.py", line 1794, in compute_loss
    outputs = model(**inputs)
  File "/usr/local/lib/python3.6/dist-packages/torch/nn/modules/module.py", line 1051, in _call_impl
    return forward_call(*input, **kwargs)
  File "/usr/local/lib/python3.6/dist-packages/torch/nn/parallel/distributed.py", line 799, in forward
    output = self.module(*inputs[0], **kwargs[0])
  File "/usr/local/lib/python3.6/dist-packages/torch/nn/modules/module.py", line 1051, in _call_impl
    return forward_call(*input, **kwargs)
  File "/usr/local/lib/python3.6/dist-packages/transformers/models/encoder_decoder/modeling_encoder_decoder.py", line 450, in forward
    **kwargs_decoder,
  File "/usr/local/lib/python3.6/dist-packages/torch/nn/modules/module.py", line 1051, in _call_impl
    return forward_call(*input, **kwargs)
TypeError: forward() got an unexpected keyword argument 'encoder_hidden_states'
```
### deepspeed script
```bash
export TOKENIZERS_PARALLELISM=false
export NCCL_IB_DISABLE=1
export NCCL_SOCKET_IFNAME=eth0
export NCCL_DEBUG=INFO
deepspeed --include localhost:0,1,2,3 src/phase3/language_model/luna_llayla_translator_model_trainer.py
```
### Trainer snippet
```python
def train(self, model_dir, epochs=10, learning_rate=5e-05, clear_cuda_cache=True, metric_name='rouge'):
    if clear_cuda_cache and torch.cuda.is_available():
        torch.cuda.empty_cache()
    print('Loading dataset ...')
    if not self.train_raw_dataset:
        self.load_raw_dataset(self.dataset_path)
    print('Pre-processing dataset ...')
    self.preprocess_data()
    print('Loading metric data ...')
    if not self.metric:
        self.load_metric(metric_name=metric_name)
    train_config = LunaLlaylaTranslatorTrainingConfig.config
    if epochs:
        train_config['num_train_epochs'] = epochs
    if learning_rate:
        train_config['learning_rate'] = learning_rate
    if model_dir:
        train_config['output_dir'] = model_dir
    if self.deepspeed_config:
        train_config['deepspeed'] = self.deepspeed_config
    train_arguments = Seq2SeqTrainingArguments(**train_config)
    print('Training ...')
    trainer = Seq2SeqTrainer(
        model=self.translator_model,
        args=train_arguments,
        data_collator=default_data_collator,
        train_dataset=self.train_processed_dataset,
        eval_dataset=self.validation_processed_dataset,
        compute_metrics=self.compute_metrics,
    )
    train_results = trainer.train()
    print('Training completed.')
    print('Evaluating model ...')
    train_metrics = train_results.metrics
    trainer.log_metrics("train", train_metrics)
    trainer.save_metrics("train", train_metrics)
    print('*** Train metrics ***')
    print(train_metrics)
    eval_metrics = trainer.evaluate()
    try:
        perplexity_score = math.exp(eval_metrics['eval_loss'])
    except OverflowError:
        perplexity_score = float('inf')
    eval_metrics['perplexity_score'] = perplexity_score
    trainer.log_metrics("eval", eval_metrics)
    trainer.save_metrics("eval", eval_metrics)
    print('*** Eval metrics ***')
    print(eval_metrics)
    print('Saving trained model ...')
    trainer.save_state()
    trainer.save_model(output_dir=model_dir)
    print('Evaluating model ...')
    self.evaluate()
```
## Expected behavior <!-- A clear and concise description of what you would expect to happen. --> I expect the training process to be complete without errors.
08-10-2021 21:19:21
08-10-2021 21:19:21
Please use the [forums](https://discuss.huggingface.co/) for help to debug your scripts, and provide all relevant code. The error indicates you are attempting to pass `encoder_hidden_states` to a model that doesn't accept it, but we don't see how your dataset is built or how your model is defined, so no one can help you understand why.<|||||>I am not explicitly passing any state to the model at all. I can't share the dataset as it could be proprietary to our organization, but can I at least know if Roberta to GPT-Neo language translation is possible? I glanced at the code base https://github.com/huggingface/transformers/blob/master/src/transformers/models/gpt_neo/modeling_gpt_neo.py and it looks like the forward() method does not accept encoder_hidden_states as a parameter?<|||||>The trainer script works fine when a pre-trained Roberta model is used as the source and a custom-trained Roberta model is used as the target model. It only fails when a pre-trained Roberta model is the source and a custom-trained GPT-Neo model is the target.<|||||>Here is the list of combinations tried out for seq2seq translation: 1) Roberta to Roberta: No issues faced with the seq2seq model training 2) Roberta to GPT-2: No issues faced with the seq2seq model training 3) Roberta to GPT-Neo: Non-recoverable errors during the seq2seq model training<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread. Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored.
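For reference, a quick way to check up front whether a candidate decoder accepts the cross-attention inputs that `EncoderDecoderModel` passes to it is sketched below (an illustrative check, not an official API; the helper name is made up):

```python
import inspect
from transformers import GPT2LMHeadModel, GPTNeoForCausalLM

def supports_cross_attention(model_cls) -> bool:
    # EncoderDecoderModel forwards `encoder_hidden_states` to the decoder,
    # so the decoder's forward() must accept that keyword argument.
    return "encoder_hidden_states" in inspect.signature(model_cls.forward).parameters

print(supports_cross_attention(GPT2LMHeadModel))    # True  -> works as a decoder
print(supports_cross_attention(GPTNeoForCausalLM))  # False in the version used in this issue
```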
transformers
13,074
closed
Change a parameter name in FlaxBartForConditionalGeneration.decode()
# What does this PR do? In short: it changes a parameter name in `FlaxBartForConditionalGeneration.decode()`: `deterministic` -> `train`. In the current version, the `FlaxBartForConditionalGeneration.decode()` method takes an argument `deterministic`, while `FlaxBartPreTrainedModel.decode()`, `FlaxT5PreTrainedModel.decode()`, `FlaxT5ForConditionalGeneration.decode()`, and similar places in `FlaxGPT2` all use `train` as the argument. It seems to me that there is an (implicit?) convention that, in Flax models, the `deterministic` parameter is used for `nn.Module` code and the `train` parameter for models inheriting from `FlaxPreTrainedModel`. This PR fixes this small inconsistency in `FlaxBartForConditionalGeneration.decode()`. I hope this PR makes sense, even though the change is really small. ## Before submitting - [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md#start-contributing-pull-requests) ## Who can review? @patrickvonplaten @patil-suraj
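To make the convention concrete, a small illustrative sketch with toy classes (not the actual Flax BART code):

```python
import jax
import jax.numpy as jnp
import flax.linen as nn

class ToyDecoderModule(nn.Module):
    # Inner nn.Module code follows the `deterministic` convention.
    @nn.compact
    def __call__(self, hidden_states, deterministic: bool = True):
        return nn.Dropout(rate=0.1)(hidden_states, deterministic=deterministic)

class ToyPreTrainedWrapper:
    # The user-facing wrapper follows the `train` convention and flips the flag.
    def __init__(self):
        self.module = ToyDecoderModule()

    def decode(self, params, hidden_states, train: bool = False, dropout_rng=None):
        rngs = {"dropout": dropout_rng} if dropout_rng is not None else {}
        return self.module.apply(params, hidden_states, deterministic=not train, rngs=rngs)

wrapper = ToyPreTrainedWrapper()
x = jnp.ones((1, 4, 8))
params = wrapper.module.init(jax.random.PRNGKey(0), x)
out = wrapper.decode(params, x, train=False)  # eval mode: dropout disabled
```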
08-10-2021 19:56:24
08-10-2021 19:56:24
I made the same change to Flax Marian and mBART as suggested.<|||||>Thanks a lot for fixing this! Merging; the failing test is unrelated.
transformers
13,073
closed
t5 base not found.
- 't5-base' is a correct model identifier listed on 'https://huggingface.co/models' - or 't5-base' is the correct path to a directory containing a config.json file
08-10-2021 17:45:26
08-10-2021 17:45:26
Hey @s4sarath, https://huggingface.co/t5-base looks normal to me - what exactly is your error?<|||||>Hi @patrickvonplaten , t5-small is working fine. Transformers version is **4.9.0.dev0** ``` from transformers import TFT5Model model = TFT5Model.from_pretrained("t5-base") ``` OSError: file t5-base/config.json not found OSError: Can't load config for 't5-base'. Make sure that: - 't5-base' is a correct model identifier listed on 'https://huggingface.co/models' - or 't5-base' is the correct path to a directory containing a config.json file<|||||>Do you have a folder called `t5-base` in your working directory? The `TFT5Model` may be trying to load from that directory rather than from the hub.<|||||>Hi Lysandre. You were right. My bad. Closing this ticket. Thanks for your help.<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread. Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored.
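For anyone who lands on the same error, a small sketch of the check that resolves it (the directory name is simply the model ID being loaded):

```python
import os
from transformers import TFT5Model

model_id = "t5-base"
if os.path.isdir(model_id):
    # A local folder with the same name takes precedence over the hub ID, so an
    # empty or partial folder produces "file t5-base/config.json not found".
    print(f"A local directory named '{model_id}' exists and will shadow the hub checkpoint.")

model = TFT5Model.from_pretrained(model_id)
```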
transformers
13,072
closed
Revert to all tests while we debug what's wrong
# What does this PR do? Remove the use of test_fetcher for GPU and multiGPU tests for now.
08-10-2021 16:35:42
08-10-2021 16:35:42
transformers
13,071
closed
Fix fallback of test_fetcher
# What does this PR do? When the `test_fetcher` util fails, it falls back to all tests, which is fine when we have not set any filters, but not fine if we did.
08-10-2021 14:13:50
08-10-2021 14:13:50
transformers
13,070
closed
top-k sampling for Flax models
# 🚀 Feature request We're using a custom (`Flax`) Seq2Seq model (`Bart`) in our `DALL·E mini` project to generate the image tokens. Currently, we're doing the following to generate samples (`encoded image`) given a tokenized prompt to the Seq2Seq model: ``` model.generate( **tokenized_prompt, do_sample=True, num_beams=1, prng_key=subkey ) ``` But now we're trying to experiment with the generation method (for example `top_k` sampling) to see if it improves our generated samples. We've tried doing the following: ``` model.generate( **tokenized_prompt, do_sample=True, top_k=50, prng_key=key, params=params ) ``` which throws `NotImplementedError: Beam sampling is currently not implemented.` After looking at the source code for `generation_flax_utils`, it's indeed the case that it hasn't been implemented yet ([here](https://huggingface.co/transformers/_modules/transformers/generation_flax_utils.html)). I hope the sampling feature will be integrated into the Flax models; our project and other Flax projects will benefit greatly from the feature (and I'm sure it's already on the to-do list of the amazing HF dev team). [_project repo_](https://github.com/borisdayma/dalle-mini) <!-- A clear and concise description of the feature proposal. Please provide a link to the paper and code in case they exist. --> ## Motivation <!-- Please outline the motivation for the proposal. Is your feature request related to a problem? e.g., I'm always frustrated when [...]. If this is related to another GitHub issue, please link here too. --> We believe that `top_k` sampling will improve our generations (quite a bit if not a lot). Even in the Taming Transformers paper, they used `top_k` sampling for generating tokens with their autoregressive models and showed that a higher `top_k` (depending on the dataset) improves image generations by increasing variance (in a sense). So we just want to experiment with this sampling method and see if we can find something similar for our model.
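For what it's worth, the core of top-k sampling is just a logits filter applied before the categorical draw at each decoding step. A minimal JAX sketch of that idea (illustrative only; the function name is made up and this is not the `generate()` implementation):

```python
import jax
import jax.numpy as jnp

def top_k_filter(logits, k=50, filter_value=-1e10):
    # Keep only the k largest logits per row; everything else is masked out.
    topk_vals, _ = jax.lax.top_k(logits, k)   # (batch, k), sorted descending
    threshold = topk_vals[:, -1][:, None]     # smallest logit that survives
    return jnp.where(logits < threshold, filter_value, logits)

key = jax.random.PRNGKey(0)
fake_logits = jax.random.normal(key, (2, 32))  # stand-in for next-token logits
next_tokens = jax.random.categorical(key, top_k_filter(fake_logits, k=5), axis=-1)
```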
08-10-2021 14:10:49
08-10-2021 14:10:49
This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread. Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored.
transformers
13,069
closed
MultiBerts in Huggingface
# 🚀 Feature request It would be nice to have a script/converter for all the [MultiBERTs](https://arxiv.org/pdf/2106.16163.pdf) checkpoints that were released about a month ago. They were [released](https://github.com/google-research/language/tree/master/language/multiberts) by Google in TensorFlow format, and it would be nice to have them available under the Hugging Face library/hub. <!-- A clear and concise description of the feature proposal. Please provide a link to the paper and code in case they exist. --> ## Motivation <!-- Please outline the motivation for the proposal. Is your feature request related to a problem? e.g., I'm always frustrated when [...]. If this is related to another GitHub issue, please link here too. --> These checkpoints may be useful for studying training dynamics and for testing hypotheses across the multiple seeds that were used to train the models.
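Until an official converter or hub collection exists, a rough sketch of how one of the released TF checkpoints could be converted through the existing BERT TF-checkpoint loading path (the file names below are placeholders, and the exact layout of the MultiBERTs release should be double-checked):

```python
from transformers import BertConfig, BertForPreTraining

# Placeholder paths: point these at one of the downloaded MultiBERTs seed checkpoints.
config = BertConfig.from_json_file("multiberts_seed_0/bert_config.json")
model = BertForPreTraining.from_pretrained(
    "multiberts_seed_0/bert.ckpt.index",  # original TF checkpoint index file
    from_tf=True,
    config=config,
)
model.save_pretrained("multiberts-seed-0-pytorch")  # re-save in transformers format
```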
08-10-2021 13:14:59
08-10-2021 13:14:59
transformers
13,068
closed
Can not instantiate `PreTrainedTokenizerFast` from instantiated tokenizer object
## Environment info <!-- You can run the command `transformers-cli env` and copy-and-paste its output below. Don't forget to fill out the missing fields in that output! --> - `transformers` version: 4.3.2 - Platform: Windows - Python version: 3.6.6 - PyTorch version (GPU?): N/A - Tensorflow version (GPU?): 2.3.0 - Using GPU in script?: No - Using distributed or parallel set-up in script?: No ### Who can help @LysandreJik ## Information Model I am using (Bert, XLNet ...): The problem arises when using: * [x] the official example scripts: (give details below) * [ ] my own modified scripts: (give details below) The tasks I am working on is: * [ ] an official GLUE/SQUaD task: (give the name) * [ ] my own task or dataset: (give details below) ## To reproduce Steps to reproduce the behavior: 1. Run the following script:
```python
from tokenizers import Tokenizer
from tokenizers.models import BPE
from tokenizers.trainers import BpeTrainer
from tokenizers.pre_tokenizers import Whitespace

tok = Tokenizer(BPE(unk_token="[UNK]"))
trainer = BpeTrainer(special_tokens=["[UNK]", "[CLS]", "[SEP]", "[PAD]", "[MASK]"])
tok.pre_tokenizer = Whitespace()

def it():
    for t in ['one', 'two', 'three']:
        yield t

tok.train_from_iterator(it(), trainer)

from transformers import PreTrainedTokenizerFast
tok = PreTrainedTokenizerFast(tokenizer_object=tok)
```
I am getting the error: > Exception has occurred: ValueError (note: full exception trace is shown but execution is paused at: _run_module_as_main) > Couldn't instantiate the backend tokenizer from one of: (1) a `tokenizers` library serialization file, (2) a slow tokenizer instance to convert or (3) an equivalent slow tokenizer class to instantiate and convert. You need to have sentencepiece installed to convert a slow tokenizer to a fast one. > I have sentencepiece installed (0.1.91). ## Expected behavior No error
08-10-2021 13:02:12
08-10-2021 13:02:12
The ability to load a tokenizer object directly was introduced in version v4.5.x! Could you upgrade your `transformers` version to a more recent one?
transformers
13,067
closed
Fix ModelOutput instantiation from dictionaries
# What does this PR do? Currently, instantiating a `ModelOutput` from a dictionary does not yield proper results. It nests the dictionary in the first field instead of populating the fields with the content of the dictionary. This PR fixes that and adds a regression test to make sure this behavior is not accidentally removed.
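To make the intended behavior concrete, a small sketch (the specific output class here is just a convenient example):

```python
import torch
from transformers.modeling_outputs import BaseModelOutput

hidden = torch.zeros(1, 4, 8)
out = BaseModelOutput({"last_hidden_state": hidden})

# With the fix, the dictionary's entries populate the corresponding fields
# instead of the whole dict being nested inside the first field.
assert torch.equal(out.last_hidden_state, hidden)
assert torch.equal(out["last_hidden_state"], hidden)
```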
08-10-2021 09:44:58
08-10-2021 09:44:58
transformers
13,066
closed
Model output dict
# What does this PR do? Currently, instantiating a `ModelOutput` from a dictionary does not yield proper results. It nests the dictionary in the first field instead of populating the fields with the content of the dictionary. This PR fixes that and adds a regression test to make sure this behavior is not accidentally removed.
08-10-2021 09:40:01
08-10-2021 09:40:01
Arg, branched from the wrong point. Closing and reopening.
transformers
13,065
closed
[WIP] Add Japanese RoBERTa Model
# Add Japanese RoBERTa Model <!-- Congratulations! You've made it this far! You're not quite done yet though. Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution. Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change. Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost. --> <!-- Remove if not applicable --> I have recently trained a Japanese version of RoBERTa-base, and would like to make our model publicly available via this wonderful library. The files are available [here](https://www.dropbox.com/sh/tu5v06ge4hgo2c4/AADdtnrBxh73076onmpt9gUva?dl=0) I made two major changes: 1. Added a new tokenizer file `tokenization_roberta_japanese.py` - If one can nicely merge this tokenizer to `tokenization_bert_japanese.py`, that would be the best. (as adding a new file is not ideal) 2. Added `do_zenkaku` option to `tokenization_bert_japanese.py` - This is because I normalized every hankaku character to zenkaku character in preprocessing ## Background I have recently trained a Japanese version of RoBERTa-base using [fairseq codebase](https://github.com/pytorch/fairseq/blob/master/examples/roberta/README.pretraining.md). I believe that our Japanese RoBERTa is better than the existing Japanese pre-trained language models (i.e., BERTs) that are publicly available for the following reasons: - **More data**: Existing BERT models (the one from [NICT](https://alaginrc.nict.go.jp/nict-bert/index.html) and [cl-tohoku](https://huggingface.co/cl-tohoku/bert-base-japanese)) use Wikipedia dump for training. We also use CC-100 corpus, which is about 17.5 times larger than Wikipedia. - **Better model**: We trained RoBERTa that is empirically better than vanilla BERT. - **More compute**: In order to take advantage of large training data, we trained our RoBERTa longer than vanilla BERT. The training took 1 month on DGX-2 (V100 32GB x 16). 
In fact, my colleagues have conducted experiments on multiple Japanese benchmark datasets, and confirmed that our RoBERTa is indeed superior:

### [amazon_reviews dataset](https://huggingface.co/datasets/amazon_reviews_multi)

| model | accuracy |
| ---- | ---- |
| [NICT_BERT-base](https://alaginrc.nict.go.jp/nict-bert/index.html) | 0.6014 |
| [bert_bert-base-japanese-v2](https://huggingface.co/cl-tohoku/bert-base-japanese) | 0.5980 |
| Our RoBERTa | **0.6198** |

### [paws-x dataset](https://github.com/google-research-datasets/paws/tree/master/pawsx)

| model | accuracy |
| ---- | ---- |
| [NICT_BERT-base](https://alaginrc.nict.go.jp/nict-bert/index.html) | 0.8285 |
| [bert_bert-base-japanese-v2](https://huggingface.co/cl-tohoku/bert-base-japanese) | 0.8305 |
| Our RoBERTa | **0.8440** |

### [JRTE dataset (Macro F1)](https://github.com/megagonlabs/jrte-corpus)

| Model | BASE | ME | MLM |
| ---- | ---- | ---- | ---- |
| [NICT_BERT-base](https://alaginrc.nict.go.jp/nict-bert/index.html) | 90.3 | **80.0** | 55.5 |
| [bert_bert-base-japanese-v2](https://huggingface.co/cl-tohoku/bert-base-japanese) | 86.1 | 75.2 | 53.8 |
| Our RoBERTa | **92.3** | 77.8 | **58.0** |

## Before submitting - [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section? - I read the guideline and ran the `make test` command. However, I am struggling with so many FAILED tests. - [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link to it if that's the case. - [ ] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/master/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/master/docs#writing-source-documentation). - [ ] Did you write any new necessary tests? - I have not written any test yet. (I am not familiar with writing a test, so it may take a while.) ## Who can review? I would really appreciate if @LysandreJik could review the code.
08-10-2021 08:08:55
08-10-2021 08:08:55
Also it seems that the CircleCI jobs were not triggered - do you mind pushing a new commit (can be empty) to trigger the jobs? Thank you!<|||||>Hi, @LysandreJik . Thank you very much for your helpful review. I would really like to work on your suggestions (I have a lot to learn from your comments), but right now I have to deal with my paper deadline. So please forgive me that it will take some time to start working on it. <|||||>No worries @butsugiri, looking forward to it!<|||||>I am sorry that it took me so long to address the review comments. Currently, the following tests are not passing on my laptop (and they should fail on CI, too): - `test_pickle_subword_regularization_tokenizer` - `test_subword_regularization_tokenizer` - `test_tokenizer_slow_store_full_signature` I am not familiar enough with the internals of the library; so I am not sure how I should deal with them. In addition, I think I have to write document for this new tokenizer (`make quality` command gave me an error). <|||||>Hi @butsugiri - I'm currently in the process of checking with CircleCI why the runs don't trigger on your PRs. Will resolve this problem and work with you to resolve the issues above. Thanks for your patience!<|||||>I see. Thank you!<|||||>Hello @butsugiri! It seems that the CircleCI tests aren't run because they try to run with the user @lalitpagaria's credentials. Do you know why that might be so? @lalitpagaria, if you receive a notification, from what I understand the CircleCI authentication system tries to authenticate you as a runner for this PR - but your credentials have gone stale and CircleCI cannot manage to run the tests. It was mentioned you should do a full CircleCI [re-authentication](https://support.circleci.com/hc/en-us/articles/360051228052-How-to-perform-a-full-re-authentication). Please let me know if this is expected or not - I don't see any commits from @lalitpagaria so I'm unaware of what might be causing this. I would expect the Hugging Face auth to be used to run these tests.<|||||>@LysandreJik oh man. I dont have any idea why this happened. In past I added RAG related PRs (around 10 months back) that time to debug CI failure I opened Circles CI. Not sure of that caused this. But how come it automatically update credentials when I don't have permission 🤷🏽‍♂️ Not receiving any notification from circleCI. I have removed them from my approved app list long time back<|||||>That's helpful, thank you for sharing @lalitpagaria! Will report back to CircleCI's customer support.<|||||>@LysandreJik I just enabled circleCI. See if this help at least should unblock existing failing CI meanwhile you work with them to get this sorted. I really apologize if any of my actions caused this.<|||||>@butsugiri Thank you for making the effort to publish the Japanese Roberta model. If it's not a problem, I would like to know the details of how you trained tokenizer. I would like to know what kind of pre-processing you did on the pre-trained corpus and the Mecab dictionary you used.<|||||>@butsugiri, sorry this slipped through the cracks - could you close the PR and open a new one without touching your branch? Hopefully the CI should trigger, otherwise I'll take care of it. Thank you!<|||||>@kambehmw Thank you for your interest. My colleagues and I are currently preparing the release page with the details of pretraining including tokenization. I would appreciate if you could wait for it, Thanks!<|||||>@LysandreJik Absolutely no problem. Thank you for your continuous support, I really appreciate it. 
Close this PR and will open new one.<|||||>@butsugiri Thanks for your reply. I understand that you are preparing a release page, and I look forward to the official release of the Japanese Roberta Model. Thanks.
transformers
13,064
closed
How to extract the encoded data of the feed-forward layer in TFBertModel?
Environment: TF 2.2. Model: `TFBertModel.from_pretrained('hfl/chinese-bert-wwm-ext')`. I'm working on an information extraction project. First, I predict the "**subject**" with a BERT-CRF model, then I `tf.gather()` the shared BERT encoding layer at the positions corresponding to the "**subject**", and then predict the "**object**". But I can't extract the **feed-forward** layer of BERT. I want to use the output of the **feed-forward** sublayer of the BERT model as the shared encoding layer, but I can't find the corresponding method. I want to obtain an output similar to the following: `Tensor("Transformer-11-FeedForward-Add/add:0", shape=(None, None, 768), dtype=float32)`. I tried going through `model.trainable_weights[-5]`, but the extracted output is obviously not what I need, and I don't want to directly use `model(ids, masks, tokens)[0]`, because BERT's last layer output is processed by LayerNorm.
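One common way to get per-layer encoder activations in TF is sketched below; note that these `hidden_states` are the post-LayerNorm outputs of each layer, so extracting the tensor strictly before the final LayerNorm (the raw feed-forward residual sum mentioned above) would still require subclassing or rebuilding that layer:

```python
from transformers import BertTokenizer, TFBertModel

tokenizer = BertTokenizer.from_pretrained("hfl/chinese-bert-wwm-ext")
model = TFBertModel.from_pretrained("hfl/chinese-bert-wwm-ext")

inputs = tokenizer("这是一个例子", return_tensors="tf")
outputs = model(**inputs, output_hidden_states=True)

# Tuple of (embedding output + one tensor per layer), each of shape (batch, seq_len, 768).
all_hidden_states = outputs.hidden_states
layer_11_output = all_hidden_states[-1]  # same tensor as outputs.last_hidden_state
```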
08-10-2021 06:40:56
08-10-2021 06:40:56
Hello, thanks for opening an issue! We try to keep the github issues for bugs/feature requests. Could you ask your question on the [forum](https://discuss.huggingface.co) instead? Thanks! cc @Rocketknight1 <|||||>I asked questions where you said, but no one replied to me. I hope you can tell me the answer to this question. Thank you! https://discuss.huggingface.co/t/how-to-extract-the-encoded-data-of-feed-forward-layer-in-tfbertmodel/9320<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread. Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored.
transformers
13,063
closed
[WIP] Correct wav2vec2 flax
# What does this PR do? <!-- Congratulations! You've made it this far! You're not quite done yet though. Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution. Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change. Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost. --> <!-- Remove if not applicable --> Fixes # (issue) ## Before submitting - [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case). - [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section? - [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link to it if that's the case. - [ ] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/master/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/master/docs#writing-source-documentation). - [ ] Did you write any new necessary tests? ## Who can review? Anyone in the community is free to review the PR once the tests have passed. Feel free to tag members/contributors who may be interested in your PR. <!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @ If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**. Please tag fewer than 3 people. Models: - albert, bert, xlm: @LysandreJik - blenderbot, bart, marian, pegasus, encoderdecoder, t5: @patrickvonplaten, @patil-suraj - longformer, reformer, transfoxl, xlnet: @patrickvonplaten - fsmt: @stas00 - funnel: @sgugger - gpt2: @patrickvonplaten, @LysandreJik - rag: @patrickvonplaten, @lhoestq - tensorflow: @LysandreJik Library: - benchmarks: @patrickvonplaten - deepspeed: @stas00 - ray/raytune: @richardliaw, @amogkam - text generation: @patrickvonplaten - tokenizers: @n1t0, @LysandreJik - trainer: @sgugger - pipelines: @LysandreJik Documentation: @sgugger HF projects: - datasets: [different repo](https://github.com/huggingface/datasets) - rust tokenizers: [different repo](https://github.com/huggingface/tokenizers) Examples: - maintained examples (not research project or legacy): @sgugger, @patil-suraj - research_projects/bert-loses-patience: @JetRunner - research_projects/distillation: @VictorSanh -->
08-09-2021 23:49:36
08-09-2021 23:49:36
transformers
13,062
closed
Cannot import name 'BEiTForImageClassification' from 'transformers'
## Environment info <!-- You can run the command `transformers-cli env` and copy-and-paste its output below. Don't forget to fill out the missing fields in that output! --> Hi, I am trying to run BEiTForImageClassification in Google Colab and got the following error: "Cannot import name 'BEiTForImageClassification' from 'transformers'". Any suggestion on how to fix it? - `transformers` version: 4.9.2 - Platform: Google Colab Models: - nielsr/beit-base-patch16-224 ## To reproduce Steps to reproduce the behavior: Based on https://huggingface.co/nielsr/beit-base-patch16-224. 1. Run the following code:
```python
from transformers import AutoTokenizer, BEiTForImageClassification

tokenizer = AutoTokenizer.from_pretrained("nielsr/beit-base-patch16-224")
model = BEiTForImageClassification.from_pretrained("nielsr/beit-base-patch16-224")
```
<!-- If you have code snippets, error messages, stack traces please provide them here as well. Important! Use code tags to correctly format your code. See https://help.github.com/en/github/writing-on-github/creating-and-highlighting-code-blocks#syntax-highlighting Do not use screenshots, as they are hard to read and (more importantly) don't allow others to copy-and-paste your code.--> ## Expected behavior <!-- A clear and concise description of what you would expect to happen. --> An error "Cannot import name 'BEiTForImageClassification' from 'transformers'"
08-09-2021 22:36:49
08-09-2021 22:36:49
I have encountered the same problem, and solved it by re-installing `transformers` from source via the following command: `pip install git+https://github.com/huggingface/transformers` (for further details please see https://huggingface.co/transformers/installation.html). Installing from source resulted in a `transformers` version of 4.10.0, and then I could import the Beit models. Another thing that caught my attention in your code sample is a little typo that might also be causing the problem: the class to be imported is written as `BEiTForImageClassification`, however it is supposed to be `BeitForImageClassification`.<|||||>Thanks, that solves the problem. Also for the clarification.
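Putting the two fixes above together, a small sketch of the working usage with the checkpoint from this issue:

```python
# After installing from source (transformers >= 4.10.0.dev0):
# pip install git+https://github.com/huggingface/transformers
from transformers import BeitFeatureExtractor, BeitForImageClassification  # note the casing

feature_extractor = BeitFeatureExtractor.from_pretrained("nielsr/beit-base-patch16-224")
model = BeitForImageClassification.from_pretrained("nielsr/beit-base-patch16-224")
```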
transformers
13,061
closed
Fix small typo in M2M100 doc
# What does this PR do? Reading the M2M100 documentation, I think a little typo has crept into the code snippet. <!-- Congratulations! You've made it this far! You're not quite done yet though. Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution. Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change. Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost. --> <!-- Remove if not applicable --> ## Before submitting - [x] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case). - [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section? - [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link to it if that's the case. - [ ] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/master/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/master/docs#writing-source-documentation). - [ ] Did you write any new necessary tests? ## Who can review? Anyone in the community is free to review the PR once the tests have passed and @patil-suraj :slightly_smiling_face: <!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @ If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**. Please tag fewer than 3 people. Models: - albert, bert, xlm: @LysandreJik - blenderbot, bart, marian, pegasus, encoderdecoder, t5: @patrickvonplaten, @patil-suraj - longformer, reformer, transfoxl, xlnet: @patrickvonplaten - fsmt: @stas00 - funnel: @sgugger - gpt2: @patrickvonplaten, @LysandreJik - rag: @patrickvonplaten, @lhoestq - tensorflow: @LysandreJik Library: - benchmarks: @patrickvonplaten - deepspeed: @stas00 - ray/raytune: @richardliaw, @amogkam - text generation: @patrickvonplaten - tokenizers: @n1t0, @LysandreJik - trainer: @sgugger - pipelines: @LysandreJik Documentation: @sgugger HF projects: - datasets: [different repo](https://github.com/huggingface/datasets) - rust tokenizers: [different repo](https://github.com/huggingface/tokenizers) Examples: - maintained examples (not research project or legacy): @sgugger, @patil-suraj - research_projects/bert-loses-patience: @JetRunner - research_projects/distillation: @VictorSanh -->
08-09-2021 17:01:55
08-09-2021 17:01:55
transformers
13,060
closed
Exporting Fine tuned T5ForConditionalGeneration model to TF-Serving using ONNX
## Environment info <!-- You can run the command `transformers-cli env` and copy-and-paste its output below. Don't forget to fill out the missing fields in that output! --> - `transformers` version: 4.9.1 - Platform: Linux-5.4.0-1049-gcp-x86_64-with-Ubuntu-18.04-bionic - Python version: 3.6.9 - PyTorch version (GPU?): 1.9.0 (False) - Tensorflow version (GPU?): 2.5.0 (False) - Flax version (CPU?/GPU?/TPU?): not installed (NA) - Jax version: not installed - JaxLib version: not installed - Using GPU in script?: No - Using distributed or parallel set-up in script?: No ### Who can help @patrickvonplaten, @patil-suraj ## Information Model I am using (Bert, XLNet ...): T5 The problem arises when using: * my own modified scripts: I use a fine-tuned version of t5 (fine-tuned using Huggingface and PyTorch), trained on a custom dataset for summarization. Since PyTorch Serving is no longer an option because of unrelated reasons, I require TF-Serving for a production-optimized setting. I'm using the ONNX pipeline detailed here: https://huggingface.co/transformers/serialization.html#converting-an-onnx-model-using-the-transformers-onnx-package, with necessary changes to the paths. When I serve this model and do inference, it seems the model being loaded isn't the fine-tuned one, as it gives output of the following nature: `In In In In In auf auf auf auf auf ...` (with "auf" repeated for the rest of the generated sequence). The tasks I am working on is: * [ ] an official GLUE/SQUaD task: Summarization ## To reproduce Steps to reproduce the behavior:
```python
def load_ckp(checkpoint_fpath, model, optimizer):
    checkpoint = torch.load(checkpoint_fpath, map_location=torch.device('cpu'))
    model.load_state_dict(checkpoint['state_dict'])
    optimizer.load_state_dict(checkpoint['optimizer'])
    return model, optimizer, checkpoint['epoch']

import os
tokenizer = T5Tokenizer.from_pretrained("t5-base")
model = T5ForConditionalGeneration.from_pretrained("t5-base")

from torch import cuda
device = 'cuda' if cuda.is_available() else 'cpu'
model = model.to(device)

optimizer = torch.optim.Adam(params=model.parameters(), lr=1e-4)
ckp_path = '/checkpoint_dir/checkpoint.pt'
model, optimizer, start_epoch = load_ckp(ckp_path, model, optimizer)

model.save_pretrained('fine-tuned')  # saves pytorch_model.bin and config.json, which are needed for onnx
tokenizer.save_pretrained('fine-tuned')

tokenizer = T5Tokenizer.from_pretrained("fine-tuned")
model = T5ForConditionalGeneration.from_pretrained("fine-tuned")

# ==== COMMAND LINE ====
# python -m transformers.onnx --model=fine-tuned onnx/t5-tf-serving/
```
At this point, I get the following warning: `Some weights of the model checkpoint at fine-tuned were not used when initializing T5Model: ['lm_head.weight']`, but the process completes with the following message: `All good, model saved at: onnx/t5-tf-serving/model.onnx`.
Post this I use `onnx-tf convert -i onnx/t5-tf-serving/model.onnx -o output.pb` to get the corresponding Tensorflow SavedModel, and use standard docker based procedure for deploying it with TF-Serving. ## Expected behavior I'm able to serve the model using the proper request formats, but the outputs are way off, as shown above. I'm guessing it has to do with the warning message that was displayed when converting the pytorch model to onnx. Fwiw, I tested out normal inference on the .bin formatted pytorch model that was obtained using the `model.save_pretrained('fine-tuned')` function, and it was generating expected outputs. Can you please suggest workarounds? @patrickvonplaten, @patil-suraj
08-09-2021 16:48:02
08-09-2021 16:48:02
Also pinging @mfuntowicz here as the expert for everything onnx related. It might be the case that the `lm_head.weight` are not correctly ported to ONNX here. Could you try to do the following after training After line: ``` model, optimizer, start_epoch = load_ckp(ckp_path, model, optimizer) ``` add those two lines ``` model.config.tie_word_embeddings =False # sets `config.tie_word_embeddings=False` model.lm_head.weight = torch.nn.Parameter(model.shared.weight.T.clone()) # clones the correct parameters ``` and try if this works. Not at all sure, but this should at least remove the warning<|||||>Hey @patrickvonplaten, thanks for getting back. I tried running the pipeline with the additional lines of code you'd provided. It gave me a shape mismatch initially in the line where the `model.lm_head.weight` parameter is set, but I was able to get around that by removing the `.T` in the RHS. Even after doing this, the `lm_head.weight not initialized` warning still appeared, but I decided to try out the remainder anyway. Curiously, I did notice that the `config.json` file had an additional key-value pair this time, specifically `'decoder_start_token_id': 0`, which I'm hoping was a step in the right direction. I converted the onnx file that was generated to the TF SavedModel format with the same command in my original post, and inspected the inputs and outputs to the model with the `saved_model_cli show` command. The outputs of the same are as follows : ``` signature_def['serving_default']: The given SavedModel SignatureDef contains the following input(s): inputs['attention_mask'] tensor_info: dtype: DT_INT64 shape: (-1, -1) name: serving_default_attention_mask:0 inputs['decoder_attention_mask'] tensor_info: dtype: DT_INT64 shape: (-1, 1) name: serving_default_decoder_attention_mask:0 inputs['decoder_input_ids'] tensor_info: dtype: DT_INT64 shape: (-1, 1) name: serving_default_decoder_input_ids:0 inputs['input_ids'] tensor_info: dtype: DT_INT64 shape: (-1, -1) name: serving_default_input_ids:0 The given SavedModel SignatureDef contains the following output(s): outputs['output_0'] tensor_info: dtype: DT_FLOAT shape: (-1, -1, 768) name: StatefulPartitionedCall:0 outputs['output_1'] tensor_info: dtype: DT_FLOAT shape: (-1, -1, 768) name: StatefulPartitionedCall:1 ``` The outputs section indicates that these are the outputs of a couple of hidden layers which has 768 units (I'm guessing it's the final encoder/decoder hidden states), but what is actually required is the softmax distribution over ~32k tokens for summarization. Just out of curiosity, I did try exporting the fine tuned PyTorch model through the standard `torch.onnx.export` pipeline, with the following command: `torch.onnx.export(model, (model.dummy_inputs['input_ids'], model.dummy_inputs['decoder_input_ids'], model.dummy_inputs['decoder_attention_mask']), 't5-for-serve.onnx')`, and it exited successfully with just a `TracerWarning`. 
On converting this to the TF SavedModel format, I get these input and output signatures : ``` signature_def['serving_default']: The given SavedModel SignatureDef contains the following input(s): inputs['0'] tensor_info: dtype: DT_INT64 shape: (3, 5) name: serving_default_0:0 inputs['2'] tensor_info: dtype: DT_INT64 shape: (3, 5) name: serving_default_2:0 inputs['attention_mask.1'] tensor_info: dtype: DT_INT64 shape: (3, 5) name: serving_default_attention_mask.1:0 The given SavedModel SignatureDef contains the following output(s): outputs['output_0'] tensor_info: dtype: DT_FLOAT shape: (3, 5, 32128) name: PartitionedCall:0 outputs['output_1'] tensor_info: dtype: DT_FLOAT shape: (3, 12, 5, 64) name: PartitionedCall:1 outputs['output_10'] tensor_info: dtype: DT_FLOAT shape: (3, 12, 5, 64) name: PartitionedCall:2 outputs['output_11'] tensor_info: dtype: DT_FLOAT shape: (3, 12, 5, 64) name: PartitionedCall:3 outputs['output_12'] tensor_info: dtype: DT_FLOAT shape: (3, 12, 5, 64) name: PartitionedCall:4 outputs['output_13'] tensor_info: dtype: DT_FLOAT shape: (3, 12, 5, 64) name: PartitionedCall:5 outputs['output_14'] tensor_info: dtype: DT_FLOAT shape: (3, 12, 5, 64) name: PartitionedCall:6 outputs['output_15'] tensor_info: dtype: DT_FLOAT shape: (3, 12, 5, 64) name: PartitionedCall:7 outputs['output_16'] tensor_info: dtype: DT_FLOAT shape: (3, 12, 5, 64) name: PartitionedCall:8 outputs['output_17'] tensor_info: dtype: DT_FLOAT shape: (3, 12, 5, 64) name: PartitionedCall:9 outputs['output_18'] tensor_info: dtype: DT_FLOAT shape: (3, 12, 5, 64) name: PartitionedCall:10 outputs['output_19'] tensor_info: dtype: DT_FLOAT shape: (3, 12, 5, 64) name: PartitionedCall:11 outputs['output_2'] tensor_info: dtype: DT_FLOAT shape: (3, 12, 5, 64) name: PartitionedCall:12 outputs['output_20'] tensor_info: dtype: DT_FLOAT shape: (3, 12, 5, 64) name: PartitionedCall:13 outputs['output_21'] tensor_info: dtype: DT_FLOAT shape: (3, 12, 5, 64) name: PartitionedCall:14 outputs['output_22'] tensor_info: dtype: DT_FLOAT shape: (3, 12, 5, 64) name: PartitionedCall:15 outputs['output_23'] tensor_info: dtype: DT_FLOAT shape: (3, 12, 5, 64) name: PartitionedCall:16 outputs['output_24'] tensor_info: dtype: DT_FLOAT shape: (3, 12, 5, 64) name: PartitionedCall:17 outputs['output_25'] tensor_info: dtype: DT_FLOAT shape: (3, 12, 5, 64) name: PartitionedCall:18 outputs['output_26'] tensor_info: dtype: DT_FLOAT shape: (3, 12, 5, 64) name: PartitionedCall:19 outputs['output_27'] tensor_info: dtype: DT_FLOAT shape: (3, 12, 5, 64) name: PartitionedCall:20 outputs['output_28'] tensor_info: dtype: DT_FLOAT shape: (3, 12, 5, 64) name: PartitionedCall:21 outputs['output_29'] tensor_info: dtype: DT_FLOAT shape: (3, 12, 5, 64) name: PartitionedCall:22 outputs['output_3'] tensor_info: dtype: DT_FLOAT shape: (3, 12, 5, 64) name: PartitionedCall:23 outputs['output_30'] tensor_info: dtype: DT_FLOAT shape: (3, 12, 5, 64) name: PartitionedCall:24 outputs['output_31'] tensor_info: dtype: DT_FLOAT shape: (3, 12, 5, 64) name: PartitionedCall:25 outputs['output_32'] tensor_info: dtype: DT_FLOAT shape: (3, 12, 5, 64) name: PartitionedCall:26 outputs['output_33'] tensor_info: dtype: DT_FLOAT shape: (3, 12, 5, 64) name: PartitionedCall:27 outputs['output_34'] tensor_info: dtype: DT_FLOAT shape: (3, 12, 5, 64) name: PartitionedCall:28 outputs['output_35'] tensor_info: dtype: DT_FLOAT shape: (3, 12, 5, 64) name: PartitionedCall:29 outputs['output_36'] tensor_info: dtype: DT_FLOAT shape: (3, 12, 5, 64) name: PartitionedCall:30 outputs['output_37'] 
tensor_info: dtype: DT_FLOAT shape: (3, 12, 5, 64) name: PartitionedCall:31 outputs['output_38'] tensor_info: dtype: DT_FLOAT shape: (3, 12, 5, 64) name: PartitionedCall:32 outputs['output_39'] tensor_info: dtype: DT_FLOAT shape: (3, 12, 5, 64) name: PartitionedCall:33 outputs['output_4'] tensor_info: dtype: DT_FLOAT shape: (3, 12, 5, 64) name: PartitionedCall:34 outputs['output_40'] tensor_info: dtype: DT_FLOAT shape: (3, 12, 5, 64) name: PartitionedCall:35 outputs['output_41'] tensor_info: dtype: DT_FLOAT shape: (3, 12, 5, 64) name: PartitionedCall:36 outputs['output_42'] tensor_info: dtype: DT_FLOAT shape: (3, 12, 5, 64) name: PartitionedCall:37 outputs['output_43'] tensor_info: dtype: DT_FLOAT shape: (3, 12, 5, 64) name: PartitionedCall:38 outputs['output_44'] tensor_info: dtype: DT_FLOAT shape: (3, 12, 5, 64) name: PartitionedCall:39 outputs['output_45'] tensor_info: dtype: DT_FLOAT shape: (3, 12, 5, 64) name: PartitionedCall:40 outputs['output_46'] tensor_info: dtype: DT_FLOAT shape: (3, 12, 5, 64) name: PartitionedCall:41 outputs['output_47'] tensor_info: dtype: DT_FLOAT shape: (3, 12, 5, 64) name: PartitionedCall:42 outputs['output_48'] tensor_info: dtype: DT_FLOAT shape: (3, 12, 5, 64) name: PartitionedCall:43 outputs['output_49'] tensor_info: dtype: DT_FLOAT shape: (3, 5, 768) name: PartitionedCall:44 outputs['output_5'] tensor_info: dtype: DT_FLOAT shape: (3, 12, 5, 64) name: PartitionedCall:45 outputs['output_6'] tensor_info: dtype: DT_FLOAT shape: (3, 12, 5, 64) name: PartitionedCall:46 outputs['output_7'] tensor_info: dtype: DT_FLOAT shape: (3, 12, 5, 64) name: PartitionedCall:47 outputs['output_8'] tensor_info: dtype: DT_FLOAT shape: (3, 12, 5, 64) name: PartitionedCall:48 outputs['output_9'] tensor_info: dtype: DT_FLOAT shape: (3, 12, 5, 64) name: PartitionedCall:49 ``` As can be seen, the inputs and outputs differ significantly. The number of inputs have gone down by 1, compared to the `transformers.onnx` export having suggestions from @patrickvonplaten. Plus, the sizes of the inputs and outputs are fixed, which I'm guessing can be overriden with the `dynamic_axes` parameter in the export step. But the first output in this does have what appears to be the softmax distribution over the ~32k tokens. Is this the right approach @patrickvonplaten @mfuntowicz ? <|||||>Hmm ok :-/ Think we'll have to wait here until @mfuntowicz is back from vacation I'm afraid...he knows better how to debug onnx + tf<|||||>Sure @patrickvonplaten thanks for the quick turnaround though.<|||||>Hey @patrickvonplaten, just checking in to see if @mfuntowicz is back :)<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread. Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored.<|||||>Hey @patrickvonplaten @mfuntowicz , any updates on this?<|||||>Hi @sekharvth, It seems that you are interested in doing seq2seq language modeling (`T5ForConditionalGeneration`), but you do not specify that when exporting to ONNX, so what seems to happen is that you are only exporting a `T5Model`, not a `T5ForConditionalGeneration`, hence the issue. For each architecture, you can find which feature is supported for the ONNX export [here](https://github.com/huggingface/transformers/blob/master/src/transformers/onnx/features.py#L60). 
The command line utility takes a feature parameter that defaults to `default` when left unspecified; in your case you want `seq2seq-lm`. Could you try this and tell me if it solves the issue: ``` python -m transformers.onnx --feature="seq2seq-lm" --model=fine-tuned onnx/t5-tf-serving/ output_dir ``` <|||||>Hey @michaelbenayoun . Aaah, there's a param to be set then. I'll try this out and let you know :) Thank you!<|||||>Hey @michaelbenayoun , thank you so much for pointing out the solution, it works like a charm :) Side note: if anyone else is trying to serve a generative model, you need to call the served model repeatedly with the latest decoded array until the stop token is generated (much like the usual inference logic; see the sketch below). Takeaway: from limited initial testing, serving the model doesn't necessarily speed up the inference process. Closing this issue :)
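For reference, the autoregressive serving loop described in that last comment could look roughly like the sketch below. The endpoint URL, the `output_0` key and greedy decoding are assumptions made for illustration, not details from the original thread, and the exported graph may name its tensors differently.

```python
import json

import numpy as np
import requests
from transformers import T5Tokenizer

# Hypothetical TensorFlow-Serving REST endpoint for the exported seq2seq-lm SavedModel.
URL = "http://localhost:8501/v1/models/t5:predict"
tokenizer = T5Tokenizer.from_pretrained("t5-base")

enc = tokenizer("summarize: " + "long article text ...", return_tensors="np")
decoder_ids = [tokenizer.pad_token_id]  # T5 starts decoding from the pad token

for _ in range(64):  # cap on the summary length
    payload = {"inputs": {
        "input_ids": enc["input_ids"].tolist(),
        "attention_mask": enc["attention_mask"].tolist(),
        "decoder_input_ids": [decoder_ids],
        "decoder_attention_mask": [[1] * len(decoder_ids)],
    }}
    resp = requests.post(URL, data=json.dumps(payload)).json()
    # Assumption: the first graph output holds the vocabulary logits,
    # shaped (batch, decoder_length, vocab_size).
    logits = np.array(resp["outputs"]["output_0"])
    next_token = int(logits[0, -1].argmax())  # greedy decoding
    if next_token == tokenizer.eos_token_id:
        break
    decoder_ids.append(next_token)

print(tokenizer.decode(decoder_ids[1:], skip_special_tokens=True))
```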
transformers
13,059
closed
RAG: building my own dataset
@shamanez Hello! My question is about building our own dataset to train RAG from scratch. It was mentioned [here](https://github.com/huggingface/transformers/tree/master/examples/research_projects/rag#finetuning) that we can build our dataset to finetune the model, but the links do not open. Could you please provide us with the steps and examples?
08-09-2021 16:43:33
08-09-2021 16:43:33
Check this script. https://www.github.com/huggingface/transformers/tree/master/examples%2Fresearch_projects%2Frag%2Fuse_own_knowledge_dataset.py<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread. Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored.
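For context, the linked script essentially turns a set of (title, text) passages into an embedded, FAISS-indexed dataset that the RAG retriever can load. A minimal sketch of the same idea with the `datasets` library is shown below; the passages and file names are placeholders, and the real script handles chunking, batching and device placement more carefully.

```python
import torch
from datasets import Dataset
from transformers import DPRContextEncoder, DPRContextEncoderTokenizerFast

passages = Dataset.from_dict({
    "title": ["Aaron", "Zeus"],
    "text": ["Aaron is a prophet in the Abrahamic religions ...", "Zeus is the sky god in Greek mythology ..."],
})

ctx_encoder = DPRContextEncoder.from_pretrained("facebook/dpr-ctx_encoder-multiset-base")
ctx_tokenizer = DPRContextEncoderTokenizerFast.from_pretrained("facebook/dpr-ctx_encoder-multiset-base")


def embed(batch):
    # Encode each (title, text) pair into a single dense vector.
    inputs = ctx_tokenizer(batch["title"], batch["text"], truncation=True, padding="longest", return_tensors="pt")
    with torch.no_grad():
        embeddings = ctx_encoder(**inputs).pooler_output
    return {"embeddings": embeddings.numpy()}


passages = passages.map(embed, batched=True, batch_size=16)
passages.save_to_disk("my_knowledge_dataset")               # passages for the retriever
passages.add_faiss_index(column="embeddings")
passages.save_faiss_index("embeddings", "my_index.faiss")   # dense index used at retrieval time
```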
transformers
13,058
closed
Non-English characters not fully supported by GPT-2 HF model
## Environment info <!-- You can run the command `transformers-cli env` and copy-and-paste its output below. Don't forget to fill out the missing fields in that output! --> - `transformers` version: - Platform: Ubuntu 20.04 - Python version: 3.7.11 - PyTorch version (GPU?): - Tensorflow version (GPU?): yes - Using GPU in script?: - Using distributed or parallel set-up in script?: ### Who can help <!-- Your issue will be replied to more quickly if you can figure out the right person to tag with @ If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**. Please tag fewer than 3 people. Models: - albert, bert, xlm: @LysandreJik - blenderbot, bart, marian, pegasus, encoderdecoder, t5: @patrickvonplaten, @patil-suraj - longformer, reformer, transfoxl, xlnet: @patrickvonplaten - fsmt: @stas00 - funnel: @sgugger - gpt2: @patrickvonplaten, @LysandreJik - rag: @patrickvonplaten, @lhoestq - tensorflow: @Rocketknight1 Library: - benchmarks: @patrickvonplaten - deepspeed: @stas00 - ray/raytune: @richardliaw, @amogkam - text generation: @patrickvonplaten - tokenizers: @LysandreJik - trainer: @sgugger - pipelines: @LysandreJik Documentation: @sgugger Model hub: - for issues with a model report at https://discuss.huggingface.co/ and tag the model's creator. HF projects: - datasets: [different repo](https://github.com/huggingface/datasets) - rust tokenizers: [different repo](https://github.com/huggingface/tokenizers) Examples: - maintained examples (not research project or legacy): @sgugger, @patil-suraj - research_projects/bert-loses-patience: @JetRunner - research_projects/distillation: @VictorSanh --> ## Information Model I am using (Bert, XLNet ...): GPT-2 The problem arises when using: * [ ] the official example scripts: (give details below) * [x] my own modified scripts: (give details below) The tasks I am working on is: * [ ] an official GLUE/SQUaD task: (give the name) * [x] my own task or dataset: (give details below) ## To reproduce Steps to reproduce the behavior: 1. @patrickvonplaten, @LysandreJik When I try to use the bad_words_ids argument in the generate method, it only takes in a list of ints (per the documentation: transformers.NoBadWordsLogitsProcessor). But if I've trained the GPT-2 model using any characters higher than UTF-8, the encoding for those characters requires more than just an int to represent. Is there any way to use Huggingface transformers for a language other than English and prevent the output from containing those characters as well? Or is there any alternative way to use GPT-2 with UTF-32 that you know of? <!-- If you have code snippets, error messages, stack traces please provide them here as well. Important! Use code tags to correctly format your code. See https://help.github.com/en/github/writing-on-github/creating-and-highlighting-code-blocks#syntax-highlighting Do not use screenshots, as they are hard to read and (more importantly) don't allow others to copy-and-paste your code.--> ## Expected behavior <!-- A clear and concise description of what you would expect to happen. --> GPT-2 would be more usable if it could be fully supporting of UTF-32 characters, and not just UTF-8.
08-09-2021 16:32:46
08-09-2021 16:32:46
The `bad_words_ids` argument takes token IDs as input; it's unrelated to the character encoding. If you've trained a GPT-2 model on a larger vocabulary containing characters from other languages, then you should be able to specify the IDs of the words you'd like to ignore in your vocabulary.<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread. Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored.
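To make that concrete, banned words are passed to `generate` as sequences of token IDs, so characters outside ASCII are not a problem as long as the tokenizer covers them. A rough sketch, where the checkpoint name is a placeholder for whatever multilingual GPT-2 model was trained:

```python
from transformers import GPT2LMHeadModel, GPT2Tokenizer

model_name = "my-multilingual-gpt2"  # hypothetical fine-tuned checkpoint
tokenizer = GPT2Tokenizer.from_pretrained(model_name)
model = GPT2LMHeadModel.from_pretrained(model_name)

# Each banned word becomes a (possibly multi-token) ID sequence; the leading space
# makes GPT-2's byte-level BPE produce the IDs used for a word inside a sentence.
banned_words = ["пример", "例子"]
bad_words_ids = [tokenizer.encode(" " + word, add_special_tokens=False) for word in banned_words]

inputs = tokenizer("Once upon a time", return_tensors="pt")
outputs = model.generate(**inputs, max_length=50, bad_words_ids=bad_words_ids)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```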
transformers
13,057
closed
Use original key for label in DataCollatorForTokenClassification
DataCollatorForTokenClassification accepts either `label` or `labels` as key for the labels in it's input. However after padding the label it assigns the padded labels to key `labels`. If originally `label` was used as key than the original upadded labels still remains in the batch. Then at line 192 when we try to convert the batch elements to torch tensor than these original unpadded labels cannot be converted as the labels for different samples have different lengths. # What does this PR do? <!-- Congratulations! You've made it this far! You're not quite done yet though. Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution. Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change. Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost. --> <!-- Remove if not applicable --> Fixes # (issue) ## Before submitting - [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case). - [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section? - [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link to it if that's the case. - [ ] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/master/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/master/docs#writing-source-documentation). - [ ] Did you write any new necessary tests? ## Who can review? Anyone in the community is free to review the PR once the tests have passed. Feel free to tag members/contributors who may be interested in your PR. <!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @ If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**. Please tag fewer than 3 people. Models: - albert, bert, xlm: @LysandreJik - blenderbot, bart, marian, pegasus, encoderdecoder, t5: @patrickvonplaten, @patil-suraj - longformer, reformer, transfoxl, xlnet: @patrickvonplaten - fsmt: @stas00 - funnel: @sgugger - gpt2: @patrickvonplaten, @LysandreJik - rag: @patrickvonplaten, @lhoestq - tensorflow: @LysandreJik Library: - benchmarks: @patrickvonplaten - deepspeed: @stas00 - ray/raytune: @richardliaw, @amogkam - text generation: @patrickvonplaten - tokenizers: @n1t0, @LysandreJik - trainer: @sgugger - pipelines: @LysandreJik Documentation: @sgugger HF projects: - datasets: [different repo](https://github.com/huggingface/datasets) - rust tokenizers: [different repo](https://github.com/huggingface/tokenizers) Examples: - maintained examples (not research project or legacy): @sgugger, @patil-suraj - research_projects/bert-loses-patience: @JetRunner - research_projects/distillation: @VictorSanh -->
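A simplified sketch of the behavior this PR fixes is shown below: the padded labels are written back under whichever key the features originally used, so no unpadded `label` column is left in the batch. The helper name and the manual right-padding are illustrative, not the exact library code.

```python
import torch


def collate_token_classification(features, tokenizer, label_pad_token_id=-100):
    # Pick up whichever key the dataset actually used ("label" or "labels").
    label_name = "label" if "label" in features[0].keys() else "labels"
    labels = [feature[label_name] for feature in features]

    # Pad the model inputs only; labels are padded by hand below.
    batch = tokenizer.pad(
        [{k: v for k, v in feature.items() if k != label_name} for feature in features],
        padding=True,
    )
    sequence_length = len(batch["input_ids"][0])

    # The fix: store the padded labels under the *original* key instead of always "labels".
    batch[label_name] = [
        list(label) + [label_pad_token_id] * (sequence_length - len(label)) for label in labels
    ]
    return {k: torch.tensor(v, dtype=torch.int64) for k, v in batch.items()}
```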
08-09-2021 16:30:23
08-09-2021 16:30:23
Sure. I will get to it tonight.<|||||>Failure is unrelated to this PR (linked to sacrebleu release today), so merging. Thanks again!
transformers
13,056
closed
Change how "additional_special_tokens" argument in the ".from_pretrained" method of the tokenizer is taken into account
# What does this PR do? This PR is a proposal for the issue #12533. ## Motivation This change in behavior is motivated in particular by the [`test_special_tokens_initialization` test in `test_tokenization_common`](https://github.com/huggingface/transformers/blob/v4.9.2/tests/test_tokenization_common.py#L3132). This test consists in loading thanks to the method `from_pretrained` a tokenizer (on the hub) and adding in the arguments `additional_special_tokens=[AddedToken("<special>", lstrip=True)]` then to check that this new special token `"<special>"` was well added. ```python added_tokens = [AddedToken("<special>", lstrip=True)] tokenizer_r = self.rust_tokenizer_class.from_pretrained( pretrained_name, additional_special_tokens=added_tokens, **kwargs r_output = tokenizer_r.encode("Hey this is a <special> token") special_token_id = tokenizer_r.encode("<special>", add_special_tokens=False)[0] self.assertTrue(special_token_id in r_output) ``` This test does not currently work in one case: if the repository on the hub contains a `"special_tokens_map.json"` file that defines the value of `"additional_special_tokens"` then the new argument `additional_special_tokens=[AddedToken("<special>", lstrip=True)]` added in the `.from_pretrained` method is ignored. ## New behavior introduce by this PR This PR introduces a change that gives priority to arguments provided in the `.from_pretrained` method. Understand that if there is a competition between the definition of an argument in the `.from_pretrained` method and in the repository files, then the definition of the argument in the method will be chosen. ## Before submitting - [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case). - [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section? - [x] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link to it if that's the case. - [ ] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/master/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/master/docs#writing-source-documentation). - [x] Did you write any new necessary tests? ## Who can review? Anyone in the community is free to review the PR once the tests have passed. In particular: @LysandreJik and @sgugger - It would be awesome if you could validate the new behavior and give your opinion on the PR @NielsRogge - It would be great if you could look at the proposed change in the tokenization of Luke and possibly the test of Canine @patil-suraj - It would be great if you could look at the proposed change in the tokenization of M2M-100 and MBart50. For these 2 models, I have proposed changes so that the user can still define additional special tokens even if these templates already define additional special tokens corresponding to the language codes. This behavior was already present in `TokenizerMBart` but not in the tokenizers of these last models. 
If these are not desirable behaviors for these models, I can reverse the changes and just disable the corresponding tests for these models. @patrickvonplaten - it would be great if you could take a look at the change in the Wav2Vec2 processor test (if you agree with this change, there is the same function used in `test_tokenization_wave2vec2.py` that I didn't change). Thank you for your time. ## Failing Test One test is currently failing: `test_run_seq2seq_no_dist` in `test_trainer_ext.py`. At the moment I don't see the link with the changes I introduced, but I'm still investigating. EDIT: I think this test is failing due to a change in [sacrebleu](https://github.com/mjpost/sacrebleu/) which affects the :hugs: datasets library. I understand that the problem is in the process of being solved on the datasets side ([PR](https://github.com/huggingface/datasets/pull/2778)).
08-09-2021 16:26:33
08-09-2021 16:26:33
LGTM! Thank you for making this cleaner.<|||||>Feel free to merge after solving the code quality issues!
transformers
13,055
closed
Roll out the test fetcher on push tests
# What does this PR do? This PR rolls out the test_fetcher script on the merged PR in master, making it so that: - circleCI only runs the tests that are affected by the diff between the master branch and the last commit - non slow single and multi GPU tests only runs the tests that are affected by the diff between the master branch and the last commit The last thing to add is a scheduled job to run all those tests daily to make sure we don't miss anything.
08-09-2021 14:58:53
08-09-2021 14:58:53
transformers
13,054
closed
Is there any convenient way to train a transformer from scratch ?
Hello guys! `Huggingface/transformers` is such a convenient library to use when it comes to all sorts of pretrained models. But I am wondering: is there a convenient way to train a model from scratch? If I want to rebuild the model from `Attention Is All You Need`, my first thought is to change modeling_bart.py to match the `Attention Is All You Need` setting (e.g. `three-way weight tying`) and not use `.from_pretrained`. Is there any better way to do it? I know this is a **pretrained-model** library, but wouldn't it be cool to do something more with it? I am looking forward to your reply.
08-09-2021 14:51:44
08-09-2021 14:51:44
Hello, thanks for opening an issue! We try to keep the github issues for bugs/feature requests. Could you ask your question on the [forum](https://discuss.huggingface.co) instead? There are a few scripts related to pretraining: [language-modeling examples](https://github.com/huggingface/transformers/tree/master/examples/pytorch/language-modeling) Thanks!
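One convenient route, alongside the pretraining scripts linked above, is to build a randomly initialised model from a config instead of calling `.from_pretrained`. The sketch below uses BART with sizes that only roughly approximate the base Transformer from "Attention Is All You Need"; the hyperparameters are illustrative, not a faithful reproduction of the paper.

```python
from transformers import BartConfig, BartForConditionalGeneration

config = BartConfig(
    vocab_size=32000,
    d_model=512,
    encoder_layers=6,
    decoder_layers=6,
    encoder_attention_heads=8,
    decoder_attention_heads=8,
    encoder_ffn_dim=2048,
    decoder_ffn_dim=2048,
    tie_word_embeddings=True,  # encoder/decoder input embeddings and the LM head share one matrix
)
model = BartForConditionalGeneration(config)  # random weights, no pretrained checkpoint
print(sum(p.numel() for p in model.parameters()))
```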
transformers
13,053
closed
Is there any way to train a transformer model from scratch?
# 🚀 Feature request <!-- A clear and concise description of the feature proposal. Please provide a link to the paper and code in case they exist. --> ## Motivation <!-- Please outline the motivation for the proposal. Is your feature request related to a problem? e.g., I'm always frustrated when [...]. If this is related to another GitHub issue, please link here too. --> ## Your contribution <!-- Is there any way that you could help, e.g. by submitting a PR? Make sure to read the CONTRIBUTING.MD readme: https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md -->
08-09-2021 14:44:43
08-09-2021 14:44:43
transformers
13,052
closed
Fix omitted lazy import for xlm-prophetnet
# What does this PR do? Fixes import code for the xlm-prophetnet model to support lazy import. Continued from #13015. ## Before submitting - [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case). - [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section? - [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link to it if that's the case. - [ ] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/master/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/master/docs#writing-source-documentation). - [ ] Did you write any new necessary tests?
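For context, the lazy-import pattern this fix follows looks roughly like the sketch below. The exact `_LazyModule` constructor arguments have changed between library versions, so treat this as an illustration of the structure rather than the literal diff.

```python
from typing import TYPE_CHECKING

from ...file_utils import _LazyModule, is_torch_available

_import_structure = {"configuration_xlm_prophetnet": ["XLMProphetNetConfig"]}

if is_torch_available():
    _import_structure["modeling_xlm_prophetnet"] = [
        "XLMProphetNetForConditionalGeneration",
        "XLMProphetNetModel",
    ]

if TYPE_CHECKING:
    from .configuration_xlm_prophetnet import XLMProphetNetConfig

    if is_torch_available():
        from .modeling_xlm_prophetnet import XLMProphetNetForConditionalGeneration, XLMProphetNetModel
else:
    import sys

    # Replace the package module with a lazy proxy that imports submodules on first access.
    sys.modules[__name__] = _LazyModule(__name__, globals()["__file__"], _import_structure)
```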
08-09-2021 12:09:53
08-09-2021 12:09:53
The failures are unrelated to this PR (and the underlying problem is fixed on master) so can be safely ignored.<|||||>One last thing: can you run `make style` on your branch to fix the quality issue? We should be good to go after.
transformers
13,051
closed
[Feature Processing Sequence] Remove duplicated code
# What does this PR do? <!-- Congratulations! You've made it this far! You're not quite done yet though. Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution. Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change. Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost. --> <!-- Remove if not applicable --> This PR removes redundant code which was overlooked in https://github.com/huggingface/transformers/pull/12804
08-09-2021 11:27:31
08-09-2021 11:27:31
transformers
13,050
closed
docs: add HuggingArtists to community notebooks
# What does this PR do? * Add [HuggingArtists](https://github.com/AlekseyKorshuk/huggingartists) project to [Community Notebooks](https://huggingface.co/transformers/master/community.html#community-notebooks) in documentation ## Before submitting - [x] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case). - [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section? - [x] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? https://discuss.huggingface.co/t/huggingartists-train-a-model-to-generate-lyrics/9045 - [ ] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/master/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/master/docs#writing-source-documentation). - [ ] Did you write any new necessary tests? ## Who can review? Anyone in the community is free to review the PR once the tests have passed. Feel free to tag members/contributors who may be interested in your PR. @sgugger <!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @ If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**. Please tag fewer than 3 people. Models: - albert, bert, xlm: @LysandreJik - blenderbot, bart, marian, pegasus, encoderdecoder, t5: @patrickvonplaten, @patil-suraj - longformer, reformer, transfoxl, xlnet: @patrickvonplaten - fsmt: @stas00 - funnel: @sgugger - gpt2: @patrickvonplaten, @LysandreJik - rag: @patrickvonplaten, @lhoestq - tensorflow: @LysandreJik Library: - benchmarks: @patrickvonplaten - deepspeed: @stas00 - ray/raytune: @richardliaw, @amogkam - text generation: @patrickvonplaten - tokenizers: @n1t0, @LysandreJik - trainer: @sgugger - pipelines: @LysandreJik Documentation: @sgugger HF projects: - datasets: [different repo](https://github.com/huggingface/datasets) - rust tokenizers: [different repo](https://github.com/huggingface/tokenizers) Examples: - maintained examples (not research project or legacy): @sgugger, @patil-suraj - research_projects/bert-loses-patience: @JetRunner - research_projects/distillation: @VictorSanh -->
08-09-2021 10:27:53
08-09-2021 10:27:53
transformers
13,049
closed
Add MBART to models exportable with ONNX
Adds MBART to models exportable with ONNX
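Once merged, exporting an MBART checkpoint should follow the same CLI pattern as the other supported architectures, roughly (checkpoint and output path are just examples):

```
python -m transformers.onnx --model=facebook/mbart-large-cc25 onnx/mbart/
```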
08-09-2021 10:21:42
08-09-2021 10:21:42
transformers
13,048
closed
Add to ONNX docs
Complete the ONNX docs with an example Closes https://github.com/huggingface/transformers/issues/12821#issuecomment-884009576
08-09-2021 10:05:54
08-09-2021 10:05:54
transformers
13,047
closed
How do i pre-train Bert_mlm model [Discussion]
Hi, I've been able to pre-train a Bert model using the run_mlm.py file. Thanks to the huggingface team ofc. :D I do have one question though: if I want to further pre-train a bert_mlm model, do I use the same pre-training script, i.e. run_mlm.py, for the mlm model? Is that the "correct" way, or is there another type of training script I should use? Any help is much appreciated.
08-09-2021 09:55:38
08-09-2021 09:55:38
@NielsRogge @sgugger Any thoughts? Kind regards, Mosleh<|||||>Hi there! Please use the [forums](https://discuss.huggingface.co/) for questions like this, as we keep the issues for bugs and feature requests only. As for your question, you can fine-tune any existing model with that `run_mlm` script.<|||||>Hi, Alright thanks, i'll post in forums next time. /Mosleh
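For what it's worth, continuing pretraining with `run_mlm.py` mostly amounts to passing the existing checkpoint as `--model_name_or_path` and pointing the script at the new corpus; the paths below are placeholders.

```
python run_mlm.py \
    --model_name_or_path bert-base-uncased \
    --train_file path/to/domain_corpus.txt \
    --validation_file path/to/domain_valid.txt \
    --do_train \
    --do_eval \
    --output_dir ./bert-further-pretrained
```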
transformers
13,046
closed
TFBertPreTrainingLoss has something wrong
## Environment info <!-- You can run the command `transformers-cli env` and copy-and-paste its output below. Don't forget to fill out the missing fields in that output! --> - `transformers` version: - Platform: - Python version: 3.8 - PyTorch version (GPU?): - Tensorflow version (GPU?): 2.5 - Using GPU in script?: yes - Using distributed or parallel set-up in script?: no ### Who can help <!-- Your issue will be replied to more quickly if you can figure out the right person to tag with @ If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**. Please tag fewer than 3 people. Models: - albert, bert, xlm: @LysandreJik - blenderbot, bart, marian, pegasus, encoderdecoder, t5: @patrickvonplaten, @patil-suraj - longformer, reformer, transfoxl, xlnet: @patrickvonplaten - fsmt: @stas00 - funnel: @sgugger - gpt2: @patrickvonplaten, @LysandreJik - rag: @patrickvonplaten, @lhoestq - tensorflow: @Rocketknight1 Library: - benchmarks: @patrickvonplaten - deepspeed: @stas00 - ray/raytune: @richardliaw, @amogkam - text generation: @patrickvonplaten - tokenizers: @LysandreJik - trainer: @sgugger - pipelines: @LysandreJik Documentation: @sgugger Model hub: - for issues with a model report at https://discuss.huggingface.co/ and tag the model's creator. HF projects: - datasets: [different repo](https://github.com/huggingface/datasets) - rust tokenizers: [different repo](https://github.com/huggingface/tokenizers) Examples: - maintained examples (not research project or legacy): @sgugger, @patil-suraj - research_projects/bert-loses-patience: @JetRunner - research_projects/distillation: @VictorSanh --> ## Information Model I am using (Bert, XLNet ...): TFBertPreTrainingLoss The problem arises when using: * [ ] the official example scripts: (give details below) * [ ] my own modified scripts: (give details below) The tasks I am working on is: * [ ] an official GLUE/SQUaD task: (give the name) * [ ] my own task or dataset: (give details below) ## To reproduce Steps to reproduce the behavior: 1. construct some inputs of MLM task 2. call TFBertForMaskedLM 3. while computing loss, something wrong happened. <!-- If you have code snippets, error messages, stack traces please provide them here as well. Important! Use code tags to correctly format your code. See https://help.github.com/en/github/writing-on-github/creating-and-highlighting-code-blocks#syntax-highlighting Do not use screenshots, as they are hard to read and (more importantly) don't allow others to copy-and-paste your code.--> ## Expected behavior <!-- A clear and concise description of what you would expect to happen. --> ```python masked_lm_loss = loss_fn(y_true=masked_lm_labels, y_pred=masked_lm_reduced_logits) next_sentence_loss = loss_fn(y_true=next_sentence_label, y_pred=next_sentence_reduced_logits) masked_lm_loss = tf.reshape(tensor=masked_lm_loss, shape=(-1, shape_list(next_sentence_loss)[0])) masked_lm_loss = tf.reduce_mean(input_tensor=masked_lm_loss, axis=0) ``` The number of masked_labels is uncertain ,thus ops of "reshape" is unsuitable. Why not calculate the total loss of batches?
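One way to avoid the shape assumption the report points out is to select the active (masked) positions with a boolean mask instead of reshaping. The sketch below is an illustration of that idea, not the library's implementation, and assumes labels use -100 on positions that should not contribute to the loss.

```python
import tensorflow as tf


def masked_lm_loss(labels, logits):
    # labels: (batch, seq_len) with -100 on non-masked positions
    # logits: (batch, seq_len, vocab_size)
    loss_fn = tf.keras.losses.SparseCategoricalCrossentropy(
        from_logits=True, reduction=tf.keras.losses.Reduction.NONE
    )
    flat_labels = tf.reshape(labels, [-1])
    flat_logits = tf.reshape(logits, [-1, tf.shape(logits)[-1]])
    active = tf.not_equal(flat_labels, -100)              # keep only masked positions
    active_labels = tf.boolean_mask(flat_labels, active)
    active_logits = tf.boolean_mask(flat_logits, active)
    return tf.reduce_mean(loss_fn(active_labels, active_logits))
```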
08-09-2021 09:13:17
08-09-2021 09:13:17
Hi, thanks for the issue but I'll need a little more info to investigate! Do you encounter an error when you run the code, or do you believe the outputted loss is incorrect? If you encounter an error, can you paste it here? If the loss is incorrect, can you upload a sample batch of data (e.g. a pickled dict of Numpy arrays) that gets different loss values on the PyTorch versus the TF version of the model, when both are initialized from the same checkpoint? All of that will help us track down the problem here. Thanks for helping!<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread. Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored.
transformers
13,045
closed
Add FNet
# What does this PR do? This PR adds the [FNet](https://arxiv.org/abs/2105.03824) model in PyTorch. I was working on it in another PR #12335 which got closed due to inactivity ;-;. This PR closes issue #12411. ## Checklist - [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section? - [x] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/master/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/master/docs#writing-source-documentation). - [x] Did you write any new necessary tests? Requesting @LysandreJik to review. ~**Note**:This model uses a SentencePiece tokenizer. They have provided the sentence-piece .model file which can be loaded. While creating FNetTokenizer should I inherit from some other existing tokenizer? Alternatively, I can copy the tokenizer from `ALBERT` (which is what I am doing right now). Wdyt?~ **Note**: I am trying to make this model as similar to Bert is possible. The original implementation has slightly different layers. For example, `FNetIntermediate` and `FNetOutput` equivalents are combined into a single layer in original FNet code, but I keep them separate. Hope this is okay? EDIT 1: ------ I have made necessary changes for the model. And since the model compares against Bert, it makes sense to have (almost) all tasks - MultipleChoice, QuestionAnswering, etc. I am still working on: - [x] Tokenizer (regular and fast) - [x] Documentation - [x] Checkpoint Conversion - [x] Tests EDIT 2: ------ ~We also need to skip `attention_mask` totally from the tokenizer. The user, ideally, should not have an option to get the `attention_mask` using `FNetTokenizer`. I am using `model_input_names` for this.~ EDIT 3: ------ ~One more concern is that, since I am implementing in PyTorch, do we expect the user to run this on TPU? The reason is that the original implementation changes the way they calculate FFT on TPU, based on the sequence length (they found some optimal rules for faster processing). Currently, I have only used `torch.fft.fftn` directly (they use `jnp.fft.fftn` in the CPU/GPU case). Please let me know what you think.~ EDIT 4: ------ One more thing to consider is that the original code allows `type_vocab_size` of 4, which is used only for GLUE tasks. During pre-training they only use `0` and `1`. But, the checkpoints also have the shape of embedding weights as `(4, 768)` . Does that mean that the tokenizer might need to support something like: ```python tokenizer = FNetTokenizer.from_pretrained('fnet-base') inputs = tokenizer(text1, text2, text3, text4) ``` ? EDIT 5: ------ ~The colab link to outputs on checkpoint conversion: [Flax to PyTorch](https://colab.research.google.com/drive/1CxxDwaH4Tei9cUBHRaMYWPHCpS2El2He?usp=sharing). The model outputs, embedding layer, encoder layer 0 outputs match up to `1e-2`, except masked lm output for masked token, which matches to `1e-1`. Any idea on how I can improve this?~ ~One reason I can think of this reduction is precision is the difference in precision in `torch.fft.fftn` and `jnp.fft.fftn` which is atmost `1e-4`. From a difference of atmost `1e-6`, after applying the corresponding transforms, the difference becomes atmost `1e-3` in the real part. Over layers, this might accumulate. 
Just a guess, however.~ ~This was fixed by using `gelu_new` instead of `gelu`.~ EDIT 6: ------ They use a projection layer in the embeddings, and hence the embedding size and hidden size for the model are provided separately in the config. In their experiments, they keep it same, but the flexibility is still there. Do we want to keep both the sizes separate? EDIT 7: ------ ~The FastTokenizer requires a `tokenizer.json` file which I have created using `convert_slow_tokenizer`. I used `AlbertConverter` for this model. I don't know (in-detail) how SentencePiece and FastTokenizers work. Please let me know if I'm missing anything.~ EDIT 8: ------ Just realized that the original model is denoted as `f_net`. I am using `fnet` everywhere, is this acceptable? EDIT 9: ------ ~I am not sure about special tokens in the tokenizer. The original model gives some special tokens as empty string ``. Using the current tokenizer code to load these gives `<s>` and <\s> for those tokens (bos, eos), and <**unk**> for unknown and <**pad**> for pad tokens. Not sure which is the right way to go. Any suggestions?~
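To make the mixing layer discussed throughout this PR concrete, below is a minimal sketch of the basic Fourier variant (module name illustrative): FNet replaces self-attention with the real part of a 2D FFT over the sequence and hidden dimensions, with no learned parameters. The PR additionally carries the einsum/DFT-matrix variants mentioned above for TPU-friendly execution.

```python
import torch


class FNetFourierMixing(torch.nn.Module):
    """Parameter-free token mixing: keep the real part of a 2D FFT."""

    def forward(self, hidden_states):
        # hidden_states: (batch_size, seq_len, hidden_size)
        return torch.fft.fftn(hidden_states, dim=(1, 2)).real


mixing = FNetFourierMixing()
out = mixing(torch.randn(2, 8, 16))  # output keeps the input shape
```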
08-08-2021 20:38:33
08-08-2021 20:38:33
Not sure why this test fails: ```python =========================== short test summary info ============================ FAILED tests/extended/test_trainer_ext.py::TestTrainerExt::test_run_seq2seq_no_dist ==== 1 failed, 7169 passed, 3466 skipped, 708 warnings in 832.52s (0:13:52) ==== ```<|||||>The failure is due to the new release of sacrebleu. If you rebase on master to get the commit that pins it to < 2.0.0, the failure will go away (but it's not necessary for this PR to be merged as we know it has nothing to do with it).<|||||>Thanks for reviews @sgugger @patil-suraj I'll address them quickly. One more concern apart from the ones I have mentioned above: ~I have removed the slow integration testing from tokenization tests as it expects `attention_mask`. I'll take a look and update the test accordingly.~ EDIT: ------ This test has been updated. <|||||>Hey @gchhablani :-) We've just added you to the Google org, so that you can move the model weights there. If you find some time, it would also be very nice to add some model cards (I can definitely help you with that). Regarding the failing doc test, you can just rebase to current master and it'll be fixed<|||||>I found two issues with the fourier transform. ### Issue 1 The actual implementation uses `jax.vmap` on the `self.fourier_transform`. I made a mistake earlier in the implementation and do it for all dimensions - `hidden_size`, `sequence_length`, and `batch_size`, but it is just `sequence_length` and `batch_size`. This leads to a mismatch issue when using `batch_size` more than one. I have fixed this issue by passing in the correct dimensions to `torch.fft.fftn` and using `functools.partial`. Please check the Flax/Torch output match for `batch_size=2` [here](https://colab.research.google.com/drive/13cqOgP4DNrYbBdjwD0NwSxORCUTRxiZ-?usp=sharing). Unfortunately, there is no `vmap` in torch as of now in the stable version, but only in the nightly version [here](https://pytorch.org/tutorials/prototype/vmap_recipe.html). ### Issue 2 Following @sgugger's suggestion to add the optimizations for TPUs, I tried adding the `einsum` version of fourier transform where they use DFT matrix multiplication and the axis-wise FFT. I have had to make changes and few additions to support them in PyTorch. Currently, the outputs from those don't match (but they should, at least to some extent). So I am fixing that as well. EDIT: ------ I understand the issue with the `einsum` implementation. The original code uses the maximum sequence length possible as their sequence length during training - 512. Hence, during the initialization, they specify this maximum sequence length, and then use this variable to initialize the `DFT` matrix for sequence length. While that may have made sense for them, I'm not sure if it makes sense here? I think we can take in another parameter `sequence_length` during config initialization. This will be used to specify the sequence length (because `max_position_embeddings` is used to initialize the `self.position_embeddings`, so that shouldn't be changed). Along with this, a check that throws an error if the `sequence_length` does not match sequence length passed to the model. <|||||>With the latest changes, an error occurs: ```python ImportError while importing test module '/home/circleci/transformers/tests/test_modeling_fnet.py'. Hint: make sure your test modules/packages have valid Python names. 
Traceback: /usr/local/lib/python3.7/importlib/__init__.py:127: in import_module return _bootstrap._gcd_import(name[level:], package, level) tests/test_modeling_fnet.py:23: in <module> from transformers.models.fnet.modeling_fnet import FNetBasicFourierTransform src/transformers/models/fnet/modeling_fnet.py:28: in <module> from scipy import linalg E ModuleNotFoundError: No module named 'scipy' ``` I am trying to use `scipy.linalg.dft` to get `DFT matrix`. Any chance this can be a dependency? EDIT ------ I have added a variable called `_scipy_available` which is used when initializing the fourier transform, and if it is not available, I add a warning. The users can install SciPy if they want?<|||||>I don't see a problem with using `scipy` as an optional dependency for this specific model<|||||>Let me know if need help making the tests pass with the dependency - I can fix this in your PR if you want :-)<|||||>Hi @patrickvonplaten I have pushed the code where I used the global variable `_scipy_available`, does that seem okay? The tests are working fine locally. Also, in model tests I'm verifying whether `fourier_transform` implementations match or not in the test: `create_and_check_fourier_transform` for which I need to access `modeling_fnet`. I get this error on CircleCI: ```python _________________ ERROR collecting tests/test_modeling_fnet.py _________________ ImportError while importing test module '/home/circleci/transformers/tests/test_modeling_fnet.py'. Hint: make sure your test modules/packages have valid Python names. Traceback: /usr/local/lib/python3.7/importlib/__init__.py:127: in import_module return _bootstrap._gcd_import(name[level:], package, level) tests/test_modeling_fnet.py:23: in <module> from transformers.models.fnet.modeling_fnet import FNetBasicFourierTransform, _scipy_available src/transformers/models/fnet/modeling_fnet.py:22: in <module> import torch E ModuleNotFoundError: No module named 'torch' ``` Any idea how do I fix this? EDIT ------ Test is fixed. I followed `fsmt` tests. Had to add the imports under `is_torch_available()`. <|||||>I have updated the checkpoints and added basic model cards. The model performance isn't great on MLM, not sure why. The accuracy scores are low, though. Checkpoints - [fnet-base](https://huggingface.co/google/fnet-base) - [fnet-large](https://huggingface.co/google/fnet-large)<|||||>Also, just to check :-) The reported eval metrics on GLUE - did you run them once with `run_glue.py` or is it a copy-paste of the paper? <|||||>@patrickvonplaten No, I just copy pasted from the paper 🙈. Should I try fine-tuning it? Maybe, that itself can be the demo?<|||||>I am checking the checkpoint conversion. Ideally, there should be less than `1e-3`/`1e-4` differences in the outputs. I'm not sure how to exactly fix this, but the arg-sorted order of the predictions is different for the PyTorch and the Flax model. :/ For different fourier transforms, I matched them against `np.fft.fftn` and `jnp.fft.fftn`, both give at best `1e-4` match, which means the problem is not the fourier transform. I'll do a layer-wise debugging and update here. Nonetheless, the original masked LM weights lead to similar predictions, so fine-tuning example will be helpful. EDIT ------ The issue was that the original implementation uses gelu from BERT, which is equivalent to `gelu_new`, I suppose. 
Changing the activation to `gelu_new` leads to a `1e-4` match on all logits and sequence output ^_^ I am still working on verifying model outputs.<|||||>The original MLM model performs decently for the following: "the man worked as a [MASK]." The masked token top-10 predictions are: ``` man person use guide work example reason source one right ``` I had to modify the tokens as expected by the model. The tokenizer is having issues. The original one gives this output for the text above: ```python [13, 283, 2479, 106, 8, 16657, 6, 16678] [ '▁the', '▁man', '▁worked', '▁as', '▁a', '▁', '[MASK]', '.' ] ``` The tokenizer I wrote is returning this: ```python [13, 283, 2479, 106, 8, 6, 845] ['▁the', '▁man', '▁worked', '▁as', '▁a', '[MASK]', '▁.'] ``` Notice how the space - `▁` is missing and that the period is actually `.` but becomes `▁.` in our tokenizer. Any ideas on why this might be happening? When I change `[MASK]` to `mask`, both lead to same output: ```python [13, 283, 2479, 106, 8, 10469, 16678] ['▁the', '▁man', '▁worked', '▁as', '▁a', '▁mask', '.'] ``` In their [input_pipeline](https://github.com/google-research/google-research/blob/master/f_net/input_pipeline.py#L258), they add the mask, cls and sep ids manually. Hence, they never use `[MASK]` in the text input. So, maybe, it's okay if we get `▁a`, `[MASK]`? But in either case, we shouldn't get `▁.`? How do I handle this? The problem happens in `tokenize`, where we split based on the `[MASK]` token. But if we don't do that, then `[MASK]` is broken into several tokens. `tokenize('.')` results in `▁.` <|||||>I tried fixing the issue using a fix which basically skips first character after a mask token, only if it is not a `no_split_token`. I'm not sure if this is 100% correct. Also, there is an error with `FNetTokenizerFast`, the `[MASK]` token is not working as expected: ```python [4, 13, 283, 2479, 106, 8, 1932, 2594, 16681, 6154, 5] ['[CLS]', '▁the', '▁man', '▁worked', '▁as', '▁a', '▁[', 'mas', 'k', '].', '[SEP]'] ```<|||||>@patrickvonplaten @LysandreJik @sgugger Can you please check the tokenizer once when you get a chance? Once that is working, I can proceed with the fine-tuning without any issues.<|||||>Checking now!<|||||>Here @gchhablani, I looked into it and the tokenizer actually looks correct to me. See this colab: https://colab.research.google.com/drive/1QC4yvSHk0DSOObD6U2fbUE-9-6W3D3_F?usp=sharing Note that in the original code tokens are just "manually" replaced by the "[MASK]" token. So in the colab above, if the token for "guide" (3106) is replaced by the mask token id in the original code then the current tokenizer would be correct I'm wondering whether the model is actually the same. Checking this now...<|||||>@patrickvonplaten The tokenizer is working as expected because of the fixed I pushed in the previous commit. 
It handles the mask token, but I am not 100% sure if it is correct or if there is a better way to deal with this.<|||||>BTW, to fix the pipelines torch tests I think you just have to rebase to current master (or merge master into your branch :-) ) <|||||>@patrickvonplaten There is an issue with `FNetTokenizerFast`: ```python >>> from src.transformers.models.fnet.tokenization_fnet_fast import FNetTokenizerFast 2021-08-30 20:30:09.070691: W tensorflow/stream_executor/platform/default/dso_loader.cc:64] Could not load dynamic library 'libcudart.so.11.0'; dlerror: libcudart.so.11.0: cannot open shared object file: No such file or directory 2021-08-30 20:30:09.070758: I tensorflow/stream_executor/cuda/cudart_stub.cc:29] Ignore above cudart dlerror if you do not have a GPU set up on your machine. >>> tokenizer = FNetTokenizerFast.from_pretrained('google/fnet-base') >>> text = "the man worked as a [MASK]." >>> tokenizer.encode(text) [4, 13, 283, 2479, 106, 8, 1932, 2594, 16681, 6154, 5] >>> tokenizer.tokenize(text) ['▁the', '▁man', '▁worked', '▁as', '▁a', '▁[', 'mas', 'k', '].'] ``` The `[MASK]` should not get tokenized. Any idea why this might be happening?<|||||>@gchhablani it seems that FNet was trained with a SPM vocab, so the corect masking token should be `<mask>` :)<|||||>@stefan-it I haven't worked with sentencepiece before so I'm not sure. But, in the [original code](https://github.com/google-research/google-research/blob/8077479d91cca79b16417055511b7744c155c344/f_net/input_pipeline.py#L256-L258), they specify `[CLS], [SEP], [MASK]` explicitly. However, they do not use the `[MASK]` string token anywhere, but only the id. What do you think about this? If changing to `<mask>` will fix things, then we can go with it. I will try it out.<|||||>Hi @gchhablani , oh I'm sorry I haven't yet read the official implementation. But it seems that they're really using `[MASK]` as the masking token (as previously done in [ALBERT](https://github.com/google-research/albert#sentencepiece)).<|||||>> @patrickvonplaten > > There is an issue with `FNetTokenizerFast`: > > ```python > >>> from src.transformers.models.fnet.tokenization_fnet_fast import FNetTokenizerFast > 2021-08-30 20:30:09.070691: W tensorflow/stream_executor/platform/default/dso_loader.cc:64] Could not load dynamic library 'libcudart.so.11.0'; dlerror: libcudart.so.11.0: cannot open shared object file: No such file or directory > 2021-08-30 20:30:09.070758: I tensorflow/stream_executor/cuda/cudart_stub.cc:29] Ignore above cudart dlerror if you do not have a GPU set up on your machine. > >>> tokenizer = FNetTokenizerFast.from_pretrained('google/fnet-base') > >>> text = "the man worked as a [MASK]." > >>> tokenizer.encode(text) > [4, 13, 283, 2479, 106, 8, 1932, 2594, 16681, 6154, 5] > >>> tokenizer.tokenize(text) > ['▁the', '▁man', '▁worked', '▁as', '▁a', '▁[', 'mas', 'k', '].'] > ``` > > The `[MASK]` should not get tokenized. Any idea why this might be happening? I can fix this once the changes to how `token_type_ids` are generated are applied :-)
transformers
13,044
closed
MLM example not able to run_mlm_flax.py
I am going through this mlm exaxmple on Google TPU VM instance v3-8 https://github.com/huggingface/transformers/tree/master/examples/flax/language-modeling I have defined MODEL_DIR with: export MODEL_DIR="./norwegian-roberta-base" I have defined symbolic link with: ln -s home/Admin/Research/transformers/examples/flax/language-modeling/run_mlm_flax.py norwegian-roberta-base/run_mlm_flax.py I am running with remove VS code and am able to run first 2 steps. now at run_mlm_flax_py if I run referring to symbolic link I am getting: ![image](https://user-images.githubusercontent.com/25264037/128644570-b7b04397-2077-4420-9f25-8e9277473939.png) If I run directly the original script I am getting: ![image](https://user-images.githubusercontent.com/25264037/128644580-68089b5c-d50a-4a90-8501-5b4945ada394.png) ![image](https://user-images.githubusercontent.com/25264037/128644591-fd843934-a67f-4714-b47e-5847404ac4a9.png) Do you have some idea what I have done wrong?
08-08-2021 20:19:59
08-08-2021 20:19:59
Hey @R4ZZ3, Can you run: ```transformers-cli env``` And post the output here?<|||||>Sure, I will run it late this evening and post output here (UTC +3)<|||||>Ok, found time in between the day @patrickvonplaten - `transformers` version: 4.3.3 - Platform: Linux-5.4.0-1043-gcp-x86_64-with-debian-bullseye-sid - Python version: 3.7.11 - PyTorch version (GPU?): not installed (NA) - Tensorflow version (GPU?): not installed (NA) - Using GPU in script?: To my knowledge this script should only use TPUs - Using distributed or parallel set-up in script?: To my knowledge processing is spread out to 8 tpu cores<|||||>I did pull changes. Tokenizer saving gives error (Doing this from norwegian-roberta-base folder) ![image](https://user-images.githubusercontent.com/25264037/128717682-4484f027-2598-4ff5-848d-ecfe4c43ea51.png) tokenizer.save(./tokenizer.json) works<|||||>I was able to fix symbolic link issue with by giving full paths but still have the same error with. Also FYI installed Pytorch 1.9 as I remember from Flax event that for some things it was necessary to have for some processing but no change to error ![image](https://user-images.githubusercontent.com/25264037/128721317-330a80c4-d9c1-46f9-9ca2-794ab76f4726.png) <|||||>Hey @R4ZZ3, Could you please update your transformer version to a newer one? Ideally master for Flax examples as they have been added very recently?<|||||>Sure thing, ill try<|||||>Ok, now seems to move further @patrickvonplaten Thanks! Still had to save tokenizer with tokenizer.save(./tokenizer.json) though ![image](https://user-images.githubusercontent.com/25264037/128925529-ced5f775-e9a4-43dc-bc0e-58085de28386.png)
transformers
13,043
closed
[DeepSpeed] DeepSpeed 0.4.4 does not run with Wav2Vec2 pretraining script
## Environment info <!-- You can run the command `transformers-cli env` and copy-and-paste its output below. Don't forget to fill out the missing fields in that output! --> - `transformers` version: 4.10.0.dev0 - Platform: Linux-5.11.0-25-generic-x86_64-with-glibc2.33 - Python version: 3.9.1 - PyTorch version (GPU?): 1.9.0.dev20210217 (True) - Using GPU in script?: yes - Using distributed or parallel set-up in script?: yes - Deepspeed: 0.4.4 - CUDA Version: 11.2 - GPU: 4 x TITAN RTX ### Who can help @stas00 ## To reproduce Running the Wav2Vec2 pre-training script: https://github.com/huggingface/transformers/blob/master/examples/research_projects/wav2vec2/README.md#pretraining-wav2vec2 with the versions as defined above (on vorace) yields the following error: <details> <summary>Click for error message</summary> <br> ``` Using amp fp16 backend [2021-08-08 14:05:34,113] [INFO] [logging.py:68:log_dist] [Rank 0] DeepSpeed info: version=0.4.4, git-hash=unknown, git-branch=unknown [2021-08-08 14:05:35,930] [INFO] [utils.py:11:_initialize_parameter_parallel_groups] data_parallel_size: 4, parameter_parallel_size: 4 [2021-08-08 14:57:51,866] [INFO] [engine.py:179:__init__] DeepSpeed Flops Profiler Enabled: False Using /home/patrick/.cache/torch_extensions as PyTorch extensions root... Creating extension directory /home/patrick/.cache/torch_extensions/cpu_adam... Using /home/patrick/.cache/torch_extensions as PyTorch extensions root... Using /home/patrick/.cache/torch_extensions as PyTorch extensions root... Using /home/patrick/.cache/torch_extensions as PyTorch extensions root... Detected CUDA files, patching ldflags Emitting ninja build file /home/patrick/.cache/torch_extensions/cpu_adam/build.ninja... Building extension module cpu_adam... Allowing ninja to set a default number of workers... 
(overridable by setting the environment variable MAX_JOBS=N) [1/3] /usr/bin/nvcc --generate-dependencies-with-compile --dependency-output custom_cuda_kernel.cuda.o.d -DTORCH_EXTENSION_NAME=cpu_adam -DTORCH_API_INCLUDE_EXTENSION_H -DPYBIND11_COMPILER_TYPE=\"_gcc\" -DPYBIND11_STDLIB=\"_libstdcpp\" -DPYBIND11_BUILD_ABI=\"_cxxabi1011\" -I/home/patrick/anaconda3/envs/hu gging_face/lib/python3.9/site-packages/deepspeed/ops/csrc/includes -I/usr/include -isystem /home/patrick/anaconda3/envs/hugging_face/lib/python3.9/site-packages/torch/include -isystem /home/patrick/anaconda3/envs/hugging_face/lib/python3.9/site-packages/torch/include/torch/csrc/api/include -isystem /home/ patrick/anaconda3/envs/hugging_face/lib/python3.9/site-packages/torch/include/TH -isystem /home/patrick/anaconda3/envs/hugging_face/lib/python3.9/site-packages/torch/include/THC -isystem /home/patrick/anaconda3/envs/hugging_face/include/python3.9 -D_GLIBCXX_USE_CXX11_ABI=0 -D__CUDA_NO_HALF_OPERATORS__ -D_ _CUDA_NO_HALF_CONVERSIONS__ -D__CUDA_NO_BFLOAT16_CONVERSIONS__ -D__CUDA_NO_HALF2_OPERATORS__ --expt-relaxed-constexpr -gencode=arch=compute_75,code=compute_75 -gencode=arch=compute_75,code=sm_75 --compiler-options '-fPIC' -O3 --use_fast_math -std=c++14 -U__CUDA_NO_HALF_OPERATORS__ -U__CUDA_NO_HALF_CONVERS IONS__ -U__CUDA_NO_HALF2_OPERATORS__ -gencode=arch=compute_75,code=sm_75 -gencode=arch=compute_75,code=compute_75 -c /home/patrick/anaconda3/envs/hugging_face/lib/python3.9/site-packages/deepspeed/ops/csrc/adam/custom_cuda_kernel.cu -o custom_cuda_kernel.cuda.o FAILED: custom_cuda_kernel.cuda.o /usr/bin/nvcc --generate-dependencies-with-compile --dependency-output custom_cuda_kernel.cuda.o.d -DTORCH_EXTENSION_NAME=cpu_adam -DTORCH_API_INCLUDE_EXTENSION_H -DPYBIND11_COMPILER_TYPE=\"_gcc\" -DPYBIND11_STDLIB=\"_libstdcpp\" -DPYBIND11_BUILD_ABI=\"_cxxabi1011\" -I/home/patrick/anaconda3/envs/hugging_ face/lib/python3.9/site-packages/deepspeed/ops/csrc/includes -I/usr/include -isystem /home/patrick/anaconda3/envs/hugging_face/lib/python3.9/site-packages/torch/include -isystem /home/patrick/anaconda3/envs/hugging_face/lib/python3.9/site-packages/torch/include/torch/csrc/api/include -isystem /home/patric k/anaconda3/envs/hugging_face/lib/python3.9/site-packages/torch/include/TH -isystem /home/patrick/anaconda3/envs/hugging_face/lib/python3.9/site-packages/torch/include/THC -isystem /home/patrick/anaconda3/envs/hugging_face/include/python3.9 -D_GLIBCXX_USE_CXX11_ABI=0 -D__CUDA_NO_HALF_OPERATORS__ -D__CUDA_ NO_HALF_CONVERSIONS__ -D__CUDA_NO_BFLOAT16_CONVERSIONS__ -D__CUDA_NO_HALF2_OPERATORS__ --expt-relaxed-constexpr -gencode=arch=compute_75,code=compute_75 -gencode=arch=compute_75,code=sm_75 --compiler-options '-fPIC' -O3 --use_fast_math -std=c++14 -U__CUDA_NO_HALF_OPERATORS__ -U__CUDA_NO_HALF_CONVERSIONS__ -U__CUDA_NO_HALF2_OPERATORS__ -gencode=arch=compute_75,code=sm_75 -gencode=arch=compute_75,code=compute_75 -c /home/patrick/anaconda3/envs/hugging_face/lib/python3.9/site-packages/deepspeed/ops/csrc/adam/custom_cuda_kernel.cu -o custom_cuda_kernel.cuda.o /usr/include/c++/10/chrono: In substitution of ‘template<class _Rep, class _Period> template<class _Period2> using __is_harmonic = std::__bool_constant<(std::ratio<((_Period2::num / std::chrono::duration<_Rep, _Period>::_S_gcd(_Period2::num, _Period::num)) * (_Period::den / std::chrono::duration<_Rep, _Pe riod>::_S_gcd(_Period2::den, _Period::den))), ((_Period2::den / std::chrono::duration<_Rep, _Period>::_S_gcd(_Period2::den, _Period::den)) * (_Period::num / 
std::chrono::duration<_Rep, _Period>::_S_gcd(_Period2::num, _Period::num)))>::den == 1)> [with _Period2 = _Period2; _Rep = _Rep; _Period = _Period]’: /usr/include/c++/10/chrono:473:154: required from here /usr/include/c++/10/chrono:428:27: internal compiler error: Segmentation fault 428 | _S_gcd(intmax_t __m, intmax_t __n) noexcept | ^~~~~~ Please submit a full bug report, with preprocessed source if appropriate. See <file:///usr/share/doc/gcc-10/README.Bugs> for instructions. [2/3] c++ -MMD -MF cpu_adam.o.d -DTORCH_EXTENSION_NAME=cpu_adam -DTORCH_API_INCLUDE_EXTENSION_H -DPYBIND11_COMPILER_TYPE=\"_gcc\" -DPYBIND11_STDLIB=\"_libstdcpp\" -DPYBIND11_BUILD_ABI=\"_cxxabi1011\" -I/home/patrick/anaconda3/envs/hugging_face/lib/python3.9/site-packages/deepspeed/ops/csrc/includes -I/usr /include -isystem /home/patrick/anaconda3/envs/hugging_face/lib/python3.9/site-packages/torch/include -isystem /home/patrick/anaconda3/envs/hugging_face/lib/python3.9/site-packages/torch/include/torch/csrc/api/include -isystem /home/patrick/anaconda3/envs/hugging_face/lib/python3.9/site-packages/torch/inc lude/TH -isystem /home/patrick/anaconda3/envs/hugging_face/lib/python3.9/site-packages/torch/include/THC -isystem /home/patrick/anaconda3/envs/hugging_face/include/python3.9 -D_GLIBCXX_USE_CXX11_ABI=0 -fPIC -std=c++14 -O3 -std=c++14 -L/usr/lib64 -lcudart -lcublas -g -Wno-reorder -march=native -fopenmp -D_ _AVX512__ -c /home/patrick/anaconda3/envs/hugging_face/lib/python3.9/site-packages/deepspeed/ops/csrc/adam/cpu_adam.cpp -o cpu_adam.o ninja: build stopped: subcommand failed. Traceback (most recent call last): File "/home/patrick/anaconda3/envs/hugging_face/lib/python3.9/site-packages/torch/utils/cpp_extension.py", line 1662, in _run_ninja_build subprocess.run( File "/home/patrick/anaconda3/envs/hugging_face/lib/python3.9/subprocess.py", line 524, in run raise CalledProcessError(retcode, process.args, subprocess.CalledProcessError: Command '['ninja', '-v']' returned non-zero exit status 1. 
The above exception was the direct cause of the following exception: Traceback (most recent call last): File "/home/patrick/hugging_face/transformers/examples/research_projects/wav2vec2/run_pretrain.py", line 394, in <module> main() File "/home/patrick/hugging_face/transformers/examples/research_projects/wav2vec2/run_pretrain.py", line 390, in main trainer.train() File "/home/patrick/hugging_face/transformers/src/transformers/trainer.py", line 1136, in train deepspeed_engine, optimizer, lr_scheduler = deepspeed_init( File "/home/patrick/hugging_face/transformers/src/transformers/deepspeed.py", line 370, in deepspeed_init model, optimizer, _, lr_scheduler = deepspeed.initialize( File "/home/patrick/anaconda3/envs/hugging_face/lib/python3.9/site-packages/deepspeed/__init__.py", line 126, in initialize engine = DeepSpeedEngine(args=args, File "/home/patrick/anaconda3/envs/hugging_face/lib/python3.9/site-packages/deepspeed/runtime/engine.py", line 194, in __init__ self._configure_optimizer(optimizer, model_parameters) File "/home/patrick/anaconda3/envs/hugging_face/lib/python3.9/site-packages/deepspeed/runtime/engine.py", line 709, in _configure_optimizer basic_optimizer = self._configure_basic_optimizer(model_parameters) File "/home/patrick/anaconda3/envs/hugging_face/lib/python3.9/site-packages/deepspeed/runtime/engine.py", line 778, in _configure_basic_optimizer optimizer = DeepSpeedCPUAdam(model_parameters, File "/home/patrick/anaconda3/envs/hugging_face/lib/python3.9/site-packages/deepspeed/ops/adam/cpu_adam.py", line 79, in __init__ self.ds_opt_adam = CPUAdamBuilder().load() File "/home/patrick/anaconda3/envs/hugging_face/lib/python3.9/site-packages/deepspeed/ops/op_builder/builder.py", line 347, in load return self.jit_load(verbose) File "/home/patrick/anaconda3/envs/hugging_face/lib/python3.9/site-packages/deepspeed/ops/op_builder/builder.py", line 379, in jit_load op_module = load( File "/home/patrick/anaconda3/envs/hugging_face/lib/python3.9/site-packages/torch/utils/cpp_extension.py", line 1074, in load return _jit_compile( File "/home/patrick/anaconda3/envs/hugging_face/lib/python3.9/site-packages/torch/utils/cpp_extension.py", line 1287, in _jit_compile _write_ninja_file_and_build_library( File "/home/patrick/anaconda3/envs/hugging_face/lib/python3.9/site-packages/torch/utils/cpp_extension.py", line 1399, in _write_ninja_file_and_build_library _run_ninja_build( File "/home/patrick/anaconda3/envs/hugging_face/lib/python3.9/site-packages/torch/utils/cpp_extension.py", line 1678, in _run_ninja_build raise RuntimeError(message) from e RuntimeError: Error building extension 'cpu_adam' Loading extension module cpu_adam... 
Traceback (most recent call last): File "/home/patrick/hugging_face/transformers/examples/research_projects/wav2vec2/run_pretrain.py", line 394, in <module> main() File "/home/patrick/hugging_face/transformers/examples/research_projects/wav2vec2/run_pretrain.py", line 390, in main trainer.train() File "/home/patrick/hugging_face/transformers/src/transformers/trainer.py", line 1136, in train deepspeed_engine, optimizer, lr_scheduler = deepspeed_init( File "/home/patrick/hugging_face/transformers/src/transformers/deepspeed.py", line 370, in deepspeed_init model, optimizer, _, lr_scheduler = deepspeed.initialize( File "/home/patrick/anaconda3/envs/hugging_face/lib/python3.9/site-packages/deepspeed/__init__.py", line 126, in initialize engine = DeepSpeedEngine(args=args, File "/home/patrick/anaconda3/envs/hugging_face/lib/python3.9/site-packages/deepspeed/runtime/engine.py", line 194, in __init__ self._configure_optimizer(optimizer, model_parameters) File "/home/patrick/anaconda3/envs/hugging_face/lib/python3.9/site-packages/deepspeed/runtime/engine.py", line 709, in _configure_optimizer basic_optimizer = self._configure_basic_optimizer(model_parameters) File "/home/patrick/anaconda3/envs/hugging_face/lib/python3.9/site-packages/deepspeed/runtime/engine.py", line 778, in _configure_basic_optimizer optimizer = DeepSpeedCPUAdam(model_parameters, File "/home/patrick/anaconda3/envs/hugging_face/lib/python3.9/site-packages/deepspeed/ops/adam/cpu_adam.py", line 79, in __init__ self.ds_opt_adam = CPUAdamBuilder().load() File "/home/patrick/anaconda3/envs/hugging_face/lib/python3.9/site-packages/deepspeed/ops/op_builder/builder.py", line 347, in load return self.jit_load(verbose) File "/home/patrick/anaconda3/envs/hugging_face/lib/python3.9/site-packages/deepspeed/ops/op_builder/builder.py", line 379, in jit_load op_module = load( File "/home/patrick/anaconda3/envs/hugging_face/lib/python3.9/site-packages/torch/utils/cpp_extension.py", line 1074, in load return _jit_compile( File "/home/patrick/anaconda3/envs/hugging_face/lib/python3.9/site-packages/torch/utils/cpp_extension.py", line 1312, in _jit_compile return _import_module_from_library(name, build_directory, is_python_module) File "/home/patrick/anaconda3/envs/hugging_face/lib/python3.9/site-packages/torch/utils/cpp_extension.py", line 1694, in _import_module_from_library file, path, description = imp.find_module(module_name, [path]) File "/home/patrick/anaconda3/envs/hugging_face/lib/python3.9/imp.py", line 296, in find_module raise ImportError(_ERR_MSG.format(name), name=name) ImportError: No module named 'cpu_adam' Loading extension module cpu_adam... 
Traceback (most recent call last): File "/home/patrick/hugging_face/transformers/examples/research_projects/wav2vec2/run_pretrain.py", line 394, in <module> main() File "/home/patrick/hugging_face/transformers/examples/research_projects/wav2vec2/run_pretrain.py", line 390, in main trainer.train() File "/home/patrick/hugging_face/transformers/src/transformers/trainer.py", line 1136, in train deepspeed_engine, optimizer, lr_scheduler = deepspeed_init( File "/home/patrick/hugging_face/transformers/src/transformers/deepspeed.py", line 370, in deepspeed_init model, optimizer, _, lr_scheduler = deepspeed.initialize( File "/home/patrick/anaconda3/envs/hugging_face/lib/python3.9/site-packages/deepspeed/__init__.py", line 126, in initialize engine = DeepSpeedEngine(args=args, File "/home/patrick/anaconda3/envs/hugging_face/lib/python3.9/site-packages/deepspeed/runtime/engine.py", line 194, in __init__ self._configure_optimizer(optimizer, model_parameters) File "/home/patrick/anaconda3/envs/hugging_face/lib/python3.9/site-packages/deepspeed/runtime/engine.py", line 709, in _configure_optimizer basic_optimizer = self._configure_basic_optimizer(model_parameters) File "/home/patrick/anaconda3/envs/hugging_face/lib/python3.9/site-packages/deepspeed/runtime/engine.py", line 778, in _configure_basic_optimizer optimizer = DeepSpeedCPUAdam(model_parameters, File "/home/patrick/anaconda3/envs/hugging_face/lib/python3.9/site-packages/deepspeed/ops/adam/cpu_adam.py", line 79, in __init__ self.ds_opt_adam = CPUAdamBuilder().load() File "/home/patrick/anaconda3/envs/hugging_face/lib/python3.9/site-packages/deepspeed/ops/op_builder/builder.py", line 347, in load return self.jit_load(verbose) File "/home/patrick/anaconda3/envs/hugging_face/lib/python3.9/site-packages/deepspeed/ops/op_builder/builder.py", line 379, in jit_load op_module = load( File "/home/patrick/anaconda3/envs/hugging_face/lib/python3.9/site-packages/torch/utils/cpp_extension.py", line 1074, in load return _jit_compile( File "/home/patrick/anaconda3/envs/hugging_face/lib/python3.9/site-packages/torch/utils/cpp_extension.py", line 1312, in _jit_compile return _import_module_from_library(name, build_directory, is_python_module) File "/home/patrick/anaconda3/envs/hugging_face/lib/python3.9/site-packages/torch/utils/cpp_extension.py", line 1694, in _import_module_from_library file, path, description = imp.find_module(module_name, [path]) File "/home/patrick/anaconda3/envs/hugging_face/lib/python3.9/imp.py", line 296, in find_module raise ImportError(_ERR_MSG.format(name), name=name) ImportError: No module named 'cpu_adam' Exception ignored in: <function DeepSpeedCPUAdam.__del__ at 0x7f1a372f01f0> Traceback (most recent call last): File "/home/patrick/anaconda3/envs/hugging_face/lib/python3.9/site-packages/deepspeed/ops/adam/cpu_adam.py", line 93, in __del__ self.ds_opt_adam.destroy_adam(self.opt_id) AttributeError: 'DeepSpeedCPUAdam' object has no attribute 'ds_opt_adam' Exception ignored in: <function DeepSpeedCPUAdam.__del__ at 0x7f961a8561f0> Traceback (most recent call last): File "/home/patrick/anaconda3/envs/hugging_face/lib/python3.9/site-packages/deepspeed/ops/adam/cpu_adam.py", line 93, in __del__ self.ds_opt_adam.destroy_adam(self.opt_id) AttributeError: 'DeepSpeedCPUAdam' object has no attribute 'ds_opt_adam' Exception ignored in: <function DeepSpeedCPUAdam.__del__ at 0x7fcc1f2be1f0> Traceback (most recent call last): File "/home/patrick/anaconda3/envs/hugging_face/lib/python3.9/site-packages/deepspeed/ops/adam/cpu_adam.py", line 93, in 
__del__ self.ds_opt_adam.destroy_adam(self.opt_id) AttributeError: 'DeepSpeedCPUAdam' object has no attribute 'ds_opt_adam' Exception ignored in: <function DeepSpeedCPUAdam.__del__ at 0x7f5bbf5cc1f0> Traceback (most recent call last): File "/home/patrick/anaconda3/envs/hugging_face/lib/python3.9/site-packages/deepspeed/ops/adam/cpu_adam.py", line 93, in __del__ AttributeError: 'DeepSpeedCPUAdam' object has no attribute 'ds_opt_adam' Killing subprocess 1135563 Killing subprocess 1135564 Killing subprocess 1135565 Killing subprocess 1135566 Traceback (most recent call last): File "/home/patrick/anaconda3/envs/hugging_face/lib/python3.9/runpy.py", line 197, in _run_module_as_main return _run_code(code, main_globals, None, File "/home/patrick/anaconda3/envs/hugging_face/lib/python3.9/runpy.py", line 87, in _run_code exec(code, run_globals) File "/home/patrick/anaconda3/envs/hugging_face/lib/python3.9/site-packages/deepspeed/launcher/launch.py", line 171, in <module> main() File "/home/patrick/anaconda3/envs/hugging_face/lib/python3.9/site-packages/deepspeed/launcher/launch.py", line 161, in main sigkill_handler(signal.SIGTERM, None) # not coming back File "/home/patrick/anaconda3/envs/hugging_face/lib/python3.9/site-packages/deep ``` </details> ## Expected behavior The script should run without problem.
08-08-2021 15:05:22
08-08-2021 15:05:22
@patrickvonplaten, This is definitely something for Deepspeed and not our integration since you have a segfault in building the kernels: ``` [1/3] /usr/bin/nvcc --generate-dependencies-with-compile --dependency-output custom_cuda_kernel.cuda.o.d -DTORCH_EXTENSION_NAME=cpu_adam -DTORCH_API_INCLUDE_EXTENSION_H -DPYBIND11_COMPILER_TYPE=\"_gcc\" -DPYBIND11_STDLIB=\"_libstdcpp\" -DPYBIND11_BUILD_ABI=\"_cxxabi1011\" -I/home/patrick/anaconda3/envs/hu gging_face/lib/python3.9/site-packages/deepspeed/ops/csrc/includes -I/usr/include -isystem /home/patrick/anaconda3/envs/hugging_face/lib/python3.9/site-packages/torch/include -isystem /home/patrick/anaconda3/envs/hugging_face/lib/python3.9/site-packages/torch/include/torch/csrc/api/include -isystem /home/ patrick/anaconda3/envs/hugging_face/lib/python3.9/site-packages/torch/include/TH -isystem /home/patrick/anaconda3/envs/hugging_face/lib/python3.9/site-packages/torch/include/THC -isystem /home/patrick/anaconda3/envs/hugging_face/include/python3.9 -D_GLIBCXX_USE_CXX11_ABI=0 -D__CUDA_NO_HALF_OPERATORS__ -D_ _CUDA_NO_HALF_CONVERSIONS__ -D__CUDA_NO_BFLOAT16_CONVERSIONS__ -D__CUDA_NO_HALF2_OPERATORS__ --expt-relaxed-constexpr -gencode=arch=compute_75,code=compute_75 -gencode=arch=compute_75,code=sm_75 --compiler-options '-fPIC' -O3 --use_fast_math -std=c++14 -U__CUDA_NO_HALF_OPERATORS__ -U__CUDA_NO_HALF_CONVERS IONS__ -U__CUDA_NO_HALF2_OPERATORS__ -gencode=arch=compute_75,code=sm_75 -gencode=arch=compute_75,code=compute_75 -c /home/patrick/anaconda3/envs/hugging_face/lib/python3.9/site-packages/deepspeed/ops/csrc/adam/custom_cuda_kernel.cu -o custom_cuda_kernel.cuda.o FAILED: custom_cuda_kernel.cuda.o /usr/bin/nvcc --generate-dependencies-with-compile --dependency-output custom_cuda_kernel.cuda.o.d -DTORCH_EXTENSION_NAME=cpu_adam -DTORCH_API_INCLUDE_EXTENSION_H -DPYBIND11_COMPILER_TYPE=\"_gcc\" -DPYBIND11_STDLIB=\"_libstdcpp\" -DPYBIND11_BUILD_ABI=\"_cxxabi1011\" -I/home/patrick/anaconda3/envs/hugging_ face/lib/python3.9/site-packages/deepspeed/ops/csrc/includes -I/usr/include -isystem /home/patrick/anaconda3/envs/hugging_face/lib/python3.9/site-packages/torch/include -isystem /home/patrick/anaconda3/envs/hugging_face/lib/python3.9/site-packages/torch/include/torch/csrc/api/include -isystem /home/patric k/anaconda3/envs/hugging_face/lib/python3.9/site-packages/torch/include/TH -isystem /home/patrick/anaconda3/envs/hugging_face/lib/python3.9/site-packages/torch/include/THC -isystem /home/patrick/anaconda3/envs/hugging_face/include/python3.9 -D_GLIBCXX_USE_CXX11_ABI=0 -D__CUDA_NO_HALF_OPERATORS__ -D__CUDA_ NO_HALF_CONVERSIONS__ -D__CUDA_NO_BFLOAT16_CONVERSIONS__ -D__CUDA_NO_HALF2_OPERATORS__ --expt-relaxed-constexpr -gencode=arch=compute_75,code=compute_75 -gencode=arch=compute_75,code=sm_75 --compiler-options '-fPIC' -O3 --use_fast_math -std=c++14 -U__CUDA_NO_HALF_OPERATORS__ -U__CUDA_NO_HALF_CONVERSIONS__ -U__CUDA_NO_HALF2_OPERATORS__ -gencode=arch=compute_75,code=sm_75 -gencode=arch=compute_75,code=compute_75 -c /home/patrick/anaconda3/envs/hugging_face/lib/python3.9/site-packages/deepspeed/ops/csrc/adam/custom_cuda_kernel.cu -o custom_cuda_kernel.cuda.o /usr/include/c++/10/chrono: In substitution of ‘template<class _Rep, class _Period> template<class _Period2> using __is_harmonic = std::__bool_constant<(std::ratio<((_Period2::num / std::chrono::duration<_Rep, _Period>::_S_gcd(_Period2::num, _Period::num)) * (_Period::den / std::chrono::duration<_Rep, _Pe riod>::_S_gcd(_Period2::den, _Period::den))), ((_Period2::den / std::chrono::duration<_Rep, 
_Period>::_S_gcd(_Period2::den, _Period::den)) * (_Period::num / std::chrono::duration<_Rep, _Period>::_S_gcd(_Period2::num, _Period::num)))>::den == 1)> [with _Period2 = _Period2; _Rep = _Rep; _Period = _Period]’: /usr/include/c++/10/chrono:473:154: required from here /usr/include/c++/10/chrono:428:27: internal compiler error: Segmentation fault 428 | _S_gcd(intmax_t __m, intmax_t __n) noexcept ``` Could you please file a bug report at https://github.com/microsoft/DeepSpeed and tag @RezaYazdaniAminabadi It probably has something to do with your specific environment, since deepspeed==0.4.4 passes all wav2vec2 tests on my setup: ``` $ RUN_SLOW=1 pyt examples/research_projects/wav2vec2/test_wav2vec2_deepspeed.py ====================================================================== test session starts ====================================================================== platform linux -- Python 3.8.10, pytest-6.2.4, py-1.10.0, pluggy-0.13.1 rootdir: /mnt/nvme1/code/huggingface, configfile: pytest.ini plugins: dash-1.20.0, forked-1.3.0, xdist-2.3.0, instafail-0.4.2 collected 16 items examples/research_projects/wav2vec2/test_wav2vec2_deepspeed.py ................ [100%] ==================================================================== short test summary info ==================================================================== PASSED examples/research_projects/wav2vec2/test_wav2vec2_deepspeed.py::TestDeepSpeedWav2Vec2::test_fp16_distributed_zero2_base PASSED examples/research_projects/wav2vec2/test_wav2vec2_deepspeed.py::TestDeepSpeedWav2Vec2::test_fp16_distributed_zero2_robust PASSED examples/research_projects/wav2vec2/test_wav2vec2_deepspeed.py::TestDeepSpeedWav2Vec2::test_fp16_distributed_zero3_base PASSED examples/research_projects/wav2vec2/test_wav2vec2_deepspeed.py::TestDeepSpeedWav2Vec2::test_fp16_distributed_zero3_robust PASSED examples/research_projects/wav2vec2/test_wav2vec2_deepspeed.py::TestDeepSpeedWav2Vec2::test_fp16_non_distributed_zero2_base PASSED examples/research_projects/wav2vec2/test_wav2vec2_deepspeed.py::TestDeepSpeedWav2Vec2::test_fp16_non_distributed_zero2_robust PASSED examples/research_projects/wav2vec2/test_wav2vec2_deepspeed.py::TestDeepSpeedWav2Vec2::test_fp16_non_distributed_zero3_base PASSED examples/research_projects/wav2vec2/test_wav2vec2_deepspeed.py::TestDeepSpeedWav2Vec2::test_fp16_non_distributed_zero3_robust PASSED examples/research_projects/wav2vec2/test_wav2vec2_deepspeed.py::TestDeepSpeedWav2Vec2::test_fp32_distributed_zero2_base PASSED examples/research_projects/wav2vec2/test_wav2vec2_deepspeed.py::TestDeepSpeedWav2Vec2::test_fp32_distributed_zero2_robust PASSED examples/research_projects/wav2vec2/test_wav2vec2_deepspeed.py::TestDeepSpeedWav2Vec2::test_fp32_distributed_zero3_base PASSED examples/research_projects/wav2vec2/test_wav2vec2_deepspeed.py::TestDeepSpeedWav2Vec2::test_fp32_distributed_zero3_robust PASSED examples/research_projects/wav2vec2/test_wav2vec2_deepspeed.py::TestDeepSpeedWav2Vec2::test_fp32_non_distributed_zero2_base PASSED examples/research_projects/wav2vec2/test_wav2vec2_deepspeed.py::TestDeepSpeedWav2Vec2::test_fp32_non_distributed_zero2_robust PASSED examples/research_projects/wav2vec2/test_wav2vec2_deepspeed.py::TestDeepSpeedWav2Vec2::test_fp32_non_distributed_zero3_base PASSED examples/research_projects/wav2vec2/test_wav2vec2_deepspeed.py::TestDeepSpeedWav2Vec2::test_fp32_non_distributed_zero3_robust ================================================================ 16 passed in 491.63s (0:08:11) 
================================================================= ``` Also yours is python-3.9, do you have access to 3.8 by chance to validate if perhaps it's a py39 incompatibility? Mine is 3.8. <|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread. Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored.<|||||>It's actually solved - thanks for the help :-)
transformers
13,042
closed
Squad bert
# What does this PR do? <!-- Congratulations! You've made it this far! You're not quite done yet though. Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution. Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change. Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost. --> <!-- Remove if not applicable --> Fixes # (issue) ## Before submitting - [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case). - [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section? - [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link to it if that's the case. - [ ] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/master/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/master/docs#writing-source-documentation). - [ ] Did you write any new necessary tests? ## Who can review? Anyone in the community is free to review the PR once the tests have passed. Feel free to tag members/contributors who may be interested in your PR. <!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @ If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**. Please tag fewer than 3 people. Models: - albert, bert, xlm: @LysandreJik - blenderbot, bart, marian, pegasus, encoderdecoder, t5: @patrickvonplaten, @patil-suraj - longformer, reformer, transfoxl, xlnet: @patrickvonplaten - fsmt: @stas00 - funnel: @sgugger - gpt2: @patrickvonplaten, @LysandreJik - rag: @patrickvonplaten, @lhoestq - tensorflow: @LysandreJik Library: - benchmarks: @patrickvonplaten - deepspeed: @stas00 - ray/raytune: @richardliaw, @amogkam - text generation: @patrickvonplaten - tokenizers: @n1t0, @LysandreJik - trainer: @sgugger - pipelines: @LysandreJik Documentation: @sgugger HF projects: - datasets: [different repo](https://github.com/huggingface/datasets) - rust tokenizers: [different repo](https://github.com/huggingface/tokenizers) Examples: - maintained examples (not research project or legacy): @sgugger, @patil-suraj - research_projects/bert-loses-patience: @JetRunner - research_projects/distillation: @VictorSanh -->
08-08-2021 14:04:06
08-08-2021 14:04:06
Please disregard.
transformers
13,041
closed
Script to convert the bart model from pytorch checkpoint to tensorflow checkpoint
# Feature request Request for a script to convert a BART model from a PyTorch checkpoint to a TensorFlow checkpoint. # Solution https://github.com/huggingface/transformers/blob/master/src/transformers/convert_pytorch_checkpoint_to_tf2.py
08-08-2021 12:34:46
08-08-2021 12:34:46
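Until a BART entry is wired into `convert_pytorch_checkpoint_to_tf2.py`, a rough workaround sketch is to load the PyTorch weights directly into the TF class with `from_pt=True` and re-save them. The paths below are placeholders, and this assumes the checkpoint was saved with `save_pretrained()`:

```python
# Sketch of a workaround, not an official conversion script.
from transformers import TFBartForConditionalGeneration

tf_model = TFBartForConditionalGeneration.from_pretrained(
    "path/to/pytorch_bart_checkpoint",  # placeholder: directory with config.json + pytorch_model.bin
    from_pt=True,                       # load the PyTorch weights into the TF model
)
tf_model.save_pretrained("path/to/tf_bart_checkpoint")  # writes the TF weights + config.json
```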
transformers
13,040
closed
Add try-except for torch_scatter
Add an error message for the CUDA version mismatch of `torch_scatter`.
08-08-2021 10:18:04
08-08-2021 10:18:04
I don't think this will work. It's a `RuntimeError` being raised by `torch_scatter`, not an `OSError`. See the specific code at [line 59 of `__init__.py`](https://github.com/rusty1s/pytorch_scatter/blob/2.0.8/torch_scatter/__init__.py#L59). Also, this replaces the existing informative error message from `torch_scatter` with a less informative one.<|||||>@aphedges Thanks for the note - I have edited the description that does not indicate the association with your issue anymore. Also, the intention of this PR is to simply circumvent the error since in most cases, people just don't use TAPAS but still get blocked by this error. Also, the original `torch_scatter` error message is not informative at all. It just says some file cannot be located and after some googling, I realize it's due to the CUDA version. So I'm basically replacing that with my googled solution.<|||||>@JetRunner, thanks for editing the description! Sorry about my note about `RuntimeError` vs. `OSError` earlier. I think I got confused by the fact that `torch-scatter` explicitly throws a runtime error for some CUDA version mismatches, but the error you're logging here is for a different CUDA version mismatch that doesn't have a good error message. I think I had to google this one, too, so your error message is definitely an improvement.
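For illustration, a minimal sketch of the guarded-import pattern this PR is about; the exact exception types and wording in the merged change may differ:

```python
# Sketch only: make torch_scatter optional and surface a CUDA-mismatch hint on failure.
import warnings

try:
    import torch_scatter  # noqa: F401  # optional dependency, only needed by TAPAS-style models
    _scatter_available = True
except (ImportError, OSError, RuntimeError) as err:
    _scatter_available = False
    warnings.warn(
        "torch_scatter could not be imported (often a CUDA version mismatch between "
        f"torch_scatter and the installed torch build): {err}"
    )
```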
transformers
13,039
closed
Remove usage of local variables related with model parallel and move …
# What does this PR do? This PR is related with [model parallel integration from Parallelformers](https://github.com/huggingface/transformers/issues/12772). You can check detail of PR here: https://github.com/tunib-ai/parallelformers/issues/11#issuecomment-894719918 ## Before submitting - [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case). - [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section? - [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link to it if that's the case. - [ ] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/master/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/master/docs#writing-source-documentation). - [ ] Did you write any new necessary tests? ## Who can review? @stas00
08-08-2021 08:10:08
08-08-2021 08:10:08
Most of these modifications are encoder-decoder models from Bart's code and encoder models that have token type IDs. As you said, it is difficult to work on all models at once, so I will exclude the case where the model needs to be modified. I also agree that modifying multiple models at the same time makes it difficult to test. First, let's start with one decoder model like GPT-Neo. I will close this PR and upload a new one soon.<|||||>One more note: besides the dozens of models we also have a template. In this case it's mostly: https://github.com/huggingface/transformers/blob/master/templates/adding_a_new_model/cookiecutter-template-%7B%7Bcookiecutter.modelname%7D%7D/modeling_%7B%7Bcookiecutter.lowercase_modelname%7D%7D.py so when all is happy here, please let's not forget to apply the changes there as well. <|||||>I would like an approach that does one model first, so we can clearly comment on the design, then all models after (unless it's very different for each model, in which case, similar models by similar models if that makes sense). As for the changes in themselves, I would need a clear explanation as to why the `token_type_ids` device needs to be changed from the position_ids device. That kind of code should not be present in the modeling files as is, as people adding or tweaking models won't need/understand it. We can abstract away things in `PreTrainedModel` as you suggest @stas00, that seems like a better approach. Or maybe a method that creates those `token_type_ids` properly, at the very least.
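As a sketch of that last suggestion (a helper that builds `token_type_ids` without hard-coding devices in the modeling files), something along these lines could work; the method name here is hypothetical:

```python
import torch

def _default_token_type_ids(input_ids: torch.Tensor) -> torch.Tensor:
    """Hypothetical helper: all-zero token type IDs with the same shape and device as the inputs."""
    return torch.zeros_like(input_ids)
```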
transformers
13,038
closed
Check in PreTrainedTokenizer can cause incorrect tokenization
## Environment info - `transformers` version: 4.5.1 - Platform: Linux-5.4.0-1047-azure-x86_64-with-glibc2.10 - Python version: 3.8.8 - PyTorch version (GPU?): 1.9.0a0+2ecb2c7 (True) - Tensorflow version (GPU?): not installed (NA) - Using GPU in script?: NA - Using distributed or parallel set-up in script?: NA ### Who can help @LysandreJik ## Information [This check](https://github.com/huggingface/transformers/blob/7fcee113c163a95d1b125ef35dc49a0a1aa13a50/src/transformers/tokenization_utils.py#L336) in `PreTrainedTokenizer` can cause incorrect tokenization (and subsequent encoding) for space only sequences (or sequences with leading and trailing spaces). This can be problematic for byte only models (byT5 etc.), can cause inconsistent tokenizations between `Tokenzer` and `TokenizerFast` classes and can cause issues wherever the code assumes non-destructive behaviour of a tokenizer. ## To reproduce Steps to reproduce the behavior: ```python from transformers import AutoTokenizer tokenizer = AutoTokenizer.from_pretrained("roberta-base", use_fast=False) tokenizer_fast = AutoTokenizer.from_pretrained("roberta-base") # Correct Tokenization out = tokenizer_fast.tokenize(' ') # The above results in ['Ġ'], which is correct # Incorrect Tokenization out = tokenizer.tokenize(' ') # The above results in [], which is incorrect # Example 2. assert ' ' == tokenizer.decode(tokenizer.encode(' ', add_special_tokens=False)) # This will fail, since '' != ' ' ``` ## Expected behavior Leading and trailing spaces should be considered during tokenization, especially for non-destructive tokenizers. ## Proposed Solution Changing the check from ```python if not text.strip(): return [] ``` To ```python if len(text) == 0: # or if not text: return [] ``` should be okay. Alternatively, having a flag (eg: remove_extra_whitespaces), and enabling the current behaviour only for the case when the flag is passed as True would also work.
08-08-2021 06:01:39
08-08-2021 06:01:39
May be of interest to @SaulLu <|||||>Thank you very much for the detailed issue @codedecde ! This check had been integrated to solve non-deterministic tokenization problems and I think this solution had been retained because we did not see a use case at the time to tokenize a sentence containing only spaces (see [issue](https://github.com/huggingface/transformers/issues/2027) and [PR](https://github.com/huggingface/transformers/pull/2081)). Could you please explain in which case you need to tokenize a sentence containing only a space? Thank you very much in advance!<|||||>Hi @SaulLu. Thank you for responding, and really sorry for the late response. My use-case is a little niche. I am training byte level encoder models. In order to do the masking, I am using a BPE tokenizer with dropout, and remapping it back to the byte level. Eg: ```[python] tokenized = tokenizer.tokenize("Huggingface is awesome") # ['Hug', 'ging', 'face', 'Ġ', 'is', 'Ġawesome'] inputs_with_mask, masked_tokens = mask_function(tokenized) # ['Hug', 'ging', <mask>, **<mask>**, 'is', 'Ġawesome'], [<pad>, <pad>, 'face', **'Ġ',** <pad>, <pad>] # The marked 'Ġ' token will get destroyed later because of the issue decoded_text = byte_tokenizer.decode(inputs_with_mask) # Hugging<mask><**mask>**is awesome model_inputs, model_outputs = byte_tokenizer.encode(decoded_text, masked_tokens) # ['H', 'u', 'g', 'g', 'i', 'n', 'g', <mask>, <mask>, <mask>, <mask>, **<mask>**, 'i', 's', ' ', 'a', 'w', 'e', 's', 'o', 'm', 'e'] # model_outputs = [<pad>,<pad>,<pad>,<pad>,<pad>,<pad>,<pad>, 'f', 'a', 'c', 'e', **''**, <pad>, ...] ``` In the above example, the mask inclosed between ** ** and its associated label are impacted by the problem mentioned. Since it is a niche use-case, having this as a kwarg flag enabled behaviour would be quite helpful (eg: by default, trailing and leading spaces are always stripped out, except when the flag is set to true ). <|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread. Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored.
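To make the flag proposal from the issue concrete, here is a small self-contained sketch; the flag name `remove_extra_whitespaces` is taken from the proposal and is not an existing tokenizer argument:

```python
def should_return_empty(text: str, remove_extra_whitespaces: bool = True) -> bool:
    """Sketch of the proposed check: only skip truly empty input when the flag is off."""
    if remove_extra_whitespaces:
        return not text.strip()  # current behaviour: whitespace-only input tokenizes to []
    return len(text) == 0        # proposed behaviour: leading/trailing spaces are preserved

print(should_return_empty(" "))                                  # True  (current default)
print(should_return_empty(" ", remove_extra_whitespaces=False))  # False (spaces kept)
```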
transformers
13,037
closed
Spanish NER bad extraction
## Environment info - `transformers` version: 4.9.1 - Platform: linux-ubuntu 20.04.2 LTS x86_64 - Python version: 3.7.6 - PyTorch version (GPU?): No - Tensorflow version (GPU?): No - Using GPU in script?: No - Using distributed or parallel set-up in script?: No ### Details I used this hugginface model for NER extraction https://huggingface.co/mrm8488/bert-spanish-cased-finetuned-ner Input: "Efrain Avella" Expected output : { "entity_group": "PER", "score": 0.9992852807044983, "word": "Efrain Avella", "start": 0, "end": 12 } Transformers output: { "entity_group": "PER", "score": 0.9990411400794983, "word": "E", "start": 0, "end": 1 }, { "entity_group": "PER", "score": 0.8103020787239075, "word": "##frain Avella", "start": 1, "end": 13 }
08-08-2021 00:44:54
08-08-2021 00:44:54
Hello! Did you try with the `aggregation_strategy` parameter as mentioned in the [docs](https://huggingface.co/transformers/main_classes/pipelines.html#tokenclassificationpipeline)?<|||||>No, I used `grouped_entities`, but I saw that it is deprecated. Thanks, I will try it with `aggregation_strategy`.
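For reference, a minimal way to try this on the example from the issue; `aggregation_strategy="first"` (or `"average"`/`"max"`) merges word pieces such as `E` + `##frain` back into one entity, though the scores will of course depend on the model:

```python
from transformers import pipeline

ner = pipeline(
    "ner",
    model="mrm8488/bert-spanish-cased-finetuned-ner",
    aggregation_strategy="first",  # fuse sub-word pieces of the same word
)
print(ner("Efrain Avella"))
```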
transformers
13,036
closed
Do the Trainer docs need an update?
On [this](https://huggingface.co/transformers/main_classes/trainer.html) documentation page regarding `Trainer`, `torch.utils.data.dataset.Dataset` is mentioned. However, I can only seem to find `torch.utils.data.Dataset` [here](https://pytorch.org/docs/stable/data.html#torch.utils.data.Dataset). Do the docs require an update? The same goes for `IterableDataset`, on the same page.
08-07-2021 21:43:58
08-07-2021 21:43:58
cc @sgugger <|||||>Sure, would you mind making a PR with that?<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread. Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored.
transformers
13,035
closed
Rotate checkpoint `shutil.rmtree(checkpoint)` fails
I was training the `google/mt5-xl` model with DeepSpeed using the Hugging Face `Trainer`. The training was done on an AWS `p3dn.24xlarge` node with 8 V100 GPUs. The Trainer fails when [_rotate_checkpoints](https://github.com/huggingface/transformers/blob/7fcee113c163a95d1b125ef35dc49a0a1aa13a50/src/transformers/trainer.py#L1982) is called, specifically at this [line](https://github.com/huggingface/transformers/blob/7fcee113c163a95d1b125ef35dc49a0a1aa13a50/src/transformers/trainer.py#L2005). Apparently `shutil.rmtree` has this known [issue](https://github.com/ansible/ansible/issues/34335). Error Traceback: ``` Traceback (most recent call last): File "src/train.py", line 93, in <module> main() File "src/train.py", line 74, in main resume_from_checkpoint=checkpoint, File "transformers/src/transformers/trainer.py", line 1328, in train self._maybe_log_save_evaluate(tr_loss, model, trial, epoch) File "transformers/src/transformers/trainer.py", line 1409, in _maybe_log_save_evaluate self._save_checkpoint(model, trial, metrics=metrics) File "transformers/src/transformers/trainer.py", line 1528, in _save_checkpoint self._rotate_checkpoints(use_mtime=True, output_dir=run_dir) File "transformers/src/transformers/trainer.py", line 1954, in _rotate_checkpoints shutil.rmtree(checkpoint) File "/usr/lib/python3.6/shutil.py", line 490, in rmtree onerror(os.rmdir, path, sys.exc_info()) File "/usr/lib/python3.6/shutil.py", line 488, in rmtree os.rmdir(path) OSError: [Errno 39] Directory not empty: '/en-google_mt5-xl-1e-5-1234/checkpoint-320' ``` Maybe a distributed barrier is required for this to work properly. @stas00 @sgugger
08-07-2021 05:46:03
08-07-2021 05:46:03
This code is only called from the main process (controlled by [this test](https://github.com/huggingface/transformers/blob/24cbf6bc5a0b6a9bb5afdda6bb1a329ac980fa4b/src/transformers/trainer.py#L1593)) so it's a not a distributed barrier issue. Unless you are using AWS SageMaker with the model parallel extension, that `should_save` is only True for the local main process (or main process). Could you please give us more information to reproduce the bug?<|||||>I didn't use `AWS SageMaker`. Since you requested for more information, I started to think about a minimal example. The codebase I was working with is too large and contains redundant codes. I took the script `run_summarization.py` and tried running with my environment. What happened that my transformers version was `4.6.0` from this [branch](https://github.com/huggingface/transformers/tree/t5-fp16-no-nans). I change my `tranformers` to `4.10.0.dev0` and the problem goes away. I could not reproduce the error. Closing the issue for the time being. If I face the same error, I will open the issue again. <|||||>Ah, if you were on an older version, the barrier may not have been there, yes.
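For reference only, here is a defensive pattern sometimes used when `shutil.rmtree` genuinely races with writers on networked filesystems; this is not what the `Trainer` does, and in this report the root cause turned out to be an older library version:

```python
import shutil
import time

def rmtree_with_retry(path: str, retries: int = 3, delay: float = 1.0) -> None:
    """Best-effort deletion that retries the 'Directory not empty' race a few times."""
    for attempt in range(retries):
        try:
            shutil.rmtree(path)
            return
        except OSError:
            if attempt == retries - 1:
                raise
            time.sleep(delay)
```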
transformers
13,034
closed
transformers-cli depends on torchaudio optional deps
Looks like we somewhere load some imports that shouldn't be imported when invoking `transformers-cli` Traceback: ``` transformers-cli login 2021-08-06 01:24:22.009842: I tensorflow/stream_executor/platform/default/dso_loader.cc:53] Successfully opened dynamic library libcudart.so.11.0 Traceback (most recent call last): File "/gpfswork/rech/six/commun/conda/hf-prod/bin/transformers-cli", line 33, in <module> sys.exit(load_entry_point('transformers', 'console_scripts', 'transformers-cli')()) File "/gpfswork/rech/six/commun/conda/hf-prod/bin/transformers-cli", line 25, in importlib_load_entry_point return next(matches).load() File "/gpfswork/rech/six/commun/conda/hf-prod/lib/python3.8/importlib/metadata.py", line 77, in load module = import_module(match.group('module')) File "/gpfswork/rech/six/commun/conda/hf-prod/lib/python3.8/importlib/__init__.py", line 127, in import_module return _bootstrap._gcd_import(name[level:], package, level) File "<frozen importlib._bootstrap>", line 1014, in _gcd_import File "<frozen importlib._bootstrap>", line 991, in _find_and_load File "<frozen importlib._bootstrap>", line 975, in _find_and_load_unlocked File "<frozen importlib._bootstrap>", line 671, in _load_unlocked File "<frozen importlib._bootstrap_external>", line 848, in exec_module File "<frozen importlib._bootstrap>", line 219, in _call_with_frames_removed File "/gpfsssd/worksf/projects/rech/six/commun/code/transformers/src/transformers/commands/transformers_cli.py", line 23, in <module> from .run import RunCommand File "/gpfsssd/worksf/projects/rech/six/commun/code/transformers/src/transformers/commands/run.py", line 17, in <module> from ..pipelines import SUPPORTED_TASKS, TASK_ALIASES, Pipeline, PipelineDataFormat, pipeline File "/gpfsssd/worksf/projects/rech/six/commun/code/transformers/src/transformers/pipelines/__init__.py", line 26, in <module> from ..models.auto.feature_extraction_auto import FEATURE_EXTRACTOR_MAPPING, AutoFeatureExtractor File "/gpfsssd/worksf/projects/rech/six/commun/code/transformers/src/transformers/models/auto/feature_extraction_auto.py", line 20, in <module> from transformers import DeiTFeatureExtractor, Speech2TextFeatureExtractor, ViTFeatureExtractor File "<frozen importlib._bootstrap>", line 1039, in _handle_fromlist File "/gpfsssd/worksf/projects/rech/six/commun/code/transformers/src/transformers/file_utils.py", line 1978, in __getattr__ value = getattr(module, name) File "/gpfsssd/worksf/projects/rech/six/commun/code/transformers/src/transformers/file_utils.py", line 1977, in __getattr__ module = self._get_module(self._class_to_module[name]) File "/gpfsssd/worksf/projects/rech/six/commun/code/transformers/src/transformers/file_utils.py", line 1986, in _get_module return importlib.import_module("." + module_name, self.__name__) File "/gpfswork/rech/six/commun/conda/hf-prod/lib/python3.8/importlib/__init__.py", line 127, in import_module return _bootstrap._gcd_import(name[level:], package, level) File "/gpfsssd/worksf/projects/rech/six/commun/code/transformers/src/transformers/models/speech_to_text/feature_extraction_speech_to_text.py", line 23, in <module> import torchaudio.compliance.kaldi as ta_kaldi File "/gpfswork/rech/six/commun/conda/hf-prod/lib/python3.8/site-packages/torchaudio/__init__.py", line 15, in <module> from torchaudio.backend import ( File "/gpfswork/rech/six/commun/conda/hf-prod/lib/python3.8/site-packages/torchaudio/backend/__init__.py", line 2, in <module> from . 
import utils File "/gpfswork/rech/six/commun/conda/hf-prod/lib/python3.8/site-packages/torchaudio/backend/utils.py", line 7, in <module> from . import ( File "/gpfswork/rech/six/commun/conda/hf-prod/lib/python3.8/site-packages/torchaudio/backend/soundfile_backend.py", line 15, in <module> import soundfile File "/gpfswork/rech/six/commun/conda/hf-prod/lib/python3.8/site-packages/soundfile.py", line 142, in <module> raise OSError('sndfile library not found') OSError: sndfile library not found ``` `sndfile` is an optional dependency of `torchaudio`, so it might not be installed. Thank you! I'm pretty sure it's a recent version, but it wasn't me who had this problem, so pasting it as it was given to me. This comes from JeanZay - I installed `libsndfile` to overcome it as a temp workfix. But this tool is a "long way" from needing `libsndfile` to function properly - functionality-wise. update: ``` python -c "import transformers; print(transformers.__version__)" 4.10.0.dev0 ``` @sgugger
08-06-2021 23:52:38
08-06-2021 23:52:38
This looks like `torchaudio` is installed without `sndfile`, in which case there is little we can do on our side. If `torchaudio` is not installed, this code is not executed and the command runs normally, just tried in an environment with or without it. I would need more to reproduce the problem if I misdiagnosed the issue.<|||||>right, it didn't get installed, because `sndfile` is not in `torchaudio` requirements. Only some of its modules require `sndfile` and that's why it's made optional. At least that's the answer I got on torchaudio slack, where I asked first. Specifically on JZ, I don't think we need `torchaudio` so it's probably safe to just remove it. But it might not be the case for other envs. If you can't think of a way to overcome this let's close this then. Thank you for looking into this, @sgugger <|||||>Ah understood then. Not sure how we can check a bit better for the `is_torch_audio_available` check. It should make sure "Soundfile" is installed too maybe? What do you think @patrickvonplaten ?<|||||>That's a great idea! It should probably include checks for all the optional torchaudio deps that transformers audio models use. But also why does `transformers-cli` need to load everything? It has nothing to do with the specific models.<|||||>Yeah I think it's a good idea as well!<|||||>cc https://github.com/huggingface/transformers/issues/12509<|||||>Yeah I don't think `transformers-cli` needs to load all optional packages either<|||||>I'll have a look as to why it's trying to load this model later today.<|||||>Ok I have dived into it a bit and here is the diagnostic. `transformers-cli env` in it self runs `transformers.commands.env` which doesn't import anything by itself (apart from the version and a few functions in `file_utils`). *But* `transformers-cli` in itself imports all the commands (even if we only use env), in particular `run` which requires the pipeline. Now since #13023 this import does not import all the models anymore, so the bug in itself is resolved (if you could try again on your env with the problem @stas00 that would be great).<|||||>Hi - So the underlying problem in torchaudio is that torchaudio assumed that `soundfile` is either installed fine or not, but apparently there is a third state where `soundfile` is installed yet the underlying `libsndfile` is not available. On our end, we will make sure this third state is in consideration so that `import torchaudio` would not raise an error in this case.<|||||>> Ok I have dived into it a bit and here is the diagnostic. `transformers-cli env` in it self runs `transformers.commands.env` which doesn't import anything by itself (apart from the version and a few functions in `file_utils`). > > _But_ `transformers-cli` in itself imports all the commands (even if we only use env), in particular `run` which requires the pipeline. Now since #13023 this import does not import all the models anymore, so the bug in itself is resolved (if you could try again on your env with the problem @stas00 that would be great). I updated to master and confirm that this solved the problem. Thanks a lot, @sgugger! > Hi - So the underlying problem in torchaudio is that torchaudio assumed that `soundfile` is either installed fine or not, but apparently there is a third state where `soundfile` is installed yet the underlying `libsndfile` is not available. On our end, we will make sure this third state is in consideration so that `import torchaudio` would not raise an error in this case. 
That's really useful, thank you for implementing this, @mthrok!
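A rough sketch of the stricter availability check discussed above, i.e. treating `torchaudio` as usable only if its optional `soundfile`/`libsndfile` dependency actually imports; the helper name is hypothetical and the real check in `transformers` may look different:

```python
import importlib.util

def is_torchaudio_usable() -> bool:
    """Hypothetical check: torchaudio counts as available only if soundfile/libsndfile import cleanly."""
    if importlib.util.find_spec("torchaudio") is None:
        return False
    try:
        import soundfile  # noqa: F401  # raises OSError('sndfile library not found') if libsndfile is missing
    except (ImportError, OSError):
        return False
    return True
```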
transformers
13,033
closed
Getting near constant training loss, T5 not learning anything?
## Environment info <!-- You can run the command `transformers-cli env` and copy-and-paste its output below. Don't forget to fill out the missing fields in that output! --> - `transformers` version: 4.9.0/4.9.1 - Platform: Colab/Kaggle - Python version: 3.7.11 - PyTorch version (GPU?): TPU - 1.8.0a0+56b43f4 - Tensorflow version (GPU?): TPU - 2.5.0 ### Who can help <!-- Your issue will be replied to more quickly if you can figure out the right person to tag with @ If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**. Please tag fewer than 3 people. Models: - albert, bert, xlm: @LysandreJik - blenderbot, bart, marian, pegasus, encoderdecoder, t5: @patrickvonplaten, @patil-suraj - longformer, reformer, transfoxl, xlnet: @patrickvonplaten - fsmt: @stas00 - funnel: @sgugger - gpt2: @patrickvonplaten, @LysandreJik - rag: @patrickvonplaten, @lhoestq - tensorflow: @Rocketknight1 Library: - benchmarks: @patrickvonplaten - deepspeed: @stas00 - ray/raytune: @richardliaw, @amogkam - text generation: @patrickvonplaten - tokenizers: @LysandreJik - trainer: @sgugger - pipelines: @LysandreJik Documentation: @sgugger Model hub: - for issues with a model report at https://discuss.huggingface.co/ and tag the model's creator. HF projects: - datasets: [different repo](https://github.com/huggingface/datasets) - rust tokenizers: [different repo](https://github.com/huggingface/tokenizers) Examples: - maintained examples (not research project or legacy): @sgugger, @patil-suraj - research_projects/bert-loses-patience: @JetRunner - research_projects/distillation: @VictorSanh --> @patil-suraj, @sgugger ## Information Model I am using (Bert, XLNet ...): T5 I am trying to finetune T5 on XSum using TPU, but getting near constant training loss and constant validation loss. It's like the model is not learning anything. I tried `t5-small`, `t5-base`, `t5-large`(on kaggle), `google/t5-v1_1-small`, `google/t5-v1_1-base`, but all are giving constant training loss. I applied all the tips from [T5 Finetuning Tips](https://discuss.huggingface.co/t/t5-finetuning-tips/684) thread like using AdaFactor etc. Now, @patil-suraj was able to to train `t5-large` with `max_input_length=512`, `max_output_length=64` and `batch_size=8`. But, I was also able to train `t5-large` with `max_input_length=1024`, `max_output_length=128` and `batch_size=128` on kaggle. I don't know why this is happening. Is it because of some of the layers are frozen by default? Loss for `t5-small`: ![loss](https://user-images.githubusercontent.com/47216475/128595107-5fefef36-a8e7-4858-bb4b-e4382ce09978.jpeg) Eval Loss for 't5-small`: ![eval_loss](https://user-images.githubusercontent.com/47216475/128595211-9b15bcb1-d54b-4fad-b030-52e2575ce917.png) The problem arises when using: * [ ] the official example scripts: (give details below) * [X] my own modified scripts: (give details below) I have modified the script The tasks I am working on is: * [x] an official GLUE/SQUaD task: XSUM * [ ] my own task or dataset: (give details below) ## To reproduce [Colab Link](https://colab.research.google.com/drive/1KEweUQA8LRk_5VyRfAt04a7g1g6GpgrP?usp=sharing) <!-- If you have code snippets, error messages, stack traces please provide them here as well. Important! Use code tags to correctly format your code. 
See https://help.github.com/en/github/writing-on-github/creating-and-highlighting-code-blocks#syntax-highlighting Do not use screenshots, as they are hard to read and (more importantly) don't allow others to copy-and-paste your code.--> Code bits from Colab for overview: Dataset Creation: ```python class MyXSum(Dataset): def __init__(self, Config, tokenizer, split_type): main_ds = load_dataset("xsum") self.model_name = Config.model_checkpoint self.dataset = main_ds[split_type] self.tokenizer = tokenizer if split_type in set(["validation", "test"]): self.required_columns =["input_ids", "attention_mask", "labels"] if split_type == "validation": num_samples = 20 else: num_samples = 20 else: self.required_columns = ["input_ids", "attention_mask", #"decoder_input_ids", "decoder_attention_mask", "labels" ] num_samples = None if num_samples: self.dataset = self.dataset.select(list(range(0, num_samples))) def __len__(self): return self.dataset.shape[0] def preprocess_function(self, examples): _inputs = ["summarize: " + examples["document"]] _target = ["<pad>" + examples["summary"]] model_inputs = self.tokenizer(_inputs, max_length=512, truncation=True, padding="max_length", return_tensors="pt") # Setup the tokenizer for targets with self.tokenizer.as_target_tokenizer(): labels = self.tokenizer(_target, max_length=64, truncation=True, padding="max_length", return_tensors="pt") model_inputs = { "input_ids": model_inputs["input_ids"].squeeze(), "attention_mask": model_inputs["attention_mask"].squeeze(), "decoder_input_ids": labels["input_ids"].squeeze(), "decoder_attention_mask": labels["attention_mask"].squeeze(), "labels": labels["input_ids"].squeeze(), } model_inputs = {k: model_inputs[k] for k in self.required_columns} return model_inputs def __getitem__(self, index): return self.preprocess_function(self.dataset[index]) ``` Model Training: ```python @dataclass class T2TDataCollator(DataCollatorWithPadding): def collate_batch(self, batch: List) -> Dict[str, torch.Tensor]: """ Take a list of samples from a Dataset and collate them into a batch. Returns: A dictionary of tensors """ input_ids = torch.stack([example['input_ids'] for example in batch]) labels = torch.stack([example['decoder_input_ids'] for example in batch]) labels[labels[:, :] == 0] = -100 attention_mask = torch.stack([example['attention_mask'] for example in batch]) decoder_attention_mask = torch.stack([example['decoder_attention_mask'] for example in batch]) return { 'input_ids': input_ids.squeeze(), 'attention_mask': attention_mask.squeeze(), 'labels': labels.squeeze(), 'decoder_attention_mask': decoder_attention_mask.squeeze() } model = AutoModelForSeq2SeqLM.from_pretrained(Config.model_checkpoint) model.train() WRAPPED_MODEL = xmp.MpModelWrapper(model) optimizer = Adafactor(model.parameters(), scale_parameter=False, relative_step=False, warmup_init=False, lr=1e-3) lr_scheduler = AdafactorSchedule(optimizer) data_collator = T2TDataCollator(tokenizer=tokenizer) train_ds = torch.load(Config.train_ds_path) valid_ds = torch.load(Config.valid_ds_path) test_ds = torch.load(Config.test_ds_path) def _mp_fn(index): device = xm.xla_device() model = WRAPPED_MODEL.to(device) print("Loading datasets... 
", end="") training_args = TrainingArguments( output_dir="./results", num_train_epochs=3, warmup_steps=0, evaluation_strategy="epoch", save_strategy="no", weight_decay=0.0, logging_dir="./log", #eval_steps=Config.eval_steps, logging_steps=50, per_device_train_batch_size=128, per_device_eval_batch_size=4, ) #trainer = Seq2SeqTrainer( trainer = Trainer( model=model, args=training_args, data_collator=data_collator, train_dataset=train_ds, eval_dataset=valid_ds, optimizers=(optimizer, lr_scheduler), ) trainer.place_model_on_device = False trainer.train() xmp.spawn(_mp_fn, start_method="fork") ``` ## Expected behavior <!-- A clear and concise description of what you would expect to happen. --> Proper Finetuning of T5
08-06-2021 20:13:38
08-06-2021 20:13:38
In my case, TPU's BF16 datatype caused a fixed loss value. Did you use BF16 for training?<|||||>> In my case, TPU's BF16 datatype caused a fixed loss value. Did you use BF16 for training? Hey @CryptoSalamander, thanks for your reply. I finally found out the issue. My LR was 0.0. I was under the impression that `AdafactorSchedule` would use the `lr` set in the optimizer and change it with every step. But when we use `AdafactorSchedule`, we have to pass in the `initial_lr` or it will default to 0.0, and since relative updates were false (as per the recommendation), the LR remained constant at 0.0.
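To make that resolution concrete, here is a small sketch of the optimizer setup with a non-zero initial learning rate passed to the schedule; the model choice and values are placeholders:

```python
from transformers import AutoModelForSeq2SeqLM
from transformers.optimization import Adafactor, AdafactorSchedule

model = AutoModelForSeq2SeqLM.from_pretrained("t5-small")

optimizer = Adafactor(
    model.parameters(),
    lr=1e-3,
    scale_parameter=False,
    relative_step=False,
    warmup_init=False,
)
# Without initial_lr, the schedule starts (and stays) at 0.0 when relative_step=False.
lr_scheduler = AdafactorSchedule(optimizer, initial_lr=1e-3)
```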
transformers
13,032
closed
Masked word prediction in new language with mBERT/XLM
Hello, Is there a way to easily predict a masked word in a new language (a language other than the source language) using multilingual models like BERT/XLM/XLM-R? Ideally, if my masked sentence is: `My [MASK] is Vivek` Given a target language, say French, I would want the output for [MASK] to be: `nom` (`name` in French) Is it possible to somehow exploit cross-lingual representations for this purpose?
08-06-2021 19:51:32
08-06-2021 19:51:32
Hello, thanks for opening an issue! We try to keep the github issues for bugs/feature requests. Could you ask your question on the [forum](https://discuss.huggingface.co) instead? Thanks!<|||||>Hello, Apologies! I did try to create it on the forum, but unfortunately I don't see any option to "Create Topic", or even reply to posts. I am unsure what the issue is, since I have earned the "Basic" badge and as per my understanding, should be allowed to create topics. Can you please help me out with this - how should I go about creating topics on the Forum? Thanks a lot in advance!<|||||>If you have earned the "Basic" badge, you should see a "+ New Topic" button on the home page, on the left directly under the blue banner.<|||||>I'm sorry, I don't see any blue banner or "add new topic" button. This is the website as it is visible to me: <img width="1325" alt="Screenshot 2021-08-16 at 3 21 40 PM" src="https://user-images.githubusercontent.com/26062692/129578973-ccf35f37-dfa7-4a14-9ffc-f2e00f109a28.png"> I have earned the Basic badge, though: <img width="1217" alt="Screenshot 2021-08-16 at 3 23 52 PM" src="https://user-images.githubusercontent.com/26062692/129579180-ae63cc0a-1bb1-487b-8711-6b05d8436ce3.png"> I feel like I am missing something really obvious here, but not sure what. I have used other forums quite a lot previously and am familiar with the typical interface, but I can't find the create topic button though I have looked everywhere. What's also interesting is that originally, when I didn't have the "Basic" badge and had just created the account, I _could_ see the create topic button. It didn't allow me to post, of course, and gave me an error message saying I didn't have the permissions to do so. Later, when I acquired the Basic badge by reading some topics **on another tab**, I went back to the same tab from earlier and tried re-clicking on "Create topic". It then gave me a different error message, something like "View not allowed". Really strange, but thought it might be worth mentioning. And when I refreshed the page, sure enough, the Create Topic button had disappeared.<|||||>Maybe @Pierrci will have an idea.<|||||>Sorry for the late reply, @Remorax you had been wrongly "slienced" by the system, which was preventing you from creating new topics - should be fixed now!<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread. Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored.
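Back on the original question: a quick way to inspect what a multilingual masked LM predicts is the fill-mask pipeline. Note that mBERT/XLM-R are not translation models, so there is no guarantee the top predictions land in a chosen target language; the snippet below is only a starting point for experimenting with cross-lingual representations:

```python
from transformers import pipeline

fill = pipeline("fill-mask", model="bert-base-multilingual-cased")
# mBERT uses [MASK]; XLM-R checkpoints would use <mask> instead.
for pred in fill("My [MASK] is Vivek."):
    print(pred["token_str"], round(pred["score"], 3))
```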
transformers
13,031
closed
How can I convert a `checkpoint.pth` (a model trained with pytorch-pretrained-bert) to huggingface model with `config.json` and `pytorch_model.bin` file?
# 📚 Migration ## Information <!-- Important information --> Model I am using (Bert, XLNet ...): Language I am using the model on (English, Chinese ...): The problem arises when using: * [ ] the official example scripts: (give details below) * [ ] my own modified scripts: (give details below) The tasks I am working on is: * [ ] an official GLUE/SQUaD task: (give the name) * [ ] my own task or dataset: (give details below) ## Details <!-- A clear and concise description of the migration issue. If you have code snippets, please provide it here as well. Important! Use code tags to correctly format your code. See https://help.github.com/en/github/writing-on-github/creating-and-highlighting-code-blocks#syntax-highlighting Do not use screenshots, as they are hard to read and (more importantly) don't allow others to copy-and-paste your code. --> ## Environment info <!-- You can run the command `python transformers-cli env` and copy-and-paste its output below. Don't forget to fill out the missing fields in that output! --> - `transformers` version: - Platform: - Python version: - PyTorch version (GPU?): - Tensorflow version (GPU?): - Using GPU in script?: - Using distributed or parallel set-up in script?: <!-- IMPORTANT: which version of the former library do you use? --> * `pytorch-transformers` or `pytorch-pretrained-bert` version (or branch): ## Checklist - [x] I have read the migration guide in the readme. ([pytorch-transformers](https://github.com/huggingface/transformers#migrating-from-pytorch-transformers-to-transformers); [pytorch-pretrained-bert](https://github.com/huggingface/transformers#migrating-from-pytorch-pretrained-bert-to-transformers)) - [ ] I checked if a related official extension example runs on my machine.
08-06-2021 18:37:07
08-06-2021 18:37:07
Hello! I believe that `pytorch_pretrained_BERT` followed the same approach of having a pytorch checkpoint and a configuration. Do you have no configuration accompanying your `checkpoint.pth`? What is contained in that file once you load it with `torch.load`?<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread. Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored.
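A rough sketch of the conversion, assuming `checkpoint.pth` holds a plain BERT state dict and the matching architecture/configuration is known; key names may need remapping depending on how the checkpoint was saved (for example the old `gamma`/`beta` LayerNorm names), so inspect the missing/unexpected keys before trusting the result:

```python
import torch
from transformers import BertConfig, BertForPreTraining

config = BertConfig.from_pretrained("bert-base-uncased")  # assumption: same architecture as the checkpoint
model = BertForPreTraining(config)

state_dict = torch.load("checkpoint.pth", map_location="cpu")
missing, unexpected = model.load_state_dict(state_dict, strict=False)
print("missing:", missing, "unexpected:", unexpected)

model.save_pretrained("converted_model")  # writes config.json + pytorch_model.bin
```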
transformers
13,030
closed
Tpu tie weights
# What does this PR do? When the model is moved to an XLA device (like a TPU) its tied weights get disconnected. This PR fixes that.
08-06-2021 15:43:09
08-06-2021 15:43:09
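For context, a minimal sketch of the symptom and the manual workaround (re-tying after the move), assuming a `torch_xla` environment; outside of XLA the extra `tie_weights()` call is harmless:

```python
import torch_xla.core.xla_model as xm
from transformers import BertForMaskedLM

model = BertForMaskedLM.from_pretrained("bert-base-uncased")
model = model.to(xm.xla_device())
model.tie_weights()  # re-tie the input embeddings and the LM head after the move

print(model.get_input_embeddings().weight is model.get_output_embeddings().weight)  # expected: True
```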
transformers
13,029
closed
supporting t5 for question answering
# 🚀 Feature request Hi, this would be great and appreciated to support t5 model for run_qa.py script, currently this does not support it. ## Motivation T5 is the state of the art model, for which there are a lot of motivation for people in NLP community to use this model, specially it can handle multiple datasets. ## Your contribution <!-- Is there any way that you could help, e.g. by submitting a PR? Make sure to read the CONTRIBUTING.MD readme: https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md -->
08-06-2021 13:55:36
08-06-2021 13:55:36
Hey @dorooddorood606, Since T5 is essentially a text-to-text model, the question-answering task can simply be framed as a seq2seq task. I think we could add a `run_qa_seq2seq.py` to https://github.com/huggingface/transformers/tree/master/examples/pytorch/question-answering that is very similar to https://github.com/huggingface/transformers/blob/master/examples/pytorch/summarization/run_summarization.py (we would just need to adapt the dataset to choose from I think). Would you be interested in adding such an example? Also pinging @sgugger here to hear his opinion :-)<|||||>We could definitely add this kind of example. We just need a proper dataset, as you mentioned.<|||||>Dear @sgugger @patrickvonplaten Thank you very much for considering my request. In T5 codebase, for superglue-record, they convert each example to multiple ones for each answer choice [1]. During evaluation though they consider all answer choices. I assume this is the case with most of the QA datasets. In T5 script, since we need seq2seq format, I am not sure how I can handle keeping a set of answers. thank you very much for your comment in advance. [1] https://github.com/google-research/text-to-text-transfer-transformer/blob/3c58859b8fe72c2dbca6a43bc775aa510ba7e706/t5/data/preprocessors.py#L918<|||||>Cool! @dorooddorood606 would you like to give it a try to make a `run_qa_seq2seq.py` example for SUPERGLUE? Happy to guide you through it :-)<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread. Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored.<|||||>unstale, currently wip at #13432 <|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread. Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored.
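A small sketch of the text-to-text framing discussed here, using the `question: ... context: ...` prompt style from the original T5 setup; the checkpoint, prompt, and generation settings are placeholders rather than the final example script:

```python
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

tokenizer = AutoTokenizer.from_pretrained("t5-small")
model = AutoModelForSeq2SeqLM.from_pretrained("t5-small")

context = "Hugging Face is based in New York City."
question = "Where is Hugging Face based?"
inputs = tokenizer(f"question: {question} context: {context}", return_tensors="pt")

# During fine-tuning the target would simply be the tokenized answer string;
# here we only run generation with the pretrained checkpoint.
output_ids = model.generate(**inputs, max_length=16)
print(tokenizer.decode(output_ids[0], skip_special_tokens=True))
```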
transformers
13,028
closed
Fix ONNX test: Put smaller ALBERT model
cc @mfuntowicz
08-06-2021 12:44:45
08-06-2021 12:44:45