repo: stringclasses (1 value)
number: int64 (1 to 25.3k)
state: stringclasses (2 values)
title: stringlengths (1 to 487)
body: stringlengths (0 to 234k)
created_at: stringlengths (19 to 19)
closed_at: stringlengths (19 to 19)
comments: stringlengths (0 to 293k)
transformers
10,514
closed
Bug in Hosted inference API
Hello, I found that when I use the Hosted inference API, there are some problems. Some inference results show the entire sentence, but other inference results only show the mask token. For example, with the model [uer/roberta-base-word-chinese-cluecorpussmall](https://huggingface.co/uer/roberta-base-word-chinese-cluecorpussmall), when I input "中国的首都是北[MASK]", the results are: ![image](https://user-images.githubusercontent.com/59219579/109969085-72079800-7d2e-11eb-876a-2f66e0fd23b2.png) ![image](https://user-images.githubusercontent.com/59219579/109969434-dcb8d380-7d2e-11eb-9f15-84b3229f97cb.png) Thanks
03-04-2021 12:48:40
03-04-2021 12:48:40
This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread. Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored.
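A minimal sketch of how the masked prediction above could be checked locally with the `fill-mask` pipeline; the model identifier is taken from the issue, but the snippet is only an illustration, not the Hosted inference API itself:
```python
from transformers import pipeline

# The word-level Chinese RoBERTa checkpoint mentioned in the issue.
fill_mask = pipeline("fill-mask", model="uer/roberta-base-word-chinese-cluecorpussmall")

# Each result contains the predicted token and the full reconstructed sequence.
for prediction in fill_mask("中国的首都是北[MASK]"):
    print(prediction["token_str"], "->", prediction["sequence"])
```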
transformers
10,513
closed
Add Vision Transformer + ViTFeatureExtractor
# What does this PR do? This PR includes 2 things: * it adds the [Vision Transformer (ViT)](https://arxiv.org/abs/2010.11929) by Google Brain. ViT is a Transformer encoder trained on ImageNet. It is capable of classifying images, by placing a linear classification head on top of the final hidden state of the [CLS] token. I converted the weights from the [timm](https://github.com/rwightman/pytorch-image-models) repository, which already took care of converting the weights of the original implementation (which is written in JAX) into PyTorch. Once this model is added, we can also add [DeIT](https://ai.facebook.com/blog/data-efficient-image-transformers-a-promising-new-technique-for-image-classification/) (Data-efficient Image Transformers) by Facebook AI, which improve upon ViT. * it provides a design for the `ViTFeatureExtractor` class, which can be used to prepare images for the model. It inherits from `FeatureExtractionMixin` and defines a `__call__` method. It currently accepts 3 types of inputs: PIL images, Numpy arrays and PyTorch tensors. It defines 2 transformations using `torchvision`: resizing + normalization. It then returns a `BatchFeature` object with 1 key, namely `pixel_values`. Demo notebook of combination of `ViTForImageClassification` + `ViTFeatureExtractor`: https://colab.research.google.com/drive/16TCM-tJ1Mfhs00Qas063kWZmAtVJcOeP?usp=sharing Compared to NLP models (which accept `input_ids`, `attention_mask` and `token_type_ids`), this model only accepts `pixel_values`. The model itself then converts these pixel values into patches (in case of ViT) in the `ViTEmbeddings` class. ## Help needed Would be great if you can help me with the following tasks: - [x] Add and improve tests. Currently I have defined the following tests: `test_modeling_vit.py`, `test_feature_extraction_vit.py`. However, for the former, since ViT does not use `input_ids`/`input_embeds`, some tests are failing, so I wonder whether it should use all tests defined in `test_modeling_common.py`. For the latter, I also need some help in creating random inputs to test the feature extractor on. - [x] Add support for `head_mask` in the forward of `ViTModel`. Possibly remove `attention_mask`? - [x] Run `make fix-copies` (doesn't work right now for me on Windows) - [x] Remove the `is_decoder` logic from `modeling_vit.py` (since the model was created using the CookieCutter template). I assume that things such as `past_key_values` are not required for an encoder-only model. ## Who can review? @patrickvonplaten @LysandreJik @sgugger
03-04-2021 12:21:22
03-04-2021 12:21:22
Hey @NielsRogge > Add and improve tests. Currently I have defined the following tests: test_modeling_vit.py, test_feature_extraction_vit.py. However, for the former, since ViT does not use input_ids/input_embeds, some tests are failing, so I wonder whether it should use all tests defined in test_modeling_common.py. For the latter, I also need some help in creating random inputs to test the feature extractor on. Some common modeling tests depend on specific parameter names (`input_ids`, `input_embeds`). You could just override such tests in your test class and use the correct parameter names. For example, the `test_forward_signature` test expects `input_ids`, so it should be overridden in your class to expect `input_values`. Also, the tests for `input_embeds` (for example `test_inputs_embeds`) can be skipped since `ViT` does not use those. Again, just override the test and use `pass` in the method body. You could use the modeling tests of `Wav2Vec2` and `Speech2Text` for reference since those models also use different parameter names.<|||||>I like the overall design of `ViTFeatureExtractor`. Regarding the import, I think `ViTFeatureExtractor` should always be imported in the __init__ files, and instead `ViTFeatureExtractor` could check for `torchvision` and raise if it's not installed. Otherwise, the TF tests on CI will fail because they won't be able to import `ViTFeatureExtractor`, as we don't install `torchvision` in TF tests. We should also add the `torchvision` and `PIL` dependencies in the setup.py file as `extras["vision"]` and also add them in `config.yaml` for CI.<|||||>Thanks for the reviews, addressed most of the comments. To do: - [x] rename `self.self` to `self.attention` and update the conversion script accordingly - [x] convert more models, place them under the google namespace - [x] add model cards - [x] add 1,000 ImageNet class names to the config<|||||>I've addressed all comments. Most important updates: * moved the ImageNet id-to-classes dict to a new file under transformers.utils named `imagenet_classes.py`. * added a warning to the `__call__` method of `ViTFeatureExtractor` to indicate that NumPy arrays and PyTorch tensors are converted to PIL images when resizing, so it's most efficient to pass in PIL images. The remaining comments which are still open have to do with styling. I seem to have some issues with `make style`. The max_length is set to 119, so I'm not sure what's causing this.
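A short usage sketch of the `ViTFeatureExtractor` + `ViTForImageClassification` combination this PR describes; the checkpoint name, image URL, and `id2label` lookup are assumptions based on the description, not taken from the PR itself:
```python
import requests
from PIL import Image
from transformers import ViTFeatureExtractor, ViTForImageClassification

# Hypothetical checkpoint under the google namespace mentioned in the to-do list above.
checkpoint = "google/vit-base-patch16-224"
feature_extractor = ViTFeatureExtractor.from_pretrained(checkpoint)
model = ViTForImageClassification.from_pretrained(checkpoint)

image = Image.open(requests.get("http://images.cocodataset.org/val2017/000000039769.jpg", stream=True).raw)

# The feature extractor resizes + normalizes and returns a BatchFeature with "pixel_values".
inputs = feature_extractor(images=image, return_tensors="pt")
logits = model(**inputs).logits
print(model.config.id2label[logits.argmax(-1).item()])
```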
transformers
10,512
closed
Dynamic batch size for Seq2SeqTrainer
# 🚀 Feature request In Fairseq it is possible to forego setting a constant batch size in favor of a dynamic batch size with --max_tokens. This ensures that a batch always consists of at most N = max_tokens tokens. Fairseq tries to get to max_tokens by adding samples to the batch until N = max_tokens or just below. I believe @sshleifer has implemented this for finetune.py here: #7030 Is it possible to add "--max_tokens_per_batch N" as a trainer argument to Seq2SeqTrainer? ## Motivation This would be an invaluable help when training/fine-tuning large models on data sequences (like sentences) of varying length. Long sequences/sentences might lead to OOM errors with a fixed batch size.
03-04-2021 10:35:24
03-04-2021 10:35:24
Pinging @patil-suraj and @sgugger <|||||>Hi @clang88 The goal of the example scripts is to keep them minimal and simple and I'm not sure if we want to support this immediately. For now, you could use the `--group_by_length` argument which will group the long sequences together to avoid varying lengths and minimize the number of padding tokens. Also, to train large models, I would recommend you take a look at the `fairscale/deepspeed` integration. Check this [blog post](https://huggingface.co/blog/zero-deepspeed-fairscale) for how to use `fairscale/deepspeed` with `Trainer` @sgugger think we could add a `MaxTokensSampler` for this in case we want to support this. <|||||>Hi @patil-suraj, thank you for the quick reply! I will take a look at the `fairscale/deepspeed` integration! As for `--group_by_length`: this will only work correctly if I use the trainer with a `data_collator`, am I correct? I have already been experimenting with that approach, but am having some trouble during the evaluation phase with `custom_metrics`. For whatever reason, the labels passed to the function by the trainer appear to be padded with the default of -100, even though I am passing a `label_pad_token_id` of 0 (for mT5) or 1 (for mBART) in the collator. I am aware this is a whole other issue, but maybe you are aware of any potential solutions for this? That said, I am sure `max_tokens_per_batch` would be a great asset, as `group_by_length` does not fix the underlying issue of having batches with very long sentences that go OOM. For now I am just truncating my dataset with `max_length`, but that clearly leads to less than ideal performance of the fine-tuned model.<|||||>> think we could add a MaxTokensSampler for this in case we want to support this It's a whole batch sampler that would be needed, since it results in the batch size not being constant. And it would also need a distributed batch sampler version. This is a lot of work for a very specific use case, so we could accept a PR on an example in a research project first. Of course there is still the possibility of one user using the implementation from FAIR as was done in the old `finetune` script.<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread. Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored.
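A minimal sketch of the kind of max-tokens batch sampler discussed above, written against the plain PyTorch `Sampler` API. This is an illustration of the idea, not code from `transformers` or fairseq, and it counts raw token lengths rather than padded lengths for simplicity:
```python
from torch.utils.data import Sampler

class MaxTokensBatchSampler(Sampler):
    """Group example indices so that each batch holds at most `max_tokens` tokens."""

    def __init__(self, lengths, max_tokens):
        # Sort by length so that examples of similar size land in the same batch.
        indices = sorted(range(len(lengths)), key=lambda i: lengths[i])
        self.batches, batch, batch_tokens = [], [], 0
        for idx in indices:
            if batch and batch_tokens + lengths[idx] > max_tokens:
                self.batches.append(batch)
                batch, batch_tokens = [], 0
            batch.append(idx)
            batch_tokens += lengths[idx]
        if batch:
            self.batches.append(batch)

    def __iter__(self):
        return iter(self.batches)

    def __len__(self):
        return len(self.batches)
```
A batch sampler like this yields lists of indices, so the batch size varies per step, which is exactly why the maintainers note that a distributed variant would also be needed.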
transformers
10,511
closed
Why are the positional embeddings in BERT not implemented with sin/cos as in the original paper? Are these embeddings trainable?
https://github.com/huggingface/transformers/blob/948b730f9777174335812cf76de2a9dd9e4cf20e/src/transformers/models/bert/modeling_bert.py#L172
03-04-2021 05:45:13
03-04-2021 05:45:13
BERT uses absolute position embeddings by default. The sin/cos embeddings are from the original Transformer paper IIRC. <|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread. Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored.
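A short sketch contrasting the two approaches mentioned above: BERT's position embeddings are a regular trainable `nn.Embedding`, while the original Transformer used fixed sinusoidal encodings. This is an illustration, not the library code:
```python
import math
import torch
import torch.nn as nn

hidden_size, max_positions = 768, 512

# BERT-style: a trainable lookup table, updated during pre-training.
learned_position_embeddings = nn.Embedding(max_positions, hidden_size)

# "Attention Is All You Need"-style: fixed sin/cos values, no trainable parameters.
sinusoidal = torch.zeros(max_positions, hidden_size)
position = torch.arange(max_positions, dtype=torch.float).unsqueeze(1)
div_term = torch.exp(torch.arange(0, hidden_size, 2).float() * (-math.log(10000.0) / hidden_size))
sinusoidal[:, 0::2] = torch.sin(position * div_term)
sinusoidal[:, 1::2] = torch.cos(position * div_term)
```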
transformers
10,510
closed
Error in run_squad.py with BartForQuestionAnswering model
I am using the BartForQuestionAnswering model and getting the following error during evaluation `Evaluating: 0%| | 0/315 [00:10<?, ?it/s] Traceback (most recent call last): File "../run_squad.py", line 831, in <module> main() File "../run_squad.py", line 820, in main result = evaluate(args, model, tokenizer, prefix=global_step) File "../run_squad.py", line 325, in evaluate output = [to_list(output[i]) for output in outputs.to_tuple()] File "../run_squad.py", line 325, in <listcomp> output = [to_list(output[i]) for output in outputs.to_tuple()] File "../run_squad.py", line 73, in to_list return tensor.detach().cpu().tolist() AttributeError: 'tuple' object has no attribute 'detach'` It works perfectly fine with the bert-base-uncased model but fails in case of BART model. What could resolve this issue? Thanks
03-04-2021 05:11:50
03-04-2021 05:11:50
Hello! Could you provide all the information required in the issue template? Thank you! <|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread. Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored.
transformers
10,509
closed
Stale Bot
Adds a stale bot based on GitHub Actions. This bot is slightly different than the previous one in that it comments that it is closing the issue and closes it immediately, rather than waiting 7 days. From what I've seen up to now, this shouldn't be an issue at all. I've commented out the code for now so that it doesn't actually close issues, but you can take a look at which would be closed [here](https://github.com/huggingface/transformers/runs/2027394567?check_suite_focus=true). Will un-comment the code before merging.
03-04-2021 02:06:30
03-04-2021 02:06:30
The `GITHUB_TOKEN` got rate-limited, unfortunately. The PR should be good to go, I'll try to run it again tomorrow night.
transformers
10,508
closed
Loading tapas model into pipeline from directory gives different result
Hi, I am using the following versions of these packages : transformers = 4.3.2 pytorch = 1.6.0 I am using the following code to download and save a pretrained model: ```py from transformers import TapasConfig,TapasTokenizer,TapasForQuestionAnswering import torch config = TapasConfig.from_pretrained('google/tapas-base-finetuned-wtq',from_pt=True) model = TapasForQuestionAnswering.from_pretrained('google/tapas-base', config=config) tokenizer=TapasTokenizer.from_pretrained("google/tapas-base-finetuned-wtq",from_pt=True) import sys outdir = sys.argv[1] model.save_pretrained(outdir) tokenizer.save_pretrained(outdir) config.save_pretrained(outdir) ``` When I then feed the model directory into pipeline, I don't get any result for the table illustrated in the documentation.... If I let pipeline download the model on-the-fly, I get results. Here is the code to feed model directory into pipeline... ```py import sys from transformers import pipeline nlp = pipeline(task="table-question-answering",framework="pt",model="tapas_model_dir") #nlp = pipeline(task="table-question-answering") import pandas as pd data= { "actors": ["brad pitt", "leonardo di caprio", "george clooney"], "age": ["56", "45", "59"], "number of movies": ["87", "53", "69"], "date of birth": ["7 february 1967", "10 june 1996", "28 november 1967"] } import numpy as np table = pd.DataFrame.from_dict(data) print(np.shape(table)) result = nlp(query=["How many movies has Brad Pitt acted in","What is Leonardo di caprio's age"],table=table) print(result) ```
03-03-2021 23:48:08
03-03-2021 23:48:08
I noticed that the size of pytorch_model.bin that was downloaded into .cache is of size 442791751. When I save_pretrained() the model, the size is 442792154. If I copy the first model into the model directory, I get valid results....<|||||>Hello! You're using `'google/tapas-base'` in order to initialize weights, why don't you use the variant fine-tuned on WTQ like in the configuration and tokenizer? Unless you're using a fine-tuned version, you won't benefit from the best possible predictions, like you have seen here.<|||||>@LysandreJik , I fixed that typo and changed it to model = TapasForQuestionAnswering.from_pretrained('google/tapas-base-finetuned-wtq', config=config), but I get the same discrepancy in results. I am trying to figure out why I get no results when I load the model from a directory(that was generated as above).<|||||>I'm running your code and I get some sensible results: ```py from transformers import TapasConfig,TapasTokenizer,TapasForQuestionAnswering import torch config = TapasConfig.from_pretrained('google/tapas-base-finetuned-wtq',from_pt=True) model = TapasForQuestionAnswering.from_pretrained('google/tapas-base-finetuned-wtq', config=config) tokenizer=TapasTokenizer.from_pretrained("google/tapas-base-finetuned-wtq",from_pt=True) import sys outdir = "tmp" model.save_pretrained(outdir) tokenizer.save_pretrained(outdir) config.save_pretrained(outdir) from transformers import pipeline nlp = pipeline(task="table-question-answering",framework="pt",model=outdir, tokenizer=outdir) #nlp = pipeline(task="table-question-answering") import pandas as pd data= { "actors": ["brad pitt", "leonardo di caprio", "george clooney"], "age": ["56", "45", "59"], "number of movies": ["87", "53", "69"], "date of birth": ["7 february 1967", "10 june 1996", "28 november 1967"] } import numpy as np table = pd.DataFrame.from_dict(data) print(np.shape(table)) result = nlp(query=["How many movies has Brad Pitt acted in","What is Leonardo di caprio's age"],table=table) print(result) ``` Results in: ``` [{'answer': 'SUM > 87', 'coordinates': [(0, 2)], 'cells': ['87'], 'aggregator': 'SUM'}, {'answer': 'AVERAGE > 45', 'coordinates': [(1, 1)], 'cells': ['45'], 'aggregator': 'AVERAGE'}] ``` The aggregator are a bit off but the results are correct. Brad Pitt has played in 87 movies and Leonardo di Caprio is 45.<|||||>I did additionally specify `tokenizer=outdir`, could that be the source of your issue?<|||||>@LysandreJik , that was it ! I will look into the code, but any insight into why tokenizer=outdir needs to be specified (for this pipeline task only) ?<|||||>@LysandreJik , what is the size of your pytorch_model.bin in outdir ? <|||||>The tokenizer needs to be specified for every pipeline tasks. You should always specify the checkpoint for the tokenizer as well as for the model. The size is `442.79 MB`!
transformers
10,507
closed
'Trainer' object has no attribute 'log_metrics'
I was trying to use run_mlm.py to fine-tune roberta-large with a custom dataset on Google Colab. !python /content/transformers/examples/language-modeling/run_mlm.py \ --model_name_or_path roberta-large \ --train_file /content/traincorpus.txt \ --validation_file /content/devcorpus.txt \ --do_train \ --do_eval \ --per_device_train_batch_size 2 \ --output_dir finetunedmodel \ --overwrite_output_dir True I am able to train the model; however, at the end I cannot see the evaluation due to an error: Traceback (most recent call last): File "/content/transformers/examples/language-modeling/run_mlm.py", line 442, in <module> main() File "/content/transformers/examples/language-modeling/run_mlm.py", line 416, in main trainer.log_metrics("train", metrics) AttributeError: 'Trainer' object has no attribute 'log_metrics' There is nothing extra installed in the Colab environment. I wasn't getting this error a week ago.
03-03-2021 20:04:51
03-03-2021 20:04:51
Hi, this is a duplicate, see #10446 <|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread. Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored.
transformers
10,506
closed
[WIP][BIGBIRD] Add new conversion
# What does this PR do? <!-- Congratulations! You've made it this far! You're not quite done yet though. Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution. Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change. Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost. --> <!-- Remove if not applicable --> Fixes # (issue) ## Before submitting - [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case). - [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section? - [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link to it if that's the case. - [ ] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/master/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/master/docs#writing-source-documentation). - [ ] Did you write any new necessary tests? ## Who can review? Anyone in the community is free to review the PR once the tests have passed. Feel free to tag members/contributors which may be interested in your PR. <!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @ If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**. Please tag fewer than 3 people. Models: - albert, bert, xlm: @LysandreJik - blenderbot, bart, marian, pegasus, encoderdecoder, t5: @patrickvonplaten, @patil-suraj - longformer, reformer, transfoxl, xlnet: @patrickvonplaten - fsmt: @stas00 - funnel: @sgugger - gpt2: @patrickvonplaten, @LysandreJik - rag: @patrickvonplaten, @lhoestq - tensorflow: @jplu Library: - benchmarks: @patrickvonplaten - deepspeed: @stas00 - ray/raytune: @richardliaw, @amogkam - text generation: @patrickvonplaten - tokenizers: @n1t0, @LysandreJik - trainer: @sgugger - pipelines: @LysandreJik Documentation: @sgugger HF projects: - nlp datasets: [different repo](https://github.com/huggingface/nlp) - rust tokenizers: [different repo](https://github.com/huggingface/tokenizers) Examples: - maintained examples (not research project or legacy): @sgugger, @patil-suraj - research_projects/bert-loses-patience: @JetRunner - research_projects/distillation: @VictorSanh -->
03-03-2021 19:59:34
03-03-2021 19:59:34
transformers
10,505
closed
Remove unsupported methods from ModelOutput doc
# What does this PR do? As said in the title :-) Fixes #10469
03-03-2021 19:09:26
03-03-2021 19:09:26
transformers
10,504
closed
Rework TPU checkpointing in Trainer
# What does this PR do? This PR reworks the checkpointing mechanism in the `Trainer` and `PreTrainedModel` a tiny bit to get rid of the hack that stored something in the config (which was then forever present if the user decided to share their model on the hub). The main problem is that the save on TPU has to be called on all processes because there is a synchronization inside (so if we only execute it in one process, the others do not reach the synchronization point and everything hangs). At the same time, the config should only be saved on one process to avoid race conditions. So this PR solves the problem by adding two new arguments to `save_pretrained`: `is_main` and `_save_function`.
03-03-2021 18:52:35
03-03-2021 18:52:35
Tested on TPUs and all worked well (training, checkpointing, reloading from checkpoint, evaluating a saved model), so I will merge this.
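A hypothetical illustration of the call pattern this PR describes; the `is_main` and `_save_function` argument names come from the PR description above, but the exact signature in the library may differ, so treat this as a sketch rather than the implementation:
```python
import torch_xla.core.xla_model as xm
from transformers import AutoModel

model = AutoModel.from_pretrained("bert-base-uncased")  # placeholder model
output_dir = "./tpu_checkpoint"                         # placeholder path

# save_pretrained has to run on every TPU process because xm.save synchronizes them,
# while only the main process should write the config to avoid race conditions.
model.save_pretrained(
    output_dir,
    is_main=xm.is_master_ordinal(),
    _save_function=xm.save,
)
```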
transformers
10,503
closed
Fine tuning a pipeline
# 🚀 Feature request <!-- A clear and concise description of the feature proposal. Please provide a link to the paper and code in case they exist. --> [Already asked](https://github.com/huggingface/transformers/issues/8127) ## Motivation <!-- Please outline the motivation for the proposal. Is your feature request related to a problem? e.g., I'm always frustrated when [...]. If this is related to another GitHub issue, please link here too. --> I would like to fine-tune a pipeline using a model (pre-trained or fine-tuned).
03-03-2021 18:50:05
03-03-2021 18:50:05
Hi! I'm sorry, I don't understand what you mean exactly. A pipeline uses a model and a tokenizer under the hood, do you mean you want to fine-tune the model used underneath, and use that fine-tuned model? If that is so, it is simple: any model/tokenizer can be loaded in the pipeline via local path/hub identifier. I invite you to read the [documentation of the pipelines](https://huggingface.co/transformers/main_classes/pipelines.html#transformers.pipeline), especially the `model` and `tokenizer` arguments.<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread. Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored.
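A minimal sketch of the workflow described in the reply above: fine-tune (or download) a model, save it, and point the pipeline at the resulting path. The paths and the task name here are placeholders, not taken from the issue:
```python
from transformers import pipeline

# Any directory produced by model.save_pretrained() / tokenizer.save_pretrained(),
# or a model identifier from the hub, can be passed to the pipeline.
nlp = pipeline(
    "text-classification",
    model="./my-finetuned-model",
    tokenizer="./my-finetuned-model",
)
print(nlp("The pipeline itself is not fine-tuned; the underlying model is."))
```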
transformers
10,502
closed
GLUE benchmark crashes with MNLI and STSB
## Environment info - `transformers` version: 4.4.0.dev0 - Platform: Linux-5.8.0-44-generic-x86_64-with-glibc2.10 - Python version: 3.8.5 - PyTorch version (GPU?): 1.7.1 (True) - Tensorflow version (GPU?): not installed (NA) - Using GPU in script?: <fill in> - Using distributed or parallel set-up in script?: no, running only on CPU or single GPU ### Who can help Tags: @patrickvonplaten @LysandreJik ## Information Model I am using (Bert, XLNet ...): The problem arises when using: * [X] the official example scripts: (give details below) * [ ] my own modified scripts: (give details below) The tasks I am working on is: * [X] an official GLUE/SQUaD task: GLUE * [ ] my own task or dataset: ## To reproduce Steps to reproduce the behaviour: Create model with a very recent version of transformers: ``` >>> from transformers import RobertaForMaskedLM, RobertaConfig, RobertaTokenizer >>> tok = RobertaTokenizer.from_pretrained('roberta-base') >>> config = RobertaConfig.from_pretrained('roberta-base') >>> model = RobertaForMaskedLM(config=config) >>> tok.save_pretrained('tmp_model') ('tmp_model/tokenizer_config.json', 'tmp_model/special_tokens_map.json', 'tmp_model/vocab.json', 'tmp_model/merges.txt', 'tmp_model/added_tokens.json') >>> model.save_pretrained('tmp_model') ``` Run GLUE benchmark on MNLI: ``` python ./examples/text-classification/run_glue.py \ --model_name_or_path tmp_model \ --task_name MNLI \ --do_train \ --do_eval \ --max_seq_length 128 \ --per_device_train_batch_size 32 \ --per_device_eval_batch_size 32 \ --learning_rate 1e-5 \ --num_train_epochs 1 \ --output_dir TMP/ ``` or on STSB: ``` python ./examples/text-classification/run_glue.py \ --model_name_or_path tmp_model \ --task_name STSB \ --do_train \ --do_eval \ --max_seq_length 128 \ --per_device_train_batch_size 32 \ --per_device_eval_batch_size 32 \ --learning_rate 1e-5 \ --num_train_epochs 1 \ --output_dir TMP/ ``` ### Error logs MNLI: > Traceback (most recent call last): > File "./examples/text-classification/run_glue.py", line 480, in <module> > main() > File "./examples/text-classification/run_glue.py", line 415, in main > train_result = trainer.train(resume_from_checkpoint=checkpoint) > File "/home/lucadiliello/anaconda3/envs/tmp/lib/python3.8/site-packages/transformers/trainer.py", line 1048, in train > tr_loss += self.training_step(model, inputs) > File "/home/lucadiliello/anaconda3/envs/tmp/lib/python3.8/site-packages/transformers/trainer.py", line 1432, in training_step > loss = self.compute_loss(model, inputs) > File "/home/lucadiliello/anaconda3/envs/tmp/lib/python3.8/site-packages/transformers/trainer.py", line 1464, in compute_loss > outputs = model(**inputs) > File "/home/lucadiliello/anaconda3/envs/tmp/lib/python3.8/site-packages/torch/nn/modules/module.py", line 727, in _call_impl > result = self.forward(*input, **kwargs) > File "/home/lucadiliello/anaconda3/envs/tmp/lib/python3.8/site-packages/transformers/models/roberta/modeling_roberta.py", line 1168, in forward > loss = loss_fct(logits.view(-1, self.num_labels), labels.view(-1)) > File "/home/lucadiliello/anaconda3/envs/tmp/lib/python3.8/site-packages/torch/nn/modules/module.py", line 727, in _call_impl > result = self.forward(*input, **kwargs) > File "/home/lucadiliello/anaconda3/envs/tmp/lib/python3.8/site-packages/torch/nn/modules/loss.py", line 961, in forward > return F.cross_entropy(input, target, weight=self.weight, > File "/home/lucadiliello/anaconda3/envs/tmp/lib/python3.8/site-packages/torch/nn/functional.py", line 2468, in cross_entropy > return 
nll_loss(log_softmax(input, 1), target, weight, None, ignore_index, None, reduction) > File "/home/lucadiliello/anaconda3/envs/tmp/lib/python3.8/site-packages/torch/nn/functional.py", line 2264, in nll_loss > ret = torch._C._nn.nll_loss(input, target, weight, _Reduction.get_enum(reduction), ignore_index) > IndexError: Target 2 is out of bounds. The problem seems to be related to the incorrect initialisation of the classification head. However, with some prints I noticed that the shape of the classification head was `hidden_size x 3`, so I do not really understand where the problem comes from... STSB: > Traceback (most recent call last): > File "./examples/text-classification/run_glue.py", line 480, in <module> > main() > File "./examples/text-classification/run_glue.py", line 415, in main > train_result = trainer.train(resume_from_checkpoint=checkpoint) > File "/home/lucadiliello/anaconda3/envs/tmp/lib/python3.8/site-packages/transformers/trainer.py", line 1048, in train > tr_loss += self.training_step(model, inputs) > File "/home/lucadiliello/anaconda3/envs/tmp/lib/python3.8/site-packages/transformers/trainer.py", line 1432, in training_step > loss = self.compute_loss(model, inputs) > File "/home/lucadiliello/anaconda3/envs/tmp/lib/python3.8/site-packages/transformers/trainer.py", line 1464, in compute_loss > outputs = model(**inputs) > File "/home/lucadiliello/anaconda3/envs/tmp/lib/python3.8/site-packages/torch/nn/modules/module.py", line 727, in _call_impl > result = self.forward(*input, **kwargs) > File "/home/lucadiliello/anaconda3/envs/tmp/lib/python3.8/site-packages/transformers/models/roberta/modeling_roberta.py", line 1168, in forward > loss = loss_fct(logits.view(-1, self.num_labels), labels.view(-1)) > File "/home/lucadiliello/anaconda3/envs/tmp/lib/python3.8/site-packages/torch/nn/modules/module.py", line 727, in _call_impl > result = self.forward(*input, **kwargs) > File "/home/lucadiliello/anaconda3/envs/tmp/lib/python3.8/site-packages/torch/nn/modules/loss.py", line 961, in forward > return F.cross_entropy(input, target, weight=self.weight, > File "/home/lucadiliello/anaconda3/envs/tmp/lib/python3.8/site-packages/torch/nn/functional.py", line 2468, in cross_entropy > return nll_loss(log_softmax(input, 1), target, weight, None, ignore_index, None, reduction) > File "/home/lucadiliello/anaconda3/envs/tmp/lib/python3.8/site-packages/torch/nn/functional.py", line 2264, in nll_loss > ret = torch._C._nn.nll_loss(input, target, weight, _Reduction.get_enum(reduction), ignore_index) > RuntimeError: expected scalar type Long but found Float Here the problem seems to be related to the `dtype` of the targets. Interestingly, loading an old model like `bert-base-cased` or `roberta-base` does not raise errors....
03-03-2021 17:33:43
03-03-2021 17:33:43
Hi! Actually I think you touched the base of the issue for MNLI, I would guess it has to do with the number of labels. When you do the following: ```py >>> from transformers import RobertaForMaskedLM, RobertaConfig, RobertaTokenizer >>> tok = RobertaTokenizer.from_pretrained('roberta-base') >>> config = RobertaConfig.from_pretrained('roberta-base') >>> model = RobertaForMaskedLM(config=config) >>> model.save_pretrained('tmp_model') >>> from transformers import RobertaForSequenceClassification >>> model = RobertaForSequenceClassification.from_pretrained("tmp_model") ``` You'll see that the sequence classification model you've loaded has 2 labels, whereas MNLI is a 3-way classification: ```py >>> model.classifier.out_proj Linear(in_features=768, out_features=2, bias=True) ``` I don't get any such errors when changing the configuration initialization to the following: ```py >>> config = RobertaConfig.from_pretrained('roberta-base', num_labels=3) ``` Regarding the STS-B issue, I think this is an issue that was recently solved on `master`. Can you try pulling the `master` branch once again and letting me know if it fixes your issue? If not, I'll try and check what's happening.<|||||>> Hi! Actually I think you touched the base of the issue for MNLI, I would guess it has to do with the number of labels. When you do the following: > > ```python > >>> from transformers import RobertaForMaskedLM, RobertaConfig, RobertaTokenizer > >>> tok = RobertaTokenizer.from_pretrained('roberta-base') > >>> config = RobertaConfig.from_pretrained('roberta-base') > >>> model = RobertaForMaskedLM(config=config) > >>> model.save_pretrained('tmp_model') > >>> from transformers import RobertaForSequenceClassification > >>> model = RobertaForSequenceClassification.from_pretrained("tmp_model") > ``` > > You'll see that the sequence classification model you've loaded has 2 labels, whereas MNLI is a 3-way classification: > > ```python > >>> model.classifier.out_proj > Linear(in_features=768, out_features=2, bias=True) > ``` > > I don't get any such errors when changing the configuration initialization to the following: > > ```python > >>> config = RobertaConfig.from_pretrained('roberta-base', num_labels=3) > ``` I didn't have this problem on branch `v4.0.1-release` because the `num_labels` parameter was set automatically. I didn't have to set the exact number of label for each GLUE task. In fact, the script computes them automatically, but for some reason the error still happens. > Regarding the STS-B issue, I think this is an issue that was recently solved on `master`. Can you try pulling the `master` branch once again and letting me know if it fixes your issue? If not, I'll try and check what's happening. Error still present on master (`4.4.0.dev0`).<|||||>cc @sgugger for knowledge regarding the MNLI issue.
transformers
10,501
closed
[ProphetNet] Bart-like Refactor
# What does this PR do? This PR refactors ProphetNet similar to Bart in that it moves the time dimension to be always at the 2nd place and the batch dimensions always in the first place. Also, the cache is refactored to consists of tuples instead of a dict. The model is thereby very much aligned with Bart (I cannot really add any " # Copied from" statements though because the weight names are different). The PR is in spirit very similar to https://github.com/huggingface/transformers/pull/8900. I've verified that all slow tests pass. In the next step, I want to make a short notebook, verifying that ProphetNet can be trained since there have been some issues on training: #9804 # Benchmarking The PR doesn't change compute or memory complexity: On this PR: ``` ==================== INFERENCE - SPEED - RESULT ==================== -------------------------------------------------------------------------------- Model Name Batch Size Seq Length Time in s -------------------------------------------------------------------------------- microsoft/prophetnet-large-unc 8 8 0.029 microsoft/prophetnet-large-unc 8 32 0.044 microsoft/prophetnet-large-unc 8 128 0.175 microsoft/prophetnet-large-unc 8 512 N/A -------------------------------------------------------------------------------- ==================== INFERENCE - MEMORY - RESULT ==================== -------------------------------------------------------------------------------- Model Name Batch Size Seq Length Memory in MB -------------------------------------------------------------------------------- microsoft/prophetnet-large-unc 8 8 2562 microsoft/prophetnet-large-unc 8 32 2756 microsoft/prophetnet-large-unc 8 128 3628 microsoft/prophetnet-large-unc 8 512 N/A -------------------------------------------------------------------------------- ``` on master: ``` ==================== INFERENCE - SPEED - RESULT ==================== -------------------------------------------------------------------------------- Model Name Batch Size Seq Length Time in s -------------------------------------------------------------------------------- microsoft/prophetnet-large-unc 8 8 0.027 microsoft/prophetnet-large-unc 8 32 0.044 microsoft/prophetnet-large-unc 8 128 0.172 microsoft/prophetnet-large-unc 8 512 N/A -------------------------------------------------------------------------------- ==================== INFERENCE - MEMORY - RESULT ==================== -------------------------------------------------------------------------------- Model Name Batch Size Seq Length Memory in MB -------------------------------------------------------------------------------- microsoft/prophetnet-large-unc 8 8 2562 microsoft/prophetnet-large-unc 8 32 2768 microsoft/prophetnet-large-unc 8 128 3740 microsoft/prophetnet-large-unc 8 512 N/A -------------------------------------------------------------------------------- ```
03-03-2021 16:29:35
03-03-2021 16:29:35
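A hypothetical illustration of the Bart-style tuple cache adopted by the refactor above; the shapes follow the usual `(batch, heads, seq_len, head_dim)` convention and are not read from the diff:
```python
import torch

batch, heads, seq_len, head_dim, num_layers = 2, 16, 5, 64, 12

# One tuple entry per decoder layer:
# (self_attn_key, self_attn_value, cross_attn_key, cross_attn_value)
layer_cache = tuple(torch.zeros(batch, heads, seq_len, head_dim) for _ in range(4))
past_key_values = tuple(layer_cache for _ in range(num_layers))
# This tuple-of-tuples structure replaces the previous dict-based cache.
```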
transformers
10,500
closed
Fine tune of speaker embeddings model
# 🚀 Feature request Provide a way to fine tune the X-vector speaker embeddings model using our own custom dataset. (https://huggingface.co/hbredin/SpeakerEmbedding-XVectorMFCC-VoxCeleb) ## Motivation This will help in finetuning the model for new domain/speakers.
03-03-2021 15:46:36
03-03-2021 15:46:36
Hi! I think this is more of a question for `pyannote` rather than for `transformers`?<|||||>cc'ing @hbredin for visibility :)<|||||>Thanks @julien-c for the ping. @karthikgali please open an issue or discussion in [pyannote.audio Github repo](https://github.com/pyannote/pyannote-audio) instead.<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread. Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored.
transformers
10,499
closed
f"The model '{self.model.__class__.__name__}' is not supported for {self.task}. Supported models are {supported_models}",
## Context I have used the official example for Q&A [here](https://huggingface.co/dbmdz/bert-base-italian-cased), but slightly modified the `Pipeline` to use the `model` and `tokenizer` objects from pretrained model as exaplained [here](https://huggingface.co/transformers/main_classes/pipelines.html#the-pipeline-abstraction). This because I need a `cache_dir` that currently is not supported by `Pipeline` object. ## Environment info <!-- You can run the command `transformers-cli env` and copy-and-paste its output below. Don't forget to fill out the missing fields in that output! --> - `transformers` version: 4.3.3 - Platform: Linux-4.19.121-linuxkit-x86_64-with-debian-10.1 - Python version: 3.7.4 - PyTorch version (GPU?): 1.7.1 (False) - Tensorflow version (GPU?): 2.4.1 (False) - Using GPU in script?: <NO> - Using distributed or parallel set-up in script?: <NO> ### Who can help Model: https://huggingface.co/dbmdz/bert-base-italian-cased @patrickvonplaten, @patil-suraj <!-- Your issue will be replied to more quickly if you can figure out the right person to tag with @ If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**. Please tag fewer than 3 people. Models: - albert, bert, xlm: @LysandreJik - blenderbot, bart, marian, pegasus, encoderdecoder, t5: @patrickvonplaten, @patil-suraj - longformer, reformer, transfoxl, xlnet: @patrickvonplaten - fsmt: @stas00 - funnel: @sgugger - gpt2: @patrickvonplaten, @LysandreJik - rag: @patrickvonplaten, @lhoestq - tensorflow: @jplu Library: - benchmarks: @patrickvonplaten - deepspeed: @stas00 - ray/raytune: @richardliaw, @amogkam - text generation: @patrickvonplaten - tokenizers: @n1t0, @LysandreJik - trainer: @sgugger - pipelines: @LysandreJik Documentation: @sgugger HF projects: - nlp datasets: [different repo](https://github.com/huggingface/nlp) - rust tokenizers: [different repo](https://github.com/huggingface/tokenizers) Examples: - maintained examples (not research project or legacy): @sgugger, @patil-suraj - research_projects/bert-loses-patience: @JetRunner - research_projects/distillation: @VictorSanh --> ## Information Model I am using (Bert, XLNet ...): The problem arises when using: * [] the official example scripts: (give details below) * [X] my own modified scripts: (give details below) The tasks I am working on is: * [X] an official GLUE/SQUaD task: (translation) * [ ] my own task or dataset: (give details below) ## To reproduce Steps to reproduce the behavior: ```python question = 'Quale filosofia seguì Marco Aurelio ?' context = 'Marco Aurelio era un imperatore romano che praticava lo stoicismo come filosofia di vita .' 
tokenizer = AutoTokenizer.from_pretrained( 'mrm8488/bert-italian-finedtuned-squadv1-it-alfa', cache_dir=os.getenv("cache_dir", "model")) model = AutoModel.from_pretrained( 'mrm8488/bert-italian-finedtuned-squadv1-it-alfa', cache_dir=os.getenv("cache_dir", "model")) nlp_qa_bert = pipeline( 'question-answering', model=model, tokenizer=tokenizer) out = nlp_qa_bert({ 'question': question, 'context': context }) print(out) ```` ERROR: ``` Traceback (most recent call last): File "qa/run.py", line 26, in <module> tokenizer=tokenizer) File "/usr/local/lib/python3.7/site-packages/transformers/pipelines/__init__.py", line 418, in pipeline return task_class(model=model, tokenizer=tokenizer, modelcard=modelcard, framework=framework, task=task, **kwargs) File "/usr/local/lib/python3.7/site-packages/transformers/pipelines/question_answering.py", line 135, in __init__ TF_MODEL_FOR_QUESTION_ANSWERING_MAPPING if self.framework == "tf" else MODEL_FOR_QUESTION_ANSWERING_MAPPING File "/usr/local/lib/python3.7/site-packages/transformers/pipelines/base.py", line 577, in check_model_type f"The model '{self.model.__class__.__name__}' is not supported for {self.task}. Supported models are {supported_models}", transformers.pipelines.base.PipelineException: The model 'BertModel' is not supported for question-answering. Supported models are ['ConvBertForQuestionAnswering', 'LEDForQuestionAnswering', 'DistilBertForQuestionAnswering', 'AlbertForQuestionAnswering', 'CamembertForQuestionAnswering', 'BartForQuestionAnswering', 'MBartForQuestionAnswering', 'LongformerForQuestionAnswering', 'XLMRobertaForQuestionAnswering', 'RobertaForQuestionAnswering', 'SqueezeBertForQuestionAnswering', 'BertForQuestionAnswering', 'XLNetForQuestionAnsweringSimple', 'FlaubertForQuestionAnsweringSimple', 'MobileBertForQuestionAnswering', 'XLMForQuestionAnsweringSimple', 'ElectraForQuestionAnswering', 'ReformerForQuestionAnswering', 'FunnelForQuestionAnswering', 'LxmertForQuestionAnswering', 'MPNetForQuestionAnswering', 'DebertaForQuestionAnswering', 'DebertaV2ForQuestionAnswering', 'IBertForQuestionAnswering'] ``` <!-- If you have code snippets, error messages, stack traces please provide them here as well. Important! Use code tags to correctly format your code. See https://help.github.com/en/github/writing-on-github/creating-and-highlighting-code-blocks#syntax-highlighting Do not use screenshots, as they are hard to read and (more importantly) don't allow others to copy-and-paste your code.--> ## Expected behavior no error <!-- A clear and concise description of what you would expect to happen. -->
03-03-2021 15:45:01
03-03-2021 15:45:01
Hi there, You should use the `AutoModelForQuestionAnswering` class to load a QA model, the `AutoModel` class just loads the base model and doesn't load the task-specific head, which is the reason for this error. In general, always use the task-specific auto classes to load task-specific architectures.<|||||>@patil-suraj confirmed, it works ```python tokenizer = AutoTokenizer.from_pretrained( 'mrm8488/bert-italian-finedtuned-squadv1-it-alfa', cache_dir=os.getenv("cache_dir", "model")) model = AutoModelForQuestionAnswering.from_pretrained( 'mrm8488/bert-italian-finedtuned-squadv1-it-alfa', cache_dir=os.getenv("cache_dir", "model")) nlp_qa_bert = pipeline( 'question-answering', model=model, tokenizer=tokenizer) ```
transformers
10,498
closed
DeBERTa Fast Tokenizer
Hi, I am interested in using the DeBERTa model that was recently implemented here and incorporating it into [FARM](https://github.com/deepset-ai/FARM) so that it can also be used in open-domain QA settings through [Haystack](https://github.com/deepset-ai/haystack). Just wondering why there's only a Slow Tokenizer implemented for DeBERTa and wondering if there are plans to create the Fast Tokenizer too. Thanks in advance! Hi @stefan-it! Wondering if you might have any insight on this?
03-03-2021 10:46:22
03-03-2021 10:46:22
Hi @brandenchan , I think it should be easier with version 2 of DeBERTa, because they use a "normal" sentencepiece model now: https://github.com/huggingface/transformers/pull/10018 So having a fast alternative would be great. (The new 128k vocab size should really boost performance on QA tasks!) <|||||>Indeed, this would be a very nice addition and way easier to implement than for the first DeBERTa. I'm adding the `Good Second Issue` label so that a community member may work on it. @brandenchan or @stefan-it feel free to take it too if you feel like it!<|||||>Hi, I am looking for my first open source contribution. May I take this if it's still available?<|||||>Yes, of course! Thank you!<|||||>@ShubhamSanghvi Maybe wait until #10703 is merged.<|||||>Hi, as far as I understand, I will have to add tokenizer files for deberta_v2 to implement the fast tokenizer? May I know how I could get the tokenizer files for deberta_v2 models and how to upload them to the intended destinations, which I believe should be (for deberta-v2-xlarge): https://huggingface.co/microsoft/deberta-v2-xlarge/resolve/main/ Thanks, Shubham<|||||>@ShubhamSanghvi Do you only want to implement the fast tokenizer for DebertaV2 or also for Deberta? > May I know how I could get the tokenizer files I think this is what you have to figure out. I would check the other models that have a slow sentencepiece tokenizer. > how to upload them to the intended destinations, which I believe should be (for deberta-v2-xlarge) You cannot upload them there. Upload them to some kind of a public cloud and request an upload. <|||||>@ShubhamSanghvi Are you planning to create a PR for this issue soon? <|||||>Hi @mansimane, I am currently working on it. I am hoping to get it done by next week.
transformers
10,497
closed
Wav2Vec fine code
# 🚀 Feature request @patrickvonplaten Hi, I have the following data set I want to use to fine tune Wav2Vec: [cv-valid-train.zip](https://github.com/huggingface/transformers/files/6074839/cv-valid-train.zip) I'm using the current transformers library from github (4.4.0 dev). And I wrote the following code based on the code in the PR https://github.com/huggingface/transformers/pull/10145: 1. ctc_trainer.py ```python from typing import Dict, Union, Any import torch from transformers import Trainer class CTCTrainer(Trainer): def training_step(self, model: torch.nn.Module, inputs: Dict[str, Union[torch.Tensor, Any]]) -> torch.Tensor: """ Perform a training step on a batch of inputs. Subclass and override to inject custom behavior. Args: model (:obj:`nn.Module`): The model to train. inputs (:obj:`Dict[str, Union[torch.Tensor, Any]]`): The inputs and targets of the model. The dictionary will be unpacked before being fed to the model. Most models expect the targets under the argument :obj:`labels`. Check your model's documentation for all accepted arguments. Return: :obj:`torch.Tensor`: The tensor with training loss on this batch. """ model.train() inputs = self._prepare_inputs(inputs) loss = self.compute_loss(model, inputs) if self.args.n_gpu > 1: if model.module.config.ctc_loss_reduction == "mean": loss = loss.mean() elif model.module.config.ctc_loss_reduction == "sum": loss = loss.sum() / (inputs["labels"] >= 0).sum() else: raise ValueError(f"{model.config.ctc_loss_reduction} is not valid. Choose one of ['mean', 'sum']") if self.args.gradient_accumulation_steps > 1: loss = loss / self.args.gradient_accumulation_steps loss.backward() return loss.detach() ``` 2. data_collector.py ```python from dataclasses import dataclass from typing import Union, Optional, List, Dict import torch from transformers import Wav2Vec2Processor @dataclass class DataCollatorCTCWithPadding: """ Data collator that will dynamically pad the inputs received. Args: processor (:class:`~transformers.Wav2Vec2Processor`) The processor used for proccessing the data. padding (:obj:`bool`, :obj:`str` or :class:`~transformers.tokenization_utils_base.PaddingStrategy`, `optional`, defaults to :obj:`True`): Select a strategy to pad the returned sequences (according to the model's padding side and padding index) among: * :obj:`True` or :obj:`'longest'`: Pad to the longest sequence in the batch (or no padding if only a single sequence if provided). * :obj:`'max_length'`: Pad to a maximum length specified with the argument :obj:`max_length` or to the maximum acceptable input length for the model if that argument is not provided. * :obj:`False` or :obj:`'do_not_pad'` (default): No padding (i.e., can output a batch with sequences of different lengths). max_length (:obj:`int`, `optional`): Maximum length of the ``input_values`` of the returned list and optionally padding length (see above). max_length_labels (:obj:`int`, `optional`): Maximum length of the ``labels`` returned list and optionally padding length (see above). pad_to_multiple_of (:obj:`int`, `optional`): If set will pad the sequence to a multiple of the provided value. This is especially useful to enable the use of Tensor Cores on NVIDIA hardware with compute capability >= 7.5 (Volta). 
""" processor: Wav2Vec2Processor padding: Union[bool, str] = True max_length: Optional[int] = None max_length_labels: Optional[int] = None pad_to_multiple_of: Optional[int] = None pad_to_multiple_of_labels: Optional[int] = None def __call__(self, features: List[Dict[str, Union[List[int], torch.Tensor]]]) -> Dict[str, torch.Tensor]: # split inputs and labels since they have to be of different lenghts and need # different padding methods input_features = [{"input_values": feature["input_values"]} for feature in features] label_features = [{"input_ids": feature["labels"]} for feature in features] batch = self.processor.pad( input_features, padding=self.padding, max_length=self.max_length, pad_to_multiple_of=self.pad_to_multiple_of, return_tensors="pt", ) with self.processor.as_target_processor(): labels_batch = self.processor.pad( label_features, padding=self.padding, max_length=self.max_length_labels, pad_to_multiple_of=self.pad_to_multiple_of_labels, return_tensors="pt", ) # replace padding with -100 to ignore loss correctly labels = labels_batch["input_ids"].masked_fill(labels_batch.attention_mask.ne(1), -100) batch["labels"] = labels return batch ``` 3. fine tune model.py ```python from pathlib import Path import datasets import librosa import numpy import pandas import torch from sklearn.model_selection import train_test_split from torch.utils.data import TensorDataset from tqdm import tqdm from transformers import Wav2Vec2ForCTC, Wav2Vec2Processor, TrainingArguments from ctc_trainer import CTCTrainer from data_collector import DataCollatorCTCWithPadding def map_to_array(batch): input_audio, _ = librosa.load( Path("__file__").parents[0].joinpath(batch["filename"]), sr=16000) return input_audio def convert_to_dataset_torch(x: pandas.DataFrame, y: pandas.DataFrame) -> TensorDataset: input_values = [] labels = [] for _, row in tqdm(x.iterrows(), total=x.shape[0]): input_values.append(row["input_values"]) for _, row in tqdm(y.iterrows(), total=y.shape[0]): labels.append(row["labels"]) return TensorDataset(torch.cat(input_values, dim=0), torch.cat(labels, dim=0)) if __name__ == '__main__': dataset = pandas.read_csv(Path(__file__).parents[0].joinpath("cv-valid-train.csv")) X_train, X_test, y_train, y_test = train_test_split(dataset[["filename"]], dataset[["text"]], test_size=0.2, random_state=42) X_train, X_validation, y_train, y_validation = train_test_split(X_train, y_train, test_size=0.2, random_state=42) model = Wav2Vec2ForCTC.from_pretrained("facebook/wav2vec2-base-960h") processor = Wav2Vec2Processor.from_pretrained("facebook/wav2vec2-base-960h") wer_metric = datasets.load_metric("wer") X_train["speech"] = X_train.apply(map_to_array, axis=1) X_train["input_values"] = X_train.apply(lambda row: processor(row["speech"], sampling_rate=16000).input_values, axis=1) X_validation["speech"] = X_validation.apply(map_to_array, axis=1) X_validation["input_values"] = X_validation.apply( lambda row: processor(row["speech"], sampling_rate=16000).input_values, axis=1) X_test["speech"] = X_test.apply(map_to_array, axis=1) X_test["input_values"] = X_test.apply(lambda row: processor(row["speech"], sampling_rate=16000).input_values, axis=1) with processor.as_target_processor(): y_train["labels"] = y_train.apply(lambda row: processor(row["text"]).input_ids, axis=1) y_validation["labels"] = y_validation.apply(lambda row: processor(row["text"]).input_ids, axis=1) y_test["labels"] = y_test.apply(lambda row: processor(row["text"]).input_ids, axis=1) data_collator = DataCollatorCTCWithPadding(processor=processor, 
padding=True) def compute_metrics(pred): pred_logits = pred.predictions pred_ids = numpy.argmax(pred_logits, axis=-1) pred.label_ids[pred.label_ids == -100] = 0 pred_str = processor.batch_decode(pred_ids) # we do not want to group tokens when computing the metrics label_str = processor.batch_decode(pred.label_ids, group_tokens=False) wer = wer_metric.compute(predictions=pred_str, references=label_str) return {"wer": wer} training_args = TrainingArguments( output_dir='./results', # output directory num_train_epochs=2, # total number of training epochs per_device_train_batch_size=16, # batch size per device during training per_device_eval_batch_size=64, # batch size for evaluation warmup_steps=500, # number of warmup steps for learning rate scheduler weight_decay=0.01, # strength of weight decay logging_dir='./logs', # directory for storing logs logging_steps=10, ) trainer = CTCTrainer( model=model, data_collator=data_collator, args=training_args, compute_metrics=compute_metrics, train_dataset=convert_to_dataset_torch(X_train, y_train), eval_dataset=convert_to_dataset_torch(X_validation, y_validation), tokenizer=processor.feature_extractor, ) trainer.train() ``` I'm unable in the method ```convert_to_dataset_torch``` to create TensorDataset. I get the following error: TypeError: expected Tensor as element 0 in argument 0, but got numpy.ndarray 1. How can I convert the 2d numpy to torch? 2. How can I control argument such n_gpu and gradient_accumulation_steps? 4. What is model.module.config.ctc_loss_reduction, How It can be controled, and what is best for ASR task? 3. Is there any remorks over the code?
03-03-2021 09:31:08
03-03-2021 09:31:08
Hey @idanmoradarthas, I will soon release a notebook that will explain in detail how to fine-tune a Wav2Vec2 model (~1 week). It's quite time-consuming for me to debug user-specific code, such as `convert_to_dataset_torch`, so I can only give you some tips here: - Try to convert your dataset to PyTorch tensors instead of `np.ndarray`'s. This means you should change all of your lines that do: ```processor(row["speech"], sampling_rate=16000)``` to ```processor(row["speech"], sampling_rate=16000, return_tensors="pt")```<|||||>Thanks =)<|||||>I use `with self.processor.as_target_processor(): labels_batch = self.processor.pad(` in order to encode the labels, but I get the number 3 instead of all the letters in the sentence. How can I solve this issue?<|||||>Hey @kasrasehat, could you please open a new issue? <|||||>@patrickvonplaten sure
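A short, self-contained sketch of the tip above: asking the processor for PyTorch tensors directly avoids the "expected Tensor ... but got numpy.ndarray" error. The audio here is random data standing in for a real recording:
```python
import numpy as np
from transformers import Wav2Vec2Processor

processor = Wav2Vec2Processor.from_pretrained("facebook/wav2vec2-base-960h")
speech = np.random.randn(16000).astype(np.float32)  # one second of fake 16 kHz audio

inputs = processor(speech, sampling_rate=16000, return_tensors="pt")
print(type(inputs.input_values))  # <class 'torch.Tensor'>, ready for a TensorDataset
```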
transformers
10,496
closed
[T5] Fix speed degradation bug t5
# What does this PR do? Checking every value of a tensor for `inf` is expensive. This was added to T5 to allow for fp16 training, but the check should then only be run when the model is actually in fp16, so that it does not slow down normal fp32 mode. Using @dsgissin's script: ```python device = torch.device('cuda:0') if torch.cuda.is_available() else torch.device('cpu') print(f"Using device: {device}") t5_tokenizer = T5TokenizerFast.from_pretrained('t5-base') t5_model = T5ForConditionalGeneration.from_pretrained('t5-base') t5_model = t5_model.to(device) t5_input_ids = t5_tokenizer("summarize: studies have shown that owning a dog is good for you ", return_tensors="pt").input_ids # Batch size 1 t5_input_ids = t5_input_ids.to(device) import time import numpy as np N = 100 times = [] for _ in range(N): start = time.time() t5_outputs = t5_model.generate(t5_input_ids) end = time.time() times.append(end-start) print(f"transformers version: {transformers_version}") print(f"torch version: {torch_version}") print(f"{1000*np.mean(times):.0f} ms \u00B1 {1000*np.std(times):.2f} ms per loop (mean \u00B1 std of {N} runs)") ``` with: - Python 3.8.5 - PyTorch 1.7.1 - CUDA 11.1 on a NVIDIA V100 GPU The time was improved from: ```441 ms ± 41.67 ms per loop (mean ± std of 100 runs)``` to ```388 ms ± 44.75 ms per loop (mean ± std of 100 runs)```
03-03-2021 08:48:54
03-03-2021 08:48:54
> Looks good to me! > > Some of the other library models also use this trick (BART-like models), we should also investigate those. Good point - yeah, let me fix this in this PR actually
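A minimal sketch of the guard this PR describes (an illustration of the idea, not the exact library diff): the costly `inf` scan only runs when the tensor is actually in fp16.
```python
import torch

def clamp_if_fp16(hidden_states: torch.Tensor) -> torch.Tensor:
    # In fp32 this reduces to a single dtype comparison, so the expensive isinf scan is skipped.
    if hidden_states.dtype == torch.float16 and torch.isinf(hidden_states).any():
        clamp_value = torch.finfo(hidden_states.dtype).max - 1000
        hidden_states = torch.clamp(hidden_states, min=-clamp_value, max=clamp_value)
    return hidden_states
```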
transformers
10,495
closed
Albert quantized
I use onnxruntime to optimize and quantize the transformers model 'albert-base-v2', but the quantized results differ from the original results. Does onnxruntime properly support quantizing transformers ALBERT models right now?
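For reference, a hedged sketch of the dynamic-quantization step in question (the ONNX file path is a placeholder for an already-exported model):

```python
from onnxruntime.quantization import QuantType, quantize_dynamic

# "albert-base-v2.onnx" is a placeholder path pointing at an already-exported ONNX graph
quantize_dynamic(
    "albert-base-v2.onnx",
    "albert-base-v2-quantized.onnx",
    weight_type=QuantType.QInt8,
)
```

Some numerical drift between the quantized and the original outputs is expected from int8 quantization, so the comparison should use a tolerance (for example `numpy.allclose(original, quantized, atol=1e-2)`) rather than exact equality; a large divergence may point to a specific operator issue rather than a general lack of ALBERT support.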
03-03-2021 08:07:59
03-03-2021 08:07:59
This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread. Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored.
transformers
10,494
closed
[Wav2Vec2] Improve SpecAugment function by converting numpy based fun…
…ction to pytorch based function Implements #10459 # What does this PR do? <!-- Congratulations! You've made it this far! You're not quite done yet though. Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution. Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change. Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost. --> <!-- Remove if not applicable --> Fixes # (issue) ## Before submitting - [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case). - [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section? - [x] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link to it if that's the case. - [ ] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/master/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/master/docs#writing-source-documentation). - [ ] Did you write any new necessary tests? ## Who can review? Anyone in the community is free to review the PR once the tests have passed. Feel free to tag members/contributors which may be interested in your PR. <!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @ If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**. Please tag fewer than 3 people. Models: - albert, bert, xlm: @LysandreJik - blenderbot, bart, marian, pegasus, encoderdecoder, t5: @patrickvonplaten, @patil-suraj - longformer, reformer, transfoxl, xlnet: @patrickvonplaten - fsmt: @stas00 - funnel: @sgugger - gpt2: @patrickvonplaten, @LysandreJik - rag: @patrickvonplaten, @lhoestq - tensorflow: @jplu Library: - benchmarks: @patrickvonplaten - deepspeed: @stas00 - ray/raytune: @richardliaw, @amogkam - text generation: @patrickvonplaten - tokenizers: @n1t0, @LysandreJik - trainer: @sgugger - pipelines: @LysandreJik Documentation: @sgugger HF projects: - nlp datasets: [different repo](https://github.com/huggingface/nlp) - rust tokenizers: [different repo](https://github.com/huggingface/tokenizers) Examples: - maintained examples (not research project or legacy): @sgugger, @patil-suraj - research_projects/bert-loses-patience: @JetRunner - research_projects/distillation: @VictorSanh -->
03-03-2021 05:44:59
03-03-2021 05:44:59
We need to run benchmark tests to see by how much the speed improved, both on CPU and GPU<|||||>@patrickvonplaten Can you please help with the above comments?<|||||>Hey @punitvara, At the moment, I sadly don't have the time to handle the big chunk of the PR. It would be great if you could try to: 1) Find a way to benchmark your new function on GPU and show that it yields a speed-up in the forward pass compared to the old function 2) Try out some advanced PyTorch indexing to replace the for loops. Taking a look at these PRs should help you: https://github.com/huggingface/transformers/pull/9600, https://github.com/huggingface/transformers/pull/9453, https://github.com/huggingface/transformers/pull/6064<|||||>Closing due to inactivity. Sorry @punitvara, I saw a lot of interest from other people to open a PR and this one seems to have stalled. Feel free to re-open it and give it a second shot if you want :-) <|||||>I got busy with some other work. I will try to work on a different issue. If you get another PR for this, feel free to merge it
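For the benchmarking point above, a rough GPU/CPU timing sketch (the function and tensor below are placeholders, not the actual SpecAugment mask implementation):

```python
import time
import torch

def benchmark(fn, *args, n_runs=100):
    # crude timing helper; `fn` stands in for the old (numpy) or new (torch) mask function
    if torch.cuda.is_available():
        torch.cuda.synchronize()
    start = time.time()
    for _ in range(n_runs):
        fn(*args)
    if torch.cuda.is_available():
        torch.cuda.synchronize()
    return (time.time() - start) / n_runs

dummy = torch.zeros(8, 1000)
print(f"{1000 * benchmark(torch.ones_like, dummy):.3f} ms per call")
```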
transformers
10,493
closed
Generate can return cross-attention weights too
# What does this PR do? <!-- Congratulations! You've made it this far! You're not quite done yet though. Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution. Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change. Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost. --> <!-- Remove if not applicable --> Fixes #10335 ## Before submitting - [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case). - [X] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section? - [X] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link to it if that's the case. - [X] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/master/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/master/docs#writing-source-documentation). - [ ] Did you write any new necessary tests? ## Who can review? Anyone in the community is free to review the PR once the tests have passed. Feel free to tag members/contributors which may be interested in your PR. <!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @ If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**. Please tag fewer than 3 people. Models: - albert, bert, xlm: @LysandreJik - blenderbot, bart, marian, pegasus, encoderdecoder, t5: @patrickvonplaten, @patil-suraj - longformer, reformer, transfoxl, xlnet: @patrickvonplaten - fsmt: @stas00 - funnel: @sgugger - gpt2: @patrickvonplaten, @LysandreJik - rag: @patrickvonplaten, @lhoestq - tensorflow: @jplu Library: - benchmarks: @patrickvonplaten - deepspeed: @stas00 - ray/raytune: @richardliaw, @amogkam - text generation: @patrickvonplaten - tokenizers: @n1t0, @LysandreJik - trainer: @sgugger - pipelines: @LysandreJik Documentation: @sgugger HF projects: - nlp datasets: [different repo](https://github.com/huggingface/nlp) - rust tokenizers: [different repo](https://github.com/huggingface/tokenizers) Examples: - maintained examples (not research project or legacy): @sgugger, @patil-suraj - research_projects/bert-loses-patience: @JetRunner - research_projects/distillation: @VictorSanh --> @patrickvonplaten
03-03-2021 05:39:27
03-03-2021 05:39:27
Hi, I tried to keep the code changes to a minimum. Thus, I avoided adding another argument for returning cross-attention weights and used `output_attentions` to check instead. Also in docstrings, for `decoder_attentions` the shape is mentioned as `(batch_size*num_return_sequences, num_heads, generated_length, sequence_length)`, although it seems that the shape is always `(batch_size*num_return_sequences, num_heads, 1, 1)`. `generated_length` is 1 as the code returns tuples per generated token and `sequence_length` is 1 since it's decoder self-attentions. Similarly for `cross_attentions` the shape should be `(batch_size, num_heads, 1, input_sequence_length)`. Are there any examples where this is not true? Please let me know what you think. Thanks!<|||||>> Hi, I tried to keep the code changes to a minimum. Thus, I avoided adding another argument for returning cross-attention weights and used `output_attentions` to check instead. > Also in docstrings, for `decoder_attentions` the shape is mentioned as `(batch_size*num_return_sequences, num_heads, generated_length, sequence_length)`, although it seems that the shape is always `(batch_size*num_return_sequences, num_heads, 1, 1)`. `generated_length` is 1 as the code returns tuples per generated token and `sequence_length` is 1 since it's decoder self-attentions. > Similarly for `cross_attentions` the shape should be `(batch_size, num_heads, 1, input_sequence_length)`. Are there any examples where this is not true? > > Please let me know what you think. Thanks! Regarding the `decoder_attentions` shape -> it's just the first attentions that is of shape `batch_size, num_heads, 1, 1`. Then it goes up to `..., 2, 2`, `..., 3, 3` etc. However if `use_cache` is enabled, then the shape is `...1, 1`, `...1, 2`, `....1, 3`. Check this example: ```python from transformers import BartForConditionalGeneration import torch bart = BartForConditionalGeneration.from_pretrained("facebook/bart-large-cnn") bart.config.max_length = 5 bart.config.num_beams = 1 outputs_use_cache = bart.generate(torch.tensor([ 10 * [0]]), return_dict_in_generate=True, output_attentions=True) outputs_no_cache = bart.generate(torch.tensor([ 10 * [0]]), return_dict_in_generate=True, output_attentions=True, use_cache=False) # outputs_no_cache.decoder_attentions[1][0].shape gives (1, 16, 2, 2) # outputs_use_cache.decoder_attentions[1][0].shape gives (1, 16, 1, 2) ```<|||||>> Regarding the decoder_attentions shape -> it's just the first attentions that is of shape batch_size, num_heads, 1, 1. Then it goes up to ..., 2, 2, ..., 3, 3 etc. However if use_cache is enabled, then the shape is ...1, 1, ...1, 2, ....1, 3. Thanks for the example! It makes sense now.<|||||>Thanks for the great work. But I am still a bit confused about how to get the cross-attention (the encoder-decoder attention, if I am not mistaken). Any hint would be great. Maybe it would be something like ``` # outputs_no_cache.cross_attentions[1][0] # outputs_use_cache.cross_attentions[1][0] ``` Thanks! @Mehrad0711 <|||||>hi @xwuShirley Yes, you are right. You can access the cross attentions using `outputs.cross_attentions`
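For completeness, a minimal sketch (building on the BART example above; the reported shape is approximate and assumes the default `use_cache` behavior) of reading the returned cross-attentions:

```python
from transformers import BartForConditionalGeneration
import torch

bart = BartForConditionalGeneration.from_pretrained("facebook/bart-large-cnn")
out = bart.generate(
    torch.tensor([10 * [0]]),
    max_length=5,
    num_beams=1,
    return_dict_in_generate=True,
    output_attentions=True,
)
# cross_attentions: one tuple per generated token, each holding one tensor per decoder layer
print(out.cross_attentions[1][0].shape)  # roughly (batch_size, num_heads, 1, input_seq_len) with caching
```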
transformers
10,492
closed
Model Weights Fail to Load from Pre-Trained Model when Using `tf.name_scope`
## Environment info <!-- You can run the command `transformers-cli env` and copy-and-paste its output below. Don't forget to fill out the missing fields in that output! --> - `transformers` version: 4.3.0 - Platform: Darwin-19.2.0-x86_64-i386-64bit - Python version: 3.6.6 - PyTorch version (GPU?): 1.7.1 (False) - Tensorflow version (GPU?): 2.3.1 (False) - Using GPU in script?: No - Using distributed or parallel set-up in script?: No @LysandreJik @jplu I can't load the pre-trained weights from BERT or Roberta when using a `tf.name_scope`. Without defining a name scope, the code runs as expected. ``` text_model= TFBertModel.from_pretrained('bert-base-uncased') Some layers from the model checkpoint at bert-base-uncased were not used when initializing TFBertModel: ['nsp___cls', 'mlm___cls'] - This IS expected if you are initializing TFBertModel from the checkpoint of a model trained on another task or with another architecture (e.g. initializing a BertForSequenceClassification model from a BertForPreTraining model). - This IS NOT expected if you are initializing TFBertModel from the checkpoint of a model that you expect to be exactly identical (initializing a BertForSequenceClassification model from a BertForSequenceClassification model). All the layers of TFBertModel were initialized from the model checkpoint at bert-base-uncased. If your task is similar to the task the model of the checkpoint was trained on, you can already use TFBertModel for predictions without further training. ``` By adding a name_scope, however, I'll get a warning, indicating that the pre-trained weights are not loaded. ``` import tensorflow as tf from transformers import TFBertModel with tf.name_scope("Model"): text_model_2 = TFBertModel.from_pretrained('bert-base-uncased') Some layers from the model checkpoint at bert-base-uncased were not used when initializing TFBertModel: ['nsp___cls', 'mlm___cls', 'bert/encoder/layer_._2/attention/self/key/bias:0', 'bert/encoder/layer_._7/attention/output/LayerNorm/gamma:0', 'bert/encoder/layer_._4/attention/output/LayerNorm/beta:0', 'bert/encoder/layer_._6/attention/output/dense/kernel:0', 'bert/encoder/layer_._9/attention/output/LayerNorm/beta:0', 'bert/encoder/layer_._6/attention/self/key/bias:0', 'bert/encoder/layer_._5/attention/output/LayerNorm/gamma:0', 'bert/encoder/layer_._3/output/LayerNorm/gamma:0', 'bert/encoder/layer_._5/attention/self/value/kernel:0', 'bert/encoder/layer_._9/attention/self/key/bias:0', 'bert/encoder/layer_._6/attention/output/LayerNorm/beta:0', 'bert/encoder/layer_._7/intermediate/dense/bias:0', 'bert/encoder/layer_._4/attention/self/value/kernel:0', 'bert/encoder/layer_._11/attention/self/key/bias:0', 'bert/encoder/layer_._5/attention/self/key/bias:0', 'bert/encoder/layer_._4/attention/self/query/bias:0', 'bert/encoder/layer_._7/output/dense/kernel:0', 'bert/encoder/layer_._8/output/dense/bias:0', 'bert/encoder/layer_._8/intermediate/dense/bias:0', 'bert/encoder/layer_._9/output/LayerNorm/gamma:0', 'bert/encoder/layer_._2/attention/self/key/kernel:0', 'bert/encoder/layer_._0/attention/output/LayerNorm/beta:0', 'bert/encoder/layer_._1/intermediate/dense/bias:0', 'bert/encoder/layer_._4/attention/output/LayerNorm/gamma:0', 'bert/encoder/layer_._1/attention/self/key/kernel:0', 'bert/encoder/layer_._1/attention/self/query/kernel:0', 'bert/encoder/layer_._1/attention/output/dense/bias:0', 'bert/encoder/layer_._6/output/dense/kernel:0', 'bert/encoder/layer_._8/attention/self/key/kernel:0', 'bert/encoder/layer_._8/attention/output/LayerNorm/gamma:0', 
'bert/encoder/layer_._5/output/LayerNorm/beta:0', 'bert/encoder/layer_._1/attention/self/value/bias:0', 'bert/encoder/layer_._11/output/dense/kernel:0', 'bert/encoder/layer_._8/attention/output/dense/bias:0', 'bert/encoder/layer_._6/intermediate/dense/kernel:0', 'bert/encoder/layer_._8/attention/self/value/kernel:0', 'bert/encoder/layer_._7/attention/self/value/kernel:0', 'bert/encoder/layer_._11/attention/output/LayerNorm/gamma:0', 'bert/encoder/layer_._5/attention/self/value/bias:0', 'bert/encoder/layer_._9/attention/output/dense/kernel:0', 'bert/encoder/layer_._1/attention/self/key/bias:0', 'bert/encoder/layer_._4/attention/output/dense/bias:0', 'bert/encoder/layer_._1/intermediate/dense/kernel:0', 'bert/encoder/layer_._6/output/LayerNorm/gamma:0', 'bert/encoder/layer_._2/intermediate/dense/kernel:0', 'bert/encoder/layer_._4/attention/self/query/kernel:0', 'bert/encoder/layer_._4/output/LayerNorm/gamma:0', 'bert/encoder/layer_._0/attention/self/query/bias:0', 'bert/encoder/layer_._11/attention/output/dense/bias:0', 'bert/encoder/layer_._2/attention/output/LayerNorm/beta:0', 'bert/encoder/layer_._10/attention/self/value/bias:0', 'bert/encoder/layer_._2/intermediate/dense/bias:0', 'bert/encoder/layer_._9/output/LayerNorm/beta:0', 'bert/encoder/layer_._0/attention/output/LayerNorm/gamma:0', 'bert/encoder/layer_._3/attention/output/dense/bias:0', 'bert/encoder/layer_._2/attention/self/value/kernel:0', 'bert/encoder/layer_._3/output/dense/kernel:0', 'bert/encoder/layer_._1/attention/self/query/bias:0', 'bert/encoder/layer_._6/attention/output/LayerNorm/gamma:0', 'bert/encoder/layer_._10/attention/output/dense/kernel:0', 'bert/encoder/layer_._3/intermediate/dense/kernel:0', 'bert/encoder/layer_._4/output/dense/kernel:0', 'bert/encoder/layer_._6/attention/self/key/kernel:0', 'bert/encoder/layer_._0/output/LayerNorm/beta:0', 'bert/encoder/layer_._7/attention/self/query/kernel:0', 'bert/encoder/layer_._1/output/dense/kernel:0', 'bert/encoder/layer_._0/output/dense/bias:0', 'bert/encoder/layer_._8/attention/output/dense/kernel:0', 'bert/encoder/layer_._11/attention/self/query/kernel:0', 'bert/encoder/layer_._9/attention/self/value/kernel:0', 'bert/encoder/layer_._2/attention/self/query/kernel:0', 'bert/encoder/layer_._4/intermediate/dense/bias:0', 'bert/encoder/layer_._4/intermediate/dense/kernel:0', 'bert/encoder/layer_._0/attention/self/key/kernel:0', 'bert/encoder/layer_._8/attention/self/query/bias:0', 'bert/encoder/layer_._5/attention/output/dense/bias:0', 'bert/encoder/layer_._10/attention/output/dense/bias:0', 'bert/encoder/layer_._11/attention/self/key/kernel:0', 'bert/encoder/layer_._2/output/LayerNorm/gamma:0', 'bert/encoder/layer_._10/attention/output/LayerNorm/gamma:0', 'bert/encoder/layer_._3/attention/self/query/kernel:0', 'bert/encoder/layer_._10/intermediate/dense/kernel:0', 'bert/encoder/layer_._10/attention/self/query/bias:0', 'bert/encoder/layer_._7/output/LayerNorm/beta:0', 'bert/encoder/layer_._6/intermediate/dense/bias:0', 'bert/encoder/layer_._3/attention/self/key/kernel:0', 'bert/encoder/layer_._8/intermediate/dense/kernel:0', 'bert/encoder/layer_._5/intermediate/dense/kernel:0', 'bert/encoder/layer_._6/output/dense/bias:0', 'bert/encoder/layer_._0/attention/self/query/kernel:0', 'bert/encoder/layer_._6/attention/self/query/bias:0', 'bert/encoder/layer_._7/attention/output/dense/kernel:0', 'bert/encoder/layer_._8/output/LayerNorm/beta:0', 'bert/encoder/layer_._9/attention/self/query/bias:0', 'bert/encoder/layer_._3/output/dense/bias:0', 
'bert/encoder/layer_._11/intermediate/dense/bias:0', 'bert/encoder/layer_._4/attention/output/dense/kernel:0', 'bert/encoder/layer_._6/output/LayerNorm/beta:0', 'bert/encoder/layer_._5/output/dense/kernel:0', 'bert/encoder/layer_._3/attention/self/value/kernel:0', 'bert/encoder/layer_._8/output/LayerNorm/gamma:0', 'bert/encoder/layer_._1/attention/output/dense/kernel:0', 'bert/encoder/layer_._11/output/dense/bias:0', 'bert/encoder/layer_._0/output/LayerNorm/gamma:0', 'bert/encoder/layer_._10/output/LayerNorm/beta:0', 'bert/encoder/layer_._0/intermediate/dense/bias:0', 'bert/encoder/layer_._9/output/dense/bias:0', 'bert/encoder/layer_._2/attention/self/value/bias:0', 'bert/encoder/layer_._5/output/LayerNorm/gamma:0', 'bert/encoder/layer_._1/output/dense/bias:0', 'bert/encoder/layer_._0/attention/self/value/kernel:0', 'bert/encoder/layer_._7/attention/output/dense/bias:0', 'bert/encoder/layer_._10/output/dense/bias:0', 'bert/encoder/layer_._11/attention/self/value/kernel:0', 'bert/encoder/layer_._3/intermediate/dense/bias:0', 'bert/encoder/layer_._8/attention/self/query/kernel:0', 'bert/encoder/layer_._10/intermediate/dense/bias:0', 'bert/encoder/layer_._6/attention/self/value/kernel:0', 'bert/encoder/layer_._5/attention/output/dense/kernel:0', 'bert/encoder/layer_._9/intermediate/dense/kernel:0', 'bert/encoder/layer_._4/attention/self/value/bias:0', 'bert/encoder/layer_._4/output/dense/bias:0', 'bert/encoder/layer_._5/attention/output/LayerNorm/beta:0', 'bert/embeddings/LayerNorm/gamma:0', 'bert/embeddings/position_embeddings/embeddings:0', 'bert/encoder/layer_._4/attention/self/key/kernel:0', 'bert/encoder/layer_._7/attention/self/query/bias:0', 'bert/encoder/layer_._10/attention/self/query/kernel:0', 'bert/encoder/layer_._10/attention/self/key/kernel:0', 'bert/encoder/layer_._11/attention/self/value/bias:0', 'bert/encoder/layer_._2/attention/self/query/bias:0', 'bert/encoder/layer_._4/attention/self/key/bias:0', 'bert/encoder/layer_._7/attention/self/key/kernel:0', 'bert/encoder/layer_._11/intermediate/dense/kernel:0', 'bert/encoder/layer_._3/attention/self/value/bias:0', 'bert/pooler/dense/kernel:0', 'bert/encoder/layer_._2/output/dense/bias:0', 'bert/encoder/layer_._7/intermediate/dense/kernel:0', 'bert/encoder/layer_._8/attention/self/key/bias:0', 'bert/embeddings/word_embeddings/weight:0', 'bert/encoder/layer_._11/output/LayerNorm/beta:0', 'bert/encoder/layer_._9/attention/self/value/bias:0', 'bert/embeddings/token_type_embeddings/embeddings:0', 'bert/encoder/layer_._1/attention/output/LayerNorm/beta:0', 'bert/encoder/layer_._6/attention/self/value/bias:0', 'bert/pooler/dense/bias:0', 'bert/encoder/layer_._8/attention/output/LayerNorm/beta:0', 'bert/encoder/layer_._1/output/LayerNorm/gamma:0', 'bert/encoder/layer_._9/intermediate/dense/bias:0', 'bert/encoder/layer_._1/attention/output/LayerNorm/gamma:0', 'bert/encoder/layer_._9/output/dense/kernel:0', 'bert/encoder/layer_._0/output/dense/kernel:0', 'bert/encoder/layer_._3/output/LayerNorm/beta:0', 'bert/encoder/layer_._7/output/LayerNorm/gamma:0', 'bert/encoder/layer_._11/attention/output/LayerNorm/beta:0', 'bert/encoder/layer_._3/attention/output/dense/kernel:0', 'bert/encoder/layer_._3/attention/self/key/bias:0', 'bert/encoder/layer_._10/attention/output/LayerNorm/beta:0', 'bert/encoder/layer_._2/attention/output/LayerNorm/gamma:0', 'bert/encoder/layer_._6/attention/output/dense/bias:0', 'bert/encoder/layer_._7/attention/output/LayerNorm/beta:0', 'bert/encoder/layer_._0/intermediate/dense/kernel:0', 
'bert/encoder/layer_._10/attention/self/value/kernel:0', 'bert/encoder/layer_._5/attention/self/key/kernel:0', 'bert/encoder/layer_._10/attention/self/key/bias:0', 'bert/encoder/layer_._9/attention/self/key/kernel:0', 'bert/encoder/layer_._1/attention/self/value/kernel:0', 'bert/encoder/layer_._7/output/dense/bias:0', 'bert/encoder/layer_._11/output/LayerNorm/gamma:0', 'bert/encoder/layer_._0/attention/self/key/bias:0', 'bert/encoder/layer_._3/attention/output/LayerNorm/beta:0', 'bert/encoder/layer_._11/attention/output/dense/kernel:0', 'bert/encoder/layer_._7/attention/self/key/bias:0', 'bert/encoder/layer_._3/attention/output/LayerNorm/gamma:0', 'bert/encoder/layer_._10/output/dense/kernel:0', 'bert/encoder/layer_._10/output/LayerNorm/gamma:0', 'bert/encoder/layer_._5/output/dense/bias:0', 'bert/encoder/layer_._9/attention/output/dense/bias:0', 'bert/encoder/layer_._11/attention/self/query/bias:0', 'bert/encoder/layer_._2/output/LayerNorm/beta:0', 'bert/encoder/layer_._7/attention/self/value/bias:0', 'bert/encoder/layer_._1/output/LayerNorm/beta:0', 'bert/encoder/layer_._5/intermediate/dense/bias:0', 'bert/embeddings/LayerNorm/beta:0', 'bert/encoder/layer_._9/attention/self/query/kernel:0', 'bert/encoder/layer_._2/output/dense/kernel:0', 'bert/encoder/layer_._0/attention/output/dense/kernel:0', 'bert/encoder/layer_._2/attention/output/dense/bias:0', 'bert/encoder/layer_._4/output/LayerNorm/beta:0', 'bert/encoder/layer_._0/attention/self/value/bias:0', 'bert/encoder/layer_._8/output/dense/kernel:0', 'bert/encoder/layer_._2/attention/output/dense/kernel:0', 'bert/encoder/layer_._6/attention/self/query/kernel:0', 'bert/encoder/layer_._0/attention/output/dense/bias:0', 'bert/encoder/layer_._3/attention/self/query/bias:0', 'bert/encoder/layer_._5/attention/self/query/bias:0', 'bert/encoder/layer_._5/attention/self/query/kernel:0', 'bert/encoder/layer_._8/attention/self/value/bias:0', 'bert/encoder/layer_._9/attention/output/LayerNorm/gamma:0'] - This IS expected if you are initializing TFBertModel from the checkpoint of a model trained on another task or with another architecture (e.g. initializing a BertForSequenceClassification model from a BertForPreTraining model). - This IS NOT expected if you are initializing TFBertModel from the checkpoint of a model that you expect to be exactly identical (initializing a BertForSequenceClassification model from a BertForSequenceClassification model). All the layers of TFBertModel were initialized from the model checkpoint at bert-base-uncased. If your task is similar to the task the model of the checkpoint was trained on, you can already use TFBertModel for predictions without further training. ``` The problem I think is rooted in https://github.com/huggingface/transformers/blob/master/src/transformers/modeling_tf_utils.py#L487.
03-03-2021 05:29:32
03-03-2021 05:29:32
Hello! You cannot load a model inside a name scope. This is the expected behavior because all the variable names are fixed inside the h5 file. To use the model you have to load it outside your defined name scope and then use it inside.<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread. Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored.
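A minimal sketch of the suggested workaround (the input ids below are a toy example, just for illustration):

```python
import tensorflow as tf
from transformers import TFBertModel

# load outside any name scope so the variable names match those stored in the H5 checkpoint
text_model = TFBertModel.from_pretrained("bert-base-uncased")

with tf.name_scope("Model"):
    # build the rest of the graph here and call the already-loaded model inside the scope
    outputs = text_model(tf.constant([[101, 2023, 2003, 1037, 3231, 102]]))

print(outputs.last_hidden_state.shape)
```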
transformers
10,491
closed
ONNX Training for Transformers
# 🚀 Feature request Is there a script for ONNX training of transformers on GLUE tasks? If so, did anyone benchmark the training times? If not, I can contribute. ## Motivation Faster training.
03-03-2021 03:06:27
03-03-2021 03:06:27
This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread. Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored.<|||||>Is it possible to pretrain bert with ONNX?<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread. Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored.
transformers
10,490
closed
Pipeline's QnA and run_qa predictions do not match
## Environment info - `transformers` version: 4.3.0 or 4.4.0dev (tested in both versions) - Platform: Linux-4.19.112+-x86_64-with-Ubuntu-18.04-bionic - Python version: 3.7.10 - PyTorch version (GPU?): 1.7.1+cu101 (True) - Tensorflow version (GPU?): 2.4.1 (True) - Using GPU in script?: Yes - Using distributed or parallel set-up in script?: No ### Who can help pipelines: @LysandreJik maintained examples (run_qa.py): @sgugger, @patil-suraj ## Information I have noticed that the `pipeline("question-answering")` and `run_qa.py` evaluation functions (prediction + post-processing) are not only built differently but also yield different results, affecting the precision/recall numbers, with `pipeline` being the worse performer. I ran the test on 2 different environments and got the same results. Given the same exact model, the choice of module versus script can shift the precision/recall numbers by up to 5%. The issue may manifest itself only in a set with negatives, as I have not run a test on a set where negatives are not possible. The problem arises when using: * [X] the official example scripts: `run_qa.py` and `pipeline` The task I am working on is: * [X] an official GLUE/SQuAD task: SQuAD V2 ## To reproduce Steps to reproduce the behavior: 1. Take any QnA model and execute `run_qa.py` on the SQuAD V2 validation set 2. Get predictions on the same set via `pipeline` with the same model and parameters 3. Compare Here is my notebook to follow the logic and see the results on a particular QnA model. [Colab Notebook](https://colab.research.google.com/drive/1GMetvI6e0pkUPUHiNX6f46_TBO0LUKre?usp=sharing ) ## Expected behavior The results need to match. The `run_qa.py` code appears to be optimized for training and for evaluation at training time. Most researchers report QnA model performance with a script like this. The inference module `pipeline` should match what `run_qa.py` is producing. ### More observations: 1. Pipeline produces more NULL answers than `run_qa` (highest concern). 2. About half the non-null answers (45%) do not match exactly; most of those are off by a character or so. 3. A more than negligible number of records do not match closely. 4. Pipeline produces shorter answers. 5. Pipeline does not take care of apostrophe cases. 6. Pipeline answers may come from different places in the context.
03-02-2021 23:45:02
03-02-2021 23:45:02
Just to be sure I understand completely the problem since I don't see the F1s for the full results obtained via pipeline, which is producing the best results? `pipeline` or `run_qa`? Or are they different but overall comparable?<|||||>`run_qa.py` produces the better results. They are different on about 3000 records, 2700 of those are off by a character or so, 300 are off by a larger margin, sometimes different answers, 12 records Null (pipeline) vs Not-Null (run_qa) (false negative). And that is on SQuAD. I see a wider gap on custom datasets. "Comparable" is a subjective word so I perhaps should not comment on that. In my opinion they need to match. What we get out of run_qa.py in batch for a dataset should be the same for record i, meaning when we pass the same query and context to the model, the script and the module should deterministically return the same answer. The problem might not be exactly in the predictions but rather in the post prediction functions. <|||||>@sgugger Hi, any update on this? Did you all get a chance to look into the problem? by the way, after I wrote the message above, I realized "better" is also subjective. I would prefer a 5 token answer to a 1 token answer for instance depending on the question and context. As I mentioned pipeline does not clean up the answer as well, punctuation gets carried over. Looking at custom validation sets I can tell the F1 score is lower for Pipelines, which I did it pretty much manually by writing custom code as I had a smaller set. If there is a way to take the pipeline predictions and calculate a F1 score against the ground truths in SQuAD, I would be happy to do that and return you a result. But again, that would be just SQuAD. better or worse, the answers don't match. Maybe the problem is in run_qa.py and this is an opportunity to get that fixed. Thanks in advance. <|||||>The problem is that neither is easily fixable. For backward compatibility reason we can't change the way the qa pipeline works out of the blue. We are trying to think of having some "pipeline configs" to be able to switch behaviors in pre or post processing but it won't be added overnight. Overall this is going to take some time to solve.<|||||>@sgugger thank you for the acknowledgement and clarification that there indeed is a difference. I was not at least dreaming stuff up. Here is what I ended up doing: I gutted out `run_qa.py` and created a custom inference code using the evaluation functions in that script. Now my model answers match obviously but I ended up with a different problem. When I pass **a single** query/context pair to my custom question answerer, the latency increased by about 10x. I profiled my code and lo and behold the problem is in the data prep part that is written with `datasets` and seems like is optimized for batch processing during training, meaning there is a cost incurred in the upstream processing before the prediction. So now I am caught between a rock and a hard place as the model does not produce the same answers if I use the `pipeline` and I have a latency issue for single inference with the `run_qa.py` code.. Is it possible to summarize the problem with `pipeline` so that maybe I can write some code to patch that configurable part into it? That would go a long way. Merci beaucoup! PS: I also noted 2 other main differences between these two inference codes. 1) I don't see where we can incorporate a null threshold in the `pipeline` function. I actually benefit up to 2-3 points for F1 on my custom dataset with a threshold. 
2) The pipeline can take multiple query/context pairs, but it loops through them one at a time, so if I pass a batch of 16, I get 16x latency. I am thinking about opening separate issues as I can't see any open issues regarding them. Please let me know if I have overlooked something or if you all have incorporated some of this recently. <|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread. Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored.<|||||>Hi @sgugger @oersoy1, I am facing the same issue. The pipeline predictions are not the same as the run_qa.py predictions. Please let me know if there is any update or workaround for the same.
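For reference, a minimal sketch of the knobs the question-answering pipeline does expose (the checkpoint name is just one example SQuAD2 model, and the question/context are toy inputs); note that an adjustable null-score threshold is not among them, which is part of the gap discussed above:

```python
from transformers import pipeline

qa = pipeline("question-answering", model="deepset/roberta-base-squad2")
pred = qa(
    question="What is the capital of France?",
    context="Paris is the capital of France.",
    handle_impossible_answer=True,  # lets the empty ("null") answer win
    max_answer_len=30,
    topk=1,
)
print(pred)  # {'score': ..., 'start': ..., 'end': ..., 'answer': ...}
```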
transformers
10,489
closed
Fix typos
# What does this PR do? I fixed a couple of typos in comments. ## Before submitting - [x] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case). - [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section? - [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link to it if that's the case. - [ ] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/master/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/master/docs#writing-source-documentation). - [ ] Did you write any new necessary tests? ## Who can review? Anyone in the community is free to review the PR once the tests have passed. Feel free to tag members/contributors which may be interested in your PR.
03-02-2021 21:48:09
03-02-2021 21:48:09
transformers
10,488
closed
Smp grad accum
# What does this PR do? This PR adds support for gradient accumulation in `SageMakerTrainer`. It has been tested on the glue script with success (with and without gradient accumulation passed along).
03-02-2021 21:25:17
03-02-2021 21:25:17
transformers
10,487
closed
remap MODEL_FOR_QUESTION_ANSWERING_MAPPING classes to names auto-generated file
As discussed in https://github.com/huggingface/transformers/issues/10467, the Trainer currently loads `models.auto.modeling_auto`, which loads **all** modeling files even though that is absolutely not needed most of the time. At times this can have unwanted side effects, loading modeling files that require some 3rd-party modules, which can be a problem. Similar to the autogenerated 3rd-party version-number lookup dict, this PR autogenerates a dict of class names for `MODEL_FOR_QUESTION_ANSWERING_MAPPING`, which can then be loaded quickly. This can of course be extended in the future to generate other structures if need be. @sgugger, @LysandreJik Fixes: https://github.com/huggingface/transformers/issues/10467
03-02-2021 21:11:56
03-02-2021 21:11:56
transformers
10,486
closed
Trainer not logging to WandB in SageMaker
- `transformers` version: 4.3.0 - wandb version: 0.10.20 - Platform: SageMaker hosted training with PyTorch estimator. - Using GPU in script?: Yes - Using distributed or parallel set-up in script?: No @stas00 @sgugger I am using a SageMaker training environment to train `BertForSequenceClassification`. To do this, I'm passing the model into a `Trainer` instance and calling `trainer.train()`. To train in SageMaker, I am using a PyTorch estimator: ``` estimator = PyTorch( entry_point='train_classifier.py', source_dir='./', role=role, sagemaker_session=sagemaker_session, hyperparameters=hp, subnets=subnets, security_group_ids=sec_groups, framework_version='1.6.0', py_version='py3', instance_count=1, instance_type=instance_type, dependencies=[ '../lib', '../db_conn'], use_spot_instances=False, volume_size=100, #max_wait=max_wait_time_secs ) estimator.fit() ``` I have tried this with different p2 and p3 instances. In EC2 or in a SageMaker notebook, this does automated logging of training loss and evaluation loss and metrics to WandB. With the estimator, I get no training logs. Anything that I manually log to WandB appears in my dashboard. The only info that doesn't show up is whatever used to get logged by the Trainer. I tried `os.environ["WANDB_DISALBED"] = "false"` in my training script, no luck.
03-02-2021 19:55:30
03-02-2021 19:55:30
Are you sure `wandb` is installed in the container you are using for training? There can't be any report if it's not installed and initialized.<|||||>Yup, it's installed. Here is the requirements file that goes into the container ``` boto3==1.16.32 peewee==3.13.3 pandas==1.0.5 torch==1.6.0 numpy==1.18.2 transformers==4.3.0 wandb==0.10.20 botocore==1.19.37 srsly==1.0.2 psycopg2_binary==2.8.5 scikit_learn==0.23.2 uvloop==0.14.0 ./prodigy-1.10.5-cp36.cp37.cp38-cp36m.cp37m.cp38-linux_x86_64.whl https://github.com/explosion/spacy-models/releases/download/en_core_web_md-2.3.1/en_core_web_md-2.3.1.tar.gz ``` I also upload my API key from a separate file. Also, if I manually call `wandb.log({"Some metric":"This will appear"})` then it shows up in the run's dashboard. The only information that doesn't show up is the automatic logging that the Trainer is supposed to do. <|||||>Could you share the training script you are using?<|||||>I can't share the full script, but it's something like ``` import wandb from transformers import ( AutoConfig, AutoModelForSequenceClassification, HfArgumentParser, Trainer, TrainingArguments, ) class ModelArguments: #Declare arguments here class DataTrainingArguments: #Declare more arguments parser = HfArgumentParser((ModelArguments, DataTrainingArguments)) model_args, data_args, training_args = parser.parse_args_into_dataclasses() #Assuming I have the config already model = AutoModelForSequenceClassification.from_pretrained( model_args.model_name_or_path, config=config ) #train_data, eval_data = get data from a remote host trainer = Trainer( model=model, args=training_args, train_dataset=train_data, eval_dataset=eval_data, ) trainer.train() wandb.log({"Fake Metric":"This will show up"}) ``` When I run on SageMaker, I can see from the console logs and the post-training evaluation that it has been training. <|||||>It would be interesting to have the result of ``` from transformers.integrations import get_available_reporting_integrations print(get_available_reporting_integrations()) ``` to check whether or not `wandb` is listed. Another thing you can do is force the reporting to wandb by adding `report_to = ["wandb"]` in your hyperparameters.<|||||>Got this from the integrations print `['wandb']` Also, tried to force the reporting to WandB. It's a bit tricky to pass lists into the SageMaker Estimator as a hyperparam, so I just tried to force the reporting in my script ``` parser = HfArgumentParser((ModelArguments, DataTrainingArguments)) model_args, data_args, training_args = parser.parse_args_into_dataclasses() training_args.report_to = ["wandb"] data_args.report_to = ["wandb"] ``` Still getting the same result<|||||>Curiouser and curiouser. So it is detected and added but doesn't log anything. @borisdayma do you have some idea of why the WandbCallback would not log anything in a training launched on SageMaker?<|||||>I'm not sure but now I'm feeling the curiousest! There is actually some W&B documentation with specific setup instructions for SageMaker: see [here](https://docs.wandb.ai/integrations/sagemaker)<|||||>I'm feeling pretty curious too...already followed the instructions with the secrets.env file. <|||||>Could you add somewhere in the methods of [`WandbCallback`](https://github.com/huggingface/transformers/blob/395ffcd757103ed2ccc888e48d90fd2ccb4d506f/src/transformers/integrations.py#L510) some `print` statements just to see if they are being called at all?
You could just copy that file in your local folder and import from it.<|||||>I cloned the repo, added the` print `statements, imported the `WandbCallback` and added it to the `Trainer` callbacks. I am seeing some `print` statements, but...no logs to WandB dashboard :( <|||||>That's progress! Can you add some maybe just after the calls to `wandb.log` and maybe also print `wandb.run.id`?<|||||>Got some interesting results. I added ``` def on_train_begin(self, args, state, control, model=None, **kwargs): print("TESTING WANDB: BEGINNING TRAINING") wandb.log({"Begining training": "Please appear"}) ... ``` and ran it locally (in the CLI on a Notebook instance). I got: `NameError: name 'wandb' is not defined` If I run this code in the callbacks init function: ``` def __init__(self): print("TESTING WANDB: INITIALIZING CALLBACK") has_wandb = is_wandb_available() assert has_wandb, "WandbCallback requires wandb to be installed. Run `pip install wandb`." if not has_wandb: print("NO WEIGHTS AND BIASES!!!!!!!!") if has_wandb: import wandb wandb.ensure_configured() if wandb.api.api_key is None: has_wandb = False logger.warning( "W&B installed but not logged in. Run `wandb login` or set the WANDB_API_KEY env variable." ) self._wandb = None else: self._wandb = wandb self._initialized = False # log outputs self._log_model = os.getenv("WANDB_LOG_MODEL", "FALSE").upper() in ENV_VARS_TRUE_VALUES.union({"TRUE"}) ``` `"NO WEIGHTS AND BIASES!!!!!!!!"` does not print. If `wandb` gets imported, shouldn't the log statements work? Not sure if this is a separate issue from the SageMaker one. <|||||>Very interesting! What if you ignore the second check and just set `self._wandb = wandb`<|||||>Same error! <|||||>Does [this example](https://github.com/wandb/examples/tree/master/examples/pytorch/pytorch-cifar10-sagemaker) work for you?<|||||>Yup<|||||>Maybe can you try to log directly something to even before using the `Trainer`? ```python import wandb wandb.init() wandb.log({'val': 1}) ``` That way we will see if it comes from your wandb setup in Sagemaker or HF.<|||||>I do already do that actually, and it does appear! I successfully log a couple of charts before and after training. <|||||>In your code where you set `self._wandb`, can you try to do something like `self._wandb.log({'test_value': 1})`<|||||>This works! I have regular training logs and the test_value logged when the callback's int function is like this: ``` def __init__(self): print("TESTING WANDB: INITIALIZING CALLBACK") has_wandb = is_wandb_available() assert has_wandb, "WandbCallback requires wandb to be installed. Run `pip install wandb`." if has_wandb: import wandb wandb.ensure_configured() ''' if wandb.api.api_key is None: has_wandb = False logger.warning( "W&B installed but not logged in. Run `wandb login` or set the WANDB_API_KEY env variable." ) self._wandb = None else: ''' self._wandb = wandb self._wandb.log({'test_value': 1}) self._initialized = False # log outputs self._log_model = os.getenv("WANDB_LOG_MODEL", "FALSE").upper() in ENV_VARS_TRUE_VALUES.union({"TRUE"}) ``` However, if I do not pass in the modified version of WandbCallback the problem resumes. <|||||>This is very strange... So just adding the `log` statement makes it work and it stops working if you remove it?<|||||>It looks like what makes the difference is commenting out this portion ``` ''' if wandb.api.api_key is None: has_wandb = False logger.warning( "W&B installed but not logged in. Run `wandb login` or set the WANDB_API_KEY env variable." 
) self._wandb = None else: ''' ``` And then passing the modified callback in explicitly in the Trainer. I believe that the NameError above was because I was calling `wandb.log`, instead of `self._wandb.log`<|||||>This should now be fixed on the master branch of `transformers`. Let me know if you still have any issue @alexf-a <|||||>It is still happening! It looks like the WandB API key is not getting properly loaded from the secrets.env. If I run this code, it works ``` with open("./secrets.env", "r") as secrets_f: wandb_api_key = secrets_f.read().replace('\n', '').split("=")[1] os.environ["WANDB_API_KEY"] = wandb_api_key ```<|||||>Hey @alexf-a we'll work on a patch in the next release that handles the sagemaker case. Until then if you just add this code before you instantiate your trainer it should work.<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread. Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored.
transformers
10,485
closed
Constrained decoding?
Is it possible to implement constraints on the beam during decoding with a seq2seq model? [NeuroLogic Decoding](https://arxiv.org/abs/2010.12884), [Constrained Abstractive Summarization](https://arxiv.org/abs/2010.12723) I see that there is a Callback feature in the library - but AFAIK it only lets you modify the training state and not the beam.
03-02-2021 19:38:02
03-02-2021 19:38:02
I think you should take a look at `prefix_allowed_tokens_fn` on the [`generate` method](https://huggingface.co/transformers/main_classes/model.html?highlight=generate#transformers.generation_utils.GenerationMixin.generate). It lets you create a function to constrain the generation based on previously generated tokens. It is inspired by this paper: [Autoregressive Entity Retrieval](https://arxiv.org/abs/2010.00904).<|||||>thanks! Is there a way to have more generic callbacks that can possibly access the beams (for instance, if I want to do custom scoring/tracking/modification to the entries in the beam)? If not, would that be a useful PR to submit?<|||||>So you mean accessing the different beams so that generation is conditioned not just on the current beam so far but also on the others? You probably should look at transformers.LogitsProcessor in: https://github.com/huggingface/transformers/blob/1750e629006bb6989aef5b4e141f3477f891a098/src/transformers/generation_utils.py#L555-L629 which deals with the constraints and scoring of tokens at generation. Perhaps what you described could be introduced in a similar fashion as `prefix_allowed_tokens_fn`. Regarding a PR, I am not the best person to say; I would first check whether what you aim for can be done within the existing functionality. <|||||>I did take a look at `LogitsProcessor` and might be able to work with that - thanks!<|||||>@kailashkarthiks Hi, I am also interested in performing constrained decoding with seq2seq language models like BART. I was wondering if you made any progress on this, and if yes, would you be able to share your findings or code? Thank you!
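A minimal sketch of `prefix_allowed_tokens_fn` with a toy whitelist constraint (the whitelist and the input sentence are arbitrary, just for illustration):

```python
from transformers import BartForConditionalGeneration, BartTokenizer

tok = BartTokenizer.from_pretrained("facebook/bart-large")
model = BartForConditionalGeneration.from_pretrained("facebook/bart-large")

# toy constraint: at every decoding step only allow tokens from a small whitelist (plus BOS/EOS)
allowed_ids = tok("Paris France capital", add_special_tokens=False).input_ids

def prefix_allowed_tokens_fn(batch_id, input_ids):
    # `input_ids` holds the tokens generated so far for this beam/batch entry
    return allowed_ids + [tok.bos_token_id, tok.eos_token_id]

inputs = tok("The capital of France is Paris.", return_tensors="pt")
out = model.generate(inputs.input_ids, num_beams=4, prefix_allowed_tokens_fn=prefix_allowed_tokens_fn)
print(tok.batch_decode(out, skip_special_tokens=True))
```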
transformers
10,484
closed
Corrupted Relative Attention in T5 Decoder
## Environment info platform: Mac/Ubuntu 14 transformers==2.11.0 torch==1.4.0 (GPU) python 3.6 I know this is an old version but it supports important experiments in a paper under review. Would appreciate to know what's wrong. I checked the commit log and I don't think any following commits resolve it. ### Who can help @patrickvonplaten (through slack) @patil-suraj (mentioned below) Please let me know if there is anything else I can provide! Thank you! ## Information I made an artificial binary classification data where the input sequences are near-randomly generated tokens from the T5 vocab. The output sequences are balanced “`answer: correct/restaurant`” (two binary tag words randomly selected). A data sample [can be found here](https://github.com/Slash0BZ/t5-investigation/blob/main/sample_data.txt) in format (`input_seq \t output_seq`). The custom data reader parses this data with T5Tokenizer and is_pretokenized=True ([see here](https://github.com/Slash0BZ/t5-investigation/blob/main/train_t5.py#L129)) I feed the [T5ForConditionalGeneration model (v.2.11.0)](https://github.com/Slash0BZ/t5-investigation/blob/main/overrides.py#L60) with input_ids, lm_labels, and their corresponding attention_masks during training. The model should not learn anything because the sequences are near-random, but in reality, it converges to a zero loss, meaning that the lm_logits from decoder actually attend to future inputs (after `shift_right()`) and knows the label. During evaluation where I hide the binary tag, the model always predicts positive. ## To reproduce Steps to reproduce the behavior: 1. Use the code in this repo: https://github.com/Slash0BZ/t5-investigation 2. Ran with sample data. I have tried both pre-trained T5-large and also randomly initialized T5-Large ([written like this](https://github.com/Slash0BZ/t5-investigation/blob/main/train_t5.py#L258)) I am not sure if the training data size affects the result. I ran with a training size of 5M. I am happy to provide the full data and a trained model if actual experiments are needed. ## Expected behavior The training loss converges to near-zero and the lm_logits reflects predictions the same as the output sequence during training. However, in evaluation where the data reader hides the binary tag in the output sequence ([achieve through only providing "answer:" in decoder_input_ids](https://github.com/Slash0BZ/t5-investigation/blob/main/overrides.py#L46)), the prediction is uniform. I also tried to change the decoder_input_ids. When it is [0, 1525, 10, 2024], the prediction at position 2 is 2024. When it is [0, 1525, 10, 2062], the prediction at position 2 is 2062. Notes: 1525->"answer", 10->":", 2024->"correct", 2062->"restaurant"
03-02-2021 19:21:22
03-02-2021 19:21:22
Uploaded full dataset and trained model: https://drive.google.com/drive/u/1/folders/1A7PIG1E98uuGUi8mDA2m_6T_oQp8XDhF You can reproduce the issue by simply evaluating the test set using the trained model and observe the behavior with the aforementioned sets of decoder input ids. I suspect the issue is the same during the training process (which makes it converge to zero). I don't think I am doing anything wrong in the code, but please let me know. Thanks!<|||||>Hey @Slash0BZ, Hmm, this might actually be very difficult to debug since 2.11 is quite outdated by now :-/. 2 things: 1) I'm very confident that in the decoder the causal mask is always enabled, so that tokens have **no** access to future tokens -> they should not be able to learn to "cheat". See this line (in 2.11 version): https://github.com/huggingface/transformers/blob/b42586ea560a20dcadb78472a6b4596f579e9043/src/transformers/modeling_t5.py#L707 if you follow the function definition you see that a causal mask is generated if the model is a decoder `self.config.is_decoder is True` - see: https://github.com/huggingface/transformers/blob/b42586ea560a20dcadb78472a6b4596f579e9043/src/transformers/modeling_utils.py#L192 2) There was a bug in the relative positional encoding that was fixed in this PR: https://github.com/huggingface/transformers/pull/8518 . In this PR I also made sure that the original T5 and our T5 implementation give the exact same results.<|||||>Hi @patrickvonplaten, thank you for the quick response! Sorry about the version issue, 2.11.0 was the latest when I conducted all experiments for a paper under review. I understand how the causal mask is created, and I can confirm it is working, but it cannot explain what I see. Below is what I did (recap: 2024 and 2062 are two vocab ids I used for the binary tag, 1525 and 10 represent "answer:") with `decoder_input_ids = [0, 1525, 10, 2062]`, inside the decoder (T5Stack), I printed input_ids, which is of size [16, 31] and of content `[0, 1525, 10, 2062, 0, ... 0]`. The extended_attention_mask is of size [16, 1, 31, 31] and of content (at position 2) `[0, 0, 0, -10000, -10000, ... -10000]`. Is everything here behaving as expected (i.e., should the first few masks be 0?) Under this, the prediction of an instance using a trained model at position 2 is 2062. However, if I change the decoder_input_ids to `[0, 1525, 10, 2024]` (different binary vocab), the **same** model's prediction on the **same** instance at position 2 becomes 2024, showing that it sees what the input is at position 3, or at least it changed with different position 3 inputs. Below is how I got the prediction at position 2, using the lm_logits directly from the forward() function in a T5ForConditionalGeneration. Please let me know if you spot any issues with it. outputs = model( input_ids=inputs['input_ids'], attention_mask=inputs['attention_mask'], decoder_input_ids=inputs['decoder_input_ids'], # lm_labels=inputs['lm_labels'], decoder_attention_mask=inputs['decoder_attention_mask'], use_cache=False, )[0].cpu().numpy() ids = [] for output in outputs: arr = [] binary_tags = [2024, 2062] for val in binary_tags: arr.append(output[2][val]) argmax_idx = int(np.argmax(np.array(arr))) Thanks again for your help. I understand how difficult it is to look at previous versions, but I need to figure out if all experiments need to be re-done. <|||||>Hmm, the `extended_attention_mask` looks correct to me. Position 2 is allowed to attend to itself and to position 0 & 1. 
Also, I ran the following code snippet both on current master and on 2.11 and it passes -> showing that the attention mask works correctly: ```python from transformers import T5ForConditionalGeneration import torch model = T5ForConditionalGeneration.from_pretrained('t5-small') input_ids = torch.tensor([list(range(30))], dtype=torch.long) decoder_input_ids = torch.ones((1, 4), dtype=torch.long) # take output at position 2 logits_at_2 = model(input_ids, decoder_input_ids=decoder_input_ids)[0][:, 2] decoder_input_ids[:, 3] = 10 # take output at position 2 having changed the decoder_input_ids logits_at_2_same = model(input_ids, decoder_input_ids=decoder_input_ids)[0][:, 2] assert abs(logits_at_2.sum().item() - logits_at_2_same.sum().item()) < 1e-3, "Error" ```<|||||>Thanks, @patrickvonplaten . Following your snippet, this is how you can reproduce my issue (please give it a try, it has been bugging me for weeks): ``` from transformers import T5ForConditionalGeneration import torch import numpy as np model = T5ForConditionalGeneration.from_pretrained("trained_model") input_ids = torch.tensor([list(range(30))], dtype=torch.long) decoder_input_ids = torch.tensor([[0, 1525, 10, 2024]]) logits_at_2 = model(input_ids, decoder_input_ids=decoder_input_ids)[0][:, 2] print(np.argmax(logits_at_2[0].detach().cpu().numpy())) decoder_input_ids = torch.tensor([[0, 1525, 10, 2062]]) logits_at_2_same = model(input_ids, decoder_input_ids=decoder_input_ids)[0][:, 2] print(np.argmax(logits_at_2_same[0].detach().cpu().numpy())) assert abs(logits_at_2.sum().item() - logits_at_2_same.sum().item()) < 1e-3, "Error" ``` Where trained_model can be downloaded here: https://drive.google.com/drive/u/1/folders/1A7PIG1E98uuGUi8mDA2m_6T_oQp8XDhF It has the same config as a T5-large, just different learned weights. Under my local env (2.11.0), it prints 2024 and 2062, and triggers the assertion error. Given this, it seems that something corrupted during the training process, and somehow the learned weights let the model to look at future inputs. Do you have any suggestions?<|||||>@patrickvonplaten I just tried the latest Huggingface version with the snippet above using my trained model, and it also triggers the assertion error. Seems like something interesting is going on with certain model weights. <|||||>Interesting, so our causal mask actually doesn't fully force "attending-to-next-tokens" to be impossible -> it just gives it a very large negative number before softmax (-10000) so that after softmax this value should be zero. Maybe your model has incredibly high activations that can somehow overturn the -10000. Could you maybe try the following: - Replace our setting of -10000 with `-float("inf")`. Then the script above should definitely not yield an assertion error anymore<|||||>Thanks @patrickvonplaten . I tried what you said, -float("inf") doesn't work because it makes the logits "NaN". So I tried -999999 and the predictions are now valid. Now that we know what the issue is, here are some of my concerns: - Does this only affect some "extreme" experiments or it affects all experiments more or less? i.e., is that attention value fine as long as it stays below zero, or it's later used continuously? - In my personal view, something has to be done more than this method to create the masks. I found this issue initially when doing a valid experiment studying numerical relations, and the model easily found this "cheating" way out within the first 1k steps. 
I suspect this issue might have affected people's experiments such as studying scrambled word orders, adding noises, etc., just without them knowing. This applies to both pre-trained Google T5 and random-initialized T5 (tried both). Please let me know. Thanks again for your help!<|||||>Thanks for trying it out! Hmm, yeah I've never heard of such an issue before, so I assume that it will only affect the "extreme" experiments. But T5 tends to have very extreme values, which is also why we (so far) managed to run T5 only partly in fp16 mode. We usually like to use -10000 as the masking value because it makes the model `fp16` compatible...Not really sure what to do here -> we could change the masking values to `-inf` in general if such errors occur more often. Also pinging @LysandreJik @sgugger @patil-suraj here. Have you guys heard of a case before where the model learned to cheat the `-10000` masking value? <|||||>We use https://github.com/allenai/allennlp/blob/f091cb9cd92e767f55659b2b59f0ffb75bc613be/allennlp/nn/util.py#L239, which ultimately boils down to using this value: `torch.finfo(tensor.dtype).min`.<|||||>@patrickvonplaten, yes, the `-10000` can totally cheat the value. We've seen that in the past in cases where the output values are passed through an argmax while the probability distribution is very uniform. We've kept `-10000` to stay as close as possible to the original BERT implementation, and we recommend to use as few padding tokens as possible for this not to have an effect (while keeping in mind that the -10000 should keep values very very small and should have a minimal impact). @dirkgr's solution is definitely more robust and I don't think switching the -10000 value to be lower would change anyone's workflow, so I wouldn't be opposed to switching.<|||||>@patrickvonplaten I never faced this issue in my T5 experiments but it does seem possible that -10000 can cause some issues because while investigating the fp16 issue we have seen that T5 produces large activation values. And I agree with @dirkgr solution.<|||||>Hi @patil-suraj @patrickvonplaten @sgugger I am experiencing similar issues with mt5, and I am getting nan always with fp16 mode, you mentioned you partly made T5 work with fp16, do you mind telling me how you managed it? I am having really a hard time with mT5 model + fp16 thanks a lot all <|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread. Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored.<|||||>Putting this on my ToDo-List as it seems to be quite important actually<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread. Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored.
transformers
10,483
closed
add shift on BartForCausalLM
# What does this PR do? Fixes #10480 ## Before submitting - [x] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case). - [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section? - [x] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link to it if that's the case. - [x] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/master/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/master/docs#writing-source-documentation). - [ ] Did you write any new necessary tests? ## Who can review? Anyone in the community is free to review the PR once the tests have passed. Feel free to tag members/contributors which may be interested in your PR. @patil-suraj @patrickvonplaten <!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @ If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**. Please tag fewer than 3 people. Models: - albert, bert, xlm: @LysandreJik - blenderbot, bart, marian, pegasus, encoderdecoder, t5: @patrickvonplaten, @patil-suraj - longformer, reformer, transfoxl, xlnet: @patrickvonplaten - fsmt: @stas00 - funnel: @sgugger - gpt2: @patrickvonplaten, @LysandreJik - rag: @patrickvonplaten, @lhoestq - tensorflow: @jplu Library: - benchmarks: @patrickvonplaten - deepspeed: @stas00 - ray/raytune: @richardliaw, @amogkam - text generation: @patrickvonplaten - tokenizers: @n1t0, @LysandreJik - trainer: @sgugger - pipelines: @LysandreJik Documentation: @sgugger HF projects: - nlp datasets: [different repo](https://github.com/huggingface/nlp) - rust tokenizers: [different repo](https://github.com/huggingface/tokenizers) Examples: - maintained examples (not research project or legacy): @sgugger, @patil-suraj - research_projects/bert-loses-patience: @JetRunner - research_projects/distillation: @VictorSanh -->
03-02-2021 18:37:36
03-02-2021 18:37:36
transformers
10,482
closed
[examples] should all examples support the predict stage?
This is part of the ongoing effort to sync the example scripts. In https://github.com/huggingface/transformers/issues/10437#issuecomment-789090858 it was flagged that some scripts have a test/predict stage, whereas others don't. Should we: A. have all scripts support train/eval/predict, or B. only add predict where it's desired? @sgugger, @patil-suraj, @LysandreJik
03-02-2021 18:06:49
03-02-2021 18:06:49
I think we should have it on all scripts except the language-modeling ones -> it doesn't make much sense there.<|||||>@bhadreshpsavani, please feel free to make this part of your project or not. Please do not feel obliged as you can see a small need quickly expands into a much bigger one. If not, then completing https://github.com/huggingface/transformers/issues/10437 would be a fantastic contribution. And I will then label it to be open to anybody else who would like to contribute. Thank you!<|||||>Sure @stas00, I will take this. Actually, I would love to contribute more. I really enjoy contributing to this community.<|||||>Excellent! If you run into any puzzles please don't hesitate to ask. Thank you!<|||||>Hi @stas00, In the QA examples for the test dataset, should we keep The preprocessing same as evaluate dataset preprocessing? Do we need to apply any additional post-processing as well at the end?<|||||>Yes, that would be needed.<|||||>Hi, @stas00 and @sgugger, Adding predict function for the `run_qa` example is slightly complicated. In the eval section itself, we are generating two files `predictions.json` and `nbest_predictions.json` using `postprocess_qa_predictions` from `utils_qa`. In Predict function also the same file will be generated and override the same files which will not be very good behavior. In the predict function currently `trainer.predict()` calculates metrics but metrics calculation might not require in case of `predict()` right? This issue might take a longer time for me to complete may be few weeks, is it fine?<|||||>> In Predict function also the same file will be generated and override the same files which will not be very good behavior. This is simple: add `eval_` prefix for eval outputs and `test_` for predict. related: eventually we will rename the latter prefix, but for now this is the convention used in all examples. Please see: https://github.com/huggingface/transformers/issues/10165 > In the predict function currently trainer.predict() calculates metrics but metrics calculation might not require in case of predict() right? We always want the metrics for each stage, since now we report speed and memory usage, so for the quality metrics see what makes sense to report there - typically similar to eval. > This issue might take a longer time for me to complete may be few weeks, is it fine? 
Yes, of course, thank you for giving us heads up and the clarity of your needs, @bhadreshpsavani <|||||>Hi @stas00, When I run the evaluation on [example for question answering](https://github.com/bhadreshpsavani/UnderstandingNLP/blob/master/Check_Pretrained_Model_on_SQUAD2.ipynb), I was getting below error, ``` Traceback (most recent call last): File "./transformers/examples/question-answering/run_qa.py", line 546, in <module> main() File "./transformers/examples/question-answering/run_qa.py", line 531, in main metrics = trainer.evaluate() File "/content/transformers/examples/question-answering/trainer_qa.py", line 63, in evaluate metrics = self.compute_metrics(eval_preds) File "./transformers/examples/question-answering/run_qa.py", line 492, in compute_metrics return metric.compute(predictions=p.predictions, references=p.label_ids) File "/usr/local/lib/python3.7/dist-packages/datasets/metric.py", line 403, in compute output = self._compute(predictions=predictions, references=references, **kwargs) File "/root/.cache/huggingface/modules/datasets_modules/metrics/squad/c0855591f1a2c2af8b7949e3146b9c86a6b7f536b4154019b03472639d310181/squad.py", line 109, in _compute score = evaluate(dataset=dataset, predictions=pred_dict) File "/root/.cache/huggingface/modules/datasets_modules/metrics/squad/c0855591f1a2c2af8b7949e3146b9c86a6b7f536b4154019b03472639d310181/evaluate.py", line 68, in evaluate exact_match += metric_max_over_ground_truths(exact_match_score, prediction, ground_truths) File "/root/.cache/huggingface/modules/datasets_modules/metrics/squad/c0855591f1a2c2af8b7949e3146b9c86a6b7f536b4154019b03472639d310181/evaluate.py", line 53, in metric_max_over_ground_truths return max(scores_for_ground_truths) ValueError: max() arg is an empty sequence ``` While calculating metrics at the end, I could not found anything to resolve this! <|||||>I am able to reproduce it - will have a look shortly. Thank you for the notebook - makes it super-easy to reproduce! BTW, I recommend you use `--max_val_samples 10` or similar to make your testing much faster ;)<|||||>OK, so we have a case of a broken dataset and the metrics evaluation code that doesn't check if the input data is valid. While ideally the eval code should be robust against an occasional corrupted input, the dataset should not have broken entries, observe the kind of data it evaluates against (`ground_truths`) ``` [{'answers': {'answer_start': [159, 159, 159, 159], 'text': ['France', 'France', 'France', 'France']}, 'id': '56ddde6b9a695914005b9628'}, {'answers': {'answer_start': [94, 87, 94, 94], 'text': ['10th and 11th centuries', 'in the 10th and 11th centuries', '10th and 11th centuries', '10th and 11th centuries']}, 'id': '56ddde6b9a695914005b9629'}, {'answers': {'answer_start': [], 'text': []}, 'id': '5ad39d53604f3c001a3fe8d1'}, {'answers': {'answer_start': [], 'text': []}, 'id': '5ad39d53604f3c001a3fe8d2'}, {'answers': {'answer_start': [], 'text': []}, 'id': '5ad39d53604f3c001a3fe8d3'}, {'answers': {'answer_start': [], 'text': []}, 'id': '5ad39d53604f3c001a3fe8d4'}, ``` - and it has a ton of those - so this is a big problem on the dataset-level. Having a quick look at the viewer: https://huggingface.co/datasets/viewer/?dataset=squad_v2 I don't immediately see missing answers (scroll horizontally to the right to see it), but it doesn't mean anything. Which most likely means that either the dataset conversion from the original to `datasets` format is borked, or there is some odd bug in the dataloader. But most likely it's the former. 
that is I'd debug this first and validate that it generates the answers correctly: https://github.com/huggingface/datasets/blob/b51cb81e736b86103089a584daa6e43db3c88bb5/datasets/squad_v2/squad_v2.py#L101 if so then proceed to where they are loaded in the script and debug there, and so on - until you find a place where some of the records disappear in what it appears more than half of samples. There you will find the bug. Please let me know if you'd like to try to investigate this and whether my suggestion at how to potentially approach this is clear. If it sounds too complicated please don't hesitate to say so and we will find another way to resolve this. Either way works. Meanwhile you can also try with a different model/dataset pair and see if it works there, which would also help isolate the problem. (if another dataset works then we know for sure the issue lies with `squad_v2`.<|||||>Hi @stas00, For `squad_v1` = `squad` it is working fine. But when i run the following script, ``` !python ./transformers/examples/question-answering/run_qa.py \ --model_name_or_path distilbert-base-uncased \ --train_file ./transformers/tests/fixtures/tests_samples/SQUAD/sample.json \ --validation_file ./transformers/tests/fixtures/tests_samples/SQUAD/sample.json \ --do_eval \ --max_val_sample 10 \ --per_device_train_batch_size 12 \ --learning_rate 3e-5 \ --num_train_epochs 2 \ --max_seq_length 384 \ --doc_stride 128 \ --output_dir /tmp/debug_squad/ ``` which again data, I got the same error In the earlier error logs it is giving like this for this issue, ``` File "/root/.cache/huggingface/modules/datasets_modules/metrics/squad/c0855591f1a2c2af8b7949e3146b9c86a6b7f536b4154019b03472639d310181/squad.py", line 109, in _compute ``` But for the squad_v2 dataset, it should be like this, ``` "/root/.cache/huggingface/modules/datasets_modules/metrics/squad_v2/c0855591f1a2c2af8b7949e3146b9c86a6b7f536b4154019b03472639d310181/squad_v2.py", line 109, in _compute ``` i mean it is picking wrong metrics somehow<|||||>OK, I found this in `run_qa.py`: ``` version_2_with_negative: bool = field( default=False, metadata={"help": "If true, some of the examples do not have an answer."} ) ``` That is add: `--version_2_with_negative` to the cl args and it does: ``` metric = load_metric("squad_v2" if data_args.version_2_with_negative else "squad") ``` Except it's broken too: ``` examples/question-answering/run_qa.py --model_name_or_path ktrapeznikov/albert-xlarge-v2-squad-v2 --dataset_name squad_v2 --do_eval --per_device_train_batch_size 12 --learning_rate 3e-5 --num_train_epochs 2 --max_seq_length 384 --doc_stride 128 --output_dir /tmp/debug_squad/ --max_val_samples 10 --version_2_with_negative Traceback (most recent call last): | 0/11873 [00:00<?, ?it/s] File "examples/question-answering/run_qa.py", line 553, in <module> main() File "examples/question-answering/run_qa.py", line 538, in main metrics = trainer.evaluate() File "/mnt/nvme1/code/huggingface/transformers-master/examples/question-answering/trainer_qa.py", line 62, in evaluate eval_preds = self.post_process_function(eval_examples, eval_dataset, output.predictions) File "examples/question-answering/run_qa.py", line 475, in post_processing_function predictions = postprocess_qa_predictions( File "/mnt/nvme1/code/huggingface/transformers-master/examples/question-answering/utils_qa.py", line 159, in postprocess_qa_predictions null_score = min_null_prediction["score"] TypeError: 'NoneType' object is not subscriptable ``` this fails with the same error: ``` python 
./examples/question-answering/run_qa.py --model_name_or_path distilbert-base-uncased --train_file tests/fixtures/tests_samples/SQUAD/sample.json --validation_file tests/fixtures/tests_samples/SQUAD/sample.json --do_eval --max_val_sample 10 --per_device_train_batch_size 12 --learning_rate 3e-5 --num_train_epochs 2 --max_seq_length 384 --doc_stride 128 --output_dir /tmp/debug_squad/ --version_2_with_negative ``` So it looks like this feature is half-baked, and ideally we should have a test for it. Let me know if you're ok to investigate this new issue. Plus ideally `run_qa.py` should signal the user that if the dataset has missing answers it should bail and recommend using `--version_2_with_negative`. (and the choice of this flag's name could probably be improved too to be more intuitive at what it does, but that's another story) <|||||>Sure @stas00, I can investigate this issue. One more thing currently `run_qa.py` only supports `squad_v1` and `squad_v2` datasets completely, right? Shouldn't it support all the different question-answering datasets of the reading comprehension task? I mean preprocessing and postprocessing might be different for all the tasks but the concept is the same for all the tasks. Please correct me if I am wrong. I tried narrativeQA it was not supporting that one.<|||||>Awesome! > One more thing currently run_qa.py only supports squad_v1 and squad_v2 datasets completely, right? Shouldn't it support all the different question-answering datasets of the reading comprehension task? I mean preprocessing and postprocessing might be different for all the tasks but the concept is the same for all the tasks. Please correct me if I am wrong. Honestly, I have no idea as I didn't write it. @sgugger, @patrickvonplaten, do we want `run_qa.py` to support more than `squad_v1` and `squad_v2` datasets? Thank you!<|||||>Like all other examples, the script is given as just that, an example. As said in the main README under "Why shouldn't I use Transformers?": ``` While we strive to present as many use cases as possible, the scripts in our examples folder are just that: examples. It is expected that they won't work out-of-the box on your specific problem and that you will be required to change a few lines of code to adapt them to your needs. ``` So like all other example scripts, the `run_qa` script will support any dataset that is structured the same way as the original dataset that was used with it (squad) but if the user wants the script to work on another dataset structured differently they will need to tweak it to their needs.<|||||>Hi @stas00 and @sgugger, I figured the cause for that squad2 issue while using `max_sample_*` arguments, I fixed it locally. The cause of the error was in the below line https://github.com/huggingface/transformers/blob/5f19c07a704eca4db376b56f950b729dcaa73039/examples/question-answering/run_qa.py#L493 It uses entire data for reference while it should use only `max_sample_*` data only. 
I tried to fix it by below code, ```python if training_args.do_eval: if "validation" not in datasets: raise ValueError("--do_eval requires a validation dataset") eval_examples = datasets["validation"] if data_args.max_val_samples is not None: # We will select sample from whole data eval_examples = eval_examples.select(range(data_args.max_val_samples)) # Validation Feature Creation eval_dataset = eval_examples.map( prepare_validation_features, batched=True, num_proc=data_args.preprocessing_num_workers, remove_columns=column_names, load_from_cache_file=not data_args.overwrite_cache, ) if data_args.max_val_samples is not None: # During Feature creation dataset samples might increase, we will select required samples again eval_dataset = eval_dataset.select(range(data_args.max_val_samples)) ``` and I used `eval_examples ` instead `datasets["validation"]` in the above-mentioned line, This is working fine for data I tested, but as we know after applying `prepare_validation_features`, `eval_dataset ` and `eval_examples` might not have the same data (length will be same but because of sliding window example and feature might not be representing the same item) For `max_val_samples=10` these changes are working fine so I added predict/test method, It seems to be working fine. https://github.com/huggingface/transformers/blob/5f19c07a704eca4db376b56f950b729dcaa73039/examples/question-answering/utils_qa.py#L214-L226 i modified in below lines, ```python prediction_file = os.path.join( output_dir, "predictions.json" if prefix is None else f"{prefix}_predictions.json" ) nbest_file = os.path.join( output_dir, "nbest_predictions.json" if prefix is None else f"{prefix}_nbest_predictions.json" ) if version_2_with_negative: null_odds_file = os.path.join( output_dir, "null_odds.json" if prefix is None else f"{prefix}_null_odds_{prefix}.json" ``` because while passing prefix and running it I was getting error like `string can't have json attribute` it will save test and eval files like `test_predictions.json` and `eval_predictions.json` <|||||>Good job getting to the root of the issue, I hadn't thought of that when you added the `max_sample_xxx` but this task uses a subclass of the main `Trainer` that does require the original `eval_examples`. The fix you propose appears good to me and you should definitely make a PR with it as soon as you can :-) For the predicting stage, note that the subclass of the `Trainer` will require a test_dataset and test_examples to work (basically to interpret the predictions of the model as spans of the original texts, the `Trainer` needs the original texts). I do think adding a `--do_predict` to `run_qa` is going to be a bit complex so should be treated separately so my advise would be to: 1. make a PR with the fix for evaluation in run_qa/run_qa_beam_search when `max_val_samples` is passed 2. make a PR to add predict in all but run_qa/run_qa_beam_search scripts (when it makes sense of course) 3. make a PR to add predict in run_qa/run_qa_beam_search Let me know if that makes sense to you and if you need any help along the way (or don't want to do one of those steps yourself).<|||||>Really awesome, @bhadreshpsavani! Glad you were able to find out the cause!<|||||>I gone through the traceback(stack trace) by running example with different stage combination and figured the root. Since I wrote earlier code about max_sample and the error was related I was able to find it! 
Thanks @stas00 and @sgugger for your constant guidance, Now opensource contribution don't seems very difficult like earlier! Now I need to figure out for two more examples `run_swag.py` and `run_xlni.py`, all the other examples has predict if we ignore language modeling examples<|||||>Hi @sgugger, For `run_swag.py` and `run_xlni.py` changes are still remaining<|||||>Hi @stas00, I was working to update `run_swag.py` and I got logs like for test metrics, ``` INFO|trainer_pt_utils.py:656] 2021-03-21 20:56:40,939 >> ***** test metrics ***** [INFO|trainer_pt_utils.py:661] 2021-03-21 20:56:40,939 >> eval_accuracy = 1.0 [INFO|trainer_pt_utils.py:661] 2021-03-21 20:56:40,939 >> eval_loss = 0.2582 [INFO|trainer_pt_utils.py:661] 2021-03-21 20:56:40,939 >> eval_runtime = 4.1585 [INFO|trainer_pt_utils.py:661] 2021-03-21 20:56:40,941 >> eval_samples_per_second = 2.405 [INFO|trainer_pt_utils.py:661] 2021-03-21 20:56:40,941 >> test_mem_cpu_alloc_delta = 0MB [INFO|trainer_pt_utils.py:661] 2021-03-21 20:56:40,941 >> test_mem_cpu_peaked_delta = 0MB [INFO|trainer_pt_utils.py:661] 2021-03-21 20:56:40,941 >> test_samples = 10 ``` I think we need to write `trainer_swag.py` because even predictions files need to be saved like `test_prediction.json` / `eval_prediction.json`.<|||||>If I understood your correctly you suggest to write `MultipleChoiceTrainer` subclass. If so unlike the `question-answering` folder that has 2 scripts, `multiple-choice` has only one script so one way is to add it directly in `run_swag.py` or if there is a convention then as you suggest as `trainer_mc.py`. I think follow you instinct and then we can decide at PR point whether to have it in a separate file. <|||||>Hi, @stas00 @sgugger, The changes are ready for both examples, `run_xlni.py` works perfectly. 
`run_swag.py` has an issue after adding changes of predict, ``` python ./examples/multiple-choice/run_swag.py --model_name_or_path distilbert-base-uncased --do_train --do_eval --do_predict --max_train_samples 5 --max_val_samples 5 --max_test_samples 5 --learning_rate 5e-5 --num_train_epochs 3 --output_dir D:/tmp/swag_base --per_gpu_eval_batch_size=16 --per_device_train_batch_size=16 --overwrite_output ``` it gives the following error while prediction, ``` Traceback (most recent call last): File "./examples/multiple-choice/run_swag.py", line 481, in <module> main() File "./examples/multiple-choice/run_swag.py", line 466, in main predictions, labels, metrics = trainer.predict(test_dataset=test_dataset) File "d:\transformers\src\transformers\trainer.py", line 1762, in predict output = self.prediction_loop( File "d:\transformers\src\transformers\trainer.py", line 1829, in prediction_loop loss, logits, labels = self.prediction_step(model, inputs, prediction_loss_only, ignore_keys=ignore_keys) File "d:\transformers\src\transformers\trainer.py", line 1943, in prediction_step loss, outputs = self.compute_loss(model, inputs, return_outputs=True) File "d:\transformers\src\transformers\trainer.py", line 1504, in compute_loss outputs = model(**inputs) File "C:\Users\Bhadr\Miniconda3\envs\trans-env2\lib\site-packages\torch\nn\modules\module.py", line 889, in _call_impl result = self.forward(*input, **kwargs) File "d:\transformers\src\transformers\models\distilbert\modeling_distilbert.py", line 929, in forward loss = loss_fct(reshaped_logits, labels) File "C:\Users\Bhadr\Miniconda3\envs\trans-env2\lib\site-packages\torch\nn\modules\module.py", line 889, in _call_impl result = self.forward(*input, **kwargs) File "C:\Users\Bhadr\Miniconda3\envs\trans-env2\lib\site-packages\torch\nn\modules\loss.py", line 1047, in forward return F.cross_entropy(input, target, weight=self.weight, File "C:\Users\Bhadr\Miniconda3\envs\trans-env2\lib\site-packages\torch\nn\functional.py", line 2690, in cross_entropy return nll_loss(log_softmax(input, 1), target, weight, None, ignore_index, None, reduction) File "C:\Users\Bhadr\Miniconda3\envs\trans-env2\lib\site-packages\torch\nn\functional.py", line 2385, in nll_loss ret = torch._C._nn.nll_loss(input, target, weight, _Reduction.get_enum(reduction), ignore_index) IndexError: Target -1 is out of bounds. ``` The problem is with test dataset, ```python dataset = load_dataset("swag", "regular") pd.Series(dataset['test']['label']).value_counts() """ Output: -1 20005 dtype: int64 """ ``` It should have label In range(0, 3) like eval dataset not -1, ```python pd.Series(dataset['validation']['label']).value_counts() """ Output: 2 5038 1 5029 3 5006 0 4933 dtype: int64 """ ``` I am not sure how can we fix this since it's with the dataset and this example is for only swag. when we pass the sample data present in `tests/fixtures/tests_samples/swag/sample.json` it works fine since it has label 0. There is another small issue that needs to be fixed as well in the `trainer.py` https://github.com/huggingface/transformers/blob/a8d4d6776dd8a759324d0f57c60e8a738e7977a4/src/transformers/trainer.py#L1724-L1726 Because of this current test files are being saved with eval prefix, the issue that I mentioned in my previous comment. It should be `metric_key_prefix: str = "test"`. Please correct me if I am wrong. 
<|||||>I'm not sure we need the predict stage for `run_swag`, especially if the test dataset is "broken".<|||||>Filing an Issue with https://github.com/huggingface/datasets/issues and meanwhile using the local test file for the README.md? > There is another small issue that needs to be fixed as well in the trainer.py Indeed. In the default value and also the docstring - I'd say please make a separate PR for that as it's a standalone bug. And then we will eventually take care of https://github.com/huggingface/transformers/issues/10165 but for now let's use `test`.<|||||>Hi @stas00, The original [dataset](https://github.com/rowanz/swagaf/tree/master/data) doesn't have a label in the test dataset so it's using `-1` as a label. `meanwhile using the local test file for the README.md` - I don't understand this part, can you please tell me a bit more about this? I will create a PR with `run_xlni.py`, Please let me know if we need to add changes for `run_swag` (On sample test data predict is working fine)<|||||>I see, they made a real "test" split where the target is unknown ;) > meanwhile using the local test file for the README.md - I don't understand this part, can you please tell me a bit more about this? I meant that perhaps the example in `README.md` (and test if we add any) could use a test dataset that we know has the labels, (like tests/fixtures/tests_samples/swag/sample.json it) does it make sense? And indicate that the swag dataset's `test` split can't be used here because it doesn't have labels. This is just an idea. Alternatively, we could create a new dataset derived from the original swag, but whose test does have the labels. So basically take train+eval splits, merge them, re-splice them into train+eval+test. But perhaps this is a much larger project and perhaps it's not worth the effort. That's why I thought that for the exemplification feeding it a known to have labels test dataset split would be sufficient. The other approach suggested by @sgugger is not to have the predict stage in the first place. and perhaps documenting in the script why it's missing. Bottom line - if you can think of a way that works for you to make it happen, then please go for it. If not, let's just leave a comment in place of what could be the predict stage.<|||||>Hi @stas00, I made a typo in `qa_util.py` https://github.com/huggingface/transformers/blob/1c06240e1b3477728129bb58e7b6c7734bb5074e/examples/question-answering/utils_qa.py#L225 I forget to remove the second prefix, I will fix it. > Alternatively, we could create a new dataset derived from the original swag, but whose test does have the labels. So basically take train+eval splits, merge them, re-splice them into train+eval+test. But perhaps this is a much larger project and perhaps it's not worth the effort. I like this idea, I don't think this will take much time, It will hardly take few minutes. I worked on the dataset today at my work and it's really cool! The modifying dataset is very easy. Perhaps we can add one modified version of mrpc with your suggestion and add it to the huggingface datasets. <|||||>@sgugger, do you think it's a reasonable approach to re-make the dataset as I suggested so that it has a test split that has labels and then we can easier support the `predict` section in this example.<|||||>Helllo @sgugger and @stas00, I was facing an issue while running examples/tensorflow/text-classification/run_text_classification.py without the test stage. 
at this code https://github.com/huggingface/transformers/blob/ba0d50f2148f0db0e04a80cddb1f57ce0c91c182/examples/tensorflow/text-classification/run_text_classification.py#L523-L529 if the user doesn't pass the test file and run the script like this, ``` Traceback (most recent call last): File "transformers/examples/tensorflow/text-classification/run_text_classification.py", line 534, in <module> main() File "transformers/examples/tensorflow/text-classification/run_text_classification.py", line 525, in main if "label" in datasets["test"].features: KeyError: 'test' ``` I think the code should be like this ```python if "test" in datasets and "label" in datasets["test"].features: print("Computing prediction loss on test labels...") labels = datasets["test"]["label"] loss = float(loss_fn(labels, predictions).numpy()) print(f"Test loss: {loss:.4f}") ``` Ref: [Colab Notebook](https://github.com/bhadreshpsavani/HuggingfaceModelTraining/blob/main/SentimentDetection.ipynb) What's your suggestion?<|||||>cc @Rocketknight1 <|||||>You're completely correct - I overlooked that one while making a lot of changes to the script. I'll fix it now!
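For reference, a hedged sketch of the predict stage discussed in this thread, with `test_`-prefixed outputs so they don't overwrite the eval files. It assumes the `Trainer` object, the `log_metrics`/`save_metrics` helpers and the `metric_key_prefix` argument available in the example scripts of this period; `test_dataset`, `output_dir` and the argmax over logits are placeholders to adapt per task.

```python
import os
import numpy as np

def run_predict_stage(trainer, test_dataset, output_dir):
    """Run the predict stage and write test_-prefixed metrics and predictions."""
    results = trainer.predict(test_dataset, metric_key_prefix="test")
    trainer.log_metrics("test", results.metrics)
    trainer.save_metrics("test", results.metrics)

    predictions = np.argmax(results.predictions, axis=1)  # classification-style head
    if trainer.is_world_process_zero():
        with open(os.path.join(output_dir, "test_predictions.txt"), "w") as writer:
            for index, item in enumerate(predictions):
                writer.write(f"{index}\t{item}\n")
```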
transformers
10,481
closed
feat(docs): navigate with left/right arrow keys
# What does this PR do? <!-- Congratulations! You've made it this far! You're not quite done yet though. Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution. Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change. Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost. --> <!-- Remove if not applicable --> Enables docs navigation with left/right arrow keys. It can be useful for the ones who navigate with keyboard a lot. More info : https://github.com/sphinx-doc/sphinx/pull/2064 You can try here : https://174105-155220641-gh.circle-artifacts.com/0/docs/_build/html/index.html ## Before submitting - [x] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case). - [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section? - [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link to it if that's the case. - [ ] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/master/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/master/docs#writing-source-documentation). - [ ] Did you write any new necessary tests? ## Who can review? Anyone in the community is free to review the PR once the tests have passed. Feel free to tag members/contributors which may be interested in your PR. @sgugger <!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @ If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**. Please tag fewer than 3 people. Models: - albert, bert, xlm: @LysandreJik - blenderbot, bart, marian, pegasus, encoderdecoder, t5: @patrickvonplaten, @patil-suraj - longformer, reformer, transfoxl, xlnet: @patrickvonplaten - fsmt: @stas00 - funnel: @sgugger - gpt2: @patrickvonplaten, @LysandreJik - rag: @patrickvonplaten, @lhoestq - tensorflow: @jplu Library: - benchmarks: @patrickvonplaten - deepspeed: @stas00 - ray/raytune: @richardliaw, @amogkam - text generation: @patrickvonplaten - tokenizers: @n1t0, @LysandreJik - trainer: @sgugger - pipelines: @LysandreJik Documentation: @sgugger HF projects: - nlp datasets: [different repo](https://github.com/huggingface/nlp) - rust tokenizers: [different repo](https://github.com/huggingface/tokenizers) Examples: - maintained examples (not research project or legacy): @sgugger, @patil-suraj - research_projects/bert-loses-patience: @JetRunner - research_projects/distillation: @VictorSanh -->
03-02-2021 15:17:59
03-02-2021 15:17:59
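For readers wondering how this is typically wired up: with the Read the Docs theme, keyboard navigation is usually a single theme option in `docs/source/conf.py`. Whether this PR enables it through this exact option or through the upstream Sphinx change linked above is an assumption here, not a statement about the actual diff.

```python
# docs/source/conf.py (sketch, assuming sphinx_rtd_theme)
html_theme = "sphinx_rtd_theme"
html_theme_options = {
    "navigation_with_keys": True,  # left/right arrow keys jump to the previous/next page
}
```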
transformers
10,480
closed
Different result in AutoModelForCausalLM
# 🚀 Feature request Models inside AutoModelForCausalLM have different behavior on loss calculation. In BartForCausalLM there is no shift in loss calculation https://github.com/huggingface/transformers/blob/b013842244df7be96b8cc841491bd1e35e475e36/src/transformers/models/bart/modeling_bart.py#L1745 ``` loss_fct = CrossEntropyLoss() loss = loss_fct(logits.view(-1, self.config.vocab_size), labels.view(-1)) ``` In RobertaForCausalLM A shift is applied before loss calculation https://github.com/huggingface/transformers/blob/b013842244df7be96b8cc841491bd1e35e475e36/src/transformers/models/roberta/modeling_roberta.py#L944 ``` # we are doing next-token prediction; shift prediction scores and input ids by one shifted_prediction_scores = prediction_scores[:, :-1, :].contiguous() labels = labels[:, 1:].contiguous() loss_fct = CrossEntropyLoss() lm_loss = loss_fct(shifted_prediction_scores.view(-1, self.config.vocab_size), labels.view(-1)) ``` ## Motivation I found a mistake when I switched the config from Roberta to BART in AutoModelForCausalLM. It turns out to be different labeling in loss. So, It would be nice to make CausalLM models handle label in the same way, either shift or not. ## Your contribution I can make a PR to make sure that all the models will have a shift prediction.
03-02-2021 12:37:38
03-02-2021 12:37:38
Hi @voidful The reason we need to shift labels in RoBERTa is that the `labels` start with the `decoder_start_token_id` (`pad` or `bos`), which are then passed directly to the decoder as `decoder_input_ids`; this is why we need to shift the `labels` when calculating the loss. Now in BART, `decoder_input_ids` are prepared inside the model by prepending the `decoder_start_token_id` to the `labels`, so we don't need to shift the labels there https://github.com/huggingface/transformers/blob/b013842244df7be96b8cc841491bd1e35e475e36/src/transformers/models/bart/modeling_bart.py#L1274 Hope this clears up the difference.<|||||>> Hi @voidful > > The reason we need to shift labels in RoBERTa is that the `labels` start with the `decoder_start_token_id` (`pad` or `bos`), > which are then passed directly to the decoder as `decoder_input_ids`; this is why we need to shift the `labels` when calculating the loss. > > Now in BART, `decoder_input_ids` are prepared inside the model by prepending the `decoder_start_token_id` to the `labels`, so we don't need to shift the labels there > > https://github.com/huggingface/transformers/blob/b013842244df7be96b8cc841491bd1e35e475e36/src/transformers/models/bart/modeling_bart.py#L1274 > > Hope this clears up the difference. I got your point, but that seems to apply only to BartForConditionalGeneration, https://github.com/huggingface/transformers/blob/b013842244df7be96b8cc841491bd1e35e475e36/src/transformers/models/bart/modeling_bart.py#L1272 Maybe we can apply the same strategy to BartForCausalLM: https://github.com/huggingface/transformers/blob/b013842244df7be96b8cc841491bd1e35e475e36/src/transformers/models/bart/modeling_bart.py#L1594<|||||>Oh, I missed your point, great catch! Feel free to open a PR to add the same strategy to `BartForCausalLM`, i.e. prepare `decoder_input_ids` using `shift_tokens_right` and pass them as the `input_ids` to the decoder. cc @patrickvonplaten <|||||>Hmm, I'm not 100% sure whether everybody is on the same page here. `BartForCausalLM` was mostly created to be used in combination with `EncoderDecoderModel` and not as a standalone model. Also, RoBERTa requires both `input_ids` and `labels` as an input to correctly calculate the loss - the difference is just that `input_ids` should be equal to `labels`, with the labels being shifted under the hood. This is not the same thing as the `shift_tokens_right` function, which fully generates the `decoder_input_ids` from the labels... I think I would be fine with changing the behavior of `BartForCausalLM` so that `labels==input_ids` can be input to the function, even if this would be a slight breaking change. It would align `BartForCausalLM` more closely with `RobertaForCausalLM, GPT2LMHeadModel, ...`, which would then also allow `EncoderDecoderModel` to have a general `shift_tokens` function. Does this make sense?<|||||>`BartForCausalLM` does accept `labels==input_ids`; in general, all the decoders in `EncoderDecoder` accept that, and that's what we have documented: pass the same input as `labels` and `decoder_input_ids`. The reason I suggested using `shift_tokens_right` is that BART uses `eos` as the `decoder_start_token`, which the `shift_tokens_right` function handles. This is different from `RobertaForCausalLM, GPT2LMHeadModel ...`
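To make the two conventions concrete, here is a small self-contained sketch (toy tensors; the `pad_token_id`/`decoder_start_token_id` values and the random logits are made up, and `shift_tokens_right` below is a simplified re-implementation of the helper in `modeling_bart.py`, not the library function itself):

```python
import torch
import torch.nn.functional as F

pad_token_id, decoder_start_token_id = 1, 2  # made-up ids for the sketch
labels = torch.tensor([[42, 43, 44, 45]])
vocab_size = 50
logits = torch.randn(1, 4, vocab_size)  # stand-in for the decoder output, one position per token

# Convention A (RobertaForCausalLM / GPT-2 style): the decoder is fed input_ids == labels,
# so the shift happens inside the loss.
loss_a = F.cross_entropy(logits[:, :-1, :].reshape(-1, vocab_size), labels[:, 1:].reshape(-1))

# Convention B (BartForConditionalGeneration style): decoder inputs are rebuilt from the labels
# by prepending the decoder start token, and the loss then uses the unshifted labels.
def shift_tokens_right(input_ids, pad_token_id, decoder_start_token_id):
    shifted = input_ids.new_zeros(input_ids.shape)
    shifted[:, 1:] = input_ids[:, :-1].clone()
    shifted[:, 0] = decoder_start_token_id
    shifted.masked_fill_(shifted == -100, pad_token_id)  # -100 label padding becomes pad tokens
    return shifted

decoder_input_ids = shift_tokens_right(labels, pad_token_id, decoder_start_token_id)
# `logits` would come from running the decoder on `decoder_input_ids`; no shift in the loss:
loss_b = F.cross_entropy(logits.reshape(-1, vocab_size), labels.reshape(-1))
```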
transformers
10,479
closed
Question regarding training of BartForConditionalGeneration
Hello Guys, I am trying to fine-tune the BART summarization model but due to the lack of big dataset, having some difficulties with the fine-tuning. Thus, I decided to look at the trainig process of BartForConditionalGeneration model in detail. I came across this article, [Introducing BART](https://sshleifer.github.io/blog_v2/jupyter/2020/03/12/bart.html) from one of the engineers, @sshleifer, at HuggingFace. It says that BartModel was directly fine-tuned for the summarisation task without **any new randomly initialized heads**. **My question is about this fine-tuning process, especially on CNN-DailyMail dataset. Do you guys fine-tune the entire Bart model or only the decoder or something else?** I looked at the example [fine-tuning script](https://github.com/huggingface/transformers/blob/master/examples/seq2seq/run_seq2seq.py) provided on the GitHub but I didn't find anything related to freezing some part of the model. &nbsp; I also tried to look at the source code of the BartForConditionalGeneration model and observed the following - Its just adds a [linear layer](https://github.com/huggingface/transformers/blob/master/src/transformers/models/bart/modeling_bart.py#L1207) on top of the BartModel (copy-pasting the `__init__` code here for quick reference). ``` self.model = BartModel(config) self.register_buffer("final_logits_bias", torch.zeros((1, self.model.shared.num_embeddings))) self.lm_head = nn.Linear(config.d_model, self.model.shared.num_embeddings, bias=False) ``` At first, I thought these are the new parameters that are being introduced and thus, being trained. Therefore, I tried the following code to check the number of trainable parameters while keeping the endoer and decoder fixed - ``` from transformers import BartModel, BartForConditionalGeneration, BartTokenizer def freeze_params(model): for par in model.parameters(): par.requires_grad = False model_sum = BartForConditionalGeneration.from_pretrained('facebook/bart-large') freeze_params(model_sum.get_encoder()) ## freeze the encoder freeze_params(model_sum.get_decoder()) ## freeze the decoder model_sum.train() ## set the train mode train_p = [p for p in model_sum.parameters() if p.requires_grad] ## get the trainable params print(f'Length of train params in Summarization Model : {len(train_p)}') ``` But this code shows that the list is empty. One thing I can do is to explictly set the `requires_grad=True` for the paramters in the `model_sum.lm_head` and only fine-tune these parameters. But I am curious to understand the original training/fine-tuning process. **It would be of great help to me if you guys could answer my question.** P.S. - Love the HuggingFace library. Thanks, <br /> Naman
03-02-2021 11:26:09
03-02-2021 11:26:09
~Please ask on discuss.huggingface.co so that others can see your answer!~<|||||>Thanks for the prompt reply, @sshleifer. I wasn't aware of the discussion page. In that case, should I close the issue since it is just a query question? Thanks, Naman<|||||>- I got confused. This is a reasonable place for this post. Sorry. - Bart fine-tuned with no weights frozen, but this can be very slow. `self.lm_head` is tied (the same parameter as) to self.encoder.embed_tokens and self.decoder.embed_tokens. - I would recommend fine-tuning on your dataset with only the encoder frozen. - `final-logits-bias` doesn't matter it's all 0s and frozen.<|||||>Thanks for your quick response. This is super-helpful to me. I have one more question related to the training process. My understanding is that BartModel (bart-base) was trained with two input sequences just like Google Bert (sadly, details are not given in the original paper. They only mention the difference in the objective function). On the other hand, BartForConditionalGeneration was trained (fine-tuned) for the summarization task i.e. single input and single input. In my current task, I am trying, to do *constraint summarization using multi-document*. Ideally, I would want my model to take **two different inputs** ``` input_encodings = self.tokenizer.batch_encode_plus(list(zip(example_batch['inp1'], example_batch['inp2'])), padding='max_length', truncation=True) ``` but I feel this kind of fine-tuning will be harder given my smaller dataset and the change in training regime. The other option is simply **concatenating the two documents** and passing it as one sequence ``` ## comb_inp is a list of concatenated inputs from example_batch input_encodings = self.tokenizer.batch_encode_plus(comb_inp, padding='max_length', truncation=True) ``` In your opinion, which one makes more sense? Your feedback and comments would be hugely appreciated in this case? P.S. - I am using your distilled-bart model for fine-tuning since it is smaller version 😇 Thanks, Naman <|||||>- I would try concatenating. - I would also grid search evaluation parameters (min_length, max_length, num_beam, length_penalty). - I would evaluate a few distillbart/distill-pegasus variants before any fine-tuning to decide which to start from. <|||||>Hey @sshleifer, I was thinking of using the default hyper-parameters used during the training as a starting point. I could find the default settings for BART model ([here](https://github.com/pytorch/fairseq/blob/master/examples/bart/README.summarization.md)) but not for distilled BART. I even looked at the paper but the values are missing over there as well. Would it be possible for you to provide me/refer me with/to the default settings that you used to train the model. Thanks, Naman <|||||>all the scripts have been moved to the [research_projects](https://github.com/huggingface/transformers/blob/master/examples/research_projects/seq2seq-distillation/) directory.<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread. 
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored.<|||||>Hey @sshleifer, I was trying to fine-tune the `distill-pegasus-cnn-16-4` [model](https://huggingface.co/sshleifer/distill-pegasus-cnn-16-4) provided by you but I am not sure of the hyper-parameters and the corresponding file is also not avaialble in the [`research_projects/seq2seq_distillation`](https://github.com/huggingface/transformers/tree/master/examples/research_projects/seq2seq-distillation) directory. Could you please share the hyper-parameters that you used to train this model (and achieve the results shown in Table 5 from your [paper](https://arxiv.org/pdf/2010.13002.pdf))? Thanks a lot! Naman<|||||>To clarify, you are trying to reproduce `distill-pegasus-cnn-16-4`, rather than finetune it further? 1) Make a student using make_student.py or get one from model hub. 2) guessing the train command from memory. I think it should be a combination of `train_distilbart_cnn.sh` and `train_pegasus_xsum.sh`. - You definitely want `max_target_length=142, --adafactor, --freeze-embeds --freeze-encoder`. - I think fp16 will not work. So roughly... ```bash python finetune.py \ --learning_rate=1e-4 \ --do_train \ --do_predict \ --n_val 1000 \ --val_check_interval 0.25 \ --max_target_length 142 \ --freeze_embeds --label_smoothing 0.1 --adafactor \ --eval_beams 2 \ --freeze-encoder \ --sortish_sampler \ --model_name_or_path $YOUR_STUDENT \ "$@" ``` 3) Copy the `config.json` for `distill-pegasus-cnn-16-4` to increase rouge score of trained student model. Hope that helps, and sorry for not having a definitive answer. <|||||>Hey @sshleifer , Sorry, I think I phrased the question wrongly. I am trying to finetune the `distill-pegasus` model on my own dataset since the original pegasus model is huge and is taking a lot of time. I was simply hoping if you could provide me the hyper-parameters like you have provided for `distill-bart` models for CNN/DailyMail dataset ([here](https://github.com/huggingface/transformers/blob/master/examples/research_projects/seq2seq-distillation/train_distilbart_cnn.sh)). Would I be able to use the hyper-parameters that you provided above? Thanks, Naman<|||||>Yes<|||||>Hello, @[sshleifer](https://github.com/sshleifer) how can I freeze only the first two layer from the encoder (I am using BART), and how can I change the dropout of some layers also in BART? Thank you<|||||>Hey @JessicaLopezEspejel , `model.get_encoder().layers` will give you a list (`torch.nn.modules.container.ModuleList` to be precise) of layers in encoder, and you can freeze the required layers using the `freeze_params` function provided in the `utils.py` file. I have included a small code snippet for your reference. Hope this helps! ``` from torch import nn from transformers import AutoTokenizer, AutoModel def freeze_params(model: nn.Module): """Set requires_grad=False for each of model.parameters()""" for par in model.parameters(): par.requires_grad = False model = AutoModel.from_pretrained("facebook/bart-large") enc_layers = model.get_encoder().layers freeze_params(enc_layers[0]) # freeze layer 0 dropout = enc_layers[0].dropout # return dropout value for layer 0 enc_layers[0].dropout = 0.5 # set dropout value for layer 0 ``` Thanks, Naman <|||||>Than you so much @bnaman50 , I will try it.
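A short sketch of the "grid search the evaluation parameters" advice given earlier in this thread (the checkpoint is the one discussed above, the parameter grid is only an example, and scoring the candidates against reference summaries, e.g. with ROUGE, is left out):

```python
from itertools import product
from transformers import AutoModelForSeq2SeqLM, AutoTokenizer

model_name = "sshleifer/distill-pegasus-cnn-16-4"  # any seq2seq summarization checkpoint works
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForSeq2SeqLM.from_pretrained(model_name)

text = "Put a validation article here ..."
inputs = tokenizer(text, return_tensors="pt", truncation=True)

grid = {"num_beams": [2, 4], "length_penalty": [0.8, 1.0, 2.0], "min_length": [20, 50], "max_length": [142]}
for values in product(*grid.values()):
    params = dict(zip(grid.keys(), values))
    summary_ids = model.generate(inputs["input_ids"], **params)
    summary = tokenizer.decode(summary_ids[0], skip_special_tokens=True)
    print(params, "->", summary[:80])  # in practice, score each candidate instead of printing it
```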
transformers
10,478
closed
generate() decoder_input_ids padding
## Environment info <!-- You can run the command `transformers-cli env` and copy-and-paste its output below. Don't forget to fill out the missing fields in that output! --> - `transformers` version: 4.4.0.dev0 - Platform: Linux-5.4.0-66-generic-x86_64-with-debian-buster-sid - Python version: 3.7.9 - PyTorch version (GPU?): 1.7.1 (True) - Tensorflow version (GPU?): not installed (NA) - Using GPU in script?: Yes (but unrelevant) - Using distributed or parallel set-up in script?: No ### Who can help As it is a generation issue: - blenderbot, bart, marian, pegasus, encoderdecoder, t5: @patrickvonplaten, @patil-suraj But also may benefit from better documentation: Documentation: @sgugger ## Information When using the `generate()` method from `generation_utils` for Bart (although other models will probably behave the same by checking the code), I am using `decoder_input_ids` to inform of the first tokens each sample in the batch should start with. Although not stated in the documentation (see [post in HF forum](https://discuss.huggingface.co/t/generate-continuation-for-seq2seq-models/)), `generate()` can take `decoder_input_ids` as the forward method would. This works fine with batch size equal to one or if the `decoder_input_ids` would all have same length and not require padding. However, when padding is involved, the `generate` method does not ignore the padding tokens in order to generate text for each sample in the batch, generating instead after the padding tokens in `decoder_input_ids`. ## To reproduce Steps to reproduce the behavior: 1. Load your favorite bart model for generation. 2. Prepare your `inputs_ids` for the encoder and the `decoder_input_ids` for your decoder, using sequences of different length. 3. Check the generated text. <!-- If you have code snippets, error messages, stack traces please provide them here as well. Important! Use code tags to correctly format your code. See https://help.github.com/en/github/writing-on-github/creating-and-highlighting-code-blocks#syntax-highlighting Do not use screenshots, as they are hard to read and (more importantly) don't allow others to copy-and-paste your code.--> ``` from transformers import AutoConfig, AutoModelForSeq2SeqLM, AutoTokenizer tokenizer = AutoTokenizer.from_pretrained("sshleifer/distilbart-cnn-6-6") model = AutoModelForSeq2SeqLM.from_pretrained("sshleifer/distilbart-cnn-6-6") model.to("cuda:0") model.eval() inputs_text = ['Hugging Face is taking its first step into machine translation this week with the release of more than 1,000 models. Researchers trained models using unsupervised learning and the Open Parallel Corpus (OPUS). OPUS is a project undertaken by the University of Helsinki and global partners to gather and open-source a wide variety of language data sets, particularly for low resource languages. 
Low resource languages are those with less training data than more commonly used languages like English.', 'Hugging Face has announced the close of a $15 million series A funding round led by Lux Capital, with participation from Salesforce chief scientist Richard Socher and OpenAI CTO Greg Brockman, as well as Betaworks and A.Capital.'] decoder_text = ['</s><s>Hugging Face released', '</s><s>Hugging Face closed a'] inputs = tokenizer(inputs_text, return_tensors = 'pt', padding=True) inputs_decoder = tokenizer(decoder_text, return_tensors = 'pt', padding=True, add_special_tokens = False) generated_tokens = model.generate(inputs['input_ids'].to(model.device), attention_mask=inputs['attention_mask'].to(model.device), decoder_input_ids = inputs_decoder['input_ids'].to(model.device)) decoded_preds = tokenizer.batch_decode(generated_tokens, skip_special_tokens=False) print('\n'.join(decoded_preds), '\n') Output: </s><s>Hugging Face released<pad> models using unsupervised learning and the Open Parallel Corpus (OPUS) Low resource languages are those with less training data than more commonly used languages like English. The project was undertaken by the University of Helsinki and global partners to gather and open-source a wide variety of language data sets.</s> </s><s>Hugging Face closed a $15 million series A funding round led by Lux Capital. Salesforce chief scientist Richard Socher and OpenAI CTO Greg Brockman participated. Betaworks and A.Capital also took part in the round, with participation from Betaworkorks.</s><pad><pad><pad><pad><pad><pad><pad> ``` In the first generated sentence, the `<pad>` token after released is kept, and would be worse if there was a higher length difference between the decoder inputs. ## Expected behavior If one gives as `decoder_input_ids` same length sentences, for instance, we remove the a from "Hugging Face closed a": ``` decoder_text = ['</s><s>Hugging Face released', '</s><s>Hugging Face closed'] inputs = tokenizer(inputs_text, return_tensors = 'pt', padding=True) inputs_decoder = tokenizer(decoder_text, return_tensors = 'pt', padding=True, add_special_tokens = False) generated_tokens = model.generate(inputs['input_ids'].to(model.device), attention_mask=inputs['attention_mask'].to(model.device), decoder_input_ids = inputs_decoder['input_ids'].to(model.device)) decoded_preds = tokenizer.batch_decode(generated_tokens, skip_special_tokens=False) print('\n'.join(decoded_preds), '\n') Output: </s><s>Hugging Face released its first step into machine translation this week. Researchers trained models using unsupervised learning and the Open Parallel Corpus (OPUS) OPUS is a project undertaken by the University of Helsinki and global partners. Low resource languages are those with less training data than more commonly used languages like English.</s> </s><s>Hugging Face closed a $15 million series A funding round led by Lux Capital. Salesforce chief scientist Richard Socher and OpenAI CTO Greg Brockman were involved. Betaworks and A.Capital also participated in the round, which will take place in New York City.</s><pad><pad><pad><pad><pad> ``` Then the output does not include any `<pad>` token in between the generated text. It would be nice if this would be the same for different length in the decoder_input_ids (ie. ignore the `<pad>` tokens).
03-02-2021 11:08:27
03-02-2021 11:08:27
The issue was discussed at https://github.com/huggingface/transformers/pull/10552, and this is expected behaviour. If one wants to generate using `decoder_input_ids` with different lengths, the suggested approach is to use `padding_side` as `left` on the tokenizer. For a custom solution that preserves the same results as if there was no padding, see the code changes in https://github.com/huggingface/transformers/pull/10552.<|||||>I hope it is okay that I ask a follow-up question here, because I feel it might be highly related (also to [this](https://github.com/huggingface/transformers/pull/10552#issuecomment-801246652) comment about non-identical outputs with & without padding). If not, I would be happy about any pointers on where to ask this question. I am using a BERT encoder-decoder model as per the example [here](https://huggingface.co/transformers/model_doc/encoderdecoder.html#encoderdecodermodel) and want to condition the decoder output on some known sequence. The known sequence is of varying length, tokenized and `left` padded, exactly as described above. To exactly match the training regime (which never contains padding before the first token), I was wondering whether there is a way to pass an additional padding mask for the `decoder_input_ids`, e.g. a `decoder_attention_mask` of size `(batch x max_known_seq_len)`?<|||||>Hi @l-salewski As I mentioned in the PR comment, it would be better to group sequences of similar length into a batch and then pass them to generate, or otherwise call `generate` for each example. Also, it's possible to pass `decoder_attention_mask`; its shape needs to be `[batch, current_seq_length]`<|||||>Hi @patil-suraj, thank you for getting back to me! Grouping by similar length seems to be an approximate solution, but e.g. my test set is relatively small and exhibits sentences of many different lengths. Running each example individually, on the other hand, may be quite slow. Using `decoder_attention_mask` is what I did in the end. 
I had to overwrite the `prepare_inputs_for_generation` function, such that if a `decoder_attention_mask` is passed, it overrules the generated one: ```python def prepare_inputs_for_generation( self, input_ids, past=None, attention_mask=None, use_cache=None, encoder_outputs=None, **kwargs ): decoder_inputs = self.decoder.prepare_inputs_for_generation(input_ids, past=past) decoder_attention_mask = decoder_inputs["attention_mask"] if "attention_mask" in decoder_inputs else None # if we have been passed an attention mask, use it to overrule the generated one if "decoder_attention_mask" in kwargs: initial_decoder_attention_mask = kwargs.pop("decoder_attention_mask") initial_sequence_length = initial_decoder_attention_mask.size(1) decoder_attention_mask[:,:initial_sequence_length] = initial_decoder_attention_mask input_dict = { "attention_mask": attention_mask, "decoder_attention_mask": decoder_attention_mask, "decoder_input_ids": decoder_inputs["input_ids"], "encoder_outputs": encoder_outputs, "past_key_values": decoder_inputs["past_key_values"], "use_cache": use_cache, **kwargs, } return input_dict ``` Furthermore, I overwrite `_expand_inputs_for_generation` from the beam search such that the `decoder_attention_mask` is also expanded for each of the beams: ```python @staticmethod def _expand_inputs_for_generation( input_ids: torch.LongTensor, expand_size: int = 1, is_encoder_decoder: bool = False, attention_mask: torch.LongTensor = None, encoder_outputs: ModelOutput = None, **model_kwargs, ) -> Tuple[torch.LongTensor, Dict[str, Any]]: expanded_return_idx = ( torch.arange(input_ids.shape[0]).view(-1, 1).repeat(1, expand_size).view(-1).to(input_ids.device) ) input_ids = input_ids.index_select(0, expanded_return_idx) if "token_type_ids" in model_kwargs: token_type_ids = model_kwargs["token_type_ids"] model_kwargs["token_type_ids"] = token_type_ids # this has been added to the original method if "decoder_attention_mask" in model_kwargs: model_kwargs["decoder_attention_mask"] = model_kwargs["decoder_attention_mask"].index_select(0, expanded_return_idx) if attention_mask is not None: model_kwargs["attention_mask"] = attention_mask.index_select(0, expanded_return_idx) if is_encoder_decoder: assert encoder_outputs is not None encoder_outputs["last_hidden_state"] = encoder_outputs.last_hidden_state.index_select( 0, expanded_return_idx.to(encoder_outputs.last_hidden_state.device) ) model_kwargs["encoder_outputs"] = encoder_outputs return input_ids, model_kwargs ``` To exactly match the training setting, I tokenize the known inputs, prepend a `[CLS]` token to each and extend the `decoder_attention_mask` with a 1 column to the left such that it also attends to the `[CLS]` token: ```python # Combine a CLS Column with the forced input kwargs["decoder_input_ids"] = torch.cat([ torch.zeros_like(forced_input.input_ids)[:,:1]+self.tokenizer.cls_token_id, forced_input.input_ids], dim=1) # Attend to the CLS column, but not the PAD tokens of the forced input kwargs["decoder_attention_mask"] = torch.cat([ torch.ones_like(forced_input.attention_mask)[:,:1], forced_input.attention_mask], dim=1) ``` Then `**kwargs` is passed to `generate`. Overall this approach works flawlessly as it reduces the overhead (e.g. no organizing batches or looping over batches) from the user perspective. I just tokenize the known inputs separately add the `[CLS]` token as described above and that is it.
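For completeness, a minimal sketch of the `padding_side="left"` approach recommended at the top of this thread, reusing the model and decoder prompts from the issue body (the source texts are shortened placeholders):

```python
from transformers import AutoModelForSeq2SeqLM, AutoTokenizer

model_name = "sshleifer/distilbart-cnn-6-6"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForSeq2SeqLM.from_pretrained(model_name)

inputs_text = ["Hugging Face is taking its first step into machine translation this week ...",
               "Hugging Face has announced the close of a $15 million series A funding round ..."]
decoder_text = ["</s><s>Hugging Face released", "</s><s>Hugging Face closed a"]

inputs = tokenizer(inputs_text, return_tensors="pt", padding=True, truncation=True)

# Pad the decoder prompts on the left so generation continues right after each prompt
# instead of after trailing <pad> tokens.
tokenizer.padding_side = "left"
inputs_decoder = tokenizer(decoder_text, return_tensors="pt", padding=True, add_special_tokens=False)
tokenizer.padding_side = "right"  # restore the default for encoder-side tokenization

generated = model.generate(
    inputs["input_ids"],
    attention_mask=inputs["attention_mask"],
    decoder_input_ids=inputs_decoder["input_ids"],
)
print(tokenizer.batch_decode(generated, skip_special_tokens=True))
```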
transformers
10,477
closed
Facing NCCL error on Multi-GPU training (on single machine) using run_glue.py script
## Environment info <!-- You can run the command `transformers-cli env` and copy-and-paste its output below. Don't forget to fill out the missing fields in that output! --> - `transformers` version: 4.3.2 - Platform: Linux-4.19.0-14-cloud-amd64-x86_64-with-debian-buster-sid - Python version: 3.7.9 - PyTorch version (GPU?): 1.7.0 (True) - Tensorflow version (GPU?): 2.4.1 (True) - Using GPU in script?: 4xTesla T4 (GCP) - Using distributed or parallel set-up in script?: torch.distributed.launch ### Who can help <!-- Your issue will be replied to more quickly if you can figure out the right person to tag with @ If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**. Please tag fewer than 3 people. Models: - albert, bert, xlm: @LysandreJik - blenderbot, bart, marian, pegasus, encoderdecoder, t5: @patrickvonplaten, @patil-suraj - longformer, reformer, transfoxl, xlnet: @patrickvonplaten - fsmt: @stas00 - funnel: @sgugger - gpt2: @patrickvonplaten, @LysandreJik - rag: @patrickvonplaten, @lhoestq - tensorflow: @jplu Library: - benchmarks: @patrickvonplaten - deepspeed: @stas00 - ray/raytune: @richardliaw, @amogkam - text generation: @patrickvonplaten - tokenizers: @LysandreJik - trainer: @sgugger - pipelines: @LysandreJik Documentation: @sgugger HF projects: - nlp datasets: [different repo](https://github.com/huggingface/nlp) - rust tokenizers: [different repo](https://github.com/huggingface/tokenizers) Examples: - maintained examples (not research project or legacy): @sgugger, @patil-suraj - research_projects/bert-loses-patience: @JetRunner - research_projects/distillation: @VictorSanh --> ## Information Model I am using (Bert, XLNet ...): DistilRoberta The problem arises when using: * [*] the official example scripts: (give details below) * [ ] my own modified scripts: (give details below) The tasks I am working on is: * [ ] an official GLUE/SQUaD task: (give the name) * [*] my own task or dataset: (give details below) Regression task with a single output, using BertForSequenceClassification ## To reproduce Steps to reproduce the behavior: 1.python -m torch.distributed.launch --nproc_per_node 4 /home/run_glue.py --train_file /home/data/train.csv --validation_file /home/data/dev.csv --test_file /home/data/test.csv --model_name_or_path distilroberta-base --output_dir /home/model --num_train_epochs 5 --per_device_train_batch_size 1 --per_device_eval_batch_size 16 --do_train --do_eval --fp16 --gradient_accumulation_steps 2 --do_predict --logging_steps 100 --evaluation_strategy steps --save_steps 100 --overwrite_output_dir File "/opt/conda/lib/python3.7/site-packages/torch/distributed/distributed_c10d.py", line 442, in init_process_group 732793de051f:1895:1925 [1] NCCL INFO transport/shm.cc:101 -> 2 732793de051f:1895:1925 [1] NCCL INFO transport.cc:30 -> 2 732793de051f:1895:1925 [1] NCCL INFO transport.cc:49 -> 2 732793de051f:1895:1925 [1] NCCL INFO init.cc:766 -> 2 732793de051f:1895:1925 [1] NCCL INFO init.cc:840 -> 2 732793de051f:1895:1925 [1] NCCL INFO group.cc:73 -> 2 [Async thread] barrier() File "/opt/conda/lib/python3.7/site-packages/torch/distributed/distributed_c10d.py", line 1947, in barrier Traceback (most recent call last): File "/home/run_text_classification.py", line 480, in <module> work = _default_pg.barrier() RuntimeError: NCCL error in: /opt/conda/conda-bld/pytorch_1603729138878/work/torch/lib/c10d/ProcessGroupNCCL.cpp:784, unhandled system error, NCCL version 2.7.8 main() File "/home/run_text_classification.py", line 163, in 
main model_args, data_args, training_args = parser.parse_args_into_dataclasses() File "/opt/conda/lib/python3.7/site-packages/transformers/hf_argparser.py", line 180, in parse_args_into_dataclasses obj = dtype(**inputs) File "<string>", line 60, in __init__ File "/opt/conda/lib/python3.7/site-packages/transformers/training_args.py", line 478, in __post_init__ if is_torch_available() and self.device.type != "cuda" and self.fp16: File "/opt/conda/lib/python3.7/site-packages/transformers/file_utils.py", line 1346, in wrapper return func(*args, **kwargs) File "/opt/conda/lib/python3.7/site-packages/transformers/training_args.py", line 583, in device return self._setup_devices 732793de051f:1897:1927 [3] include/shm.h:28 NCCL WARN Call to posix_fallocate failed : No space left on device File "/opt/conda/lib/python3.7/site-packages/transformers/file_utils.py", line 1336, in __get__ 732793de051f:1897:1927 [3] NCCL INFO include/shm.h:41 -> 2 732793de051f:1897:1927 [3] include/shm.h:48 NCCL WARN Error while creating shared memory segment nccl-shm-recv-b3d54cebe4167a34-0-2-3 (size 9637888) <!-- If you have code snippets, error messages, stack traces please provide them here as well. Important! Use code tags to correctly format your code. See https://help.github.com/en/github/writing-on-github/creating-and-highlighting-code-blocks#syntax-highlighting Do not use screenshots, as they are hard to read and (more importantly) don't allow others to copy-and-paste your code.--> ## Expected behavior Expected model training to proceed smoothly using 4xGPU. When I run the said script with nproc_per_node=1(or even 2), it runs smoothly but setting it as 4 gives strange errors. After updating to 1.9.0 I face a different error: RuntimeError: NCCL error in: /pytorch/torch/lib/c10d/ProcessGroupNCCL.cpp:832, unhandled system error, NCCL version 2.7.8 ncclSystemError: System call (socket, malloc, munmap, etc) failed. <!-- A clear and concise description of what you would expect to happen. -->
03-02-2021 10:22:31
03-02-2021 10:22:31
cc @sgugger <|||||>This seems like a problem in your environment install for NCCL: if your script can run on two GPUs there is nothing in the code to change to make it run on four GPUs so this is not a bug in the training script or transformers. I have never seen that particular NCCL error so I'm afraid I can't really help debugging it.<|||||>Thanks for the quick reply. Yeah, it’s strange that it works on 2 GPUs but not on 4. Will check again and let you know.<|||||>@sgugger just to clarify: The system has 4 GPUs. It’s only the nproc_per_node argument I’m changing (from 1 to 2,4,etc.). Just want to ensure I’ve not misunderstood the cause of the error. Right?<|||||>Yes I understood that. The PyTorch launcher is going to spawn a different number of processes depending on the number your pass, which in turn will use the number of GPUs specified (and the others are idle).<|||||>Thanks. Just wanted to confirm that. Will try reinstalling the environment and update if I find the solution. <|||||>Hi @sgugger, Good news, the issue seems to have been an environment issue. Thanks for the instant help<|||||>I still meet the same problem, could you please tell me how to solve it?
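Not a fix from this thread, just a generic, purely diagnostic check suggested by the "No space left on device" shared-memory message in the traceback:

```python
import shutil

# NCCL was failing to create a shared-memory segment
# ("posix_fallocate failed : No space left on device"), so a cheap first step is to
# check how much space /dev/shm actually has on the failing machine.
total, used, free = shutil.disk_usage("/dev/shm")
print(f"/dev/shm: total {total / 2**20:.0f} MiB, free {free / 2**20:.0f} MiB")
```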
transformers
10,476
closed
The size of CoNLL-2003 is not consistent with the official release.
## Environment info <!-- You can run the command `transformers-cli env` and copy-and-paste its output below. Don't forget to fill out the missing fields in that output! --> - `transformers` version: - Platform: - - Python version: - - PyTorch version (GPU?): - - Tensorflow version (GPU?): - - Using GPU in script?: - - Using distributed or parallel set-up in script?: - ### Who can help <!-- Your issue will be replied to more quickly if you can figure out the right person to tag with @ If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**. Please tag fewer than 3 people. Models: - albert, bert, xlm: @LysandreJik - blenderbot, bart, marian, pegasus, encoderdecoder, t5: @patrickvonplaten, @patil-suraj - longformer, reformer, transfoxl, xlnet: @patrickvonplaten - fsmt: @stas00 - funnel: @sgugger - gpt2: @patrickvonplaten, @LysandreJik - rag: @patrickvonplaten, @lhoestq - tensorflow: @jplu Library: - benchmarks: @patrickvonplaten - deepspeed: @stas00 - ray/raytune: @richardliaw, @amogkam - text generation: @patrickvonplaten - tokenizers: @LysandreJik - trainer: @sgugger - pipelines: @LysandreJik Documentation: @sgugger HF projects: - nlp datasets: [different repo](https://github.com/huggingface/nlp) - rust tokenizers: [different repo](https://github.com/huggingface/tokenizers) Examples: - maintained examples (not research project or legacy): @sgugger, @patil-suraj - research_projects/bert-loses-patience: @JetRunner - research_projects/distillation: @VictorSanh -->@sgugger, @patil-suraj ## Information Model I am using (Bert, XLNet ...): The problem arises when using: * [ ] the official example scripts: (give details below) The tasks I am working on is: * [ ] an official GLUE/SQUaD task: CoNLL-2003 ## To reproduce Steps to reproduce the behavior: 1. just run the code <!-- If you have code snippets, error messages, stack traces please provide them here as well. Important! Use code tags to correctly format your code. See https://help.github.com/en/github/writing-on-github/creating-and-highlighting-code-blocks#syntax-highlighting Do not use screenshots, as they are hard to read and (more importantly) don't allow others to copy-and-paste your code.--> ## Expected behavior <!-- A clear and concise description of what you would expect to happen. --> The official release of CoNLL-2003 is: # train 14987 # dev 3466 # test 3684 While CoNLL-2003 in datasets is: # train 14041 # dev 3250 # test 3453 Wish for your reply~ Thank you!
03-02-2021 09:03:19
03-02-2021 09:03:19
Hi there, The training scripts load the dataset using the `datasets` library, so this issue is related to the `datasets` lib; you can open it in that repo: https://github.com/huggingface/datasets.<|||||>Thank you!
transformers
10,475
closed
Fixes compatibility bug when using grouped beam search and constrained decoding together
Fixes #10415 ## Who can review? @patrickvonplaten
03-02-2021 06:53:24
03-02-2021 06:53:24
Thanks a lot!
transformers
10,474
closed
Continue pre-training using the example code "run_mlm.py"
## Environment info - `transformers` version: 4.3.3 - Platform: Linux-5.4.0-65-generic-x86_64-with-glibc2.10 (Ubuntu 18.04) - Python version: 3.8.8 - PyTorch version (GPU?): 1.7.1 (True) - Tensorflow version (GPU?): not installed (NA) - Using GPU in script?: Yes - Using distributed or parallel set-up in script?: No ### Who can help - tokenizers: @LysandreJik - maintained examples (not research project or legacy): @sgugger, @patil-suraj ## Information Model I am using (Bert, XLNet ...): albert-xlarge-v2 The problem arises when using: transformers/examples/language-modeling/run_mlm.py (https://github.com/huggingface/transformers/blob/master/examples/language-modeling/run_mlm.py) The tasks I am working on is: continue pre-training on a specific dataset (glue/sst2) ## To reproduce Steps to reproduce the behavior: ```= CUDA_VISIBLE_DEVICES=0 python run_mlm.py \ --model_name_or_path albert-xlarge-v2 \ --dataset_name "glue" \ --dataset_config_name "sst2" \ --do_train \ --do_eval \ --output_dir ckpt/pre_training/glue ``` ### Error message ```= Traceback (most recent call last): File "src/run_mlm.py", line 447, in <module> main() File "src/run_mlm.py", line 353, in main tokenized_datasets = datasets.map( File "/home/robotlab/anaconda3/envs/thesis/lib/python3.8/site-packages/datasets/dataset_dict.py", line 369, in map { File "/home/robotlab/anaconda3/envs/thesis/lib/python3.8/site-packages/datasets/dataset_dict.py", line 370, in <dictcomp> k: dataset.map( File "/home/robotlab/anaconda3/envs/thesis/lib/python3.8/site-packages/datasets/arrow_dataset.py", line 1120, in map update_data = does_function_return_dict(test_inputs, test_indices) File "/home/robotlab/anaconda3/envs/thesis/lib/python3.8/site-packages/datasets/arrow_dataset.py", line 1091, in does_function_return_dict function(*fn_args, indices, **fn_kwargs) if with_indices else function(*fn_args, **fn_kwargs) File "src/run_mlm.py", line 351, in tokenize_function return tokenizer(examples[text_column_name], return_special_tokens_mask=True) File "/home/robotlab/anaconda3/envs/thesis/lib/python3.8/site-packages/transformers/tokenization_utils_base.py", line 2286, in __call__ assert isinstance(text, str) or ( AssertionError: text input must of type `str` (single example), `List[str]` (batch or single pretokenized example) or `List[List[str]]` (batch of pretokenized exampl es). ``` I skipped the message that relates to the model, as there's no problem with the loading of the model. Dataset is successfully downloaded. There's a warning above the assertion error ```= 03/02/2021 14:03:44 - WARNING - datasets.builder - Reusing dataset glue (/home/robotlab/.cache/huggingface/datasets/glue/sst2/1.0.0/7c99657241149a24692c402a5c3f34d 4c9f1df5ac2e4c3759fadea38f6cb29c4) ``` ## Expected behavior When I continue pre-training on other datasets such as 'ag_news', 'dbpedia_14', 'imdb', there's no error and everything is fine. There are also no "dataset_config_name" in these three datasets. However, there's no error when I use `dataset_name=wikitext` and `dataset_config_name=wikitext-2-raw-v1` in `run_mlm.py` Judging from the error message above, it seems like the data format of the SST-2 is wrong so that the datasets can not handled the data correctly. Any suggestion is highly appreciated!
03-02-2021 06:49:39
03-02-2021 06:49:39
Hi there, the `run_mlm` script expects an unlabeled text dataset, i.e. a dataset with a `text` column in it; if there is no `text` column, it assumes that the first column is the text column. Here `sst2` is a classification dataset and the first column is `idx`. So you could change the script and directly hardcode the `text_column_name`, which is `sentence` for `sst2`. https://github.com/huggingface/transformers/blob/b013842244df7be96b8cc841491bd1e35e475e36/examples/language-modeling/run_mlm.py#L304 And also pass the `--line_by_line` argument.<|||||>Thanks for your prompt and accurate reply. This does help! But I notice that the `--line_by_line` argument is not needed as long as the `text_column_name` is hard-coded in that line. Anyways, thanks!
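A sketch of that hardcoding, around the linked line of `run_mlm.py` (`sentence` is specific to glue/sst2):

```python
# Original line (roughly): text_column_name = "text" if "text" in column_names else column_names[0]
# Hardcoded for glue/sst2, whose text column is called "sentence":
text_column_name = "sentence"
```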
transformers
10,473
closed
Issue with converting my own BERT TF2 checkpoint to PyTorch and loading the converted PyTorch checkpoint for training
Hi, I'm using huggingface to train my own BERT model. I have checkpoints which are in TensorFlow 2 and I have converted them successfully to PyTorch. The checkpoint conversion script has created a **_.bin_** file which has the following subdirectories and pkl file. ![dfa9d353b4e25888726ebae99c19c048aaa9d4e4](https://user-images.githubusercontent.com/12840374/109604765-167cb500-7b4a-11eb-9e52-013762d02608.png) I wanted to gain some more information on the following points: - Am I doing the conversion of the TensorFlow checkpoint to a PyTorch checkpoint correctly? I can see that the pre-trained bert-base-uncased model hosted on the Hugging Face model repository has just a single [pytorch_model.bin](https://huggingface.co/bert-base-uncased/blob/main/pytorch_model.bin) file. - How can I load this custom BERT-PyTorch model's checkpoint using the [modeling_bert.py](https://github.com/huggingface/transformers/blob/master/src/transformers/models/bert/modeling_bert.py) script? Any help or suggestions will be helpful. Thanks
03-02-2021 05:56:39
03-02-2021 05:56:39
You've converted them successfully in PyTorch, so you should be left with a `pytorch_model.bin` alongside a `config.json`, is that right? I recommend reading the [quicktour entry related to using models](https://huggingface.co/transformers/quicktour.html#using-the-model) in order to get a sense of how one can load a model in PyTorch.<|||||>Hi @LysandreJik, Thank you for replying on this thread. > You've converted them successfully in PyTorch, so you should be left with a pytorch_model.bin alongside a config.json, is that right? Yes, I have pytorch_model.bin file but the point is: it is not single serialised file like [this](https://huggingface.co/bert-base-uncased/blob/main/pytorch_model.bin) but I can unzip that exported bin file which I have generated using [convert_bert_original_tf2_checkpoint_to_pytorch.py](https://github.com/huggingface/transformers/blob/master/src/transformers/models/bert/convert_bert_original_tf2_checkpoint_to_pytorch.py) script and I can see the three main components which I posted in snapshot earlier. Ideally, I should have single serialised pytorch_model.bin, right? I have checked the same with original BERT model and tried to convert it from TF2 to PyTorch there also I have bert-model.bin file but I can unzip that bert-model.bin and it also have three components present inside the bin. See the snapshots. Snapshot 1: My exported bin file ![Screenshot from 2021-03-08 10-18-44](https://user-images.githubusercontent.com/12840374/110276112-7d8de400-7ff8-11eb-956e-a72e13cf68b0.png) Snapshot 2: I can unzip this bin file ![Screenshot from 2021-03-08 10-18-52](https://user-images.githubusercontent.com/12840374/110276111-7cf54d80-7ff8-11eb-9a0e-f004106b3a5c.png) Snapshot 3: I can see the three components ![Screenshot from 2021-03-08 10-18-58](https://user-images.githubusercontent.com/12840374/110276106-7bc42080-7ff8-11eb-90a1-cf64f0ca58d6.png) Is this expected or Am I missing something? Basically my goal here is to make sure that I'm converting the my own BERT model which is trained using TF2 to PyTorch in bug-free manner. Thanks you again for sharing the documentation link I will try that and post my further questions. Thanks,<|||||>That is the way PyTorch works. See the documentation for [torch.save](https://pytorch.org/docs/stable/generated/torch.save.html). It states: > The 1.6 release of PyTorch switched torch.save to use a new zipfile-based file format. torch.load still retains the ability to load files in the old format. If for any reason you want torch.save to use the old format, pass the kwarg _use_new_zipfile_serialization=False.<|||||>Thank you so much @LysandreJik for your help now I have one single .bin file. Closing this issue. I will reopen if needed.
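As a hedged sketch of the loading step (the directory name is a placeholder; it should contain the converted `pytorch_model.bin` next to a matching `config.json`):

```python
import torch
from transformers import BertForPreTraining

# Load the converted checkpoint through the library (directory name is a placeholder).
model = BertForPreTraining.from_pretrained("./converted-bert")

# The zip-based .bin file can also be inspected directly with torch.load.
state_dict = torch.load("./converted-bert/pytorch_model.bin", map_location="cpu")
print(list(state_dict.keys())[:5])
```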
transformers
10,472
closed
Needed a feature to convert the Facebook mBART many-to-many model to ONNX Runtime in order to reduce the inference time
We need a feature to convert the Facebook mBART many-to-many model to ONNX Runtime in order to reduce the inference time. The current many-to-many model takes about 9 seconds per translation, and we also need to quantize it further to reduce the inference time.
03-02-2021 05:05:30
03-02-2021 05:05:30
This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread. Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored.
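No recipe is given in the issue itself; as a hedged illustration only, exporting a single forward pass (not the autoregressive `generate` loop, which is the hard part) with plain `torch.onnx.export` could look roughly like the following; the checkpoint name, shapes, and opset are placeholders:

```python
import torch
from transformers import MBartForConditionalGeneration

# Placeholder checkpoint; substitute the many-to-many checkpoint actually in use.
model = MBartForConditionalGeneration.from_pretrained("facebook/mbart-large-cc25")
model.config.return_dict = False   # export a plain tuple output
model.config.use_cache = False     # drop past_key_values outputs for a simpler graph
model.eval()

input_ids = torch.ones(1, 8, dtype=torch.long)
attention_mask = torch.ones(1, 8, dtype=torch.long)
decoder_input_ids = torch.ones(1, 4, dtype=torch.long)

torch.onnx.export(
    model,
    (input_ids, attention_mask, decoder_input_ids),
    "mbart_forward.onnx",
    input_names=["input_ids", "attention_mask", "decoder_input_ids"],
    output_names=["logits"],
    opset_version=11,
    dynamic_axes={
        "input_ids": {0: "batch", 1: "src_len"},
        "attention_mask": {0: "batch", 1: "src_len"},
        "decoder_input_ids": {0: "batch", 1: "tgt_len"},
    },
)
```

Whether the full beam-search generation can be exported this way is a separate question not settled in this issue.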
transformers
10,471
closed
Question: change location of cache datasets
This is not a bug, it's a question. I don't have too much space allowed in my home dir and I have to store most stuff elsewhere. I noticed that the library caches a lot of stuff under ~/.cache/huggingface (in particular under datasets). How do I change the location of the cache dir? I have `export TRANSFORMERS_CACHE=...` in my run script but that doesn't seem to be it. I started using some large datasets and whatever caching happens because of these datasets is causing a disk quota issue. I hope it's possible to change the location of the cache dir. Thanks!
03-02-2021 04:55:09
03-02-2021 04:55:09
One more thing (maybe this is a bug): why is the datasets stuff cached in ~/.cache/huggingface/datasets when I have both the env var set up AND I specify a cache dir with --cache_dir when I run my scripts? The pretrained models go in the --cache_dir specified but the datasets don't. This is confusing (and maybe buggy?)<|||||>I wonder if this has something to do with newer versions of the library. I remember that at the last upgrade I had to install the package `datasets` separately, which I didn't have to do before. Is there a way to specify a cache for `datasets` that is separate from the model one? <|||||>I'll have to look into this more, but my educated guess is that there is probably a way to send the cache_dir to the datasets api. In any case, I would like to learn how to change the cache dir globally, if at all possible. Thanks!<|||||>I'm guessing HF_HOME is what I'm looking for... <|||||>Yes, that worked. Closing.
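A small sketch of what ended up working here (paths are placeholders): `HF_HOME` must be set before the libraries are imported, and `datasets` also accepts a per-call `cache_dir`:

```python
import os

# Point the global Hugging Face cache somewhere with enough space, before any HF import.
os.environ["HF_HOME"] = "/big_disk/hf_cache"  # placeholder path

from datasets import load_dataset

# The datasets cache can also be pointed elsewhere per call, independently of the model cache.
ds = load_dataset("imdb", cache_dir="/big_disk/hf_cache/datasets")
```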
transformers
10,470
closed
(Sorry, I cannot visit the forum) BORT question: is pre-training using knowledge distillation better than pre-training only for downstream tasks?
The paper shows the MLM accuracy comparison. ![image](https://user-images.githubusercontent.com/4702353/109590213-87ba6900-7b46-11eb-9fea-f046d23abeb2.png) What is the **downstream tasks'** performance comparison between pre-training using knowledge distillation and pre-training only in the end? Thank you very much.
03-02-2021 03:01:23
03-02-2021 03:01:23
https://github.com/alexa/bort/issues/10
transformers
10,469
closed
The described function in docs was not implemented in source code
https://github.com/huggingface/transformers/blob/0c2325198fd638e5d1f0c7dcbdd8bf7f14c0ff7d/src/transformers/file_utils.py#L1512 The `update` method of `ModelOutput` is described in the docs here: https://huggingface.co/transformers/main_classes/output.html?highlight=modeloutput#transformers.file_utils.ModelOutput.update but the source code raises an exception when one tries to use it, which disagrees with the docs.
03-02-2021 01:54:03
03-02-2021 01:54:03
Ah, indeed! cc @sgugger
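A minimal reproduction of the described mismatch (using `BaseModelOutput` simply as a convenient `ModelOutput` subclass):

```python
import torch
from transformers.modeling_outputs import BaseModelOutput

out = BaseModelOutput(last_hidden_state=torch.zeros(1, 1, 4))
# Documented as a dict-like update, but, as reported above, this raises an exception in the source.
out.update(hidden_states=None)
```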
transformers
10,468
closed
run_ner.py training data file format
# 🚀 Feature request Would it be possible to support text files as input files for the run_ner.py script, similar to what was supported in examples/legacy/token-classification/run_ner.py? This, I believe, is the CoNLL-2003 format. The current version of the run_ner.py script in /examples/token-classification appears to support only the json and csv formats for training input. Could someone also tell me where I can find example files for the aforementioned json and csv formats? Looking at some example files would help me format my labeled data in the required format if supporting text files is not possible. ## Motivation I have some training data formatted as text files with one token and label per line and would like to use that as input to the run_ner.py script if possible.
03-01-2021 23:45:47
03-01-2021 23:45:47
Hi @pranav-s We only support `json` and `csv` files in the examples. To convert your text files to the `json` format, you could use this `datasets` script as a reference, which converts the `conll` text files to the `datasets` format. https://github.com/huggingface/datasets/blob/master/datasets/conll2003/conll2003.py <|||||>Hi @pranav-s, I found this command in the `test_examples.py` file, ``` run_ner.py --model_name_or_path bert-base-uncased --train_file tests/fixtures/tests_samples/conll/sample.json --validation_file tests/fixtures/tests_samples/conll/sample.json --output_dir {tmp_dir} --overwrite_output_dir --do_train --do_eval --warmup_steps=2 --learning_rate=2e-4 --per_device_train_batch_size=2 --per_device_eval_batch_size=2 --num_train_epochs=2 ``` At the path `tests/fixtures/tests_samples/conll/sample.json` you can find an example file which has the required input format<|||||>Thank you @bhadreshpsavani and @patil-suraj . This was helpful.<|||||>Thank you @bhadreshpsavani . You saved my life!
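Purely as an illustration, a line of such a json file could look like the sketch below; the column names are an assumption on my part; the sample file referenced above is the authoritative format:

```python
import json

# Hypothetical record layout: one JSON object per line, with a token column and a matching tag column.
example = {
    "tokens": ["John", "lives", "in", "Berlin", "."],  # assumed column name
    "ner_tags": ["B-PER", "O", "O", "B-LOC", "O"],     # assumed column name
}
with open("train.json", "w") as f:
    f.write(json.dumps(example) + "\n")  # one object per line
```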
transformers
10,467
closed
modeling files loaded when they aren't being asked to be loaded
``` File "/gpfsdswork/projects/rech/ajs/uiz98zp/stas/transformers-master/src/transformers/trainer_seq2seq.py", line 22, in <module> from .trainer import Trainer File "/gpfsdswork/projects/rech/ajs/uiz98zp/stas/transformers-master/src/transformers/trainer.py", line 65, in <module> from .trainer import Trainer File "/gpfsdswork/projects/rech/ajs/uiz98zp/stas/transformers-master/src/transformers/trainer.py", line 65, in <module> from .models.auto.modeling_auto import MODEL_FOR_QUESTION_ANSWERING_MAPPING File "/gpfsdswork/projects/rech/ajs/uiz98zp/stas/transformers-master/src/transformers/models/auto/modeling_auto.py", line 214, in <module> from .models.auto.modeling_auto import MODEL_FOR_QUESTION_ANSWERING_MAPPING File "/gpfsdswork/projects/rech/ajs/uiz98zp/stas/transformers-master/src/transformers/models/auto/modeling_auto.py", line 214, in <module> from ..tapas.modeling_tapas import ( from ..tapas.modeling_tapas import ( File "/gpfsdswork/projects/rech/ajs/uiz98zp/stas/transformers-master/src/transformers/models/tapas/modeling_tapas.py", line 51, in <module> File "/gpfsdswork/projects/rech/ajs/uiz98zp/stas/transformers-master/src/transformers/models/tapas/modeling_tapas.py", line 51, in <module> from torch_scatter import scatter File "/gpfslocalsup/pub/anaconda-py3/2020.02/envs/pytorch-gpu-1.7.1/lib/python3.7/site-packages/torch_scatter/__init__.py", line 12, in <module> ``` Do we have to load `models/tapas/modeling_tapas.py` when we aren't using `tapas`? There is some unrelated problem that gets triggered by this model loading `torch_scatter`, which is a 3rd party module. I figured out the solution to the problem it triggered (binary incompatible `torch_scatter`), but still it might be a good idea not to pre-load model files until they are needed. @LysandreJik, @sgugger
03-01-2021 23:21:16
03-01-2021 23:21:16
I'm not sure there is a way to workaround `Trainer` loading all models since it needs the `MODEL_FOR_QUESTION_ANSWERING_MAPPING` to get the names of the labels (those models have different label names that are not `labels`). The auto models then loads every model in the lib, which we can't work around without rewriting the module and its logic from scratch.<|||||>If I look at the code it's just: ``` MODEL_FOR_QUESTION_ANSWERING_MAPPING = OrderedDict( [ # Model for Question Answering mapping (ConvBertConfig, ConvBertForQuestionAnswering), (LEDConfig, LEDForQuestionAnswering), (DistilBertConfig, DistilBertForQuestionAnswering), (AlbertConfig, AlbertForQuestionAnswering), [...] (IBertConfig, IBertForQuestionAnswering), ] ``` why does it need to load all models in order to access this dict? I understand that it is doing it now, but why can't it be in a separate file which doesn't load all models? e.g. in `trainer.py`: ``` from .models.auto.modeling_auto_maps_only import MODEL_FOR_QUESTION_ANSWERING_MAPPING ``` and inside `models.auto.modeling_auto_maps` ``` from .models.auto.modeling_auto_maps_only import MODEL_FOR_QUESTION_ANSWERING_MAPPING ``` Ah, right, because it doesn't know what those symbols are without loading all these models... duh! Ok, so what you're saying it could have been done in a different way that doesn't require loading models - e.g. class names as strings but it'd require a big redesign.<|||||>OK, so since we are auto-generating code anyway during `make style`, we could autogenerate a simple dict with: ``` QA_model_classes = ( 'transformers.models.albert.modeling_albert.AlbertForQuestionAnswering', 'transformers.models.led.modeling_albert.LEDForQuestionAnswering', ... ) ``` and then in trainer: ``` default_label_names = ( ["start_positions", "end_positions"] if self.model.__class__ in QA_model_classes else ["labels"] ) ``` and there is no longer a need to load all models.<|||||>That would probably work yes.
transformers
10,466
closed
Fix the bug in constructing the all_hidden_states of DeBERTa v2
# What does this PR do? <!-- Congratulations! You've made it this far! You're not quite done yet though. Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution. Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change. Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost. --> <!-- Remove if not applicable --> This PR fixes the bug in constructing the `all_hidden_states` of DeBERTa v2. In the master branch, it keeps appending `hidden_states` to `all_hidden_states` which comes from the inputs and is never updated in each layer. This would make `all_hidden_states` a list of duplicated elements. Instead, it should append `output_states` (which is the counterpart of the `hidden_states` in DeBERTa v1) to `all_hidden_states` because it is the real output of each layer. ## Before submitting - [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case). - [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section? - [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link to it if that's the case. - [ ] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/master/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/master/docs#writing-source-documentation). - [ ] Did you write any new necessary tests? ## Who can review? Anyone in the community is free to review the PR once the tests have passed. Feel free to tag members/contributors which may be interested in your PR. <!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @ If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**. Please tag fewer than 3 people. Models: - albert, bert, xlm: @LysandreJik - blenderbot, bart, marian, pegasus, encoderdecoder, t5: @patrickvonplaten, @patil-suraj - longformer, reformer, transfoxl, xlnet: @patrickvonplaten - fsmt: @stas00 - funnel: @sgugger - gpt2: @patrickvonplaten, @LysandreJik - rag: @patrickvonplaten, @lhoestq - tensorflow: @jplu Library: - benchmarks: @patrickvonplaten - deepspeed: @stas00 - ray/raytune: @richardliaw, @amogkam - text generation: @patrickvonplaten - tokenizers: @n1t0, @LysandreJik - trainer: @sgugger - pipelines: @LysandreJik Documentation: @sgugger HF projects: - nlp datasets: [different repo](https://github.com/huggingface/nlp) - rust tokenizers: [different repo](https://github.com/huggingface/tokenizers) Examples: - maintained examples (not research project or legacy): @sgugger, @patil-suraj - research_projects/bert-loses-patience: @JetRunner - research_projects/distillation: @VictorSanh -->
03-01-2021 22:43:10
03-01-2021 22:43:10
transformers
10,465
closed
Tflite conversion error for TFMT5ForConditionalGeneration model
## Environment info <!-- You can run the command `transformers-cli env` and copy-and-paste its output below. Don't forget to fill out the missing fields in that output! --> - `transformers` version: Version: 4.4.0.dev0 - Platform: Not sure what this means - Python version: 3 - PyTorch version (GPU?): - Tensorflow version (GPU?):'2.4.1' - Using GPU in script?:No - Using distributed or parallel set-up in script?:No ### Who can help <!-- Your issue will be replied to more quickly if you can figure out the right person to tag with @ If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**. Please tag fewer than 3 people. --> t5: @patrickvonplaten tensorflow: @jplu ## Information Model I am using (Bert, XLNet ...): TFMT5ForConditionalGeneration (it is already trained) The problem arises when using: * [ ] the official example scripts: (give details below) * [ ] my own modified scripts: (give details below) converting the model to tflite ``` config = AutoConfig.from_pretrained( model_path # output_attentions=True, # output_hidden_states=True, # use_cache=True, # return_dict=True ) tokenizer = AutoTokenizer.from_pretrained( model_path ) model = TFMT5ForConditionalGeneration.from_pretrained( model_path, from_pt=True, config=config ) conc_func = model.serving.get_concrete_function() converter = tf.lite.TFLiteConverter.from_concrete_functions([conc_func]) converter.optimizations = [tf.lite.Optimize.DEFAULT] converter.target_spec.supported_ops = [tf.lite.OpsSet.TFLITE_BUILTINS, tf.lite.OpsSet.SELECT_TF_OPS] tflite_model = converter.convert() print("Converted successfully") ``` ## To reproduce Steps to reproduce the behavior: 1.run the above script on TFMT5ForConditionalGeneration model 2. will get the follow error ``` Exception: /home/python_user/.pyenv/versions/3.7.2/lib/python3.7/site-packages/tensorflow/python/ops/array_ops.py:1047:0: error: 'tf.Reshape' op requires 'shape' to have at most one dynamic dimension, but got multiple dynamic dimensions at indices 0 and 3 /home/python_user/.pyenv/versions/3.7.2/lib/python3.7/site-packages/tensorflow/python/util/dispatch.py:201:0: note: called from /home/python_user/.pyenv/versions/3.7.2/lib/python3.7/site-packages/transformers/models/t5/modeling_tf_t5.py:703:0: note: called from /home/python_user/.pyenv/versions/3.7.2/lib/python3.7/site-packages/tensorflow/python/autograph/operators/control_flow.py:1218:0: note: called from /home/python_user/.pyenv/versions/3.7.2/lib/python3.7/site-packages/tensorflow/python/autograph/operators/control_flow.py:1165:0: note: called from /home/python_user/.pyenv/versions/3.7.2/lib/python3.7/site-packages/transformers/models/t5/modeling_tf_t5.py:702:0: note: called from /home/python_user/.pyenv/versions/3.7.2/lib/python3.7/site-packages/tensorflow/python/autograph/operators/control_flow.py:1218:0: note: called from /home/python_user/.pyenv/versions/3.7.2/lib/python3.7/site-packages/tensorflow/python/autograph/operators/control_flow.py:1165:0: note: called from /home/python_user/.pyenv/versions/3.7.2/lib/python3.7/site-packages/transformers/models/t5/modeling_tf_t5.py:694:0: note: called from /home/python_user/.pyenv/versions/3.7.2/lib/python3.7/site-packages/tensorflow/python/keras/engine/base_layer.py:1012:0: note: called from ``` When I inspected : t5/modeling_tf_t5.py:703: i got Line number: code ``` 702: if num_dims_encoder_attention_mask == 2: 703: encoder_extended_attention_mask = inputs["encoder_attention_mask"][:, None, None, :] ``` <!-- If you have code snippets, 
error messages, stack traces please provide them here as well. Important! Use code tags to correctly format your code. See https://help.github.com/en/github/writing-on-github/creating-and-highlighting-code-blocks#syntax-highlighting Do not use screenshots, as they are hard to read and (more importantly) don't allow others to copy-and-paste your code.--> ## Expected behavior convert the model to tflite successfully <!-- A clear and concise description of what you would expect to happen. -->
03-01-2021 22:36:16
03-01-2021 22:36:16
Hello! Currently, most of the TF models are not compliant with TFLite. Sorry for the inconvenience. If you want to help on this, you can propose a PR to fix it; that would be more than welcome!<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread. Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored.
transformers
10,464
closed
[Deepspeed] Allow HF optimizer and scheduler to be passed to deepspeed
# Use HF optimizer and/or scheduler unless specified in deepspeed config If HF is already creating an optimizer and LR scheduler, we should not try to match that config/implementation in a ds_config.json; instead we pass it to deepspeed.initialize(..., lr_scheduler=hf_lr_scheduler) * [x] This PR checks if ds_config has an optimizer or scheduler; if it does not, it calls `create_optimizer` or `create_scheduler` (after splitting it) to create an optimizer or scheduler. The HF optimizer and scheduler are then passed to deepspeed.initialize(). DeepSpeed can handle any optimizer and scheduler if these are passed directly to deepspeed.initialize() as an object. Due to the chicken-n-egg init problem, the valid combinations are:

| Combos | HF Scheduler | DS Scheduler |
|--------------|--------------|--------------|
| HF Optimizer | Yes | Yes |
| DS Optimizer | No | Yes |

but if `cpu_offload` is used all bets are off - we can only use DS optim/sched. ---------- added by @stas00 below: Added: * [x] make init_deepspeed support config dict, besides the config file - this makes the testing much easier * [x] add tests for this PR using this new feature of passing the dict * [x] various small clean ups * [x] update the docs * [x] check for `cpu_offload` - add test * [x] recode the config overrides to have one true source of values * [x] tweak one not working test **blocking event: waiting for a new release 0.3.13 from DeepSpeed.** @sgugger
03-01-2021 18:53:23
03-01-2021 18:53:23
OK, 2 tests added and no, this doesn't work w/o neither the default optimizer nor the default scheduler. e.g. if you comment out the `del ` lines in the tests then we are using DS optim/sched and things are back to normal. I didn't have time to investigate as it's late, so just sharing the outputs at the moment - will look closer tomorrow. I think both are issues on the DeepSpeed side, but I could be wrong. Also note that the normal CI doesn't run these tests, so green doesn't say anything about those. ``` pytest -sv examples/tests/deepspeed/test_deepspeed.py -k test_hf_native_scheduler examples/tests/deepspeed/test_deepspeed.py:103: _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ src/transformers/trainer.py:917: in train model, optimizer, lr_scheduler = init_deepspeed(self, num_training_steps=max_steps) src/transformers/integrations.py:351: in init_deepspeed trainer.create_scheduler(num_training_steps=num_training_steps) src/transformers/trainer.py:685: in create_scheduler self.lr_scheduler = get_scheduler( src/transformers/optimization.py:266: in get_scheduler return schedule_func(optimizer, num_warmup_steps=num_warmup_steps, num_training_steps=num_training_steps) src/transformers/optimization.py:98: in get_linear_schedule_with_warmup return LambdaLR(optimizer, lr_lambda, last_epoch) _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ self = <torch.optim.lr_scheduler.LambdaLR object at 0x7fd86fb0a160>, optimizer = None lr_lambda = <function get_linear_schedule_with_warmup.<locals>.lr_lambda at 0x7fd86fafc160>, last_epoch = -1, verbose = False def __init__(self, optimizer, lr_lambda, last_epoch=-1, verbose=False): self.optimizer = optimizer if not isinstance(lr_lambda, list) and not isinstance(lr_lambda, tuple): > self.lr_lambdas = [lr_lambda] * len(optimizer.param_groups) E AttributeError: 'NoneType' object has no attribute 'param_groups' /home/stas/anaconda3/envs/main-38/lib/python3.8/site-packages/torch/optim/lr_scheduler.py:197: AttributeError ```` ``` pytest -sv examples/tests/deepspeed/test_deepspeed.py -k test_hf_native_optimizer examples/tests/deepspeed/test_deepspeed.py:91: _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ src/transformers/trainer.py:917: in train model, optimizer, lr_scheduler = init_deepspeed(self, num_training_steps=max_steps) src/transformers/integrations.py:384: in init_deepspeed model, optimizer, _, lr_scheduler = deepspeed.initialize( ../../github/00optimize/DeepSpeed/deepspeed/__init__.py:110: in initialize engine = DeepSpeedEngine(args=args, ../../github/00optimize/DeepSpeed/deepspeed/runtime/engine.py:174: in __init__ self._configure_optimizer(optimizer, model_parameters) ../../github/00optimize/DeepSpeed/deepspeed/runtime/engine.py:570: in _configure_optimizer self.optimizer = self._configure_zero_optimizer(basic_optimizer) ../../github/00optimize/DeepSpeed/deepspeed/runtime/engine.py:691: in _configure_zero_optimizer optimizer = FP16_DeepSpeedZeroOptimizer( ../../github/00optimize/DeepSpeed/deepspeed/runtime/zero/stage2.py:239: in __init__ flatten_dense_tensors_aligned( ../../github/00optimize/DeepSpeed/deepspeed/runtime/zero/stage2.py:74: in flatten_dense_tensors_aligned return _flatten_dense_tensors(padded_tensor_list) _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ tensors = [] def _flatten_dense_tensors(tensors): """Flatten dense tensors into a contiguous 1D buffer. Assume tensors are of same dense type. Since inputs are dense, the resulting tensor will be a concatenated 1D buffer. Element-wise operation on this buffer will be equivalent to operating individually. Args: tensors (Iterable[Tensor]): dense tensors to flatten. Returns: A contiguous 1D buffer containing input tensors. """ if len(tensors) == 1: return tensors[0].contiguous().view(-1) > flat = torch.cat([t.contiguous().view(-1) for t in tensors], dim=0) E RuntimeError: There were no tensor arguments to this function (e.g., you passed an empty list of Tensors), but no fallback function is registered for schema aten::_cat. This usually means that this function requires a non-empty list of Tensors. Available functions are [CPU, CUDA, QuantizedCPU, BackendSelect, Named, AutogradOther, AutogradCPU, AutogradCUDA, AutogradXLA, AutogradNestedTensor, UNKNOWN_TENSOR_TYPE_ID, AutogradPrivateUse1, AutogradPrivateUse2, AutogradPrivateUse3, Tracer, Autocast, Batched, VmapMode]. E E CPU: registered at /pytorch/build/aten/src/ATen/RegisterCPU.cpp:5925 [kernel] E CUDA: registered at /pytorch/build/aten/src/ATen/RegisterCUDA.cpp:7100 [kernel] E QuantizedCPU: registered at /pytorch/build/aten/src/ATen/RegisterQuantizedCPU.cpp:641 [kernel] E BackendSelect: fallthrough registered at /pytorch/aten/src/ATen/core/BackendSelectFallbackKernel.cpp:3 [backend fallback] E Named: registered at /pytorch/aten/src/ATen/core/NamedRegistrations.cpp:7 [backend fallback] E AutogradOther: registered at /pytorch/torch/csrc/autograd/generated/VariableType_2.cpp:9161 [autograd kernel] E AutogradCPU: registered at /pytorch/torch/csrc/autograd/generated/VariableType_2.cpp:9161 [autograd kernel] E AutogradCUDA: registered at /pytorch/torch/csrc/autograd/generated/VariableType_2.cpp:9161 [autograd kernel] E AutogradXLA: registered at /pytorch/torch/csrc/autograd/generated/VariableType_2.cpp:9161 [autograd kernel] E AutogradNestedTensor: registered at /pytorch/torch/csrc/autograd/generated/VariableType_2.cpp:9161 [autograd kernel] E UNKNOWN_TENSOR_TYPE_ID: registered at /pytorch/torch/csrc/autograd/generated/VariableType_2.cpp:9161 [autograd kernel] E AutogradPrivateUse1: registered at /pytorch/torch/csrc/autograd/generated/VariableType_2.cpp:9161 [autograd kernel] E AutogradPrivateUse2: registered at /pytorch/torch/csrc/autograd/generated/VariableType_2.cpp:9161 [autograd kernel] E AutogradPrivateUse3: registered at /pytorch/torch/csrc/autograd/generated/VariableType_2.cpp:9161 [autograd kernel] E Tracer: registered at /pytorch/torch/csrc/autograd/generated/TraceType_2.cpp:10551 [kernel] E Autocast: registered at /pytorch/aten/src/ATen/autocast_mode.cpp:254 [kernel] E Batched: registered at /pytorch/aten/src/ATen/BatchingRegistrations.cpp:1016 [backend fallback] E VmapMode: fallthrough registered at /pytorch/aten/src/ATen/VmapModeRegistrations.cpp:33 [backend fallback] /home/stas/anaconda3/envs/main-38/lib/python3.8/site-packages/torch/_utils.py:259: RuntimeError ```<|||||>OK, so I had a look at the first failing test. These 2 can't be separated the way it was done, since the optimizer is needed to init the scheduler. But we don't have it yet if it's Deepspeed that creates the optimizer. So we have a chicken-n-egg problem here. Unless deepspeed provides a new API to handle that. 
So probably at the moment we can only support one of: 1, 2, 3 and not 4. 1. DS scheduler + DS optimizer 2. HF scheduler + HF optimizer 3. DS scheduler + HF optimizer 4. HF scheduler + DS optimizer Note I added a new test for the combo 2 and renamed all tests to match, so now we have: ``` pytest -sv examples/tests/deepspeed/test_deepspeed.py -k test_hf_scheduler_hf_optimizer pytest -sv examples/tests/deepspeed/test_deepspeed.py -k test_ds_scheduler_hf_optimizer pytest -sv examples/tests/deepspeed/test_deepspeed.py -k test_hf_scheduler_ds_optimizer ```<|||||>This deepspeed PR https://github.com/microsoft/DeepSpeed/pull/827 fixes the issues. The following tests would pass. DS scheduler + DS optimizer HF scheduler + HF optimizer DS scheduler + HF optimizer Shall we put a check in HF to disallow the case HF scheduler + DS optimizer?<|||||>I tested with your https://github.com/microsoft/DeepSpeed/pull/827 PR tree and indeed ``` pytest -sv examples/tests/deepspeed/test_deepspeed.py -k test_hf_scheduler_hf_optimizer pytest -sv examples/tests/deepspeed/test_deepspeed.py -k test_ds_scheduler_hf_optimizer ``` now pass. awesome! > Shall we put a check in HF to disallow the case HF scheduler + DS optimizer? Correct! Please let me know if you will be taking care of it or you'd rather me finish this up. Either way works. Also will need to update the docs to reflect this new more flexible reality. I can take care of that. We will need to wait for a new release from your side to commit this PR and add a requirement for that new version. I already have this check setup in another PR waiting for this new release: https://github.com/huggingface/transformers/pull/9624 There is another PR by @jeffra today that also needs a new release first before we can update our tree.<|||||>Great. Can you please add the check for the case HF scheduler + DS optimizer? Since you are updating the docs, I think it makes more sense for you to do it. I will work with @jeffra to push the deepspeed PRs into the new release. Thanks.<|||||>@cli99, I made further changes to your original code 1. as @jeffra suggested we can't use HF optimizer with offload enabled - so coded to defend against that 2. I realized my original design was flawed and that the user could end up with a mismatch between cl args and the ds config, so I recoded the optimizer/scheduler config sections to override ds config with cl args where needed. Please let me know if I broke anything in your original plan. I have also updated the docs extensively. They look a bit scary at the moment and will need a rework down the road. My main goal here is to prevent from user getting subtle errors, so setting command line arguments to override DS config. Hope it makes sense. <|||||>@sgugger, I made more doc updates - if you get a chance please kindly skim over them? Thank you! I think we will merge this on Monday when deepspeed==0.3.13 is planned to be released.<|||||>@stas00 I'll let you merge when you are ready (since you followed this more closely than me). It looks good to merge to me :-) Thanks for your contribution @cli99!<|||||>I'm on top of this - we are waiting for a new DeepSpeed release required by this PR. Thank you, @sgugger
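To make the valid-combination table above concrete, a hedged sketch of the HF-optimizer + HF-scheduler case (the exact config keys follow the DeepSpeed docs; passing a dict rather than a json path relies on the config-dict support added in this PR):

```python
from transformers import TrainingArguments

# No "optimizer" or "scheduler" section: the Trainer-created ones get handed to deepspeed.initialize().
ds_config = {
    "fp16": {"enabled": True},
    "zero_optimization": {"stage": 2, "cpu_offload": False},  # cpu_offload would force DS optim/sched
}

args = TrainingArguments(output_dir="out", deepspeed=ds_config, learning_rate=3e-5)
```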
transformers
10,463
closed
Script for squad_v2 for custom data not working
**Environment info** transformers version: 4.3.3 Platform: Linux-4.15.0-91-generic-x86_64-with-debian-buster-sid Python version: 3.7.6 Using GPU in script?: True Using distributed or parallel set-up in script?: True **Who can help** @gowtham1997 @patil-suraj I am running the script from docs to train and evaluate squad_v2 data with custom arguments. My dataset is as per the structure of squad format with every keys and values properly. [colabJson2.zip](https://github.com/huggingface/transformers/files/6063805/colabJson2.zip) ``` python run_qa.py \ --model_name_or_path bert-base-uncased \ --version_2_with_negative\ --train_file=/content/colabJson.json \ --validation_file=/content/dev-v2.0.json \ --do_train \ --do_eval \ --per_device_train_batch_size 12 \ --learning_rate 3e-5 \ --num_train_epochs 2 \ --max_seq_length 384 \ --doc_stride 128 \ --output_dir /content/debug_squad_16 ``` Output of the script i am getting error as ``` Traceback (most recent call last): File "run_qa.py", line 507, in <module> main() File "run_qa.py", line 230, in main datasets = load_dataset(extension, data_files=data_files, field="data") File "/usr/local/lib/python3.7/dist-packages/datasets/load.py", line 750, in load_dataset ds = builder_instance.as_dataset(split=split, ignore_verifications=ignore_verifications, in_memory=keep_in_memory) File "/usr/local/lib/python3.7/dist-packages/datasets/builder.py", line 740, in as_dataset map_tuple=True, File "/usr/local/lib/python3.7/dist-packages/datasets/utils/py_utils.py", line 234, in map_nested _single_map_nested((function, obj, types, None, True)) for obj in tqdm(iterable, disable=disable_tqdm) File "/usr/local/lib/python3.7/dist-packages/datasets/utils/py_utils.py", line 234, in <listcomp> _single_map_nested((function, obj, types, None, True)) for obj in tqdm(iterable, disable=disable_tqdm) File "/usr/local/lib/python3.7/dist-packages/datasets/utils/py_utils.py", line 172, in _single_map_nested return function(data_struct) File "/usr/local/lib/python3.7/dist-packages/datasets/builder.py", line 757, in _build_single_dataset in_memory=in_memory, File "/usr/local/lib/python3.7/dist-packages/datasets/builder.py", line 831, in _as_dataset return Dataset(**dataset_kwargs) File "/usr/local/lib/python3.7/dist-packages/datasets/arrow_dataset.py", line 250, in __init__ self.info.features, self.info.features.type, inferred_features, inferred_features.type ValueError: External features info don't match the dataset: Got {'title': Value(dtype='string', id=None), 'paragraphs': [{'qas': [{'question': Value(dtype='string', id=None), 'id': Value(dtype='string', id=None), 'answers': [{'text': Value(dtype='string', id=None), 'answer_start': Value(dtype='int64', id=None)}], 'is_impossible': Value(dtype='bool', id=None), 'plausible_answers': [{'text': Value(dtype='string', id=None), 'answer_start': Value(dtype='int64', id=None)}]}], 'context': Value(dtype='string', id=None)}]} with type struct<paragraphs: list<item: struct<context: string, qas: list<item: struct<answers: list<item: struct<answer_start: int64, text: string>>, id: string, is_impossible: bool, plausible_answers: list<item: struct<answer_start: int64, text: string>>, question: string>>>>, title: string> but expected something like {'title': Value(dtype='string', id=None), 'paragraphs': [{'context': Value(dtype='string', id=None), 'qas': [{'answers': [{'text': Value(dtype='string', id=None), 'answer_start': Value(dtype='int64', id=None)}], 'question': Value(dtype='string', id=None), 'id': Value(dtype='string', id=None)}]}]} 
with type struct<paragraphs: list<item: struct<context: string, qas: list<item: struct<answers: list<item: struct<answer_start: int64, text: string>>, id: string, question: string>>>>, title: string> ``` In my dataset I don't have any field as **plausible_answers**, **is_impossible** .. how to match it with expected format when it is already in that format
03-01-2021 18:53:00
03-01-2021 18:53:00
hi @BatMrE could you try the new `run_qa.py` example script and let us know if you still face the issue. You can find the new script here https://github.com/huggingface/transformers/tree/master/examples/question-answering<|||||>> hi @BatMrE > > could you try the new `run_qa.py` example script and let us know if you still face the issue. You can find the new script here > https://github.com/huggingface/transformers/tree/master/examples/question-answering ``` [INFO|tokenization_utils_base.py:1786] 2021-03-02 18:56:09,448 >> loading file https://huggingface.co/bert-base-uncased/resolve/main/vocab.txt from cache at /root/.cache/huggingface/transformers/45c3f7a79a80e1cf0a489e5c62b43f173c15db47864303a55d623bb3c96f72a5.d789d64ebfe299b0e416afc4a169632f903f693095b4629a7ea271d5a0cf2c99 [INFO|tokenization_utils_base.py:1786] 2021-03-02 18:56:09,448 >> loading file https://huggingface.co/bert-base-uncased/resolve/main/tokenizer.json from cache at /root/.cache/huggingface/transformers/534479488c54aeaf9c3406f647aa2ec13648c06771ffe269edabebd4c412da1d.7f2721073f19841be16f41b0a70b600ca6b880c8f3df6f3535cbc704371bdfa4 [INFO|modeling_utils.py:1027] 2021-03-02 18:56:09,548 >> loading weights file https://huggingface.co/bert-base-uncased/resolve/main/pytorch_model.bin from cache at /root/.cache/huggingface/transformers/a8041bf617d7f94ea26d15e218abd04afc2004805632abc0ed2066aa16d50d04.faf6ea826ae9c5867d12b22257f9877e6b8367890837bd60f7c54a29633f7f2f [WARNING|modeling_utils.py:1135] 2021-03-02 18:56:14,139 >> Some weights of the model checkpoint at bert-base-uncased were not used when initializing BertForQuestionAnswering: ['cls.predictions.bias', 'cls.predictions.transform.dense.weight', 'cls.predictions.transform.dense.bias', 'cls.predictions.decoder.weight', 'cls.seq_relationship.weight', 'cls.seq_relationship.bias', 'cls.predictions.transform.LayerNorm.weight', 'cls.predictions.transform.LayerNorm.bias'] - This IS expected if you are initializing BertForQuestionAnswering from the checkpoint of a model trained on another task or with another architecture (e.g. initializing a BertForSequenceClassification model from a BertForPreTraining model). - This IS NOT expected if you are initializing BertForQuestionAnswering from the checkpoint of a model that you expect to be exactly identical (initializing a BertForSequenceClassification model from a BertForSequenceClassification model). [WARNING|modeling_utils.py:1146] 2021-03-02 18:56:14,139 >> Some weights of BertForQuestionAnswering were not initialized from the model checkpoint at bert-base-uncased and are newly initialized: ['qa_outputs.weight', 'qa_outputs.bias'] You should probably TRAIN this model on a down-stream task to be able to use it for predictions and inference. 1% 3/227 [00:01<02:25, 1.54ba/s]thread '<unnamed>' panicked at 'assertion failed: stride < max_len', /__w/tokenizers/tokenizers/tokenizers/src/tokenizer/encoding.rs:322:9 note: run with `RUST_BACKTRACE=1` environment variable to display a backtrace ```<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread. Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored.
transformers
10,462
closed
Add I-BERT to README
This PR adds I-BERT to the readme, it was forgotten.
03-01-2021 17:12:22
03-01-2021 17:12:22
transformers
10,461
closed
pass correct head mask to cross-attention layer
# What does this PR do? The decoder layers of the MBart, Blenderbot, and Pegasus models pass the `layer_head_mask` head mask to the cross-attention layer, which is incorrect. This PR passes the correct `encoder_layer_head_mask` to the cross-attention layer.
03-01-2021 15:59:46
03-01-2021 15:59:46
I see, then this looks a bit confusing to me, other seq2seq models (BART, Marian, BlendSmall) pass `encoder_layer_head_mask` to cross attention https://github.com/huggingface/transformers/blob/9248e27037ce7f7c9359802e6fdf819a1e227a18/src/transformers/models/bart/modeling_bart.py#L419 https://github.com/huggingface/transformers/blob/9248e27037ce7f7c9359802e6fdf819a1e227a18/src/transformers/models/marian/modeling_marian.py#L438 https://github.com/huggingface/transformers/blob/9248e27037ce7f7c9359802e6fdf819a1e227a18/src/transformers/models/blenderbot_small/modeling_blenderbot_small.py#L421<|||||>> s looks a bit confusing to me, other seq2seq models (BART, Marian, BlendSmall) pass `encoder_layer_head_mask` to cross attention I would argue that they are wrong then :D (cc @stancld - maybe do you have more insight here?)<|||||>@patil-suraj @patrickvonplaten - What a coincidence... I was discussing this topic with my PhD supervisor this morning and I think the most proper way how to handle head masking for cross attention is to introduce a separate cross-attention head mask tensor to disentangle the cross-attention effect from the self-attention one?<|||||>I guess this would actually be the cleanest option! At the moment the cross-attention layer the exact same shape as the decoder self-attention layer -> so I think we can use the same mask for now and maybe improve it later with a `cross-attention head mask`. Using the `encoder_layer_head_mask` however could lead to errors IMO - so this option is just wrong to me<|||||>I can create a new issue and can have a look at this cross-attention `head_mask` at the weekend :)<|||||>That would be great @stancld! I will close this PR.
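To make the shapes in this discussion concrete (note that the separate cross-attention mask below reflects the proposed disentanglement, not an argument that exists at this point):

```python
import torch

num_decoder_layers, num_heads = 12, 16  # illustrative sizes

decoder_head_mask = torch.ones(num_decoder_layers, num_heads)     # decoder self-attention heads
cross_attn_head_mask = torch.ones(num_decoder_layers, num_heads)  # decoder cross-attention heads (proposed)
cross_attn_head_mask[0, : num_heads // 2] = 0.0                   # e.g. mask half the heads in layer 0
```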
transformers
10,460
closed
How to Reduce the inference time of Facebook/many to many model?
The Facebook many-to-many translation model takes about 9 s on CPU to translate. How can the inference time on CPU be reduced? It would be helpful if a method were suggested.
03-01-2021 14:32:50
03-01-2021 14:32:50
This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread. Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored.
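One approach that often helps on CPU (a general suggestion, not an official recommendation) is dynamic int8 quantization of the linear layers, combined with a smaller beam size. A rough sketch, assuming the `facebook/m2m100_418M` checkpoint and an English-to-French direction; all generation settings are illustrative.

```python
import torch
from transformers import M2M100ForConditionalGeneration, M2M100Tokenizer

tokenizer = M2M100Tokenizer.from_pretrained("facebook/m2m100_418M")
model = M2M100ForConditionalGeneration.from_pretrained("facebook/m2m100_418M")
model.eval()

# Quantize all nn.Linear weights to int8 for faster CPU matrix multiplications.
quantized = torch.quantization.quantize_dynamic(model, {torch.nn.Linear}, dtype=torch.qint8)

tokenizer.src_lang = "en"
inputs = tokenizer("Life is like a box of chocolates.", return_tensors="pt")
with torch.no_grad():
    generated = quantized.generate(
        **inputs,
        forced_bos_token_id=tokenizer.get_lang_id("fr"),
        num_beams=2,   # a smaller beam noticeably speeds up decoding
        max_length=64,
    )
print(tokenizer.batch_decode(generated, skip_special_tokens=True))
```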
transformers
10,459
closed
[Wav2Vec2] Improve SpecAugment function by converting numpy based function to pytorch based function
# 🚀 Feature request As can be seen here: https://github.com/huggingface/transformers/blob/11655fafdd42eb56ad94e09ecd84d4dc2d1041ae/src/transformers/models/wav2vec2/modeling_wav2vec2.py#L47, the function `_compute_mask_indices` (responsible for spec augment) of Wav2Vec2 is written in numpy which means that the function is not GPU compatible. The function could simply be rewritten in PyTorch, which should make training on GPU faster. This "Good First Issue" is about converting `_compute_mask_indices` to PyTorch while keeping the same functionality. ## Your contribution I'm happy to guide the contributor through the PR!
03-01-2021 14:28:44
03-01-2021 14:28:44
@patrickvonplaten You mean it function definition will become def _compute_mask_indices( shape: Tuple[int, int], mask_prob: float, mask_length: int, attention_mask: Optional[torch.Tensor] = None, min_masks: int = 0, ) -> torch.tensor: ?? Of course internal working also needs to be changed according. Just trying to learn what need to be done<|||||>essentially all `np....` operations should be replaced by `torch....` operations :-) <|||||>``` def _compute_mask_indices( shape: Tuple[int, int], mask_prob: float, mask_length: int, attention_mask: Optional[torch.Tensor] = None, min_masks: int = 0, ) -> torch.tensor: bsz, all_sz = shape #mask = np.full((bsz, all_sz), False) mask = torch.Tensor(bsz, all_sz).fill_(False) all_num_mask = int( # add a random number for probabilistic rounding mask_prob * all_sz / float(mask_length) + torch.rand() ) all_num_mask = max(min_masks, all_num_mask) mask_idcs = [] padding_mask = attention_mask.ne(1) if attention_mask is not None else None for i in range(bsz): if padding_mask is not None: sz = all_sz - padding_mask[i].long().sum().item() num_mask = int( # add a random number for probabilistic rounding mask_prob * sz / float(mask_length) + torch.rand() ) num_mask = max(min_masks, num_mask) else: sz = all_sz num_mask = all_num_mask lengths = torch.Tensor(num_mask).fill_(mask_length) if sum(lengths) == 0: lengths[0] = min(mask_length, sz - 1) min_len = min(lengths) if sz - min_len <= num_mask: min_len = sz - num_mask - 1 #mask_idc = np.random.choice(sz - min_len, num_mask, replace=False) mask_idc = torch.randperm(sz - min_len)[:num_mask] #mask_idc = np.asarray([mask_idc[j] + offset for j in range(len(mask_idc)) for offset in range(lengths[j])]) mask_idc = torch.from_numpy(np.asarray([mask_idc[j] + offset for j in range(len(mask_idc)) for offset in range(lengths[j])])) #mask_idcs.append(np.unique(mask_idc[mask_idc < sz])) mask_idcs.append(torch.unique(mask_idc[mask_idc < sz])) min_len = min([len(m) for m in mask_idcs]) for i, mask_idc in enumerate(mask_idcs): if len(mask_idc) > min_len: #mask_idc = np.random.choice(mask_idc, min_len, replace=False) mask_idc = torch.randperm(mask_idc)[:min_len] mask[i, mask_idc] = True return mask ```<|||||>Can you guide me if I am doing anything wrong above ? <|||||>This looks nice already - do you want to open a PR for this? It would be ideal to replace the `for ....` loops with pure PyTorch vector operations as well<|||||>yes sure. Let me try. I will send PR tomorrow. Bit late to work on it for now.<|||||>@patrickvonplaten Can you please help pass all checks. I have ran "make style", "make fixup" and did respective changes. But make quality is failing on master itself. Do you have any suggestion for code improvement ? <|||||>@patrickvonplaten Hello. I am Master 1student in japan. I want to fine tuning with local small data. Should I fix this your code??? https://github.com/huggingface/transformers/blob/master/examples/research_projects/wav2vec2/run_asr.py Sorry for low level question. p.s. Is it okay to ask such a question here?<|||||>> @patrickvonplaten > Hello. I am Master 1student in japan. > I want to fine tuning with local small data. > Should I fix this your code??? > https://github.com/huggingface/transformers/blob/master/examples/research_projects/wav2vec2/run_asr.py > > Sorry for low level question. > > p.s. Is it okay to ask such a question here? I'll open-source an explicit notebook on how to fine-tune Wav2Vec2 in a ~1,2 weeks under the hugging face blog. 
If you haven't seen it by then, please ping me here again.<|||||>Hi @patrickvonplaten. I'd like to contribute to this issue. Is it still open?<|||||>Hey @amalad, Yes the PR is still open since the other PR mentioned here seems to be stuck - feel free to open a new one :-)<|||||>Since Pytorch has no equivalent function to `np.random.choice`, there're only workarounds. Some [discussions](https://github.com/pytorch/pytorch/issues/16897) about this issue. Anyway here's my take. ```python3 import torch from typing import Optional, Tuple import random def _compute_specaugment_mask_indices( shape: Tuple[int, int], mask_prob: float, mask_length: int, attention_mask: Optional[torch.Tensor] = None, min_masks: int = 0, ) -> torch.Tensor: """ Computes random mask spans for a given shape Args: shape: the the shape for which to compute masks. should be of size 2 where first element is batch size and 2nd is timesteps attention_mask: optional padding mask of the same size as shape, which will prevent masking padded elements mask_prob: probability for each token to be chosen as start of the span to be masked. this will be multiplied by number of timesteps divided by length of mask span to mask approximately this percentage of all elements. however due to overlaps, the actual number will be smaller (unless no_overlap is True) mask_length: size of the mask min_masks: minimum number of masked spans Adapted from `fairseq's data_utils.py <https://github.com/pytorch/fairseq/blob/e0788f7007a8473a76db573985031f3c94201e79/fairseq/data/data_utils.py#L376>`__. """ bsz, all_sz = shape mask = torch.full((bsz, all_sz), False) all_num_mask = int( # add a random number for probabilistic rounding mask_prob * all_sz / float(mask_length) + random.random() ) all_num_mask = max(min_masks, all_num_mask) if all_num_mask == 0: return mask mask_idcs = [] padding_mask = attention_mask.ne(1) if attention_mask is not None else None for i in range(bsz): if padding_mask is not None: sz = all_sz - padding_mask[i].long().sum().item() num_mask = int( # add a random number for probabilistic rounding mask_prob * sz / float(mask_length) + random.random() ) num_mask = max(min_masks, num_mask) else: sz = all_sz num_mask = all_num_mask lengths = torch.full([num_mask], mask_length) if sum(lengths) == 0: lengths[0] = min(mask_length, sz - 1) min_len = int(min(lengths)) if not lengths.nelement() == 0 else 0 if sz - min_len <= num_mask: min_len = sz - num_mask - 1 #mask_idc = torch.randint(sz - min_len, [num_mask]) # TODO: should sample w/o replacement mask_idc = random.sample(range(sz - min_len), num_mask) mask_idc = torch.Tensor([mask_idc[j] + offset for j in range(num_mask) for offset in range(lengths[j])]) mask_idcs.append(torch.unique(mask_idc[mask_idc < sz])) min_len = min([len(m) for m in mask_idcs]) for i, mask_idc in enumerate(mask_idcs): if len(mask_idc) > min_len: mask_idc = mask_idc.gather(dim=0, index=torch.multinomial(mask_idc, min_len, replacement=False)) mask[i, mask_idc.long()] = True return mask ```<|||||>Hey @chutaklee, This looks nice! Could you maybe open a PR for it and measure the speed improvement when training on a GPU? :-)<|||||>Hi, @patrickvonplaten can I still work on this issue? or Is @chutaklee working on it? Actually, I have been working with fairseq's wav2vec and would like to give this issue a go.<|||||>@01-vyom Hi Vyom, I'm stuck at vectorizing the mask generation. So feel free to try it.<|||||>Feel free to give it a go @01-vyom :-)<|||||>@patrickvonplaten Made a PR.
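On the `np.random.choice` point raised above: sampling without replacement can be emulated in pure PyTorch with `torch.randperm` or `torch.multinomial` over uniform weights. A tiny sketch with illustrative sizes:

```python
import torch

sz, num_mask = 50, 10

# NumPy: np.random.choice(sz, num_mask, replace=False)
# Torch equivalent: shuffle the index range and keep the first num_mask entries.
mask_idc = torch.randperm(sz)[:num_mask]

# Alternative that also works with weights and stays vectorized:
weights = torch.ones(sz)
mask_idc_alt = torch.multinomial(weights, num_mask, replacement=False)

print(mask_idc, mask_idc_alt)
```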
transformers
10,458
closed
Work towards fixing Flax tests
I still get failures that seem due to missing HTTP artifacts, e.g. E OSError: Can't load weights for 'roberta-large'. Make sure that: E E - 'roberta-large' is a correct model identifier listed on 'https://huggingface.co/models' E E - or 'roberta-large' is the correct path to a directory containing a file named pytorch_model.bin. This is the command I used to run tests: RUN_SLOW=true python -m pytest -k flax -n 8 --dist=loadfile -rA -s --make-reports=tests_flax ./tests/ | tee tests_output.txt # What does this PR do? This PR works towards fixing the existing Flax tests (no new tests are added).
03-01-2021 10:52:40
03-01-2021 10:52:40
/cc @patrickvonplaten for review<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread. Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored.<|||||>Think this issue has been resolved :-)<|||||>Yes :)
transformers
10,457
closed
[Wav2Vec2] Remove unused config
# What does this PR do? Removes an unused config variable.
03-01-2021 09:15:00
03-01-2021 09:15:00
transformers
10,456
closed
How to Improve inference time of facebook/mbart many to many model?
If we try to run a translation service with Facebook's mBART many-to-many model on CPU, it takes about 9 seconds to translate. How do we reduce the inference time further?
03-01-2021 08:21:18
03-01-2021 08:21:18
Hello, thanks for opening an issue! We try to keep the GitHub issues for bugs/feature requests. Could you ask your question on the [forum](https://discuss.huggingface.co) instead? Thanks!
transformers
10,455
closed
[Wav2Vec2FeatureExtractor] smal fixes
# What does this PR do? This PR adds the `return_attention_mask` argument to the `Wav2Vec2FeatureExtractor.__call__` method.
03-01-2021 07:51:14
03-01-2021 07:51:14
transformers
10,454
closed
How can I make the logging utils log to a file as well?
# 🚀 Feature request I want to make the logging utils log to a file in addition to the console. But I can't find an API that lets me add a handler to the logging utils.
03-01-2021 07:37:48
03-01-2021 07:37:48
I just realized that I can enable propagation and add a file handler to logging.root<|||||>> I just realized that I can enable propagation and add a file handler to logging.root How? Can you please provide the example code, if possible?<|||||>> > I just realized that I can enable propagation and add a file handler to logging.root > > How? Can you please provide the example code, if possible? @sid8491 add this piece of code at the beginning of your script: ```python file_formatter = logging.Formatter(fmt="%(asctime)s - %(levelname)s - %(name)s - %(message)s", datefmt="%m/%d/%Y %H:%M:%S", ) file_handler = logging.FileHandler( os.path.join(training_args.output_dir, f"log.{os.getpid()}.{training_args.local_rank}.txt")) file_handler.setFormatter(file_formatter) logging.root.addHandler(file_handler) ```
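The library's own logging helpers can be combined with the snippet above: `enable_propagation()` lets the `transformers` loggers bubble up to the root logger, where a file handler can be attached. A minimal sketch; the file name and verbosity level are just examples.

```python
import logging
from transformers.utils import logging as hf_logging

hf_logging.set_verbosity_info()
hf_logging.enable_propagation()  # let transformers loggers reach the root logger too

file_handler = logging.FileHandler("training.log")
file_handler.setFormatter(
    logging.Formatter("%(asctime)s - %(levelname)s - %(name)s - %(message)s")
)
logging.root.setLevel(logging.INFO)
logging.root.addHandler(file_handler)
# The library's default console handler stays active, so messages show up
# both on the console and in training.log.
```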
transformers
10,453
closed
pytorch Albert quantization error
I use huggingface transformers 'albert_chinese_base',but in pytorch quantization,The following problem occurred: File "test_simple.py", line 186, in <module> model_pt_quantized(input_ids=model_inputs["input_ids"], token_type_ids=model_inputs["token_type_ids"], attention_mask=model_inputs["attention_mask"]) File "/work/runtime/torch/lib64/python3.6/site-packages/torch/nn/modules/module.py", line 722, in _call_impl result = self.forward(*input, **kwargs) File "/work/runtime/torch/lib/python3.6/site-packages/transformers/modeling_albert.py", line 563, in forward output_hidden_states=output_hidden_states, File "/work/runtime/torch/lib64/python3.6/site-packages/torch/nn/modules/module.py", line 722, in _call_impl result = self.forward(*input, **kwargs) File "/work/runtime/torch/lib/python3.6/site-packages/transformers/modeling_albert.py", line 346, in forward output_hidden_states, File "/work/runtime/torch/lib64/python3.6/site-packages/torch/nn/modules/module.py", line 722, in _call_impl result = self.forward(*input, **kwargs) File "/work/runtime/torch/lib/python3.6/site-packages/transformers/modeling_albert.py", line 299, in forward layer_output = albert_layer(hidden_states, attention_mask, head_mask[layer_index], output_attentions) File "/work/runtime/torch/lib64/python3.6/site-packages/torch/nn/modules/module.py", line 722, in _call_impl result = self.forward(*input, **kwargs) File "/work/runtime/torch/lib/python3.6/site-packages/transformers/modeling_albert.py", line 277, in forward attention_output = self.attention(hidden_states, attention_mask, head_mask, output_attentions) File "/work/runtime/torch/lib64/python3.6/site-packages/torch/nn/modules/module.py", line 722, in _call_impl result = self.forward(*input, **kwargs) File "/work/runtime/torch/lib/python3.6/site-packages/transformers/modeling_albert.py", line 251, in forward self.dense.weight.t() AttributeError: 'function' object has no attribute 't'
03-01-2021 07:26:03
03-01-2021 07:26:03
I also met this bug. I found that self.dense has been through prune_linear_layer() and returns an nn.Linear as it should, so .weight.t() ought to work: ``` >>> m = torch.nn.Linear(1,2) >>> m.weight.t <built-in method t of Parameter object at 0x7fcb7c3a45f0> >>> m.weight.t() tensor([[-0.0714, 0.7815]], grad_fn=<TBackward>) ``` but I cannot find a way to fix it. If the maintainers answer your question, please @me, thanks a lot!<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread. Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored.
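What appears to be going on (an educated guess rather than a confirmed fix): after `quantize_dynamic`, `nn.Linear` modules are swapped for dynamically quantized linears on which `weight` is a method instead of a `Parameter`, so code like the `self.dense.weight.t()` line shown in the traceback breaks. A small sketch illustrating the difference:

```python
import torch

lin = torch.nn.Sequential(torch.nn.Linear(4, 4))
qlin = torch.quantization.quantize_dynamic(lin, {torch.nn.Linear}, dtype=torch.qint8)

print(type(lin[0].weight))       # torch.nn.Parameter, so .t() works
print(callable(qlin[0].weight))  # True: on the quantized module, weight is a method

w = qlin[0].weight()             # note the parentheses
print(w.shape, w.dtype)          # the int8 weight is only retrievable by calling weight()
```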
transformers
10,452
closed
BartForConditionalGeneration breaks with label smoothing loss
## Environment info - `transformers` version: 4.3.3 - The other parameters are irrelevant ### Who can help @patrickvonplaten @sgugger ## Information I apologize for not using the provided template for this issue. By generating entries with `PreTrainedTokenizer.prepare_seq2seq_batch`, collating with `DataCollatorForSeq2Seq` and training a `BartForConditionalGeneration` with `Seq2SeqTrainer`, I ran into this particular error message. ```txt Traceback (most recent call last): File "playground.py", line 42, in <module> trainer.train() File "/Users/mingrui.wang/miniconda3/envs/bert/lib/python3.7/site-packages/transformers/trainer.py", line 940, in train tr_loss += self.training_step(model, inputs) File "/Users/mingrui.wang/miniconda3/envs/bert/lib/python3.7/site-packages/transformers/trainer.py", line 1304, in training_step loss = self.compute_loss(model, inputs) File "/Users/mingrui.wang/miniconda3/envs/bert/lib/python3.7/site-packages/transformers/trainer.py", line 1341, in compute_loss loss = self.label_smoother(outputs, labels) File "/Users/mingrui.wang/miniconda3/envs/bert/lib/python3.7/site-packages/transformers/trainer_pt_utils.py", line 398, in __call__ nll_loss = log_probs.gather(dim=-1, index=labels) RuntimeError: Size does not match at dimension 1 expected index [1, 7, 1] to be smaller than src [1, 5, 50265] apart from dimension 2 ``` Provided is a script to replicate the error ```python from torch.utils.data import Dataset from transformers import (BartForConditionalGeneration, BartTokenizer, BatchEncoding, DataCollatorForSeq2Seq, PreTrainedTokenizer, Seq2SeqTrainer, TrainingArguments) class DummySeq2SeqDataset(Dataset): def __init__(self, tokenizer: PreTrainedTokenizer): self.tokenizer = tokenizer self.data = [ ("Hello world!", "Hallo welt!"), ] def __len__(self): return len(self.data) def __getitem__(self, index: int) -> BatchEncoding: src_text, tgt_text = self.data[index] return self.tokenizer.prepare_seq2seq_batch( src_text, tgt_text, return_token_type_ids=False ) train_args = TrainingArguments(output_dir='tmp', label_smoothing_factor=0.1) tokenizer = BartTokenizer.from_pretrained('facebook/bart-base') model = BartForConditionalGeneration.from_pretrained('facebook/bart-base') train_dataset = DummySeq2SeqDataset(tokenizer) data_collator = DataCollatorForSeq2Seq(tokenizer) trainer = Seq2SeqTrainer( model=model, args=train_args, data_collator=data_collator, train_dataset=train_dataset, tokenizer=tokenizer, ) trainer.train() ``` ## Source of problem The problem lies with the interaction between `BartForConditionalGeneration` and how label-smoothing is implemented in `Trainer`. `BartForConditionalGeneration.forward` is highly tied to `labels`, since it's also used to generate `decoder_input_ids`. In the behavior of label-smoothing as implemented in the `Trainer` class, the following is currently being done. https://github.com/huggingface/transformers/blob/3c733f320870261ea948049505a30c30fd6ea23a/src/transformers/trainer.py#L1447-L1469 The whoopsie is that `labels` is removed from the arguments passed to `BartForConditionalGeneration.forward`. Computation of logits then defaulted to using `input_ids` as `decoder_input_ids`. ## Possible solutions A possible way to fix this would be to shift smooth label loss into the loss computation of each model rather than in `Trainer`. Doing it this way comes with its own set of pros and cons. 
Pros - Backward compatibility can be completely maintained - Removes the little bit of code smell where true training loss is not reflected in `model.forward` when `label_smoothing > 0`. Cons - Complicates configuration - label_smoothing loss defined in model config rather than training args - Requires changes in many places in this repository (albeit, they are the same exact set of changes)
03-01-2021 07:15:07
03-01-2021 07:15:07
Another possible solution would be to change the behavior of `PreTrainedTokenizer.prepare_seq2seq_batch`, by adding something like a `return_decoder_input_ids` flag. Pros - Backward compatibility also maintained - Only a handful of changes has to be made Cons - This change can complicate matters for non-autoregressive sequence generation - A additional cost on serialization and I/O is incurred by having more tensors to pass from data loader workers and send to GPU.<|||||>Hi @mingruimingrui You are right. But this issue has already been fixed on master. The default `DataCollatorForSeq2Seq` now prepares `decoder_input_ids` when label smoothing is enabled. All seq2seq models now have `prepare_decoder_input_ids_from_labels` method, which let's you prepare `decoder_input_ids` outside of the model. So you can directly pass `decoder_input_ids` to the model and drop `labels` when using label smoothing.<|||||>Hi @patil-suraj I see but just a little bit of nit-picking regarding this solution... Mainly because I really can't excuse the passing of the entire model to DataCollator. Under PyTorch, both Dataset and DataCollator should be passable to python subprocesses. But a trainable model moved to the GPU would likely not work well with python multiprocessing. Also worth mentioning is the breaking of backward compatibility.<|||||>That's a great point! Pinging @sgugger here. > Also worth mentioning is the breaking of backward compatibility. I'm not sure what's breaking backward compatibility here. And regarding the previous comments > Another possible solution would be to change the behavior of PreTrainedTokenizer.prepare_seq2seq_batch, by adding something like a return_decoder_input_ids flag. This method is now deprecated and will be removed in v5 so we don't encourage using this method. > A possible way to fix this would be to shift smooth label loss into the loss computation of each model rather than in Trainer We generally tend to avoid any training-specific code in model files, the model classes are just responsible for doing a forward pass and computing the loss. Most of the training-related functionality will be handled by `Trainer` or the training scripts.<|||||>> I'm not sure what's breaking backward compatibility here. Ah, this is my bad! I was thinking that since `DataCollatorForSeq2Seq` may not work well with multiprocessing. Scripts currently using `transformers<=4.3.3` may be affected when updating to `transformers>4.3.3`. But in `transformers<=4.3.3`, there is no model attribute.<|||||>> We generally tend to avoid any training-specific code in model files, the model classes are just responsible for doing a forward pass and computing the loss. Most of the training-related functionality will be handled by `Trainer` or the training scripts. I agree, adding unrelated functionalities makes a class overloaded and code hard to read. Though it can be argued that the computation of loss is also a training-related function.<|||||>I don't think the multiprocessing will create copies of the models, it will just pass along the reference to it. Did you see GPU memory usage be multiplied by number of processes? I jsut tried your snippet of code and added the model to the data collator: ``` data_collator = DataCollatorForSeq2Seq(tokenizer, model=model) ``` and I didn't see any change in GPU memory use with 1 or 4 workers in the dataloader.<|||||>@sgugger The reason I discourage is practice is because it seemed to encourage the usage of model-related components in the `DataCollator`. 
Passing references of model parameters to python subprocesses is completely fine, but special care has to be taken not to use them (in the subprocess), or the terribly descriptive `RuntimeError: CUDA error: initialization error` can be encountered. GPU memory usage was not my concern in this issue. Similar to what you had mentioned, pickling of PyTorch tensors passes only a reference to the original tensor, so GPU memory usage would not increase due to this behavior.<|||||>We only use the method of the model to generate decoder input IDs, not the actual model, so I think it's completely fine in this case. Passing the method from the model would be way weirder in terms of user API.<|||||>I see, it's a fair point. Implementing the feature this way also ensures that the DataCollator performs all required preprocessing for training data input. Closing issue.
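To make the resolution concrete, this is roughly what the fixed setup looks like on recent versions: the collator receives the model so it can derive `decoder_input_ids` from the labels, which is what the label-smoothing path in `Trainer` needs. The `facebook/bart-base` checkpoint and the toy sentence pair are illustrative.

```python
import torch
from transformers import AutoTokenizer, BartForConditionalGeneration, DataCollatorForSeq2Seq

tokenizer = AutoTokenizer.from_pretrained("facebook/bart-base")
model = BartForConditionalGeneration.from_pretrained("facebook/bart-base")

# Passing the model lets the collator build decoder_input_ids from the labels,
# so the loss no longer falls back to using input_ids as decoder inputs.
collator = DataCollatorForSeq2Seq(tokenizer, model=model)

features = [
    {
        "input_ids": tokenizer("Hello world!").input_ids,
        "labels": tokenizer("Hallo welt!").input_ids,
    }
]
batch = collator(features)
print(batch.keys())  # includes 'decoder_input_ids' when the model is provided

# The same shift is also exposed directly on the model:
labels = torch.tensor([tokenizer("Hallo welt!").input_ids])
decoder_input_ids = model.prepare_decoder_input_ids_from_labels(labels=labels)
```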
transformers
10,451
closed
BART for generating sequence of length more than 1024 tokens
I was using the training code given in transformers/examples/seq2seq to fine-tune on my custom dataset, in which the summaries are longer than 1024 tokens. But I am getting an index-out-of-bounds error. Is it possible to fine-tune BART to generate summaries of more than 1024 tokens? I have attached a log file for reference. [v100job.txt](https://github.com/huggingface/transformers/files/6059178/v100job.txt)
03-01-2021 04:41:31
03-01-2021 04:41:31
Hi @silentghoul-spec, for BART the maximum sequence length is 1024, so it can't process text longer than 1024 tokens. You could use the `LED` model for long document summarization; here's a notebook which demonstrates how to use LED https://github.com/patrickvonplaten/notebooks/blob/master/Fine_tune_Longformer_Encoder_Decoder_(LED)_for_Summarization_on_pubmed.ipynb Also, please use the forum https://discuss.huggingface.co/ to ask such questions first.<|||||>Thanks, @patil-suraj. I wondered why it has a token limit of 1024, since the original paper https://arxiv.org/pdf/1910.13461.pdf didn't mention any such limit. I guess it's because the BART checkpoints currently available were trained with an encoder limited to 1024 tokens. Btw, thanks for pointing me to the discussion forums; I will use them for further discussions.
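A rough sketch of the LED route suggested above; the `allenai/led-base-16384` checkpoint, the lengths and the beam settings are illustrative, and global attention on the first token follows the usual summarization convention rather than anything specific to this issue.

```python
import torch
from transformers import LEDForConditionalGeneration, LEDTokenizer

tokenizer = LEDTokenizer.from_pretrained("allenai/led-base-16384")
model = LEDForConditionalGeneration.from_pretrained("allenai/led-base-16384")

long_document = "very long article text ... " * 500
inputs = tokenizer(long_document, return_tensors="pt", truncation=True, max_length=16384)

# LED expects a global attention mask; global attention on the first token
# is the common setup for summarization.
global_attention_mask = torch.zeros_like(inputs["input_ids"])
global_attention_mask[:, 0] = 1

summary_ids = model.generate(
    inputs["input_ids"],
    attention_mask=inputs["attention_mask"],
    global_attention_mask=global_attention_mask,
    num_beams=2,
    max_length=256,
)
print(tokenizer.batch_decode(summary_ids, skip_special_tokens=True))
```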
transformers
10,450
closed
OSError: Error no file named ['pytorch_model.bin', 'tf_model.h5'] When I try to use my model
an error occurred while importing my model from a folder. I cloned my repository and wanted to use the model, I got an error https://huggingface.co/Fidlobabovic/beta-kvantorium-simple-small Do I need to change files in my repository? What should I fix in code or model files? #7370 #9667 ```` from transformers import pipeline nlp = pipeline("question-answering", model='/content/beta-kvantorium-simple-small', tokenizer='/content/beta-kvantorium-simple-small') context = r""" Цель текущего контроля и аттестаций - выявление уровня обученности, развития способностей обучающихся, приобретенных компетенций и их соответствие прогнозируемым результатам дополнительной общеобразовательной программы. """ print(nlp(question="Какая Цель текущего контроля аттестаций?", context=context)) -> 1046 state_dict = torch.load(resolved_archive_file, map_location="cpu") 1047 except Exception: 9 frames UnpicklingError: invalid load key, 'v'. During handling of the above exception, another exception occurred: OSError Traceback (most recent call last) OSError: Unable to load weights from pytorch checkpoint file for '/content/beta-kvantorium-simple-small' at '/content/beta-kvantorium-simple-small/pytorch_model.bin'If you tried to load a PyTorch model from a TF 2.0 checkpoint, please set from_tf=True. During handling of the above exception, another exception occurred: OSError Traceback (most recent call last) /usr/local/lib/python3.7/dist-packages/transformers/modeling_tf_utils.py in from_pretrained(cls, pretrained_model_name_or_path, *model_args, **kwargs) 1183 raise EnvironmentError( 1184 "Error no file named {} found in directory {} or `from_pt` set to False".format( -> 1185 [WEIGHTS_NAME, TF2_WEIGHTS_NAME], pretrained_model_name_or_path 1186 ) 1187 ) OSError: Error no file named ['pytorch_model.bin', 'tf_model.h5'] found in directory /content/beta-kvantorium-simple-small or `from_pt` set to False
02-28-2021 18:19:47
02-28-2021 18:19:47
Please check if the `pytorch_model.bin` file is available in your cloned repo. I can see that file on the hub, so there might have been some mistake when cloning the repo.<|||||>@patil-suraj when cloning the repository, this file is present in the folder.<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread. Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored.<|||||>> Please check if the `pytorch_model.bin` file is available in your cloned repo. I can see that file on the hub, so there might have been some mistake when cloning the repo. Thanks!🥰 It solved my problem; some files' names were changed after they were downloaded 🤕
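For reference, a frequent cause of `UnpicklingError: invalid load key, 'v'` when loading from a locally cloned repo is that `pytorch_model.bin` is still a git-lfs pointer file (a short text file starting with `version https://git-lfs...`) rather than the actual weights. A quick check, using the path from the report above:

```python
from pathlib import Path

p = Path("/content/beta-kvantorium-simple-small/pytorch_model.bin")
print(p.stat().st_size)      # a few hundred bytes strongly suggests an LFS pointer, not weights
print(p.read_bytes()[:40])   # b'version https://git-lfs.github.com/...' for a pointer file
# If it is a pointer, running `git lfs pull` inside the cloned repo fetches the real binary.
```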
transformers
10,449
closed
pytorch/aten/src/THCUNN/ClassNLLCriterion.cu:108: cunn_ClassNLLCriterion_updateOutput_kernel: block: [0,0,0], thread: [1,0,0] Assertion `t >= 0 && t < n_classes` failed.
## Environment info <!-- You can run the command `transformers-cli env` and copy-and-paste its output below. Don't forget to fill out the missing fields in that output! --> - `transformers` version: 4.3.3 - Platform: Linux-5.4.0-65-generic-x86_64-with-debian-buster-sid - Python version: 3.7.9 - PyTorch version (GPU?): 1.7.1 (False) - Tensorflow version (GPU?): 2.5.0-dev20210225 (False) - Using GPU in script?: 2 V100 32GB - Using distributed or parallel set-up in script?: parallel ### Who can help @LysandreJik @sgugger @n1t0 <!-- Your issue will be replied to more quickly if you can figure out the right person to tag with @ If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**. Please tag fewer than 3 people. Models: - albert, bert, xlm: @LysandreJik - blenderbot, bart, marian, pegasus, encoderdecoder, t5: @patrickvonplaten, @patil-suraj - longformer, reformer, transfoxl, xlnet: @patrickvonplaten - fsmt: @stas00 - funnel: @sgugger - gpt2: @patrickvonplaten, @LysandreJik - rag: @patrickvonplaten, @lhoestq - tensorflow: @jplu Library: - benchmarks: @patrickvonplaten - deepspeed: @stas00 - ray/raytune: @richardliaw, @amogkam - text generation: @patrickvonplaten - tokenizers: @n1t0, @LysandreJik - trainer: @sgugger - pipelines: @LysandreJik Documentation: @sgugger HF projects: - nlp datasets: [different repo](https://github.com/huggingface/nlp) - rust tokenizers: [different repo](https://github.com/huggingface/tokenizers) Examples: - maintained examples (not research project or legacy): @sgugger, @patil-suraj - research_projects/bert-loses-patience: @JetRunner - research_projects/distillation: @VictorSanh --> ## Information Model I am using (Bert, XLNet ...): Bert The problem arises when using: * [x] the official example scripts: (give details below) * [ ] my own modified scripts: (give details below) I used `run_ner.py` from examples. The tasks I am working on is: * [ ] an official GLUE/SQUaD task: (give the name) * [x] my own task or dataset: (give details below) PoS Tagging task with the datasets: https://github.com/yigit353/turkish-bert-itu/tree/main/imst ## To reproduce Steps to reproduce the behavior: 1. Converted a BERT TensorFlow 1 checkpoint pre-trained from scratch using a custom corpus and vocabulary with the original Google's BERT run_pretraining.py via `transformers-cli convert` 2. Used the datasets in this repo (I just uploaded them there): https://github.com/yigit353/turkish-bert-itu/tree/main/imst 3. Used run_ner.py on the dataset with the following code: ```bash python3 "$USER_ROOT/$LIB_DIR/run_ner.py" \ --task_name=pos \ --model_name_or_path "$USER_ROOT/$BERT_DIR/$TORCH_DIR" \ --train_file "$USER_ROOT/$DATA_DIR/tr_imst-ud-train.conllu.json" \ --validation_file "$USER_ROOT/$DATA_DIR/tr_imst-ud-dev.conllu.json" \ --output_dir "$USER_ROOT/$DATA_DIR/$OUTPUT_DIR-$SEED" \ --per_device_train_batch_size=$BATCH_SIZE \ --num_train_epochs=$NUM_EPOCHS \ --overwrite_cache=True \ --do_train \ --do_eval \ --seed=$SEED \ --fp16 ``` 4. It worked good with NER datasets (which is parallel to PoS dataset) here: https://github.com/yigit353/turkish-bert-itu/tree/main/datasets/ner 5. 
It also worked with the PyTorch model (both with PoS and NER without errors or warnings): https://huggingface.co/dbmdz/bert-base-turkish-cased I also receive the following warning for NER and POS datasets: `thread '<unnamed>' panicked at 'no entry found for key', /__w/tokenizers/tokenizers/tokenizers/src/models/mod.rs:36:66` However, NER task worked nonetheless with this script: ```bash python3 "$USER_ROOT/$LIB_DIR/run_ner.py" \ --model_name_or_path "$USER_ROOT/$BERT_DIR/$OUT_DIR/$TORCH_OUT_DIR" \ --train_file "$USER_ROOT/$DATA_DIR/tr-data3/train.json" \ --validation_file "$USER_ROOT/$DATA_DIR/tr-data3/dev.json" \ --output_dir "$USER_ROOT/$DATA_DIR/$OUTPUT_DIR-$SEED" \ --per_device_train_batch_size=$BATCH_SIZE \ --num_train_epochs=$NUM_EPOCHS \ --do_train \ --do_eval \ --fp16` ``` <!-- If you have code snippets, error messages, stack traces please provide them here as well. Important! Use code tags to correctly format your code. See https://help.github.com/en/github/writing-on-github/creating-and-highlighting-code-blocks#syntax-highlighting Do not use screenshots, as they are hard to read and (more importantly) don't allow others to copy-and-paste your code.--> ## Expected behavior <!-- A clear and concise description of what you would expect to happen. --> ``` [INFO|trainer.py:837] 2021-02-28 16:04:10,685 >> ***** Running training ***** [INFO|trainer.py:838] 2021-02-28 16:04:10,685 >> Num examples = 3664 [INFO|trainer.py:839] 2021-02-28 16:04:10,685 >> Num Epochs = 10 [INFO|trainer.py:840] 2021-02-28 16:04:10,685 >> Instantaneous batch size per device = 16 [INFO|trainer.py:841] 2021-02-28 16:04:10,685 >> Total train batch size (w. parallel, distributed & accumulation) = 32 [INFO|trainer.py:842] 2021-02-28 16:04:10,685 >> Gradient Accumulation steps = 1 [INFO|trainer.py:843] 2021-02-28 16:04:10,685 >> Total optimization steps = 1150 0%| | 0/1150 [00:00<?, ?it/s]/okyanus/users/ctantug/.conda/envs/py37/lib/python3.7/site-packages/torch/nn/parallel/_functions.py:64: UserWarning: Was asked to gather along dimension 0, but all input tensors were scalars; will instead unsqueeze and return a vector. 
warnings.warn('Was asked to gather along dimension 0, but all ' Traceback (most recent call last): File "/okyanus/users/ctantug/transformers/examples/token-classification/run_ner.py", line 466, in <module> main() File "/okyanus/users/ctantug/transformers/examples/token-classification/run_ner.py", line 400, in main train_result = trainer.train(resume_from_checkpoint=checkpoint) File "/okyanus/users/ctantug/.conda/envs/py37/lib/python3.7/site-packages/transformers/trainer.py", line 940, in train tr_loss += self.training_step(model, inputs) File "/okyanus/users/ctantug/.conda/envs/py37/lib/python3.7/site-packages/transformers/trainer.py", line 1302, in training_step loss = self.compute_loss(model, inputs) File "/okyanus/users/ctantug/.conda/envs/py37/lib/python3.7/site-packages/transformers/trainer.py", line 1334, in compute_loss outputs = model(**inputs) File "/okyanus/users/ctantug/.conda/envs/py37/lib/python3.7/site-packages/torch/nn/modules/module.py", line 727, in _call_impl result = self.forward(*input, **kwargs) File "/okyanus/users/ctantug/.conda/envs/py37/lib/python3.7/site-packages/torch/nn/parallel/data_parallel.py", line 162, in forward return self.gather(outputs, self.output_device) File "/okyanus/users/ctantug/.conda/envs/py37/lib/python3.7/site-packages/torch/nn/parallel/data_parallel.py", line 174, in gather return gather(outputs, output_device, dim=self.dim) File "/okyanus/users/ctantug/.conda/envs/py37/lib/python3.7/site-packages/torch/nn/parallel/scatter_gather.py", line 68, in gather res = gather_map(outputs) File "/okyanus/users/ctantug/.conda/envs/py37/lib/python3.7/site-packages/torch/nn/parallel/scatter_gather.py", line 62, in gather_map for k in out)) File "<string>", line 7, in __init__ File "/okyanus/users/ctantug/.conda/envs/py37/lib/python3.7/site-packages/transformers/file_utils.py", line 1413, in __post_init__ for element in iterator: File "/okyanus/users/ctantug/.conda/envs/py37/lib/python3.7/site-packages/torch/nn/parallel/scatter_gather.py", line 62, in <genexpr> for k in out)) File "/okyanus/users/ctantug/.conda/envs/py37/lib/python3.7/site-packages/torch/nn/parallel/scatter_gather.py", line 55, in gather_map return Gather.apply(target_device, dim, *outputs) File "/okyanus/users/ctantug/.conda/envs/py37/lib/python3.7/site-packages/torch/nn/parallel/_functions.py", line 71, in forward return comm.gather(inputs, ctx.dim, ctx.target_device) File "/okyanus/users/ctantug/.conda/envs/py37/lib/python3.7/site-packages/torch/nn/parallel/comm.py", line 230, in gather return torch._C._gather(tensors, dim, destination) RuntimeError: CUDA error: device-side assert triggered /pytorch/aten/src/THCUNN/ClassNLLCriterion.cu:108: cunn_ClassNLLCriterion_updateOutput_kernel: block: [0,0,0], thread: [1,0,0] Assertion `t >= 0 && t < n_classes` failed. /pytorch/aten/src/THCUNN/ClassNLLCriterion.cu:108: cunn_ClassNLLCriterion_updateOutput_kernel: block: [0,0,0], thread: [2,0,0] Assertion `t >= 0 && t < n_classes` failed. /pytorch/aten/src/THCUNN/ClassNLLCriterion.cu:108: cunn_ClassNLLCriterion_updateOutput_kernel: block: [0,0,0], thread: [5,0,0] Assertion `t >= 0 && t < n_classes` failed. /pytorch/aten/src/THCUNN/ClassNLLCriterion.cu:108: cunn_ClassNLLCriterion_updateOutput_kernel: block: [0,0,0], thread: [9,0,0] Assertion `t >= 0 && t < n_classes` failed. /pytorch/aten/src/THCUNN/ClassNLLCriterion.cu:108: cunn_ClassNLLCriterion_updateOutput_kernel: block: [0,0,0], thread: [12,0,0] Assertion `t >= 0 && t < n_classes` failed. 
/pytorch/aten/src/THCUNN/ClassNLLCriterion.cu:108: cunn_ClassNLLCriterion_updateOutput_kernel: block: [0,0,0], thread: [14,0,0] Assertion `t >= 0 && t < n_classes` failed. /pytorch/aten/src/THCUNN/ClassNLLCriterion.cu:108: cunn_ClassNLLCriterion_updateOutput_kernel: block: [0,0,0], thread: [1,0,0] Assertion `t >= 0 && t < n_classes` failed. /pytorch/aten/src/THCUNN/ClassNLLCriterion.cu:108: cunn_ClassNLLCriterion_updateOutput_kernel: block: [0,0,0], thread: [4,0,0] Assertion `t >= 0 && t < n_classes` failed. /pytorch/aten/src/THCUNN/ClassNLLCriterion.cu:108: cunn_ClassNLLCriterion_updateOutput_kernel: block: [0,0,0], thread: [14,0,0] Assertion `t >= 0 && t < n_classes` failed. /pytorch/aten/src/THCUNN/ClassNLLCriterion.cu:108: cunn_ClassNLLCriterion_updateOutput_kernel: block: [0,0,0], thread: [16,0,0] Assertion `t >= 0 && t < n_classes` failed. ``` The stack trace always gives a different error location.
02-28-2021 18:16:04
02-28-2021 18:16:04
I think this means you have a label outside of boundaries from the error message but I can't be sure: `CUDA error: device-side assert triggered` are very tricky since they are thrown not when they appear but when there is a synchronization between all the CUDA processes. The very best way to debug those errors is to try to run a few batches on the CPU to get a better error message. I can try to look into this later on but your model is not public (in the command you give us to repro) and you said it worked for other models?<|||||>> I think this means you have a label outside of boundaries from the error message but I can't be sure: `CUDA error: device-side assert triggered` are very tricky since they are thrown not when they appear but when there is a synchronization between all the CUDA processes. After running with a single CPU and no GPU I got this more explanatory error: ``` 0%| | 0/229 [00:00<?, ?it/s]Traceback (most recent call last): File "/okyanus/users/ctantug/transformers/examples/token-classification/run_ner.py", line 471, in <module> main() File "/okyanus/users/ctantug/transformers/examples/token-classification/run_ner.py", line 405, in main train_result = trainer.train(resume_from_checkpoint=checkpoint) File "/okyanus/users/ctantug/.conda/envs/py37/lib/python3.7/site-packages/transformers/trainer.py", line 940, in train tr_loss += self.training_step(model, inputs) File "/okyanus/users/ctantug/.conda/envs/py37/lib/python3.7/site-packages/transformers/trainer.py", line 1304, in training_step loss = self.compute_loss(model, inputs) File "/okyanus/users/ctantug/.conda/envs/py37/lib/python3.7/site-packages/transformers/trainer.py", line 1334, in compute_loss outputs = model(**inputs) File "/okyanus/users/ctantug/.conda/envs/py37/lib/python3.7/site-packages/torch/nn/modules/module.py", line 727, in _call_impl result = self.forward(*input, **kwargs) File "/okyanus/users/ctantug/.conda/envs/py37/lib/python3.7/site-packages/transformers/models/bert/modeling_bert.py", line 1708, in forward loss = loss_fct(active_logits, active_labels) File "/okyanus/users/ctantug/.conda/envs/py37/lib/python3.7/site-packages/torch/nn/modules/module.py", line 727, in _call_impl result = self.forward(*input, **kwargs) File "/okyanus/users/ctantug/.conda/envs/py37/lib/python3.7/site-packages/torch/nn/modules/loss.py", line 962, in forward ignore_index=self.ignore_index, reduction=self.reduction) File "/okyanus/users/ctantug/.conda/envs/py37/lib/python3.7/site-packages/torch/nn/functional.py", line 2471, in cross_entropy return nll_loss(log_softmax(input, 1), target, weight, None, ignore_index, None, reduction) File "/okyanus/users/ctantug/.conda/envs/py37/lib/python3.7/site-packages/torch/nn/functional.py", line 2267, in nll_loss ret = torch._C._nn.nll_loss(input, target, weight, _Reduction.get_enum(reduction), ignore_index) IndexError: Target 9 is out of bounds. ``` > The very best way to debug those errors is to try to run a few batches on the CPU to get a better error message. I can try to look into this later on but your model is not public (in the command you give us to repro) and you said it worked for other models? The model is not public yet, indeed. I will make them public if necessary. The model successfully performed the NER task (which is nearly the same considering the data structure). 
However, failed at PoS tagging (which has only different labels).<|||||>So the problem is indeed with the labels -> here you have a label with an index of 9 and you should print the value of `num_labels` in the script, but it looks like it's less than this from the error. I think the datasets should be fixed somehow. It may be that your validation dataset has a label the training dataset did not have, which then causes this issue. In that case you should either make sure that label is present in the training set too or remove the samples with that label in your evaluation dataset.<|||||>These are my active labels: ``` Activate labels: tensor([-100, 9, 4, 7, 12, -100, -100, -100, 7, 0, 8, 7, -100, -100, 12, -100, 11, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, 0, 2, -100, 0, 7, -100, 11, 7, -100, 7, 12, -100, 12, 2, -100, 7, -100, 1, 8, 7, 3, -100, 11, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, 7, -100, -100, 1, 0, 7, 11, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, 10, -100, -100, -100, 10, -100, 0, 12, -100, -100, -100, 11, 9, 0, -100, 7, -100, -100, 7, 12, -100, -100, 11, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, 7, -100, 11, 7, -100, -100, -100, 11, 7, -100, -100, -100, 11, 7, -100, -100, 12, 8, 7, 11, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, 4, 9, 9, 1, 2, -100, 12, -100, -100, -100, 0, 3, 11, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, 2, 7, 7, -100, 12, 11, 4, 12, -100, 11, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, 2, 12, -100, 12, -100, 11, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, 8, 7, -100, 12, -100, -100, 12, -100, 1, 12, 11, 8, 7, 2, 12, -100, 7, 3, 9, 11, 4, 7, -100, 9, 8, 7, 12, -100, -100, -100, 11, 2, 0, -100, -100, 0, -100, 11, 12, -100, 12, -100, 1, 0, 12, 12, 0, -100, 3, 11, -100, -100, 10, -100, -100, 11, 0, 7, -100, 7, 1, 2, 0, -100, -100, 12, -100, 5, 7, 11, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, 7, -100, 7, 7, 12, -100, 11, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, 
-100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, 10, -100, 4, 11, 7, -100, 10, -100, -100, 10, -100, -100, -100, -100, 0, 7, -100, 12, 12, -100, -100, 4, 12, -100, 12, -100, 11, 7, -100, -100, -100, 12, 11, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, 0, 7, 12, -100, 11, 12, -100, -100, 11, 12, -100, 11, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, 10, 7, -100, -100, -100, 12, 11, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, 0, -100, -100, 7, 12, -100, 11, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, 10, 9, 7, 12, 11, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100]) ``` And the targets: ``` target: tensor([-100, 9, 4, 7, 12, -100, -100, -100, 7, 0, 8, 7, -100, -100, 12, -100, 11, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, 0, 2, -100, 0, 7, -100, 11, 7, -100, 7, 12, -100, 12, 2, -100, 7, -100, 1, 8, 7, 3, -100, 11, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, 7, -100, -100, 1, 0, 7, 11, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, 10, -100, -100, -100, 10, -100, 0, 12, -100, -100, -100, 11, 9, 0, -100, 7, -100, -100, 7, 12, -100, -100, 11, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, 7, -100, 11, 7, -100, -100, -100, 11, 7, -100, -100, -100, 11, 7, -100, -100, 12, 8, 7, 11, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, 4, 9, 9, 1, 2, -100, 12, -100, -100, -100, 0, 3, 11, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, 2, 7, 7, -100, 12, 11, 4, 12, -100, 11, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, 
-100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, 2, 12, -100, 12, -100, 11, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, 8, 7, -100, 12, -100, -100, 12, -100, 1, 12, 11, 8, 7, 2, 12, -100, 7, 3, 9, 11, 4, 7, -100, 9, 8, 7, 12, -100, -100, -100, 11, 2, 0, -100, -100, 0, -100, 11, 12, -100, 12, -100, 1, 0, 12, 12, 0, -100, 3, 11, -100, -100, 10, -100, -100, 11, 0, 7, -100, 7, 1, 2, 0, -100, -100, 12, -100, 5, 7, 11, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, 7, -100, 7, 7, 12, -100, 11, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, 10, -100, 4, 11, 7, -100, 10, -100, -100, 10, -100, -100, -100, -100, 0, 7, -100, 12, 12, -100, -100, 4, 12, -100, 12, -100, 11, 7, -100, -100, -100, 12, 11, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, 0, 7, 12, -100, 11, 12, -100, -100, 11, 12, -100, 11, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, 10, 7, -100, -100, -100, 12, 11, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, 0, -100, -100, 7, 12, -100, 11, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, 10, 9, 7, 12, 11, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100, -100]) ```<|||||>I have already checked the number of labels first thing. That's why it surprised me that it is not the problem. I also run the script without eval. ``` 03/01/2021 17:45:18 - INFO - __main__ - Label list ['ADJ', 'ADP', 'ADV', 'AUX', 'CCONJ', 'DET', 'INTJ', 'NOUN', 'NUM', 'PRON', 'PROPN', 'PUNCT', 'VERB', 'X'] 03/01/2021 17:45:18 - INFO - __main__ - Label to id {'ADJ': 0, 'ADP': 1, 'ADV': 2, 'AUX': 3, 'CCONJ': 4, 'DET': 5, 'INTJ': 6, 'NOUN': 7, 'NUM': 8, 'PRON': 9, 'PROPN': 10, 'PUNCT': 11, 'VERB': 12, 'X': 13} 03/01/2021 17:45:18 - INFO - __main__ - Num labels 14 ``` What might be another cause of this?<|||||>Solved it! Turns out in my config.json (which is copied from another PyTorch checkpoint) should also change the `label2id` and `id2label`. That was totally unexpected. 
In order to match 14 labels I changed the config file as follows: ``` { ... "architectures": [ "BertForTokenClassification" ], "id2label": { "0": "LABEL_0", "1": "LABEL_1", "2": "LABEL_2", "3": "LABEL_3", "4": "LABEL_4", "5": "LABEL_5", "6": "LABEL_6", "7": "LABEL_7", "8": "LABEL_8", "9": "LABEL_9", "10": "LABEL_10", "11": "LABEL_11", "12": "LABEL_12", "13": "LABEL_13" }, "label2id": { "LABEL_0": 0, "LABEL_1": 1, "LABEL_2": 2, "LABEL_3": 3, "LABEL_4": 4, "LABEL_5": 5, "LABEL_6": 6, "LABEL_7": 7, "LABEL_8": 8, "LABEL_9": 9, "LABEL_10": 10, "LABEL_11": 11, "LABEL_12": 12, "LABEL_13": 13 }, ... } ``` Thank you anyways.
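The same fix can also be done programmatically instead of editing `config.json` by hand: setting `num_labels` together with explicit `id2label`/`label2id` when loading builds a classification head of the right size. The label list matches the one printed above; the checkpoint path is a hypothetical placeholder.

```python
from transformers import AutoConfig, AutoModelForTokenClassification, AutoTokenizer

label_list = ["ADJ", "ADP", "ADV", "AUX", "CCONJ", "DET", "INTJ",
              "NOUN", "NUM", "PRON", "PROPN", "PUNCT", "VERB", "X"]

model_path = "path/to/converted_turkish_bert"  # hypothetical local checkpoint directory
config = AutoConfig.from_pretrained(
    model_path,
    num_labels=len(label_list),
    id2label={i: label for i, label in enumerate(label_list)},
    label2id={label: i for i, label in enumerate(label_list)},
)
tokenizer = AutoTokenizer.from_pretrained(model_path)
model = AutoModelForTokenClassification.from_pretrained(model_path, config=config)
print(model.num_labels)  # 14, so the loss no longer sees out-of-range targets
```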
transformers
10,448
closed
When I try to import my model I run into an error "TypeError: PyMetaspace.__new__() got an unexpected keyword argument: str_rep"
In the Hugging Face repository I have my own model Fidlobabovic / beta-kvantorium-simple-small https://huggingface.co/Fidlobabovic/beta-kvantorium-simple-small/tree/main #7370 #10148 When I try to import it, I get an error. What can I do to fix it? Overwrite model files or rename them? What should I write for this? ```` from transformers import AutoTokenizer, AutoModelForMaskedLM tokenizer = AutoTokenizer.from_pretrained("Fidlobabovic/beta-kvantorium-simple-small") model = AutoModelForMaskedLM.from_pretrained("Fidlobabovic/beta-kvantorium-simple-small") TypeError Traceback (most recent call last) <ipython-input-5-3f4375d4cdf7> in <module>() 1 from transformers import AutoTokenizer, AutoModelForMaskedLM 2 ----> 3 tokenizer = AutoTokenizer.from_pretrained("Fidlobabovic/beta-kvantorium-simple-small") 4 5 model = AutoModelForMaskedLM.from_pretrained("Fidlobabovic/beta-kvantorium-simple-small") 4 frames /usr/local/lib/python3.7/dist-packages/transformers/models/gpt2/tokenization_gpt2_fast.py in __init__(self, vocab_file, merges_file, tokenizer_file, unk_token, bos_token, eos_token, add_prefix_space, **kwargs) 150 pre_tok_class = getattr(pre_tokenizers, pre_tok_state.pop("type")) 151 pre_tok_state["add_prefix_space"] = add_prefix_space --> 152 self.backend_tokenizer.pre_tokenizer = pre_tok_class(**pre_tok_state) 153 154 self.add_prefix_space = add_prefix_space TypeError: PyMetaspace.__new__() got an unexpected keyword argument: str_rep ````
02-28-2021 17:37:49
02-28-2021 17:37:49
Maybe @n1t0 knows what might be happening here.<|||||>@n1t0 — @LysandreJik mentioned you above.<|||||>Has this problem been fixed? I ran into the same issue: I trained a SentencePieceBPETokenizer and used the save() API to persist it as a tokenizer.json file, but got this error while loading it. I dug into the Rust code and found that Metaspace is defined as a struct with three attributes, while the constructor accepts only two. I tried to modify the JSON file and remove the attribute that is not in the constructor, but got another error: `Exception: data did not match any variant of untagged enum PyPreTokenizerTypeWrapper at line 1 column 1449 ` Below is the content around char 1449: ` lse, "normalized": false}], "normalizer": {"type": "NFKC"}, "pre_tokenizer": {"type": "Metaspace", "replacement": "\\u2581", "add_prefix_space": true}, "post_processor": null, "decoder": {"type": "Meta `<|||||>This has been fixed in the latest version of tokenizers (`0.10.2`)
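Since the fix landed in `tokenizers` 0.10.2, a quick way to double-check the environment before reloading a saved tokenizer might look like this sketch (the file path is a placeholder):

```python
# Rough sanity check, assuming the saved tokenizer.json sits in the working directory:
# the Metaspace deserialization error above was fixed in tokenizers >= 0.10.2.
import tokenizers
from packaging import version
from tokenizers import Tokenizer

assert version.parse(tokenizers.__version__) >= version.parse("0.10.2"), (
    f"tokenizers {tokenizers.__version__} is too old to load this tokenizer.json"
)

tok = Tokenizer.from_file("tokenizer.json")  # placeholder path to the file produced by save()
print(tok.pre_tokenizer)                     # the Metaspace pre-tokenizer should now deserialize
```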
transformers
10,447
closed
changing the way checkpoint is done in the new release
Hi, Currently the HuggingFace library checks the output path for the latest checkpoint and then resumes training from there. This approach does not work in the following case: - suppose you train with a limit of 1 checkpoint and only save a checkpoint when the best eval accuracy is achieved; if the library then loads the model from that last checkpoint, training does not resume from the point where it stopped and can go back a long way in time, because the last saved checkpoint is not necessarily the most recent model state; it is only the model with the best accuracy. To solve the issue, in addition to the checkpoint folders, one needs to introduce a "save_path_folder", and then, when resuming training, load from this path rather than from the last checkpoint folder. Please let me know if any part is not clear. Thanks
02-28-2021 17:09:13
02-28-2021 17:09:13
I am not sure I understand this correctly: how would the `Trainer` know if the last model is different from the actual last checkpoint, and if the model is only saved when the best eval accuracy is reached, how would you save the last model to `save_path_folder`? Also, if you are saving the last model (and optimizer/scheduler) to some other directory, you could always pass that path as the `model_name_or_path` and change the `--output_dir` so the `Trainer` won't load the last checkpoint. <|||||>cc @sgugger <|||||>> suppose you train with a limit of 1 checkpoint and only save a checkpoint when the best eval accuracy is achieved; if the library then loads the model from that last checkpoint, training does not resume from the point where it stopped and can go back a long way in time, because the last saved checkpoint is not necessarily the most recent model state; it is only the model with the best accuracy. Yes, you should not use a `save_total_limit` of 1 in conjunction with metric tracking (such as `load_best_model_at_end=True`), or you have to accept that you won't be able to resume training from the last checkpoint. If you want to restart from scratch, you should just pass `--overwrite_output_dir`.<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread. Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored.
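A minimal sketch of the alternative suggested above, using standard `TrainingArguments` fields (availability depends on the installed version; `model`, `train_ds` and `eval_ds` are assumed to be defined elsewhere):

```python
from transformers import Trainer, TrainingArguments

# Sketch only: keep several checkpoints so the newest one really is the latest training state,
# and let the Trainer track the best model separately instead of relying on save_total_limit=1.
args = TrainingArguments(
    output_dir="out",                # placeholder
    evaluation_strategy="epoch",
    save_strategy="epoch",
    save_total_limit=3,              # > 1, so resuming uses a genuinely recent checkpoint
    load_best_model_at_end=True,     # the best model is restored at the end of training
    metric_for_best_model="accuracy",
)

trainer = Trainer(model=model, args=args, train_dataset=train_ds, eval_dataset=eval_ds)
trainer.train(resume_from_checkpoint=True)   # resumes from the latest checkpoint in output_dir
```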
transformers
10,446
closed
AttributeError: 'Trainer' object has no attribute 'log_metrics'
## Environment info - `transformers` version: 4.3.3 - Platform: Linux-4.19.112+-x86_64-with-Ubuntu-18.04-bionic - Python version: 3.7.10 - PyTorch version (GPU?): 1.7.1+cu101 (True) - Tensorflow version (GPU?): 2.4.1 (True) - Using GPU in script?: True - Using distributed or parallel set-up in script?: False ### Who can help @sgugger ## Information Model I am using (Bert, XLNet ...): Bert The problem arises when using: * [x] the official example scripts: (give details below) * [ ] my own modified scripts: (give details below) `examples/language-modeling/run_mlm.py` The tasks I am working on is: * [ ] an official GLUE/SQUaD task: (give the name) * [x] my own task or dataset: (give details below) Just a usual txt file with texts line by line. Logging on the epoch's end of mlm training fails: ``` Traceback (most recent call last): File "examples/language-modeling/run_mlm.py", line 442, in <module> main() File "examples/language-modeling/run_mlm.py", line 416, in main trainer.log_metrics("train", metrics) AttributeError: 'Trainer' object has no attribute 'log_metrics' ``` ## To reproduce Steps to reproduce the behavior: ``` pip install transformers cd transformers python examples/language-modeling/run_mlm.py \ --model_name_or_path Geotrend/bert-base-ru-cased \ --train_file <path to train file> \ --validation_file <path to validation file> \ --do_train \ --do_eval \ --num_train_epochs 1 \ --output_dir <path to output dir> \ --save_steps 10000 \ --line_by_line True ``` ## Expected behavior It works after checkout to the previous commit, [b01483f](https://github.com/huggingface/transformers/commit/b01483faa0cfb57369cbce153c671dbe48cc0638) : ``` 02/28/2021 14:00:45 - INFO - __main__ - ***** Train results ***** 02/28/2021 14:00:45 - INFO - __main__ - epoch = 1.0 02/28/2021 14:00:45 - INFO - __main__ - train_runtime = 1091.7453 02/28/2021 14:00:45 - INFO - __main__ - train_samples_per_second = 70.642 02/28/2021 14:00:45 - INFO - __main__ - *** Evaluate *** [INFO|trainer.py:1600] 2021-02-28 14:00:45,719 >> ***** Running Evaluation ***** [INFO|trainer.py:1601] 2021-02-28 14:00:45,719 >> Num examples = 154244 [INFO|trainer.py:1602] 2021-02-28 14:00:45,719 >> Batch size = 8 100% 19281/19281 [06:56<00:00, 46.28it/s] 02/28/2021 14:07:42 - INFO - __main__ - ***** Eval results ***** 02/28/2021 14:07:42 - INFO - __main__ - perplexity = 4.859176983612205 ```
02-28-2021 16:39:59
02-28-2021 16:39:59
Hi there. As is mentioned at the very beginning of the [examples README](https://github.com/huggingface/transformers/tree/master/examples#important-note), running the examples requires an install from source. If you want the examples associated with v4.3.3, you can find them [here](https://github.com/huggingface/transformers/tree/v4.3.3/examples).
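If matching the script to the installed version is not convenient, a defensive sketch like the one below (purely illustrative, `trainer` assumed to exist) degrades gracefully when the installed `Trainer` predates `log_metrics`/`save_metrics`:

```python
# Illustrative only: log_metrics/save_metrics exist on newer Trainer versions, so guard for
# them when a newer example script is run against an older transformers install.
train_result = trainer.train()        # assumes `trainer` was built earlier in the script
metrics = train_result.metrics

if hasattr(trainer, "log_metrics"):
    trainer.log_metrics("train", metrics)
    trainer.save_metrics("train", metrics)
else:
    for key, value in sorted(metrics.items()):
        print(f"  {key} = {value}")
```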
transformers
10,445
closed
[IBert] Correct link to paper
# What does this PR do? <!-- Congratulations! You've made it this far! You're not quite done yet though. Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution. Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change. Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost. --> <!-- Remove if not applicable --> Fixes # (issue) ## Before submitting - [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case). - [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section? - [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link to it if that's the case. - [ ] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/master/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/master/docs#writing-source-documentation). - [ ] Did you write any new necessary tests? ## Who can review? Anyone in the community is free to review the PR once the tests have passed. Feel free to tag members/contributors which may be interested in your PR. <!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @ If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**. Please tag fewer than 3 people. Models: - albert, bert, xlm: @LysandreJik - blenderbot, bart, marian, pegasus, encoderdecoder, t5: @patrickvonplaten, @patil-suraj - longformer, reformer, transfoxl, xlnet: @patrickvonplaten - fsmt: @stas00 - funnel: @sgugger - gpt2: @patrickvonplaten, @LysandreJik - rag: @patrickvonplaten, @lhoestq - tensorflow: @jplu Library: - benchmarks: @patrickvonplaten - deepspeed: @stas00 - ray/raytune: @richardliaw, @amogkam - text generation: @patrickvonplaten - tokenizers: @n1t0, @LysandreJik - trainer: @sgugger - pipelines: @LysandreJik Documentation: @sgugger HF projects: - nlp datasets: [different repo](https://github.com/huggingface/nlp) - rust tokenizers: [different repo](https://github.com/huggingface/tokenizers) Examples: - maintained examples (not research project or legacy): @sgugger, @patil-suraj - research_projects/bert-loses-patience: @JetRunner - research_projects/distillation: @VictorSanh -->
02-28-2021 15:51:45
02-28-2021 15:51:45
transformers
10,444
closed
TypeError: can only concatenate str (not "int") to str
While running run_seq2seq.py for a summarization task on my own CSV, I get the following error:

```
All the weights of BartForConditionalGeneration were initialized from the model checkpoint at sshleifer/distilbart-cnn-12-6.
If your task is similar to the task the model of the checkpoint was trained on, you can already use BartForConditionalGeneration for predictions without further training.
Traceback (most recent call last):
  File "/content/transformers/examples/seq2seq/run_seq2seq.py", line 645, in <module>
    main()
  File "/content/transformers/examples/seq2seq/run_seq2seq.py", line 476, in main
    load_from_cache_file=not data_args.overwrite_cache,
  File "/usr/local/lib/python3.7/dist-packages/datasets/arrow_dataset.py", line 1120, in map
    update_data = does_function_return_dict(test_inputs, test_indices)
  File "/usr/local/lib/python3.7/dist-packages/datasets/arrow_dataset.py", line 1091, in does_function_return_dict
    function(*fn_args, indices, **fn_kwargs) if with_indices else function(*fn_args, **fn_kwargs)
  File "/content/transformers/examples/seq2seq/run_seq2seq.py", line 448, in preprocess_function
    inputs = [prefix + inp for inp in inputs]
  File "/content/transformers/examples/seq2seq/run_seq2seq.py", line 448, in <listcomp>
    inputs = [prefix + inp for inp in inputs]
TypeError: can only concatenate str (not "int") to str
```
02-28-2021 11:18:10
02-28-2021 11:18:10
Hi, if the `text_column` and `summary_column` arguments are not specified when running the script, it is assumed that the first column in a csv file contains the full text and the second column the corresponding summaries. From the error message, it seems your csv file has integers in the first column. <|||||>Done with that, but now I get this error: <|||||> File "run_seq2seq.py", line 645, in <module> main() File "run_seq2seq.py", line 518, in main pad_to_multiple_of=8 if training_args.fp16 else None, TypeError: __init__() got an unexpected keyword argument 'model'<|||||>This is because the `DataCollatorForSeq2Seq` now adds a new `model` argument. Upgrading to master will fix this issue. Also, as is mentioned at the very beginning of the [examples README](https://github.com/huggingface/transformers/tree/master/examples#important-note), running the examples requires an install from source. <|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread. Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored.
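For reference, the collator call that trips the older version looks roughly like this on a recent install (a sketch; the checkpoint name is just an example):

```python
from transformers import AutoModelForSeq2SeqLM, AutoTokenizer, DataCollatorForSeq2Seq

# Sketch only: the `model` argument is exactly what the older installed version did not
# recognise, hence the need to upgrade (or install from source) to run the current script.
tokenizer = AutoTokenizer.from_pretrained("t5-small")
model = AutoModelForSeq2SeqLM.from_pretrained("t5-small")

data_collator = DataCollatorForSeq2Seq(
    tokenizer,
    model=model,              # lets the collator build decoder_input_ids for the model
    label_pad_token_id=-100,
    pad_to_multiple_of=8,     # only relevant with fp16, mirroring the script
)
```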
transformers
10,443
closed
Adds terms to Glossary
# What does this PR do? Wanted a definition of what a transformer is since it is not in the glossary, @cronoik provided one that required two other terms so this pull request makes those changes so more people can understand if they are new to the field. Previous discussion can be found here - https://github.com/huggingface/transformers/issues/9078 - transformer: self-attention based deep learning model architecture. - self-attention: each element of the input finds out which other elements of the input they should attend to. - deep learning: machine learning algorithms which uses neural networks with several layers. Any improvements/corrections to the definitions are welcome. <!-- Congratulations! You've made it this far! You're not quite done yet though. Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution. Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change. Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost. --> <!-- Remove if not applicable --> Merging this pull request would resolve Issue https://github.com/huggingface/transformers/issues/9078 ## Before submitting - [x] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case). - [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section? - [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link to it if that's the case. - [ ] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/master/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/master/docs#writing-source-documentation). - [ ] Did you write any new necessary tests? ## Who can review? Anyone in the community is free to review the PR once the tests have passed. Feel free to tag members/contributors which may be interested in your PR. @sgugger <!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @ If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**. Please tag fewer than 3 people. 
Models: - albert, bert, xlm: @LysandreJik - blenderbot, bart, marian, pegasus, encoderdecoder, t5: @patrickvonplaten, @patil-suraj - longformer, reformer, transfoxl, xlnet: @patrickvonplaten - fsmt: @stas00 - funnel: @sgugger - gpt2: @patrickvonplaten, @LysandreJik - rag: @patrickvonplaten, @lhoestq - tensorflow: @jplu Library: - benchmarks: @patrickvonplaten - deepspeed: @stas00 - ray/raytune: @richardliaw, @amogkam - text generation: @patrickvonplaten - tokenizers: @n1t0, @LysandreJik - trainer: @sgugger - pipelines: @LysandreJik Documentation: @sgugger HF projects: - nlp datasets: [different repo](https://github.com/huggingface/nlp) - rust tokenizers: [different repo](https://github.com/huggingface/tokenizers) Examples: - maintained examples (not research project or legacy): @sgugger, @patil-suraj - research_projects/bert-loses-patience: @JetRunner - research_projects/distillation: @VictorSanh -->
02-28-2021 10:02:56
02-28-2021 10:02:56
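To make the self-attention definition proposed in the glossary PR above concrete, here is a tiny illustrative sketch (single head, no masking, NumPy only; not how the library implements it):

```python
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def self_attention(x, w_q, w_k, w_v):
    """Every position of x attends to every other position of the same x."""
    q, k, v = x @ w_q, x @ w_k, x @ w_v           # queries, keys, values
    scores = q @ k.T / np.sqrt(k.shape[-1])       # how strongly each element attends to the others
    return softmax(scores) @ v                    # attention-weighted sum of the values

rng = np.random.default_rng(0)
seq_len, d_model = 4, 8
x = rng.normal(size=(seq_len, d_model))
w_q, w_k, w_v = (rng.normal(size=(d_model, d_model)) for _ in range(3))
print(self_attention(x, w_q, w_k, w_v).shape)     # (4, 8)
```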
transformers
10,442
closed
Bug in Electra Example
The description for Electra (https://huggingface.co/google/electra-small-discriminator) contains the example code below. The last line fails; I think instead of predictions.tolist() it should be predictions.squeeze(), as predictions is 1xN. Also, the example doesn't seem to detect the corrupted tokens.

```
from transformers import ElectraForPreTraining, ElectraTokenizerFast
import torch

discriminator = ElectraForPreTraining.from_pretrained("google/electra-small-discriminator")
tokenizer = ElectraTokenizerFast.from_pretrained("google/electra-small-discriminator")

sentence = "The quick brown fox jumps over the lazy dog"
fake_sentence = "The quick brown fox fake over the lazy dog"

fake_tokens = tokenizer.tokenize(fake_sentence)
fake_inputs = tokenizer.encode(fake_sentence, return_tensors="pt")
discriminator_outputs = discriminator(fake_inputs)
predictions = torch.round((torch.sign(discriminator_outputs[0]) + 1) / 2)

[print("%7s" % token, end="") for token in fake_tokens]
[print("%7s" % int(prediction), end="") for prediction in predictions.tolist()]
```
02-28-2021 08:00:47
02-28-2021 08:00:47
Ah, good point! I'll edit it now, thanks for letting us know.<|||||>Just fixed it in [hf@cf81dc](https://huggingface.co/google/electra-small-discriminator/commit/cf81dc100ac08ff43eb688cb1e3e7d69a822f359), I added you as a co-author too.<|||||>Thanks @LysandreJik, out of interest did you get the same results as me - the model doesn't appear to have identified the fake token? I'll do some more investigation and perhaps post in the forum, but it seems that no matter what I do it always returns zeros. Also, the example really should demonstrate the model working :)
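For readers landing here, the fix amounts to flattening the 1xN prediction tensor before printing; a sketch of only the changed lines, reusing the variables from the example in the issue body:

```python
# Sketch of the corrected final lines: `predictions` has shape (1, sequence_length),
# so drop the batch dimension before iterating over the per-token outputs.
predictions = torch.round((torch.sign(discriminator_outputs[0]) + 1) / 2)

[print("%7s" % token, end="") for token in fake_tokens]
[print("%7s" % int(prediction), end="") for prediction in predictions.squeeze().tolist()]
```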
transformers
10,441
closed
TypeError: __init__() got an unexpected keyword argument 'model' in `run_seq2seq.py` example when using on our own files
## Environment info <!-- You can run the command `transformers-cli env` and copy-and-paste its output below. Don't forget to fill out the missing fields in that output! --> - `transformers` version: 4.3.3 - Platform: Linux-4.15.0-109-generic-x86_64-with-debian-buster-sid - Python version: 3.6.13 - PyTorch version (GPU?): 1.7.1 (True) - Tensorflow version (GPU?): not installed (NA) - Using GPU in script?: No - Using distributed or parallel set-up in script?: No ### Who can help <!-- Your issue will be replied to more quickly if you can figure out the right person to tag with @ If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**. Please tag fewer than 3 people. Models: - albert, bert, xlm: @LysandreJik - blenderbot, bart, marian, pegasus, encoderdecoder, t5: @patrickvonplaten, @patil-suraj - longformer, reformer, transfoxl, xlnet: @patrickvonplaten - fsmt: @stas00 - funnel: @sgugger - gpt2: @patrickvonplaten, @LysandreJik - rag: @patrickvonplaten, @lhoestq - tensorflow: @jplu Library: - benchmarks: @patrickvonplaten - deepspeed: @stas00 - ray/raytune: @richardliaw, @amogkam - text generation: @patrickvonplaten - tokenizers: @n1t0, @LysandreJik - trainer: @sgugger - pipelines: @LysandreJik Documentation: @sgugger HF projects: - nlp datasets: [different repo](https://github.com/huggingface/nlp) - rust tokenizers: [different repo](https://github.com/huggingface/tokenizers) Examples: - maintained examples (not research project or legacy): @sgugger, @patil-suraj - research_projects/bert-loses-patience: @JetRunner - research_projects/distillation: @VictorSanh --> @patil-suraj ## Information I am using `run_seq2seq.py` in `transformers/examples/seq2seq` The problem arises when using: * the official example scripts: when I run the following: ``` python run_seq2seq.py \ --model_name_or_path t5-small \ --do_train \ --do_eval \ --task summarization \ --train_file train.csv \ --validation_file test.csv \ --output_dir output \ --overwrite_output_dir \ --per_device_train_batch_size=4 \ --per_device_eval_batch_size=4 \ --predict_with_generate \ --max_train_samples 500 \ --max_val_samples 500 ``` I get the following error: ``` Traceback (most recent call last): File "run_seq2seq.py", line 645, in <module> main() File "run_seq2seq.py", line 518, in main pad_to_multiple_of=8 if training_args.fp16 else None, TypeError: __init__() got an unexpected keyword argument 'model' ``` The tasks I am working on is: * my own task or dataset: I take the examples provided in the README file for the custom CSV file. Specifically, I have two files `train.csv` and `test.csv` in the same directory as `run_seq2seq.py` with the following content: ``` text,summary "I'm sitting here in a boring room. It's just another rainy Sunday afternoon. I'm wasting my time I got nothing to do. I'm hanging around I'm waiting for you. But nothing ever happens. And I wonder","I'm sitting in a room where I'm waiting for something to happen" "I see trees so green, red roses too. I see them bloom for me and you. And I think to myself what a wonderful world. I see skies so blue and clouds so white. The bright blessed day, the dark sacred night. And I think to myself what a wonderful world.","I'm a gardener and I'm a big fan of flowers." "Christmas time is here. Happiness and cheer. Fun for all that children call. Their favorite time of the year. Snowflakes in the air. Carols everywhere. Olden times and ancient rhymes. Of love and dreams to share","It's that time of year again." 
``` ## To reproduce Steps to reproduce the behavior: 1. I copy and paste the file `run_seq2seq.py` located [here](https://github.com/huggingface/transformers/blob/master/examples/seq2seq/run_seq2seq.py) into a directory. 2. I create two files named `train.csv` and `test.csv` in the same directory the file `run_seq2seq.py` is located. 3. I run the command provided above. Here's the full terminal output: ``` 02/27/2021 21:10:36 - WARNING - __main__ - Process rank: -1, device: cuda:0, n_gpu: 4distributed training: False, 16-bits training: False 02/27/2021 21:10:36 - INFO - __main__ - Training/evaluation parameters Seq2SeqTrainingArguments(output_dir='output', overwrite_output_dir=True, do_train=True, do_eval=True, do_predict=False, evaluation_strategy=<EvaluationStrategy.NO: 'no'>, prediction_loss_only=False, per_device_train_batch_size=4, per_device_eval_batch_size=4, per_gpu_train_batch_size=None, per_gpu_eval_batch_size=None, gradient_accumulation_steps=1, eval_accumulation_steps=None, learning_rate=5e-05, weight_decay=0.0, adam_beta1=0.9, adam_beta2=0.999, adam_epsilon=1e-08, max_grad_norm=1.0, num_train_epochs=3.0, max_steps=-1, lr_scheduler_type=<SchedulerType.LINEAR: 'linear'>, warmup_steps=0, logging_dir='runs/Feb27_21-10-36_legendary1', logging_first_step=False, logging_steps=500, save_steps=500, save_total_limit=None, no_cuda=False, seed=42, fp16=False, fp16_opt_level='O1', fp16_backend='auto', local_rank=-1, tpu_num_cores=None, tpu_metrics_debug=False, debug=False, dataloader_drop_last=False, eval_steps=500, dataloader_num_workers=0, past_index=-1, run_name='output', disable_tqdm=False, remove_unused_columns=True, label_names=None, load_best_model_at_end=False, metric_for_best_model=None, greater_is_better=None, ignore_data_skip=False, sharded_ddp=False, deepspeed=None, label_smoothing_factor=0.0, adafactor=False, group_by_length=False, report_to=['tensorboard'], ddp_find_unused_parameters=None, dataloader_pin_memory=True, sortish_sampler=False, predict_with_generate=True) 02/27/2021 21:10:36 - WARNING - datasets.builder - Using custom data configuration default-40a1a8e44205ddce Downloading and preparing dataset csv/default (download: Unknown size, generated: Unknown size, post-processed: Unknown size, total: Unknown size) to /home/users/apouranb/.cache/huggingface/datasets/csv/default-40a1a8e44205ddce/0.0.0/965b6429be0fc05f975b608ce64e1fa941cc8fb4f30629b523d2390f3c0e1a93... Dataset csv downloaded and prepared to /home/users/apouranb/.cache/huggingface/datasets/csv/default-40a1a8e44205ddce/0.0.0/965b6429be0fc05f975b608ce64e1fa941cc8fb4f30629b523d2390f3c0e1a93. Subsequent calls will reuse this data. 
https://huggingface.co/t5-small/resolve/main/config.json not found in cache or force_download set to True, downloading to /home/users/apouranb/.cache/huggingface/transformers/tmpgh87jvjl Downloading: 100%|████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 1.20k/1.20k [00:00<00:00, 843kB/s] storing https://huggingface.co/t5-small/resolve/main/config.json in cache at /home/users/apouranb/.cache/huggingface/transformers/fe501e8fd6425b8ec93df37767fcce78ce626e34cc5edc859c662350cf712e41.406701565c0afd9899544c1cb8b93185a76f00b31e5ce7f6e18bbaef02241985 creating metadata file for /home/users/apouranb/.cache/huggingface/transformers/fe501e8fd6425b8ec93df37767fcce78ce626e34cc5edc859c662350cf712e41.406701565c0afd9899544c1cb8b93185a76f00b31e5ce7f6e18bbaef02241985 loading configuration file https://huggingface.co/t5-small/resolve/main/config.json from cache at /home/users/apouranb/.cache/huggingface/transformers/fe501e8fd6425b8ec93df37767fcce78ce626e34cc5edc859c662350cf712e41.406701565c0afd9899544c1cb8b93185a76f00b31e5ce7f6e18bbaef02241985 Model config T5Config { "architectures": [ "T5WithLMHeadModel" ], "d_ff": 2048, "d_kv": 64, "d_model": 512, "decoder_start_token_id": 0, "dropout_rate": 0.1, "eos_token_id": 1, "feed_forward_proj": "relu", "initializer_factor": 1.0, "is_encoder_decoder": true, "layer_norm_epsilon": 1e-06, "model_type": "t5", "n_positions": 512, "num_decoder_layers": 6, "num_heads": 8, "num_layers": 6, "output_past": true, "pad_token_id": 0, "relative_attention_num_buckets": 32, "task_specific_params": { "summarization": { "early_stopping": true, "length_penalty": 2.0, "max_length": 200, "min_length": 30, "no_repeat_ngram_size": 3, "num_beams": 4, "prefix": "summarize: " }, "translation_en_to_de": { "early_stopping": true, "max_length": 300, "num_beams": 4, "prefix": "translate English to German: " }, "translation_en_to_fr": { "early_stopping": true, "max_length": 300, "num_beams": 4, "prefix": "translate English to French: " }, "translation_en_to_ro": { "early_stopping": true, "max_length": 300, "num_beams": 4, "prefix": "translate English to Romanian: " } }, "transformers_version": "4.3.3", "use_cache": true, "vocab_size": 32128 } loading configuration file https://huggingface.co/t5-small/resolve/main/config.json from cache at /home/users/apouranb/.cache/huggingface/transformers/fe501e8fd6425b8ec93df37767fcce78ce626e34cc5edc859c662350cf712e41.406701565c0afd9899544c1cb8b93185a76f00b31e5ce7f6e18bbaef02241985 Model config T5Config { "architectures": [ "T5WithLMHeadModel" ], "d_ff": 2048, "d_kv": 64, "d_model": 512, "decoder_start_token_id": 0, "dropout_rate": 0.1, "eos_token_id": 1, "feed_forward_proj": "relu", "initializer_factor": 1.0, "is_encoder_decoder": true, "layer_norm_epsilon": 1e-06, "model_type": "t5", "n_positions": 512, "num_decoder_layers": 6, "num_heads": 8, "num_layers": 6, "output_past": true, "pad_token_id": 0, "relative_attention_num_buckets": 32, "task_specific_params": { "summarization": { "early_stopping": true, "length_penalty": 2.0, "max_length": 200, "min_length": 30, "no_repeat_ngram_size": 3, "num_beams": 4, "prefix": "summarize: " }, "translation_en_to_de": { "early_stopping": true, "max_length": 300, "num_beams": 4, "prefix": "translate English to German: " }, "translation_en_to_fr": { "early_stopping": true, "max_length": 300, "num_beams": 4, "prefix": "translate English to French: " }, "translation_en_to_ro": { "early_stopping": true, "max_length": 300, "num_beams": 4, "prefix": "translate 
English to Romanian: " } }, "transformers_version": "4.3.3", "use_cache": true, "vocab_size": 32128 } https://huggingface.co/t5-small/resolve/main/spiece.model not found in cache or force_download set to True, downloading to /home/users/apouranb/.cache/huggingface/transformers/tmpuwh13b51 Downloading: 100%|█████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 792k/792k [00:00<00:00, 2.15MB/s] storing https://huggingface.co/t5-small/resolve/main/spiece.model in cache at /home/users/apouranb/.cache/huggingface/transformers/65fc04e21f45f61430aea0c4fedffac16a4d20d78b8e6601d8d996ebefefecd2.3b69006860e7b5d0a63ffdddc01ddcd6b7c318a6f4fd793596552c741734c62d creating metadata file for /home/users/apouranb/.cache/huggingface/transformers/65fc04e21f45f61430aea0c4fedffac16a4d20d78b8e6601d8d996ebefefecd2.3b69006860e7b5d0a63ffdddc01ddcd6b7c318a6f4fd793596552c741734c62d https://huggingface.co/t5-small/resolve/main/tokenizer.json not found in cache or force_download set to True, downloading to /home/users/apouranb/.cache/huggingface/transformers/tmpt45yih6q Downloading: 100%|███████████████████████████████████████████████████████████████████████████████████████████████████████████████| 1.39M/1.39M [00:00<00:00, 3.50MB/s] storing https://huggingface.co/t5-small/resolve/main/tokenizer.json in cache at /home/users/apouranb/.cache/huggingface/transformers/06779097c78e12f47ef67ecb728810c2ae757ee0a9efe9390c6419783d99382d.8627f1bd5d270a9fd2e5a51c8bec3223896587cc3cfe13edeabb0992ab43c529 creating metadata file for /home/users/apouranb/.cache/huggingface/transformers/06779097c78e12f47ef67ecb728810c2ae757ee0a9efe9390c6419783d99382d.8627f1bd5d270a9fd2e5a51c8bec3223896587cc3cfe13edeabb0992ab43c529 loading file https://huggingface.co/t5-small/resolve/main/spiece.model from cache at /home/users/apouranb/.cache/huggingface/transformers/65fc04e21f45f61430aea0c4fedffac16a4d20d78b8e6601d8d996ebefefecd2.3b69006860e7b5d0a63ffdddc01ddcd6b7c318a6f4fd793596552c741734c62d loading file https://huggingface.co/t5-small/resolve/main/tokenizer.json from cache at /home/users/apouranb/.cache/huggingface/transformers/06779097c78e12f47ef67ecb728810c2ae757ee0a9efe9390c6419783d99382d.8627f1bd5d270a9fd2e5a51c8bec3223896587cc3cfe13edeabb0992ab43c529 https://huggingface.co/t5-small/resolve/main/pytorch_model.bin not found in cache or force_download set to True, downloading to /home/users/apouranb/.cache/huggingface/transformers/tmpqjragsda Downloading: 100%|█████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 242M/242M [00:03<00:00, 73.1MB/s] storing https://huggingface.co/t5-small/resolve/main/pytorch_model.bin in cache at /home/users/apouranb/.cache/huggingface/transformers/fee5a3a0ae379232608b6eed45d2d7a0d2966b9683728838412caccc41b4b0ed.ddacdc89ec88482db20c676f0861a336f3d0409f94748c209847b49529d73885 creating metadata file for /home/users/apouranb/.cache/huggingface/transformers/fee5a3a0ae379232608b6eed45d2d7a0d2966b9683728838412caccc41b4b0ed.ddacdc89ec88482db20c676f0861a336f3d0409f94748c209847b49529d73885 loading weights file https://huggingface.co/t5-small/resolve/main/pytorch_model.bin from cache at /home/users/apouranb/.cache/huggingface/transformers/fee5a3a0ae379232608b6eed45d2d7a0d2966b9683728838412caccc41b4b0ed.ddacdc89ec88482db20c676f0861a336f3d0409f94748c209847b49529d73885 All model checkpoint weights were used when initializing T5ForConditionalGeneration. 
All the weights of T5ForConditionalGeneration were initialized from the model checkpoint at t5-small. If your task is similar to the task the model of the checkpoint was trained on, you can already use T5ForConditionalGeneration for predictions without further training. 100%|██████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 1/1 [00:00<00:00, 161.24ba/s] 100%|██████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 1/1 [00:00<00:00, 383.81ba/s] Traceback (most recent call last): File "run_seq2seq.py", line 645, in <module> main() File "run_seq2seq.py", line 518, in main pad_to_multiple_of=8 if training_args.fp16 else None, TypeError: __init__() got an unexpected keyword argument 'model' ``` <!-- If you have code snippets, error messages, stack traces please provide them here as well. Important! Use code tags to correctly format your code. See https://help.github.com/en/github/writing-on-github/creating-and-highlighting-code-blocks#syntax-highlighting Do not use screenshots, as they are hard to read and (more importantly) don't allow others to copy-and-paste your code.--> ## Expected behavior <!-- A clear and concise description of what you would expect to happen. -->
02-28-2021 05:33:02
02-28-2021 05:33:02
I have the same exact problem. Even if you skip that part, it keeps happening in other parts of the code. It seems that file hasn't been updated with the last changes.<|||||>any solution for this? issue persists<|||||>This is because of the old version of `transformers`, upgrading to master should resolve this issue. Also always install transformers from source to use examples.<|||||>I had the same issue. Upgrading to master solved the issue. I have a somewhat related question: Currently, the metrics are logged at the end of the training. The output right now looks like: ``` ***** train metrics ***** epoch = 3.0 init_mem_cpu_alloc_delta = 7MB init_mem_cpu_peaked_delta = 0MB init_mem_gpu_alloc_delta = 230MB init_mem_gpu_peaked_delta = 0MB train_mem_cpu_alloc_delta = 0MB train_mem_cpu_peaked_delta = 1MB train_mem_gpu_alloc_delta = 696MB train_mem_gpu_peaked_delta = 4220MB train_runtime = 16.6234 train_samples = 100 train_samples_per_second = 2.346 02/28/2021 13:29:32 - INFO - __main__ - *** Evaluate *** ***** Running Evaluation ***** Num examples = 10 Batch size = 8 100%|██████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 2/2 [00:02<00:00, 1.29s/it] ***** eval metrics ***** epoch = 3.0 eval_gen_len = 61.9 eval_loss = 3.8083 eval_mem_cpu_alloc_delta = 1MB eval_mem_cpu_peaked_delta = 0MB eval_mem_gpu_alloc_delta = 0MB eval_mem_gpu_peaked_delta = 896MB eval_rouge1 = 11.4663 eval_rouge2 = 1.0712 eval_rougeL = 9.2587 eval_rougeLsum = 9.6266 eval_runtime = 4.7684 eval_samples = 10 eval_samples_per_second = 2.097 ``` I was wondering if there an easier way of getting metrics (say rouge score/loss) for each epoch so that we can see how the training is going and plot the loss? One solution that I could think of was writing a custom callback function with `on_epoch_end`. Just wondering if there's an easier solution? <|||||>There is, you could set the `evaluation_strategy` and `logging_strategy` argument to `epoch`, which will tell the trainer to evaluate and log after each epoch. If it doesn't feel free to createe another issue.
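Expressed in code, the per-epoch evaluation and logging suggested in the last reply looks roughly like this sketch (argument availability depends on the installed version):

```python
from transformers import Seq2SeqTrainingArguments

# Sketch only: evaluate and log at the end of every epoch so loss/ROUGE can be tracked over time.
training_args = Seq2SeqTrainingArguments(
    output_dir="output",
    num_train_epochs=3,
    per_device_train_batch_size=4,
    per_device_eval_batch_size=4,
    evaluation_strategy="epoch",
    logging_strategy="epoch",
    predict_with_generate=True,
)
```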
transformers
10,440
closed
Checkpoint refactoring for Multiple Models
Hi, thank you for providing an example @sgugger Linked to #10193, this PR refactors the checkpoint names in one private constant. A couple notes: - I refactored most of the modeling files, however I excluded the modeling_tf_*.py files for now. - The bare Distilbert foward pass has two add_code_sample_docstrings decorators, I wanted to check for confirmation that its redundant. - funnel, gpt and squeeze bert all have 2 checkpoint models for different tasks, so I left those alone. Fixes #10193 ## Before submitting - [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case). - [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section? - [x] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link to it if that's the case. - [ ] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/master/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/master/docs#writing-source-documentation). - [ ] Did you write any new necessary tests?
02-28-2021 03:50:07
02-28-2021 03:50:07
> Thanks a lot for the PR!
>
> We can do the TF models in another PR, I'm completely fine with that. Regarding your other comments:
>
> * funnel is a special case indeed, so it's fine to leave it as is for now.
> * for squeezebert, you can just use "squeezebert/squeezebert-uncased" everywhere
> * for gpt I didn't see two checkpoints, just one.
> * for distilbert, the duplicate `add_code_sample` decorator should be removed.
>
> One last thing, you added a `datasets` submodule in your PR that you should remove before we can merge it.

Oh, sorry about that. I'll remove the datasets submodule and clean up the rest. Also, GPT2 has 2 checkpoints: "gpt2", plus the following one used for sequence classification:

```
@add_code_sample_docstrings(
    tokenizer_class=_TOKENIZER_FOR_DOC,
    checkpoint="microsoft/dialogrpt",
    output_type=SequenceClassifierOutputWithPast,
    config_class=_CONFIG_FOR_DOC,
)
```<|||||>Ok for GPT2, you can leave this checkpoint and just refactor the traditional "gpt2" in the other places then.<|||||>Oh no, it looks like the rebase messed with GitHub and the diff. Do you think you could close this PR and open a fresh one on your branch? Also, for the next step #10424 contains an example of what I was envisioning for the "# Copied from", if it still interests you to work on this part in a second stage. The PR needs to be merged before you work on it because there are some fixes in our check_copies script inside.<|||||>Sure, no problem. I'll include the TF modeling files as well in the next pull request.
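To visualise the pattern the PR introduces, here is a toy sketch; the decorator below is a simplified stand-in for the real `add_code_sample_docstrings`, and only the constant-hoisting idea is the point:

```python
# Toy stand-in, not the real transformers decorator: it only illustrates the refactor, i.e.
# hoisting the checkpoint string into one private constant that every decorator reuses.
_CHECKPOINT_FOR_DOC = "gpt2"   # single place to update per modeling file

def add_code_sample_docstrings(**kwargs):
    def wrap(fn):
        fn.__doc__ = (fn.__doc__ or "") + f"\n    Example checkpoint: {kwargs['checkpoint']}"
        return fn
    return wrap

@add_code_sample_docstrings(checkpoint=_CHECKPOINT_FOR_DOC)
def forward(input_ids=None):
    """Bare forward pass."""

print(forward.__doc__)
```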
transformers
10,439
closed
Option to output "test_preds_seq2seq.txt" text file with each checkpoint generated in "run_seq2seq.py"
I had previously raised an issue in a mistaken belief that this functionality _used_ to exist in Transformers. Current behavior: The "test_preds_seq2seq.txt" file is created once, at the end of the last epoch. For many Seq2Seq tasks, at least for mine, it would be very useful to get these predictions at each checkpoint, to see how the model changes over time as it is trained. thanks
02-27-2021 23:41:32
02-27-2021 23:41:32
This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread. Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored.
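This is not built into the script, but one hedged way to approximate it is a custom callback. The sketch below assumes a `Seq2SeqTrainer` created with `predict_with_generate=True` and a tokenizer passed to it, and that checkpoints use the Trainer's default folder layout:

```python
from transformers import TrainerCallback

class SavePredictionsCallback(TrainerCallback):
    """Illustrative sketch: dump generated test predictions every time a checkpoint is saved."""

    def __init__(self, trainer, test_dataset):
        self.trainer = trainer            # a Seq2SeqTrainer built with predict_with_generate=True
        self.test_dataset = test_dataset

    def on_save(self, args, state, control, **kwargs):
        preds = self.trainer.predict(self.test_dataset).predictions
        texts = self.trainer.tokenizer.batch_decode(preds, skip_special_tokens=True)
        out_file = f"{args.output_dir}/checkpoint-{state.global_step}/test_preds_seq2seq.txt"
        with open(out_file, "w") as f:
            f.write("\n".join(texts))

# usage sketch: trainer.add_callback(SavePredictionsCallback(trainer, test_dataset))
```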
transformers
10,438
closed
Setting max_length for model training produces error
## Environment info - `transformers` version: 4.3.3 - Platform: Linux-4.19.112+-x86_64-with-Ubuntu-18.04-bionic - Python version: 3.7.10 - PyTorch version (GPU?): 1.7.1+cu101 (False) - Tensorflow version (GPU?): 2.4.1 (False) - Using GPU in script?: True/False - Using distributed or parallel set-up in script?: False ### Who can help Models: - tensorflow: @jplu - tokenizers: @n1t0, @LysandreJik - trainer: @sgugger ## Information Model I am using (Bert, XLNet ...): RoBERTa (Large) The problem arises when using: * [ ] the official example scripts: (give details below) * [x] my own modified scripts: (give details below) The tasks I am working on is: * [ ] an official GLUE/SQUaD task: (give the name) * [x] my own task or dataset: (give details below) ## To reproduce **Error on GPU:-** ```py Some weights of the model checkpoint at roberta-large were not used when initializing RobertaForSequenceClassification: ['lm_head.bias', 'lm_head.dense.weight', 'lm_head.dense.bias', 'lm_head.layer_norm.weight', 'lm_head.layer_norm.bias', 'lm_head.decoder.weight', 'roberta.pooler.dense.weight', 'roberta.pooler.dense.bias'] - This IS expected if you are initializing RobertaForSequenceClassification from the checkpoint of a model trained on another task or with another architecture (e.g. initializing a BertForSequenceClassification model from a BertForPreTraining model). - This IS NOT expected if you are initializing RobertaForSequenceClassification from the checkpoint of a model that you expect to be exactly identical (initializing a BertForSequenceClassification model from a BertForSequenceClassification model). Some weights of RobertaForSequenceClassification were not initialized from the model checkpoint at roberta-large and are newly initialized: ['classifier.dense.weight', 'classifier.dense.bias', 'classifier.out_proj.weight', 'classifier.out_proj.bias'] You should probably TRAIN this model on a down-stream task to be able to use it for predictions and inference. --------------------------------------------------------------------------- RuntimeError Traceback (most recent call last) <ipython-input-25-bb6a14612ca7> in <module>() 46 ) 47 ---> 48 train_results = trainer.train() 17 frames /usr/local/lib/python3.7/dist-packages/transformers/trainer.py in train(self, resume_from_checkpoint, trial, **kwargs) 938 tr_loss += self.training_step(model, inputs) 939 else: --> 940 tr_loss += self.training_step(model, inputs) 941 self._total_flos += self.floating_point_ops(inputs) 942 /usr/local/lib/python3.7/dist-packages/transformers/trainer.py in training_step(self, model, inputs) 1300 if self.use_amp: 1301 with autocast(): -> 1302 loss = self.compute_loss(model, inputs) 1303 else: 1304 loss = self.compute_loss(model, inputs) /usr/local/lib/python3.7/dist-packages/transformers/trainer.py in compute_loss(self, model, inputs, return_outputs) 1332 else: 1333 labels = None -> 1334 outputs = model(**inputs) 1335 # Save past state if it exists 1336 # TODO: this needs to be fixed and made cleaner later. 
/usr/local/lib/python3.7/dist-packages/torch/nn/modules/module.py in _call_impl(self, *input, **kwargs) 725 result = self._slow_forward(*input, **kwargs) 726 else: --> 727 result = self.forward(*input, **kwargs) 728 for hook in itertools.chain( 729 _global_forward_hooks.values(), /usr/local/lib/python3.7/dist-packages/transformers/models/roberta/modeling_roberta.py in forward(self, input_ids, attention_mask, token_type_ids, position_ids, head_mask, inputs_embeds, labels, output_attentions, output_hidden_states, return_dict) 1153 output_attentions=output_attentions, 1154 output_hidden_states=output_hidden_states, -> 1155 return_dict=return_dict, 1156 ) 1157 sequence_output = outputs[0] /usr/local/lib/python3.7/dist-packages/torch/nn/modules/module.py in _call_impl(self, *input, **kwargs) 725 result = self._slow_forward(*input, **kwargs) 726 else: --> 727 result = self.forward(*input, **kwargs) 728 for hook in itertools.chain( 729 _global_forward_hooks.values(), /usr/local/lib/python3.7/dist-packages/transformers/models/roberta/modeling_roberta.py in forward(self, input_ids, attention_mask, token_type_ids, position_ids, head_mask, inputs_embeds, encoder_hidden_states, encoder_attention_mask, past_key_values, use_cache, output_attentions, output_hidden_states, return_dict) 815 output_attentions=output_attentions, 816 output_hidden_states=output_hidden_states, --> 817 return_dict=return_dict, 818 ) 819 sequence_output = encoder_outputs[0] /usr/local/lib/python3.7/dist-packages/torch/nn/modules/module.py in _call_impl(self, *input, **kwargs) 725 result = self._slow_forward(*input, **kwargs) 726 else: --> 727 result = self.forward(*input, **kwargs) 728 for hook in itertools.chain( 729 _global_forward_hooks.values(), /usr/local/lib/python3.7/dist-packages/transformers/models/roberta/modeling_roberta.py in forward(self, hidden_states, attention_mask, head_mask, encoder_hidden_states, encoder_attention_mask, past_key_values, use_cache, output_attentions, output_hidden_states, return_dict) 512 encoder_attention_mask, 513 past_key_value, --> 514 output_attentions, 515 ) 516 /usr/local/lib/python3.7/dist-packages/torch/nn/modules/module.py in _call_impl(self, *input, **kwargs) 725 result = self._slow_forward(*input, **kwargs) 726 else: --> 727 result = self.forward(*input, **kwargs) 728 for hook in itertools.chain( 729 _global_forward_hooks.values(), /usr/local/lib/python3.7/dist-packages/transformers/models/roberta/modeling_roberta.py in forward(self, hidden_states, attention_mask, head_mask, encoder_hidden_states, encoder_attention_mask, past_key_value, output_attentions) 397 head_mask, 398 output_attentions=output_attentions, --> 399 past_key_value=self_attn_past_key_value, 400 ) 401 attention_output = self_attention_outputs[0] /usr/local/lib/python3.7/dist-packages/torch/nn/modules/module.py in _call_impl(self, *input, **kwargs) 725 result = self._slow_forward(*input, **kwargs) 726 else: --> 727 result = self.forward(*input, **kwargs) 728 for hook in itertools.chain( 729 _global_forward_hooks.values(), /usr/local/lib/python3.7/dist-packages/transformers/models/roberta/modeling_roberta.py in forward(self, hidden_states, attention_mask, head_mask, encoder_hidden_states, encoder_attention_mask, past_key_value, output_attentions) 327 encoder_attention_mask, 328 past_key_value, --> 329 output_attentions, 330 ) 331 attention_output = self.output(self_outputs[0], hidden_states) /usr/local/lib/python3.7/dist-packages/torch/nn/modules/module.py in _call_impl(self, *input, **kwargs) 725 result = 
self._slow_forward(*input, **kwargs) 726 else: --> 727 result = self.forward(*input, **kwargs) 728 for hook in itertools.chain( 729 _global_forward_hooks.values(), /usr/local/lib/python3.7/dist-packages/transformers/models/roberta/modeling_roberta.py in forward(self, hidden_states, attention_mask, head_mask, encoder_hidden_states, encoder_attention_mask, past_key_value, output_attentions) 184 output_attentions=False, 185 ): --> 186 mixed_query_layer = self.query(hidden_states) 187 188 # If this is instantiated as a cross-attention module, the keys /usr/local/lib/python3.7/dist-packages/torch/nn/modules/module.py in _call_impl(self, *input, **kwargs) 725 result = self._slow_forward(*input, **kwargs) 726 else: --> 727 result = self.forward(*input, **kwargs) 728 for hook in itertools.chain( 729 _global_forward_hooks.values(), /usr/local/lib/python3.7/dist-packages/torch/nn/modules/linear.py in forward(self, input) 91 92 def forward(self, input: Tensor) -> Tensor: ---> 93 return F.linear(input, self.weight, self.bias) 94 95 def extra_repr(self) -> str: /usr/local/lib/python3.7/dist-packages/torch/nn/functional.py in linear(input, weight, bias) 1690 ret = torch.addmm(bias, input, weight.t()) 1691 else: -> 1692 output = input.matmul(weight.t()) 1693 if bias is not None: 1694 output += bias RuntimeError: CUDA error: CUBLAS_STATUS_ALLOC_FAILED when calling `cublasCreate(handle)` ``` **Error on CPU:-** ```py Downloading: 100% 482/482 [00:00<00:00, 846B/s] Downloading: 100% 1.43G/1.43G [00:51<00:00, 27.7MB/s] Some weights of the model checkpoint at roberta-large were not used when initializing RobertaForSequenceClassification: ['lm_head.bias', 'lm_head.dense.weight', 'lm_head.dense.bias', 'lm_head.layer_norm.weight', 'lm_head.layer_norm.bias', 'lm_head.decoder.weight', 'roberta.pooler.dense.weight', 'roberta.pooler.dense.bias'] - This IS expected if you are initializing RobertaForSequenceClassification from the checkpoint of a model trained on another task or with another architecture (e.g. initializing a BertForSequenceClassification model from a BertForPreTraining model). - This IS NOT expected if you are initializing RobertaForSequenceClassification from the checkpoint of a model that you expect to be exactly identical (initializing a BertForSequenceClassification model from a BertForSequenceClassification model). Some weights of RobertaForSequenceClassification were not initialized from the model checkpoint at roberta-large and are newly initialized: ['classifier.dense.weight', 'classifier.dense.bias', 'classifier.out_proj.weight', 'classifier.out_proj.bias'] You should probably TRAIN this model on a down-stream task to be able to use it for predictions and inference. 
--------------------------------------------------------------------------- IndexError Traceback (most recent call last) <ipython-input-26-6888fbac6ba6> in <module>() 46 ) 47 ---> 48 train_results = trainer.train() 11 frames /usr/local/lib/python3.7/dist-packages/transformers/trainer.py in train(self, resume_from_checkpoint, trial, **kwargs) 938 tr_loss += self.training_step(model, inputs) 939 else: --> 940 tr_loss += self.training_step(model, inputs) 941 self._total_flos += self.floating_point_ops(inputs) 942 /usr/local/lib/python3.7/dist-packages/transformers/trainer.py in training_step(self, model, inputs) 1302 loss = self.compute_loss(model, inputs) 1303 else: -> 1304 loss = self.compute_loss(model, inputs) 1305 1306 if self.args.n_gpu > 1: /usr/local/lib/python3.7/dist-packages/transformers/trainer.py in compute_loss(self, model, inputs, return_outputs) 1332 else: 1333 labels = None -> 1334 outputs = model(**inputs) 1335 # Save past state if it exists 1336 # TODO: this needs to be fixed and made cleaner later. /usr/local/lib/python3.7/dist-packages/torch/nn/modules/module.py in _call_impl(self, *input, **kwargs) 725 result = self._slow_forward(*input, **kwargs) 726 else: --> 727 result = self.forward(*input, **kwargs) 728 for hook in itertools.chain( 729 _global_forward_hooks.values(), /usr/local/lib/python3.7/dist-packages/transformers/models/roberta/modeling_roberta.py in forward(self, input_ids, attention_mask, token_type_ids, position_ids, head_mask, inputs_embeds, labels, output_attentions, output_hidden_states, return_dict) 1153 output_attentions=output_attentions, 1154 output_hidden_states=output_hidden_states, -> 1155 return_dict=return_dict, 1156 ) 1157 sequence_output = outputs[0] /usr/local/lib/python3.7/dist-packages/torch/nn/modules/module.py in _call_impl(self, *input, **kwargs) 725 result = self._slow_forward(*input, **kwargs) 726 else: --> 727 result = self.forward(*input, **kwargs) 728 for hook in itertools.chain( 729 _global_forward_hooks.values(), /usr/local/lib/python3.7/dist-packages/transformers/models/roberta/modeling_roberta.py in forward(self, input_ids, attention_mask, token_type_ids, position_ids, head_mask, inputs_embeds, encoder_hidden_states, encoder_attention_mask, past_key_values, use_cache, output_attentions, output_hidden_states, return_dict) 803 token_type_ids=token_type_ids, 804 inputs_embeds=inputs_embeds, --> 805 past_key_values_length=past_key_values_length, 806 ) 807 encoder_outputs = self.encoder( /usr/local/lib/python3.7/dist-packages/torch/nn/modules/module.py in _call_impl(self, *input, **kwargs) 725 result = self._slow_forward(*input, **kwargs) 726 else: --> 727 result = self.forward(*input, **kwargs) 728 for hook in itertools.chain( 729 _global_forward_hooks.values(), /usr/local/lib/python3.7/dist-packages/transformers/models/roberta/modeling_roberta.py in forward(self, input_ids, token_type_ids, position_ids, inputs_embeds, past_key_values_length) 119 embeddings = inputs_embeds + token_type_embeddings 120 if self.position_embedding_type == "absolute": --> 121 position_embeddings = self.position_embeddings(position_ids) 122 embeddings += position_embeddings 123 embeddings = self.LayerNorm(embeddings) /usr/local/lib/python3.7/dist-packages/torch/nn/modules/module.py in _call_impl(self, *input, **kwargs) 725 result = self._slow_forward(*input, **kwargs) 726 else: --> 727 result = self.forward(*input, **kwargs) 728 for hook in itertools.chain( 729 _global_forward_hooks.values(), 
/usr/local/lib/python3.7/dist-packages/torch/nn/modules/sparse.py in forward(self, input) 124 return F.embedding( 125 input, self.weight, self.padding_idx, self.max_norm, --> 126 self.norm_type, self.scale_grad_by_freq, self.sparse) 127 128 def extra_repr(self) -> str: /usr/local/lib/python3.7/dist-packages/torch/nn/functional.py in embedding(input, weight, padding_idx, max_norm, norm_type, scale_grad_by_freq, sparse) 1850 # remove once script supports set_grad_enabled 1851 _no_grad_embedding_renorm_(weight, input, max_norm, norm_type) -> 1852 return torch.embedding(weight, input, padding_idx, scale_grad_by_freq, sparse) 1853 1854 IndexError: index out of range in self ``` This was working fine actually until I added the `max_length` argument:- ```py train_encodings = tokenizer(train_text, truncation=True, padding=True, max_length=4072) val_encodings = tokenizer(val_text, truncation=True, padding=True, max_length=4072) ``` The reason for adding was in the inference stage it was producing an error about the sequence being too long. Figuring I would be inferencing on sequences larger in the test data (I have confirmed it) I tried this but doesn't work. Any Idea how to solve this?
02-27-2021 22:27:22
02-27-2021 22:27:22
The RoBERTa model takes a maximum length of 512 tokens, and you are giving it inputs padded (or truncated) to a length of 4072. This is why you get this error.<|||||>Hmm... well, I was able to train and infer with `roberta-base` without any errors (though the output was very bad). It seems that the only way to proceed is to use Longformer. Do you reckon it's easy to swap out RoBERTa and use Longformer, or is there some extra step? Lastly, I would have preferred that this query be answered in the forum rather than coming to GitHub for help :(
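Two hedged options following from the answer above: truncate to RoBERTa's 512-token limit, or switch to a long-context checkpoint. A minimal sketch (`train_text` is the list of raw texts from the issue; treat the Longformer swap as an assumption to verify for your task):

```python
from transformers import AutoTokenizer, AutoModelForSequenceClassification

# Option 1: stay with RoBERTa and respect its 512-token position embeddings.
tok = AutoTokenizer.from_pretrained("roberta-large")
enc = tok(train_text, truncation=True, padding=True, max_length=512)

# Option 2: swap in a long-context model; Longformer accepts sequences up to 4096 tokens.
long_tok = AutoTokenizer.from_pretrained("allenai/longformer-base-4096")
long_model = AutoModelForSequenceClassification.from_pretrained(
    "allenai/longformer-base-4096", num_labels=2
)
long_enc = long_tok(train_text, truncation=True, padding=True, max_length=4096)
```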
transformers
10,437
closed
[Trainer] add --max_train_samples --max_val_samples --max_test_samples
As we were planning to add `--max_train_samples --max_val_samples --max_test_samples` to all examples https://github.com/huggingface/transformers/issues/10423, I thought is there any reason why we don't expand the Trainer to handle that? It surely would be useful to be able to truncate the dataset at the point of Trainer to enable quick testing. Another plus is that the metrics can then automatically include the actual number of samples run, rather than how it is done at the moment in examples. That way this functionality would be built-in and examples will get it for free. TODO: 1. [ ] port `--max_train_samples --max_val_samples --max_test_samples` to Trainer and remove the then unneeded code in `run_seq2seq.py` 2. [ ] extend metrics to report the number of samples as it's done now in: https://github.com/huggingface/transformers/blob/aca6288ff42cebded5421020f0ff088adeb446dd/examples/seq2seq/run_seq2seq.py#L590 so that all scripts automatically get this metric reported. Most likely it should be done here: https://github.com/huggingface/transformers/blob/aca6288ff42cebded5421020f0ff088adeb446dd/src/transformers/trainer_utils.py#L224 @sgugger
02-27-2021 17:23:58
02-27-2021 17:23:58
Yes, that would be a nice refactor in `Trainer`! I believe this can be done when we create the dataloaders, to keep the original datasets untouched.<|||||>@bhadreshpsavani, would you like to try this slightly more complex task? Step 1 is to take `run_seq2seq.py` and move the functionality that handles `--max_train_samples --max_val_samples --max_test_samples` (args and logic) to `training_args.py` and `trainer.py` correspondingly. And to ensure it works the same way. Please see @sgugger's note above to where to move the logic to. Step 2 - no step, every other script should just work with these now-Trainer-level cl args. and then later it'd be great to have the metrics updated with the actual number of samples run, like it's done manually right now in `run_seq2seq.py` - I added the full details to the OP. <|||||>Ya @stas00, I can do that<|||||>Awesome! Please don't hesitate to ask question if you run into any uncertainties. Thank you!<|||||>I agree with the proposed solution here. But we pre-process the datasets in scripts before passing them to the `Trainer`. Now if I just want to use say a 100 validation examples, the script would unnecessary process the whole dataset and then `Trainer` will drop the extra examples.<|||||>Hi @patil-suraj, You are right about that I was thinking to add the below code [here](https://github.com/huggingface/transformers/blob/master/src/transformers/trainer.py#L544) ```python if data_args.max_train_samples is not None: train_dataset = train_dataset.select(range(data_args.max_train_samples)) ``` But ya, it will select sample from processed dataset only<|||||>Ah, in that case we sadly can't really add this in the `Trainer` as it would be inefficient. I was also thinking more and having the functionality in Trainer will require us to support all kinds of datasets (even iterable datasets) and it's going to be very painful, so I think we should just copy the code in all the scripts.<|||||>Hi @sgugger, Shall we add this arguments `--max_train_samples --max_val_samples --max_test_samples` and code accordingly in all the scripts like implemented in `run_seq2seq.py`?<|||||>Yes, I think it's the best solution.<|||||>Hi @stas00, I was going through the code of `run_seq2seq` and trying to make changes in other scripts I came across [`result={}`](https://github.com/huggingface/transformers/blob/master/examples/seq2seq/run_seq2seq.py#L597). We are not storing anything inside this dictionary and we are returning it at the end as an empty dictionary. Is it necessary? In other scripts like `run_qa.py` we are using results instead of metrics in [eval ](https://github.com/huggingface/transformers/blob/master/examples/question-answering/run_qa.py#L495) and test section. Should we unify this behaviour in the scripts? I mean either use metrics like in `run_seq2seq` or use results like other scripts I also want to ask that in many scripts we are only doing train and eval things, Not test/predict things. Should we also create a separate issue for that? we might not be adding `--max_test_samples` since we are not doing testing in the script?<|||||>The test thing should definitely have its separate issue (and `--max_test_samples` can be added when the test/predict is added for those scripts).<|||||>Good analysis, @bhadreshpsavani! > I was going through the code of `run_seq2seq` and trying to make changes in other scripts > I came across [`result={}`](https://github.com/huggingface/transformers/blob/master/examples/seq2seq/run_seq2seq.py#L597). 
We are not storing anything inside this dictionary and we are returning it at the end as an empty dictionary. Is it necessary? That was invalid porting of the original. As you can see the original aggregated all the metrics and returned them: https://github.com/huggingface/transformers/blob/b013842244df7be96b8cc841491bd1e35e475e36/examples/legacy/seq2seq/finetune_trainer.py#L299 > In other scripts like `run_qa.py` we are using results instead of metrics in the [eval ](https://github.com/huggingface/transformers/blob/master/examples/question-answering/run_qa.py#L495) and test sections. Should we unify this behaviour in the scripts? I mean either use metrics like in `run_seq2seq` or use results like the other scripts. I think the metrics returned from `main` were mainly used for testing. But since we save all the metrics to disk, this isn't necessarily needed as the data is already accessible - changing this may impact some tests, which would need to be adjusted to use the metric dumps. Alternatively, we could have the trainer also store all the metrics not only on disk but internally, and so the last command of `main` in all scripts could be: ``` return trainer.metrics() ``` @sgugger, what do you think - should we just not return anything from `main` in all example scripts, or always return all metrics and then tweak the trainer to store all the metrics internally and have a method to return them? > I also want to ask that in many scripts we are only doing train and eval things, not test/predict things. Should we also create a separate issue for that? We might not be adding `--max_test_samples` since we are not doing testing in those scripts? Another good catch. Probably for now just skip `--max_test_samples` in those scripts, but meanwhile I raised your question here: https://github.com/huggingface/transformers/issues/10482<|||||>> The test thing should definitely have its separate issue (and `--max_test_samples` can be added when the test/predict is added for those scripts). Filed: https://github.com/huggingface/transformers/issues/10482 It might be easier to sort out test/predict first then, as it'd make the copy-n-paste of all 3 cl arg flags easier. But either way works. <|||||>The metrics returned are mainly for the tests AFAIK, so we can remove that behavior if the tests are all adapted to load the file where the metrics are stored.<|||||>OK, let's sync everything then to remove the inconsistent return metrics. Just please be aware that the example tests `examples/test_examples.py` will need to be adapted, e.g. currently they rely on the return value from `main`: https://github.com/huggingface/transformers/blob/b013842244df7be96b8cc841491bd1e35e475e36/examples/test_examples.py#L100-L102 So instead we should write a wrapper to load the metrics from the filesystem and test that.<|||||>Just to make sure my mentioning of a wrapper wasn't ambiguous: For models and examples we are trying to be as explicit as possible to help the users understand what is going on in the code - so we avoid refactoring and we duplicate code where it is needed. Unless we can make something a functional method in Trainer and then all the noise can be abstracted away, especially for code that's really just formatting. For tests it's software engineering as normal: refactoring is important as it helps minimize hard-to-maintain code and avoid errors. 
So there let's not duplicate any code like reading the json file from the filesystem.<|||||>Hi @stas00, I could not figure out the code or implementation for a wrapper for loading all metrics for testing in `test_examples.py`. We are storing metrics in the file system based on the `output_dir` argument, which is accessible to the trainer object. I don't know how to access the trainer object for an individual script in `test_examples.py`. In the trainer, we can write code for loading the metrics, but I couldn't figure out how to access the trainer in `test_examples.py`. Another thing: if we use `all_metrics={}` to store all the metrics of train, test, and validation, we can save it once as `all_results.json` like the [legacy ](https://github.com/huggingface/transformers/blob/b013842244df7be96b8cc841491bd1e35e475e36/examples/legacy/seq2seq/finetune_trainer.py#L356) code, right? Sorry to keep asking multiple questions. Once these things are clear, the implementation and testing will not take much time<|||||>> Hi @stas00, > I could not figure out the code or implementation for a wrapper for loading all metrics for testing in `test_examples.py`. We are storing metrics in the file system based on the `output_dir` argument, which is accessible to the trainer object. I don't know how to access the trainer object for an individual script in `test_examples.py`. You have the `output_dir` `tmp_dir` here: https://github.com/huggingface/transformers/blob/805c5200dc41aa7ca8dbb851688223df8627b411/examples/test_examples.py#L78 so you just load the `f"{tmp_dir}/all_results.json"` file right after this line: https://github.com/huggingface/transformers/blob/805c5200dc41aa7ca8dbb851688223df8627b411/examples/test_examples.py#L101 That's it - You have the metrics to test on the following line ;) > Another thing: if we use `all_metrics={}` to store all the metrics of train, test, and validation, we can save it once as `all_results.json` like the [legacy ](https://github.com/huggingface/transformers/blob/b013842244df7be96b8cc841491bd1e35e475e36/examples/legacy/seq2seq/finetune_trainer.py#L356) code, right? I already changed the code to save `all_results.json` auto-magically in `trainer.save_metrics` - make sure you rebased your branch https://github.com/huggingface/transformers/blob/805c5200dc41aa7ca8dbb851688223df8627b411/src/transformers/trainer_pt_utils.py#L651-L661 > Sorry to keep asking multiple questions. Once these things are clear, the implementation and testing will not take much time On the contrary, please don't hesitate to ask any questions. It takes quite some time to find one's way in this complex, massive code base. <|||||>Hi @stas00, The two lines below in [run_ner.py](https://github.com/huggingface/transformers/blob/master/examples/token-classification/run_ner.py#L421) don't seem very meaningful, since `metrics` does not represent any details from test/prediction. ```python trainer.log_metrics("test", metrics) trainer.save_metrics("test", metrics) ```<|||||>Why do you suggest it's not meaningful? `metrics` gets set here: https://github.com/huggingface/transformers/blob/6290169eb3391d72d9a08cab5c54a54b73a87463/examples/token-classification/run_ner.py#L412 <|||||>oooh, I didn't notice it! I thought it was taking the earlier `metrics`. Thanks <|||||>Hello @stas00 and @sgugger, I have made the changes adding the three arguments to the PyTorch-based scripts. It's working as expected. I also modified `test_examples.py` accordingly. For the TensorFlow-based scripts, I am facing issues while running the scripts even in Colab directly from the master branch without any changes. 
I created an [issue](https://github.com/huggingface/transformers/issues/10541) for the same. We have four run_tf_*.py files: 1. run_tf_multiple_choice.py (Reported Issue) 2. run_tf_squad.py (Reported Issue) 3. run_tf_glue.py (Got AttributeError: 'TFTrainer' object has no attribute 'log_metrics') 4. run_tf_text_classification.py Based on the error in the third file, `trainer.py` is only for PyTorch-based models, and `trainer_tf.py` is for TF-based models. In that case we need to write `save_metrics()` and `log_metrics()` for `trainer_tf.py`, right? In the last pull request, I could not test the changes for the TF scripts, but I will fix that mistake in this PR. Do we need to add tests for these TensorFlow files? Currently, we only have PyTorch-based scripts in `test_examples.py`. <|||||>Please don't touch the TF examples as they have not been cleaned up and will change in the near future. And yes, none of the TF examples are currently tested.
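As a reference for the metrics-loading wrapper discussed earlier in this thread, here is a minimal sketch (the helper name and the metric key in the usage comment are assumptions) of reading the saved `all_results.json` inside `test_examples.py`:

```python
import json
import os


def get_results(output_dir):
    """Load the metrics the Trainer saved to disk (all_results.json) for a finished run."""
    path = os.path.join(output_dir, "all_results.json")
    with open(path) as f:
        return json.load(f)


# usage sketch inside a test, after the example script's main() has run in tmp_dir:
#   result = get_results(tmp_dir)
#   self.assertGreaterEqual(result["eval_accuracy"], 0.75)
```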
transformers
10,436
closed
updated logging and saving metrics
# What does this PR do? I have updated redundant code for saving and logging metrics in the example scripts <!-- Congratulations! You've made it this far! You're not quite done yet though. Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution. Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change. Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost. --> <!-- Remove if not applicable --> Fixes #10337 ## Before submitting - [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case). - [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section? - [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link to it if that's the case. - [x] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/master/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/master/docs#writing-source-documentation). - [x] Did you write any new necessary tests? ## Who can review? @stas00 @sgugger <!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @ If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**. Please tag fewer than 3 people. Models: - albert, bert, xlm: @LysandreJik - blenderbot, bart, marian, pegasus, encoderdecoder, t5: @patrickvonplaten, @patil-suraj - longformer, reformer, transfoxl, xlnet: @patrickvonplaten - fsmt: @stas00 - funnel: @sgugger - gpt2: @patrickvonplaten, @LysandreJik - rag: @patrickvonplaten, @lhoestq - tensorflow: @jplu Library: - benchmarks: @patrickvonplaten - deepspeed: @stas00 - ray/raytune: @richardliaw, @amogkam - text generation: @patrickvonplaten - tokenizers: @n1t0, @LysandreJik - trainer: @sgugger - pipelines: @LysandreJik Documentation: @sgugger HF projects: - nlp datasets: [different repo](https://github.com/huggingface/nlp) - rust tokenizers: [different repo](https://github.com/huggingface/tokenizers) Examples: - maintained examples (not research project or legacy): @sgugger, @patil-suraj - research_projects/bert-loses-patience: @JetRunner - research_projects/distillation: @VictorSanh -->
02-27-2021 16:47:46
02-27-2021 16:47:46
Please run `make style` and commit to appease the `check_code_quality` CI job.
transformers
10,435
closed
Confused about the time of forward
## Environment info <!-- You can run the command `transformers-cli env` and copy-and-paste its output below. Don't forget to fill out the missing fields in that output! --> - `transformers` version: 3.0.0 - Platform: Ubuntu 16.04 - Python version: 3.7 - PyTorch version (GPU?): 1.2.0 (GPU) - Tensorflow version (GPU?): none - Using GPU in script?: none - Using distributed or parallel set-up in script?: No ### Who can help @LysandreJik ## Information Model I am using: Bert, specifically roberta_chinese_clue_tiny The problem arises when using: BertModel.from_pretrained The task I am working on is: just a forward pass Question: Why is the time of the same forward pass different? How can I make it the same? ## To reproduce Steps to reproduce the behavior: ``` from transformers import BertTokenizer, BertModel import time tokenizer = BertTokenizer.from_pretrained("clue/roberta_chinese_clue_tiny") model = BertModel.from_pretrained("clue/roberta_chinese_clue_tiny") inputs = tokenizer("testtest", return_tensors="pt") time_start=time.time() outputs = model(**inputs) time_end=time.time() time_start2=time.time() outputs = model(**inputs) time_end2=time.time() print('totally cost',time_end-time_start) print('totally cost2',time_end2-time_start2) ``` ## Expected behavior ``` totally cost 0.2720155715942383 totally cost2 0.007731199264526367 ```
02-27-2021 13:13:51
02-27-2021 13:13:51
I assume that that's how Python works: if you run the same thing again, the result will be cached. If you had provided different inputs, then the time would be the same. <|||||>> I assume that that's how Python works: if you run the same thing again, the result will be cached. If you had provided different inputs, then the time would be the same. Thanks for the reply, but when I tried different inputs (same-length sentences), the inference time of the first forward pass is still longer than the second forward pass (even 100x). I guess the model init produces the time cost<|||||>Sentences of the same length can have a different number of tokens after tokenizing. You could instead randomly create `input_ids` of the same shape to test this reliably.<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread. Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored.
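Following the suggestion above, a minimal sketch of a more reliable benchmark — random `input_ids` of a fixed shape, with the first (warm-up) call excluded from the timing; the checkpoint is the one from the issue:

```python
import time
import torch
from transformers import BertModel

model = BertModel.from_pretrained("clue/roberta_chinese_clue_tiny")
model.eval()

# random token ids of a fixed shape, so every forward pass sees the same amount of work
input_ids = torch.randint(0, model.config.vocab_size, (1, 32))

with torch.no_grad():
    model(input_ids)  # warm-up: the first call pays one-time initialization costs

timings = []
with torch.no_grad():
    for _ in range(10):
        start = time.time()
        model(input_ids)
        timings.append(time.time() - start)

print("mean forward time:", sum(timings) / len(timings))
```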
transformers
10,434
closed
TF Dataset Pipeline throws `RuntimeError: Already borrowed` when tokenizing
## Environment info <!-- You can run the command `transformers-cli env` and copy-and-paste its output below. Don't forget to fill out the missing fields in that output! --> - `transformers` version: master (4.4.0dev0) - Platform: Google colab - Python version: 3.7 - PyTorch version (GPU?): None - Tensorflow version (GPU?): 2.4 - Using GPU in script?: Yes - Using distributed or parallel set-up in script?: No ### Who can help @jplu <!-- Your issue will be replied to more quickly if you can figure out the right person to tag with @ If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**. Please tag fewer than 3 people. Models: - albert, bert, xlm: @LysandreJik - blenderbot, bart, marian, pegasus, encoderdecoder, t5: @patrickvonplaten, @patil-suraj - longformer, reformer, transfoxl, xlnet: @patrickvonplaten - fsmt: @stas00 - funnel: @sgugger - gpt2: @patrickvonplaten, @LysandreJik - rag: @patrickvonplaten, @lhoestq - tensorflow: @jplu Library: - benchmarks: @patrickvonplaten - deepspeed: @stas00 - ray/raytune: @richardliaw, @amogkam - text generation: @patrickvonplaten - tokenizers: @n1t0, @LysandreJik - trainer: @sgugger - pipelines: @LysandreJik Documentation: @sgugger HF projects: - nlp datasets: [different repo](https://github.com/huggingface/nlp) - rust tokenizers: [different repo](https://github.com/huggingface/tokenizers) Examples: - maintained examples (not research project or legacy): @sgugger, @patil-suraj - research_projects/bert-loses-patience: @JetRunner - research_projects/distillation: @VictorSanh --> ## Information Model I am using (Bert, XLNet ...): None The problem arises when using: * [ ] the official example scripts: (give details below) * [X] my own modified scripts: (give details below) The tasks I am working on is: * [ ] an official GLUE/SQUaD task: (give the name) * [X] my own task or dataset: (give details below) ## To reproduce Steps to reproduce the behavior: This might be somewhat of a duplicate of #9629 but in a different use case ``` dataset = tf.data.TextLineDataset("/content/train.txt") tokenizer = transformers.DistilBertTokenizerFast.from_pretrained("/content/Tokenizer", do_lower_case=False) def tokenize(sentence): sentence = sentence.numpy().decode('utf-8') a = tokenizer.encode_plus(sentence, padding="max_length", max_length=256, truncation=True, return_tensors="tf") return tf.constant(a.input_ids), tf.constant(a.attention_mask), tf.constant(a.input_ids) def get_tokenized(sentence): a = tf.py_function(tokenize, inp=[sentence], Tout=[tf.int32, tf.int32, tf.int32]) return {"input_ids": a[0], "attention_mask": a[1]}, a[2] dataset = dataset.map(get_tokenized, num_parallel_calls=tf.data.AUTOTUNE) # dataset = dataset.apply(tf.data.experimental.assert_cardinality(8000)) print(next(iter(dataset))) ``` Error ``` UnknownError: RuntimeError: Already borrowed Traceback (most recent call last): File "/usr/local/lib/python3.7/dist-packages/tensorflow/python/ops/script_ops.py", line 247, in __call__ return func(device, token, args) File "/usr/local/lib/python3.7/dist-packages/tensorflow/python/ops/script_ops.py", line 135, in __call__ ret = self._func(*args) File "/usr/local/lib/python3.7/dist-packages/tensorflow/python/autograph/impl/api.py", line 620, in wrapper return func(*args, **kwargs) File "<ipython-input-34-2e27f300f71b>", line 9, in tokenize a = tokenizer.encode_plus(sentence, padding="max_length", max_length=256, truncation=True, return_tensors="tf") File 
"/usr/local/lib/python3.7/dist-packages/transformers/tokenization_utils_base.py", line 2438, in encode_plus **kwargs, File "/usr/local/lib/python3.7/dist-packages/transformers/tokenization_utils_fast.py", line 472, in _encode_plus **kwargs, File "/usr/local/lib/python3.7/dist-packages/transformers/tokenization_utils_fast.py", line 379, in _batch_encode_plus pad_to_multiple_of=pad_to_multiple_of, File "/usr/local/lib/python3.7/dist-packages/transformers/tokenization_utils_fast.py", line 330, in set_truncation_and_padding self._tokenizer.enable_truncation(max_length, stride=stride, strategy=truncation_strategy.value) RuntimeError: Already borrowed [[{{node EagerPyFunc}}]] ``` The important thing that I should probably mention here is that if I change my code to load the same using the tokenizers library, the code executes without any issues. I have also tried using the slow implementation and the error still persists. Any help regarding this would be great! <!-- If you have code snippets, error messages, stack traces please provide them here as well. Important! Use code tags to correctly format your code. See https://help.github.com/en/github/writing-on-github/creating-and-highlighting-code-blocks#syntax-highlighting Do not use screenshots, as they are hard to read and (more importantly) don't allow others to copy-and-paste your code.--> ## Expected behavior Tokenization should happen on the fly without errors as it does with the Tokenizer from the tokenizers library. <!-- A clear and concise description of what you would expect to happen. -->
02-27-2021 10:06:27
02-27-2021 10:06:27
Hello! I do not suggest converting your sentences on the fly; you should do it beforehand. The issue you get here is caused by `sentence = sentence.numpy().decode('utf-8')`: your sentences should not be loaded into a tf.data.Dataset before being processed. I recommend reading your file normally, converting your examples with the tokenizer, and then creating a tf.Dataset from the output of the tokenizer. The best solution would be to create a TFRecord file and then stream this file into your pipeline.<|||||>Hey @jplu I understand this completely. In fact, I did end up creating TFRecords for better training speed, but I created this issue just to ask if something was wrong with the tokenizer in the transformers library. As I said before, if I use the tokenizer from the tokenizers library, it works perfectly fine and I can load the data on-the-fly. Also, as a side question, does TF masked language modeling require some custom script to mask tokens randomly, as is done by DataCollatorForLanguageModeling for torch?<|||||>You have to create your own function to randomly mask the tokens. There is no such function implemented on the TF side for now.<|||||>@jplu Okay thanks. Will you be accepting PRs which implement these functions? Or is someone already working on this?<|||||>Here is a function I'm using to do this; you can adapt it to your needs: ```python def encode(examples, block_size=512): # `examples` is a list of textual content, the output of a dataset from the datasets lib # `block_size` represents the max position size of a model. input_ids = [] texts = [] labels = [] for example in examples["text"]: tokenized_text = tokenizer.convert_tokens_to_ids(tokenizer.tokenize(example)) for i in range(0, len(tokenized_text), block_size - 2): tmp_ids = np.asarray(tokenizer.prepare_for_model(tokenized_text[i : i + block_size - 2], padding="max_length", return_attention_mask=False, return_token_type_ids=False)["input_ids"]) text = " ".join(tokenizer.convert_ids_to_tokens(tmp_ids, skip_special_tokens=True)) tmp_labels = np.copy(tmp_ids) probability_matrix = np.full(tmp_labels.shape, 0.15) special_tokens_mask = tokenizer.get_special_tokens_mask(tmp_labels, already_has_special_tokens=True) probability_matrix = np.ma.array(probability_matrix, mask=special_tokens_mask, fill_value=0.0).filled() if tokenizer._pad_token is not None: padding_mask = np.equal(tmp_labels, tokenizer.pad_token_id) probability_matrix = np.ma.array(probability_matrix, mask=padding_mask, fill_value=0.0).filled() masked_indices = np.random.default_rng().binomial(1, probability_matrix) != 0 tmp_labels[~masked_indices] = -100 indices_replaced = (np.random.default_rng().binomial(1, np.full(tmp_labels.shape, 0.8)) != 0) & masked_indices tmp_ids[indices_replaced] = tokenizer.convert_tokens_to_ids(tokenizer.mask_token) indices_random = (np.random.default_rng().binomial(1, np.full(tmp_labels.shape, 0.5)) != 0) & masked_indices & ~indices_replaced random_words = np.random.randint(len(tokenizer), size=tmp_labels.shape) tmp_ids[indices_random] = random_words[indices_random] assert tmp_ids.size == tmp_labels.size == 512, 'size input_ids: %r -- size labels: %r' % (tmp_ids.size, tmp_labels.size) input_ids.append(tmp_ids.tolist()) labels.append(tmp_labels.tolist()) texts.append(text) return {"text": texts, "input_ids": input_ids, "labels": labels} ```<|||||>That's nice! Thanks for sharing this. Closing this issue since there does exist an alternative approach to the original question
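For completeness, a minimal sketch of the pre-tokenization approach recommended above — tokenize everything up front, then build the `tf.data.Dataset` from the tokenizer output. The tokenizer checkpoint, file path, and hyper-parameters are placeholders:

```python
import tensorflow as tf
from transformers import DistilBertTokenizerFast

tokenizer = DistilBertTokenizerFast.from_pretrained("distilbert-base-uncased")

# read the text file normally, outside of the tf.data pipeline
with open("train.txt", encoding="utf-8") as f:
    lines = [line.strip() for line in f if line.strip()]

# tokenize everything up front, so the Rust tokenizer is never called from tf.py_function
encodings = tokenizer(lines, padding="max_length", max_length=256, truncation=True, return_tensors="np")

dataset = tf.data.Dataset.from_tensor_slices((
    {"input_ids": encodings["input_ids"], "attention_mask": encodings["attention_mask"]},
    encodings["input_ids"],  # labels: just the ids again, as in the original snippet
))
dataset = dataset.shuffle(1000).batch(8).prefetch(tf.data.AUTOTUNE)
```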
transformers
10,433
closed
About the speed when return_dict is set to True
Hi! I just want to know whether or not the forward pass of models like RoBERTa or BERT is slower when return_dict is set to True.
02-27-2021 04:24:41
02-27-2021 04:24:41
No, it shouldn't!<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread. Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored.
transformers
10,432
closed
Adding Longformer Encoder Decoder support for T5
# 🚀 Adding Longformer Encoder Decoder support for T5 LED is great for doing long-form encoder-decoder processing of documents, but it is based only on BART. T5 has certain advantages, such as being designed for multiple tasks (QA, summarization, etc.) and having relative positioning. T5 uses relative positioning, which maps well to doing sliding chunks and should not require additional training to learn new relative position buckets. Adding LED support will permit any already trained T5 models to be used efficiently on long documents. I've started incorporating LED features into the encoder portion of T5 but have some questions about the position_bias and implementation details of T5 and LED. With some help on understanding how sliding window multiplication works in LED and how the relative position is organized, I think I can finish the implementation. In particular, T5 passes a position_bias that, along with the mask, is added in each layer. This bias is added to each score before performing a softmax. I've surmised that I can add the position_bias to the mask in the Longformer self-attention, and then that should mostly be the same as the original T5 self-attention. T5's position_bias is in the shape of (batch_size, n_heads, seq_length, key_length). But the mask used for LED is in the form of (batch_size, seq_length), which is then mapped to n_heads and then run through sliding multiplication to stack the mask. I permute the position_bias, and then run it through sliding multiplication to stack the bias so that the position bias can be added to the mask. I tried a test with an attention_window size of 512 and exactly 512 tokens, which should make it equivalent to T5 self-attention. But something seems to be off. The encoder produces a tensor that surprisingly can be decoded by the decoder, which is encouraging, but it's not producing an answer for QA, for example. I noticed that T5 doesn't use sqrt(key_value_proj_dim) normalization, and has an extra mapping through tensor o. I tried with and without the sqrt but no good either way. Am I getting something mixed up with the position_bias? @ibeltagy @patrickvonplaten @sgugger any help would be much appreciated. Happy to contribute this as a PR when completed. Current code: https://github.com/ontocord/t5_led/blob/main/t5_ext.py relevant portion: ``` def forward_long( self, hidden_states, mask=None, position_bias=None, layer_head_mask=None, is_index_masked=None, is_index_global_attn=None, is_global_attn=None, output_attentions=False, compute_relative_attention_bias=False, query_states = None, query_mask = None, layer_id=0, ): """ :class:`LEDEncoderSelfAttention` expects `len(hidden_states)` to be multiple of `attention_window`. Padding to `attention_window` happens in :meth:`LEDEncoderModel.forward` to avoid redoing the padding on each layer. 
The `mask` is changed in :meth:`LEDEncoderModel.forward` from 0, 1, 2 to: * -10000: no attention * 0: local attention * +10000: global attention """ batch_size, seq_length = hidden_states.shape[:2] if position_bias is None: if not self.has_relative_attention_bias or not compute_relative_attention_bias: position_bias = torch.zeros( (1, self.n_heads, seq_length, seq_length), device=hidden_states.device, dtype=hidden_states.dtype ) else: position_bias = self.compute_bias(seq_length, seq_length, False) # (batch_size, n_heads, seq_length, key_length) position_bias = position_bias.permute(0, 2, 1, 3) print ("compute bias 2", position_bias.size()) hidden_states = hidden_states.transpose(0, 1) if query_states is None: query_states = hidden_states # project hidden states if query_mask is not None: query_vectors = self.q(query_states) * query_mask.unsqueeze(-1).expand(-1, -1, query.shape[-1]) else: query_vectors = self.q(query_states) key_vectors = self.k(hidden_states) value_vectors = self.v(hidden_states) seq_len, batch_size, embed_dim = hidden_states.size() assert ( embed_dim == self.embed_dim ), f"hidden_states should have embed_dim = {self.embed_dim}, but has {embed_dim}" # normalize query - T5 does not do the sqrt??? query_vectors /= math.sqrt(self.key_value_proj_dim) query_vectors = query_vectors.view(seq_len, batch_size, self.n_heads, self.key_value_proj_dim).transpose(0, 1) key_vectors = key_vectors.view(seq_len, batch_size, self.n_heads, self.key_value_proj_dim).transpose(0, 1) attn_scores = self._sliding_chunks_query_key_matmul( query_vectors, key_vectors, self.one_sided_attn_window_size ) # values to pad for attention probs remove_from_windowed_mask = (mask != 0)[:, :, None, None] # cast to fp32/fp16 then replace 1's with -inf float_mask = remove_from_windowed_mask.type_as(query_vectors).masked_fill( remove_from_windowed_mask, -10000.0 ) # POSITION_BIAS here: stack 2*one_sided_attn_window_size+1 worth of bias in the last dimension position_bias2 = self._sliding_chunks_query_key_matmul( position_bias.new_ones(size=position_bias.size()), position_bias, self.one_sided_attn_window_size ) # diagonal mask with zeros everywhere and -inf inplace of padding diagonal_mask = self._sliding_chunks_query_key_matmul( float_mask.new_ones(size=float_mask.size()), float_mask, self.one_sided_attn_window_size ) # pad local attention probs and add the position bias attn_scores += diagonal_mask + position_bias2 assert list(attn_scores.size()) == [ batch_size, seq_len, self.n_heads, self.one_sided_attn_window_size * 2 + 1, ], f"local_attn_probs should be of size ({batch_size}, {seq_len}, {self.n_heads}, {self.one_sided_attn_window_size * 2 + 1}), but is of size {attn_scores.size()}" # compute local attention probs from global attention keys and concat over window dim if is_global_attn: # compute global attn indices required through out forward fn ( max_num_global_attn_indices, is_index_global_attn_nonzero, is_local_index_global_attn_nonzero, is_local_index_no_global_attn_nonzero, ) = self._get_global_attn_indices(is_index_global_attn) # calculate global attn probs from global key global_key_attn_scores = self._concat_with_global_key_attn_probs( query_vectors=query_vectors, key_vectors=key_vectors, max_num_global_attn_indices=max_num_global_attn_indices, is_index_global_attn_nonzero=is_index_global_attn_nonzero, is_local_index_global_attn_nonzero=is_local_index_global_attn_nonzero, is_local_index_no_global_attn_nonzero=is_local_index_no_global_attn_nonzero, ) # concat to local_attn_probs # (batch_size, 
seq_len, n_heads, extra attention count + 2*window+1) attn_scores = torch.cat((global_key_attn_scores, attn_scores), dim=-1) # free memory del global_key_attn_scores attn_probs = F.softmax(attn_scores, dim=-1, dtype=torch.float32) # use fp32 for numerical stability if layer_head_mask is not None: assert layer_head_mask.size() == ( self.n_heads, ), f"Head mask for a single layer should be of size {(self.n_heads,)}, but is {layer_head_mask.size()}" attn_probs = layer_head_mask.view(1, 1, -1, 1) * attn_probs # softmax sometimes inserts NaN if all positions are masked, replace them with 0 attn_probs = torch.masked_fill(attn_probs, is_index_masked[:, :attn_probs.size()[1], None, None], 0.0) attn_probs = attn_probs.type_as(attn_scores) # free memory del attn_scores # apply dropout attn_probs = F.dropout(attn_probs, p=self.dropout, training=self.training) value_vectors = value_vectors.view(seq_len, batch_size, self.n_heads, self.key_value_proj_dim).transpose(0, 1) # compute local attention output with global attention value and add if is_global_attn: # compute sum of global and local attn attn_output = self._compute_attn_output_with_global_indices( value_vectors=value_vectors, attn_probs=attn_probs, max_num_global_attn_indices=max_num_global_attn_indices, is_index_global_attn_nonzero=is_index_global_attn_nonzero, is_local_index_global_attn_nonzero=is_local_index_global_attn_nonzero, ) else: # compute local attn only attn_output = self._sliding_chunks_matmul_attn_probs_value( attn_probs, value_vectors, self.one_sided_attn_window_size ) assert attn_output.size() == (batch_size, seq_len, self.n_heads, self.key_value_proj_dim), "Unexpected size" attn_output = attn_output.transpose(0, 1).reshape(seq_len, batch_size, embed_dim).contiguous() # compute value for global attention and overwrite to attention output # TODO: remove the redundant computation if is_global_attn: global_attn_output, global_attn_probs = self._compute_global_attn_output_from_hidden( hidden_states=hidden_states, max_num_global_attn_indices=max_num_global_attn_indices, layer_head_mask=layer_head_mask, is_local_index_global_attn_nonzero=is_local_index_global_attn_nonzero, is_index_global_attn_nonzero=is_index_global_attn_nonzero, is_local_index_no_global_attn_nonzero=is_local_index_no_global_attn_nonzero, is_index_masked=is_index_masked, ) # get only non zero global attn output nonzero_global_attn_output = global_attn_output[ is_local_index_global_attn_nonzero[0], :, is_local_index_global_attn_nonzero[1] ] # overwrite values with global attention attn_output[is_index_global_attn_nonzero[::-1]] = nonzero_global_attn_output.view( len(is_local_index_global_attn_nonzero[0]), -1 ) # The attention weights for tokens with global attention are # just filler values, they were never used to compute the output. # Fill with 0 now, the correct values are in 'global_attn_probs'. attn_probs[is_index_global_attn_nonzero] = 0 attn_output = attn_output.transpose(0, 1) # t5 runs the attn_output through o, and expects attn_output to be (batch_size, seq_length, dim) attn_output = self.o(attn_output) present_key_value_state = None outputs = (attn_output,) + (present_key_value_state,) + (position_bias,) if output_attentions: outputs = outputs + (attn_weights,) return outputs + (global_attn_probs,) if (is_global_attn and output_attentions) else outputs ```
02-27-2021 02:57:42
02-27-2021 02:57:42
So it looks like using sliding chunk mult is not the way to go. I can't figure out what's happening to the attn_scores and how it is shaped to be able to apply the position bias to it. ``` # POSITION_BIAS here: stack 2*one_sided_attn_window_size+1 worth of bias in the last dimension position_bias2 = self._sliding_chunks_query_key_matmul( position_bias.new_ones(size=position_bias.size()), position_bias, self.one_sided_attn_window_size ) ```<|||||>Thanks, @ontocord! It would be great if we can get an LED based on T5. We gave it a try but the PR is still WIP. Check here: https://github.com/allenai/longformer/pull/149 IIRC, the key idea is in this function: https://github.com/allenai/longformer/blob/t5/longformer/longformer.py#L144-L157 If this is not helpful enough, please let me know and I can explain it in more detail later. <|||||>@ibeltagy, what do you think of something like this? I think it works!! The relative position tensor is over the window_overlap (128), and not the attention_window (512) ``` relative_position = torch.tensor([[i-window_overlap for i in range(2*window_overlap+1)]]) relative_position_bucket = self._relative_position_bucket( relative_position, # shape (query_length, key_length) bidirectional=True, num_buckets=self.relative_attention_num_buckets, ) relative_position_bucket = relative_position_bucket.to(self.relative_attention_bias.weight.device) values = self.relative_attention_bias(relative_position_bucket) # shape (query_length, key_length, num_heads) position_bias = values.permute([0, 2, 1]).unsqueeze(0) # shape (1, num_heads, query_length, key_length) ``` And the test: ``` from transformers import AutoTokenizer, pipelines model = T5ForConditionalGeneration.from_pretrained('t5-small-long') tokenizer = AutoTokenizer.from_pretrained("t5-small") tokenizer.model_max_length=1000000000 #print (tokenizer) p = pipelines.pipeline("text2text-generation", model=model, tokenizer=tokenizer, device=0) print (p("""question: Where was Lincoln born? context: Abraham Lincoln (/ˈlɪŋkən/; February 12, 1809 – April 15, 1865) was an American statesman and lawyer who served as the 16th president of the United States from 1861 until his assassination in 1865. Lincoln led the nation through the American Civil War, the country's greatest moral, constitutional, and political crisis. He succeeded in preserving the Union, abolishing slavery, bolstering the federal government, and modernizing the U.S. economy. Lincoln was born into poverty in a log cabin and was raised on the frontier primarily in Indiana. He was self-educated and became a lawyer, Whig Party leader, Illinois state legislator, and U.S. Congressman from Illinois. In 1849, he returned to his law practice but became vexed by the opening of additional lands to slavery as a result of the Kansas–Nebraska Act. He reentered politics in 1854, becoming a leader in the new Republican Party, and he reached a national audience in the 1858 debates against Stephen Douglas. Lincoln ran for President in 1860, sweeping the North in victory. Pro-slavery elements in the South equated his success with the North's rejection of their right to practice slavery, and southern states began seceding from the union. To secure its independence, the new Confederate States fired on Fort Sumter, a U.S. fort in the South, and Lincoln called up forces to suppress the rebellion and restore the Union. As the leader of moderate Republicans, Lincoln had to navigate a contentious array of factions with friends and opponents on both sides. 
War Democrats rallied a large faction of former opponents into his moderate camp, but they were countered by Radical Republicans, who demanded harsh treatment of the Southern Confederates. Anti-war Democrats (called "Copperheads") despised him, and irreconcilable pro-Confederate elements plotted his assassination. Lincoln managed the factions by exploiting their mutual enmity, by carefully distributing political patronage, and by appealing to the U.S. people. His Gettysburg Address became a historic clarion call for nationalism, republicanism, equal rights, liberty, democracy and freedom. """)) ``` [{'generated_text': 'Indiana'}] ... But asking the question in t5-long: Who hated Lincoln? I get: [{'generated_text': 'anti-war Democrats (called "Copperheads") despised him, and irre'}] But asking in t5-small, I get: {'generated_text': 'Anti-war Democrats'}] I think there's something going on with the relative_position still (maybe in the extra column?) I've updated the code on my repository so you can see. <|||||>> ` relative_position = torch.tensor([[i-window_overlap for i in range(2*window_overlap+1)]])` > The relative position tensor is over the window_overlap (128), and not the attention_window (512) For an `attention_window = 512`, the relative positions need to be from -256 to 256. What you have here is -128 to 128. I am not sure how the -128 to 128 works, it will give you a tensor with dimensions that don't fit here `attn_scores += diagonal_mask + position_bias2`. > And the test: I would recommend a unit test with input seqlen < 512, then assert that the hidden states you get from `t5-small-long` perfectly match those from `t5-small`. This helps with debugging because if hidden stats don't match, you can step through both models to find the discrepancy. <|||||>@ibeltagy , my mistake. Yes the overlap window is 256, not 128. I meant the code should refer to window_overlap, which made it work. The code you referenced in https://github.com/allenai/longformer/blob/t5/longformer/longformer.py#L144-L157 refers to the whole attention_window*2 which would cause issues. ` relative_position = torch.tensor([[i-self.attention_window for i in range(2*self.attention_window+1)]])` There are still bugs, so I'll do the step through of each hidden_state per your suggestion. Thanks again!<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread. Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored.
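For reference, a minimal sketch of the equivalence test suggested above: with an input shorter than one attention window, the long encoder should reproduce the plain `t5-small` encoder's hidden states. The long model's module/checkpoint names are assumptions taken from this thread, and the sketch assumes the long encoder does not pad the sequence internally for such short inputs:

```python
import torch
from transformers import T5ForConditionalGeneration, T5Tokenizer

# hypothetical import: the experimental class from https://github.com/ontocord/t5_led
from t5_ext import T5ForConditionalGeneration as LongT5ForConditionalGeneration

tokenizer = T5Tokenizer.from_pretrained("t5-small")
text = "question: Where was Lincoln born? context: Lincoln was born in Indiana."
inputs = tokenizer(text, return_tensors="pt")  # well under one 512-token attention window

base = T5ForConditionalGeneration.from_pretrained("t5-small").eval()
long_model = LongT5ForConditionalGeneration.from_pretrained("t5-small-long").eval()

with torch.no_grad():
    base_hidden = base.encoder(input_ids=inputs.input_ids).last_hidden_state
    long_hidden = long_model.encoder(input_ids=inputs.input_ids).last_hidden_state

# if the sliding-window attention and relative position bias are wired up correctly,
# the two encoders should agree to within numerical tolerance
assert torch.allclose(base_hidden, long_hidden, atol=1e-4), "encoder hidden states diverge"
```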
transformers
10,431
closed
Fix conda-build
Fix the tokenizer version so that conda can correctly build packages
02-27-2021 01:20:25
02-27-2021 01:20:25
transformers
10,430
closed
Inference with Finetuned BERT Model outputting odd results
## Environment info <!-- You can run the command `transformers-cli env` and copy-and-paste its output below. Don't forget to fill out the missing fields in that output! --> - `transformers` version: 3.5.1 - Platform: Linux-4.14.203-116.332.amzn1.x86_64-x86_64-with-glibc2.10 - Python version: 3.7.6 - PyTorch version (GPU?): 1.7.0 (True) - Tensorflow version (GPU?): 2.3.1 (True) - Using GPU in script?: No - Using distributed or parallel set-up in script?: No ### Who can help <!-- Your issue will be replied to more quickly if you can figure out the right person to tag with @ If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**. Please tag fewer than 3 people. @LysandreJik @patrickvonplaten Models: - albert, bert, xlm: @LysandreJik - blenderbot, bart, marian, pegasus, encoderdecoder, t5: @patrickvonplaten, @patil-suraj - longformer, reformer, transfoxl, xlnet: @patrickvonplaten - fsmt: @stas00 - funnel: @sgugger - gpt2: @patrickvonplaten, @LysandreJik - rag: @patrickvonplaten, @lhoestq - tensorflow: @jplu Library: - benchmarks: @patrickvonplaten - deepspeed: @stas00 - ray/raytune: @richardliaw, @amogkam - text generation: @patrickvonplaten - tokenizers: @n1t0, @LysandreJik - trainer: @sgugger - pipelines: @LysandreJik Documentation: @sgugger HF projects: - nlp datasets: [different repo](https://github.com/huggingface/nlp) - rust tokenizers: [different repo](https://github.com/huggingface/tokenizers) Examples: - maintained examples (not research project or legacy): @sgugger, @patil-suraj - research_projects/bert-loses-patience: @JetRunner - research_projects/distillation: @VictorSanh --> ## Information Model I am using (Bert, XLNet ...): Bert The problem arises when using: * [ ] the official example scripts: (give details below) * [x] my own modified scripts: (give details below) The tasks I am working on is: * [ ] an official GLUE/SQUaD task: (give the name) * [x] my own task or dataset: (give details below) ## To reproduce Steps to reproduce the behavior: 1. Trained HuggingFace Transformers model BertForSequenceClassification on custom dataset with PyTorch backend. 2. Used provided convert_graph_to_onnx.py script to convert model (from saved checkpoint) to ONNX format. 3. Loaded the model with ONNXRuntime 4. Instantiated BertTokenizer.from_pretrained('bert-based-uncased') and fed in various input text to encode_plus method. 5. Fed outputs of this to the ONNXRuntime session. <!-- If you have code snippets, error messages, stack traces please provide them here as well. Important! Use code tags to correctly format your code. See https://help.github.com/en/github/writing-on-github/creating-and-highlighting-code-blocks#syntax-highlighting Do not use screenshots, as they are hard to read and (more importantly) don't allow others to copy-and-paste your code.--> ## Expected behavior <!-- A clear and concise description of what you would expect to happen. --> The expected behavior is that the output of sess.run on the aforementioned inputs should output an array of dimension (1, 100) (corresponding to 100 classes) with each value between 0 and 1, with all entries summing to 1. We get the correct dimension, however, we get values between about -3.04 and 7.14 (unsure what these values refer to).
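For reference, the reported range (roughly -3 to 7) looks like raw classification logits rather than probabilities, so a softmax over the session output would give the expected distribution. A minimal sketch — the ONNX file path and tokenizer checkpoint are placeholders, and it assumes the exported graph returns the classifier logits as its first output:

```python
import numpy as np
import onnxruntime as ort
from transformers import BertTokenizerFast

tokenizer = BertTokenizerFast.from_pretrained("bert-base-uncased")
session = ort.InferenceSession("onnx/model.onnx")

encoded = tokenizer("some input text", return_tensors="np")
graph_inputs = {node.name for node in session.get_inputs()}
feed = {name: array.astype(np.int64) for name, array in encoded.items() if name in graph_inputs}

logits = session.run(None, feed)[0]  # shape (1, num_classes): raw scores, not probabilities
exp = np.exp(logits - logits.max(axis=-1, keepdims=True))
probs = exp / exp.sum(axis=-1, keepdims=True)  # softmax: values in [0, 1], rows sum to 1

print(probs.argmax(axis=-1), probs.max())
```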
02-26-2021 21:56:30
02-26-2021 21:56:30
Hi, is it possible to ask this question on the [forum](https://discuss.huggingface.co/) rather than here? Since this question is a perfect use case for that. Thank you. <|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread. Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored.
transformers
10,429
closed
Trainer's load_best_model_at_end argument results in error with DistributedDataParallel
## Environment info <!-- You can run the command `transformers-cli env` and copy-and-paste its output below. Don't forget to fill out the missing fields in that output! --> - `transformers` version: 4.3.0 - Platform: Linux - Python version: 3.8.5 - PyTorch version (GPU?): 1.7.1 (CUDA Version: 11.2) - Tensorflow version (GPU?): NA - Using GPU in script?: Yes - Using distributed or parallel set-up in script?: Yes, DistributedDataParallel ### Who can help @sgugger <!-- Your issue will be replied to more quickly if you can figure out the right person to tag with @ If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**. Please tag fewer than 3 people. Models: - albert, bert, xlm: @LysandreJik - blenderbot, bart, marian, pegasus, encoderdecoder, t5: @patrickvonplaten, @patil-suraj - longformer, reformer, transfoxl, xlnet: @patrickvonplaten - fsmt: @stas00 - funnel: @sgugger - gpt2: @patrickvonplaten, @LysandreJik - rag: @patrickvonplaten, @lhoestq - tensorflow: @jplu Library: - benchmarks: @patrickvonplaten - deepspeed: @stas00 - ray/raytune: @richardliaw, @amogkam - text generation: @patrickvonplaten - tokenizers: @n1t0, @LysandreJik - trainer: @sgugger - pipelines: @LysandreJik Documentation: @sgugger HF projects: - nlp datasets: [different repo](https://github.com/huggingface/nlp) - rust tokenizers: [different repo](https://github.com/huggingface/tokenizers) Examples: - maintained examples (not research project or legacy): @sgugger, @patil-suraj - research_projects/bert-loses-patience: @JetRunner - research_projects/distillation: @VictorSanh --> ## Information Model I am using (Bert, XLNet ...): T5 The problem arises when using: * [ ] the official example scripts: (give details below) * [x] my own modified scripts: (give details below) ``` training_args = TrainingArguments( output_dir=os.path.join(output_dir, 'results'), overwrite_output_dir=True, num_train_epochs=num_train_epochs, per_device_train_batch_size=per_device_train_batch_size, per_device_eval_batch_size=per_device_eval_batch_size, warmup_steps=warmup_steps, weight_decay=weight_decay, logging_dir=os.path.join(output_dir, 'logs'), logging_steps=100, learning_rate=learning_rate, evaluation_strategy="epoch", max_grad_norm=max_grad_norm, metric_for_best_model="eval_loss", report_to=['tensorboard'], local_rank=local_rank) trainer = Trainer( model=model, args=training_args, train_dataset=train_dataset, eval_dataset=val_dataset ) ``` The tasks I am working on is: * [ ] an official GLUE/SQUaD task: (give the name) * [x] my own task or dataset: (give details below) ## To reproduce Steps to reproduce the behavior: 1. Set load_best_model_at_end=True, when using DistributedDataParallel (python -m torch.distributed.launch ...) and the following stack trace appears after training is complete. 2. If you don't use DistributedDataParallel or don't set load_best_model_at_end to True, then this work as expected and there is no error. <!-- If you have code snippets, error messages, stack traces please provide them here as well. Important! Use code tags to correctly format your code. See https://help.github.com/en/github/writing-on-github/creating-and-highlighting-code-blocks#syntax-highlighting Do not use screenshots, as they are hard to read and (more importantly) don't allow others to copy-and-paste your code.--> ``` OSError: Can't load config for 'checkpoint-115'. 
Make sure that: - 'checkpoint-115' is a correct model identifier listed on 'https://huggingface.co/models' - or 'checkpoint-115' is the correct path to a directory containing a config.json file ``` ## Expected behavior <!-- A clear and concise description of what you would expect to happen. --> No error.
02-26-2021 21:53:31
02-26-2021 21:53:31
Could you explain a bit more the code you are running as well as the exact command you are using for launch? We can't help if we can't reproduce your bug and running: ``` python -m torch.distributed.launch --nproc_per_node 2 examples/text-classification/run_glue.py \ --model_name_or_path bert-base-uncased \ --task_name mrpc --output_dir test/mrpc \ --load_best_model_at_end \ --do_train \ --do_eval \ --evaluation_strategy epoch \ --overwrite_output_dir ``` for instance does not reproduce it.<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread. Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored.
transformers
10,428
closed
[run_seq2seq.py] restore functionality: saving to test_generations.txt
This PR restores the original functionality that for some reason was modified. Fixes: https://github.com/huggingface/transformers/issues/10381 @sgugger
02-26-2021 21:51:55
02-26-2021 21:51:55
Ah, I re-read it more closely, you're correct. I just remembered the part about the `test_generations.txt` but didn't bother to check out the full story. My bad. I will study it and follow up once I understand it better. <|||||>I went back to `finetune_trainer.py` from December and checked that it was just saving `test_generations.txt` once at the very end. I can't find any code where it generated this at every checkpoint. @kingpalethe, please correct me if I'm wrong. If there were a `save_checkpoint` callback then it could generate one for each saved checkpoint. So it probably needs to be requested via a feature request Issue. So the current PR is still a good idea to support those who relied on this particular filename. But I'm not attached to it.<|||||>I don't think it matters much, we just have to be quick if people using the new script start to rely on the new name.<|||||>OK, let's restore the original name.
transformers
10,427
closed
[examples] better model example
As a continued effort to make examples easy to read and synchronizing them all to use the same look and feel, this PR tries to improve `run_seq2seq.py` as a model and then future PRs will sync other examples with it. * [x] makes the helper methods work for rank0 internally - simplifying the caller * [x] abstracts the helper `trainer.state.save_to_json` into a simple method * [x] automatically aggregates all metrics into `all_metrics.json` w/o requiring any extra code on the caller side Anything else? @sgugger
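For context, a rough sketch of the caller-side code this PR is aiming for; the helper names and exact file names are assumptions, not necessarily the final API:

```python
def train_and_report(trainer):
    """Hypothetical caller-side sketch; exact helper names/behavior may differ in the final PR."""
    train_result = trainer.train()
    metrics = train_result.metrics

    trainer.log_metrics("train", metrics)   # rank-0 gating handled inside the helper
    trainer.save_metrics("train", metrics)  # writes train_results.json and updates the aggregated all_metrics.json
    trainer.save_state()                    # replaces the explicit trainer.state.save_to_json(...) call
    return metrics
```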
02-26-2021 21:33:09
02-26-2021 21:33:09
transformers
10,426
closed
[WIP] CLIP
# What does this PR do? This PR adds OpenAI's CLIP model. original repo: https://github.com/openai/CLIP initial demo: https://colab.research.google.com/drive/1hwiCuKvw7hwSlE8yv7J1dh280PlYgPef?usp=sharing
02-26-2021 21:29:49
02-26-2021 21:29:49
Awesome effort. Is the current version already compatible with (now merged) #10594 ?<|||||>Thanks @dribnet I haven't added a feature extractor class for CLIP yet. We first need to finish the `ImageFeatureExtractor` (#01608) Then the `ClipFeatureExtractor` can inherit from that. This PR will be ready to merge by the end of next week.<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread. Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored.<|||||>Unstale<|||||>continuing this in #11445
transformers
10,425
closed
RAG and retrieved documents
I pretrained a RAG model using the "finetune_rag.py" script and it generates pretty good results for my (knowledge-intensive) use case, certainly better than the straight finetuned BART model I was using before. I am using my own custom datasource generated from use_own_knowledge_dataset.py. One curious thing that is happening is that when I try to find what documents were retrieved during the generation process, I always get the same documents. I'm using the basic code snippet from issue#8104 and no matter what input I give, it returns the same few documents, no matter how unrelated they are to the input. The generated results are still very good, so I'm not sure if there's an issue with how I'm retrieving the documents or if it really is always grabbing the same few docs for some reason, potentially an issue with my data. Any help or pointers with this would be greatly appreciated. Thank you.
02-26-2021 20:09:30
02-26-2021 20:09:30
Hello, thanks for opening an issue! We try to keep the GitHub issues for bugs/feature requests. Could you ask your question on the [forum](https://discuss.huggingface.co) instead? Thanks!
transformers
10,424
closed
Refactor checkpoint name in BERT and MobileBERT
# What does this PR do? Linked to #10193, this PR gives an example on how to refactor the checkpoint names in one private constant and use the `# Copied from` syntax to make the task-specific models that are copies of each other properly watched by our tooling. It adds the option in check-copies to: - put multiple statements behind the with: so for instance `with Bert->MobileBert, bert->mobilebert, BERT->MOBILEBERT` - have the option to do all possible casings (like in the example above) by just adding `all-casing`: `with Bert->MobileBert all-casing`. It also fixes an existing bug when the first line of the function/class copied from was empty.
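As a concrete illustration of the syntax described above (the module path and class are just an example taken from the Bert/MobileBert case), the two forms look like this in a modeling file:

```python
# Multiple replacement statements behind `with`:
# Copied from transformers.models.bert.modeling_bert.BertSelfAttention with Bert->MobileBert, bert->mobilebert, BERT->MOBILEBERT

# The same thing expressed with the new `all-casing` shortcut:
# Copied from transformers.models.bert.modeling_bert.BertSelfAttention with Bert->MobileBert all-casing
```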
02-26-2021 18:24:27
02-26-2021 18:24:27
transformers
10,423
closed
[examples] add --max_train_samples --max_val_samples --max_test_samples cl args to all scripts
As a part of an effort to make all examples have the same look and feel, this issue requests syncing the support for these 3 cl args in `run_seq2seq.py`: ``` --max_train_samples 5 --max_val_samples 5 --max_test_samples 5 ``` into: 1. all other `examples/*/run_*.py` 2. `templates/adding_a_new_example_script` Part B: the metrics should now be updated to include the actual number of samples that were run. Here is an example for train: https://github.com/huggingface/transformers/blob/f52a15897b46ffa40af5c96d3726f0e18e91879b/examples/seq2seq/run_seq2seq.py#L586-L590 and the same for eval/test. I'd say this can probably be refactored too. Let me check with Sylvain. The way it's currently used is to limit the number of dataset entries w/o needing to change the dataset, example: ``` run_seq2seq.py --model_name_or_path t5-small --output_dir output_dir --do_eval --do_predict --do_train \ --evaluation_strategy=steps --predict_with_generate --task summarization --dataset_name xsum \ --max_train_samples 60 --max_val_samples 10 --max_test_samples 10 ``` All the code that currently takes care of it can be found inside https://github.com/huggingface/transformers/blob/master/examples/seq2seq/run_seq2seq.py This issue is open to anybody in the community who would like to tackle it. Thank you!
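A minimal sketch of the pattern the other scripts would adopt; the argument and variable names mirror `run_seq2seq.py` and should be treated as placeholders:

```python
from dataclasses import dataclass, field
from typing import Optional


@dataclass
class DataTrainingArguments:
    max_train_samples: Optional[int] = field(
        default=None,
        metadata={"help": "Truncate the number of training examples to this value for quicker debugging."},
    )
    # max_val_samples and max_test_samples follow the same pattern


# after loading/preprocessing the dataset (sketch):
#   if data_args.max_train_samples is not None:
#       train_dataset = train_dataset.select(range(data_args.max_train_samples))
```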
02-26-2021 18:18:01
02-26-2021 18:18:01
Hi @stas00, Can I work on this?<|||||>Yes please! Thank you, @bhadreshpsavani <|||||>I was just thinking about this: why do we not have this functionality in the Trainer in the first place? Then perhaps none of this will be needed. I'm asking here: https://github.com/huggingface/transformers/issues/10437 Perhaps this task will become redundant then. Please wait a little bit.<|||||>Cool!<|||||>Hi @stas00, `templates/adding_a_new_example_script` is still remaining, right?<|||||>That's correct! Thank you for remembering it! For the max-samples cl args and also the metrics, please! Thank you!<|||||>Hi @stas00, Since it's just a template, there is no way to test the changes, right?<|||||>That sounds right. Use your internal compiler. Probably once it's written, it can be run through the cookie-cutter and then tested? But I think it shouldn't be too difficult to test it visually. If you want to try the cookie-cutter, the doc is here: https://github.com/huggingface/transformers/tree/master/templates/adding_a_new_example_script
transformers
10,422
closed
Layoutlm tf
# What does this PR do? <!-- Congratulations! You've made it this far! You're not quite done yet though. Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution. Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change. Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost. --> <!-- Remove if not applicable --> This PR adds TF version of LayoutLM for issue [(10312)](https://github.com/huggingface/transformers/issues/10312) ## Before submitting - [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case). - [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section? - [X] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link to it if that's the case. - [X] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/master/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/master/docs#writing-source-documentation). - [X] Did you write any new necessary tests? ## Who can review? Anyone in the community is free to review the PR once the tests have passed. Feel free to tag members/contributors which may be interested in your PR. <!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @ If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**. Please tag fewer than 3 people. Models: - albert, bert, xlm: @LysandreJik - blenderbot, bart, marian, pegasus, encoderdecoder, t5: @patrickvonplaten, @patil-suraj - longformer, reformer, transfoxl, xlnet: @patrickvonplaten - fsmt: @stas00 - funnel: @sgugger - gpt2: @patrickvonplaten, @LysandreJik - rag: @patrickvonplaten, @lhoestq - tensorflow: @jplu Library: - benchmarks: @patrickvonplaten - deepspeed: @stas00 - ray/raytune: @richardliaw, @amogkam - text generation: @patrickvonplaten - tokenizers: @n1t0, @LysandreJik - trainer: @sgugger - pipelines: @LysandreJik Documentation: @sgugger HF projects: - nlp datasets: [different repo](https://github.com/huggingface/nlp) - rust tokenizers: [different repo](https://github.com/huggingface/tokenizers) Examples: - maintained examples (not research project or legacy): @sgugger, @patil-suraj - research_projects/bert-loses-patience: @JetRunner - research_projects/distillation: @VictorSanh -->
02-26-2021 17:09:11
02-26-2021 17:09:11
Very nice! Can you let us know when you want us to review/give feedback/help? Thanks!<|||||>> Very nice! Can you let us know when you want us to review/give feedback/help? Thanks!

Thanks! I need to upload the TF model file to the hub and run another check to make sure it gives the same results as the PT version. I had verified that, but I made a few changes, so I am going to run the checks one more time. I'll tag you when it's done.<|||||>@LysandreJik I've uploaded the TF models to the model hub under:
- atahmasb/tf-layoutlm-base-uncased
- atahmasb/tf-layoutlm-large-uncased

I would appreciate it if you and the team could take a look at the code and give me feedback. There are some tests that are failing; I haven't figured the issues out, but the code is up for review. Maybe someone could look into the logs and guide me on how to fix them. Meanwhile I'll see if I can make the tests pass.<|||||>> Hi @atahmasb, this looks great! I've only left a few comments, everything looks good. I'll do a deeper review once all the tests pass, as things are bound to change until they do, but the idea here is sound!
>
> Do you need any help to make the tests pass?

I am going to try one more time to resolve them today; if I can't, then I'll ask for help.<|||||>@LysandreJik all tests passed! It's ready for a deeper review, please.<|||||>Thanks for letting me know! It seems GitHub botched your rebase, as it is showing 53 files changed. Could you close this PR and open a new one (no need to do anything on your branch) so that we may see the diff better?

Thanks!<|||||>> Thanks for letting me know! It seems GitHub botched your rebase, as it is showing 53 files changed. Could you close this PR and open a new one (no need to do anything on your branch) so that we may see the diff better?
>
> Thanks!

Sure, will do.
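For anyone who wants to try the converted checkpoints mentioned above, a minimal usage sketch follows. It assumes the TF class added by this PR follows the library's usual naming (`TFLayoutLMModel`) and that the `atahmasb/tf-layoutlm-base-uncased` repo stays on the hub; the bounding boxes are dummy values, since LayoutLM expects one box per token normalized to a 0–1000 grid.

```python
# Hypothetical usage sketch — not code from the PR itself.
import tensorflow as tf
from transformers import LayoutLMTokenizer, TFLayoutLMModel  # TF class name assumed from this PR

tokenizer = LayoutLMTokenizer.from_pretrained("microsoft/layoutlm-base-uncased")
model = TFLayoutLMModel.from_pretrained("atahmasb/tf-layoutlm-base-uncased")  # checkpoint from the discussion

encoding = tokenizer("Hello world", return_tensors="tf")
seq_len = encoding["input_ids"].shape[1]
bbox = tf.zeros((1, seq_len, 4), dtype=tf.int32)  # dummy (x0, y0, x1, y1) layout boxes per token

outputs = model(
    input_ids=encoding["input_ids"],
    bbox=bbox,
    attention_mask=encoding["attention_mask"],
    token_type_ids=encoding["token_type_ids"],
)
print(outputs.last_hidden_state.shape)  # (1, seq_len, 768) for the base model
```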
transformers
10,421
closed
updated metrics saving and logging
# What does this PR do?

This PR updates the redundant metrics-saving and logging code in the example scripts.

Fixes #10337

## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link to it if that's the case.
- [x] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/master/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/master/docs#writing-source-documentation).
- [x] Did you write any new necessary tests?

## Who can review?
@stas00 @sgugger
02-26-2021 16:58:42
02-26-2021 16:58:42
I want to mention one thing here: while testing the files I found that for `run_clm.py`, `run_mlm.py`, `run_plm.py`, `run_ner.py`, and `run_glue.py` the logs are as expected, like this:
```
02/26/2021 20:31:22 - INFO - __main__ - ***** eval metrics *****
02/26/2021 20:31:22 - INFO - __main__ - HasAns_exact = 0.0
02/26/2021 20:31:22 - INFO - __main__ - HasAns_f1 = 0.0
02/26/2021 20:31:22 - INFO - __main__ - HasAns_total = 8
02/26/2021 20:31:22 - INFO - __main__ - NoAns_exact = 100.0
02/26/2021 20:31:22 - INFO - __main__ - NoAns_f1 = 100.0
02/26/2021 20:31:22 - INFO - __main__ - NoAns_total = 6
02/26/2021 20:31:22 - INFO - __main__ - best_exact = 42.857142857142854
02/26/2021 20:31:22 - INFO - __main__ - best_exact_thresh = 0.0
02/26/2021 20:31:22 - INFO - __main__ - best_f1 = 42.857142857142854
02/26/2021 20:31:22 - INFO - __main__ - best_f1_thresh = 0.0
02/26/2021 20:31:22 - INFO - __main__ - epoch = 1.43
02/26/2021 20:31:22 - INFO - __main__ - exact = 42.857142857142854
02/26/2021 20:31:22 - INFO - __main__ - f1 = 42.857142857142854
02/26/2021 20:31:22 - INFO - __main__ - total = 14
```
But for other files like `run_qa.py`, `run_qa_beam_search.py`, and `run_swags.py` the logs were like below:
```
***** eval metrics *****
HasAns_exact = 0.0
HasAns_f1 = 0.0
HasAns_total = 8
NoAns_exact = 100.0
NoAns_f1 = 100.0
NoAns_total = 6
best_exact = 42.8571
best_exact_thresh = 0.0
best_f1 = 42.8571
best_f1_thresh = 0.0
epoch = 1.43
exact = 42.8571
f1 = 42.8571
total = 14
```
without a timestamp, log level, or logger name. When I ran the command `python -m unittest discover -s examples -t examples -v` it was giving proper logs.<|||||>And also, for consistency, let's add this bit that is currently in `run_seq2seq.py`: https://github.com/huggingface/transformers/blob/98569d4ba237d84714f6c15e2c301fd22d42d2b1/examples/seq2seq/run_seq2seq.py#L643-L644

This allows the user to load all metrics in one call. It can be part of this PR, or a separate one if you'd like to get this completed faster, and I can make a separate issue to add it. That is, if @sgugger you're in agreement with that syncing proposal.<|||||>I'm fine with it. One thing that's striking me is that all those calls are inside an `if trainer.is_world_process_zero()`. Shouldn't we refactor that bit in the `log_metrics`/`save_metrics` method?<|||||>Sure @sgugger and @stas00, I will make the changes in this PR to get them available faster.<|||||>It won't make much of a difference at the moment since there is other code that is running under `if trainer.is_world_process_zero()` - if we refactor that code too then absolutely yes - it would make that part of the scripts so much simpler.<|||||>We could probably do something about:
```
trainer.save_metrics("eval", metrics)
all_metrics.update(metrics)
```
so that it's not done separately, by
1. either having `save_metrics` always update `all_results.json` with every call
2. or simply having the trainer store the metrics internally and then have one call to flush them to disk at the end of the run

Suggestion 1 will require read+write, but the cool thing is that it's totally automated and requires no extra calls later.<|||||>@bhadreshpsavani, I suggest we
1. finish this PR w/o introducing the new changes we are discussing, since they aren't thought out well / agreed upon yet,
2. then tweak `run_seq2seq.py` to do things better, have it as a model,
3. and then sync to the other scripts.

How does that sound?
To be clear, perhaps leave out this suggestion for now https://github.com/huggingface/transformers/pull/10421#issuecomment-786806023 if we are going to refactor it anyway - unless you already did it, in which case please keep it in.<|||||>Okay @stas00, I have added [these](https://github.com/huggingface/transformers/pull/10421#pullrequestreview-599815674) changes to the logger and they are working perfectly. I will commit them in this PR for those three files.<|||||>OK, so the only missing step is to update the template `templates/adding_a_new_example_script`.<|||||>@bhadreshpsavani, this is now merged: https://github.com/huggingface/transformers/pull/10427 and can be replicated to the other scripts (and one template). Please feel free to add it to this PR or open a new one - whatever works best for you. Thank you!<|||||>@stas00, I will first try to make the changes in this PR.<|||||>In `run_glue.py` we have code like this for saving the test results:
```python
output_test_file = os.path.join(training_args.output_dir, f"test_results_{task}.txt")
if trainer.is_world_process_zero():
    with open(output_test_file, "w") as writer:
        logger.info(f"***** Test results {task} *****")
        writer.write("index\tprediction\n")
        for index, item in enumerate(predictions):
            if is_regression:
                writer.write(f"{index}\t{item:3.3f}\n")
            else:
                item = label_list[item]
                writer.write(f"{index}\t{item}\n")
```
I didn't change it because it is different from our general `save_metrics()` method. Is there any way we can generalize it?<|||||>Hi @stas00 and @sgugger, I was trying to make the changes in the same PR, and after rebasing from master I needed to merge the new commits into my branch to push my changes. Let me know if this PR is not fine or if I need to make another PR with all these changes and delete this one. This is the first time I used `git rebase upstream/master`, so I might have done it incorrectly.<|||||>Yes, the rebase has made new files appear in the diff that are irrelevant to your work, so it would be great if you could close this PR and open a new one (no need to do anything else than that, like creating a new branch; it's just git being annoying here). For your earlier question, I would say leave the part in `run_glue` that doesn't refactor nicely as it is.<|||||>As Sylvain said you can make a new PR branch, but you can also fix this PR by rolling back to the last good commit before the failed rebase:
```
git reset --soft 4e529f1
git commit
git push -f
```
and then rebase. BTW, if you want to use an automated rebase process, please consider this little script: https://github.com/stas00/git-tools/tree/master/git-rebase<|||||>Sure @stas00, I will use the automated rebase script next time. For simplicity, I have created another [PR](https://github.com/huggingface/transformers/pull/10436) with all the changes and tested them. I am closing this PR.
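As a rough illustration of suggestion 1 from the thread above — having `save_metrics` keep a combined `all_results.json` up to date on every call — a sketch could look like the following; this is only one possible shape of the idea, not the code that was eventually merged, and the method/argument names are illustrative.

```python
# Sketch: save_metrics() writes the per-split file and also read-updates-writes
# a combined all_results.json, so callers no longer need all_metrics.update(metrics).
import json
import os

def save_metrics(self, split, metrics, combined=True):
    path = os.path.join(self.args.output_dir, f"{split}_results.json")
    with open(path, "w") as f:
        json.dump(metrics, f, indent=4, sort_keys=True)

    if combined:
        combined_path = os.path.join(self.args.output_dir, "all_results.json")
        all_metrics = {}
        if os.path.exists(combined_path):
            with open(combined_path, "r") as f:
                all_metrics = json.load(f)
        all_metrics.update(metrics)
        with open(combined_path, "w") as f:
            json.dump(all_metrics, f, indent=4, sort_keys=True)
```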
transformers
10,420
closed
Unable to convert Facebook/mbart-many-to-many model to ONNX
When I tried to convert the Facebook/mbart-many-to-many model, I was unable to do so and ran into errors. Please help me convert this model to ONNX.
02-26-2021 14:33:48
02-26-2021 14:33:48
I do not think mBART can be converted to ONNX as of now.<|||||>Hi, thanks for the information. Facebook/many-to-many takes about 9 seconds per translation on CPU; is there a way to reduce the inference time?<|||||>Hi @sankarsiva123, have you tried HF's Inference API? 9s per inference seems a bit off: https://huggingface.co/facebook/mbart-large-50-many-to-many-mmt?text=Hello+there+%21+ We do run some optimizations there as part of HF's hosted API, but still it seems like you could have better inference times than 9s. Maybe it depends on what you are sending it? Are you using GPU or CPU?<|||||>Hi @Narsil, yeah, I tried HF's Inference API; it is pretty fast. I am using CPU; I tried both in Google Colab and on my local machine, and it takes around 9s. Am I missing something while using the model, so that my inference time is higher than normal? Also, please let me know if there is a way to reduce the inference time. ![image](https://user-images.githubusercontent.com/58412261/109768056-6c229180-7c1e-11eb-9ad2-b3d9bf0a14ca.png)<|||||>Can you time your inner loop without the tokenizer? (Just making sure it's not that.) Otherwise you seem to use `generate`, which is the right way to go. I don't know Colab's CPU nor yours, but it could definitely be the problem (or the PyTorch version you're running, which might not have been optimized for your CPU instruction set).<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread. Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored.
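Following the suggestion above to time the inner loop without the tokenizer, a minimal timing sketch is shown below; the checkpoint name comes from the thread, while the input text and target language are illustrative choices.

```python
# Time tokenization and generation separately to see where the ~9 s on CPU is spent.
import time
from transformers import MBart50TokenizerFast, MBartForConditionalGeneration

name = "facebook/mbart-large-50-many-to-many-mmt"
model = MBartForConditionalGeneration.from_pretrained(name)
tokenizer = MBart50TokenizerFast.from_pretrained(name)
tokenizer.src_lang = "en_XX"  # source language code

t0 = time.perf_counter()
inputs = tokenizer("Hello there!", return_tensors="pt")
t1 = time.perf_counter()
generated = model.generate(**inputs, forced_bos_token_id=tokenizer.lang_code_to_id["fr_XX"])
t2 = time.perf_counter()

print(tokenizer.batch_decode(generated, skip_special_tokens=True))
print(f"tokenize: {t1 - t0:.3f}s, generate: {t2 - t1:.3f}s")
```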
transformers
10,419
closed
[LED] Correct Docs
# What does this PR do?

Fixes # (issue)

## Before submitting
- [x] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/master/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/master/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?

## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag members/contributors which may be interested in your PR.
02-26-2021 14:29:28
02-26-2021 14:29:28
transformers
10,418
closed
Slow evaluation using Trainer with TPUs in Colab
## Environment info
- `transformers` version: 4.3.3
- Platform: Linux-4.19.112+-x86_64-with-Ubuntu-18.04-bionic
- Python version: 3.7.10
- PyTorch version (GPU?): 1.9.0a0+7a178a8 (False)
- Tensorflow version (GPU?): 2.4.1 (False)
- Using GPU in script?: TPU
- Using distributed or parallel set-up in script?: NO

@sgugger @patrickvonplaten

Model I am using (Bert, XLNet ...): BERT

I'm seeing very slow eval times using the `Trainer` API in conjunction with `XLA` in Google Colab. While the training epochs run at a good speed, evaluating after each epoch takes a very long time. I've tried restricting the dataset size and the tokenization max length with no success. I'm not sure how to check whether it's using `XLA` during evaluation.

The task I am working on is NLI, using `multi-nli` from `datasets`.

## To reproduce
Execute this notebook: https://colab.research.google.com/drive/1dVEfoxGvMAKd0GLnrUJSHZycGtyKt9mr?usp=sharing

## Expected behavior
Evaluation speed should be approximately the same as training.
02-26-2021 14:21:36
02-26-2021 14:21:36
The notebook won't execute on TPU, you need to spawn a function on multiple processes for this (`xm.spawn(train_function)`). That function should contain all the training code including the `Trainer`, but `Trainer.train` by itself won't spawn multiple processes. The recommended way to train on TPU is to follow the steps in the [examples](https://github.com/huggingface/transformers/tree/master/examples#running-on-tpus) to run the scripts.<|||||>Thanks for your answer @sgugger. Is there any plan to add an easier way to use TPUs in Colab?<|||||>I don't know of any easier way than launching the training function (in PyTorch). If you come across an easy example, please let me know and we will try to make the `Trainer` as easy to use.<|||||>Ok, sorry, I think I misunderstood. I thought that I should create a separate module for the training function for the same reason that `multiprocessing` has issues with Jupyter environments. I tried moving everything to a function and using `xmp.spawn(train_nli, args=())`, but I get this error which is not quite clear:
```python
---------------------------------------------------------------------------
Exception                                 Traceback (most recent call last)
<ipython-input-4-d4081c64cb6f> in <module>()
      5
----> 6 xmp.spawn(train_nli, args=())

2 frames
/usr/local/lib/python3.7/dist-packages/torch_xla/distributed/xla_multiprocessing.py in spawn(fn, args, nprocs, join, daemon, start_method)
    393                          join=join,
    394                          daemon=daemon,
--> 395                          start_method=start_method)
    396
    397

/usr/local/lib/python3.7/dist-packages/torch/multiprocessing/spawn.py in start_processes(fn, args, nprocs, join, daemon, start_method)
    155
    156     # Loop on join until it returns True or raises an exception.
--> 157     while not context.join():
    158         pass
    159

/usr/local/lib/python3.7/dist-packages/torch/multiprocessing/spawn.py in join(self, timeout)
    110             raise Exception(
    111                 "process %d terminated with exit code %d" %
--> 112                 (error_index, exitcode)
    113             )
    114

Exception: process 7 terminated with exit code 1
```
Any ideas? (Everything is on the same notebook as before.)<|||||>Ok, I followed this notebook ([T5 on TPU](https://colab.research.google.com/github/patil-suraj/exploring-T5/blob/master/T5_on_TPU.ipynb)) and I managed to solve that error by using **`start_method="fork"`** on `xmp.spawn`. Thanks for your help @sgugger!
```python
def train_nli(index):
    # All the training code here
    ...

xmp.spawn(train_nli, args=(), start_method="fork")
```
The notebook with the full code is [here](https://colab.research.google.com/drive/1dVEfoxGvMAKd0GLnrUJSHZycGtyKt9mr#scrollTo=k-e4NqfrtrJy)
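To make the spawned-function pattern above concrete, here is a minimal sketch of what `train_nli` might contain. The dataset slice, model name, and training arguments are illustrative assumptions; only the overall structure — build everything inside the function, then call `xmp.spawn(..., start_method="fork")` — reflects the thread.

```python
# Illustrative sketch: every object is created inside the spawned function so that
# each TPU process builds its own copy; the Trainer then handles the per-core training.
import torch_xla.distributed.xla_multiprocessing as xmp
from datasets import load_dataset
from transformers import (AutoModelForSequenceClassification, AutoTokenizer,
                          Trainer, TrainingArguments)

def train_nli(index):
    tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
    model = AutoModelForSequenceClassification.from_pretrained("bert-base-uncased", num_labels=3)

    raw = load_dataset("multi_nli", split="train[:1%]")  # small slice, for illustration only

    def tokenize(batch):
        return tokenizer(batch["premise"], batch["hypothesis"], truncation=True, max_length=128)

    train_ds = raw.map(tokenize, batched=True)

    args = TrainingArguments(output_dir="out", per_device_train_batch_size=8, num_train_epochs=1)
    Trainer(model=model, args=args, train_dataset=train_ds, tokenizer=tokenizer).train()

if __name__ == "__main__":
    xmp.spawn(train_nli, args=(), start_method="fork")
```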
transformers
10,417
closed
Don't use sigmoid when num_labels==1
It seems like we use MSELoss when num_labels==1 in the config, i.e. a single-column regression problem. But in the text-classification pipeline this case is treated as a classification problem and a sigmoid is applied. This PR fixes that issue.
02-26-2021 14:06:34
02-26-2021 14:06:34
Sorry, I don't really understand this - could you give a bit more context?<|||||>@patrickvonplaten Please see this https://github.com/huggingface/transformers/blob/master/src/transformers/models/bert/modeling_bert.py#L1515 or any `XForSequenceClassification` model.<|||||>This is a very big breaking change. What do you think of the proposed approach here? https://github.com/huggingface/transformers/pull/8328 I think it allows what you're looking for, but in a backwards-compatible way. The PR is old and the diff isn't very readable, but if that's something that could fit your use case I can update it to the latest code.<|||||>@LysandreJik The way it's done in the pipeline here is absolutely incorrect. If the model is trained using MSELoss when num_labels = 1, it means that it is a regression problem, and in that case we should return raw values, not a sigmoid. Returning raw values can be an option, but for now this fix is important, as the values returned for num_labels=1 in this pipeline are incorrect: it should return the raw value for regression, not a sigmoid.<|||||>@LysandreJik Also, I didn't understand what would this PR break?<|||||>Thank you for your feedback. It was done this way in order to enable inference on [DialogRPT](https://github.com/golsun/DialogRPT). It was the first model that performed sequence classification with a single label, so we defined it this way, as you can see in this issue https://github.com/huggingface/transformers/issues/7493. I understand that this is an issue for most cases, so being able to return raw values is important. However, we must find a way to do it in a backwards-compatible way. We can't just change the code and break all models that rely on that pipeline.

> @LysandreJik Also, I didn't understand what would this PR break?

Well, for one, all of the DialogRPT models on the [hub](https://huggingface.co/models?search=dialogrpt).<|||||>I have updated PR #8328 for better readability. If this suits your use case, I'll try to have it merged ASAP so as not to be blocking for you.<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread. Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored.
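As a sketch of the behaviour this PR argues for — returning raw outputs for a single-label (regression) head instead of applying a sigmoid — the post-processing could look roughly like the function below; the function name and signature are illustrative, not the pipeline's actual code.

```python
# Illustrative post-processing for a sequence-classification head.
# A model configured with num_labels == 1 is trained with MSELoss (regression),
# so its single logit should be returned as-is rather than squashed by a sigmoid.
import numpy as np

def postprocess(logits: np.ndarray, num_labels: int):
    if num_labels == 1:
        return float(logits[0])  # raw regression value, e.g. an STS-B similarity score
    exp = np.exp(logits - logits.max())
    return exp / exp.sum(-1, keepdims=True)  # class probabilities via softmax
```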
transformers
10,416
closed
Add BERTForMultiLabel Classification or Regression
02-26-2021 13:09:20
02-26-2021 13:09:20
Moving to a new PR.
transformers
10,415
closed
Bug when combining grouped beam search and constrained prefix decoding
## Environment info
- `transformers` version: 4.3.3
- Platform: Linux-5.8.0-38-generic-x86_64-with-glibc2.29
- Python version: 3.8.5
- PyTorch version (GPU?): 1.5.1+cu101 (True)
- Tensorflow version (GPU?): not installed (NA)
- Using GPU in script?: yes
- Using distributed or parallel set-up in script?: no

### Who can help
@patrickvonplaten

## Information
Model I am using (Bert, XLNet ...): T5

The problem arises when using: my own modified scripts

## To reproduce
Steps to reproduce the behavior: run this simple script
```python
from transformers import T5TokenizerFast, T5ForConditionalGeneration

tokenizer = T5TokenizerFast.from_pretrained('t5-small')
inp = 'The <extra_id_0> walks in <extra_id_1> park'
enc_inp = tokenizer(inp, return_tensors='pt')
model = T5ForConditionalGeneration.from_pretrained('t5-small')

def prefix_allowed_tokens_fn(batch_id, input_ids):
    return [2]  # dummy value

out = model.generate(
    **enc_inp,
    num_beams=2,
    num_beam_groups=2,
    diversity_penalty=0.2,
    prefix_allowed_tokens_fn=prefix_allowed_tokens_fn
)
```
This produces the following error:
```
Traceback (most recent call last):
  File "debugging/grouped_beam_search.py", line 14, in <module>
    out = model.generate(
  File "/usr/local/lib/python3.8/dist-packages/torch/autograd/grad_mode.py", line 15, in decorate_context
    return func(*args, **kwargs)
  File "/mounts/Users/student/martin/.local/lib/python3.8/site-packages/transformers/generation_utils.py", line 1041, in generate
    return self.group_beam_search(
  File "/mounts/Users/student/martin/.local/lib/python3.8/site-packages/transformers/generation_utils.py", line 2161, in group_beam_search
    next_token_scores = logits_processor(
  File "/mounts/Users/student/martin/.local/lib/python3.8/site-packages/transformers/generation_logits_process.py", line 89, in __call__
    scores = processor(input_ids, scores)
  File "/mounts/Users/student/martin/.local/lib/python3.8/site-packages/transformers/generation_logits_process.py", line 458, in __call__
    for batch_id, beam_sent in enumerate(input_ids.view(-1, self._num_beams, input_ids.shape[-1])):
RuntimeError: shape '[-1, 2, 1]' is invalid for input of size 1
```

## Expected behavior
No error. As far as I can tell, the `PrefixConstrainedLogitsProcessor` still receives the original number of beams even when grouped beam search is used. But it should be the number of subbeams. So replacing `num_beams` with `num_beams // num_beam_groups` in the constructor of `PrefixConstrainedLogitsProcessor` in method `_get_logits_processor` in file `generation_utils.py` should fix it. What do you think?
02-26-2021 11:48:47
02-26-2021 11:48:47
Hey @mnschmit, thanks for your bug report! Yes, you're right -> I think we should indeed replace `num_beams` by `num_beams // num_beam_groups`. Do you want to open a PR to fix it? :-) Otherwise, I can do it as well
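For reference, the suggested fix would amount to roughly the following change inside `_get_logits_processor` in `generation_utils.py`; this is a sketch based on the description above, not a verified patch.

```python
# Inside GenerationMixin._get_logits_processor: build the prefix-constrained processor
# with the sub-beam size so group_beam_search can reshape input_ids per beam group.
if prefix_allowed_tokens_fn is not None:
    processors.append(
        PrefixConstrainedLogitsProcessor(prefix_allowed_tokens_fn, num_beams // num_beam_groups)
    )
```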