Dataset columns:
repo: stringclasses (1 value)
number: int64 (1 to 25.3k)
state: stringclasses (2 values)
title: stringlengths (1 to 487)
body: stringlengths (0 to 234k)
created_at: stringlengths (always 19)
closed_at: stringlengths (always 19)
comments: stringlengths (0 to 293k)
transformers
7,502
closed
Functionality to pass the first few tokens as input to the decoder in the T5 model
I am fine-tuning the T5 model on a downstream sequence-to-sequence task. I wanted to know if it is possible to pass the first few tokens as input to the T5 decoder during inference (as is done in text generation models), in addition to the input provided to the encoder?
10-01-2020 12:20:47
10-01-2020 12:20:47
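For readers finding this issue later: with recent versions of transformers, encoder-decoder models such as T5 accept a `decoder_input_ids` argument in `generate()`, which forces the first few decoder tokens before generation continues. A minimal sketch (the checkpoint, prefix text and version behaviour are illustrative assumptions, not part of the original discussion):

```py
from transformers import T5ForConditionalGeneration, T5Tokenizer

tokenizer = T5Tokenizer.from_pretrained("t5-small")
model = T5ForConditionalGeneration.from_pretrained("t5-small")

input_ids = tokenizer("translate English to German: The house is wonderful.", return_tensors="pt").input_ids
# Decoder prefix: generation continues after these tokens
decoder_prefix = tokenizer("Das Haus", add_special_tokens=False, return_tensors="pt").input_ids
outputs = model.generate(input_ids, decoder_input_ids=decoder_prefix)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```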
transformers
7,501
closed
Add GPT2ForSequenceClassification based on DialogRPT
# What does this PR do? This PR implements `GPT2ForSequenceClassification` in order to support DialogRPT. Closes https://github.com/huggingface/transformers/issues/7493. `GPT2ForSequenceClassification` uses the last token in order to do the classification, as other causal models (e.g. GPT-1) do. Since it does classification on the last token, it needs to know the position of the last token. If a `pad_token_id` is defined in the configuration, it finds the last token that is not a pad token in each row. If no `pad_token_id` is defined, it simply takes the last value in each row of the batch. Since it cannot guess the pad tokens when `inputs_embeds` are passed instead of `input_ids`, it does the same (takes the last value in each row of the batch). Here's how to replicate the results shown in the [original implementation](https://github.com/golsun/DialogRPT#use-rankers-only): ```py from transformers import GPT2Tokenizer, GPT2ForSequenceClassification import torch tokenizer = GPT2Tokenizer.from_pretrained("gpt2") model = GPT2ForSequenceClassification.from_pretrained("directory_where_pth_config_are_saved") model_input = tokenizer.encode("I love NLP!<|endoftext|>Here’s a free textbook (URL) in case anyone needs it.", return_tensors="pt") result = model(model_input, return_dict=True) final_output = torch.sigmoid(result.logits) print(final_output) # tensor([[0.6129]], grad_fn=<SigmoidBackward>) ``` Once this PR is merged I'll open two "Good first issues": - Implement `GPT2ForSequenceClassification` in TF2 - Implement sequence classification models for other causal transformers
10-01-2020 11:57:48
10-01-2020 11:57:48
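To illustrate the last-token selection the PR describes, here is a rough sketch (not the exact library code) of how the position of the last non-pad token can be found for each row:

```py
import torch

# Toy batch where 0 plays the role of pad_token_id
input_ids = torch.tensor([[10, 11, 12, 0, 0],
                          [20, 21, 22, 23, 24]])
pad_token_id = 0

# Index of the last non-pad token in each row
sequence_lengths = torch.ne(input_ids, pad_token_id).sum(-1) - 1

# logits has shape (batch, seq_len, num_labels); keep only the logit at that position
logits = torch.randn(2, 5, 1)
pooled_logits = logits[torch.arange(2), sequence_lengths]
print(pooled_logits.shape)  # torch.Size([2, 1])
```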
transformers
7,500
closed
Truncated outputs while fine-tuning 'bart-base' on XSUM [Summarization Task]
Hey! I am trying to finetune bart-base model on the XSUM using standard commands. In the test-generations.txt file, the outputs I am getting after a few epochs (2-3) are truncated arbitrarily. Here is the exact command I am using: `./finetune.sh --data_dir $XSUM_DIR --train_batch_size=8 --eval_batch_size=8 --output_dir=xsum_results --num_train_epochs 1 --model_name_or_path facebook/bart-base` xsum_results is the directory I created and I am running this inside the examples/seq2seq directory. I referred to these issues but could not find anything that could help me: https://github.com/huggingface/transformers/issues/5656 https://github.com/huggingface/transformers/issues/6502 Some examples of the outputs I am getting: German carmaker Daimler has reported a rise in sales of its cars and trucks in Angelina Jolie has been honoured at a film festival in Bosnia, where she was the Cuba's President Raul Castro has said he will introduce a series of reforms to the country People who are suicidal should be given more help to stop them from jumping, a charity has My pip freeze: `absl-py==0.10.0 cachetools==4.1.1 certifi==2020.6.20 chardet==3.0.4 click==7.1.2 dill==0.3.2 filelock==3.0.12 future==0.18.2 gitdb==4.0.5 GitPython==3.1.8 google-auth==1.21.3 google-auth-oauthlib==0.4.1 grpcio==1.32.0 idna==2.10 joblib==0.16.0 Markdown==3.2.2 nlp==0.4.0 nltk==3.5 numpy==1.19.2 oauthlib==3.1.0 packaging==20.4 pandas==1.1.2 Pillow==7.2.0 portalocker==2.0.0 protobuf==3.13.0 pyarrow==1.0.1 pyasn1==0.4.8 pyasn1-modules==0.2.8 pyparsing==2.4.7 python-dateutil==2.8.1 pytorch-lightning==0.9.0 pytz==2020.1 PyYAML==5.3.1 regex==2020.9.27 requests==2.24.0 requests-oauthlib==1.3.0 rouge-score==0.0.4 rsa==4.6 sacrebleu==1.4.14 sacremoses==0.0.43 sentencepiece==0.1.91 six==1.15.0 smmap==3.0.4 tensorboard==2.2.0 tensorboard-plugin-wit==1.7.0 tokenizers==0.8.1rc2 torch==1.6.0+cu101 torchvision==0.7.0+cu101 tqdm==4.49.0 transformers @ git+https://github.com/yashgupta-7/transformers@9e68d075a4100906509170498480823e7e61874a urllib3==1.25.10 Werkzeug==1.0.1 xxhash==2.0.0 zipp==3.2.0` Here is the recommended setting for XSUM. `--max_target_length=60 --val_max_target_length=60 --test_max_target_length=100` But the default values are all higher than these, so this cannot be a problem in my opinion. It would be great if someone can point me to the potential problems that may be the reason. I am looking forward to fine-tuning on a custom dataset and really want the standard XSUM to get working!
10-01-2020 11:22:28
10-01-2020 11:22:28
Try adding `decoder_start_token_id=2` to `best_tfmr/config.json` and let me know if that changes anything!<|||||>I just did the above-mentioned change and decoded (run_eval.py) without training any further. Still, the issue persists. <|||||>Ok, you could try re-training/training more with new training code. We can't reproduce this on `master`<|||||>okay, thanks!
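One way to apply the `decoder_start_token_id=2` suggestion above without editing the JSON by hand (the checkpoint path is a placeholder for wherever `best_tfmr` was saved):

```py
from transformers import AutoConfig

# Load, patch and re-save the fine-tuned checkpoint's config
config = AutoConfig.from_pretrained("xsum_results/best_tfmr")
config.decoder_start_token_id = 2
config.save_pretrained("xsum_results/best_tfmr")
```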
transformers
7,499
closed
German DistilBERT not available?
I am trying to use distilbert-base-german-cased, which is listed here https://huggingface.co/transformers/pretrained_models.html, but I get the message that this model is not available. It also does not seem to be listed here: https://huggingface.co/models?filter=tf,de Is this just a temporary issue?
10-01-2020 11:03:38
10-01-2020 11:03:38
Or is this model only available for pytorch and not TF?<|||||>https://huggingface.co/distilbert-base-german-cased ^^ currently only the Pytorch weights, but you can load into TF pretty easily (follow the README/doc). If needed we can upload the converted TF weights (cc @stefan-it who I believe trained this model? This predates our user/organization namespaces)<|||||>And I've added a `de` language tag so that the model is discoverable via https://huggingface.co/models?filter=de&search=distilbert Thanks for reporting!<|||||>Oh, we have an open issue regarding the TF checkpoint 😅 https://github.com/dbmdz/berts/issues/8 @julien-c could you convert the model and upload it (not sure if I have access to root S3), thanks :heart: <|||||>> but you can load into TF pretty easily (follow the README/doc). Could you show me where I can read about this? I can only find this from the getting started tour: `bert_model = TFDistilBertModel.from_pretrained('distilbert-base-german-cased', from_tf=False) ` ...but this does not work...<|||||>`from_pt=True`?<|||||>Ok, great. So easy.^^
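Putting the resolution of this thread into runnable form, loading the PyTorch-only checkpoint into a TF model looks roughly like this (PyTorch must be installed for the on-the-fly conversion):

```py
from transformers import TFDistilBertModel, DistilBertTokenizer

tokenizer = DistilBertTokenizer.from_pretrained("distilbert-base-german-cased")
# from_pt=True converts the PyTorch weights on the fly, as discussed above
model = TFDistilBertModel.from_pretrained("distilbert-base-german-cased", from_pt=True)
```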
transformers
7,498
closed
Update README.md
Making the transformers README more robust. # What does this PR do? Fixes # (issue) ## Before submitting - [x] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case). - [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section? - [ ] Was this discussed/approved via a GitHub issue or the [forum](https://discuss.huggingface.co/)? Please add a link to it if that's the case. - [ ] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/master/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/master/docs#writing-source-documentation). - [ ] Did you write any new necessary tests? ## Who can review? Anyone in the community is free to review the PR once the tests have passed. Feel free to tag members/contributors who may be interested in your PR.
10-01-2020 10:23:52
10-01-2020 10:23:52
transformers
7,497
closed
How to generate data using beam search from a custom GPT-2 model?
# ❓ Questions & Help ## Details I have a custom model with a classification head and an LM head. ` self.config = AutoConfig.from_pretrained("gpt2", num_labels=3) self.base_model = AutoModel.from_pretrained("gpt2", config=self.config) self.classifier = nn.Sequential( nn.Linear(self.config.hidden_size, self.config.num_labels), ) self.lm_head = nn.Linear(self.base_model.config.n_embd, self.base_model.config.vocab_size, bias=False)` I want to generate sentences with this model (given an initial prefix) via beam search. How can I achieve that? I know that an LM with a double head exists, but it does not fit my use case.
10-01-2020 06:45:55
10-01-2020 06:45:55
Here's an example using beam search with GPT-2: ```py from transformers import GPT2LMHeadModel, GPT2Tokenizer tokenizer = GPT2Tokenizer.from_pretrained("gpt2") model = GPT2LMHeadModel.from_pretrained("gpt2") input_ids = tokenizer("The day starts with", return_tensors='pt')['input_ids'] print(tokenizer.decode(model.generate(input_ids, num_beams=3)[0])) ``` Result: ``` The day starts with a long walk to the top of the hill. The first thing you ```<|||||>[Here's the doc for the `generate` method](https://huggingface.co/transformers/main_classes/model.html#transformers.generation_utils.GenerationMixin.generate)<|||||>Thanks @LysandreJik for the response but I have a custom model, I want to know how can I generate using my model<|||||>@LysandreJik - How can I generate sentences using my custom model?<|||||>I would recommend you check the[ source code for the `generate` method](https://github.com/huggingface/transformers/blob/master/src/transformers/generation_utils.py#L111) and see how the beam search is implemented. It is not trivial, however. Maybe @sshleifer and @patrickvonplaten have better tips on how to best do this.<|||||>This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions. <|||||>@nrjvarshney Did you find any suitable way to use `generate` function for a custom model? I am facing a similar issue with a model of mine, and would be really grateful if you could let me know how to solve the issue.
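One possible way to keep `generate()` working for a model like the one in the question, offered as a sketch rather than the approach recommended in the thread: subclass `GPT2LMHeadModel` (which already implements the generation machinery) and attach the extra classification head on top, instead of wrapping `AutoModel`. The 3-label head below is hypothetical:

```py
import torch.nn as nn
from transformers import GPT2LMHeadModel, GPT2Tokenizer

class GPT2WithClassifier(GPT2LMHeadModel):
    def __init__(self, config):
        super().__init__(config)
        # Hypothetical 3-label classification head on top of the hidden states
        self.classifier = nn.Linear(config.n_embd, 3)

tokenizer = GPT2Tokenizer.from_pretrained("gpt2")
model = GPT2WithClassifier.from_pretrained("gpt2")  # classifier weights are newly initialized

input_ids = tokenizer("The day starts with", return_tensors="pt")["input_ids"]
# Beam search still works because the LM head and generation hooks are inherited
print(tokenizer.decode(model.generate(input_ids, num_beams=3)[0]))
```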
transformers
7,496
closed
BertForSequenceClassification MSELoss() without normalizing using sigmoid/softmax
- `transformers` version: 3.3.0 - Platform: Darwin-18.7.0-x86_64-i386-64bit - Python version: 3.7.4 - PyTorch version (GPU?): 1.6.0 (False) - Tensorflow version (GPU?): not installed (NA) - Using GPU in script?: No - Using distributed or parallel set-up in script?: No @LysandreJik @sshleifer I'm trying to use BartForSequenceClassification() to do a regression, which should use a MSELoss() when (num_labels=1), as stated in the documentation. However, when I went over the source code for modeling_Bart.py, it didn't seem like the regression functionality with MSELoss() was added in the source code. Within the class BartForSequenceClassfication(), It only has CrossEntropyLoss() to do classification. I wonder if you will be adding this functionality. So I went to BertForSequenceClassification() class to see how it did it, and I found that it might have a problem. In the class BertForSequenceClassification(BertPreTrainedModel) (line 1352): if labels is not None: if self.num_labels == 1: # We are doing regression loss_fct = MSELoss() loss = loss_fct(logits.view(-1), labels.view(-1)) else: loss_fct = CrossEntropyLoss() loss = loss_fct(logits.view(-1, self.num_labels), labels.view(-1)) Here the MSELoss() and CrossEntropyLoss() are both loss functions from pytorch. So you passed in the logits, which are unnormalized probabilities, to both of the loss functions. It is ok to do so for the CrossEntropyLoss(), since from pytorch's documentation they expect the inputs to be unnormalized logits and from their source code they first log softmax it before actually computing the loss, but I don't think it's okay to do the same for the MSELoss() functions. If you look at their implementation of it, it did not normalize it first using either softmax or sigmoid, and they also indicate that the function expects unnormalized logits in their documentation. I'm not sure if this behavior is intended (which is unlikely since we want to normalize it first before loss functions or there could be gradient explosions), but I think this confusion/inconsistency from pytorch may cause a problem and you probably want to change it. Please correct me if I'm wrong and thanks for this wonderful package! Thanks!
10-01-2020 06:43:29
10-01-2020 06:43:29
> * `transformers` version: 3.3.0 > > * Platform: Darwin-18.7.0-x86_64-i386-64bit > > * Python version: 3.7.4 > > * PyTorch version (GPU?): 1.6.0 (False) > > * Tensorflow version (GPU?): not installed (NA) > > * Using GPU in script?: No > > * Using distributed or parallel set-up in script?: No > > > @LysandreJik @sshleifer > > I'm trying to use BartForSequenceClassification() to do a regression, which should use a MSELoss() when (num_labels=1), as stated in the documentation. However, when I went over the source code for modeling_Bart.py, it didn't seem like the regression functionality with MSELoss() was added in the source code. Within the class BartForSequenceClassfication(), It only has CrossEntropyLoss() to do classification. I wonder if you will be adding this functionality. > > So I went to BertForSequenceClassification() class to see how it did it, and I found that it might have a problem. > > In the class BertForSequenceClassification(BertPreTrainedModel) (line 1352): > > if labels is not None: > if self.num_labels == 1: > # We are doing regression > loss_fct = MSELoss() > loss = loss_fct(logits.view(-1), labels.view(-1)) > else: > loss_fct = CrossEntropyLoss() > loss = loss_fct(logits.view(-1, self.num_labels), labels.view(-1)) > > Here the MSELoss() and CrossEntropyLoss() are both loss functions from pytorch. > > So you passed in the logits, which are unnormalized probabilities, to both of the loss functions. It is ok to do so for the CrossEntropyLoss(), since from pytorch's documentation they expect the inputs to be unnormalized logits and from their source code they first log softmax it before actually computing the loss, but I don't think it's okay to do the same for the MSELoss() functions. If you look at their implementation of it, it did not normalize it first using either softmax or sigmoid, and they also indicate that the function expects unnormalized logits in their documentation. I'm not sure if this behavior is intended (which is unlikely since we want to normalize it first before loss functions or there could be gradient explosions), but I think this confusion/inconsistency from pytorch may cause a problem and you probably want to change it. Please correct me if I'm wrong and thanks for this wonderful package! > > Thanks! I am trying to use BertforSequeneClassification to do regression too and faced the same issue. Do u have any walk around? thx! <|||||>@liusiyi641 Just to clarify - do you expect to rewrite that code snippet as follows: ```python if labels is not None: if self.num_labels == 1: # We are doing regression loss_fct = MSELoss() normalizer = nn.Sigmoid() logits = normalizer(logits) * (B - A) + A # we expect the regression values be on [A, B] interval loss = loss_fct(logits.view(-1), labels.view(-1)) else: loss_fct = CrossEntropyLoss() loss = loss_fct(logits.view(-1, self.num_labels), labels.view(-1)) ``` right? I tried this change but it doesn't have any influence on the training process in my case: the same accuracy, the same behavior of the learning curve.<|||||>This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions. <|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread. 
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored.
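For reference, a minimal regression setup with `num_labels=1` and float labels; whether the raw logits should additionally be squashed with a sigmoid, as debated above, is left open here (checkpoint and target value are just examples):

```py
import torch
from transformers import BertForSequenceClassification, BertTokenizer

tokenizer = BertTokenizer.from_pretrained("bert-base-uncased")
# num_labels=1 switches the head's loss to MSELoss on the raw logits
model = BertForSequenceClassification.from_pretrained("bert-base-uncased", num_labels=1)

inputs = tokenizer("This movie was great", return_tensors="pt")
labels = torch.tensor([[0.9]])  # a float regression target
outputs = model(**inputs, labels=labels, return_dict=True)
print(outputs.loss, outputs.logits)
```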
transformers
7,495
closed
Quick questions about `BertLMHeadModel`.
Hello, I have a few questions about `BertLMHeadModel`: 1. Is `BertLMHeadModel` used for causal language modeling (next-token prediction), as is the case for `GPT2LMHeadModel`? 2. For `GPT2LMHeadModel`, I can just specify `labels = input_ids` for convenience. Can I specify the `labels` in this way for `BertLMHeadModel` as well? Thanks,
10-01-2020 02:17:15
10-01-2020 02:17:15
BERT is a Masked Language Model. <|||||>This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.
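For completeness: BERT can technically be run as a causal decoder via `BertLMHeadModel` with `is_decoder=True`, and like `GPT2LMHeadModel` it accepts `labels=input_ids` and shifts them internally. A small sketch (the checkpoint choice is illustrative, and the caveat above stands: the published checkpoints were pre-trained as masked LMs, not left-to-right LMs):

```py
from transformers import BertConfig, BertLMHeadModel, BertTokenizer

tokenizer = BertTokenizer.from_pretrained("bert-base-uncased")
config = BertConfig.from_pretrained("bert-base-uncased", is_decoder=True)
model = BertLMHeadModel.from_pretrained("bert-base-uncased", config=config)

inputs = tokenizer("Hello, my dog is cute", return_tensors="pt")
outputs = model(**inputs, labels=inputs["input_ids"], return_dict=True)
print(outputs.loss)  # next-token prediction loss
```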
transformers
7,494
closed
Is the multiple-choice head for the pre-trained `LongformerForMultipleChoice` model pre-trained?
Hello, Is the multiple-choice head of the pre-trained `LongformerForMultipleChoice` model pre-trained as well? I am asking because, for the pre-trained `GPT2DoubleHeadsModel` for example, the main body of the model is trained but its multiple-choice head is not. Is the multiple-choice head of the pre-trained `LongformerForMultipleChoice` model likewise untrained, as is the case for `GPT2DoubleHeadsModel`? Thank you,
10-01-2020 02:15:06
10-01-2020 02:15:06
Hi! This depends on the checkpoint. If you're using a checkpoint pre-trained on multiple-choice, then it's very possible that it is pre-trained. If you're using a checkpoint pre-trained on another task, it might not be pre-trained. You should be wary of the task on which the model was trained when leveraging a model with a pre-trained head, as it might not overlap with your current task.
transformers
7,493
closed
Sharing Microsoft's DialogRPT (new dialog ranking model)
# 🌟 New model addition ## Model description Thanks for the awesome work! [DialogRPT](https://github.com/golsun/DialogRPT) (Dialog Ranking Pretrained Transformers) is a set of GPT-2 based dialogue ranking models recently released with an [EMNLP paper](https://arxiv.org/abs/2009.06978) by Microsoft Research. It's a follow-up to [DialoGPT](https://huggingface.co/transformers/model_doc/dialogpt.html) (thanks for hosting it!). The architecture is pretty simple: a `GPT2Model` followed by a `torch.nn.Linear(n_embd, 1, bias=False)`, implemented based on a [previous HuggingFace commit](https://github.com/huggingface/transformers/commit/4d456542e9d381090f9a00b2bcc5a4cb07f6f3f7). At first I tried to create a model card for it, but then realized that no existing model architecture in HuggingFace seems to be compatible with DialogRPT. I noticed a lot of BERT-based sequence classification models, but ours is GPT-2 based. If there's a simple fix (or I missed something) please let me know! If an implementation in modeling_gpt2.py is necessary, I'm also glad to help! ## Open source status * [x] the model implementation is available: (https://github.com/golsun/DialogRPT) * [x] the model weights are available: (https://github.com/golsun/DialogRPT) * [x] who are the authors: @golsun @dreasysnail
10-01-2020 00:16:55
10-01-2020 00:16:55
Hi @golsun! Thanks a lot for opening an issue and offering to contribute it! Indeed, there is no `GPT2ForSequenceClassification` model in the library (yet!) I'm adding it right now with the goal of supporting DialogRPT. I'll get back to you in a bit.<|||||>Hi @golsun! `GPT2ForSequenceClassification` has been implemented on #7501 and I verified that I obtain the same results as you do on your README using your examples. You should only need to upload your models on the model hub now! Some helpers regarding the configuration: - You should upload a model configuration on the hub, for every model. - You can simply copy-paste the `gpt2-medium` configuration that you can find [here](https://s3.amazonaws.com/models.huggingface.co/bert/gpt2-medium-config.json). - You will need to add a `num_labels=1` field to these configurations. - In the `architectures` field, you should put `GPT2ForSequenceClassification`<|||||>wow, super fast!!! thank you @LysandreJik , I'll update my repo to reflect this once the [pull](https://github.com/huggingface/transformers/pull/7501) is merged. <|||||>The pul request is now merged @golsun!<|||||>Thank you so much @LysandreJik ! I just tried `GPT2ForSequenceClassification` and it works! 👍 Then I created this [model card](https://huggingface.co/microsoft/DialogRPT-updown), but `model = AutoModelForSequenceClassification.from_pretrained("microsoft/DialogRPT-updown")` gives me the following error, which can be reproduced with [this Notebook](https://colab.research.google.com/drive/1cAtfkbhqsRsT59y3imjR1APw3MHDMkuV?usp=sharing): ``` /content/transformers/src/transformers/modeling_auto.py in from_pretrained(cls, pretrained_model_name_or_path, *model_args, **kwargs) 1203 config.__class__, 1204 cls.__name__, -> 1205 ", ".join(c.__name__ for c in MODEL_FOR_SEQUENCE_CLASSIFICATION_MAPPING.keys()), 1206 ) 1207 ) ValueError: Unrecognized configuration class <class 'transformers.configuration_gpt2.GPT2Config'> for this kind of AutoModel: AutoModelForSequenceClassification. Model type should be one of DistilBertConfig, AlbertConfig, CamembertConfig, XLMRobertaConfig, BartConfig, LongformerConfig, RobertaConfig, SqueezeBertConfig, BertConfig, XLNetConfig, MobileBertConfig, FlaubertConfig, XLMConfig, ElectraConfig, FunnelConfig, DebertaConfig. ``` <|||||>Indeed, this should be solved by #7630.<|||||>thank you @LysandreJik `AutoModelForSequenceClassification` works now. The [inference webpage](https://huggingface.co/microsoft/DialogRPT-updown) still gives the `Unrecognized configuration class` error but I guess it will sync with the latest code soon. I'm going to introduce model card in the original repo. Thanks again for the help!<|||||>We just updated the API inference so that it uses the latest code. I've taken the liberty to add a padding token to your models, in your configuration (`pad_token_id: 50256`) and in the `special_tokens_map.json`: `pad_token: "<|endoftext|>"`, as it is necessary for the models to have a padding token to run in the API inference. I've taken these values from your code [here](https://github.com/golsun/DialogRPT/blob/master/src/feeder.py#L51) and [here](https://github.com/golsun/DialogRPT/blob/master/src/feeder.py#L18). Models should now work correctly in the [inference webpage :) ](https://huggingface.co/microsoft/DialogRPT-width?text=I+like+you.+I+love+you)<|||||>Great! Thank you for updating the config and special_tokens_map for us! :) The inference webpage will output a score of 1 no matter what input is. 
I guess it's because it outputs `softmax(logits)`, which is always 1 if `num_labels==1`. Maybe the following if-else will fix it? ``` if num_labels == 1: return torch.sigmoid(logits) else: return torch.softmax(logits) ``` the case `num_labels==1` follows the DialogRPT code [here](https://github.com/golsun/DialogRPT/blob/master/src/model.py#L95)<|||||>You're correct! Solving that in #7726.<|||||>Also @golsun on the inference API, you can have custom label names (instead of just `LABEL_0` here) if you set your label names in your `config.json` See https://huggingface.co/roberta-large-mnli's config.json file for an example<|||||>Awesome! thank you @LysandreJik @julien-c
transformers
7,492
closed
`run_squad_trainer` doesn't actually use a Rust tokenizer + errors in `squad_convert_example_to_features` when using a Rust tokenizer
## Environment info <!-- You can run the command `transformers-cli env` and copy-and-paste its output below. Don't forget to fill out the missing fields in that output! --> - `transformers` version: 3.3.1 - Platform: Linux-4.14.35-1902.303.4.1.el7uek.x86_64-x86_64-with-oracle-7.8 - Python version: 3.6.8 - PyTorch version (GPU?): 1.6.0 (True) - Tensorflow version (GPU?): not installed (NA) - Using GPU in script?: yes - Using distributed or parallel set-up in script?: no and - `transformers` version: 3.3.1 - Platform: macOS-10.15.6-x86_64-i386-64bit - Python version: 3.8.5 - PyTorch version (GPU?): 1.6.0 (False) - Tensorflow version (GPU?): not installed (NA) - Using GPU in script?: no - Using distributed or parallel set-up in script?: no ### Who can help @mfuntowicz @LysandreJik @patil-suraj ## Information Model I am using (Bert, XLNet ...): bert-base-uncased The problem arises when using: * [x] the official example scripts: (give details below) * [x] my own modified scripts: (give details below) The tasks I am working on is: * [x] an official GLUE/SQUaD task: (give the name) * [ ] my own task or dataset: (give details below) Firstly, in `run_squad_trainer.py`, I noticed that the "use_fast" arg doesn't get propagated into the tokenizer instantiation: https://github.com/huggingface/transformers/blob/0acd1ffa09a06084efa7cfa0e4e9d97cffdda5f9/examples/question-answering/run_squad_trainer.py#L107 Probably should be ``` tokenizer = AutoTokenizer.from_pretrained( model_args.tokenizer_name if model_args.tokenizer_name else model_args.model_name_or_path, cache_dir=model_args.cache_dir, use_fast=model_args.use_fast ) ``` However, when I make that change, the script hangs at the call to `squad_convert_examples_to_features` in SquadProcessor. So, I did a little digging. The error is in `squad_convert_example_to_features` and seems to be due to inconsistencies in the behavior of `tokenizer.encode_plus` between the Python and Rust tokenizers, detailed below. I've also provided a [gist](https://gist.github.com/k8si/a143346dfa875c28d98e95cba1f82f1b) that hopefully elucidates & will help reproduce each of these points. I tested both BertTokenizer/BertTokenizerFast and GPT2Tokenizer/GPT2TokenizerFast. 1) Python tokenizers handle negative values for `stride`, Rust tokenizers throw an exception (`OverflowError: can't convert negative int to unsigned`) 2) For sequence pairs, Python tokenizers are fine if the first arg (`text`) is a list of ints and the second arg (`text_pair`) is a list of strings. The Rust tokenizers throw an exception `ValueError: PreTokenizedInputSequence must be Union[List[str], Tuple[str]]`. (Furthermore, the typehints for these arguments indicate that a string, a list of strings, or a list of ints are all fine.) 3) Leaving the `is_split_into_words` kwarg at its default value (`False`), then running `tokenizer.encode_plus(list of ints)` works fine for the Python tokenizers. The Rust tokenizers raise an exception `ValueError: TextInputSequence must be str`. 4) When running on a pair of sequences and setting `return_tensors=None`, the Python tokenizers return an output dict with input_ids (and other elements) as a list of ints i.e.`input_ids = [id1, id2, ...]` whereas the Rust tokenizers return a dict with input_ids as a list of list of ints i.e. `input_ids = [[id1, id2, ...]]`. I also noticed that if you set `return_tensors="pt"`, both the Python and Rust tokenizers return `input_ids = tensor([[id1, id2, ...]])`. 
5) When `return_overflowing_tokens=True`, the Python tokenizers return a list of the overflowing tokens at key `overflowing_tokens` as expected. The Rust tokenizers return them at key `overflow_to_sample_mapping` which is not documented anywhere, as far as I can tell. The values seem to be different for the Python output vs. Rust output. 6) Running the same procedure on the same input twice produces the same result each time for the Python tokenizer. For the Rust tokenizer, the result of the second run is **different**. I am not familiar enough with the Rust tokenizer internals at this point to have a theory as to why this is the case. Anyway, this is the point at which I stopped debugging and decided to file an issue. ## To reproduce Steps to reproduce the behavior: 1. Download squad 2.0 dataset from ["official" squad website](https://rajpurkar.github.io/SQuAD-explorer/) 2. Make fix in `run_squad_training.py` described above to correctly instantiate a Rust tokenizer 3. Run script: `python examples/question-answering/run_squad_trainer.py --model_name_or_path bert-base-uncased --use_fast --output_dir "./outputs-squad" --do_train --data_dir "./squad-data" --version_2_with_negative` Also see gist detailing issues described above: https://gist.github.com/k8si/a143346dfa875c28d98e95cba1f82f1b ## Expected behavior 1) I expected `run_squad_trainer.py` to use a Rust tokenizer when the `use_fast` arg was set to True 2) I expected `SquadProcessor.squad_convert_example_to_features` to not raise exceptions when processing squad data when using a Rust tokenizer 3) I expected `tokenizer.encode_plus` to return the same outputs given the same inputs, regardless of whether the tokenizer is a Rust tokenizer or a Python tokenizer
09-30-2020 23:47:44
09-30-2020 23:47:44
Hello! Indeed, the Rust tokenizers are not handled by the SQuAD data processing. This is one item we would like to resolve when refactoring the data processing methods, which will soon be implemented directly in `datasets` rather than in `transformers`. Thank you for your detailed issue!<|||||>For what it's worth, my main issue is with the behavioral issues with the Python vs. Rust tokenizers, not with the SQuAD data processing itself (I can easily write my own SQuAD processor but writing my own tokenizers is more work--the tokenizers are part of why I've been using the library in the first place). It places a sizeable burden on people coding against the library that different kwargs result in completely different behaviors (and different errors) between the Rust vs. Python implementations, and these differences often aren't documented. And Item # 6 seems like a fundamental error somewhere in the Rust codebase. Are there any plans to address these issues in the near future? For what it's worth, I've never had an issue with the Python tokenizers. I'd like to use the Rust ones because they're Fast, plus I can train them myself easily, but navigating the API weirdness has been a slog.<|||||>You're correct that there is currently a mismatch between the python and rust tokenizers, and thank you for writing such a detailed issue explaining all of your pain points. We'll keep a close eye to the issues mentioned here as we continue working and improving the compatibility between the two APIs, which is something we will be focusing on in the near future.<|||||>This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.
transformers
7,491
closed
Update README.md
@julien-c Model is now fine-tuned on Transformers 3.1.0. Previous model fine-tuned on Transformers 2.3.0 is out-of-date.
09-30-2020 23:06:15
09-30-2020 23:06:15
Nice! FYI we'll have model versioning rolled out in ~1 month or so
transformers
7,490
closed
Clean the Trainer state
# What does this PR do? This PR cleans the fields used inside the `Trainer` to store state and gathers them all in a clear class named `TrainerState` that is typed, so the user knows exactly what they can access when subclassing and overriding methods (and, for my next step, when writing callbacks). As a result, the `log_history` does not need to be saved and loaded separately, and the hack to get the step we were at from the checkpoint folder name is removed (the user can copy any checkpoint saved by the `Trainer` in a previous training to any folder and use it for resuming training). This PR adds a test of the full reproducibility of a training resumed from a checkpoint. There is a tiny breaking change that would affect users who trained a model using an earlier version of transformers and would like to resume it with a version obtained after this commit, but I don't think this matters much. It also enforces that the `TrainingArguments` passed to a `Trainer` are not changed during training, to avoid subtle bugs when launching several trainings in a row. This is verified by a new test.
09-30-2020 21:19:45
09-30-2020 21:19:45
Tests pass in a multi-GPU environment and the specific `test_distributed_trainer` passes too. There is just one test that requires a batch size that is not too big, so I manually skip it if there are more than 2 GPUs.<|||||>Nice PR!!! I think it is a nice addition. Only the `global_step` won't be necessary as it is already integrated by default into the TF checkpoints. This is perfect! I also plan to add Keras callbacks to the TF Trainer that can handle a few more arguments that are in this "state" class, such as the best checkpoint and metrics. It won't change much in the TF Trainer and could be very easily integrated, as most of the "state" arguments work the same way for both Trainers. We can clearly imagine a much broader usage for this state class.<|||||>Checked that the tests on TPU are passing, so we can safely merge this.
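To give a feel for the change, here is a rough sketch of the idea behind a typed trainer state; the field names are illustrative and not necessarily the exact `TrainerState` API introduced by this PR:

```py
from dataclasses import dataclass, field
from typing import List, Optional

@dataclass
class TrainerState:
    # Illustrative fields: epoch/step counters plus the accumulated log history,
    # so the whole state can be saved with a checkpoint and restored on resume
    # instead of being parsed back from checkpoint folder names.
    epoch: Optional[float] = None
    global_step: int = 0
    max_steps: int = 0
    log_history: List[dict] = field(default_factory=list)
```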
transformers
7,489
closed
Use of global attention of Longformer when generating
I'm training Longformer2Roberta; the encoder part of this Seq2Seq model is Longformer. The one feature Longformer brings is global attention. I found that it is used during training, but it is never used at generation time. I guess it should be used somewhere here: https://github.com/huggingface/transformers/blob/03e46c1de3864b8464a1b40d2a414b35f6b7f0df/src/transformers/generation_utils.py#L402. I guess @patrickvonplaten is working on these models.
09-30-2020 21:06:19
09-30-2020 21:06:19
Hey @alexyalunin - you are completely right! I'm working on a bigger generation refactor at the moment to better handle cases like this. For this case, I would propose to modify the code yourself and do a little hack, something along the lines ```python if "global_attention_mask" in model_kwargs: encoder_outputs: ModelOutput = encoder(input_ids, attention_mask=attention_mask, global_attention_mask=global_attention_mask, return_dict=True) else: encoder_outputs: ModelOutput = encoder(input_ids, attention_mask=attention_mask, return_dict=True) ``` I don't want to merge this into master because it's quite hacky and the `generate()` function needs a refactor before we start adding more and more hacks. Hope this works for you for now.<|||||>This should be solved soon by the new generate() design: #6949 in like ~1,2 weeks<|||||>This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.
transformers
7,488
closed
[s2s] fix kwargs style
# What does this PR do? Fixes # (issue) ## Before submitting - [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case). - [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section? - [ ] Was this discussed/approved via a GitHub issue or the [forum](https://discuss.huggingface.co/)? Please add a link to it if that's the case. - [ ] Did you make sure to update the documentation with your changes? - [ ] Did you write any new necessary tests? ## Who can review? Anyone in the community is free to review the PR once the tests have passed. Feel free to tag members/contributors who may be interested in your PR.
09-30-2020 20:59:42
09-30-2020 20:59:42
transformers
7,487
closed
[s2s] Fix t5 warning for distributed eval
# What does this PR do? Fixes # (issue) ## Before submitting - [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case). - [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section? - [ ] Was this discussed/approved via a GitHub issue or the [forum](https://discuss.huggingface.co/)? Please add a link to it if that's the case. - [ ] Did you make sure to update the documentation with your changes? - [ ] Did you write any new necessary tests? ## Who can review? Anyone in the community is free to review the PR once the tests have passed. Feel free to tag members/contributors who may be interested in your PR.
09-30-2020 19:18:21
09-30-2020 19:18:21
transformers
7,486
closed
Using BERT for spelling correction
I am currently working on the task of spelling correction. I used BERT by masking the misspelled word to get predictions with their probability scores. However, the results are not so good. Hence, I want to fine-tune BERT. For this spelling correction task, I would like to know which method is suitable for fine-tuning BERT. I would be glad if anyone could help me in this regard.
09-30-2020 19:08:27
09-30-2020 19:08:27
This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.
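A minimal sketch of the masked-prediction approach described in the question, using the fill-mask pipeline (the sentence and checkpoint are just examples, and this is the baseline the author found insufficient rather than a fine-tuning recipe):

```py
from transformers import pipeline

fill_mask = pipeline("fill-mask", model="bert-base-uncased")

# Mask the suspected misspelling and let BERT rank replacement candidates
for candidate in fill_mask("I would like a cup of [MASK] with milk."):
    print(candidate["token_str"], round(candidate["score"], 3))
```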
transformers
7,485
closed
TensorFlow: loading the saved model for GPT2
```py from time import time from transformers import TFGPT2LMHeadModel, GPT2Tokenizer import tensorflow as tf tokenizer = GPT2Tokenizer.from_pretrained("gpt2") model = TFGPT2LMHeadModel.from_pretrained('gpt2') text = "What are you doing after you have finished working?" generated = tokenizer.encode(text) context = tf.constant([generated]) past = None start = time() for i in range(100): output, past = model([context, past]) logits = output[0, -1, :] tok = tf.argmax(logits) generated.append(tok.numpy()) context = tf.expand_dims(tf.expand_dims(tok, 0), 0) sequence = tokenizer.decode(generated) print(time() - start, sequence) #save the model tf.saved_model.save(model,"temp") #loading back the model nm = tf.saved_model.load("temp") infer = nm.signatures['serving_default'] ``` Not able to call the model as shown below ```py text = "What are you doing after you have finished working?" generated = tokenizer.encode(text) context = tf.constant([generated]) past = None start = time() for i in range(100): output, past = **infer**([context, past]) logits = output[0, -1, :] tok = tf.argmax(logits) generated.append(tok.numpy()) context = tf.expand_dims(tf.expand_dims(tok, 0), 0) sequence = tokenizer.decode(generated) print(time() - start, sequence) ```
09-30-2020 17:35:15
09-30-2020 17:35:15
~Hi! What's the problem?~ Edited your message so that we can read it. <|||||>Could you put the error you had, as well as the environment? i.e., everything asked in the issue template.<|||||>I am running it in colab CPU `TypeError` Traceback (most recent call last) <ipython-input-5-0c3c85adea42> in <module>() 5 start = time() 6 for i in range(100): ----> 7 output, past = infer([context, past]) 8 logits = output[0, -1, :] 9 tok = tf.argmax(logits) 2 frames /usr/local/lib/python3.6/dist-packages/tensorflow/python/eager/function.py in _call_with_flat_signature(self, args, kwargs, cancellation_manager) 1719 raise TypeError("{}: expected argument #{}(zero-based) to be a Tensor; " 1720 "got {} ({})".format(self._flat_signature_summary(), i, -> 1721 type(arg).__name__, str(arg))) 1722 return self._call_flat(args, self.captured_inputs, cancellation_manager) 1723 TypeError: signature_wrapper(input_ids): expected argument #0(zero-based) to be a Tensor; got list ([<tf.Tensor: shape=(1, 10), dtype=int32, numpy= array([[2061, 389, 345, 1804, 706, 345, 423, 5201, 1762, 30]], dtype=int32)>, None])`` > `<|||||>This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions. <|||||>What is the resolution for this. Having the same issue.<|||||>If you can open a new issue with the issue template filled out (environment information, code that fails, expected behavior), then we can help you! Thank you.
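A hedged guess at the immediate fix for the `TypeError` above: the exported serving signature appears to take a single named tensor (`input_ids`, going by the traceback) rather than a Python list, and it does not accept the `past` cache, so the call would look roughly like this (the output keys depend on how the signature was exported):

```py
# Call the loaded signature with a keyword tensor instead of a list
outputs = infer(input_ids=context)
print(list(outputs.keys()))
```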
transformers
7,484
closed
Bump isort version.
# What does this PR do? Had a problem on my local setup with isort wanting to change `test_modeling_deberta.py`. Updating to 5.5.4 (from 5.4.2) fixed the issue, so I think we should pin our setup to it.
09-30-2020 17:32:42
09-30-2020 17:32:42
transformers
7,483
closed
Add forgotten return_dict argument in the docs
# What does this PR do? The documentation wasn't updated to reflect that `return_dict=True` is not the default for all models. This PR fixes that. Fixes #7482
09-30-2020 17:27:45
09-30-2020 17:27:45
transformers
7,482
closed
Issue with Summary of the tasks - Named Entity Recognition in Docs
- `transformers` version: 3.3.1 - Platform: Darwin-19.6.0-x86_64-i386-64bit - Python version: 3.7.3 - PyTorch version (GPU?): 1.6.0 (False) - Tensorflow version (GPU?): not installed (NA) - Using GPU in script?: No - Using distributed or parallel set-up in script?: No ### Who can help examples/token-classification: @stefan-it documentation: @sgugger ## Information I am trying to run the Pytorch version of named entity recognition from the "Summary of the tasks" section in the documentation. ## To reproduce Steps to reproduce the behavior: I'm running the exact example from the docs, but will attach the code below ``` from transformers import AutoModelForTokenClassification, AutoTokenizer import torch model = AutoModelForTokenClassification.from_pretrained("dbmdz/bert-large-cased-finetuned-conll03-english") tokenizer = AutoTokenizer.from_pretrained("bert-base-cased") label_list = [ "O", # Outside of a named entity "B-MISC", # Beginning of a miscellaneous entity right after another miscellaneous entity "I-MISC", # Miscellaneous entity "B-PER", # Beginning of a person's name right after another person's name "I-PER", # Person's name "B-ORG", # Beginning of an organisation right after another organisation "I-ORG", # Organisation "B-LOC", # Beginning of a location right after another location "I-LOC" # Location ] sequence = "Hugging Face Inc. is a company based in New York City. Its headquarters are in DUMBO, therefore very" \ "close to the Manhattan Bridge." tokens = tokenizer.tokenize(tokenizer.decode(tokenizer.encode(sequence))) inputs = tokenizer.encode(sequence, return_tensors="pt") outputs = model(inputs).logits predictions = torch.argmax(outputs, dim=2) ``` Running this leads to the error: ``` Traceback (most recent call last): File "test.py", line 21, in <module> outputs = model(inputs).logits AttributeError: 'tuple' object has no attribute 'logits' ``` ## Expected behavior I expect this to run successfully and produce predictions for the example sequence. I've run this example before and it succeeded so I'm not sure what's happening differently now. I feel like I'm making a dumb mistake somewhere, but idk. Thanks! Note: changing line 21 to `outputs = model(inputs)[0]` seems to lead to the expected output, but this might not be the kind of behavior you all are looking for.
09-30-2020 16:30:44
09-30-2020 16:30:44
Indeed, this example (and all the others) is missing a `return_dict=True` in the call to `from_pretrained`. Thanks for flagging, the PR mentioned above will fix this.
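The fix described in the reply, applied to the snippet from the issue (only the `from_pretrained` call changes, so that `model(inputs)` returns a ModelOutput with a `.logits` attribute):

```py
import torch
from transformers import AutoModelForTokenClassification, AutoTokenizer

model = AutoModelForTokenClassification.from_pretrained(
    "dbmdz/bert-large-cased-finetuned-conll03-english", return_dict=True
)
tokenizer = AutoTokenizer.from_pretrained("bert-base-cased")

sequence = "Hugging Face Inc. is a company based in New York City."
inputs = tokenizer.encode(sequence, return_tensors="pt")
outputs = model(inputs).logits
predictions = torch.argmax(outputs, dim=2)
print(predictions)
```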
transformers
7,481
closed
Minor dead code clean-up
Hello, I am not sure how sensitive you generally are about dead code in the repository. I have identified a few places with dead code, where I believe a clean-up would improve readability. - removal of a couple of unused dropouts I came across in Albert and XLNet - removal of an unused code block for relative attention shift for XLNet ## Who can review? @LysandreJik , @TevenLeScao
09-30-2020 16:07:24
09-30-2020 16:07:24
Closing as it seems the changes were added in another PR
transformers
7,480
closed
Uploading models using transformers-cli fails
## Environment info - `transformers` version: 3.0.2 - Platform: Linux-4.15.0-112-generic-x86_64-with-glibc2.10 - Python version: 3.8.5 - PyTorch version (GPU?): 1.6.0 (False) - Tensorflow version (GPU?): 2.3.0 (False) - Using GPU in script?: No - Using distributed or parallel set-up in script?: No ### Who can help Model Cards: @julien-c T5: @patrickvonplaten ## Information Model I am using T5: The problem arises when using: * [X] the official example scripts: (give details below) * [ ] my own modified scripts: (give details below) The tasks I am working on is: * [ ] an official GLUE/SQUaD task: (give the name) * [X] my own task or dataset: (give details below) ## To reproduce Steps to reproduce the behavior: Command: `transformers-cli upload ./prot_t5_xl_bfd/ --organization Rostlab` Error: ``` About to upload file /mnt/lsf-nas-1/lsf/job/repo/elnaggar/prot-transformers/models/transformers/prot_t5_xl_bfd/pytorch_model.bin to S3 under filename prot_t5_xl_bfd/pytorch_model.bin and namespace Rostl ab Proceed? [Y/n] y Uploading... This might take a while if files are large 0%|▌ | 48242688/11276091454 [00:02<14:55, 12534308.31it/s] Traceback (most recent call last): File "/mnt/lsf-nas-1/lsf/job/repo/elnaggar/anaconda3/envs/transformers_covid/lib/python3.8/site-packages/urllib3/connectionpool.py", line 670, in urlopen httplib_response = self._make_request( File "/mnt/lsf-nas-1/lsf/job/repo/elnaggar/anaconda3/envs/transformers_covid/lib/python3.8/site-packages/urllib3/connectionpool.py", line 392, in _make_request conn.request(method, url, **httplib_request_kw) File "/mnt/lsf-nas-1/lsf/job/repo/elnaggar/anaconda3/envs/transformers_covid/lib/python3.8/http/client.py", line 1255, in request self._send_request(method, url, body, headers, encode_chunked) File "/mnt/lsf-nas-1/lsf/job/repo/elnaggar/anaconda3/envs/transformers_covid/lib/python3.8/http/client.py", line 1301, in _send_request self.endheaders(body, encode_chunked=encode_chunked) File "/mnt/lsf-nas-1/lsf/job/repo/elnaggar/anaconda3/envs/transformers_covid/lib/python3.8/http/client.py", line 1250, in endheaders self._send_output(message_body, encode_chunked=encode_chunked) File "/mnt/lsf-nas-1/lsf/job/repo/elnaggar/anaconda3/envs/transformers_covid/lib/python3.8/http/client.py", line 1049, in _send_output self.send(chunk) File "/mnt/lsf-nas-1/lsf/job/repo/elnaggar/anaconda3/envs/transformers_covid/lib/python3.8/http/client.py", line 971, in send self.sock.sendall(data) File "/mnt/lsf-nas-1/lsf/job/repo/elnaggar/anaconda3/envs/transformers_covid/lib/python3.8/ssl.py", line 1204, in sendall v = self.send(byte_view[count:]) File "/mnt/lsf-nas-1/lsf/job/repo/elnaggar/anaconda3/envs/transformers_covid/lib/python3.8/ssl.py", line 1173, in send return self._sslobj.write(data) BrokenPipeError: [Errno 32] Broken pipe Traceback (most recent call last): File "/mnt/lsf-nas-1/lsf/job/repo/elnaggar/anaconda3/envs/transformers_covid/lib/python3.8/site-packages/requests/adapters.py", line 439, in send resp = conn.urlopen( File "/mnt/lsf-nas-1/lsf/job/repo/elnaggar/anaconda3/envs/transformers_covid/lib/python3.8/site-packages/urllib3/connectionpool.py", line 726, in urlopen retries = retries.increment( File "/mnt/lsf-nas-1/lsf/job/repo/elnaggar/anaconda3/envs/transformers_covid/lib/python3.8/site-packages/urllib3/util/retry.py", line 403, in increment raise six.reraise(type(error), error, _stacktrace) File "/mnt/lsf-nas-1/lsf/job/repo/elnaggar/anaconda3/envs/transformers_covid/lib/python3.8/site-packages/urllib3/packages/six.py", line 734, in reraise raise 
value.with_traceback(tb) File "/mnt/lsf-nas-1/lsf/job/repo/elnaggar/anaconda3/envs/transformers_covid/lib/python3.8/site-packages/urllib3/connectionpool.py", line 670, in urlopen httplib_response = self._make_request( File "/mnt/lsf-nas-1/lsf/job/repo/elnaggar/anaconda3/envs/transformers_covid/lib/python3.8/site-packages/urllib3/connectionpool.py", line 392, in _make_request conn.request(method, url, **httplib_request_kw) File "/mnt/lsf-nas-1/lsf/job/repo/elnaggar/anaconda3/envs/transformers_covid/lib/python3.8/http/client.py", line 1255, in request self._send_request(method, url, body, headers, encode_chunked) File "/mnt/lsf-nas-1/lsf/job/repo/elnaggar/anaconda3/envs/transformers_covid/lib/python3.8/http/client.py", line 1301, in _send_request self.endheaders(body, encode_chunked=encode_chunked) File "/mnt/lsf-nas-1/lsf/job/repo/elnaggar/anaconda3/envs/transformers_covid/lib/python3.8/http/client.py", line 1250, in endheaders self._send_output(message_body, encode_chunked=encode_chunked) File "/mnt/lsf-nas-1/lsf/job/repo/elnaggar/anaconda3/envs/transformers_covid/lib/python3.8/http/client.py", line 1049, in _send_output self.send(chunk) File "/mnt/lsf-nas-1/lsf/job/repo/elnaggar/anaconda3/envs/transformers_covid/lib/python3.8/http/client.py", line 971, in send self.sock.sendall(data) File "/mnt/lsf-nas-1/lsf/job/repo/elnaggar/anaconda3/envs/transformers_covid/lib/python3.8/ssl.py", line 1204, in sendall v = self.send(byte_view[count:]) File "/mnt/lsf-nas-1/lsf/job/repo/elnaggar/anaconda3/envs/transformers_covid/lib/python3.8/ssl.py", line 1173, in send return self._sslobj.write(data) urllib3.exceptions.ProtocolError: ('Connection aborted.', BrokenPipeError(32, 'Broken pipe')) During handling of the above exception, another exception occurred: Traceback (most recent call last): File "/mnt/lsf-nas-1/lsf/job/repo/elnaggar/anaconda3/envs/transformers_covid/bin/transformers-cli", line 8, in <module> sys.exit(main()) File "/mnt/lsf-nas-1/lsf/job/repo/elnaggar/anaconda3/envs/transformers_covid/lib/python3.8/site-packages/transformers/commands/transformers_cli.py", line 33, in main service.run() File "/mnt/lsf-nas-1/lsf/job/repo/elnaggar/anaconda3/envs/transformers_covid/lib/python3.8/site-packages/transformers/commands/user.py", line 232, in run access_url = self._api.presign_and_upload( File "/mnt/lsf-nas-1/lsf/job/repo/elnaggar/anaconda3/envs/transformers_covid/lib/python3.8/site-packages/transformers/hf_api.py", line 167, in presign_and_upload r = requests.put(urls.write, data=data, headers={"content-type": urls.type}) File "/mnt/lsf-nas-1/lsf/job/repo/elnaggar/anaconda3/envs/transformers_covid/lib/python3.8/site-packages/requests/api.py", line 134, in put return request('put', url, data=data, **kwargs) File "/mnt/lsf-nas-1/lsf/job/repo/elnaggar/anaconda3/envs/transformers_covid/lib/python3.8/site-packages/requests/api.py", line 61, in request return session.request(method=method, url=url, **kwargs) File "/mnt/lsf-nas-1/lsf/job/repo/elnaggar/anaconda3/envs/transformers_covid/lib/python3.8/site-packages/requests/sessions.py", line 530, in request resp = self.send(prep, **send_kwargs) File "/mnt/lsf-nas-1/lsf/job/repo/elnaggar/anaconda3/envs/transformers_covid/lib/python3.8/site-packages/requests/sessions.py", line 643, in send r = adapter.send(request, **kwargs) File "/mnt/lsf-nas-1/lsf/job/repo/elnaggar/anaconda3/envs/transformers_covid/lib/python3.8/site-packages/requests/adapters.py", line 498, in send raise ConnectionError(err, request=request) requests.exceptions.ConnectionError: 
('Connection aborted.', BrokenPipeError(32, 'Broken pipe'))
```
## Expected behavior
I am trying to upload our T5-3B model using transformers-cli, but it always fails with a "BrokenPipeError". It only uploads small files such as the configuration files; it fails for the model files. I have tried two different machines and both of them give the same error.
09-30-2020 15:26:14
09-30-2020 15:26:14
Yes this is a known issue with our current system that will be fixed in ~1 month. In the meantime, if you can upload to a different S3 bucket I can cp the files to your account on ours. Would you be able to do this?<|||||>I don't have access to S3. However, I uploaded the model in my dropbox: https://www.dropbox.com/sh/0e7weo5l6g1uvqi/AADBZN_vuawdR3YOUOzZRo8Pa?dl=0 Is it possible to download and upload it from the dropbox folder?<|||||>Super I'll take care of it! <|||||>model is uploaded here: https://huggingface.co/Rostlab/prot_t5_xl_bfd<|||||>Perfect, thanks a lot @patrickvonplaten for your help. This solves my issue 😄 I will test the model to make sure everything is working as expected. Should we close this issue as it solved my current problem, or should we leave it open until the "transformers-cli" uploading problem is solved? I will leave it to you.<|||||>Let's leave it open :-) <|||||>Hi! I'm having an issue uploading a model as well. I've tried several different iterations of the CLI command to get it to work. I'm following the instructions from the [model sharing docs](https://huggingface.co/transformers/model_sharing.html). Here's the info about my setup: - transformers version: 3.3.1 - Platform: Ubuntu (it's a Google Cloud Platform VM) - Python version: 3.8.5 - PyTorch version (GPU?): 1.4.0 (True) - Tensorflow version (GPU?): 2.3.1 (True) - Using GPU in script?: No - Using distributed or parallel set-up in script?: No First, I tried `transformers-cli upload distilbert-for-food-extraction`, as it says to do in the docs. This fails because for some reason the directory is not found, even though `ls distilbert-for-food-extraction` confirms that the directory and its files exist in this location. ``` (hf-nlp) charlenechambliss@charlene-gpu:~/.cache/food-ner/models$ transformers-cli upload chambliss/distilbert-for-food-extraction 2020-10-10 21:43:16.899194: I tensorflow/stream_executor/platform/default/dso_loader.cc:48] Successfully opened dynamic library libcudart.so.10.1 Traceback (most recent call last): File "/home/charlenechambliss/anaconda3/envs/hf-nlp/bin/transformers-cli", line 8, in <module> sys.exit(main()) File "/home/charlenechambliss/anaconda3/envs/hf-nlp/lib/python3.8/site-packages/transformers/commands/transformers_cli.py", line 33, in main service.run() File "/home/charlenechambliss/anaconda3/envs/hf-nlp/lib/python3.8/site-packages/transformers/commands/user.py", line 197, in run files = self.walk_dir(rel_path) File "/home/charlenechambliss/anaconda3/envs/hf-nlp/lib/python3.8/site-packages/transformers/commands/user.py", line 180, in walk_dir entries: List[os.DirEntry] = list(os.scandir(rel_path)) FileNotFoundError: [Errno 2] No such file or directory: 'distilbert-for-food-extraction' ``` Then I tried nesting it under a directory matching my HuggingFace username, so now the path is `chambliss/distilbert-for-food-extraction`. Attempting the upload again seems to result in 3 out of 6 files being uploaded, then the process is aborted. 
Here is the full output I'm getting: ``` (hf-nlp) charlenechambliss@charlene-gpu:~/.cache/food-ner/models$ transformers-cli upload chambliss 2020-10-10 21:43:28.932647: I tensorflow/stream_executor/platform/default/dso_loader.cc:48] Successfully opened dynamic library libcudart.so.10.1 About to upload file /home/charlenechambliss/.cache/food-ner/models/chambliss/distilbert-for-food-extraction/special_tokens_map.json to S3 under filename chambliss/distilbert-for-food-extraction/special_tokens_map.json and namespace chambliss About to upload file /home/charlenechambliss/.cache/food-ner/models/chambliss/distilbert-for-food-extraction/vocab.txt to S3 under filename chambliss/distilbert-for-food-extraction/vocab.txt and namespace chambliss About to upload file /home/charlenechambliss/.cache/food-ner/models/chambliss/distilbert-for-food-extraction/pytorch_model.bin to S3 under filename chambliss/distilbert-for-food-extraction/pytorch_model.bin and namespace chambliss About to upload file /home/charlenechambliss/.cache/food-ner/models/chambliss/distilbert-for-food-extraction/config.json to S3 under filename chambliss/distilbert-for-food-extraction/config.json and namespace chambliss About to upload file /home/charlenechambliss/.cache/food-ner/models/chambliss/distilbert-for-food-extraction/tokenizer_config.json to S3 under filename chambliss/distilbert-for-food-extraction/tokenizer_config.json and namespace chambliss About to upload file /home/charlenechambliss/.cache/food-ner/models/chambliss/distilbert-for-food-extraction/tf_model.h5 to S3 under filename chambliss/distilbert-for-food-extraction/tf_model.h5 and namespace chambliss Proceed? [Y/n] Y Uploading... This might take a while if files are large Your file now lives at: https://s3.amazonaws.com/models.huggingface.co/bert/chambliss/chambliss/distilbert-for-food-extraction/special_tokens_map.json Your file now lives at: https://s3.amazonaws.com/models.huggingface.co/bert/chambliss/chambliss/distilbert-for-food-extraction/vocab.txt Your file now lives at: https://s3.amazonaws.com/models.huggingface.co/bert/chambliss/chambliss/distilbert-for-food-extraction/pytorch_model.bin 400 Client Error: Bad Request for url: https://huggingface.co/api/presign Filename invalid, model must be at exactly one level of nesting, i.e. "user/model_name". ``` If there is not a fix available for this at the moment, would it be possible to have my model uploaded via Dropbox as well? Thanks! Charlene<|||||>Hey @chambliss - it looks like you are uploading the wrong folder. Instead of running ``` ~/.cache/food-ner/models$ transformers-cli upload chambliss ``` you should run ``` ~/.cache/food-ner/models/chambliss$ transformers-cli upload distilbert-for-food-extraction ``` I think<|||||>I'll second that. If `ls distilbert-for-food-extraction` works and shows the correct files, `transformers-cli upload distilbert-for-food-extraction` should work and would be able to find the correct directory.<|||||>@patrickvonplaten @julien-c Thanks for the response guys! I'm not sure why the directory wasn't found the first time, but I tried it again just now (from inside the /chambliss directory, so `~/.cache/food-ner/models/chambliss$ transformers-cli upload distilbert-for-food-extraction`, as suggested) and it worked. As a user, it is a little confusing for a reference to the correct directory not to work, and to have to be exactly one level above the directory in order for the upload to succeed. 
The example given on the page (`transformers-cli upload path/to/awesome-name-you-picked/`) implies that you can do the upload from anywhere relative to the folder. If that is a constraint, it may be worth updating the docs to reflect it. Thanks again for the help! <|||||>no, it is indeed supposed to work as you describe, specifying the dir from any point in your filesystem. Let us know if that's not the case.<|||||>Will reopen this for clarity until the fix mentioned in https://github.com/huggingface/transformers/issues/8480#issuecomment-726731046 is deployed<|||||>This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions. <|||||>Ok, closing this for real now! 😎
transformers
7,479
closed
Loading saved model not working
Do we know do to load the saved model pipeline back up and make predictions again locally? The from_pretrained()is not working. Pls provide few instructions how to load the model using from pretrained
09-30-2020 15:15:07
09-30-2020 15:15:07
What do you mean it's not working? Could you provide all the information required in the template? What is your environment? What code are you running? What is the error shown? How do you use `from_pretrained`?<|||||>What is meant is: how do we use the `from_pretrained` functionality after saving the pipeline? I want to load the pipeline back up and make predictions locally from the saved file(s) below:
```python
from transformers import pipeline

ner_original = pipeline("ner")
ner = pipeline("ner", grouped_entities=True)
path = 'path to folder'
ner.save_pretrained(path)
```
I am currently running transformers==2.11.0 and Python 3.7.4. I have tried:
```python
pipe = transformers.pipeline(task="ner", model="pytorch_model.bin", tokenizer="tokenizer_config.json")
```
This gave an error: ValueError: Unrecognized model in tokenizer_config.json. Should have a `model_type` key in its config.json
```python
pipe = transformers.TokenClassificationPipeline(model="pytorch_model.bin", tokenizer="tokenizer_config.json")
```
This gave an error: AttributeError: 'str' object has no attribute 'config'
Changing the tokenizer to "config.json" yielded the following error: UnicodeDecodeError: 'utf-8' codec can't decode byte 0x80 in position 64: invalid start byte
<|||||>This is the code which gives the error:
```python
import torch
from transformers import AutoModel, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained(path)
model = AutoModel.from_pretrained(path)

label_list = [
    "O",       # Outside of a named entity
    "B-MISC",  # Beginning of a miscellaneous entity right after another miscellaneous entity
    "I-MISC",  # Miscellaneous entity
    "B-PER",   # Beginning of a person's name right after another person's name
    "I-PER",   # Person's name
    "B-ORG",   # Beginning of an organisation right after another organisation
    "I-ORG",   # Organisation
    "B-LOC",   # Beginning of a location right after another location
    "I-LOC"    # Location
]

sequence = "Hugging Face Inc. is a company based in New York City. Its headquarters are in DUMBO, therefore very" \
           "close to the Manhattan Bridge."

# Bit of a hack to get the tokens with the special tokens
tokens = tokenizer.tokenize(tokenizer.decode(tokenizer.encode(sequence)))
inputs = tokenizer.encode(sequence, return_tensors="pt")

outputs = model(inputs)[0]
predictions = torch.argmax(outputs, dim=2)

print([(token, label_list[prediction]) for token, prediction in zip(tokens, predictions[0].tolist())])
```<|||||>This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.
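For reference, a minimal sketch of reloading the saved pipeline (assuming the transformers 2.x `pipeline` API and that `path` is the directory written by `save_pretrained`); the `model` and `tokenizer` arguments should point at the saved directory, not at individual files:
```python
from transformers import pipeline

path = "path to folder"  # directory created by ner.save_pretrained(path)

# Point both model and tokenizer at the saved directory rather than at
# pytorch_model.bin / tokenizer_config.json individually.
ner = pipeline(task="ner", model=path, tokenizer=path, grouped_entities=True)
print(ner("Hugging Face Inc. is a company based in New York City."))
```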
transformers
7,478
closed
Alphabetize model lists
# What does this PR do? The model lists have grown a bit and so, like the doc navbar, I think we'll find our way better by alphabetizing them. Adding new models will be easy in the README since Markdown supports enumerated lists with `1.` for all items. reStructuredText is more annoying, but I'll make one script generate the proper part of index.rst automatically to make sure it stays in sync with the README while I'm procrastinating something more important.
09-30-2020 14:35:02
09-30-2020 14:35:02
transformers
7,477
closed
[s2strainer] fix eval dataset loading
`eval_dataset` should be loaded if either `--do_eval` is passed or `EvaluationStrategy` is not `no`. @sshleifer
09-30-2020 14:31:58
09-30-2020 14:31:58
tiny fix, might as well do this in #7467 and close this one
transformers
7,476
closed
RAG: Can we have a document that explains the fine-tuning mechanism?
I want to fine-tune RAG with a custom dataset. Please help me.
09-30-2020 14:17:25
09-30-2020 14:17:25
https://github.com/huggingface/transformers/tree/master/examples/rag#finetuning should help you :-) <|||||>Thanks
transformers
7,475
closed
Small QOL improvements to TrainingArguments
# What does this PR do?
Some small QOL improvements as discussed on the [forum](https://discuss.huggingface.co/t/seq2seqtrainer-questions/1276/):
- make `do_eval` default to `evaluation_strategy != "no"` so there is no need to pass the two
- make `run_name` default to `output_dir`.
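A quick sketch of what the new defaults mean in practice (the attribute checks are an assumption based on the description above):
```python
from transformers import TrainingArguments

# Before this change you had to pass do_eval=True and run_name explicitly.
args = TrainingArguments(output_dir="my-model", evaluation_strategy="steps")

# With the new defaults, evaluation is enabled and the run name falls back
# to the output directory.
assert args.do_eval
assert args.run_name == "my-model"
```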
09-30-2020 13:53:30
09-30-2020 13:53:30
Thanks!
transformers
7,474
closed
[Seq2Seq] Fix a couple of bugs and clean examples
# What does this PR do?
## **IMPORTANT** - BREAKING CHANGES
This PR changes the behavior of **T5** and **TFT5** due to 3 bugs and 1 small change in the forward API to support onnx and torchscript. It also slightly changes the behavior of **Bart** and **EncoderDecoderModel**.

## Description
_1st Bug_: Due to a sloppy review on my part these lines got merged into the PyTorch T5 model: https://github.com/huggingface/transformers/pull/5518/files#r496058908, which set `decoder_input_ids = input_ids` if `decoder_input_ids` were not provided. This is misleading and also just wrong. `decoder_input_ids` should never be set to `input_ids` in T5. This is not done during training nor during inference, so these lines don't make much sense. Because T5 is mostly either used with `.generate()` or in training with `model(input_ids=input_ids, labels=labels)`, in which cases the change has no effect, we only received one issue about it recently: #7358. The change was done to make T5 work with onnx, but is just wrong IMO.

_2nd Bug_: T5 was implemented with a small bug regarding the relative distance bias calculation for the cross-attention layer. It was spotted here: #7323. The correction leads to slightly different results when doing beam search. @sshleifer - if it's easy for you, could you maybe run a quick eval on WMT to see if BLEU improves in this PR?

_3rd Bug_: T5 currently cuts the `input_ids` to the last token when `past` is used. This is a convenient function for the user, but has the potential to lead to bugs as mentioned here: https://github.com/huggingface/transformers/issues/4368#issuecomment-630244541. It's not really in the spirit of the library to do some magic under the hood which makes certain use cases easier for the user, but prevents other edge cases as shown in the issue above.

Feature request: support torchscript and onnx. This PR makes it possible to use T5 with torchscript and onnx.

Now the difficult part: **This PR has breaking changes!** For one, all three bug fixes lead to breaking changes. Then, in order to solve the torchscript/onnx problem we are having with T5 (and actually with all Seq2Seq models), I had to change the positional ordering of T5's forward pass slightly, which should have minimal breaking changes because I doubt anybody has used T5 with positional arguments as follows: `tf_model(input_ids, None, None, decoder_input_ids)`. We had a couple of issues, *e.g.* #5647, about supporting torchscript and onnx for Bart/T5. If we ever want to support onnx and torchscript in the future, I think we need to do this positional reordering. As shown by @mfuntowicz, onnx can lead to great speed improvements, and we also know now that `torchscript` can give ~30% speed improvement on dynamic input sizes. => I would be really happy if we could accept this slight breaking change here.

I thought about this quite a bit and I think it's very important that we agree on ONE positional argument ordering for the forward pass of Seq2Seq models. At the moment the ordering of Bart, EncoderDecoder, T5, ... is not coherent and is done in a way that does not support onnx and torchscript. At the moment no seq2seq model really supports torchscript (Bart does in the test, but one cannot provide `decoder_input_ids` when using torchscript, which effectively makes torchscript useless for inference).

The ordering should be as follows IMO: `input_ids`, `attention_mask`, `decoder_input_ids`, `decoder_attention_mask`, `encoder_outputs`, ..., meaning that all required inputs should come first to comply with onnx and torchscript, and optional ones should come after. I changed the ordering of all seq2seq models to comply with this format even though we have some positional-ordering breaking changes for `T5`, `Bart` and `EncoderDecoder`.

## UPDATE:
- Added tests that the encoder-decoder forward signature stays the same
- Applied changes to all Seq2Seq models
- Cleaned docs
- Fixed TF slow tests
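To make the motivation concrete, here is a rough sketch of the kind of tracing call the new positional ordering is meant to enable (the checkpoint name and exact tracing details are illustrative, not part of this PR):
```python
import torch
from transformers import T5ForConditionalGeneration, T5Tokenizer

tokenizer = T5Tokenizer.from_pretrained("t5-small")
model = T5ForConditionalGeneration.from_pretrained("t5-small", torchscript=True)

enc = tokenizer("translate English to German: I like pizza.", return_tensors="pt")
dec = tokenizer("Ich mag Pizza.", return_tensors="pt")

# torch.jit.trace feeds the example inputs positionally, which is why the
# required tensors (input_ids, attention_mask, decoder_input_ids) have to
# come first in the forward signature.
traced_model = torch.jit.trace(
    model, (enc["input_ids"], enc["attention_mask"], dec["input_ids"])
)
outputs = traced_model(enc["input_ids"], enc["attention_mask"], dec["input_ids"])
```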
09-30-2020 13:34:42
09-30-2020 13:34:42
You got a little zero-shot BLEU boost it seems! This Branch: 34.433 (on en-de) Master: 34.4052 <|||||>@patrickvonplaten Is the docstring still wrong? [T5ForConditionalGeneration](https://huggingface.co/transformers/model_doc/t5.html?highlight=t5forconditional#transformers.T5ForConditionalGeneration), under `decoder_input_ids` says "if both decoder_input_ids and decoder_inputs_embeds are both unset, decoder_input_ids takes the value of input_ids"<|||||>> @patrickvonplaten > > Is the docstring still wrong? [T5ForConditionalGeneration](https://huggingface.co/transformers/model_doc/t5.html?highlight=t5forconditional#transformers.T5ForConditionalGeneration), under `decoder_input_ids` says "if both decoder_input_ids and decoder_inputs_embeds are both unset, decoder_input_ids takes the value of input_ids" I can't find this docstring, can you link to it?<|||||>> > @patrickvonplaten > > Is the docstring still wrong? [T5ForConditionalGeneration](https://huggingface.co/transformers/model_doc/t5.html?highlight=t5forconditional#transformers.T5ForConditionalGeneration), under `decoder_input_ids` says "if both decoder_input_ids and decoder_inputs_embeds are both unset, decoder_input_ids takes the value of input_ids" > > I can't find this docstring, can you link to it? In T5, it's under method `forward` and argument `decoder_input_ids`. Here's [code link](https://github.com/huggingface/transformers/blob/eb3bd73ce35bfef56eeb722d697f2d39a06a8f8d/src/transformers/modeling_t5.py#L869)<|||||>Oh yeah you're right that was a bad copy & past probably! Do you feel like opening a PR to fix it / delete it? That would be amazing :-)
transformers
7,473
closed
Make transformers install check positive
# What does this PR do? When transformers is correctly installed, I feel you should get a positive message. It's called huggingface not angryface, after all ;-) ## Before submitting - [x] This PR fixes a typo or improves the docs (you can dimiss the other checks if that's the case). ## Who can review? Anyone in the community is free to review the PR once the tests have passed. Given this concerns the documentation, mentioning @sgugger
09-30-2020 11:31:31
09-30-2020 11:31:31
transformers
7,472
closed
Number of GPUs for multi-gpu
Print number of GPUs when running the multi-gpu testing suites.
09-30-2020 10:52:49
09-30-2020 10:52:49
transformers
7,471
closed
Fix LXMERT with DataParallel
This PR fixes LXMERT when using DataParallel, similar to https://github.com/huggingface/transformers/pull/4300
09-30-2020 10:41:14
09-30-2020 10:41:14
transformers
7,470
closed
Seq2SeqDataset: avoid passing src_lang everywhere
Changed the constructor argument for AbstractSeq2SeqDataset to kwargs to avoid passing unwanted parameters to tokenizers, e.g. src and tgt lang to the T5 tokenizer.
# What does this PR do?
tokenization_utils.py was continuously generating unnecessary warnings for seq2seq batch-processing tokenizer arguments that shouldn't have been passed in the first place. Fixed this and added a test case as suggested.
Fixes #7454
## Before submitting
- This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case). N/A
- Did you read the [contributor guideline]? Yes
- Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link to it if that's the case. Yes, #7454
- Did you make sure to update the documentation with your changes? N/A
- Did you write any new necessary tests? Yes, added a relevant test in examples/seq2seq/test_datasets.py
## Who can review?
@sshleifer
09-30-2020 07:06:09
09-30-2020 07:06:09
transformers
7,469
closed
fix the first chunk's lower triangle
Correct the first chunk's lower triangle. For details, please look at Page 4 in a google shared document. Link is: https://docs.google.com/document/d/12rv879j2m5VkfTvk0F-WSOPFF5gqbE0kgk60PvY5nHc/edit?usp=sharing # What does this PR do? <!-- Congratulations! You've made it this far! You're not quite done yet though. Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution. Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change. Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost. --> <!-- Remove if not applicable --> Fixes # (issue) ## Before submitting - [ ] This PR fixes a typo or improves the docs (you can dimiss the other checks if that's the case). - [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section? - [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link to the it if that's the case. - [ ] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/master/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/master/docs#writing-source-documentation). - [ ] Did you write any new necessary tests? ## Who can review? Anyone in the community is free to review the PR once the tests have passed. Feel free to tag members/contributors which may be interested in your PR. <!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @ If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**. Please tag fewer than 3 people. albert, bert, GPT2, XLM: @LysandreJik tokenizers: @mfuntowicz Trainer: @sgugger Speed and Memory Benchmarks: @patrickvonplaten Model Cards: @julien-c Translation: @sshleifer Summarization: @sshleifer TextGeneration: @TevenLeScao examples/distillation: @VictorSanh nlp datasets: [different repo](https://github.com/huggingface/nlp) rust tokenizers: [different repo](https://github.com/huggingface/tokenizers) Text Generation: @TevenLeScao Blenderbot, Bart, Marian, Pegasus: @sshleifer T5: @patrickvonplaten Longformer/Reformer: @patrickvonplaten TransfoXL/XLNet: @TevenLeScao examples/seq2seq: @sshleifer examples/bert-loses-patience: @JetRunner tensorflow: @jplu examples/token-classification: @stefan-it documentation: @sgugger -->
09-30-2020 06:59:52
09-30-2020 06:59:52
Sorry, I made a mistake, ignore this.<|||||>This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.
transformers
7,468
closed
Create README.md
# What does this PR do? <!-- Congratulations! You've made it this far! You're not quite done yet though. Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution. Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change. Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost. --> <!-- Remove if not applicable --> Fixes # (issue) ## Before submitting - [ ] This PR fixes a typo or improves the docs (you can dimiss the other checks if that's the case). - [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section? - [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link to the it if that's the case. - [ ] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/master/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/master/docs#writing-source-documentation). - [ ] Did you write any new necessary tests? ## Who can review? Anyone in the community is free to review the PR once the tests have passed. Feel free to tag members/contributors which may be interested in your PR. <!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @ If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**. Please tag fewer than 3 people. albert, bert, GPT2, XLM: @LysandreJik tokenizers: @mfuntowicz Trainer: @sgugger Speed and Memory Benchmarks: @patrickvonplaten Model Cards: @julien-c Translation: @sshleifer Summarization: @sshleifer TextGeneration: @TevenLeScao examples/distillation: @VictorSanh nlp datasets: [different repo](https://github.com/huggingface/nlp) rust tokenizers: [different repo](https://github.com/huggingface/tokenizers) Text Generation: @TevenLeScao Blenderbot, Bart, Marian, Pegasus: @sshleifer T5: @patrickvonplaten Longformer/Reformer: @patrickvonplaten TransfoXL/XLNet: @TevenLeScao examples/seq2seq: @sshleifer examples/bert-loses-patience: @JetRunner tensorflow: @jplu examples/token-classification: @stefan-it documentation: @sgugger -->
09-30-2020 06:26:58
09-30-2020 06:26:58
transformers
7,467
closed
[s2sTrainer] test + code cleanup
- add a 10 second test for Seq2SeqTrainer
- general code cleanup
- pass `data_args` to Seq2SeqTrainer

@patil-suraj
09-30-2020 05:32:39
09-30-2020 05:32:39
transformers
7,466
closed
Seq2SeqTrainer: add a fast test that doesn't learn anything but can run on CPU
@patil-suraj do you want to take this or should I?
09-30-2020 03:39:29
09-30-2020 03:39:29
I'll take it :)<|||||>This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.
transformers
7,465
closed
RAG - reproducing RAG-Sequence QA score
I'm trying to reproduce the RAG-Sequence NQ score of 44.5 presented in Table 1 of the paper at https://arxiv.org/abs/2005.11401. I used the command in the examples/rag readme
```bash
python examples/rag/eval_rag.py \
    --model_name_or_path facebook/rag-sequence-nq \
    --model_type rag_sequence \
    --evaluation_set path/to/test.source \
    --gold_data_path path/to/gold_data \
    --predictions_path path/to/e2e_preds.txt \
    --eval_mode e2e \
    --gold_data_mode qa \
    --n_docs 5 \
    --print_predictions \
    --recalculate \
```
For gold_data_path I used data.retriever.qas.nq-test from the DPR repo, consisting of 3610 questions and answers: https://github.com/facebookresearch/DPR/blob/master/data/download_data.py#L91-L97

For evaluation_set, my understanding is that it should be the questions, so I extracted just the questions from the qas.nq-test csv file.

I tried the above command with n_docs 5 and 10, with the following results:

n_docs 5
INFO:__main__:F1: 49.67
INFO:__main__:EM: 42.58

n_docs 10
INFO:__main__:F1: 50.62
INFO:__main__:EM: 43.49

With n_docs 10 it's still 1 point below the score in the paper. What would be the proper setup to reproduce the number: is the pretrained model loaded different, higher n_docs, or different test data? Thanks in advance!
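For reference, a small sketch of how the questions can be split out of the DPR qas file into the evaluation_set (this assumes the file is tab-separated with the question in the first column; adjust if your copy differs):
```python
import csv

# Write one question per line; eval_rag.py reads the questions from this file.
with open("nq-test.qa.csv") as f, open("test.source", "w") as out:
    for row in csv.reader(f, delimiter="\t"):
        question = row[0]
        out.write(question + "\n")
```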
09-30-2020 02:35:39
09-30-2020 02:35:39
Gently pinging @ola13 here, she probably knows best which command to run to reproduce the eval results :-) <|||||>Hi @acslk, thanks for your post! You should be able to reproduce paper results for the RAG Token model (44.1 EM on NQ) by evaluating `facebook/rag-token-nq` with 20 docs. As for the RAG Sequence model - we have lost some quality when translating the checkpoint from `fairseq` (the experimentation framework we used to obtain the original paper results) to HuggingFace. We are now working on replicating the paper numbers in HF and we'll update the official `facebook/rag-sequence-nq` model weights once we have that so stay tuned!<|||||>Thanks for the response, I tried the command above with RAG Token model and n_docs 20 on NQ test set and can confirm it matches paper results: INFO:__main__:F1: 51.44 INFO:__main__:EM: 44.10<|||||>This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.
transformers
7,464
closed
Remove config assumption in Trainer
# What does this PR do? This PR tries to limit the access to `model.config` in `Trainer` to the minimum so that it works with regular PyTorch modules (as long as they accept dict inputs and return loss first like our models). The most challenging part was the storing/restoring of the `total_flos`, which I moved to the newly created `TrainerState`. It should work as before and be saved along the rest of the training state.
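To make the idea concrete, a minimal sketch of a serializable state object holding `total_flos` (this is an illustration of the approach, not the actual `TrainerState` implementation):
```python
import json
from dataclasses import asdict, dataclass


@dataclass
class TrainerState:
    global_step: int = 0
    epoch: float = 0.0
    total_flos: float = 0.0  # floating-point operations seen so far

    def save_to_json(self, json_path: str):
        # Saved alongside the rest of the training state in the checkpoint dir.
        with open(json_path, "w") as f:
            json.dump(asdict(self), f, indent=2)

    @classmethod
    def load_from_json(cls, json_path: str):
        with open(json_path) as f:
            return cls(**json.load(f))
```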
09-29-2020 23:25:18
09-29-2020 23:25:18
transformers
7,463
closed
Trainer should not modify its TrainingArguments
# What does this PR do? This fixes a bug that took me some time to track in a notebook with several trainings. The bottom line is that `Trainer` should not modify its `TrainingArguments` so this fixes that part by saving the number of `max_steps` desired in the state instead of in the args. Also storing the number of training epochs for easy access in subclasses.
09-29-2020 21:37:04
09-29-2020 21:37:04
Shouldn't we have a properly-typed self.state on the Trainer instance? I always find assigning instance properties within the code a bit messy<|||||>We can put everything in the new `TrainerState`. I just did the same as for `self.epoch` or `self.global_step` (and there are plenty more).<|||||>Yes I think that’d be nice <|||||>Closing this PR as this will require a bit more work then :-)
transformers
7,462
closed
RAG - how to precompute custom document index?
Was wondering if there was any code snippet / blog post showing how one could load their own documents and index them, so they can be used by the RAG retriever. Cheers!
09-29-2020 21:28:00
09-29-2020 21:28:00
Second this. https://github.com/deepset-ai/haystack may be useful to you. They leverage huggingface and have an DPR implementation with an end-to-end example. Will not be surprised to see RAG implemented soon. <|||||>@Weilin37 Thanks. I'm also looking at the Faiss docs now (https://github.com/facebookresearch/faiss/wiki/Faiss-indexes).<|||||>@lhoestq can maybe help here as well<|||||>Yep I'm thinking of adding a script in `examples/rag` that shows how to create an indexed dataset for RAG. I'll let you know how it goes<|||||>@lhoestq Can you please let me know on how we can index the custom datasets? Appreciate your help on this<|||||>@lhoestq I have a bunch of documents to perform Q&A and currently, in the config it says, dataset (str, optional, defaults to "wiki_dpr") – A dataset identifier of the indexed dataset on HuggingFace AWS bucket (list all available datasets and ids using datasets.list_datasets()). So how can we create an indexed file and input that to the pretrained model for evaluation. <|||||>> @lhoestq I have a bunch of documents to perform Q&A and currently, in the config it says, > dataset (str, optional, defaults to "wiki_dpr") – A dataset identifier of the indexed dataset on HuggingFace AWS bucket (list all available datasets and ids using datasets.list_datasets()). So how can we create an indexed file and input that to the pretrained model for evaluation. Yes right... We'll have to edit the `RagRetriever` and the `HfIndex` to accept custom ones. If you wanto to give it a try in the meantime, feel free to do so :)<|||||>Any progress on this @lhoestq @patrickvonplaten ? Awesome work guys :)<|||||>@tholor @Timoeller Do you reckon you guys could integrate this work into haystack?<|||||>@aced125 Yep, we will integrate RAG in Haystack soon (https://github.com/deepset-ai/haystack/issues/443).<|||||>> Any progress on this @lhoestq @patrickvonplaten ? Awesome work guys :) You can expect a PR by tomorrow<|||||>Awesome thanks everyone @tholor @lhoestq @patrickvonplaten !!!!<|||||>Thank you @lhoestq . Really appreciate for getting back quickly on this issue.<|||||>This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions. <|||||>Hello everyone, I am interesting in studying how RAG behaves without the DPR retriever. For example in the code below ``from transformers import RagRetriever from transformers import RagTokenizer, RagRetriever, RagTokenForGeneration retriever = RagRetriever.from_pretrained('./rag-token-nq', indexed_dataset=dataset) tokenizer = RagTokenizer.from_pretrained("./rag-token-nq") model = RagTokenForGeneration.from_pretrained("./rag-token-nq", retriever=retriever) **input_dict = tokenizer.prepare_seq2seq_batch("How many people live in Paris?", "In Paris, there are 10 million people.", return_tensors="pt")** input_ids = input_dict["input_ids"] model = RagTokenForGeneration.from_pretrained("facebook/rag-token-nq", retriever=retriever) generated_ids = model.generate(input_ids=input_ids, labels=input_dict["labels"]) generated_string = tokenizer.batch_decode(generated_ids, skip_special_tokens=True) print(generated_string) `` In the line '' **input_dict = tokenizer.prepare_seq2seq_batch("How many people live in Paris?", "In Paris, there are 10 million people.", return_tensors="pt")** ``, I want to use "How many people live in Paris ?" as the question and "In Paris, there are 10 million people." the passage / context which should be used to generate the answer. 
Kindly let me know how to do this? Is my understanding of the code correct and if not, how to go about it? Thanks, Krishanu<|||||>For RAG you can pass both your question as `input_ids` and your context as `context_input_ids` to `model.generate`. You can provide several contexts for one question. You can find more information in the documentation [here](https://huggingface.co/transformers/model_doc/rag.html#transformers.RagTokenForGeneration.generate)<|||||>@lhoestq Thanks for the reply. There is this doc_score parameter in the model.generate function. Is it necessary or optional?<|||||>If you pass the `context_input_ids` you also need to provide the `doc_scores` indeed.
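A rough sketch of generating with an explicit context instead of the retriever, based on the `generate` arguments mentioned above (the way the context string is built here is simplified; RAG normally concatenates the retrieved passage, its title and the question in a specific format):
```python
import torch
from transformers import RagTokenForGeneration, RagTokenizer

tokenizer = RagTokenizer.from_pretrained("facebook/rag-token-nq")
model = RagTokenForGeneration.from_pretrained("facebook/rag-token-nq")

question = tokenizer.question_encoder("How many people live in Paris?", return_tensors="pt")
# One hand-picked context per question, tokenized with the generator tokenizer.
context = tokenizer.generator(
    "In Paris, there are 10 million people. // How many people live in Paris?",
    return_tensors="pt",
)
doc_scores = torch.tensor([[1.0]])  # shape (batch_size, n_docs)

generated = model.generate(
    input_ids=question["input_ids"],
    context_input_ids=context["input_ids"],
    context_attention_mask=context["attention_mask"],
    doc_scores=doc_scores,
    n_docs=1,
)
print(tokenizer.batch_decode(generated, skip_special_tokens=True))
```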
transformers
7,461
closed
Distributed Trainer: 2 little fixes
1) fix DDP access to `model.config`. We could also set `self.config = model.config` earlier in `__init__` 2) switch torch.Tensor -> torch.tensor. The latter "infers the dtype automatically" After which the command in #7460 works. CC @patil-suraj , @TevenLeScao
09-29-2020 21:17:22
09-29-2020 21:17:22
Can we see when the config is accessed (in your error message)? `model.config` should be accessed as sparsely as possible in `Trainer` to work with any kind of model and I'll probably remove the requirement entirely soon.<|||||>`Seq2SeqTrainer` uses model.config 8 times. Mostly `pad_token_id` to avoid counting padding in the loss func.<|||||>It should add an assert the model is a `PreTrainedModel` at __init__ just to be clean, then for your specific problem, it should use the function `self._actual_model()` to grab the config to avoid your error (e.g., `self.model.config` -> `self._actual_model().config`). `Trainer` is on its way to fully handle models without config, see #7464.<|||||>OK. I reduced scope of this PR to just the `Tensor` -> `tensor`.
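As a side note, the unwrapping idea discussed here looks roughly like the sketch below (the helper name `unwrap_model` is illustrative; the comment above refers to the existing `self._actual_model()`):
```python
import torch.nn as nn


def unwrap_model(model: nn.Module) -> nn.Module:
    # DistributedDataParallel (and DataParallel) wrap the underlying
    # transformer, so attributes like `.config` live on `model.module`
    # rather than on the wrapper itself.
    if isinstance(model, (nn.parallel.DistributedDataParallel, nn.DataParallel)):
        return model.module
    return model


# e.g. inside Seq2SeqTrainer:
# pad_token_id = unwrap_model(self.model).config.pad_token_id
```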
transformers
7,460
closed
Seq2SeqTrainer Distributed: AttributeError and the RuntimeError
The following command (on 8 GPUS) fails with ```python AttributeError: DistributedDataParallel has no attribute "config" ``` ### Command ```bash export WANDB_PROJECT=dmar export BS=64 export GAS=1 export m=sshleifer/student_marian_en_ro_6_3 export MAX_LEN=128 python -m torch.distributed.launch --nproc_per_node=8 finetune_trainer.py \ --tokenizer_name $m --model_name_or_path $m \ --data_dir wmt_mar_pl \ --output_dir marian_en_ro_6_3 --overwrite_output_dir --predict_with_generate \ --learning_rate=3e-4 \ --warmup_steps 500 --sortish_sampler \ --fp16 \ --gradient_accumulation_steps=$GAS \ --per_device_train_batch_size=$BS --per_device_eval_batch_size=$BS \ --freeze_encoder --freeze_embeds \ --num_train_epochs=6 \ --save_steps 3000 --eval_steps 3000 \ --max_source_length $MAX_LEN --max_target_length $MAX_LEN --val_max_target_length $MAX_LEN --test_max_target_length $MAX_LEN \ --do_train --do_eval --do_predict --evaluate_during_training\ --predict_with_generate --logging_first_step \ --task translation --label_smoothing 0.1 --n_gpu 8 \ --run_name builtin_trainer_63_v8_pl \ "$@" ```
09-29-2020 21:16:49
09-29-2020 21:16:49
@sgugger After that bug fix, the next bug is: ``` RuntimeError: Precision loss when unpacking double tensorized_scalar = torch.Tensor(scalars).cuda() RuntimeError: Precision loss when unpacking double Traceback (most recent call last): File "finetune_trainer.py", line 442, in <module> main() File "finetune_trainer.py", line 383, in main model_path=model_args.model_name_or_path if os.path.isdir(model_args.model_name_or_path) else None File "/home/shleifer/transformers_fork/src/transformers/trainer.py", line 809, in train self.log(logs) File "/home/shleifer/transformers_fork/src/transformers/trainer.py", line 1031, in log total_flos = distributed_broadcast_scalars([self.total_flos]).sum().item() File "/home/shleifer/transformers_fork/src/transformers/trainer_utils.py", line 206, in distributed_broadcast_scalars tensorized_scalar = torch.Tensor(scalars).cuda() RuntimeError: Precision loss when unpacking double ``` ### Env Apex installed. ``` - `transformers` version: 3.3.1 - Platform: Linux-4.9.0-11-amd64-x86_64-with-debian-9.12 - Python version: 3.7.4 - PyTorch version (GPU?): 1.5.1+cu101 (True) - Tensorflow version (GPU?): not installed (NA) - Using GPU in script?: <fill in> - Using distributed or parallel set-up in script?: <fill in> ```<|||||>The next error is on @TevenLeScao <|||||>I fixed it, will stuff into 1 PR when everything is working.
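On the second error, a small illustration of why the `torch.Tensor` to `torch.tensor` switch in #7461 matters (the exact value that trips the check is an assumption; the mechanism is that `torch.Tensor` always builds a float tensor while `torch.tensor` infers the dtype):
```python
import torch

flos = [2 ** 60]  # e.g. an accumulated FLO count larger than float64 can represent exactly

torch.tensor(flos)   # infers int64, the value is kept exactly
torch.Tensor(flos)   # forces a FloatTensor and can raise "Precision loss when unpacking double"
```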
transformers
7,459
closed
Update README.md
Update/reference v2 model
09-29-2020 20:04:09
09-29-2020 20:04:09
Thanks! FYI we'll have proper model versioning in ~1 month or so
transformers
7,458
closed
Fix Trainer tests in a multiGPU env
# What does this PR do? Should fix the multiple GPU CI test (tests are passing locally). Will merge as soon as the CI passes to make the CI green.
09-29-2020 17:56:11
09-29-2020 17:56:11
transformers
7,457
closed
Get a better error when check_copies fails
# What does this PR do? Prints a cleaner error message when `check_copies.py` encounters a bad copy.
09-29-2020 17:33:30
09-29-2020 17:33:30
# Codecov Report: merging #7457 into master will increase coverage from 77.07% to 79.17% (+2.09%); the diff coverage is n/a.
transformers
7,456
closed
Catch import datasets common errors
# What does this PR do? This PR adds more checks when trying to import datasets to check we actually are using the datasets library and not a local folder/module. Fixes #7430
09-29-2020 17:31:49
09-29-2020 17:31:49
# Codecov Report: merging #7456 into master will decrease coverage by 0.23%; the diff coverage is 75.00%.
transformers
7,455
closed
Adding the Streamlit demo app code for the RAG model
# Adding RAG demo code This PR shares the code for the RAG demo running [here](https://huggingface.co/rag/) for future reference. The code is added in `examples/rag`
09-29-2020 17:10:58
09-29-2020 17:10:58
# Codecov Report: merging #7455 into master will increase coverage by 2.31%; the diff coverage is n/a.<|||||>Thank you for the demo! I added some comments with an issue I faced when running it regarding Streamlit.<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread. Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored.
transformers
7,454
closed
Seq2seq example for T5 keeps on generating warning
In the latest version from Master, on running finetune.sh for T5, was getting the following warning continuously: **Keyword arguments {'src_lang':None,'tgt_lang':None,'add_prefix_space':False} not recognized.** I found out this is because the translation and BART parameters are being passed to prepare_seq2seq_batch of T5Tokenizer which it cannot handle and the tokenizer in the end spits out warning for unused kwargs. I made a small change in utils.py to the constructor at https://github.com/huggingface/transformers/blob/9e9a1fb8c75e2ef00fea9c4c0dc511fc0178081c/examples/seq2seq/utils.py#L100: ```python **dataset_kwargs ): super().__init__() self.src_file = Path(data_dir).joinpath(type_path + ".source") self.tgt_file = Path(data_dir).joinpath(type_path + ".target") self.len_file = Path(data_dir).joinpath(type_path + ".len") if os.path.exists(self.len_file): self.src_lens = pickle_load(self.len_file) self.used_char_len = False else: self.src_lens = self.get_char_lens(self.src_file) self.used_char_len = True self.max_source_length = max_source_length self.max_target_length = max_target_length assert min(self.src_lens) > 0, f"found empty line in {self.src_file}" self.tokenizer = tokenizer self.prefix = prefix if prefix is not None else "" if n_obs is not None: self.src_lens = self.src_lens[:n_obs] self.pad_token_id = self.tokenizer.pad_token_id self.dataset_kwargs = dataset_kwargs dataset_kwargs.update({'add_prefix_space' : True} if isinstance(self.tokenizer, BartTokenizer) else {}) ``` since src_lang and tgt_lang weren't being used anywhere else other than passing on to prepare_seq2seq_batch as parameters. While calling the method I used dataset_kwargs as the paremeter which sorted out the issue: ```python self.tokenizer.prepare_seq2seq_batch( [x["src_texts"] for x in batch], tgt_texts=[x["tgt_texts"] for x in batch], max_length=self.max_source_length, max_target_length=self.max_target_length, return_tensors="pt", **self.dataset_kwargs ) ``` If this seems reasonable I can raise a PR and check it in? @sshleifer @patil-suraj
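For reference, a minimal way to reproduce the warning outside the dataset code (the checkpoint name is just an example):
```python
from transformers import T5Tokenizer

tok = T5Tokenizer.from_pretrained("t5-small")

# src_lang / tgt_lang / add_prefix_space are only meaningful for translation
# and BART tokenizers; T5 does not use them, so every call logs
# "Keyword arguments {...} not recognized."
batch = tok.prepare_seq2seq_batch(
    ["translate English to German: hello"],
    tgt_texts=["hallo"],
    src_lang=None,
    tgt_lang=None,
    add_prefix_space=False,
    return_tensors="pt",
)
```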
09-29-2020 16:41:54
09-29-2020 16:41:54
Yes, great PR! Send it and tag me! Bonus points for adding a test to seq2seq/test_datasets.py that used to break but now doesn't.<|||||>Here you go! #7470. Added a simple test case as well to test the arguments that would be sent to collate().
transformers
7,453
closed
Multi-GPU Testing setup
# What does this PR do? This PR contributes a testing suite that runs on a multi-GPU machine. The machine has two T4 GPUs (better than k80 in almost every way, and cheaper), and the testing suite is identical to the single-GPU machine testing suite. Two jobs are run: - One job on each commit to the `master` branch - One job on a scheduled basis, that additionally runs all the slow tests.
09-29-2020 14:29:19
09-29-2020 14:29:19
# Codecov Report: merging #7453 into master will decrease coverage by 2.51%; the diff coverage is n/a.
transformers
7,452
closed
LayoutLM: add exception handling for bbox values
# What does this PR do?

Fixes an unhandled error when trying to use bbox values greater than the maximum allowed threshold of 1000.

To replicate the error:
- In `test_modeling_layoutlm.py` set `range_bbox=1025`, i.e. greater than 1024
- Run `pytest tests/test_modeling_layoutlm.py`

The requirement that bbox values lie within the range 0-1000 is documented, but if it is violated the resulting error message does not make the issue clear.

<!-- Congratulations! You've made it this far! You're not quite done yet though. Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution. Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change. Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost. -->

<!-- Remove if not applicable -->

Fixes # (issue)

## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/master/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/master/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?

## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag members/contributors who may be interested in your PR.

<!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @ If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**. Please tag fewer than 3 people. albert, bert, GPT2, XLM: @LysandreJik tokenizers: @mfuntowicz Trainer: @sgugger Speed and Memory Benchmarks: @patrickvonplaten Model Cards: @julien-c Translation: @sshleifer Summarization: @sshleifer TextGeneration: @TevenLeScao examples/distillation: @VictorSanh nlp datasets: [different repo](https://github.com/huggingface/nlp) rust tokenizers: [different repo](https://github.com/huggingface/tokenizers) Text Generation: @TevenLeScao Blenderbot, Bart, Marian, Pegasus: @sshleifer T5: @patrickvonplaten Longformer/Reformer: @patrickvonplaten TransfoXL/XLNet: @TevenLeScao examples/seq2seq: @sshleifer examples/bert-loses-patience: @JetRunner tensorflow: @jplu examples/token-classification: @stefan-it documentation: @sgugger -->

@sgugger @liminghao1630 @vblagoje
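For illustration, here is a minimal sketch of the kind of guard this PR is about; the helper name and exact message are placeholders rather than the actual diff:

```python
import torch

def check_bbox_values(bbox: torch.Tensor) -> None:
    """LayoutLM expects bounding-box coordinates normalized to the 0-1000 range."""
    if bbox.min() < 0 or bbox.max() > 1000:
        raise ValueError(
            f"Bounding-box coordinates must lie in [0, 1000], "
            f"but got values in [{int(bbox.min())}, {int(bbox.max())}]."
        )

check_bbox_values(torch.tensor([[0, 10, 200, 1025]]))  # raises ValueError with a readable message
```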
09-29-2020 12:45:12
09-29-2020 12:45:12
# [Codecov](https://codecov.io/gh/huggingface/transformers/pull/7452?src=pr&el=h1) Report > Merging [#7452](https://codecov.io/gh/huggingface/transformers/pull/7452?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/1fc4de69ed024e18b88cb6f040021630599de2f7?el=desc) will **decrease** coverage by `1.65%`. > The diff coverage is `100.00%`. [![Impacted file tree graph](https://codecov.io/gh/huggingface/transformers/pull/7452/graphs/tree.svg?width=650&height=150&src=pr&token=9qOlN6Hb1c)](https://codecov.io/gh/huggingface/transformers/pull/7452?src=pr&el=tree) ```diff @@ Coverage Diff @@ ## master #7452 +/- ## ========================================== - Coverage 79.35% 77.70% -1.66% ========================================== Files 181 181 Lines 35800 35801 +1 ========================================== - Hits 28410 27819 -591 - Misses 7390 7982 +592 ``` | [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/7452?src=pr&el=tree) | Coverage Δ | | |---|---|---| | [src/transformers/modeling\_layoutlm.py](https://codecov.io/gh/huggingface/transformers/pull/7452/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19sYXlvdXRsbS5weQ==) | `94.47% <100.00%> (+69.40%)` | :arrow_up: | | [src/transformers/modeling\_longformer.py](https://codecov.io/gh/huggingface/transformers/pull/7452/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19sb25nZm9ybWVyLnB5) | `18.69% <0.00%> (-74.15%)` | :arrow_down: | | [src/transformers/modeling\_tf\_t5.py](https://codecov.io/gh/huggingface/transformers/pull/7452/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl90NS5weQ==) | `26.05% <0.00%> (-63.52%)` | :arrow_down: | | [src/transformers/modeling\_rag.py](https://codecov.io/gh/huggingface/transformers/pull/7452/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19yYWcucHk=) | `25.32% <0.00%> (-51.72%)` | :arrow_down: | | [src/transformers/modeling\_marian.py](https://codecov.io/gh/huggingface/transformers/pull/7452/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19tYXJpYW4ucHk=) | `60.00% <0.00%> (-30.00%)` | :arrow_down: | | [src/transformers/configuration\_longformer.py](https://codecov.io/gh/huggingface/transformers/pull/7452/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9jb25maWd1cmF0aW9uX2xvbmdmb3JtZXIucHk=) | `81.81% <0.00%> (-18.19%)` | :arrow_down: | | [src/transformers/modeling\_tf\_ctrl.py](https://codecov.io/gh/huggingface/transformers/pull/7452/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9jdHJsLnB5) | `83.74% <0.00%> (-14.14%)` | :arrow_down: | | [src/transformers/generation\_tf\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/7452/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9nZW5lcmF0aW9uX3RmX3V0aWxzLnB5) | `76.94% <0.00%> (-9.53%)` | :arrow_down: | | [src/transformers/activations.py](https://codecov.io/gh/huggingface/transformers/pull/7452/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9hY3RpdmF0aW9ucy5weQ==) | `79.16% <0.00%> (-4.17%)` | :arrow_down: | | [src/transformers/modeling\_ctrl.py](https://codecov.io/gh/huggingface/transformers/pull/7452/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19jdHJsLnB5) | `95.16% <0.00%> (-2.42%)` | :arrow_down: | | ... and [11 more](https://codecov.io/gh/huggingface/transformers/pull/7452/diff?src=pr&el=tree-more) | | ------ [Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/7452?src=pr&el=continue). 
> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta) > `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data` > Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/7452?src=pr&el=footer). Last update [1fc4de6...6162c88](https://codecov.io/gh/huggingface/transformers/pull/7452?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).
transformers
7,451
closed
T5 unsupervised training
I want to train T5 in a new language from scratch, and I think the best way to do this is through the unsupervised denoising task (because you can use all the text you want! no labels required! hurray!). However, I have some doubts. I've seen [the HuggingFace documentation about this](https://huggingface.co/transformers/model_doc/t5.html#training) and I wonder how to create the training data; I mean, is there any function in the library to add the sentinel tokens?

In my own research I've worked with [the original T5 library](https://github.com/google-research/text-to-text-transfer-transformer) and I've seen that it has some functions for this, but those functions do not apply the sentinel tokens to the text; instead they always replace the "noising" tokens by ".

My questions are:
1. Does a function to do this exist in HuggingFace?
2. If the answer to question 1 is no, does anybody have one?
3. If the answers to questions 1 and 2 are both no, what rules must I follow to create this function? Rules like:
   - How many consecutive tokens must each sentinel token mask? I mean, in the sentence "The cute dog walks in the park" they put the sentinel token over "cute dog", and my question is why that choice of words. Can I always mask just one token?
   - In each sentence must I start with sentinel token number 1?

Thank you in advance! :)
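To make it concrete, my current understanding (based on the example in the linked documentation, with "t5-small" only as a placeholder checkpoint) is that a single denoising training pair would look like this:

```python
from transformers import T5Tokenizer, T5ForConditionalGeneration

tokenizer = T5Tokenizer.from_pretrained("t5-small")   # placeholder checkpoint
model = T5ForConditionalGeneration.from_pretrained("t5-small")

# "cute dog" and "the" are dropped from the input; consecutive masked tokens share one sentinel.
input_ids = tokenizer.encode("The <extra_id_0> walks in <extra_id_1> park", return_tensors="pt")
labels = tokenizer.encode("<extra_id_0> cute dog <extra_id_1> the <extra_id_2>", return_tensors="pt")

outputs = model(input_ids=input_ids, labels=labels)
loss = outputs[0]  # the denoising loss is the first element of the returned tuple
```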
09-29-2020 12:42:13
09-29-2020 12:42:13
__UPDATE:__ I'm trying to make my own masking function for this task. According to the original T5 paper, if you have two consecutive tokens to mask you must mask them using only one sentinel token, so I need a function that searches for the consecutive tokens ("rachas" in my language) among the randomly chosen indices. Here you have my code:
```python
import random  # needed for random.sample below

def racha_detection(lista):
    # Returns a list of lists where each sub-list contains the consecutive indices in the input list
    rachas = []
    racha = []
    for i, element in enumerate(lista):
        if (i < len(lista) - 1) and (lista[i + 1] == element + 1):
            racha.append(element)
        else:
            if len(racha) > 0:
                rachas.append(racha + [element])
            else:  # (i != len(lista) - 1)
                rachas.append([element])
            racha = []
    return rachas

def masking(tokenized_sentence, rachas):
    # Masks a tokenized_sentence (token ids) following the runs described in rachas
    # Only one sentinel token per racha; `tokenizer` is a module-level T5 tokenizer
    sent_token_id = 0
    enmascared = tokenized_sentence.copy()
    for racha in rachas:
        sent_token = f'<extra_id_{sent_token_id}>'
        sent_id = tokenizer.encode(sent_token)[0]
        for i, idx in enumerate(racha):
            if i == 0:
                enmascared[idx] = sent_id
            else:
                enmascared[idx] = -100
        sent_token_id += 1

    enmascared = [t for t in enmascared if t != -100]

    return enmascared

def add_noise(sentence, tokenizer, percent=0.15):
    # Takes a sentence, tokenizer and a noise percentage and returns
    # the masked input_ids and masked target_ids according to the T5 paper and HuggingFace docs
    # To see the process working uncomment all the prints ;)
    tokenized_sentence = tokenizer.encode(sentence)
    #print('PRE-MASKED:')
    #print('INPUT: {}'.format(tokenizer.convert_ids_to_tokens(tokenized_sentence)))

    idxs_2_mask = sorted(random.sample(range(len(tokenized_sentence)), int(len(tokenized_sentence) * percent)))
    rachas = racha_detection(idxs_2_mask)
    enmascared_input = masking(tokenized_sentence, rachas)
    #print('RACHAS INPUT: {}'.format(rachas))

    idxs_2_mask = [idx for idx in range(len(tokenized_sentence)) if idx not in idxs_2_mask]
    rachas = racha_detection(idxs_2_mask)
    enmascared_target = masking(tokenized_sentence, rachas)
    #print('RACHAS TARGET: {}'.format(rachas))

    #print('POST-MASKED:')
    #print('INPUT: {}'.format(tokenizer.convert_ids_to_tokens(enmascared_input)))
    #print('TARGET: {}'.format(tokenizer.convert_ids_to_tokens(enmascared_target)))

    return enmascared_input, enmascared_target
```
I don't know if it is correct, but it generates sequences like the sequences in the examples. What do you think?<|||||>Another question comes to my mind: is it necessary to add the pad token at the beginning of the label in this task too? I'm using the "labels" argument to pass the target_ids to the model, I mean: `model(input_ids=input_ids, labels=target_ids)` Thank you in advance!<|||||>Hey @amlarraz,

As far as I know there is no pre-written function or script for unsupervised "sentinel masking" for T5, but it shouldn't be too difficult to write one. The innovation of T5's sentinel masking is exactly that you can mask multiple tokens with a single masking token, which has been shown to yield better results than normal single-token masking (a la BERT). So to answer your questions:

1) The data should be pre-processed as described in the paper and in the example in the docs, here: https://huggingface.co/transformers/model_doc/t5.html#training . The forum: http://discuss.huggingface.co/ is probably a better place to ask more specific questions about your code.
2) You don't need to add a padding token to the labels - this is done automatically here: https://github.com/huggingface/transformers/blob/2977bd528f06bada54afcf740219e65afd1c0883/src/transformers/modeling_t5.py#L638<|||||>Hi @patrickvonplaten! Many thanks for answering me. As you said, I've moved the question to the HuggingFace forum. If anybody is interested in following this topic, here is the [link to the conversation.](https://discuss.huggingface.co/t/train-t5-from-scratch/1781?u=amlarraz)
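For reference, a quick usage sketch of the `add_noise` helper defined above; the checkpoint name is an assumption, and the helper relies on a module-level `tokenizer` as in the snippet:

```python
import random
from transformers import T5Tokenizer

# Placeholder checkpoint; `masking` above looks up this module-level `tokenizer`.
tokenizer = T5Tokenizer.from_pretrained("t5-small")

masked_input, masked_target = add_noise("The cute dog walks in the park", tokenizer, percent=0.15)
print(tokenizer.decode(masked_input))   # sentence with sentinel tokens in place of the masked runs
print(tokenizer.decode(masked_target))  # the complementary sequence used as the label
```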
transformers
7,450
closed
deleted
Deleted
09-29-2020 10:28:30
09-29-2020 10:28:30
# [Codecov](https://codecov.io/gh/huggingface/transformers/pull/7450?src=pr&el=h1) Report > Merging [#7450](https://codecov.io/gh/huggingface/transformers/pull/7450?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/1fc4de69ed024e18b88cb6f040021630599de2f7?el=desc) will **decrease** coverage by `0.00%`. > The diff coverage is `0.00%`. [![Impacted file tree graph](https://codecov.io/gh/huggingface/transformers/pull/7450/graphs/tree.svg?width=650&height=150&src=pr&token=9qOlN6Hb1c)](https://codecov.io/gh/huggingface/transformers/pull/7450?src=pr&el=tree) ```diff @@ Coverage Diff @@ ## master #7450 +/- ## ========================================== - Coverage 79.35% 79.35% -0.01% ========================================== Files 181 181 Lines 35800 35801 +1 ========================================== Hits 28410 28410 - Misses 7390 7391 +1 ``` | [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/7450?src=pr&el=tree) | Coverage Δ | | |---|---|---| | [src/transformers/modeling\_layoutlm.py](https://codecov.io/gh/huggingface/transformers/pull/7450/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19sYXlvdXRsbS5weQ==) | `25.00% <0.00%> (-0.07%)` | :arrow_down: | | [src/transformers/generation\_tf\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/7450/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9nZW5lcmF0aW9uX3RmX3V0aWxzLnB5) | `86.21% <0.00%> (-0.26%)` | :arrow_down: | | [src/transformers/file\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/7450/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9maWxlX3V0aWxzLnB5) | `83.37% <0.00%> (+0.24%)` | :arrow_up: | ------ [Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/7450?src=pr&el=continue). > **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta) > `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data` > Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/7450?src=pr&el=footer). Last update [1fc4de6...c5759b1](https://codecov.io/gh/huggingface/transformers/pull/7450?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).
transformers
7,449
closed
What's the most straightforward way to initialise BertForSequenceClassification for different token rather than [CLS]?
# ❓ Questions & Help ## Details BertForSequenceClassification uses [CLS] token's representation to feed a linear classifier. I want to leverage another token (say [X] in the input sequence) rather than [CLS]. What's the most straightforward way to implement that in Transformers? **A link to original question on the forum/Stack Overflow**: https://stackoverflow.com/questions/64094098/how-to-initialize-bertforsequenceclassification-for-different-input-rather-than
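To make the question concrete, here is the kind of sketch I have in mind; the class name and the `pool_positions` argument are my own, and it assumes the position of [X] is known for every example:

```python
import torch
from torch import nn
from transformers import BertModel, BertPreTrainedModel

class BertTokenPooledClassifier(BertPreTrainedModel):
    """Classify from the hidden state at a chosen token position instead of [CLS]."""

    def __init__(self, config):
        super().__init__(config)
        self.bert = BertModel(config)
        self.dropout = nn.Dropout(config.hidden_dropout_prob)
        self.classifier = nn.Linear(config.hidden_size, config.num_labels)
        self.init_weights()

    def forward(self, input_ids, attention_mask=None, token_type_ids=None,
                pool_positions=None, labels=None):
        outputs = self.bert(input_ids, attention_mask=attention_mask, token_type_ids=token_type_ids)
        sequence_output = outputs[0]  # (batch, seq_len, hidden)
        # Gather the hidden state at the position of [X] for every example in the batch.
        idx = pool_positions.view(-1, 1, 1).expand(-1, 1, sequence_output.size(-1))
        pooled = sequence_output.gather(1, idx).squeeze(1)
        logits = self.classifier(self.dropout(pooled))
        if labels is not None:
            loss = nn.CrossEntropyLoss()(logits.view(-1, self.config.num_labels), labels.view(-1))
            return loss, logits
        return (logits,)

# usage (hypothetical):
# model = BertTokenPooledClassifier.from_pretrained("bert-base-uncased", num_labels=2)
# logits, = model(input_ids, attention_mask=mask, pool_positions=x_positions)
```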
09-29-2020 09:42:47
09-29-2020 09:42:47
This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.
transformers
7,448
closed
v3.3.0 - Issue with name conflict in transformers & datasets - AttributeError: module 'datasets' has no attribute '__version__'
Version 3.3.0 tries to import the module [datasets](https://pypi.org/project/datasets/): https://github.com/huggingface/transformers/blob/v3.3.0/src/transformers/file_utils.py#L69

However, this can cause some undesirable behavior if there is a "datasets" folder in the same folder. An example to reproduce the error:
```
datasets/   <= Folder that contains your own data files
myscript.py
```
myscript.py with the following content:
```
import transformers
```
This produces the following error:
```
python myscript.py
Traceback (most recent call last):
  File "myscript.py", line 1, in <module>
    import transformers
  File "/home/user/miniconda3/envs/sberttest/lib/python3.7/site-packages/transformers/__init__.py", line 22, in <module>
    from .integrations import (  # isort:skip
  File "/home/user/miniconda3/envs/sberttest/lib/python3.7/site-packages/transformers/integrations.py", line 42, in <module>
    from .trainer_utils import PREFIX_CHECKPOINT_DIR, BestRun  # isort:skip
  File "/home/user/miniconda3/envs/sberttest/lib/python3.7/site-packages/transformers/trainer_utils.py", line 6, in <module>
    from .file_utils import is_tf_available, is_torch_available, is_torch_tpu_available
  File "/home/user/miniconda3/envs/sberttest/lib/python3.7/site-packages/transformers/file_utils.py", line 72, in <module>
    logger.debug(f"Succesfully imported datasets version {datasets.__version__}")
AttributeError: module 'datasets' has no attribute '__version__'
```
The issue is with the import logic of Python. The datasets folder will be treated as a module, and transformers tries to load this module. This obviously fails, since here we are talking about the datasets folder and not the [datasets package](https://pypi.org/project/datasets/).

As *datasets* is quite a common folder name used in many setups to hold a project's own data files, I can imagine that this name collision will appear frequently. As soon as there is a datasets folder, you can no longer import transformers.

## Solution
I am not sure what the best solution is for this. One quick fix would be to change: https://github.com/huggingface/transformers/blob/v3.3.0/src/transformers/file_utils.py#L74 to
```
except:
    _datasets_available = False
```
This would catch all exceptions. Old scripts that have a `datasets/` folder would then still work.

## Environment info
- `transformers` version: 3.3.0
- Platform: Linux-4.15.0-39-generic-x86_64-with-debian-buster-sid
- Python version: 3.7.9
- PyTorch version (GPU?): 1.6.0 (False)
- datasets package is not installed
09-29-2020 09:40:04
09-29-2020 09:40:04
Indeed we'll fix this and release a patch soon.<|||||>The bug has been fixed in #7456 and v3.3.1 is out with this fix. The problem should be solved for now, let us know if that's not the case!<|||||>Great, thanks for the quick fix and release of a new version. It is now working fine in my case :)<|||||>I had the same error but my setup only included a `data/` folder but now **it works fine** with version `3.3.1`.
transformers
7,447
closed
Getting Bert Embeddings in Batch
Hi all,

I have a list of sentences (a batch during training) and, for every word in each sentence, I need an aligned BERT embedding which should be the mean of every word-piece that word was split into. Right now I am doing it sentence by sentence: I obtain the aligned embedding for every word by iterating over the sentence, tokenizing each individual word, noting the number of word-pieces it was split into, and averaging the corresponding rows of the BERT output. Following is the code to get the aligned embeddings:

```python
def get_bert_aligned_embeddings(self, last_hidden_states, tokens):
    # last_hidden_states: (seq_len, hidden) BERT output for one sentence
    # tokens: list of words in that sentence
    count = 0
    aligned_embeddings = []
    for i in tokens:
        tokenisation_length = len(self.tokenizer.tokenize(i))
        emb = torch.mean(last_hidden_states[count:count + tokenisation_length], axis=0)
        count += tokenisation_length
        aligned_embeddings.append(emb)
    aligned_embeddings = torch.stack(aligned_embeddings)
    return aligned_embeddings
```

`tokens` is the list of words in a sentence and `last_hidden_states` are the embeddings I obtained from BERT. The function above runs for every sentence, and every sentence is passed one by one to the BERT model. I want to know if there is a faster way of doing this. Can this entire process be done in batches? Any suggestions that could help me speed up this process would be great. Thanks!
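For reference, here is a rough sketch of the batched aggregation I am considering. It assumes I can precompute a `word_ids` tensor (batch × seq_len) giving, for every word-piece, the index of the word it came from, with -1 for special/padding positions; the function name is my own:

```python
import torch

def aligned_word_embeddings(last_hidden_states, word_ids, num_words):
    """Mean-pool word-piece vectors into word vectors for a whole batch.

    last_hidden_states: (batch, seq_len, hidden) BERT output
    word_ids:           (batch, seq_len) word index of each piece, -1 for special/padding tokens
    num_words:          maximum number of words in any sentence of this batch
    """
    batch, seq_len, hidden = last_hidden_states.shape
    sums = last_hidden_states.new_zeros(batch, num_words, hidden)
    counts = last_hidden_states.new_zeros(batch, num_words, 1)

    valid = (word_ids >= 0).to(last_hidden_states.dtype).unsqueeze(-1)  # zero out special/padding pieces
    safe_ids = word_ids.clamp(min=0)                                    # -1 becomes 0 but contributes nothing

    index = safe_ids.unsqueeze(-1).expand(-1, -1, hidden)
    sums.scatter_add_(1, index, last_hidden_states * valid)             # sum pieces per word
    counts.scatter_add_(1, safe_ids.unsqueeze(-1), valid)               # count pieces per word

    return sums / counts.clamp(min=1)                                   # (batch, num_words, hidden)
```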
09-29-2020 09:09:48
09-29-2020 09:09:48
This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.
transformers
7,446
closed
Adding gradient checkpointing to GPT2
This PR adds gradient checkpointing capabilities to GPT-2, imitating the Longformer and Bert checkpointing code. It also disables `find_unused_parameters` in Trainer if the model is using gradient checkpointing, as per #4659 they are incompatible.
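For context, a minimal usage sketch once merged, assuming the flag keeps the `gradient_checkpointing` name introduced here:

```python
from transformers import GPT2Config, GPT2LMHeadModel

config = GPT2Config.from_pretrained("gpt2", gradient_checkpointing=True)
model = GPT2LMHeadModel.from_pretrained("gpt2", config=config)
# Forward/backward as usual; each block's activations are recomputed during the backward pass,
# trading extra compute for a smaller memory footprint.
```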
09-29-2020 08:17:29
09-29-2020 08:17:29
# [Codecov](https://codecov.io/gh/huggingface/transformers/pull/7446?src=pr&el=h1) Report > Merging [#7446](https://codecov.io/gh/huggingface/transformers/pull/7446?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/7dfdf793bb5e3a865f33ed597b10fc4526364af9?el=desc) will **decrease** coverage by `1.92%`. > The diff coverage is `100.00%`. [![Impacted file tree graph](https://codecov.io/gh/huggingface/transformers/pull/7446/graphs/tree.svg?width=650&height=150&src=pr&token=9qOlN6Hb1c)](https://codecov.io/gh/huggingface/transformers/pull/7446?src=pr&el=tree) ```diff @@ Coverage Diff @@ ## master #7446 +/- ## ========================================== - Coverage 80.98% 79.06% -1.93% ========================================== Files 181 181 Lines 35750 35757 +7 ========================================== - Hits 28953 28271 -682 - Misses 6797 7486 +689 ``` | [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/7446?src=pr&el=tree) | Coverage Δ | | |---|---|---| | [src/transformers/trainer.py](https://codecov.io/gh/huggingface/transformers/pull/7446/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90cmFpbmVyLnB5) | `55.70% <ø> (ø)` | | | [src/transformers/configuration\_gpt2.py](https://codecov.io/gh/huggingface/transformers/pull/7446/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9jb25maWd1cmF0aW9uX2dwdDIucHk=) | `97.36% <100.00%> (+0.07%)` | :arrow_up: | | [src/transformers/modeling\_gpt2.py](https://codecov.io/gh/huggingface/transformers/pull/7446/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19ncHQyLnB5) | `87.03% <100.00%> (+0.20%)` | :arrow_up: | | [src/transformers/modeling\_tf\_lxmert.py](https://codecov.io/gh/huggingface/transformers/pull/7446/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9seG1lcnQucHk=) | `22.14% <0.00%> (-72.41%)` | :arrow_down: | | [src/transformers/modeling\_rag.py](https://codecov.io/gh/huggingface/transformers/pull/7446/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19yYWcucHk=) | `25.39% <0.00%> (-51.59%)` | :arrow_down: | | [src/transformers/modeling\_tf\_bert.py](https://codecov.io/gh/huggingface/transformers/pull/7446/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9iZXJ0LnB5) | `65.26% <0.00%> (-33.64%)` | :arrow_down: | | [src/transformers/modeling\_marian.py](https://codecov.io/gh/huggingface/transformers/pull/7446/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19tYXJpYW4ucHk=) | `60.00% <0.00%> (-30.00%)` | :arrow_down: | | [src/transformers/modeling\_longformer.py](https://codecov.io/gh/huggingface/transformers/pull/7446/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19sb25nZm9ybWVyLnB5) | `74.14% <0.00%> (-18.70%)` | :arrow_down: | | [src/transformers/activations.py](https://codecov.io/gh/huggingface/transformers/pull/7446/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9hY3RpdmF0aW9ucy5weQ==) | `79.16% <0.00%> (-4.17%)` | :arrow_down: | | [src/transformers/tokenization\_utils\_base.py](https://codecov.io/gh/huggingface/transformers/pull/7446/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fdXRpbHNfYmFzZS5weQ==) | `90.12% <0.00%> (-3.79%)` | :arrow_down: | | ... and [8 more](https://codecov.io/gh/huggingface/transformers/pull/7446/diff?src=pr&el=tree-more) | | ------ [Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/7446?src=pr&el=continue). 
> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta) > `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data` > Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/7446?src=pr&el=footer). Last update [7dfdf79...6139c24](https://codecov.io/gh/huggingface/transformers/pull/7446?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments). <|||||>The slow tests are passing - I've also added a test for generation with checkpointing, although of course to be sure, one should also check the contents of the backwards pass.
transformers
7,445
closed
Add is_split_into_words as an argument to tokenize
# What does this PR do? Two calls to `self.tokenize` in `tokenization_utils.py` were missing the argument `is_split_into_words`. Since `is_split_into_words` is not present in the `kwargs`, `self.tokenize` resorts to the default behavior of not adding a space before every word. This PR fixes this issue by adding the missing arguments in the calls to self.tokenize(). ## Before submitting - [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section? - [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link to the it if that's the case. - [ ] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/master/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/master/docs#writing-source-documentation). - [ ] Did you write any new necessary tests? ## Environment info - `transformers` version: 3.3.0 - Platform: Linux-4.15.0-99-generic-x86_64-with-debian-buster-sid - Python version: 3.7.7 - PyTorch version (GPU?): 1.3.0 (True) - Tensorflow version (GPU?): not installed (NA) - Using GPU in script?:No - Using distributed or parallel set-up in script?: No ### Who can help tokenizers: @mfuntowicz Trainer: @sgugger (because this might be related to https://github.com/huggingface/transformers/pull/7236) ## Information `tokenizer.encode`, `tokenizer.encode_plus`, and `tokenizer.batch_encode_plus` ignore the flag `is_split_into_words`. ## To reproduce Steps to reproduce the behavior: ```py from transformers import RobertaTokenizer tokenizer = RobertaTokenizer.from_pretrained('roberta-large') print(tokenizer.encode("happened", is_split_into_words=True)) print(tokenizer.encode_plus("happened", is_split_into_words = True) ) print(tokenizer.batch_encode_plus(["happened"], is_split_into_words=True)) ``` ## Actual behavior The word ``happened`` is tokenized without a space prefix which would be expected because of `is_split_into_words=True`: ``` [0, 298, 3340, 4490, 2] {'input_ids': [0, 298, 3340, 4490, 2], 'attention_mask': [1, 1, 1, 1, 1]} {'input_ids': [[0, 298, 3340, 4490, 2]], 'attention_mask': [[1, 1, 1, 1, 1]]} ``` ## Expected behavior a space should be prefixed in front of "happened" before tokenization, giving the following outputs: ``` [0, 1102, 2] {'input_ids': [0, 1102, 2], 'attention_mask': [1, 1, 1]} {'input_ids': [[0, 1102, 2]], 'attention_mask': [[1, 1, 1]]} ```
09-29-2020 07:10:27
09-29-2020 07:10:27
The failing test `FAILED tests/test_pipelines.py::MonoColumnInputTestCase::test_tf_text_generation` passes locally for me.<|||||>The `is_split_into_words` flag means that instead of passing a string defining a sequence: `This happened to me`, you're instead passing an array of words: `['This', 'happened', 'to', 'me']`. If instead of passing strings to the tokenizers you passed an array of words, do you get the same behaviour? Something like: ```py from transformers import RobertaTokenizer tokenizer = RobertaTokenizer.from_pretrained('roberta-large') print(tokenizer.encode(["happened"], is_split_into_words=True)) print(tokenizer.encode_plus(["happened"], is_split_into_words = True)) print(tokenizer.batch_encode_plus([["happened"]], is_split_into_words=True)) ```<|||||>>If instead of passing strings to the tokenizers you passed an array of words, do you get the same behaviour? Ah yes, that works. I guess I was just using these functions wrong :) Thanks! <|||||>No worries, thanks a lot for opening a PR and proposing a fix!
transformers
7,444
closed
Update README.md
Hi, I just corrected the example code, added 2 links, and fixed some typos.

# What does this PR do?

<!-- Congratulations! You've made it this far! You're not quite done yet though. Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution. Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change. Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost. -->

<!-- Remove if not applicable -->

Fixes # (issue)

## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/master/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/master/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?

## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag members/contributors who may be interested in your PR.

<!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @ If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**. Please tag fewer than 3 people. albert, bert, GPT2, XLM: @LysandreJik tokenizers: @mfuntowicz Trainer: @sgugger Speed and Memory Benchmarks: @patrickvonplaten Model Cards: @julien-c Translation: @sshleifer Summarization: @sshleifer TextGeneration: @TevenLeScao examples/distillation: @VictorSanh nlp datasets: [different repo](https://github.com/huggingface/nlp) rust tokenizers: [different repo](https://github.com/huggingface/tokenizers) Text Generation: @TevenLeScao Blenderbot, Bart, Marian, Pegasus: @sshleifer T5: @patrickvonplaten Longformer/Reformer: @patrickvonplaten TransfoXL/XLNet: @TevenLeScao examples/seq2seq: @sshleifer examples/bert-loses-patience: @JetRunner tensorflow: @jplu examples/token-classification: @stefan-it documentation: @sgugger -->
09-29-2020 06:54:46
09-29-2020 06:54:46
# [Codecov](https://codecov.io/gh/huggingface/transformers/pull/7444?src=pr&el=h1) Report > Merging [#7444](https://codecov.io/gh/huggingface/transformers/pull/7444?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/74d8d69bd42c253c255dc69904ee1fbd1eece0cf?el=desc) will **increase** coverage by `0.92%`. > The diff coverage is `n/a`. [![Impacted file tree graph](https://codecov.io/gh/huggingface/transformers/pull/7444/graphs/tree.svg?width=650&height=150&src=pr&token=9qOlN6Hb1c)](https://codecov.io/gh/huggingface/transformers/pull/7444?src=pr&el=tree) ```diff @@ Coverage Diff @@ ## master #7444 +/- ## ========================================== + Coverage 77.73% 78.65% +0.92% ========================================== Files 181 181 Lines 35800 35800 ========================================== + Hits 27830 28160 +330 + Misses 7970 7640 -330 ``` | [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/7444?src=pr&el=tree) | Coverage Δ | | |---|---|---| | [...c/transformers/modeling\_tf\_transfo\_xl\_utilities.py](https://codecov.io/gh/huggingface/transformers/pull/7444/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl90cmFuc2ZvX3hsX3V0aWxpdGllcy5weQ==) | `10.00% <0.00%> (-76.00%)` | :arrow_down: | | [src/transformers/modeling\_tf\_xlnet.py](https://codecov.io/gh/huggingface/transformers/pull/7444/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl94bG5ldC5weQ==) | `20.85% <0.00%> (-71.41%)` | :arrow_down: | | [src/transformers/modeling\_tf\_transfo\_xl.py](https://codecov.io/gh/huggingface/transformers/pull/7444/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl90cmFuc2ZvX3hsLnB5) | `20.38% <0.00%> (-67.72%)` | :arrow_down: | | [src/transformers/tokenization\_xlm.py](https://codecov.io/gh/huggingface/transformers/pull/7444/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25feGxtLnB5) | `16.26% <0.00%> (-66.67%)` | :arrow_down: | | [src/transformers/modeling\_tf\_gpt2.py](https://codecov.io/gh/huggingface/transformers/pull/7444/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9ncHQyLnB5) | `65.38% <0.00%> (-29.59%)` | :arrow_down: | | [src/transformers/modeling\_roberta.py](https://codecov.io/gh/huggingface/transformers/pull/7444/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19yb2JlcnRhLnB5) | `81.51% <0.00%> (-15.11%)` | :arrow_down: | | [src/transformers/file\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/7444/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9maWxlX3V0aWxzLnB5) | `83.37% <0.00%> (+0.24%)` | :arrow_up: | | [src/transformers/modeling\_auto.py](https://codecov.io/gh/huggingface/transformers/pull/7444/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19hdXRvLnB5) | `87.04% <0.00%> (+1.03%)` | :arrow_up: | | [src/transformers/modeling\_tf\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/7444/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl91dGlscy5weQ==) | `86.58% <0.00%> (+1.59%)` | :arrow_up: | | [src/transformers/configuration\_rag.py](https://codecov.io/gh/huggingface/transformers/pull/7444/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9jb25maWd1cmF0aW9uX3JhZy5weQ==) | `100.00% <0.00%> (+2.22%)` | :arrow_up: | | ... and [8 more](https://codecov.io/gh/huggingface/transformers/pull/7444/diff?src=pr&el=tree-more) | | ------ [Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/7444?src=pr&el=continue). 
> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta) > `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data` > Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/7444?src=pr&el=footer). Last update [74d8d69...fb96b01](https://codecov.io/gh/huggingface/transformers/pull/7444?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).
transformers
7,443
closed
Error training GPT-2 from scratch on Hindi
I was trying to retrain GPT-2 from scratch. I was able to train a tokenizer but was facing issues while running run_language_modeling.py.

```
2020-09-29 06:00:14.450718: I tensorflow/stream_executor/platform/default/dso_loader.cc:48] Successfully opened dynamic library libcudart.so.10.1
/usr/local/lib/python3.6/dist-packages/transformers/training_args.py:299: FutureWarning: The `evaluate_during_training` argument is deprecated in favor of `evaluation_strategy` (which has more options)
  FutureWarning,
09/29/2020 06:00:16 - WARNING - __main__ - Process rank: -1, device: cuda:0, n_gpu: 1, distributed training: False, 16-bits training: False
09/29/2020 06:00:16 - INFO - __main__ - Training/evaluation parameters TrainingArguments(output_dir='/content/data', overwrite_output_dir=True, do_train=True, do_eval=True, do_predict=False, evaluate_during_training=True, evaluation_strategy=<EvaluationStrategy.STEPS: 'steps'>, prediction_loss_only=False, per_device_train_batch_size=2, per_device_eval_batch_size=2, per_gpu_train_batch_size=None, per_gpu_eval_batch_size=None, gradient_accumulation_steps=1, learning_rate=5e-05, weight_decay=0.0, adam_beta1=0.9, adam_beta2=0.999, adam_epsilon=1e-08, max_grad_norm=1.0, num_train_epochs=5.0, max_steps=-1, warmup_steps=0, logging_dir='runs/Sep29_06-00-16_fcba31604e1d', logging_first_step=False, logging_steps=500, save_steps=500, save_total_limit=2, no_cuda=False, seed=108, fp16=False, fp16_opt_level='O1', local_rank=-1, tpu_num_cores=None, tpu_metrics_debug=False, debug=False, dataloader_drop_last=False, eval_steps=500, dataloader_num_workers=0, past_index=-1, run_name=None, disable_tqdm=False, remove_unused_columns=True, label_names=None)
Traceback (most recent call last):
  File "/content/transformers/examples/language-modeling/run_language_modeling.py", line 313, in <module>
    main()
  File "/content/transformers/examples/language-modeling/run_language_modeling.py", line 205, in main
    tokenizer = AutoTokenizer.from_pretrained(model_args.tokenizer_name, cache_dir=model_args.cache_dir)
  File "/usr/local/lib/python3.6/dist-packages/transformers/tokenization_auto.py", line 251, in from_pretrained
    return tokenizer_class_py.from_pretrained(pretrained_model_name_or_path, *inputs, **kwargs)
  File "/usr/local/lib/python3.6/dist-packages/transformers/tokenization_utils_base.py", line 1428, in from_pretrained
    return cls._from_pretrained(*inputs, **kwargs)
  File "/usr/local/lib/python3.6/dist-packages/transformers/tokenization_utils_base.py", line 1575, in _from_pretrained
    tokenizer = cls(*init_inputs, **init_kwargs)
  File "/usr/local/lib/python3.6/dist-packages/transformers/tokenization_albert.py", line 155, in __init__
    self.sp_model.Load(vocab_file)
  File "/usr/local/lib/python3.6/dist-packages/sentencepiece.py", line 367, in Load
    return self.LoadFromFile(model_file)
  File "/usr/local/lib/python3.6/dist-packages/sentencepiece.py", line 177, in LoadFromFile
    return _sentencepiece.SentencePieceProcessor_LoadFromFile(self, arg)
TypeError: not a string
```

Here is the link to my colab file: https://colab.research.google.com/drive/1rWHwWCB_U_rTOnfGyXdgZ9Kb4HpbNdVs?usp=sharing
09-29-2020 06:15:41
09-29-2020 06:15:41
You seem to have a `"model_type": "albert"` in your config.json which should be a `gpt2`. Also, I would suggest using the Trainer directly instead of shelling out to `run_language_modeling.py`, as described in https://colab.research.google.com/github/huggingface/blog/blob/master/notebooks/01_how_to_train.ipynb<|||||>Finally, this question's probably better suited to the [Forum](http://discuss.huggingface.co/), please ask over there!
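As a rough illustration of what an explicit GPT-2 setup could look like, with paths and sizes as placeholders rather than values taken from this issue:

```python
from transformers import GPT2Config, GPT2TokenizerFast, GPT2LMHeadModel

# Hypothetical path and vocabulary size; adjust to your own tokenizer and corpus.
tokenizer = GPT2TokenizerFast.from_pretrained("path/to/hindi-tokenizer")
config = GPT2Config(vocab_size=tokenizer.vocab_size)
model = GPT2LMHeadModel(config)  # randomly initialized, to be trained from scratch
```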
transformers
7,442
closed
Setting up transformers/examples/seq2seq
@sshleifer I'm following the instructions in the README for seq2seq. In particular, I forked, cloned, and then ran "pip install -e ." But then when I tried to run finetune.sh, a number of libraries had not been installed. I had to just manually install them with pip (e.g. rouge_score, git, sacrebleu). Should these all have been included automatically? Or was there a better way to get them? Is this still the best way to go about finetuning a seq2seq model on a custom task (in my case I am doing T5)?
09-29-2020 05:38:53
09-29-2020 05:38:53
You can install those libraries by doing `pip install -r requirements.txt` in the `transformers/examples` folder!<|||||>Aha, thank you! Could I submit a pull request with updates to the README as I go through and encounter these little snags?

Also, is there a reason that the README in the examples/seq2seq directory doesn't show up here? https://huggingface.co/transformers/model_doc/t5.html#t5forconditionalgeneration I couldn't find the (quite helpful) README (in the examples/seq2seq dir) and example scripts for training a seq2seq model until I started exploring the github repository!<|||||>I think all we are missing is some pointer to examples/README.md. Would that have helped you?<|||||>I would add:
- mention both `pip install -e .` and `pip install -r requirements.txt`
- reference the seq2seq readme from the transformers T5 conditional generation page

The only other bug I've encountered so far, #7426, was fixed already! finetune.py ended up being a great script for me. But I wasn't sure if I needed to do any other special things. Is there a place where it would be good to write up the couple of steps I did need? For example:
- load the tokenizer/model and modify the vocab (special tokens), then resave them locally (then use load_from_pretrained() on these files) - see the sketch below
- prep data files ({val|test|train}.{source|target}) and put them into a directory
- modify the finetune.sh command

And I do have one other question: it seems sort of weird that these models (e.g. SummarizationModel) are hidden in the examples directory. Is it because you guys don't generally expect them to be subclassed? I would have expected them to be in, say, transformers/language_generation or similar.<|||||>`SummarizationModule` is a pytorch_lightning.Module. The models under src/ are `nn.Module`. The main package under `src/` does not depend on pytorch_lightning, or have scripts to train things. Everything that does is under examples/

You could write a forum post with the steps you needed: that would be super helpful! https://discuss.huggingface.co/<|||||>This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.
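A rough sketch of the vocabulary-modification step mentioned in the list above; the token strings, checkpoint, and output path are placeholders:

```python
from transformers import T5Tokenizer, T5ForConditionalGeneration

tokenizer = T5Tokenizer.from_pretrained("t5-base")
tokenizer.add_special_tokens({"additional_special_tokens": ["<hl>", "<sep>"]})  # placeholder tokens

model = T5ForConditionalGeneration.from_pretrained("t5-base")
model.resize_token_embeddings(len(tokenizer))  # make room for the new tokens

tokenizer.save_pretrained("t5_custom")  # then pass this local path to finetune.sh via --model_name_or_path
model.save_pretrained("t5_custom")
```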
transformers
7,441
closed
Faced the TypeError:forward() got an unexpected keyword argument 'output_all_encoded_layers'
## Environment info
- `transformers` version: 3.0.2
- Platform: Windows-10-10.0.18362-SP0
- Python version: 3.7.3
- PyTorch version (GPU?): 1.1.0 (True)
- Tensorflow version (GPU?): 2.2.0 (False)
- Using GPU in script?: <fill in>
- Using distributed or parallel set-up in script?: <fill in>

### Who can help
<!-- Your issue will be replied to more quickly if you can figure out the right person to tag with @ If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**. Please tag fewer than 3 people. Trainer: @sgugger TransfoXL/XLNet: @TevenLeScao -->

## Information
Model I am using: XLNet

The problem arises when using:
* [ ] my own modified scripts, based on the repo below: [chinese-bert-pytorch](https://github.com/649453932/Bert-Chinese-Text-Classification-Pytorch)

The task I am working on is: my own dataset, Chinese text classification

## To reproduce
Steps to reproduce the behavior:

1. First, load the XLNet model and tokenizer:

```python
from pytorch_transformers import XLNetModel, XLNetTokenizer, XLNetConfig

class Model(nn.Module):
    def __init__(self, config):
        super(Model, self).__init__()
        self.bert = XLNetModel.from_pretrained(config.bert_path)
        for param in self.bert.parameters():
            param.requires_grad = True
        self.fc = nn.Linear(config.hidden_size, config.num_classes)

    def forward(self, x):
        context = x[0]  # the input sentences
        mask = x[2]     # mask for the padding part, same size as the sentence; padding positions are 0, e.g. [1, 1, 1, 1, 0, 0]
        _, pooled = self.bert(context, attention_mask=mask, output_all_encoded_layers=False)
        out = self.fc(pooled)
        return out
```

2. Then start fine-tuning and evaluating:

```python
def train(config, model, model_name, train_iter, dev_iter, test_iter):
    model.train()
    param_optimizer = list(model.named_parameters())
    no_decay = ['bias', 'LayerNorm.bias', 'LayerNorm.weight']
    optimizer_grouped_parameters = [
        {'params': [p for n, p in param_optimizer if not any(nd in n for nd in no_decay)], 'weight_decay': 0.01},
        {'params': [p for n, p in param_optimizer if any(nd in n for nd in no_decay)], 'weight_decay': 0.0}]
    # optimizer = torch.optim.Adam(model.parameters(), lr=config.learning_rate)
    optimizer = BertAdam(optimizer_grouped_parameters, lr=config.learning_rate, warmup=0.05, t_total=len(train_iter) * config.num_epochs)
    total_batch = 0
    dev_best_loss = float('inf')
    last_improve = 0
    flag = False
    model.train()
    for epoch in range(config.num_epochs):
        print('Epoch [{}/{}]'.format(epoch + 1, config.num_epochs))
        for i, (trains, labels) in enumerate(train_iter):
            outputs = model(trains)
            model.zero_grad()
            loss = F.cross_entropy(outputs, labels)
            loss.backward()
            optimizer.step()
            if total_batch % 100 == 0:
                true = labels.data.cpu()
                predic = torch.max(outputs.data, 1)[1].cpu()
                train_acc = metrics.accuracy_score(true, predic)
                dev_acc, dev_loss = evaluate(config, model, dev_iter)
```

3. But an error occurred:

```
$ python run.py --model xlnet_base
Loading data...
401it [00:01, 225.08it/s]
140it [00:00, 260.37it/s]
135it [00:00, 240.91it/s]
Time usage: 0:00:03
Epoch [1/1]
Traceback (most recent call last):
  File "run.py", line 40, in <module>
    train(config, model, model_name, train_iter, dev_iter, test_iter)
  File "F:\PycharmProjects\Bert-Chinese-Text-Classification-Pytorch-master\train_eval.py", line 52, in train
    outputs = model(trains)
  File "D:\anaconda3\lib\site-packages\torch\nn\modules\module.py", line 493, in __call__
    result = self.forward(*input, **kwargs)
  File "F:\PycharmProjects\Bert-Chinese-Text-Classification-Pytorch-master\models\xlnet_base.py", line 47, in forward
    _, pooled = self.bert(context, attention_mask=mask, output_all_encoded_layers=False)
  File "D:\anaconda3\lib\site-packages\torch\nn\modules\module.py", line 493, in __call__
    result = self.forward(*input, **kwargs)
TypeError: forward() got an unexpected keyword argument 'output_all_encoded_layers'
```

4. I googled the error and found this [issue](https://github.com/huggingface/transformers/issues/3541) in transformers, so I changed the model loading code as below:

```python
class Model(nn.Module):
    def __init__(self, config):
        super(Model, self).__init__()
        model_config = XLNetConfig.from_pretrained(config.bert_path, output_hidden_states=False)
        self.bert = XLNetModel.from_pretrained(config.bert_path, config=model_config)
        for param in self.bert.parameters():
            param.requires_grad = True
        self.fc = nn.Linear(config.hidden_size, config.num_classes)

    def forward(self, x):
        context = x[0]
        mask = x[2]
        _, pooled = self.bert(context, attention_mask=mask, output_all_encoded_layers=False)
        out = self.fc(pooled)
        return out
```

But I still encounter the same problem and I don't know why. Hoping for your reply, thanks a lot!
09-29-2020 03:53:44
09-29-2020 03:53:44
I think you're using a script that's intended to be used with a different library. The argument `output_all_encoded_layers ` does not exist with `transformers`, it is named `output_hidden_states`.<|||||>Thanks a Lot, I will check it.<|||||>This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.
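For reference, a rough sketch of the adjusted call with `transformers`; the checkpoint name and pooling choice are assumptions, and note that `XLNetModel` does not return a pooled output, so pooling is done manually here:

```python
import torch
from transformers import XLNetModel, XLNetConfig, XLNetTokenizer

name = "hfl/chinese-xlnet-base"  # assumed checkpoint, not taken from this issue
tokenizer = XLNetTokenizer.from_pretrained(name)
config = XLNetConfig.from_pretrained(name, output_hidden_states=False)
model = XLNetModel.from_pretrained(name, config=config)

enc = tokenizer("这是一个测试", return_tensors="pt")  # "this is a test"
outputs = model(enc["input_ids"], attention_mask=enc["attention_mask"])  # no output_all_encoded_layers argument
last_hidden = outputs[0]            # (batch, seq_len, hidden); no pooled output is returned
pooled = last_hidden[:, -1]         # pool manually, e.g. take the final position or a mean over tokens
logits = torch.nn.Linear(config.d_model, 10)(pooled)  # 10 = example number of classes
```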
transformers
7,440
closed
creating readme for bert-base-mongolian-uncased
I am adding the model card for bert-base mongolian uncased. Can you review this for me please!
09-29-2020 03:31:25
09-29-2020 03:31:25
# [Codecov](https://codecov.io/gh/huggingface/transformers/pull/7440?src=pr&el=h1) Report > Merging [#7440](https://codecov.io/gh/huggingface/transformers/pull/7440?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/74d8d69bd42c253c255dc69904ee1fbd1eece0cf?el=desc) will **decrease** coverage by `0.88%`. > The diff coverage is `n/a`. [![Impacted file tree graph](https://codecov.io/gh/huggingface/transformers/pull/7440/graphs/tree.svg?width=650&height=150&src=pr&token=9qOlN6Hb1c)](https://codecov.io/gh/huggingface/transformers/pull/7440?src=pr&el=tree) ```diff @@ Coverage Diff @@ ## master #7440 +/- ## ========================================== - Coverage 77.73% 76.85% -0.89% ========================================== Files 181 181 Lines 35800 35800 ========================================== - Hits 27830 27513 -317 - Misses 7970 8287 +317 ``` | [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/7440?src=pr&el=tree) | Coverage Δ | | |---|---|---| | [src/transformers/configuration\_reformer.py](https://codecov.io/gh/huggingface/transformers/pull/7440/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9jb25maWd1cmF0aW9uX3JlZm9ybWVyLnB5) | `21.62% <0.00%> (-78.38%)` | :arrow_down: | | [src/transformers/modeling\_reformer.py](https://codecov.io/gh/huggingface/transformers/pull/7440/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19yZWZvcm1lci5weQ==) | `16.71% <0.00%> (-77.89%)` | :arrow_down: | | [src/transformers/modeling\_tf\_xlm.py](https://codecov.io/gh/huggingface/transformers/pull/7440/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl94bG0ucHk=) | `18.73% <0.00%> (-74.53%)` | :arrow_down: | | [src/transformers/modeling\_tf\_lxmert.py](https://codecov.io/gh/huggingface/transformers/pull/7440/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9seG1lcnQucHk=) | `22.14% <0.00%> (-72.41%)` | :arrow_down: | | [src/transformers/modeling\_tf\_flaubert.py](https://codecov.io/gh/huggingface/transformers/pull/7440/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9mbGF1YmVydC5weQ==) | `21.53% <0.00%> (-68.15%)` | :arrow_down: | | [src/transformers/tokenization\_utils\_base.py](https://codecov.io/gh/huggingface/transformers/pull/7440/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fdXRpbHNfYmFzZS5weQ==) | `90.12% <0.00%> (-3.52%)` | :arrow_down: | | [src/transformers/file\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/7440/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9maWxlX3V0aWxzLnB5) | `83.37% <0.00%> (+0.24%)` | :arrow_up: | | [src/transformers/modeling\_auto.py](https://codecov.io/gh/huggingface/transformers/pull/7440/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19hdXRvLnB5) | `87.04% <0.00%> (+1.03%)` | :arrow_up: | | [src/transformers/configuration\_rag.py](https://codecov.io/gh/huggingface/transformers/pull/7440/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9jb25maWd1cmF0aW9uX3JhZy5weQ==) | `100.00% <0.00%> (+2.22%)` | :arrow_up: | | [src/transformers/modeling\_tf\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/7440/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl91dGlscy5weQ==) | `87.22% <0.00%> (+2.23%)` | :arrow_up: | | ... and [8 more](https://codecov.io/gh/huggingface/transformers/pull/7440/diff?src=pr&el=tree-more) | | ------ [Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/7440?src=pr&el=continue). 
> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta) > `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data` > Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/7440?src=pr&el=footer). Last update [74d8d69...38f3ad5](https://codecov.io/gh/huggingface/transformers/pull/7440?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments). <|||||>@julien-c hi, I am wondering if you guys accepting new model cards?<|||||>If you'd like, it'd be awesome if you could add default input texts in Mongolian for https://github.com/huggingface/widgets-server/blob/master/DefaultWidget.ts (you can open a PR) so the inference widget on your model pages is correctly populated
transformers
7,439
closed
Creating readme for bert-base-mongolian-cased
I am adding pretrained BERT-base models to model hub. Please review this for me
09-29-2020 03:28:15
09-29-2020 03:28:15
# [Codecov](https://codecov.io/gh/huggingface/transformers/pull/7439?src=pr&el=h1) Report > Merging [#7439](https://codecov.io/gh/huggingface/transformers/pull/7439?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/74d8d69bd42c253c255dc69904ee1fbd1eece0cf?el=desc) will **increase** coverage by `0.99%`. > The diff coverage is `n/a`. [![Impacted file tree graph](https://codecov.io/gh/huggingface/transformers/pull/7439/graphs/tree.svg?width=650&height=150&src=pr&token=9qOlN6Hb1c)](https://codecov.io/gh/huggingface/transformers/pull/7439?src=pr&el=tree) ```diff @@ Coverage Diff @@ ## master #7439 +/- ## ========================================== + Coverage 77.73% 78.72% +0.99% ========================================== Files 181 181 Lines 35800 35800 ========================================== + Hits 27830 28185 +355 + Misses 7970 7615 -355 ``` | [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/7439?src=pr&el=tree) | Coverage Δ | | |---|---|---| | [src/transformers/modeling\_tf\_funnel.py](https://codecov.io/gh/huggingface/transformers/pull/7439/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9mdW5uZWwucHk=) | `18.53% <0.00%> (-75.51%)` | :arrow_down: | | [src/transformers/modeling\_tf\_electra.py](https://codecov.io/gh/huggingface/transformers/pull/7439/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9lbGVjdHJhLnB5) | `24.25% <0.00%> (-73.56%)` | :arrow_down: | | [src/transformers/modeling\_layoutlm.py](https://codecov.io/gh/huggingface/transformers/pull/7439/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19sYXlvdXRsbS5weQ==) | `25.06% <0.00%> (-69.40%)` | :arrow_down: | | [src/transformers/tokenization\_bart.py](https://codecov.io/gh/huggingface/transformers/pull/7439/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fYmFydC5weQ==) | `42.10% <0.00%> (-57.90%)` | :arrow_down: | | [src/transformers/activations\_tf.py](https://codecov.io/gh/huggingface/transformers/pull/7439/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9hY3RpdmF0aW9uc190Zi5weQ==) | `54.16% <0.00%> (-20.84%)` | :arrow_down: | | [src/transformers/configuration\_layoutlm.py](https://codecov.io/gh/huggingface/transformers/pull/7439/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9jb25maWd1cmF0aW9uX2xheW91dGxtLnB5) | `80.00% <0.00%> (-20.00%)` | :arrow_down: | | [src/transformers/modeling\_t5.py](https://codecov.io/gh/huggingface/transformers/pull/7439/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190NS5weQ==) | `76.79% <0.00%> (-6.04%)` | :arrow_down: | | [src/transformers/configuration\_bart.py](https://codecov.io/gh/huggingface/transformers/pull/7439/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9jb25maWd1cmF0aW9uX2JhcnQucHk=) | `90.00% <0.00%> (-4.00%)` | :arrow_down: | | [src/transformers/modeling\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/7439/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ191dGlscy5weQ==) | `87.05% <0.00%> (-0.54%)` | :arrow_down: | | [src/transformers/modeling\_bart.py](https://codecov.io/gh/huggingface/transformers/pull/7439/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19iYXJ0LnB5) | `94.27% <0.00%> (+0.16%)` | :arrow_up: | | ... and [15 more](https://codecov.io/gh/huggingface/transformers/pull/7439/diff?src=pr&el=tree-more) | | ------ [Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/7439?src=pr&el=continue). 
> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta) > `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data` > Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/7439?src=pr&el=footer). Last update [74d8d69...b3a55c8](https://codecov.io/gh/huggingface/transformers/pull/7439?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).
transformers
7,438
closed
CUDA out of memory (ALBERT) - run_squad.py ignores --per_gpu_train_batch_size
## Environment info <!-- You can run the command `transformers-cli env` and copy-and-paste its output below. Don't forget to fill out the missing fields in that output! --> - `transformers` version: 3.3.0 - Platform: Colab Pro (P100) / Anaconda (Windows / 2080 Ti) - Python version: 3.6 - PyTorch version (GPU?): 1.6.0 / CUDA 10.2 - Tensorflow version (GPU?): - Using GPU in script?: Yes - Using distributed or parallel set-up in script?: No ## Information Model I am using (Bert, XLNet ...): ALBERT The problem arises when using: * [x] the official example scripts: run_squad.py * [ ] my own modified scripts: (give details below) The tasks I am working on is: * [x] an official GLUE/SQUaD task: SQUaD 2.0 * [ ] my own task or dataset: (give details below) ## To reproduce Steps to reproduce the behavior: 1. Fine-tune ALBERT-xlarge or xxlarge and set --per_gpu_train_batch_size 8 or 10 2. try to finetune ``` !python transformers\examples\question-answering\run_squad.py \ --model_type albert \ --model_name_or_path albert-large-v2 \ --do_train \ --do_eval \ --do_lower_case \ --train_file train-v2.0.json \ --predict_file dev-v2.0.json \ --per_gpu_train_batch_size 8 \ --learning_rate 3e-5 \ --num_train_epochs 1.0 \ --max_seq_length 384 \ --doc_stride 128 \ --output_dir /content/model_output \ --save_steps 1000 \ --threads 4 \ --version_2_with_negative \ --overwrite_output_dir ``` ## Expected behavior The model should be training, but despite the 8gb limit there is an out of memory error: ``` RuntimeError: CUDA out of memory. Tried to allocate 36.00 MiB (GPU 0; 15.90 GiB total capacity; 15.01 GiB already allocated; 7.88 MiB free; 15.03 GiB reserved in total by PyTorch) ```
09-29-2020 00:25:30
09-29-2020 00:25:30
Do you get the same error when using a batch size of 1?<|||||>Thanks, with a batch size of 1 it works! I never thought that even the "A light BERT" models are so big. :)<|||||>The `large` "light BERT" model is quite large indeed ;) The `base` model is smaller if you want to use bigger batch sizes.
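A possible workaround sketch, keeping the effective batch size at 8 while lowering the per-step memory footprint. This assumes `run_squad.py` exposes the usual `--gradient_accumulation_steps` flag like the other example scripts; every other flag is taken from the command above:

```bash
python transformers/examples/question-answering/run_squad.py \
  --model_type albert \
  --model_name_or_path albert-large-v2 \
  --do_train --do_eval --do_lower_case \
  --train_file train-v2.0.json --predict_file dev-v2.0.json \
  --per_gpu_train_batch_size 1 \
  --gradient_accumulation_steps 8 \
  --learning_rate 3e-5 --num_train_epochs 1.0 \
  --max_seq_length 384 --doc_stride 128 \
  --output_dir /content/model_output --save_steps 1000 --threads 4 \
  --version_2_with_negative --overwrite_output_dir
```

With `--per_gpu_train_batch_size 1` the forward/backward pass fits in memory (as confirmed above), and accumulating gradients over 8 steps recovers the intended effective batch size of 8.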
transformers
7,437
closed
RAG Retriever (NameError: name 'load_dataset' is not defined in retrieval_rag.py)
## Environment info <!-- You can run the command `transformers-cli env` and copy-and-paste its output below. Don't forget to fill out the missing fields in that output! --> - `transformers` version: 3.3.0 - Platform: Linux-4.19.0-11-cloud-amd64-x86_64-with-debian-10.6 - Python version: 3.7.3 - PyTorch version (GPU?): 1.6.0+cpu (False) - Tensorflow version (GPU?): not installed (NA) - Using GPU in script?: NO - Using distributed or parallel set-up in script?: NO ### Who can help @sshleifer RAG model is not on the list, but this is summarization related ## Information Model I am using: RAG The problem arises when using: * [x] the official example scripts: (give details below) ``` python from transformers import RagTokenizer, RagRetriever, RagSequenceForGeneration import torch tokenizer = RagTokenizer.from_pretrained("facebook/rag-sequence-nq") retriever = RagRetriever.from_pretrained("facebook/rag-sequence-nq", index_name="exact", use_dummy_dataset=True) # initialize with RagRetriever to do everything in one forward call model = RagSequenceForGeneration.from_pretrained("facebook/rag-token-nq", retriever=retriever) ``` The tasks I am working on is: the model couldn't load, so it didn't perform any task ## To reproduce Steps to reproduce the behavior: 1. run the code ``` python from transformers import RagTokenizer, RagRetriever, RagSequenceForGeneration import torch tokenizer = RagTokenizer.from_pretrained("facebook/rag-sequence-nq") retriever = RagRetriever.from_pretrained("facebook/rag-sequence-nq", index_name="exact", use_dummy_dataset=True) # initialize with RagRetriever to do everything in one forward call model = RagSequenceForGeneration.from_pretrained("facebook/rag-token-nq", retriever=retriever) ``` ## Expected behavior The retriever should load without errors. Instead, a `NameError` is raised: `load_dataset` is not defined. ```python NameError Traceback (most recent call last) <ipython-input-6-752205d4a1c8> in <module> 3 4 tokenizer = RagTokenizer.from_pretrained("facebook/rag-sequence-nq") ----> 5 retriever = RagRetriever.from_pretrained("facebook/rag-sequence-nq", index_name="exact", use_dummy_dataset=True) 6 # initialize with RagRetriever to do everything in one forward call 7 model = RagSequenceForGeneration.from_pretrained("facebook/rag-token-nq", retriever=retriever) /mnt/disks/nlp/env_nlp_main/lib/python3.7/site-packages/transformers/retrieval_rag.py in from_pretrained(cls, retriever_name_or_path, **kwargs) 307 generator_tokenizer = rag_tokenizer.generator 308 return cls( --> 309 config, question_encoder_tokenizer=question_encoder_tokenizer, generator_tokenizer=generator_tokenizer 310 ) 311 /mnt/disks/nlp/env_nlp_main/lib/python3.7/site-packages/transformers/retrieval_rag.py in __init__(self, config, question_encoder_tokenizer, generator_tokenizer) 287 config.retrieval_vector_size, 288 config.index_path, --> 289 config.use_dummy_dataset, 290 ) 291 /mnt/disks/nlp/env_nlp_main/lib/python3.7/site-packages/transformers/retrieval_rag.py in __init__(self, dataset_name, dataset_split, index_name, vector_size, index_path, use_dummy_dataset) 218 219 logger.info("Loading passages from {}".format(self.dataset_name)) --> 220 self.dataset = load_dataset( 221 self.dataset_name, with_index=False, split=self.dataset_split, dummy=self.use_dummy_dataset 222 ) NameError: name 'load_dataset' is not defined ```
09-28-2020 21:52:19
09-28-2020 21:52:19
Try with `pip install transformers datasets faiss-cpu psutil` (or see the [requirements.txt](https://github.com/huggingface/transformers/blob/master/examples/rag/requirements.txt) file). Had the same issue and it fixed it for me.<|||||>This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.
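For reference, a minimal sanity check one could run before constructing the retriever, since the `NameError` above comes from the optional dependencies not being importable. Nothing here is RAG-specific; the package names are taken from the requirements file linked above:

```python
import importlib.util

for pkg in ("datasets", "faiss", "psutil"):
    found = importlib.util.find_spec(pkg) is not None
    print(f"{pkg}: {'ok' if found else 'MISSING - install it with pip'}")
```

If any of these print `MISSING`, `RagRetriever.from_pretrained(...)` will fail as shown in the traceback.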
transformers
7,436
closed
Create README.md
MagBERT-NER : Added widget (Text)
09-28-2020 21:06:11
09-28-2020 21:06:11
# [Codecov](https://codecov.io/gh/huggingface/transformers/pull/7436?src=pr&el=h1) Report > Merging [#7436](https://codecov.io/gh/huggingface/transformers/pull/7436?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/a1a8ffa5126ced93c12dfb677cbe3a069f48dcf3?el=desc) will **increase** coverage by `1.67%`. > The diff coverage is `n/a`. [![Impacted file tree graph](https://codecov.io/gh/huggingface/transformers/pull/7436/graphs/tree.svg?width=650&height=150&src=pr&token=9qOlN6Hb1c)](https://codecov.io/gh/huggingface/transformers/pull/7436?src=pr&el=tree) ```diff @@ Coverage Diff @@ ## master #7436 +/- ## ========================================== + Coverage 76.85% 78.52% +1.67% ========================================== Files 181 181 Lines 35800 35800 ========================================== + Hits 27513 28112 +599 + Misses 8287 7688 -599 ``` | [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/7436?src=pr&el=tree) | Coverage Δ | | |---|---|---| | [src/transformers/tokenization\_xlm.py](https://codecov.io/gh/huggingface/transformers/pull/7436/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25feGxtLnB5) | `16.26% <0.00%> (-66.67%)` | :arrow_down: | | [src/transformers/tokenization\_funnel.py](https://codecov.io/gh/huggingface/transformers/pull/7436/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fZnVubmVsLnB5) | `62.79% <0.00%> (-34.89%)` | :arrow_down: | | [src/transformers/modeling\_tf\_gpt2.py](https://codecov.io/gh/huggingface/transformers/pull/7436/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9ncHQyLnB5) | `65.38% <0.00%> (-29.59%)` | :arrow_down: | | [src/transformers/modeling\_xlnet.py](https://codecov.io/gh/huggingface/transformers/pull/7436/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ194bG5ldC5weQ==) | `60.70% <0.00%> (-22.68%)` | :arrow_down: | | [src/transformers/tokenization\_rag.py](https://codecov.io/gh/huggingface/transformers/pull/7436/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fcmFnLnB5) | `53.33% <0.00%> (-17.78%)` | :arrow_down: | | [src/transformers/modeling\_roberta.py](https://codecov.io/gh/huggingface/transformers/pull/7436/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19yb2JlcnRhLnB5) | `81.51% <0.00%> (-15.11%)` | :arrow_down: | | [src/transformers/modeling\_transfo\_xl\_utilities.py](https://codecov.io/gh/huggingface/transformers/pull/7436/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190cmFuc2ZvX3hsX3V0aWxpdGllcy5weQ==) | `52.98% <0.00%> (-13.44%)` | :arrow_down: | | [src/transformers/tokenization\_reformer.py](https://codecov.io/gh/huggingface/transformers/pull/7436/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fcmVmb3JtZXIucHk=) | `81.66% <0.00%> (-13.34%)` | :arrow_down: | | [src/transformers/modeling\_transfo\_xl.py](https://codecov.io/gh/huggingface/transformers/pull/7436/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190cmFuc2ZvX3hsLnB5) | `67.04% <0.00%> (-12.69%)` | :arrow_down: | | [src/transformers/generation\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/7436/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9nZW5lcmF0aW9uX3V0aWxzLnB5) | `96.36% <0.00%> (-0.56%)` | :arrow_down: | | ... and [6 more](https://codecov.io/gh/huggingface/transformers/pull/7436/diff?src=pr&el=tree-more) | | ------ [Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/7436?src=pr&el=continue). 
> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta) > `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data` > Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/7436?src=pr&el=footer). Last update [a1a8ffa...66b5582](https://codecov.io/gh/huggingface/transformers/pull/7436?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments). <|||||>done!<|||||>Thanks!
transformers
7,435
closed
[s2s] consistent output format across eval scripts
09-28-2020 20:22:50
09-28-2020 20:22:50
# [Codecov](https://codecov.io/gh/huggingface/transformers/pull/7435?src=pr&el=h1) Report > Merging [#7435](https://codecov.io/gh/huggingface/transformers/pull/7435?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/7f4115c0990b5121878e38069d386f168fac6b7b?el=desc) will **increase** coverage by `2.05%`. > The diff coverage is `n/a`. [![Impacted file tree graph](https://codecov.io/gh/huggingface/transformers/pull/7435/graphs/tree.svg?width=650&height=150&src=pr&token=9qOlN6Hb1c)](https://codecov.io/gh/huggingface/transformers/pull/7435?src=pr&el=tree) ```diff @@ Coverage Diff @@ ## master #7435 +/- ## ========================================== + Coverage 76.89% 78.94% +2.05% ========================================== Files 181 181 Lines 35800 35800 ========================================== + Hits 27530 28264 +734 + Misses 8270 7536 -734 ``` | [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/7435?src=pr&el=tree) | Coverage Δ | | |---|---|---| | [src/transformers/modeling\_rag.py](https://codecov.io/gh/huggingface/transformers/pull/7435/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19yYWcucHk=) | `25.32% <0.00%> (-51.72%)` | :arrow_down: | | [src/transformers/tokenization\_pegasus.py](https://codecov.io/gh/huggingface/transformers/pull/7435/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fcGVnYXN1cy5weQ==) | `46.03% <0.00%> (-49.21%)` | :arrow_down: | | [src/transformers/modeling\_tf\_distilbert.py](https://codecov.io/gh/huggingface/transformers/pull/7435/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9kaXN0aWxiZXJ0LnB5) | `64.69% <0.00%> (-34.60%)` | :arrow_down: | | [src/transformers/modeling\_transfo\_xl\_utilities.py](https://codecov.io/gh/huggingface/transformers/pull/7435/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190cmFuc2ZvX3hsX3V0aWxpdGllcy5weQ==) | `52.98% <0.00%> (-13.44%)` | :arrow_down: | | [src/transformers/modeling\_transfo\_xl.py](https://codecov.io/gh/huggingface/transformers/pull/7435/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190cmFuc2ZvX3hsLnB5) | `67.04% <0.00%> (-12.69%)` | :arrow_down: | | [src/transformers/tokenization\_auto.py](https://codecov.io/gh/huggingface/transformers/pull/7435/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fYXV0by5weQ==) | `83.58% <0.00%> (-8.96%)` | :arrow_down: | | [src/transformers/tokenization\_dpr.py](https://codecov.io/gh/huggingface/transformers/pull/7435/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fZHByLnB5) | `53.15% <0.00%> (-4.51%)` | :arrow_down: | | [src/transformers/configuration\_rag.py](https://codecov.io/gh/huggingface/transformers/pull/7435/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9jb25maWd1cmF0aW9uX3JhZy5weQ==) | `97.77% <0.00%> (-2.23%)` | :arrow_down: | | [src/transformers/modeling\_auto.py](https://codecov.io/gh/huggingface/transformers/pull/7435/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19hdXRvLnB5) | `86.01% <0.00%> (-1.04%)` | :arrow_down: | | [src/transformers/configuration\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/7435/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9jb25maWd1cmF0aW9uX3V0aWxzLnB5) | `96.68% <0.00%> (-0.67%)` | :arrow_down: | | ... and [6 more](https://codecov.io/gh/huggingface/transformers/pull/7435/diff?src=pr&el=tree-more) | | ------ [Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/7435?src=pr&el=continue). 
> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta) > `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data` > Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/7435?src=pr&el=footer). Last update [7f4115c...59133e7](https://codecov.io/gh/huggingface/transformers/pull/7435?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).
transformers
7,434
closed
Document new features of make fixup
# What does this PR do? This is a small follow-up on #7403 documenting the behavior it introduced, as instructed by @stas00.
09-28-2020 20:14:16
09-28-2020 20:14:16
# [Codecov](https://codecov.io/gh/huggingface/transformers/pull/7434?src=pr&el=h1) Report > Merging [#7434](https://codecov.io/gh/huggingface/transformers/pull/7434?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/a1a8ffa5126ced93c12dfb677cbe3a069f48dcf3?el=desc) will **increase** coverage by `1.15%`. > The diff coverage is `n/a`. [![Impacted file tree graph](https://codecov.io/gh/huggingface/transformers/pull/7434/graphs/tree.svg?width=650&height=150&src=pr&token=9qOlN6Hb1c)](https://codecov.io/gh/huggingface/transformers/pull/7434?src=pr&el=tree) ```diff @@ Coverage Diff @@ ## master #7434 +/- ## ========================================== + Coverage 76.85% 78.00% +1.15% ========================================== Files 181 181 Lines 35800 35800 ========================================== + Hits 27513 27926 +413 + Misses 8287 7874 -413 ``` | [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/7434?src=pr&el=tree) | Coverage Δ | | |---|---|---| | [src/transformers/tokenization\_fsmt.py](https://codecov.io/gh/huggingface/transformers/pull/7434/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fZnNtdC5weQ==) | `20.34% <0.00%> (-74.90%)` | :arrow_down: | | [src/transformers/tokenization\_xlm.py](https://codecov.io/gh/huggingface/transformers/pull/7434/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25feGxtLnB5) | `16.26% <0.00%> (-66.67%)` | :arrow_down: | | [src/transformers/modeling\_tf\_gpt2.py](https://codecov.io/gh/huggingface/transformers/pull/7434/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9ncHQyLnB5) | `65.38% <0.00%> (-29.59%)` | :arrow_down: | | [src/transformers/modeling\_xlnet.py](https://codecov.io/gh/huggingface/transformers/pull/7434/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ194bG5ldC5weQ==) | `60.70% <0.00%> (-22.68%)` | :arrow_down: | | [src/transformers/modeling\_roberta.py](https://codecov.io/gh/huggingface/transformers/pull/7434/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19yb2JlcnRhLnB5) | `81.51% <0.00%> (-15.11%)` | :arrow_down: | | [src/transformers/modeling\_transfo\_xl\_utilities.py](https://codecov.io/gh/huggingface/transformers/pull/7434/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190cmFuc2ZvX3hsX3V0aWxpdGllcy5weQ==) | `52.98% <0.00%> (-13.44%)` | :arrow_down: | | [src/transformers/modeling\_transfo\_xl.py](https://codecov.io/gh/huggingface/transformers/pull/7434/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190cmFuc2ZvX3hsLnB5) | `67.04% <0.00%> (-12.69%)` | :arrow_down: | | [src/transformers/generation\_tf\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/7434/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9nZW5lcmF0aW9uX3RmX3V0aWxzLnB5) | `80.95% <0.00%> (-5.27%)` | :arrow_down: | | [src/transformers/tokenization\_utils\_base.py](https://codecov.io/gh/huggingface/transformers/pull/7434/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fdXRpbHNfYmFzZS5weQ==) | `90.12% <0.00%> (-3.52%)` | :arrow_down: | | [src/transformers/generation\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/7434/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9nZW5lcmF0aW9uX3V0aWxzLnB5) | `96.36% <0.00%> (-0.56%)` | :arrow_down: | | ... and [6 more](https://codecov.io/gh/huggingface/transformers/pull/7434/diff?src=pr&el=tree-more) | | ------ [Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/7434?src=pr&el=continue). 
> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta) > `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data` > Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/7434?src=pr&el=footer). Last update [a1a8ffa...a4e4de6](https://codecov.io/gh/huggingface/transformers/pull/7434?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).
transformers
7,433
closed
Add a code of conduct
# What does this PR do? This PR adds a code of conduct to the project, inspired by the [Contributor Covenant](https://www.contributor-covenant.org/). To make it clearly visible it also adds: - a badge that displays under Transformers on the main README. - a link in the contributing guide.
09-28-2020 20:11:28
09-28-2020 20:11:28
# [Codecov](https://codecov.io/gh/huggingface/transformers/pull/7433?src=pr&el=h1) Report > Merging [#7433](https://codecov.io/gh/huggingface/transformers/pull/7433?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/a1a8ffa5126ced93c12dfb677cbe3a069f48dcf3?el=desc) will **decrease** coverage by `0.01%`. > The diff coverage is `n/a`. [![Impacted file tree graph](https://codecov.io/gh/huggingface/transformers/pull/7433/graphs/tree.svg?width=650&height=150&src=pr&token=9qOlN6Hb1c)](https://codecov.io/gh/huggingface/transformers/pull/7433?src=pr&el=tree) ```diff @@ Coverage Diff @@ ## master #7433 +/- ## ========================================== - Coverage 76.85% 76.84% -0.02% ========================================== Files 181 181 Lines 35800 35800 ========================================== - Hits 27513 27509 -4 - Misses 8287 8291 +4 ``` | [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/7433?src=pr&el=tree) | Coverage Δ | | |---|---|---| | [src/transformers/tokenization\_t5.py](https://codecov.io/gh/huggingface/transformers/pull/7433/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fdDUucHk=) | `66.34% <0.00%> (-28.85%)` | :arrow_down: | | [src/transformers/file\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/7433/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9maWxlX3V0aWxzLnB5) | `83.37% <0.00%> (+0.24%)` | :arrow_up: | | [src/transformers/tokenization\_mbart.py](https://codecov.io/gh/huggingface/transformers/pull/7433/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fbWJhcnQucHk=) | `96.82% <0.00%> (+39.68%)` | :arrow_up: | ------ [Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/7433?src=pr&el=continue). > **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta) > `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data` > Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/7433?src=pr&el=footer). Last update [a1a8ffa...f8d87a2](https://codecov.io/gh/huggingface/transformers/pull/7433?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments). <|||||>(before merging, let's check out the thread we had internally on this)
transformers
7,432
closed
Fine-tune BERTForMaskedLM
Hello, I am doing a project on spelling correction. I used the pre-trained "bert-base-cased" model. However, the results are not that accurate. Therefore, I planned to fine-tune BERT on the masked LM task. I couldn't find any examples of fine-tuning a BERT model for masked LM. I tried to use "run_language_modeling.py" for fine-tuning. But I came across the following error: ``` C:\Users\ravida6d\spell_correction\transformers\examples\language-modeling>python run_language_modeling.py --output_dir ="C:\\Users\\ravida6d\\spell_correction\\contextualSpellCheck\\fine_tune\\" --model_type = bert --model_name_or_path = bert-base-cased --do_train --train_data_file =$TRAIN_FILE --do_eval --eval_data_file =$TEST_FILE –mlm C:\Users\ravida6d\AppData\Local\Continuum\anaconda3\envs\contextualSpellCheck\lib\site-packages\transformers\training_args.py:291: FutureWarning: The `evaluate_during_training` argument is deprecated in favor of `evaluation_strategy` (which has more options) FutureWarning, Traceback (most recent call last): File "run_language_modeling.py", line 313, in <module> main() File "run_language_modeling.py", line 153, in main model_args, data_args, training_args = parser.parse_args_into_dataclasses() File "C:\Users\ravida6d\AppData\Local\Continuum\anaconda3\envs\contextualSpellCheck\lib\site-packages\transformers\hf_argparser.py", line 151, in parse_args_into_dataclasses raise ValueError(f"Some specified arguments are not used by the HfArgumentParser: {remaining_args}") ValueError: Some specified arguments are not used by the HfArgumentParser: ['bert', 'bert-base-cased'] ``` I don't understand how to use this script. Can anyone give some information on how to fine-tune BERT for masked LM?
09-28-2020 19:43:50
09-28-2020 19:43:50
Can you try removing spaces between `--model_type`, `=` and `bert`? Same for `--model_name_or_path `, `=` and `bert-base-cased`<|||||>@LysandreJik Yes, it works now. Thank you :). I tried the [example ](https://github.com/huggingface/transformers/tree/master/examples/language-modeling) as it is with the same dataset specified, but now I am facing a GPU out-of-memory issue. Do you know how I can change the batch size in "run_language_modeling.py"? Here is the snippet of the error: ```09/29/2020 13:11:35 - INFO - filelock - Lock 2508759984840 acquired on C:\\Users\\ravida6d\\Desktop\\spellcheck\\wikitext\cached_lm_BertTokenizer_510_wiki.train.raw.lock 09/29/2020 13:11:35 - INFO - filelock - Lock 2508759984840 released on C:\\Users\\ravida6d\\Desktop\\spellcheck\\wikitext\cached_lm_BertTokenizer_510_wiki.train.raw.lock 09/29/2020 13:11:35 - INFO - filelock - Lock 2508759984560 acquired on C:\\Users\\ravida6d\\Desktop\\spellcheck\\wikitext\cached_lm_BertTokenizer_510_wiki.test.raw.lock 09/29/2020 13:11:36 - INFO - filelock - Lock 2508759984560 released on C:\\Users\\ravida6d\\Desktop\\spellcheck\\wikitext\cached_lm_BertTokenizer_510_wiki.test.raw.lock C:\Users\ravida6d\AppData\Local\Continuum\anaconda3\envs\spellcheck\lib\site-packages\transformers\trainer.py:266: FutureWarning: Passing `prediction_loss_only` as a keyword argument is deprecated and won't be possible in a future version. Use `args.prediction_loss_only` instead. FutureWarning, You are instantiating a Trainer but Tensorboard is not installed. You should consider installing it. Epoch: 0%| | 0/3 [00:00<?, ?it/s] Iteration: 0%| | 0/583 [00:00<?, ?it/s] Iteration: 0%|▏ | 1/583 [00:01<11:16, 1.16s/it]Traceback (most recent call last): File "fine_tune.py", line 313, in <module> main() File "fine_tune.py", line 277, in main trainer.train(model_path=model_path) File "C:\Users\ravida6d\AppData\Local\Continuum\anaconda3\envs\spellcheck\lib\site-packages\transformers\trainer.py", line 755, in train tr_loss += self.training_step(model, inputs) File "C:\Users\ravida6d\AppData\Local\Continuum\anaconda3\envs\spellcheck\lib\site-packages\transformers\trainer.py", line 1081, in training_step loss.backward() File "C:\Users\ravida6d\AppData\Local\Continuum\anaconda3\envs\spellcheck\lib\site-packages\torch\tensor.py", line 198, in backward torch.autograd.backward(self, gradient, retain_graph, create_graph) File "C:\Users\ravida6d\AppData\Local\Continuum\anaconda3\envs\spellcheck\lib\site-packages\torch\autograd\__init__.py", line 100, in backward allow_unreachable=True) # allow_unreachable flag RuntimeError: CUDA out of memory. Tried to allocate 454.00 MiB (GPU 0; 11.00 GiB total capacity; 8.60 GiB already allocated; 132.32 MiB free; 8.70 GiB reserved in total by PyTorch) (malloc at ..\c10\cuda\CUDACachingAllocator.cpp:289) (no backtrace available) Epoch: 0%| | 0/3 [00:01<?, ?it/s] Iteration: 0%|▏ | 1/583 [00:01<13:12, 1.36s/it]``` I would also like to know which argument defines whether we are training from scratch or fine-tuning in "run_language_modeling.py".
I wanted to load the model using the AutoTokenizer.from_pretrained class method but I faced this error: ``` Traceback (most recent call last): File "C:/Users/ravida6d/Desktop/Darshan/spell_correction/contextualSpellCheck/contextualSpellCheck.py", line 587, in <module> checker = ContextualSpellCheck(model_name="C:/Users/ravida6d/Desktop/Darshan/spell_correction/contextualSpellCheck/pytorch_model.bin", debug=True, max_edit_dist=3) File "C:/Users/ravida6d/Desktop/Darshan/spell_correction/contextualSpellCheck/contextualSpellCheck.py", line 113, in __init__ self.BertTokenizer = AutoTokenizer.from_pretrained(self.model_name) File "C:\Users\ravida6d\AppData\Local\Continuum\anaconda3\envs\contextualSpellCheck\lib\site-packages\transformers\tokenization_auto.py", line 210, in from_pretrained config = AutoConfig.from_pretrained(pretrained_model_name_or_path, **kwargs) File "C:\Users\ravida6d\AppData\Local\Continuum\anaconda3\envs\contextualSpellCheck\lib\site-packages\transformers\configuration_auto.py", line 303, in from_pretrained config_dict, _ = PretrainedConfig.get_config_dict(pretrained_model_name_or_path, **kwargs) File "C:\Users\ravida6d\AppData\Local\Continuum\anaconda3\envs\contextualSpellCheck\lib\site-packages\transformers\configuration_utils.py", line 357, in get_config_dict config_dict = cls._dict_from_json_file(resolved_config_file) File "C:\Users\ravida6d\AppData\Local\Continuum\anaconda3\envs\contextualSpellCheck\lib\site-packages\transformers\configuration_utils.py", line 439, in _dict_from_json_file text = reader.read() File "C:\Users\ravida6d\AppData\Local\Continuum\anaconda3\envs\contextualSpellCheck\lib\codecs.py", line 321, in decode (result, consumed) = self._buffer_decode(data, self.errors, final) UnicodeDecodeError: 'utf-8' codec can't decode byte 0x80 in position 0: invalid start byte ``` Can you please help me with this? <|||||>I got it to work: the following files must be in the same folder, and the path should point to the folder (not to pytorch_model.bin): vocab.txt - vocabulary file pytorch_model.bin - the PyTorch-compatible (and converted) model config.json - json-based model configuration<|||||>While fine-tuning, we can only see the loss and perplexity, which is useful. Is it also possible to see the accuracy of the model, and TensorBoard logs, when using the "run_language_modeling.py" script? It would be really helpful if anyone could explain how the "loss" is calculated for the BertForMaskedLM task (as there are no labels provided while fine-tuning). <|||||>This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions. <|||||>Hi, how do I use spelling error correction with this repo? Could you please help me?
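To make the loss question concrete, here is a minimal sketch of the same mechanism that `run_language_modeling.py` uses (not its exact internals): the data collator copies the input ids into `labels`, replaces ~15% of the inputs with `[MASK]`, and sets every non-masked label position to -100, so only masked tokens contribute to the cross-entropy loss returned by `BertForMaskedLM`. That is why no explicit labels need to be supplied when fine-tuning.

```python
from transformers import BertTokenizerFast, BertForMaskedLM, DataCollatorForLanguageModeling

tokenizer = BertTokenizerFast.from_pretrained("bert-base-cased")
model = BertForMaskedLM.from_pretrained("bert-base-cased")

# The collator builds (input_ids, labels): masked positions keep their true id as the label,
# all other positions get -100 and are ignored by the loss.
collator = DataCollatorForLanguageModeling(tokenizer=tokenizer, mlm=True, mlm_probability=0.15)

example = tokenizer("The quick brown fox jumps over the lazy dog.", return_tensors="pt")["input_ids"].squeeze(0)
batch = collator([example])

outputs = model(input_ids=batch["input_ids"], labels=batch["labels"])
print(outputs[0])  # cross-entropy loss computed over the masked positions only
```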
transformers
7,431
closed
Add automatic best model loading to Trainer
# What does this PR do? This PR cleans up a bit the part that saves the training state inside `Trainer` and adds an API that can track which was the best model during any of the evaluation phases to load it back at the end. When fine-tuning a model on a dataset that can easily overfit the model, it's quite common to have the last model not be the best one (in terms of metrics). This PR adds a `TrainingArgument` named `load_best_model_at_end` that triggers the following behavior: - `save_steps` gets ignored and the model is saved every time there is an evaluation (determined by `evaluation_strategy` and `eval_steps`) - It keeps track in a `TrainerState` of when the best model was encountered (that state is saved along the checkpoints so it can work with resuming a training) - The best model is determined by the new `TrainingArgument`s `metric_for_best_model` (defaults to the loss) and `greater_is_better` (default to False for the loss, True otherwise). - The best model is loaded once the training is finished. In passing I've added some tests of the saving API in Trainer and made sure it can handle both `PreTrainedModel` and regular `nn.Module` (a feature asked in #6901). Both are now tested in the CI, as is the new API. Fixes #6901 Those newly introduced arguments and APIs can then be leveraged to have early stopping supported in `Trainer`.
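As a usage sketch, with the argument names introduced in this PR (the `"steps"` strategy and the `"eval_loss"` metric key are only illustrative values):

```python
from transformers import TrainingArguments

args = TrainingArguments(
    output_dir="./results",
    evaluation_strategy="steps",        # evaluate every `eval_steps`
    eval_steps=500,                     # a checkpoint is saved at each evaluation
    load_best_model_at_end=True,        # reload the best checkpoint once training finishes
    metric_for_best_model="eval_loss",  # defaults to the loss if unset
    greater_is_better=False,            # False for losses, True for accuracy-like metrics
)
```

A `Trainer` built with these arguments tracks the best evaluation in its `TrainerState`, so the behavior also survives resuming from a checkpoint.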
09-28-2020 19:41:37
09-28-2020 19:41:37
# [Codecov](https://codecov.io/gh/huggingface/transformers/pull/7431?src=pr&el=h1) Report > Merging [#7431](https://codecov.io/gh/huggingface/transformers/pull/7431?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/f62f2ffdcc2df75cf01438bebc7ae281d921d21d?el=desc) will **increase** coverage by `0.53%`. > The diff coverage is `76.74%`. [![Impacted file tree graph](https://codecov.io/gh/huggingface/transformers/pull/7431/graphs/tree.svg?width=650&height=150&src=pr&token=9qOlN6Hb1c)](https://codecov.io/gh/huggingface/transformers/pull/7431?src=pr&el=tree) ```diff @@ Coverage Diff @@ ## master #7431 +/- ## ========================================== + Coverage 78.17% 78.71% +0.53% ========================================== Files 181 181 Lines 35800 35858 +58 ========================================== + Hits 27986 28224 +238 + Misses 7814 7634 -180 ``` | [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/7431?src=pr&el=tree) | Coverage Δ | | |---|---|---| | [src/transformers/trainer.py](https://codecov.io/gh/huggingface/transformers/pull/7431/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90cmFpbmVyLnB5) | `63.23% <73.43%> (+7.80%)` | :arrow_up: | | [src/transformers/trainer\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/7431/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90cmFpbmVyX3V0aWxzLnB5) | `63.30% <80.00%> (+2.66%)` | :arrow_up: | | [src/transformers/training\_args.py](https://codecov.io/gh/huggingface/transformers/pull/7431/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90cmFpbmluZ19hcmdzLnB5) | `91.72% <100.00%> (+0.45%)` | :arrow_up: | | [src/transformers/modeling\_tf\_electra.py](https://codecov.io/gh/huggingface/transformers/pull/7431/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9lbGVjdHJhLnB5) | `24.25% <0.00%> (-73.56%)` | :arrow_down: | | [src/transformers/modeling\_tf\_lxmert.py](https://codecov.io/gh/huggingface/transformers/pull/7431/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9seG1lcnQucHk=) | `22.14% <0.00%> (-72.41%)` | :arrow_down: | | [src/transformers/modeling\_rag.py](https://codecov.io/gh/huggingface/transformers/pull/7431/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19yYWcucHk=) | `25.32% <0.00%> (-51.72%)` | :arrow_down: | | [src/transformers/modeling\_tf\_xlm.py](https://codecov.io/gh/huggingface/transformers/pull/7431/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl94bG0ucHk=) | `58.52% <0.00%> (-34.74%)` | :arrow_down: | | [src/transformers/tokenization\_t5.py](https://codecov.io/gh/huggingface/transformers/pull/7431/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fdDUucHk=) | `61.53% <0.00%> (-33.66%)` | :arrow_down: | | [src/transformers/modeling\_marian.py](https://codecov.io/gh/huggingface/transformers/pull/7431/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19tYXJpYW4ucHk=) | `60.00% <0.00%> (-30.00%)` | :arrow_down: | | [src/transformers/modeling\_longformer.py](https://codecov.io/gh/huggingface/transformers/pull/7431/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19sb25nZm9ybWVyLnB5) | `74.14% <0.00%> (-18.70%)` | :arrow_down: | | ... and [25 more](https://codecov.io/gh/huggingface/transformers/pull/7431/diff?src=pr&el=tree-more) | | ------ [Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/7431?src=pr&el=continue). 
> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta) > `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data` > Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/7431?src=pr&el=footer). Last update [f62f2ff...738935a](https://codecov.io/gh/huggingface/transformers/pull/7431?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments). <|||||>IMO this closes #4186<|||||>@sgugger how does this work together with `save_total_limit`? If it is set, might the best model get deleted? well - see here https://github.com/huggingface/transformers/issues/7556<|||||>The best model is not deleted with `save_total_limit`. It is always put at the top of the list after sorting the checkpoints.
transformers
7,430
closed
import error in version 3.3.0, conflict with local directory "datasets"
## Environment info <!-- You can run the command `transformers-cli env` and copy-and-paste its output below. Don't forget to fill out the missing fields in that output! --> - `transformers` version: 3.3.0 - Platform: Google Colab Model I am using: Bert ## To reproduce Steps to reproduce the behavior: Traceback (most recent call last): File "train.py", line 19, in <module> from mydataset import load_data,dist_load_data,load_data2 File "/content/drive/My Drive/mrc4ner/mydataset.py", line 5, in <module> from transformers import BertTokenizer File "/usr/local/lib/python3.6/dist-packages/transformers/__init__.py", line 22, in <module> from .integrations import ( # isort:skip File "/usr/local/lib/python3.6/dist-packages/transformers/integrations.py", line 42, in <module> from .trainer_utils import PREFIX_CHECKPOINT_DIR, BestRun # isort:skip File "/usr/local/lib/python3.6/dist-packages/transformers/trainer_utils.py", line 6, in <module> from .file_utils import is_tf_available, is_torch_available, is_torch_tpu_available File "/usr/local/lib/python3.6/dist-packages/transformers/file_utils.py", line 72, in <module> logger.debug(f"Succesfully imported datasets version {datasets.__version__}") AttributeError: module 'datasets' has no attribute '__version__' ## Expected behavior <!-- A clear and concise description of what you would expect to happen. --> My code worked well before, and there is a "datasets" folder in my working directory. When my transformers version was upgraded to 3.3.0, I got this error. If I change the name of the folder "datasets" or downgrade transformers to version 3.2.0, the error is fixed. Is this a bug? It doesn't allow me to use "datasets" as a folder name.
09-28-2020 17:59:00
09-28-2020 17:59:00
Sadly, that is how Python works: it will try to import the datasets library from a local folder if you have a folder with this name in the directory you are working in. However, this should only happen if there is an `__init__.py` in your folder named datasets. Removing that file should then solve the bug.<|||||>This change just broke [DeepChem](https://github.com/deepchem/deepchem). In the short term we can work around it by pinning to an older version, but that's not a reasonable long term solution. Directories called "datasets" are very common, and this will impact a lot of people. Using a common, generic word as the top level package violates the [PEP 423](https://www.python.org/dev/peps/pep-0423/) guidelines for package naming.<|||||>Indeed, we are working on a fix and will release soon.<|||||>Great, thanks!<|||||>The patched release is on PyPI, tell us if you have any issue.<|||||>Works perfectly. Thanks so much for the super fast fix!
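A quick way to see which `datasets` Python actually resolves (standard import machinery, nothing transformers-specific; the paths in the comments are illustrative):

```python
import datasets

print(datasets.__file__)
# .../site-packages/datasets/__init__.py -> the PyPI `datasets` library is being used
# .../your_project/datasets/__init__.py  -> a local folder is shadowing it
```

If the second case shows up, renaming the local folder, removing its `__init__.py`, or running the script from a different working directory avoids the conflict.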
transformers
7,429
closed
Update README.md
Add links to models fine-tuned on a downstream task # What does this PR do? <!-- Congratulations! You've made it this far! You're not quite done yet though. Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution. Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change. Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost. --> <!-- Remove if not applicable --> Fixes # (issue) ## Before submitting - [x] This PR fixes a typo or improves the docs (you can dimiss the other checks if that's the case). - [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section? - [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link to the it if that's the case. - [ ] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/master/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/master/docs#writing-source-documentation). - [ ] Did you write any new necessary tests? ## Who can review? Anyone in the community is free to review the PR once the tests have passed. Feel free to tag members/contributors which may be interested in your PR. <!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @ If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**. Please tag fewer than 3 people. albert, bert, GPT2, XLM: @LysandreJik tokenizers: @mfuntowicz Trainer: @sgugger Speed and Memory Benchmarks: @patrickvonplaten Model Cards: @julien-c Translation: @sshleifer Summarization: @sshleifer TextGeneration: @TevenLeScao examples/distillation: @VictorSanh nlp datasets: [different repo](https://github.com/huggingface/nlp) rust tokenizers: [different repo](https://github.com/huggingface/tokenizers) Text Generation: @TevenLeScao Blenderbot, Bart, Marian, Pegasus: @sshleifer T5: @patrickvonplaten Longformer/Reformer: @patrickvonplaten TransfoXL/XLNet: @TevenLeScao examples/seq2seq: @sshleifer examples/bert-loses-patience: @JetRunner tensorflow: @jplu examples/token-classification: @stefan-it documentation: @sgugger -->
09-28-2020 17:14:19
09-28-2020 17:14:19
# [Codecov](https://codecov.io/gh/huggingface/transformers/pull/7429?src=pr&el=h1) Report > Merging [#7429](https://codecov.io/gh/huggingface/transformers/pull/7429?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/f62f2ffdcc2df75cf01438bebc7ae281d921d21d?el=desc) will **decrease** coverage by `1.31%`. > The diff coverage is `n/a`. [![Impacted file tree graph](https://codecov.io/gh/huggingface/transformers/pull/7429/graphs/tree.svg?width=650&height=150&src=pr&token=9qOlN6Hb1c)](https://codecov.io/gh/huggingface/transformers/pull/7429?src=pr&el=tree) ```diff @@ Coverage Diff @@ ## master #7429 +/- ## ========================================== - Coverage 78.17% 76.85% -1.32% ========================================== Files 181 181 Lines 35800 35800 ========================================== - Hits 27986 27515 -471 - Misses 7814 8285 +471 ``` | [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/7429?src=pr&el=tree) | Coverage Δ | | |---|---|---| | [src/transformers/configuration\_reformer.py](https://codecov.io/gh/huggingface/transformers/pull/7429/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9jb25maWd1cmF0aW9uX3JlZm9ybWVyLnB5) | `21.62% <0.00%> (-78.38%)` | :arrow_down: | | [src/transformers/modeling\_reformer.py](https://codecov.io/gh/huggingface/transformers/pull/7429/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19yZWZvcm1lci5weQ==) | `16.71% <0.00%> (-77.89%)` | :arrow_down: | | [src/transformers/modeling\_tf\_xlm.py](https://codecov.io/gh/huggingface/transformers/pull/7429/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl94bG0ucHk=) | `18.73% <0.00%> (-74.53%)` | :arrow_down: | | [src/transformers/modeling\_tf\_lxmert.py](https://codecov.io/gh/huggingface/transformers/pull/7429/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9seG1lcnQucHk=) | `22.14% <0.00%> (-72.41%)` | :arrow_down: | | [src/transformers/modeling\_tf\_flaubert.py](https://codecov.io/gh/huggingface/transformers/pull/7429/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9mbGF1YmVydC5weQ==) | `21.53% <0.00%> (-68.15%)` | :arrow_down: | | [src/transformers/modeling\_marian.py](https://codecov.io/gh/huggingface/transformers/pull/7429/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19tYXJpYW4ucHk=) | `60.00% <0.00%> (-30.00%)` | :arrow_down: | | [src/transformers/modeling\_longformer.py](https://codecov.io/gh/huggingface/transformers/pull/7429/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19sb25nZm9ybWVyLnB5) | `74.14% <0.00%> (-18.70%)` | :arrow_down: | | [src/transformers/tokenization\_openai.py](https://codecov.io/gh/huggingface/transformers/pull/7429/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fb3BlbmFpLnB5) | `71.21% <0.00%> (-12.88%)` | :arrow_down: | | [src/transformers/activations.py](https://codecov.io/gh/huggingface/transformers/pull/7429/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9hY3RpdmF0aW9ucy5weQ==) | `79.16% <0.00%> (-4.17%)` | :arrow_down: | | [src/transformers/generation\_tf\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/7429/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9nZW5lcmF0aW9uX3RmX3V0aWxzLnB5) | `84.46% <0.00%> (-1.51%)` | :arrow_down: | | ... and [18 more](https://codecov.io/gh/huggingface/transformers/pull/7429/diff?src=pr&el=tree-more) | | ------ [Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/7429?src=pr&el=continue). 
> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta) > `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data` > Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/7429?src=pr&el=footer). Last update [f62f2ff...1defbb1](https://codecov.io/gh/huggingface/transformers/pull/7429?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments). <|||||>There should be a special metadata block for this at some point! ("Fine-tune button" instead of GitHub's "Fork")
transformers
7,428
closed
Train T5 in Tensorflow 2 Community Notebook
# What does this PR do? This adds a link to **Training T5 in Tensorflow 2 Community Notebook** under the notebooks/Readme.md community notebook section. This notebook demonstrates how to train T5 for any task using Tensorflow 2. It trains a question & answer task implemented in Tensorflow 2 using SQuAD. ## Before submitting - [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case). **Yes** - [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section? **Yes** - [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link to it if that's the case. [Forum Link](https://discuss.huggingface.co/t/how-to-train-t5-with-tensorflow/641) - [ ] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/master/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/master/docs#writing-source-documentation). **Not Applicable** - [ ] Did you write any new necessary tests? **Not Applicable** ## Who can review? @patrickvonplaten @jplu
09-28-2020 16:53:05
09-28-2020 16:53:05
Hello! Thanks a lot for your awesome notebook! Just two tiny updates, can you clean your list of imports and use `datasets` instead of `tfds`?<|||||>@jplu much appreciated. i will clean up the imports. by datasets do you mean an alias for tensorflow datasets instead of tfds?<|||||>No, I mean using https://github.com/huggingface/datasets instead.<|||||>@jplu i have done the necessary changes. I have also switched to [datasets](https://github.com/huggingface/datasets) as requested. for this i have created a [new colab notebook](https://colab.research.google.com/github/snapthat/TF-T5-text-to-text/blob/master/snapthatT5/notebooks/TF-T5-Datasets%20Training.ipynb) which uses datasets as its primary source.<|||||># [Codecov](https://codecov.io/gh/huggingface/transformers/pull/7428?src=pr&el=h1) Report > Merging [#7428](https://codecov.io/gh/huggingface/transformers/pull/7428?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/f62f2ffdcc2df75cf01438bebc7ae281d921d21d?el=desc) will **increase** coverage by `0.48%`. > The diff coverage is `n/a`. [![Impacted file tree graph](https://codecov.io/gh/huggingface/transformers/pull/7428/graphs/tree.svg?width=650&height=150&src=pr&token=9qOlN6Hb1c)](https://codecov.io/gh/huggingface/transformers/pull/7428?src=pr&el=tree) ```diff @@ Coverage Diff @@ ## master #7428 +/- ## ========================================== + Coverage 78.17% 78.66% +0.48% ========================================== Files 181 181 Lines 35800 35800 ========================================== + Hits 27986 28161 +175 + Misses 7814 7639 -175 ``` | [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/7428?src=pr&el=tree) | Coverage Δ | | |---|---|---| | [src/transformers/modeling\_tf\_electra.py](https://codecov.io/gh/huggingface/transformers/pull/7428/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9lbGVjdHJhLnB5) | `24.25% <0.00%> (-73.56%)` | :arrow_down: | | [src/transformers/modeling\_tf\_lxmert.py](https://codecov.io/gh/huggingface/transformers/pull/7428/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9seG1lcnQucHk=) | `22.14% <0.00%> (-72.41%)` | :arrow_down: | | [src/transformers/modeling\_rag.py](https://codecov.io/gh/huggingface/transformers/pull/7428/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19yYWcucHk=) | `25.32% <0.00%> (-51.72%)` | :arrow_down: | | [src/transformers/modeling\_tf\_xlm.py](https://codecov.io/gh/huggingface/transformers/pull/7428/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl94bG0ucHk=) | `58.52% <0.00%> (-34.74%)` | :arrow_down: | | [src/transformers/modeling\_marian.py](https://codecov.io/gh/huggingface/transformers/pull/7428/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19tYXJpYW4ucHk=) | `60.00% <0.00%> (-30.00%)` | :arrow_down: | | [src/transformers/modeling\_longformer.py](https://codecov.io/gh/huggingface/transformers/pull/7428/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19sb25nZm9ybWVyLnB5) | `74.14% <0.00%> (-18.70%)` | :arrow_down: | | [src/transformers/tokenization\_auto.py](https://codecov.io/gh/huggingface/transformers/pull/7428/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fYXV0by5weQ==) | `83.58% <0.00%> (-8.96%)` | :arrow_down: | | [src/transformers/activations.py](https://codecov.io/gh/huggingface/transformers/pull/7428/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9hY3RpdmF0aW9ucy5weQ==) | `79.16% <0.00%> (-4.17%)` | :arrow_down: | | 
[src/transformers/configuration\_rag.py](https://codecov.io/gh/huggingface/transformers/pull/7428/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9jb25maWd1cmF0aW9uX3JhZy5weQ==) | `97.77% <0.00%> (-2.23%)` | :arrow_down: | | [src/transformers/modeling\_bart.py](https://codecov.io/gh/huggingface/transformers/pull/7428/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19iYXJ0LnB5) | `94.10% <0.00%> (-0.51%)` | :arrow_down: | | ... and [19 more](https://codecov.io/gh/huggingface/transformers/pull/7428/diff?src=pr&el=tree-more) | | ------ [Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/7428?src=pr&el=continue). > **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta) > `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data` > Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/7428?src=pr&el=footer). Last update [f62f2ff...d9df829](https://codecov.io/gh/huggingface/transformers/pull/7428?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments). <|||||>Awesome!! Thanks a lot for your notebook!!<|||||>Thanks a lot @HarrisDePerceptron the community has been asking about such a notebook for a long time :-) <|||||>Thanks @patrickvonplaten !! it was long over due :)
transformers
7,427
closed
Problem while using tokenizer.encode_plus for sentence pairs
# ❓ Questions & Help <!-- The GitHub issue tracker is primarly intended for bugs, feature requests, new models and benchmarks, and migration questions. For all other questions, we direct you to the Hugging Face forum: https://discuss.huggingface.co/ . You can also try Stack Overflow (SO) where a whole community of PyTorch and Tensorflow enthusiast can help you out. In this case, make sure to tag your question with the right deep learning framework as well as the huggingface-transformers tag: https://stackoverflow.com/questions/tagged/huggingface-transformers --> ## Details Hi, thanks for this great work. I was trying to use tokenizer.encode_plus to encode sentence pairs. The code looks like ```py training_encoded_dict = tokenizer.encode_plus( seq0, seq1, add_special_tokens = True, max_length = 256, truncation_strategy = 'only_second', pad_to_max_length = True, return_attention_mask = True, return_token_type_ids = True, return_tensors = 'pt', ) ``` However, the error looks like ``` --------------------------------------------------------------------------- AssertionError Traceback (most recent call last) <ipython-input-17-ae3ce93d62ba> in <module>() 24 return_attention_mask = True, 25 return_token_type_ids = True, ---> 26 return_tensors = 'pt', 27 28 ) 2 frames /usr/local/lib/python3.6/dist-packages/transformers/tokenization_utils.py in truncate_sequences(self, ids, pair_ids, num_tokens_to_remove, truncation_strategy, stride) 2068 ids = ids[:-num_tokens_to_remove] 2069 elif truncation_strategy == "only_second": -> 2070 assert pair_ids is not None and len(pair_ids) > num_tokens_to_remove 2071 window_len = min(len(pair_ids), stride + num_tokens_to_remove) 2072 overflowing_tokens = pair_ids[-window_len:] AssertionError: ``` This issue doesn't occur when using truncation_strategy = 'longest_first', but it happens with other truncation strategies such as 'only_second' and 'only_first'. I was wondering whether anyone has the same issue or has any idea how to fix it? Thanks a lot in advance. **A link to original question on the forum/Stack Overflow**:
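For what it's worth, the assertion that fails here checks that the *second* sequence alone is long enough to absorb the whole overflow: with `truncation_strategy='only_second'` all tokens over `max_length` must be removed from `seq1`. A rough illustration of the failing condition (the token counts below are made up):

```python
max_length = 256
len_first, len_second, num_special = 300, 20, 3      # illustrative counts; 3 specials for a BERT-style pair
num_tokens_to_remove = len_first + len_second + num_special - max_length  # 67 tokens must go

# 'only_second' asserts len_second > num_tokens_to_remove, i.e. seq1 must absorb the overflow alone.
print(len_second > num_tokens_to_remove)  # False -> the bare AssertionError above
```

So the error is expected whenever the second text is shorter than the required truncation; either pre-truncate the first text or fall back to `'longest_first'` for such pairs (`'longest_first'` may shorten both sequences, which is why it works).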
09-28-2020 16:28:51
09-28-2020 16:28:51
Could you provide some `seq0` and `seq1` values so that we may investigate further?<|||||>This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.
transformers
7,426
closed
[T5] Automatic setting of decoder_input_ids is misleading and does not correspond to the expected behavior of T5
These lines: https://github.com/huggingface/transformers/blob/f62f2ffdcc2df75cf01438bebc7ae281d921d21d/src/transformers/modeling_t5.py#L1020 were added in this PR: https://github.com/huggingface/transformers/pull/5518. @mfuntowicz - do we need this hack to make ONNX work? I would prefer to revert this change. The lines do not make much sense IMO and should be deleted. Also, the T5 error message when only `input_ids` are passed should be updated to be clearer. And the docs should be cleaned as well: https://huggingface.co/transformers/model_doc/t5.html#t5model. Also see: https://github.com/huggingface/transformers/issues/7358
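For context, a minimal sketch of the behavior users actually expect (transformers ~3.3 API assumed; `t5-small` and the texts are just examples): in the seq2seq path, `decoder_input_ids` are derived from `labels` by shifting right, rather than being silently copied from `input_ids`.

```python
from transformers import T5Tokenizer, T5ForConditionalGeneration

tokenizer = T5Tokenizer.from_pretrained("t5-small")
model = T5ForConditionalGeneration.from_pretrained("t5-small")

input_ids = tokenizer("translate English to German: The house is wonderful.", return_tensors="pt")["input_ids"]
labels = tokenizer("Das Haus ist wunderbar.", return_tensors="pt")["input_ids"]

# decoder_input_ids are built internally from `labels` (shift-right); passing only `input_ids`
# to T5Model should raise a clear error instead of reusing them for the decoder.
outputs = model(input_ids=input_ids, labels=labels)
print(outputs[0])  # loss
```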
09-28-2020 16:03:14
09-28-2020 16:03:14
transformers
7,425
closed
Getting Import error ImportError: cannot import name 'quantize' from 'transformers.convert_graph_to_onnx' (/opt/conda/lib/python3.7/site-packages/transformers/convert_graph_to_onnx.py)
```python
---------------------------------------------------------------------------
ImportError                               Traceback (most recent call last)
<ipython-input-19-60bf04c6de64> in <module>
----> 1 from simpletransformers.classification import MultiLabelClassificationModel
      2
      3
      4 model = MultiLabelClassificationModel('roberta', 'roberta-base', num_labels=6, args={'train_batch_size':2, 'gradient_accumulation_steps':16, 'learning_rate': 3e-5, 'num_train_epochs': 3, 'max_seq_length': 512})

/opt/conda/lib/python3.7/site-packages/simpletransformers/classification/__init__.py in <module>
----> 1 from simpletransformers.classification.classification_model import ClassificationModel
      2 from simpletransformers.classification.multi_label_classification_model import MultiLabelClassificationModel
      3 from simpletransformers.classification.multi_modal_classification_model import MultiModalClassificationModel
      4 from simpletransformers.config.model_args import (
      5     ClassificationArgs,

/opt/conda/lib/python3.7/site-packages/simpletransformers/classification/classification_model.py in <module>
     62     get_linear_schedule_with_warmup,
     63 )
---> 64 from transformers.convert_graph_to_onnx import convert, quantize
     65
     66 from simpletransformers.classification.classification_utils import (

ImportError: cannot import name 'quantize' from 'transformers.convert_graph_to_onnx' (/opt/conda/lib/python3.7/site-packages/transformers/convert_graph_to_onnx.py)
```
09-28-2020 15:27:35
09-28-2020 15:27:35
Any Updates on the above issue ?<|||||>@mfuntowicz any updates on this ?<|||||>This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.
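A quick way to check whether the installed `transformers` actually ships the symbol that `simpletransformers` is trying to import; the traceback shows the module itself is found, only `quantize` is missing, which points to a version mismatch between the two packages:

```python
import transformers
from transformers import convert_graph_to_onnx

print(transformers.__version__)
print(hasattr(convert_graph_to_onnx, "quantize"), convert_graph_to_onnx.__file__)
```

If this prints `False`, the assumed fix is to upgrade `transformers` (or pin `simpletransformers` to a release that matches the installed `transformers` version).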
transformers
7,424
closed
[draft] codecov no comment
09-28-2020 14:03:33
09-28-2020 14:03:33
transformers
7,423
closed
Reorganize documentation navbar
With the library containing so many models now, the documentation navigation bar was starting to get unreadable, especially if someone was looking for a specific model. To make this cleaner I: - removed the PACKAGE REFERENCE caption (not happy about this but there is no way to have a caption with empty content :-( ) - added three captions to separate the documentation between main classes, models and internals - sorted the sections alphabetically Also, made the background color of the section headers a bit darker to make the distinction with the rest of the toc clearer. The result can be checked [here](https://92916-155220641-gh.circle-artifacts.com/0/docs/_build/html/index.html).
09-28-2020 13:53:35
09-28-2020 13:53:35
Cool, thanks @sgugger
transformers
7,422
closed
Custom TF weights loading
This PR provides a custom weight loading function to account for dynamic model architecture building. More precisely, the new loading function takes into account the `authorized_unexpected_keys` and `authorized_missing_keys` class attributes, making it possible to ignore some layers in the models.
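Illustratively, the idea is that a model class can now declare which weight names the loader may silently skip. The attribute names are taken from this PR; the class name and the patterns below are made-up examples, not real layer names:

```python
from transformers import TFPreTrainedModel

class TFMyTaskModel(TFPreTrainedModel):  # hypothetical subclass, for illustration only
    # weights present in the checkpoint but absent from this architecture are ignored
    authorized_unexpected_keys = [r"mlm___cls", r"nsp___cls"]
    # weights expected by this architecture but absent from the checkpoint won't warn
    authorized_missing_keys = [r"dropout", r"classifier"]
```

The custom loading function then filters the missing/unexpected keys against these patterns instead of logging spurious warnings.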
09-28-2020 09:44:42
09-28-2020 09:44:42
# [Codecov](https://codecov.io/gh/huggingface/transformers/pull/7422?src=pr&el=h1) Report > Merging [#7422](https://codecov.io/gh/huggingface/transformers/pull/7422?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/95f792afb0f0ce5a7b4f0e8df108b10157a69134?el=desc) will **decrease** coverage by `0.21%`. > The diff coverage is `87.50%`. [![Impacted file tree graph](https://codecov.io/gh/huggingface/transformers/pull/7422/graphs/tree.svg?width=650&height=150&src=pr&token=9qOlN6Hb1c)](https://codecov.io/gh/huggingface/transformers/pull/7422?src=pr&el=tree) ```diff @@ Coverage Diff @@ ## master #7422 +/- ## ========================================== - Coverage 78.51% 78.30% -0.22% ========================================== Files 184 181 -3 Lines 36734 35917 -817 ========================================== - Hits 28843 28125 -718 + Misses 7891 7792 -99 ``` | [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/7422?src=pr&el=tree) | Coverage Δ | | |---|---|---| | [src/transformers/modeling\_tf\_pytorch\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/7422/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9weXRvcmNoX3V0aWxzLnB5) | `88.37% <50.00%> (-1.94%)` | :arrow_down: | | [src/transformers/modeling\_tf\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/7422/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl91dGlscy5weQ==) | `88.06% <92.30%> (+0.84%)` | :arrow_up: | | [src/transformers/modeling\_tf\_bert.py](https://codecov.io/gh/huggingface/transformers/pull/7422/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9iZXJ0LnB5) | `98.91% <100.00%> (+<0.01%)` | :arrow_up: | | [src/transformers/modeling\_tf\_mobilebert.py](https://codecov.io/gh/huggingface/transformers/pull/7422/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9tb2JpbGViZXJ0LnB5) | `24.59% <0.00%> (-72.35%)` | :arrow_down: | | [src/transformers/configuration\_mobilebert.py](https://codecov.io/gh/huggingface/transformers/pull/7422/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9jb25maWd1cmF0aW9uX21vYmlsZWJlcnQucHk=) | `26.47% <0.00%> (-70.59%)` | :arrow_down: | | [src/transformers/modeling\_mobilebert.py](https://codecov.io/gh/huggingface/transformers/pull/7422/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19tb2JpbGViZXJ0LnB5) | `23.51% <0.00%> (-65.93%)` | :arrow_down: | | [src/transformers/modeling\_tf\_xlm.py](https://codecov.io/gh/huggingface/transformers/pull/7422/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl94bG0ucHk=) | `58.52% <0.00%> (-34.74%)` | :arrow_down: | | [src/transformers/modeling\_lxmert.py](https://codecov.io/gh/huggingface/transformers/pull/7422/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19seG1lcnQucHk=) | `69.91% <0.00%> (-20.82%)` | :arrow_down: | | [src/transformers/trainer\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/7422/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90cmFpbmVyX3V0aWxzLnB5) | `63.30% <0.00%> (-5.35%)` | :arrow_down: | | [src/transformers/trainer.py](https://codecov.io/gh/huggingface/transformers/pull/7422/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90cmFpbmVyLnB5) | `63.23% <0.00%> (-1.59%)` | :arrow_down: | | ... and [24 more](https://codecov.io/gh/huggingface/transformers/pull/7422/diff?src=pr&el=tree-more) | | ------ [Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/7422?src=pr&el=continue). 
> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta) > `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data` > Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/7422?src=pr&el=footer). Last update [95f792a...6f52cc9](https://codecov.io/gh/huggingface/transformers/pull/7422?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments). <|||||>Just merged your suggestions :)<|||||>@patrickvonplaten I have done some updates, let me know if it looks like what you have in mind.<|||||>There is an issue with Longformer apparently.<|||||>Ok, I found why, and I should have thought about this much before.... 😣 We cannot have `None` into a tuple, the logic works only when the `return_dict` is True.<|||||>@LysandreJik are we able to merge?<|||||>Good to merge for me<|||||>Ran the slow tests, they pass.
transformers
7,421
closed
"Sequence Classification with IMDb Reviews " error, when using "bert-base-multilingual-cased" model.
## Environment info <!-- You can run the command `transformers-cli env` and copy-and-paste its output below. Don't forget to fill out the missing fields in that output! --> - `transformers` version: 3.1.0 - Platform:macos - Python version:3.6 - PyTorch version (GPU?):CPU - Tensorflow version (GPU?):CPU - Using GPU in script?:No - Using distributed or parallel set-up in script?:No ### Who can help <!-- Your issue will be replied to more quickly if you can figure out the right person to tag with @ If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**. Please tag fewer than 3 people. albert, bert, GPT2, XLM: @LysandreJik tokenizers: @mfuntowicz Trainer: @sgugger Speed and Memory Benchmarks: @patrickvonplaten Model Cards: @julien-c Translation: @sshleifer Summarization: @sshleifer TextGeneration: @TevenLeScao examples/distillation: @VictorSanh nlp datasets: [different repo](https://github.com/huggingface/nlp) rust tokenizers: [different repo](https://github.com/huggingface/tokenizers) Text Generation: @TevenLeScao blenderbot: @mariamabarham Bart: @sshleifer Marian: @sshleifer T5: @patrickvonplaten Longformer/Reformer: @patrickvonplaten TransfoXL/XLNet: @TevenLeScao examples/seq2seq: @sshleifer examples/bert-loses-patience: @JetRunner tensorflow: @jplu examples/token-classification: @stefan-it documentation: @sgugger --> ## Information Model I am using (Bert, XLNet ...): The problem arises when using: * [+] the official example scripts: (give details below) * [+] my own modified scripts: (give details below) The tasks I am working on is: * [ ] an official GLUE/SQUaD task: (give the name) * [ ] my own task or dataset: (give details below) ## To reproduce Steps to reproduce the behavior: 1. reference the code [https://huggingface.co/transformers/custom_datasets.html#seq-imdb](url) 2. 
modify the code ``` # coding:utf-8 """ """ from pathlib import Path from sklearn.model_selection import train_test_split from transformers import DistilBertTokenizerFast import torch from transformers import Trainer, TrainingArguments from nlp import load_dataset from transformers import AutoTokenizer, AutoModelWithLMHead tokenizer = AutoTokenizer.from_pretrained("bert-base-multilingual-cased") model = AutoModelWithLMHead.from_pretrained("bert-base-multilingual-cased") def read_imdb_split(split_dir): split_dir = Path(split_dir) texts = [] labels = [] for label_dir in ["pos", "neg"]: for text_file in (split_dir/label_dir).iterdir(): texts.append(text_file.read_text()) labels.append(0 if label_dir is "neg" else 1) return texts, labels train_texts, train_labels = read_imdb_split('dataset/aclImdb/train') test_texts, test_labels = read_imdb_split('dataset/aclImdb/test') train_texts, val_texts, train_labels, val_labels = train_test_split(train_texts, train_labels, test_size=.2) train_encodings = tokenizer(train_texts, truncation=True, padding=True, max_length=100) val_encodings = tokenizer(val_texts, truncation=True, padding=True, max_length=100) test_encodings = tokenizer(test_texts, truncation=True, padding=True, max_length=100) class IMDbDataset(torch.utils.data.Dataset): def __init__(self, encodings, labels): self.encodings = encodings self.labels = labels def __getitem__(self, idx): item = {key: torch.tensor(val[idx]) for key, val in self.encodings.items()} item['labels'] = torch.tensor(self.labels[idx]) return item def __len__(self): return len(self.labels) train_dataset = IMDbDataset(train_encodings, train_labels) val_dataset = IMDbDataset(val_encodings, val_labels) test_dataset = IMDbDataset(test_encodings, test_labels) training_args = TrainingArguments( output_dir='./results', num_train_epochs=1, per_device_train_batch_size=16, per_device_eval_batch_size=64, warmup_steps=500, weight_decay=0.01, evaluate_during_training=True, logging_dir='./logs', ) trainer = Trainer( model=model, args=training_args, train_dataset=train_dataset, eval_dataset=test_dataset ) trainer.train() ``` 3. the error info ``` ValueError: Expected input batch_size (1600) to match target batch_size (16). ``` <!-- If you have code snippets, error messages, stack traces please provide them here as well. Important! Use code tags to correctly format your code. See https://help.github.com/en/github/writing-on-github/creating-and-highlighting-code-blocks#syntax-highlighting Do not use screenshots, as they are hard to read and (more importantly) don't allow others to copy-and-paste your code.--> ## Expected behavior <!-- A clear and concise description of what you would expect to happen. -->
09-28-2020 09:29:48
09-28-2020 09:29:48
It looks like you are using a model for language modeling (`AutoModelWithLMHead`) instead of a model for sequence classification (`AutoModelForSequenceClassification`) which is why you have that shape error.<|||||>@sgugger thank you! I modify my code, then everything well. ``` # coding:utf-8 """ """ from pathlib import Path from sklearn.model_selection import train_test_split import torch from transformers import Trainer, TrainingArguments from nlp import load_dataset from transformers import AutoTokenizer, AutoModelForSequenceClassification tokenizer = AutoTokenizer.from_pretrained("model/bert-base-multilingual-cased") model = AutoModelForSequenceClassification.from_pretrained("model/bert-base-multilingual-cased") def read_imdb_split(split_dir): split_dir = Path(split_dir) texts = [] labels = [] for label_dir in ["pos", "neg"]: for text_file in (split_dir/label_dir).iterdir(): texts.append(text_file.read_text()) labels.append(0 if label_dir is "neg" else 1) return texts, labels train_texts, train_labels = read_imdb_split('dataset/ChnSentiCorp') test_texts, test_labels = read_imdb_split('dataset/ChnSentiCorp') train_texts, val_texts, train_labels, val_labels = train_test_split(train_texts, train_labels, test_size=.2) train_encodings = tokenizer(train_texts, truncation=True, padding=True, max_length=100, verbose=False) val_encodings = tokenizer(val_texts, truncation=True, padding=True, max_length=100, verbose=False) test_encodings = tokenizer(test_texts, truncation=True, padding=True, max_length=100, verbose=False) class IMDbDataset(torch.utils.data.Dataset): def __init__(self, encodings, labels): self.encodings = encodings self.labels = labels def __getitem__(self, idx): item = {key: torch.tensor(val[idx]) for key, val in self.encodings.items()} item['labels'] = torch.tensor(self.labels[idx]) return item def __len__(self): return len(self.labels) train_dataset = IMDbDataset(train_encodings, train_labels) val_dataset = IMDbDataset(val_encodings, val_labels) test_dataset = IMDbDataset(test_encodings, test_labels) training_args = TrainingArguments( output_dir='./results', num_train_epochs=16, per_device_train_batch_size=1, per_device_eval_batch_size=1, warmup_steps=500, weight_decay=0.01, evaluate_during_training=True, logging_dir='./logs', ) trainer = Trainer( model=model, args=training_args, train_dataset=train_dataset, eval_dataset=test_dataset ) trainer.train() ``` ``` Epoch: 0%| | 0/16 [00:00<?, ?it/s] Iteration: 2%|█▏ | 65/3200 [03:04<2:26:14, 2.80s/it] ```
transformers
7,420
closed
[RAG] Model cards - clean cards
Clean the four model cards
09-28-2020 09:08:17
09-28-2020 09:08:17
transformers
7,419
closed
Cannot reproduce the token classification example on the GermEval 2014 (German NER) dataset
## Environment info <!-- You can run the command `transformers-cli env` and copy-and-paste its output below. Don't forget to fill out the missing fields in that output! --> - `transformers` version: 3.2.0 - Platform: Linux-4.4.0-131-generic-x86_64-with-Ubuntu-18.04-bionic - Python version: 3.6.8 - PyTorch version (GPU?): 1.3.1 (True) - Tensorflow version (GPU?): 2.2.0 (False) - Using GPU in script?: True - Using distributed or parallel set-up in script?: False ### Who can help @stefan-it Please help. <!-- Your issue will be replied to more quickly if you can figure out the right person to tag with @ If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**. Please tag fewer than 3 people. albert, bert, GPT2, XLM: @LysandreJik tokenizers: @mfuntowicz Trainer: @sgugger Speed and Memory Benchmarks: @patrickvonplaten Model Cards: @julien-c Translation: @sshleifer Summarization: @sshleifer TextGeneration: @TevenLeScao examples/distillation: @VictorSanh nlp datasets: [different repo](https://github.com/huggingface/nlp) rust tokenizers: [different repo](https://github.com/huggingface/tokenizers) Text Generation: @TevenLeScao blenderbot: @mariamabarham Bart: @sshleifer Marian: @sshleifer T5: @patrickvonplaten Longformer/Reformer: @patrickvonplaten TransfoXL/XLNet: @TevenLeScao examples/seq2seq: @sshleifer examples/bert-loses-patience: @JetRunner tensorflow: @jplu examples/token-classification: @stefan-it documentation: @sgugger --> ## Information Model I am using (Bert, XLNet ...): bert-base-multilingual-cased The problem arises when using: * [x] the official example scripts: (give details below) I am running the pytorch version: transformers/examples/token-classification/run_ner.py The tasks I am working on is: * [x] an official GLUE/SQUaD task: (give the name) GermEval 2014 (German NER) dataset ## To reproduce Steps to reproduce the behavior: 1. download dataset: https://drive.google.com/drive/folders/1kC0I2UGl2ltrluI9NqDjaQJGw5iliw_J?usp=sharing 2. Because our accese for training model can not access Internet, I download the pretrained model here: [https://huggingface.co/bert-base-multilingual-cased](https://huggingface.co/bert-base-multilingual-cased), and put all the downloaded files at transformers/examples/token-classification/bert-base-multilingual-cased 3. ``` cat NER-de-train.tsv | grep -v "^#" | cut -f 2,3 | tr '\t' ' ' > train.txt.tmp cat NER-de-dev.tsv | grep -v "^#" | cut -f 2,3 | tr '\t' ' ' > dev.txt.tmp cat NER-de-test.tsv | grep -v "^#" | cut -f 2,3 | tr '\t' ' ' > test.txt.tmp export MAX_LENGTH=128 export BERT_MODEL=./bert-base-multilingual-cased python3 scripts/preprocess.py train.txt.tmp $BERT_MODEL $MAX_LENGTH > train.txt python3 scripts/preprocess.py dev.txt.tmp $BERT_MODEL $MAX_LENGTH > dev.txt python3 scripts/preprocess.py test.txt.tmp $BERT_MODEL $MAX_LENGTH > test.txt cat train.txt dev.txt test.txt | cut -d " " -f 2 | grep -v "^$"| sort | uniq > labels.txt export OUTPUT_DIR=germeval-model export BATCH_SIZE=32 export NUM_EPOCHS=3 export SAVE_STEPS=750 export SEED=1 python3 run_ner.py --data_dir ./ \ --labels ./labels.txt \ --model_name_or_path $BERT_MODEL \ --output_dir $OUTPUT_DIR \ --max_seq_length $MAX_LENGTH \ --num_train_epochs $NUM_EPOCHS \ --per_device_train_batch_size $BATCH_SIZE \ --save_steps $SAVE_STEPS \ --seed $SEED \ --do_train \ --do_eval \ --do_predict ``` <!-- If you have code snippets, error messages, stack traces please provide them here as well. Important! Use code tags to correctly format your code. 
See https://help.github.com/en/github/writing-on-github/creating-and-highlighting-code-blocks#syntax-highlighting Do not use screenshots, as they are hard to read and (more importantly) don't allow others to copy-and-paste your code.--> ## Expected behavior The F1 score on evaluation and test should be `0.8784592370979806` and `0.8624150210424085`, as the README states. However, by running the script above on one V100 GPU, I get `0.83919` on evaluation and `0.81673` on test, much lower than expected. <!-- A clear and concise description of what you would expect to happen. -->
09-28-2020 06:43:00
09-28-2020 06:43:00
I found that after deleting the cache, the results can be reproduced. I guess that is because in my first attempt I used the wrong argument settings and something was cached. Then, although I fixed the settings later, the code always loaded from the stale cache.
transformers
7,418
closed
Blenderbot
Continued from https://github.com/huggingface/transformers/pull/4803 Co-authored by @mariamabarham New models: `facebook/blenderbot-3B` and `facebook/blenderbot-90M`. They produce similar, but not always identical outputs to their facebook counterparts, with the differences due to length penalty implementations. They are identical to bart, besides one layernorm change for the blenderbot 90M checkpoint ``` if self.do_blenderbot_90_layernorm: x = self.layernorm_embedding(x) x += positions else: x += positions x = self.layernorm_embedding(x) ``` I also wrote a [gist](https://gist.github.com/sshleifer/cb245b8739420724a32fc0c22344aee0) explaining the various layernorm sequences. Will update it once this is finalized. The blenderbot 3b tests can run on 1 GPU, but are ridiculously slow on CPU. Additionally, `test_feedforward_chunking` and `test_model_outputs_equivalence` were flaky locally, and are currently skipped. #### Done [x] forward pass in one file [x] passing integration tests #### TODO: - [ ] `blenderbot.rst` - [ ] model cards ### Ways to avoid new if statement - Don't port bbot-90m. - separate `Blenderbot90Model`. - There are also solutions where we parametrize out EncoderLayer/DecoderLayer, but these seem more confusing/harder to understand/less consistent to me.
09-28-2020 05:31:29
09-28-2020 05:31:29
I don't really understand why we don't completely separate Blenderbot from Bart here. I thought we kind of agreed on not adding any if statements to existing models (also if it's only one) to make them work with new models. With @sgugger's recent PRs that completely separate model files from each other (Roberta, Longformer, Electra from BERT), I don't see why we would not do the same here?<|||||># [Codecov](https://codecov.io/gh/huggingface/transformers/pull/7418?src=pr&el=h1) Report > Merging [#7418](https://codecov.io/gh/huggingface/transformers/pull/7418?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/7296fea1d689f47de69fd45e438e42d65ca5a393?el=desc) will **increase** coverage by `1.89%`. > The diff coverage is `96.66%`. [![Impacted file tree graph](https://codecov.io/gh/huggingface/transformers/pull/7418/graphs/tree.svg?width=650&height=150&src=pr&token=9qOlN6Hb1c)](https://codecov.io/gh/huggingface/transformers/pull/7418?src=pr&el=tree) ```diff @@ Coverage Diff @@ ## master #7418 +/- ## ========================================== + Coverage 76.45% 78.35% +1.89% ========================================== Files 181 184 +3 Lines 35781 35928 +147 ========================================== + Hits 27355 28150 +795 + Misses 8426 7778 -648 ``` | [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/7418?src=pr&el=tree) | Coverage Δ | | |---|---|---| | [src/transformers/tokenization\_blenderbot.py](https://codecov.io/gh/huggingface/transformers/pull/7418/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fYmxlbmRlcmJvdC5weQ==) | `95.83% <95.83%> (ø)` | | | [src/transformers/\_\_init\_\_.py](https://codecov.io/gh/huggingface/transformers/pull/7418/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9fX2luaXRfXy5weQ==) | `99.39% <100.00%> (+0.01%)` | :arrow_up: | | [src/transformers/configuration\_auto.py](https://codecov.io/gh/huggingface/transformers/pull/7418/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9jb25maWd1cmF0aW9uX2F1dG8ucHk=) | `96.34% <100.00%> (+0.04%)` | :arrow_up: | | [src/transformers/configuration\_bart.py](https://codecov.io/gh/huggingface/transformers/pull/7418/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9jb25maWd1cmF0aW9uX2JhcnQucHk=) | `94.11% <100.00%> (+0.11%)` | :arrow_up: | | [src/transformers/configuration\_blenderbot.py](https://codecov.io/gh/huggingface/transformers/pull/7418/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9jb25maWd1cmF0aW9uX2JsZW5kZXJib3QucHk=) | `100.00% <100.00%> (ø)` | | | [src/transformers/modeling\_auto.py](https://codecov.io/gh/huggingface/transformers/pull/7418/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19hdXRvLnB5) | `86.08% <100.00%> (-0.97%)` | :arrow_down: | | [src/transformers/modeling\_bart.py](https://codecov.io/gh/huggingface/transformers/pull/7418/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19iYXJ0LnB5) | `94.98% <100.00%> (+0.87%)` | :arrow_up: | | [src/transformers/modeling\_blenderbot.py](https://codecov.io/gh/huggingface/transformers/pull/7418/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19ibGVuZGVyYm90LnB5) | `100.00% <100.00%> (ø)` | | | [src/transformers/modeling\_mbart.py](https://codecov.io/gh/huggingface/transformers/pull/7418/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19tYmFydC5weQ==) | `100.00% <100.00%> (ø)` | | | [src/transformers/tokenization\_auto.py](https://codecov.io/gh/huggingface/transformers/pull/7418/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fYXV0by5weQ==) | `92.64% 
<100.00%> (+0.10%)` | :arrow_up: | | ... and [23 more](https://codecov.io/gh/huggingface/transformers/pull/7418/diff?src=pr&el=tree-more) | | ------ [Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/7418?src=pr&el=continue). > **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta) > `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data` > Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/7418?src=pr&el=footer). Last update [7296fea...978290a](https://codecov.io/gh/huggingface/transformers/pull/7418?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments). <|||||>Spent 2H getting Blender+Bart docs more consistent. `blenderbot.rst` just points there instead of repeating. ![image](https://user-images.githubusercontent.com/6045025/94772210-23249380-0387-11eb-964a-a4d0337eae99.png) <|||||>Thanks for all the help @sgugger and @LysandreJik and sorry for being difficult.<|||||>Thanks for cleaning the docstrings and making all nice and shiny to go with the rest of the docs! Hopefully it's going to be easier now that the templates have been updated.<|||||><img src="https://media.giphy.com/media/osjgQPWRx3cac/giphy.gif"/><|||||>@sshleifer @stephenroller is there a particular reason why the 9.4B one wasn't ported over? I know it was mentioned in the paper that the 9.4B wasn't statistically any better than the 2.7B one in human evaluations, but it'd still be a useful release IMO.<|||||>I think it was just a matter of prioritization? I didn't directly work on it. I can only speak for myself, not HF, but I would welcome a PR adding support for the 9.4. It should only be a configuration change compared the 2.7, I would think.<|||||>@stephenroller I see, can you point me to the 9.4B model artifact? I'll see if I can load that using the HF class with some tweaks.<|||||>If you manage to load it, we'd love to host it on https://huggingface.co/facebook cc @patrickvonplaten and others<|||||>Think @patil-suraj is working on it :-)<|||||>Hey @patil-suraj are you actively working on this and have an ETA? I want to start playing with the 9.4B within HF asap.<|||||>The files model files are in this tarfile: https://dl.fbaipublicfiles.com/parlai/_models/blender/BST9B.tgz Compared to the 2.7B, the following hyperparameters are expected to change (2.7B setting -> 9.4B setting): - embedding size: 2560 -> 4096 - hidden state size (ffn size): 10240 -> 16384 - number of encoder layers: 2 -> 4 - number of decoder layers: 24 -> 32 - number of heads: 32 -> 32 (unchanged, but one might expect it to) The dictionary and formatting is _exactly_ the same as the 2.7B model.<|||||>Hi @g-karthik , I'm working on it, it should be on hub by the end of next week<|||||>@patil-suraj can you please share the PR so I can take a look? I do not see the model on the model hub.
transformers
7,417
closed
Add adapter support
# 🚀 Feature request Add [adapter](https://arxiv.org/abs/1902.00751) support to transformers. ## Motivation Adapters are great time-and-memory-savers for multitask use cases and would be a great addition to this library. Some very kind folks added support for them ([AdapterHub](https://adapterhub.ml/)) on top of the transformers library, but unfortunately, in order to use it, one needs to use their [fork](https://github.com/Adapter-Hub/adapter-transformers), which is slightly inconvenient. ## Your contribution They've done the integration already, so I hope it's straightforward. I've posted [an issue](https://github.com/Adapter-Hub/adapter-transformers/issues/65) on their end as well and would be happy to help in any way I can.
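For readers unfamiliar with the technique, a minimal bottleneck adapter in the spirit of Houlsby et al. (2019) looks roughly like this; the sizes and placement are illustrative and this is not AdapterHub's actual implementation:

```python
import torch.nn as nn

class Adapter(nn.Module):
    # Down-project, non-linearity, up-project, then add a residual connection.
    # Only these few parameters are trained per task; the backbone stays frozen,
    # which is where the time and memory savings come from.
    def __init__(self, hidden_size: int, bottleneck: int = 64):
        super().__init__()
        self.down = nn.Linear(hidden_size, bottleneck)
        self.act = nn.ReLU()
        self.up = nn.Linear(bottleneck, hidden_size)

    def forward(self, hidden_states):
        return hidden_states + self.up(self.act(self.down(hidden_states)))
```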
09-28-2020 05:21:13
09-28-2020 05:21:13
That'd be great!<|||||>This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions. <|||||>Just voicing my support for this too. Not sure yet if adapter-hub will end up having the resources to merge back into `transformers`. If they become inactive, I (and a lot of the community, I'm sure) will want to try to take up that effort.
transformers
7,416
closed
Possible error in MBart tokenization script -- target lang code is only present once in the sequence
## Environment info - `transformers` version: current - Platform: Linux-4.19.112+-x86_64-with-Ubuntu-18.04-bionic - Python version: 3.6.9 - PyTorch version (GPU?): 1.6.0+cu101 (False) - Using GPU in script?: No. - Using distributed or parallel set-up in script?: No. ### Who can help MBart: @sshleifer ## Information Model I am using is MBart. The problem arises when using: * [x ] the official example scripts: (give details below) * [ ] my own modified scripts: (give details below) ## To reproduce Steps to reproduce the behavior: ```py from transformers import MBartTokenizer tokenizer = MBartTokenizer.from_pretrained('facebook/mbart-large-en-ro') example_english_phrase = " UN Chief Says There Is No Military Solution in Syria" expected_translation_romanian = "Şeful ONU declară că nu există o soluţie militară în Siria" batch: dict = tokenizer.prepare_seq2seq_batch( example_english_phrase, src_lang="en_XX", tgt_lang="ro_RO", tgt_texts=expected_translation_romanian ) ``` ``` -snip- 'labels': tensor([[ 47711, 7844, 127666, 8, 18347, 18147, 1362, 315, 42071, 36, 31563, 8454, 33796, 451, 346, 125577, 2, 250020]])} ``` The target language code is only present once in the target sequence. `print(tokenizer.lang_code_to_id["ro_RO"])` `250020` ## Expected behavior ``` 'labels': tensor([[ 250020, 47711, 7844, 127666, 8, 18347, 18147, 1362, 315, 42071, 36, 31563, 8454, 33796, 451, 346, 125577, 2, 250020]])} ``` Here, the target language code is first and last, as I believe MBart (https://arxiv.org/pdf/2001.08210.pdf, top of page 3) says. MBart Excerpt: ``` For each instance of a batch we sample a language id symbol <LID> ... sentences in the instance are separated by the end of sentence (</S>) token. Then, we append the selected<LID> ``` Here is the code I believe is wrong: ```py def set_tgt_lang_special_tokens(self, lang: str) -> None: """Reset the special tokens to the target language setting. Prefix [tgt_lang_code], suffix =[eos].""" self.cur_lang_code = self.lang_code_to_id[lang] self.prefix_tokens = [] self.suffix_tokens = [self.eos_token_id, self.cur_lang_code] ``` To me, the comment implies the language code should be first as well. I tested it locally, and merely adding `self.cur_lang_code` to `self.prefix_tokens` resolves the issue. I do not know if I am misunderstanding the purpose of this script or misuing it. My above code is copied from the "MBartTokenizer" example at https://huggingface.co/transformers/master/model_doc/mbart.html#overview If I didn't make a mistake, I'd be more than happy to open a PR to change that one lines and fix it.
09-28-2020 00:57:41
09-28-2020 00:57:41
Note: I did find (https://github.com/pytorch/fairseq/issues/2258), a related issue. As far as I can tell, the behavior there (attempting to zero-shot translate without the model having translated before, and merely getting the input as output regardless of language ID in target), is expected behavior (some fine-tuning is required on at least one language pair). I believe, for the target, `lang_code, text, <\s>, lang_code` is correct, and matches the paper. <|||||>I've spent a fair amount of time on the `mBart` tokenization. It's very complicated. I ran the finetuning command documented in the README [here](https://github.com/pytorch/fairseq/tree/master/examples/mbart#finetune-on-en-ro) and set a breakpoint and looked at the various tensors: https://gist.github.com/sshleifer/cba08bc2109361a74ac3760a7e30e4f4 What you can clean from that is our `labels` match fairseq `samples['target']`. ```python sample['target'][0] tensor([ 9345, 202, 10, 181684, 36, 21635, 8454, 48993, 45587, 21, 57476, 1283, 98748, 451, 346, 8916, 202, 28, 9, 7, 451, 11650, 128402, 5, 2, 250020], device='cuda:0') ``` So huggingface batches match fairseq code, but not the paper. This seems to improve translation finetuning and inference accuracy.<|||||>> I've spent a fair amount of time on the `mBart` tokenization. It's very complicated. > I ran the finetuning command documented in the README [here](https://github.com/pytorch/fairseq/tree/master/examples/mbart#finetune-on-en-ro) and set a breakpoint and looked at the various tensors: https://gist.github.com/sshleifer/cba08bc2109361a74ac3760a7e30e4f4 > > What you can clean from that is our `labels` match fairseq `samples['target']`. > > ```python > sample['target'][0] > tensor([ 9345, 202, 10, 181684, 36, 21635, 8454, 48993, 45587, > 21, 57476, 1283, 98748, 451, 346, 8916, 202, 28, > 9, 7, 451, 11650, 128402, 5, 2, 250020], > device='cuda:0') > ``` > > So huggingface batches match fairseq code, but not the paper. This seems to improve translation finetuning and inference accuracy. Is there any chance to update the documentation, indicating this discrepancy? I am using this in a multilingual translation setting -- having the lang_code only last, not first and last, means the model is not told what language to translate to. This is not an issue in bilingual settings. I expected the tokenization to match the paper and the documentation. ``` The source text format is X [eos, src_lang_code] where X is the source text. The target text format is `[tgt_lang_code] X [eos]` ``` Like before, if I'm not misunderstanding something, I'd be willing to open a PR for this. Thanks for the quick response.<|||||>It's also worth noting that if there is no dedicated BOS (like MBart), then during inference, you have no natural way to tell the decoder to start generating during inference -- the model never has predicted the first token of a sequence. The example at https://huggingface.co/transformers/master/model_doc/mbart.html#overview prepends the language code during inference, but if that is not done during training as well, this causes domain shift. Unless the decoder (or something else) is editing targets behind-the-scenes (beyond "shifting" indexes one during training), I believe the current method of preparing batches is introducing domain shift.<|||||>You are missing the distinction between `decoder_input_ids` and `labels` I think. For `mbart-large-en-ro` we have `decoder_start_token_id=250020` for this reason. 
Then in [finetune.py](https://github.com/huggingface/transformers/blob/master/examples/seq2seq/finetune.py#L147): ```python decoder_input_ids = shift_tokens_right(tgt_ids, pad_token_id) outputs = self(src_ids, attention_mask=src_mask, decoder_input_ids=decoder_input_ids, use_cache=False) ``` `shift_tokens_right` moves the language code to the 0th column of `decoder_input_ids`. You can also read [this](https://github.com/huggingface/transformers/issues/6156#issuecomment-678537995) which is related. I would definitely welcome a contribution to the docs that explained this clearly!
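For readers following along, this is roughly what `shift_tokens_right` does (a simplified sketch of the version in `modeling_bart.py` at the time; check the source for the exact implementation):

```python
import torch

def shift_tokens_right(input_ids: torch.Tensor, pad_token_id: int) -> torch.Tensor:
    # Wrap the last non-pad token (for mBART labels this is the language code)
    # around to position 0 and shift everything else one position to the right.
    prev_output_tokens = input_ids.clone()
    index_of_eos = (input_ids.ne(pad_token_id).sum(dim=1) - 1).unsqueeze(-1)
    prev_output_tokens[:, 0] = input_ids.gather(1, index_of_eos).squeeze()
    prev_output_tokens[:, 1:] = input_ids[:, :-1]
    return prev_output_tokens

# labels:            [..., 125577, 2 (eos), 250020 (ro_RO)]
# decoder_input_ids: [250020 (ro_RO), ..., 125577, 2 (eos)]
```

So the decoder does see the target language code first during training, which is why generation starts from `decoder_start_token_id=250020` at inference.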
transformers
7,415
closed
Colab Pro - fine-tuning RoBERTa error: tcmalloc: large alloc 6325288960
I want to fine-tune RoBERTa on my newspaper data (around 8GB) using Colab Pro. It works fine on small data. I have given my code below. Is my code correct? **Is there any way to handle this memory problem?** The error crashes Colab. tcmalloc: large alloc 6325288960 bytes == 0x447fa000 @ 0x7f3438dcc1e7 0x59221c 0x4ca6f4 0x566daa 0x5a4df1 0x5a5eea 0x4ce082 0x566c02 0x5a4df1 0x5a60ae 0x5bd138 0x50a47f 0x50c1f4 0x507f24 0x509202 0x594b01 0x54a17f 0x5517c1 0x5a9eec 0x50a783 0x50cfd6 0x507f24 0x509c50 0x50a64d 0x50cfd6 0x509918 ``` !python "/content/transformers/examples/language-modeling/run_language_modeling.py" \ --output_dir "/content/drive/My Drive/EPU-NLP/FinetuneModel/output" \ --model_type roberta \ --model_name_or_path roberta-base \ --do_train \ --per_gpu_train_batch_size 16 \ --seed 22 \ --train_data_file "/content/drive/My Drive/EPU-NLP/FinetuneModel/data_all.txt" \ --block_size 256 \ --line_by_line \ --weight_decay 0.01 \ --adam_epsilon 1e-6 \ --save_total_limit 500 \ --learning_rate 6e-4 \ --num_train_epochs 3 \ --save_total_limit 500 \ --save_steps 500 \ --mlm ```
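The large allocation most likely comes from `--line_by_line`, which reads and tokenizes the entire training file into memory at once. One way around that is a lazy dataset that indexes byte offsets and tokenizes each line on demand — a rough, untested sketch (not what `run_language_modeling.py` does out of the box):

```python
import torch
from transformers import RobertaTokenizerFast

class LazyLineByLineDataset(torch.utils.data.Dataset):
    # Illustrative only: store byte offsets, then read and tokenize one line at a time.
    def __init__(self, file_path, tokenizer, block_size=256):
        self.file_path, self.tokenizer, self.block_size = file_path, tokenizer, block_size
        self.offsets, offset = [], 0
        with open(file_path, "rb") as f:
            for line in f:
                if line.strip():
                    self.offsets.append(offset)
                offset += len(line)

    def __len__(self):
        return len(self.offsets)

    def __getitem__(self, i):
        with open(self.file_path, "rb") as f:
            f.seek(self.offsets[i])
            line = f.readline().decode("utf-8").strip()
        ids = self.tokenizer(line, truncation=True, max_length=self.block_size)["input_ids"]
        return torch.tensor(ids, dtype=torch.long)

# tokenizer = RobertaTokenizerFast.from_pretrained("roberta-base")
# dataset = LazyLineByLineDataset("data_all.txt", tokenizer)
```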
09-27-2020 19:06:26
09-27-2020 19:06:26
Got the same with StyleGAN using Colab Pro<|||||>What is your data size? First, try it with 2GB of data. Also check again by decreasing the GPU batch size to 8 or 4.<|||||>This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions. <|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread. Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored.
transformers
7,414
closed
GPT2LMHeadModel forward input
# ❓ Questions & Help Hello, I would like to fine-tune the GPT2 model on EmpatheticDialogues doing kind of conditional generation as like in this paper: https://arxiv.org/pdf/1911.11161.pdf What concerns me is the format of the input_ids and labels in the forward function. I think that concatenating the input with the target is a good solution separating them with a special token (e.g. "hi! how are you? <endofinput> I am fine!) However I am not sure what to do with the labels. Shall I mask all the input part and the padded tokens with -100 index and leave only the target part as is? or shall I mask with -100 only the padded tokens? Thank you in advance :)
09-27-2020 16:23:59
09-27-2020 16:23:59
Hello! Your question would get more answers if you asked it over at https://discuss.huggingface.co, which are the forums for broad questions like this one. Thanks!<|||||>This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.
transformers
7,413
closed
[RAG] Clean Rag readme in examples
Improving the RAG README. Additionally, I'm adding a script that creates a standalone RAG checkpoint from a generator checkpoint and a question encoder checkpoint.
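For context, the consolidation boils down to something like the following (a rough sketch assuming the `from_pretrained_question_encoder_generator` helper; the actual script may take different arguments):

```python
from transformers import RagTokenForGeneration

model = RagTokenForGeneration.from_pretrained_question_encoder_generator(
    "facebook/dpr-question_encoder-single-nq-base",  # question encoder checkpoint
    "facebook/bart-large",                           # generator checkpoint
)
model.save_pretrained("./standalone-rag-checkpoint")
```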
09-27-2020 15:57:03
09-27-2020 15:57:03
# [Codecov](https://codecov.io/gh/huggingface/transformers/pull/7413?src=pr&el=h1) Report > Merging [#7413](https://codecov.io/gh/huggingface/transformers/pull/7413?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/e50a931c118b9f55f77a743bf703f436bf7a7c29?el=desc) will **decrease** coverage by `0.33%`. > The diff coverage is `n/a`. [![Impacted file tree graph](https://codecov.io/gh/huggingface/transformers/pull/7413/graphs/tree.svg?width=650&height=150&src=pr&token=9qOlN6Hb1c)](https://codecov.io/gh/huggingface/transformers/pull/7413?src=pr&el=tree) ```diff @@ Coverage Diff @@ ## master #7413 +/- ## ========================================== - Coverage 77.06% 76.72% -0.34% ========================================== Files 181 181 Lines 35781 35781 ========================================== - Hits 27575 27454 -121 - Misses 8206 8327 +121 ``` | [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/7413?src=pr&el=tree) | Coverage Δ | | |---|---|---| | [src/transformers/tokenization\_fsmt.py](https://codecov.io/gh/huggingface/transformers/pull/7413/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fZnNtdC5weQ==) | `20.34% <0.00%> (-74.90%)` | :arrow_down: | | [src/transformers/modeling\_tf\_xlm.py](https://codecov.io/gh/huggingface/transformers/pull/7413/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl94bG0ucHk=) | `18.73% <0.00%> (-74.53%)` | :arrow_down: | | [src/transformers/modeling\_tf\_lxmert.py](https://codecov.io/gh/huggingface/transformers/pull/7413/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9seG1lcnQucHk=) | `22.14% <0.00%> (-72.41%)` | :arrow_down: | | [src/transformers/modeling\_tf\_flaubert.py](https://codecov.io/gh/huggingface/transformers/pull/7413/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9mbGF1YmVydC5weQ==) | `21.53% <0.00%> (-68.15%)` | :arrow_down: | | [src/transformers/modeling\_marian.py](https://codecov.io/gh/huggingface/transformers/pull/7413/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19tYXJpYW4ucHk=) | `60.00% <0.00%> (-30.00%)` | :arrow_down: | | [src/transformers/activations.py](https://codecov.io/gh/huggingface/transformers/pull/7413/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9hY3RpdmF0aW9ucy5weQ==) | `79.16% <0.00%> (-4.17%)` | :arrow_down: | | [src/transformers/tokenization\_utils\_base.py](https://codecov.io/gh/huggingface/transformers/pull/7413/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fdXRpbHNfYmFzZS5weQ==) | `93.64% <0.00%> (-0.28%)` | :arrow_down: | | [src/transformers/modeling\_bart.py](https://codecov.io/gh/huggingface/transformers/pull/7413/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19iYXJ0LnB5) | `94.10% <0.00%> (-0.17%)` | :arrow_down: | | [src/transformers/modeling\_tf\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/7413/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl91dGlscy5weQ==) | `87.22% <0.00%> (+0.31%)` | :arrow_up: | | [src/transformers/tokenization\_bert.py](https://codecov.io/gh/huggingface/transformers/pull/7413/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fYmVydC5weQ==) | `92.41% <0.00%> (+0.89%)` | :arrow_up: | | ... and [7 more](https://codecov.io/gh/huggingface/transformers/pull/7413/diff?src=pr&el=tree-more) | | ------ [Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/7413?src=pr&el=continue). 
> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta) > `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data` > Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/7413?src=pr&el=footer). Last update [e50a931...e1fa8e9](https://codecov.io/gh/huggingface/transformers/pull/7413?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).
transformers
7,412
closed
Unable to load pipeline for question answering
## Environment info <!-- You can run the command `transformers-cli env` and copy-and-paste its output below. Don't forget to fill out the missing fields in that output! --> - `transformers` version: 2.11.0 - Platform: Linux-4.19.112+-x86_64-with-debian-buster-sid - Python version: 3.7.6 - PyTorch version (GPU?): 1.5.1 (False) - Tensorflow version (GPU?): 2.3.0 (False) - Using GPU in script?: <fill in> - Using distributed or parallel set-up in script?: <fill in> ### Who can help <!-- Your issue will be replied to more quickly if you can figure out the right person to tag with @ If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**. Please tag fewer than 3 people. albert, bert, GPT2, XLM: @LysandreJik tokenizers: @mfuntowicz examples/distillation: @VictorSanh documentation: @sgugger --> ## Information Model I am using pipeline for question answering: The problem arises when using: * [ ] the official example scripts: (give details below) from transformers import pipeline nlp_qa = pipeline('question-answering') ## To reproduce Steps to reproduce the behavior: 1. Ran the below snippet on kaggle ``` from transformers import pipeline nlp_qa = pipeline('question-answering') ``` ### Error message I got ``` OSError: Can't load config for 'distilbert-base-cased'. Make sure that: - 'distilbert-base-cased' is a correct model identifier listed on 'https://huggingface.co/models' - or 'distilbert-base-cased' is the correct path to a directory containing a config.json file ``` <!-- If you have code snippets, error messages, stack traces please provide them here as well. Important! Use code tags to correctly format your code. See https://help.github.com/en/github/writing-on-github/creating-and-highlighting-code-blocks#syntax-highlighting Do not use screenshots, as they are hard to read and (more importantly) don't allow others to copy-and-paste your code.--> ### Full error ``` --------------------------------------------------------------------------- OSError Traceback (most recent call last) /opt/conda/lib/python3.7/site-packages/transformers/configuration_utils.py in get_config_dict(cls, pretrained_model_name_or_path, **kwargs) 242 if resolved_config_file is None: --> 243 raise EnvironmentError 244 config_dict = cls._dict_from_json_file(resolved_config_file) OSError: During handling of the above exception, another exception occurred: OSError Traceback (most recent call last) <ipython-input-24-a978a087c38f> in <module> 1 from transformers import pipeline 2 ----> 3 nlp_qa = pipeline('question-answering') # 1st try 4 # nlp_qa = pipeline('question-answering', model=model, tokenizer = tokenizer, device=torch.cuda.current_device()) /opt/conda/lib/python3.7/site-packages/transformers/pipelines.py in pipeline(task, model, config, tokenizer, framework, **kwargs) 1787 if isinstance(tokenizer, tuple): 1788 # For tuple we have (tokenizer name, {kwargs}) -> 1789 tokenizer = AutoTokenizer.from_pretrained(tokenizer[0], **tokenizer[1]) 1790 else: 1791 tokenizer = AutoTokenizer.from_pretrained(tokenizer) /opt/conda/lib/python3.7/site-packages/transformers/tokenization_auto.py in from_pretrained(cls, pretrained_model_name_or_path, *inputs, **kwargs) 193 config = kwargs.pop("config", None) 194 if not isinstance(config, PretrainedConfig): --> 195 config = AutoConfig.from_pretrained(pretrained_model_name_or_path, **kwargs) 196 197 if "bert-base-japanese" in pretrained_model_name_or_path: /opt/conda/lib/python3.7/site-packages/transformers/configuration_auto.py in 
from_pretrained(cls, pretrained_model_name_or_path, **kwargs) 194 195 """ --> 196 config_dict, _ = PretrainedConfig.get_config_dict(pretrained_model_name_or_path, **kwargs) 197 198 if "model_type" in config_dict: /opt/conda/lib/python3.7/site-packages/transformers/configuration_utils.py in get_config_dict(cls, pretrained_model_name_or_path, **kwargs) 250 f"- or '{pretrained_model_name_or_path}' is the correct path to a directory containing a {CONFIG_NAME} file\n\n" 251 ) --> 252 raise EnvironmentError(msg) 253 254 except json.JSONDecodeError: OSError: Can't load config for 'distilbert-base-cased'. Make sure that: - 'distilbert-base-cased' is a correct model identifier listed on 'https://huggingface.co/models' - or 'distilbert-base-cased' is the correct path to a directory containing a config.json file ```
09-27-2020 14:23:49
09-27-2020 14:23:49
Hello! Do you have internet access in the environment where your script is run? Can you do the following: ```py from transformers import DistilBertModel model = DistilBertModel.from_pretrained("distilbert-base-cased") ``` ?<|||||>Sorry my bad. Internet was off. It is working fine. Thank you.
transformers
7,411
closed
Error: isTensor() INTERNAL ASSERT FAILED from traced RoBERTa model on iOS using LibTorch
I've exported RoBERTa from a traced model for running on iOS using LibTorch and I'm getting this error when running prediction in the app: `isTensor() INTERNAL ASSERT FAILED at /Users/distiller/project/aten/src/ATen/core/ivalue_inl.h:86, please report a bug to PyTorch. Expected Tensor but got Tuple (toTensor at /Users/distiller/project/aten/src/ATen/core/ivalue_inl.h:86)`. I'm using the BertTokenizer because I have a small, fixed vocabulary (not natural language), and found It easier to use this vocabulary with the Bert tokenizer (happy to be corrected on this). I can train and test the model without issue in Python. My conversion code is as follows (it's very possible I've done something wrong here!): ``` tokenizer = BertTokenizer('./data/vocab.txt') config = RobertaConfig( vocab_size=858, max_position_embeddings=258, num_attention_heads=6, num_hidden_layers=4, type_vocab_size=1, torchscript=True ) model = RobertaForMaskedLM(config=config).from_pretrained('./trained_RoBERTa') model.cpu() model.eval() example_input = torch.LongTensor(1, 256).random_(0, 857).cpu() traced_model = torch.jit.trace(model, example_input) traced_model.save('./exports/trained_RoBERTa.pt') ``` Transformers version: 3.2.0 Ubuntu 18.04 Python 3.7.2 PyTorch 1.5 Cuda 10.2 I should mention that if there's a relatively painless path to using CoreML instead of LibTorch I'd love to hear about it.
09-27-2020 00:09:37
09-27-2020 00:09:37
Ack! The error was actually in my Obj-C++ code, which had `auto outputTensor = _impl.forward({tensor}).toTensor();`... that will have to become `auto outputTuple = _impl.forward({tensor}).toTuple();`. Apologies for the spam, but hopefully this helps someone else some day. I found the hint here: https://github.com/pytorch/pytorch/issues/32039#issuecomment-573167212<|||||>This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.
transformers
7,410
closed
[s2s] rougeLSum expects \n between sentences
<!-- This line specifies which issue to close after the pull request is merged. --> Fixes #6808 Continues #7356 from @swethmandava Coauthor: @swethmandava + `add_newline_sep` kwarg controls whether to add newlines between sentences + test coverage + can pass bootstrap=False to see raw scores, make scoring deterministic. + Verified metrics improvement for bart on CNN/Dailymail, no change for XSUM
09-26-2020 22:12:55
09-26-2020 22:12:55
# [Codecov](https://codecov.io/gh/huggingface/transformers/pull/7410?src=pr&el=h1) Report > Merging [#7410](https://codecov.io/gh/huggingface/transformers/pull/7410?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/eab5f59682cf197cd5fd19d499b3670dbef67000?el=desc) will **decrease** coverage by `0.87%`. > The diff coverage is `n/a`. [![Impacted file tree graph](https://codecov.io/gh/huggingface/transformers/pull/7410/graphs/tree.svg?width=650&height=150&src=pr&token=9qOlN6Hb1c)](https://codecov.io/gh/huggingface/transformers/pull/7410?src=pr&el=tree) ```diff @@ Coverage Diff @@ ## master #7410 +/- ## ========================================== - Coverage 77.77% 76.89% -0.88% ========================================== Files 181 181 Lines 35781 35781 ========================================== - Hits 27828 27514 -314 - Misses 7953 8267 +314 ``` | [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/7410?src=pr&el=tree) | Coverage Δ | | |---|---|---| | [src/transformers/configuration\_reformer.py](https://codecov.io/gh/huggingface/transformers/pull/7410/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9jb25maWd1cmF0aW9uX3JlZm9ybWVyLnB5) | `21.62% <0.00%> (-78.38%)` | :arrow_down: | | [src/transformers/modeling\_reformer.py](https://codecov.io/gh/huggingface/transformers/pull/7410/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19yZWZvcm1lci5weQ==) | `16.71% <0.00%> (-77.89%)` | :arrow_down: | | [src/transformers/modeling\_tf\_xlm.py](https://codecov.io/gh/huggingface/transformers/pull/7410/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl94bG0ucHk=) | `18.73% <0.00%> (-74.53%)` | :arrow_down: | | [src/transformers/modeling\_tf\_lxmert.py](https://codecov.io/gh/huggingface/transformers/pull/7410/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9seG1lcnQucHk=) | `22.14% <0.00%> (-72.41%)` | :arrow_down: | | [src/transformers/modeling\_tf\_flaubert.py](https://codecov.io/gh/huggingface/transformers/pull/7410/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9mbGF1YmVydC5weQ==) | `21.53% <0.00%> (-68.15%)` | :arrow_down: | | [src/transformers/tokenization\_funnel.py](https://codecov.io/gh/huggingface/transformers/pull/7410/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fZnVubmVsLnB5) | `62.79% <0.00%> (-34.89%)` | :arrow_down: | | [src/transformers/modeling\_auto.py](https://codecov.io/gh/huggingface/transformers/pull/7410/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19hdXRvLnB5) | `87.04% <0.00%> (+1.03%)` | :arrow_up: | | [src/transformers/modeling\_openai.py](https://codecov.io/gh/huggingface/transformers/pull/7410/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19vcGVuYWkucHk=) | `82.25% <0.00%> (+1.29%)` | :arrow_up: | | [src/transformers/configuration\_rag.py](https://codecov.io/gh/huggingface/transformers/pull/7410/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9jb25maWd1cmF0aW9uX3JhZy5weQ==) | `100.00% <0.00%> (+2.22%)` | :arrow_up: | | [src/transformers/modeling\_ctrl.py](https://codecov.io/gh/huggingface/transformers/pull/7410/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19jdHJsLnB5) | `97.58% <0.00%> (+2.41%)` | :arrow_up: | | ... and [8 more](https://codecov.io/gh/huggingface/transformers/pull/7410/diff?src=pr&el=tree-more) | | ------ [Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/7410?src=pr&el=continue). 
> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta) > `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data` > Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/7410?src=pr&el=footer). Last update [eab5f59...ee83da0](https://codecov.io/gh/huggingface/transformers/pull/7410?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).
transformers
7,409
closed
[T5] allow config.decoder_layers to control decoder size
<!-- This line specifies which issue to close after the pull request is merged. --> #### Problem arxiv.org/abs/2006.10369, among others, shows that models with fewer decoder layers than encoder layers can perform well and run generation much faster. Right now it is difficult to do distillation on t5 because there is only `T5Config.num_layers` which controls encoder layers and decoder layers. #### Solution - add `config.decoder_layers` to control decoder num layers - maintain 100% backwards compatibility by defaulting `config.decoder_layers = num_layers` - add tests - 4 line PR besides tests+ docs :) ### Testing - slow t5 tests pass
09-26-2020 18:43:53
09-26-2020 18:43:53
# [Codecov](https://codecov.io/gh/huggingface/transformers/pull/7409?src=pr&el=h1) Report > Merging [#7409](https://codecov.io/gh/huggingface/transformers/pull/7409?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/2c8ecdf8a87019c438262d8c692e1bdffe05149f?el=desc) will **decrease** coverage by `0.73%`. > The diff coverage is `98.05%`. [![Impacted file tree graph](https://codecov.io/gh/huggingface/transformers/pull/7409/graphs/tree.svg?width=650&height=150&src=pr&token=9qOlN6Hb1c)](https://codecov.io/gh/huggingface/transformers/pull/7409?src=pr&el=tree) ```diff @@ Coverage Diff @@ ## master #7409 +/- ## ========================================== - Coverage 77.58% 76.85% -0.74% ========================================== Files 181 181 Lines 35725 35784 +59 ========================================== - Hits 27719 27501 -218 - Misses 8006 8283 +277 ``` | [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/7409?src=pr&el=tree) | Coverage Δ | | |---|---|---| | [src/transformers/modeling\_longformer.py](https://codecov.io/gh/huggingface/transformers/pull/7409/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19sb25nZm9ybWVyLnB5) | `74.14% <81.81%> (ø)` | | | [src/transformers/configuration\_longformer.py](https://codecov.io/gh/huggingface/transformers/pull/7409/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9jb25maWd1cmF0aW9uX2xvbmdmb3JtZXIucHk=) | `100.00% <100.00%> (ø)` | | | [src/transformers/configuration\_t5.py](https://codecov.io/gh/huggingface/transformers/pull/7409/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9jb25maWd1cmF0aW9uX3Q1LnB5) | `96.55% <100.00%> (+0.12%)` | :arrow_up: | | [src/transformers/configuration\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/7409/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9jb25maWd1cmF0aW9uX3V0aWxzLnB5) | `97.35% <100.00%> (+0.68%)` | :arrow_up: | | [src/transformers/modeling\_albert.py](https://codecov.io/gh/huggingface/transformers/pull/7409/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19hbGJlcnQucHk=) | `84.33% <100.00%> (+0.17%)` | :arrow_up: | | [src/transformers/modeling\_bert.py](https://codecov.io/gh/huggingface/transformers/pull/7409/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19iZXJ0LnB5) | `88.48% <100.00%> (+0.11%)` | :arrow_up: | | [src/transformers/modeling\_mobilebert.py](https://codecov.io/gh/huggingface/transformers/pull/7409/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19tb2JpbGViZXJ0LnB5) | `89.43% <100.00%> (+0.04%)` | :arrow_up: | | [src/transformers/modeling\_roberta.py](https://codecov.io/gh/huggingface/transformers/pull/7409/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19yb2JlcnRhLnB5) | `96.62% <100.00%> (+0.06%)` | :arrow_up: | | [src/transformers/modeling\_t5.py](https://codecov.io/gh/huggingface/transformers/pull/7409/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190NS5weQ==) | `82.83% <100.00%> (+0.06%)` | :arrow_up: | | [src/transformers/modeling\_tf\_albert.py](https://codecov.io/gh/huggingface/transformers/pull/7409/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9hbGJlcnQucHk=) | `97.35% <100.00%> (+0.01%)` | :arrow_up: | | ... and [25 more](https://codecov.io/gh/huggingface/transformers/pull/7409/diff?src=pr&el=tree-more) | | ------ [Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/7409?src=pr&el=continue). 
> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta) > `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data` > Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/7409?src=pr&el=footer). Last update [eab5f59...ac38a32](https://codecov.io/gh/huggingface/transformers/pull/7409?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments). <|||||>Renamed it to `num_decoder_layers`, and fixed docstring!
transformers
7,408
closed
Allow creation of asymmetrical T5
https://arxiv.org/abs/2006.10369, among others, shows that models with fewer decoder layers than encoder layers can perform well and run generation much faster. Right now it is difficult to do distillation on T5 because there is only `T5Config.num_layers`, which controls both the encoder and the decoder depth.
09-26-2020 17:58:21
09-26-2020 17:58:21
transformers
7,407
closed
How to train a model based on CTRL
# ❓ Questions & Help I am wondering how to train A Conditional Transformer Language Model for Controllable Generation (CTRL)? Thanks <!-- The GitHub issue tracker is primarly intended for bugs, feature requests, new models and benchmarks, and migration questions. For all other questions, we direct you to the Hugging Face forum: https://discuss.huggingface.co/ . You can also try Stack Overflow (SO) where a whole community of PyTorch and Tensorflow enthusiast can help you out. In this case, make sure to tag your question with the right deep learning framework as well as the huggingface-transformers tag: https://stackoverflow.com/questions/tagged/huggingface-transformers --> ## Details <!-- Description of your issue --> I saw that there is a code for text generation based on CTRL, but did not find any for the training phase! <!-- You should first ask your question on the forum or SO, and only if you didn't get an answer ask it here on GitHub. --> **A link to original question on the forum/Stack Overflow**:
09-26-2020 14:16:13
09-26-2020 14:16:13
This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.
transformers
7,406
closed
BERT base Chinese model gives error: EagerTensor object has no attribute 'size'
## Environment info <!-- You can run the command `transformers-cli env` and copy-and-paste its output below. Don't forget to fill out the missing fields in that output! --> - `transformers` version: 3.2.0 - Platform: - Python version: 3.7 - PyTorch version (GPU?): - Tensorflow version (GPU?): 3.2.1 - Using GPU in script?: - Using distributed or parallel set-up in script?: No ## Information I am just trying to get BERT embedding using the Chinese BERT base as explained in GitHub but getting an error. Model I am using (Bert, XLNet ...): BERT (Chinese) ## To reproduce ``` from transformers import AutoTokenizer, AutoModel tokenizer = AutoTokenizer.from_pretrained("bert-base-chinese") model = AutoModel.from_pretrained("bert-base-chinese") inputs = tokenizer("和 管理 , 发挥 公路", return_tensors="tf") outputs = model(**inputs) ``` Error ``` AttributeError Traceback (most recent call last) <ipython-input-30-481c0ebb1173> in <module> 1 inputs = tokenizer("和 管理 , 发挥 公路", return_tensors="tf") 2 ----> 3 outputs = model(**inputs) ~/anaconda3/lib/python3.7/site-packages/torch/nn/modules/module.py in _call_impl(self, *input, **kwargs) 720 result = self._slow_forward(*input, **kwargs) 721 else: --> 722 result = self.forward(*input, **kwargs) 723 for hook in itertools.chain( 724 _global_forward_hooks.values(), ~/anaconda3/lib/python3.7/site-packages/transformers/modeling_bert.py in forward(self, input_ids, attention_mask, token_type_ids, position_ids, head_mask, inputs_embeds, encoder_hidden_states, encoder_attention_mask, output_attentions, output_hidden_states, return_dict) 789 raise ValueError("You cannot specify both input_ids and inputs_embeds at the same time") 790 elif input_ids is not None: --> 791 input_shape = input_ids.size() 792 elif inputs_embeds is not None: 793 input_shape = inputs_embeds.size()[:-1] AttributeError: 'tensorflow.python.framework.ops.EagerTensor' object has no attribute 'size' ```
09-26-2020 11:46:31
09-26-2020 11:46:31
You are mixing the PyTorch and TensorFlow APIs. You should use `return_tensors="pt"` if you use PyTorch, or use `TFAutoModel` for TensorFlow.<|||||>@esp32wrangler is correct!
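For completeness, a corrected version of the snippet above, staying on the TensorFlow side (a sketch, not tested here):

```python
from transformers import AutoTokenizer, TFAutoModel

tokenizer = AutoTokenizer.from_pretrained("bert-base-chinese")
model = TFAutoModel.from_pretrained("bert-base-chinese")

inputs = tokenizer("和 管理 , 发挥 公路", return_tensors="tf")
outputs = model(inputs)  # TF models accept the encoded dict directly
```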
transformers
7,405
closed
Add summarization support to ONNX conversion
<!-- This line specifies which issue to close after the pull request is merged. --> Fixes #7404
09-26-2020 03:58:14
09-26-2020 03:58:14
# [Codecov](https://codecov.io/gh/huggingface/transformers/pull/7405?src=pr&el=h1) Report > Merging [#7405](https://codecov.io/gh/huggingface/transformers/pull/7405?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/e50a931c118b9f55f77a743bf703f436bf7a7c29?el=desc) will **decrease** coverage by `0.61%`. > The diff coverage is `n/a`. [![Impacted file tree graph](https://codecov.io/gh/huggingface/transformers/pull/7405/graphs/tree.svg?width=650&height=150&src=pr&token=9qOlN6Hb1c)](https://codecov.io/gh/huggingface/transformers/pull/7405?src=pr&el=tree) ```diff @@ Coverage Diff @@ ## master #7405 +/- ## ========================================== - Coverage 77.06% 76.45% -0.62% ========================================== Files 181 181 Lines 35781 35781 ========================================== - Hits 27575 27356 -219 - Misses 8206 8425 +219 ``` | [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/7405?src=pr&el=tree) | Coverage Δ | | |---|---|---| | [src/transformers/tokenization\_fsmt.py](https://codecov.io/gh/huggingface/transformers/pull/7405/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fZnNtdC5weQ==) | `20.34% <0.00%> (-74.90%)` | :arrow_down: | | [src/transformers/modeling\_tf\_xlm.py](https://codecov.io/gh/huggingface/transformers/pull/7405/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl94bG0ucHk=) | `18.73% <0.00%> (-74.53%)` | :arrow_down: | | [src/transformers/modeling\_tf\_lxmert.py](https://codecov.io/gh/huggingface/transformers/pull/7405/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9seG1lcnQucHk=) | `22.14% <0.00%> (-72.41%)` | :arrow_down: | | [src/transformers/modeling\_tf\_flaubert.py](https://codecov.io/gh/huggingface/transformers/pull/7405/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9mbGF1YmVydC5weQ==) | `21.53% <0.00%> (-68.15%)` | :arrow_down: | | [src/transformers/modeling\_marian.py](https://codecov.io/gh/huggingface/transformers/pull/7405/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19tYXJpYW4ucHk=) | `60.00% <0.00%> (-30.00%)` | :arrow_down: | | [src/transformers/activations.py](https://codecov.io/gh/huggingface/transformers/pull/7405/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9hY3RpdmF0aW9ucy5weQ==) | `79.16% <0.00%> (-4.17%)` | :arrow_down: | | [src/transformers/tokenization\_utils\_base.py](https://codecov.io/gh/huggingface/transformers/pull/7405/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fdXRpbHNfYmFzZS5weQ==) | `93.64% <0.00%> (-0.28%)` | :arrow_down: | | [src/transformers/generation\_tf\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/7405/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9nZW5lcmF0aW9uX3RmX3V0aWxzLnB5) | `86.21% <0.00%> (-0.26%)` | :arrow_down: | | [src/transformers/modeling\_bart.py](https://codecov.io/gh/huggingface/transformers/pull/7405/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19iYXJ0LnB5) | `94.10% <0.00%> (-0.17%)` | :arrow_down: | | [src/transformers/file\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/7405/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9maWxlX3V0aWxzLnB5) | `83.37% <0.00%> (+0.24%)` | :arrow_up: | | ... and [6 more](https://codecov.io/gh/huggingface/transformers/pull/7405/diff?src=pr&el=tree-more) | | ------ [Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/7405?src=pr&el=continue). 
> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta) > `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data` > Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/7405?src=pr&el=footer). Last update [e50a931...ee12607](https://codecov.io/gh/huggingface/transformers/pull/7405?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments). <|||||>This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.
transformers
7,404
closed
Add support for exporting summarization models to ONNX
# 🚀 Feature request

Add support for exporting summarization models to ONNX.

## Motivation

I want to serve summarization models at the edge through an ONNX runtime. However, I am unable to convert facebook/bart-large-cnn (using the `BartForConditionalGeneration` class) to ONNX, as the provided script doesn't support the summarization pipeline due to PyTorch not being able to export the triu operator to ONNX. There are workarounds listed at https://github.com/pytorch/pytorch/issues/32968 that could be used to make this possible.

## Your contribution

I don't really know the internals of PyTorch that well, so I don't think I can make any direct contributions.
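For illustration, below is a minimal sketch of the kind of workaround discussed in that PyTorch issue: replacing `torch.triu` with a mask built from `arange` comparisons, which the ONNX exporter handles. The helper name `triu_onnx` is hypothetical and not part of any library.

```python
import torch

def triu_onnx(x: torch.Tensor, diagonal: int = 0) -> torch.Tensor:
    # Keep x[..., i, j] only where j - i >= diagonal, mirroring torch.triu,
    # but built from ops that export cleanly to ONNX.
    rows, cols = x.shape[-2], x.shape[-1]
    row_idx = torch.arange(rows, device=x.device).unsqueeze(-1)
    col_idx = torch.arange(cols, device=x.device).unsqueeze(0)
    mask = (col_idx - row_idx) >= diagonal
    return x * mask.to(x.dtype)

# Sanity check against the original operator
x = torch.randn(4, 4)
assert torch.allclose(triu_onnx(x), torch.triu(x))
```

Monkey-patching such a helper in place of `torch.triu` before tracing is one way an export could be made to go through, assuming the rest of the graph is otherwise exportable.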
09-25-2020 21:42:18
09-25-2020 21:42:18
I just realized that the triu operation was fixed recently, sorry about that. I will create a PR to add summarization to the ONNX conversion script.<|||||>@sagarreddypatil Did you need to make any other changes to get summarization working with ONNX Runtime? It only appears to work for text with five tokens for me, which I believe is due to <strike>the [dummy inputs](https://huggingface.co/transformers/serialization.html#dummy-inputs-and-standard-lengths)</strike> (edit: looks to be due to the `infer_shapes` method)

```python
from transformers import AutoTokenizer
import onnxruntime as rt
import numpy as np

text = "one two three four five"
tokenizer = AutoTokenizer.from_pretrained("facebook/bart-large-cnn")
tokens = tokenizer(text)
input = {
    'input_ids': np.array([tokens['input_ids']]),
    'attention_mask': np.array([tokens['attention_mask']])
}

sess = rt.InferenceSession("bart-large-cnn.onnx")
output = sess.run(None, input)
print(output)
```

Other token counts fail with:

```text
2020-10-05 16:44:25.884944 [E:onnxruntime:, sequential_executor.cc:318 Execute] Non-zero status code returned while running Reshape node. Name:'Reshape_62' Status Message: /Users/runner/work/1/s/onnxruntime/core/providers/cpu/tensor/reshape_helper.h:43 onnxruntime::ReshapeHelper::ReshapeHelper(const onnxruntime::TensorShape &, std::vector<int64_t> &) gsl::narrow_cast<int64_t>(input_shape.Size()) == size was false. The input tensor cannot be reshaped to the requested shape. Input shape:{1,7}, requested shape:{6}
```
<|||||>I actually did encounter that issue. In my "hotfix", I simply added the summarization option to the list, but I believe the implementation of how the ONNX model is made needs to be changed. But yes, it does not work for more than 5 tokens. I am not really sure how to fix that, but you seem to be better with this than I am.<|||||>Any chance someone was able to solve this issue?<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread. Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored.<|||||>I'm receiving this error

`Error while converting the model: The type of axis index is expected to be an integer`

when trying to convert bart-large-cnn:

`python3 -m transformers.convert_graph_to_onnx --model facebook/bart-large-cnn --framework pt bart-large-cnn.onnx`<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored.<|||||>> I'm receiving this error
>
> `Error while converting the model: The type of axis index is expected to be an integer`
>
> when trying to convert bart-large-cnn
>
> `python3 -m transformers.convert_graph_to_onnx --model facebook/bart-large-cnn --framework pt bart-large-cnn.onnx`

I'm encountering the same issue when exporting a gpt2 model using `--pipeline text-generation`.<|||||>We're currently working on a rework of the ONNX implementation within Transformers, which is available here: https://github.com/huggingface/transformers/pull/11786

Instead of offering a script to enable conversions for all models (which was not kept up to date with recent model releases), we're opting for a case-by-case approach, while offering the tools to convert models manually in a straightforward and simple manner, by creating `OnnxConfig` configuration objects to specify the input and output types of each model. Please take a look at the PR and give us your feedback.<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread. Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored.
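For anyone hitting the fixed-length symptom discussed in this thread, here is a hedged sketch of one way to export with explicit dynamic axes so the graph is not pinned to the dummy input's length. This is illustrative only: it exports just the BART encoder, it is not the `convert_graph_to_onnx` code path, and the output file name and opset version are assumptions.

```python
import torch
from transformers import BartForConditionalGeneration, BartTokenizer

tokenizer = BartTokenizer.from_pretrained("facebook/bart-large-cnn")
model = BartForConditionalGeneration.from_pretrained("facebook/bart-large-cnn").eval()

# Any short dummy input works; dynamic_axes keeps the exported graph length-agnostic.
dummy = tokenizer("a short dummy input", return_tensors="pt")

torch.onnx.export(
    model.model.encoder,  # encoder only, as a simple example
    (dummy["input_ids"], dummy["attention_mask"]),
    "bart-encoder.onnx",
    input_names=["input_ids", "attention_mask"],
    output_names=["last_hidden_state"],
    dynamic_axes={
        "input_ids": {0: "batch", 1: "sequence"},
        "attention_mask": {0: "batch", 1: "sequence"},
        "last_hidden_state": {0: "batch", 1: "sequence"},
    },
    opset_version=11,
)
```

Exporting the full encoder-decoder together with the generation loop is more involved (the decoder needs its own dynamic axes and, ideally, past key/value handling), which is part of why a dedicated conversion path is needed.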
transformers
7,403
closed
[makefile] 10x speed up checking/fixing
I present to you an updated `fixup` target which is now super-fast, as it only fixes and validates files that were modified since the branching point, which is usually ~5 out of ~1000. Whoah!

Give it a try:

```
make fixup
```

Because of the start-up overhead, and because the 2 custom scripts aren't optimized yet (to check only modified files), it's currently only about 10 times faster.

Before this PR:

```
time make fixup

real    0m19.272s
user    2m28.253s
sys     0m2.794s
```

After this PR:

```
time make fixup

real    0m2.864s
user    0m2.849s
sys     0m0.778s
```

So what's happening here:

1. `git merge-base --fork-point master` - gets the sha of the branching point.
2. `git diff --name-only $(git merge-base --fork-point master)` - gives us all the filenames that were modified since the branching point (regardless of whether they were staged and/or pushed, or are still local). The only missing part would be newly added files that aren't under git yet - shouldn't be a problem though.
3. Finally, we want to check only specific top-level folders, so: `git diff --name-only $(git merge-base --fork-point master) | egrep '^(examples|templates|tests|src|utils)'`
4. Now feed that to flake8, black, isort, etc.: `flake8 $(git diff --name-only $(git merge-base --fork-point master) | egrep '^(examples|templates|tests|src|utils)')` - but only if there were modified files, so there is an `if` check in the Makefile. Otherwise, if we get no match, the tools run wild on the whole repo and report things we don't care about.

@sgugger, if you modify `utils/check_copies.py` to optionally accept specific filenames, we can unleash its full power. If you are up for it, do the required modifications and I will take care of the Makefile to pass them on.

----

## refactor repeated dir listings

This PR also refactors repeated dirs into a single variable at the top of the file.

@sgugger, @LysandreJik, @sshleifer
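For readers who want the gist without opening the diff, here is a minimal sketch of the shell logic that a target like this wires up. It is illustrative only and not the exact recipe in the PR; tool names and the folder filter are taken from the description above.

```bash
#!/usr/bin/env bash
# Sketch: run the formatters/linters only on files modified since the branching point.
set -euo pipefail

modified_py_files=$(git diff --name-only "$(git merge-base --fork-point master)" \
  | grep -E '^(examples|templates|tests|src|utils)' || true)

if [ -n "$modified_py_files" ]; then
  echo "Checking/fixing: $modified_py_files"
  black $modified_py_files
  isort $modified_py_files
  flake8 $modified_py_files
else
  echo "No relevant files were modified."
fi
```

The `if` guard mirrors the point made above: with an empty file list, the tools would otherwise either error out or fall back to scanning the whole repo.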
09-25-2020 19:29:20
09-25-2020 19:29:20
# [Codecov](https://codecov.io/gh/huggingface/transformers/pull/7403?src=pr&el=h1) Report > Merging [#7403](https://codecov.io/gh/huggingface/transformers/pull/7403?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/e50a931c118b9f55f77a743bf703f436bf7a7c29?el=desc) will **increase** coverage by `1.83%`. > The diff coverage is `n/a`. [![Impacted file tree graph](https://codecov.io/gh/huggingface/transformers/pull/7403/graphs/tree.svg?width=650&height=150&src=pr&token=9qOlN6Hb1c)](https://codecov.io/gh/huggingface/transformers/pull/7403?src=pr&el=tree) ```diff @@ Coverage Diff @@ ## master #7403 +/- ## ========================================== + Coverage 77.06% 78.90% +1.83% ========================================== Files 181 181 Lines 35781 35781 ========================================== + Hits 27575 28233 +658 + Misses 8206 7548 -658 ``` | [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/7403?src=pr&el=tree) | Coverage Δ | | |---|---|---| | [src/transformers/modeling\_tf\_lxmert.py](https://codecov.io/gh/huggingface/transformers/pull/7403/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9seG1lcnQucHk=) | `22.14% <0.00%> (-72.41%)` | :arrow_down: | | [src/transformers/modeling\_tf\_xlm.py](https://codecov.io/gh/huggingface/transformers/pull/7403/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl94bG0ucHk=) | `58.52% <0.00%> (-34.74%)` | :arrow_down: | | [src/transformers/modeling\_marian.py](https://codecov.io/gh/huggingface/transformers/pull/7403/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19tYXJpYW4ucHk=) | `60.00% <0.00%> (-30.00%)` | :arrow_down: | | [src/transformers/modeling\_tf\_gpt2.py](https://codecov.io/gh/huggingface/transformers/pull/7403/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9ncHQyLnB5) | `65.38% <0.00%> (-29.59%)` | :arrow_down: | | [src/transformers/modeling\_xlnet.py](https://codecov.io/gh/huggingface/transformers/pull/7403/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ194bG5ldC5weQ==) | `60.70% <0.00%> (-22.68%)` | :arrow_down: | | [src/transformers/modeling\_roberta.py](https://codecov.io/gh/huggingface/transformers/pull/7403/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19yb2JlcnRhLnB5) | `81.51% <0.00%> (-15.11%)` | :arrow_down: | | [src/transformers/modeling\_transfo\_xl\_utilities.py](https://codecov.io/gh/huggingface/transformers/pull/7403/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190cmFuc2ZvX3hsX3V0aWxpdGllcy5weQ==) | `52.98% <0.00%> (-13.44%)` | :arrow_down: | | [src/transformers/modeling\_transfo\_xl.py](https://codecov.io/gh/huggingface/transformers/pull/7403/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190cmFuc2ZvX3hsLnB5) | `67.04% <0.00%> (-12.69%)` | :arrow_down: | | [src/transformers/tokenization\_roberta.py](https://codecov.io/gh/huggingface/transformers/pull/7403/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fcm9iZXJ0YS5weQ==) | `83.11% <0.00%> (-10.39%)` | :arrow_down: | | [src/transformers/tokenization\_utils\_base.py](https://codecov.io/gh/huggingface/transformers/pull/7403/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fdXRpbHNfYmFzZS5weQ==) | `86.33% <0.00%> (-7.58%)` | :arrow_down: | | ... and [16 more](https://codecov.io/gh/huggingface/transformers/pull/7403/diff?src=pr&el=tree-more) | | ------ [Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/7403?src=pr&el=continue). 
> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta) > `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data` > Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/7403?src=pr&el=footer). Last update [e50a931...9c60161](https://codecov.io/gh/huggingface/transformers/pull/7403?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments). <|||||>This looks cool! I can add the file argument to `check_copies` since it's easy (you can do it too if you prefer to have it in one PR), but since `check_copies` is super fast now, it probably won't add much.<|||||>I meant if you wanted to re-enable the blackify functions that were slowing things down.<|||||>I'm waiting to see if there are actual use cases for that before re-enabling it, as there are some ways to make it faster by rewriting the whole script. It would slow down your fast command by quite a bit if a file like roberta has been changed.<|||||>Hmm, I hadn't thought of the scenario where a branch gets rebased: if that is done, it will currently add all the files modified since the branching point and not just the files modified by the PR. There must be a way to subtract the changes that are already in master. I will have to think some more. If we can successfully get that minimal list of files, then any developer not working on roberta shouldn't be impacted by its slowdown. **edit**: it works just fine with rebasing - it only shows other files if you rebased and haven't committed the change. So all is good here.
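As a hedged illustration of the "optionally accept specific filenames" idea discussed above - this is purely hypothetical and not the actual `utils/check_copies.py` interface - the argument handling could look roughly like this:

```python
# Hypothetical sketch of threading an optional file list into a check script;
# the check performed here is a stand-in, not the real consistency check.
import argparse
import glob

def check_file(path: str) -> None:
    # Stand-in check: just make sure the file can be read as UTF-8 text.
    with open(path, encoding="utf-8") as f:
        f.read()

if __name__ == "__main__":
    parser = argparse.ArgumentParser()
    parser.add_argument("files", nargs="*", help="specific files to check; defaults to scanning src/")
    args = parser.parse_args()
    files = args.files or glob.glob("src/**/*.py", recursive=True)
    for path in files:
        check_file(path)
```

With something like this in place, the Makefile could pass the same filtered file list to the script, so a run that touches only a handful of files never triggers a full-repo scan.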