repo | number | state | title | body | created_at | closed_at | comments
---|---|---|---|---|---|---|---|
transformers | 7,002 | closed | How to bypass "Special tokens have been added in the vocabulary..." warning? | Is there a way to avoid always getting:
> Special tokens have been added in the vocabulary, make sure the associated word embedding are fine-tuned or trained
@ https://github.com/huggingface/transformers/blob/master/src/transformers/tokenization_utils_base.py#L1614
(other than turning logging off)
As far as I can see from stepping through that code, there are always special tokens (e.g. eos, pad, etc.), i.e. there is nothing special about it. What purpose does this warning serve when loading a tokenizer?
I'm not sure how the end user can act on the suggestion:
> make sure the associated word embedding are fine-tuned or trained
when they just want to run, say, the `generate` function on a pre-trained model, other than just learning to ignore this warning and not paying heed to when a warning is really saying something crucial.
Thoughts?
| 09-08-2020 02:19:40 | 09-08-2020 02:19:40 | Hello @stas00 ,
Can you show us a sample of your code?
Did you explicitly add special tokens to your tokenizer?
From my understanding this warning appears when the method _sanitize_special_tokens_ of the tokenizer returns a strictly positive integer.
The docstring of the method is:
```
"""
Make sure that all the special tokens attributes of the tokenizer (:obj:`tokenizer.mask_token`,
:obj:`tokenizer.cls_token`, etc.) are in the vocabulary.
Add the missing ones to the vocabulary if needed.
Return:
:obj:`int`: The number of tokens added in the vocabulary during the operation.
"""
```
So this warning appears when you add special tokens to the vocabulary **after** loading the tokenizer. If you use a model trained on the first version of the tokenizer (before adding the new tokens), you might feed it tokens it has not been trained on, which would lead to a random embedding and worse performance.
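For illustration, a minimal (hypothetical) way this situation arises, together with the usual remedy of resizing the model's embeddings - the extra token here is made up:
```python
from transformers import BertTokenizer, BertModel

tokenizer = BertTokenizer.from_pretrained("bert-base-uncased")
model = BertModel.from_pretrained("bert-base-uncased")

# Adding extra special tokens *after* loading is what triggers the warning:
num_added = tokenizer.add_special_tokens({"additional_special_tokens": ["<ctx>"]})

# The new ids point past the pretrained embedding matrix, so the embeddings
# must be resized (and the new rows fine-tuned) before the model sees them:
if num_added > 0:
    model.resize_token_embeddings(len(tokenizer))
```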
If you load your model and your tokenizer with the same training, for example:
```
tokenizer = BertTokenizer.from_pretrained('bert-base-uncased')
model = BertModel.from_pretrained('bert-base-uncased')
```
then I suggest not providing special tokens, as the basic ones are already present.
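To quickly check which special tokens the pretrained tokenizer already defines (just an illustration, not code from the thread):
```python
from transformers import BertTokenizer

tokenizer = BertTokenizer.from_pretrained("bert-base-uncased")
print(tokenizer.special_tokens_map)  # {'unk_token': '[UNK]', 'sep_token': '[SEP]', ...}
print(len(tokenizer))                # vocabulary size, already including those tokens
```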
Please let me know if it helps.<|||||>You're absolutely correct, the tokenizer was adding special tokens (copied from another tokenizer, but it didn't really need them), so I removed them now and the warning is gone.
And, yes, I forgot to add the code - wasn't my best!
I much appreciate your follow up, @nassim-yagoub <|||||>The same warning also happens with this code:
```
import torch
from transformers import AutoModel, AutoTokenizer
tokenizer = AutoTokenizer.from_pretrained('vinai/bertweet-base')
model = AutoModel.from_pretrained('vinai/bertweet-base')
```
```
python --version
Python 3.9.13
```
I do not understand where tokens are added to the vocabulary after loading the tokenizer.<|||||>@sbocconi Any idea how to solve the warnings in the case of BERTTweet?
<|||||>No unfortunately @codepujan, it has been a while since I last used this functionality |
transformers | 7,001 | closed | typo | apologies for the tiny PRs, just sending those as I find them.
| 09-08-2020 02:01:34 | 09-08-2020 02:01:34 | |
transformers | 7,000 | closed | access to the embeddings for query and text used in a downstream NLP task | Newbie question -
is there any way to access the embeddings that are generated for the passage (and query) that is fed to a BertForQuestionAnswering model? | 09-07-2020 21:28:49 | 09-07-2020 21:28:49 | I realized that in QA, word/token embeddings are used, while I was looking for multi-sentence level embeddings. Pl. ignore my question. |
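For reference, a sketch of one way to get token-level representations for the query/passage pair; it assumes a SQuAD fine-tuned BERT checkpoint and relies on `output_hidden_states`, and is not code from the thread:
```python
import torch
from transformers import BertTokenizer, BertForQuestionAnswering

name = "bert-large-uncased-whole-word-masking-finetuned-squad"
tokenizer = BertTokenizer.from_pretrained(name)
model = BertForQuestionAnswering.from_pretrained(name, output_hidden_states=True)

inputs = tokenizer("Who wrote Hamlet?", "Hamlet was written by Shakespeare.", return_tensors="pt")
with torch.no_grad():
    outputs = model(**inputs)

# The last element of the returned tuple holds the hidden states of every layer;
# the final layer gives one vector per token of the question + passage.
hidden_states = outputs[-1]
print(hidden_states[-1].shape)  # (batch, sequence_length, hidden_size)
```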
transformers | 6,999 | closed | fixed trainer tr_loss memory leak | Fixes #6939
The Trainer class contains the memory leak described [here](https://discuss.pytorch.org/t/cpu-ram-usage-increasing-for-every-epoch/24475/6). It is not specific to any particular model type and will occur with any model trained using the trainer class.
It is fixed in this pull request.
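A minimal sketch of the pattern behind the leak (a toy loop, not the Trainer's actual code):
```python
import torch

model = torch.nn.Linear(4, 1)
batches = [torch.randn(8, 4) for _ in range(100)]

tr_loss = torch.tensor(0.0)
for batch in batches:
    loss = model(batch).pow(2).mean()
    loss.backward()
    # tr_loss += loss           # keeps every step's autograd graph alive -> memory grows
    tr_loss += loss.detach()    # keeps only the value, so each step's graph can be freed
    model.zero_grad()
```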
The issue is demonstrated in [this Colab notebook](https://colab.research.google.com/drive/1KQZEiZtfY14sDiAQnfw0gkKj9eC6XpTm?usp=sharing). The model trains for around 20 000 steps (this takes around 10 minutes on a t4) before using up all 12GB of RAM. | 09-07-2020 21:00:26 | 09-07-2020 21:00:26 | # [Codecov](https://codecov.io/gh/huggingface/transformers/pull/6999?src=pr&el=h1) Report
> Merging [#6999](https://codecov.io/gh/huggingface/transformers/pull/6999?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/90ec78b5140251f093f658ebd4d2925e8c03f5e6?el=desc) will **decrease** coverage by `1.44%`.
> The diff coverage is `100.00%`.
[](https://codecov.io/gh/huggingface/transformers/pull/6999?src=pr&el=tree)
```diff
@@ Coverage Diff @@
## master #6999 +/- ##
==========================================
- Coverage 80.58% 79.14% -1.45%
==========================================
Files 161 161
Lines 30123 30123
==========================================
- Hits 24276 23841 -435
- Misses 5847 6282 +435
```
| [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/6999?src=pr&el=tree) | Coverage Δ | |
|---|---|---|
| [src/transformers/trainer.py](https://codecov.io/gh/huggingface/transformers/pull/6999/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90cmFpbmVyLnB5) | `54.95% <100.00%> (ø)` | |
| [src/transformers/modeling\_tf\_longformer.py](https://codecov.io/gh/huggingface/transformers/pull/6999/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9sb25nZm9ybWVyLnB5) | `16.37% <0.00%> (-82.31%)` | :arrow_down: |
| [src/transformers/modeling\_lxmert.py](https://codecov.io/gh/huggingface/transformers/pull/6999/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19seG1lcnQucHk=) | `70.01% <0.00%> (-20.75%)` | :arrow_down: |
| [src/transformers/modeling\_roberta.py](https://codecov.io/gh/huggingface/transformers/pull/6999/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19yb2JlcnRhLnB5) | `77.73% <0.00%> (-19.35%)` | :arrow_down: |
| [src/transformers/generation\_tf\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/6999/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9nZW5lcmF0aW9uX3RmX3V0aWxzLnB5) | `84.21% <0.00%> (-2.26%)` | :arrow_down: |
| [src/transformers/tokenization\_utils\_base.py](https://codecov.io/gh/huggingface/transformers/pull/6999/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fdXRpbHNfYmFzZS5weQ==) | `94.03% <0.00%> (+0.27%)` | :arrow_up: |
| [src/transformers/modeling\_bart.py](https://codecov.io/gh/huggingface/transformers/pull/6999/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19iYXJ0LnB5) | `94.41% <0.00%> (+0.50%)` | :arrow_up: |
| [src/transformers/activations.py](https://codecov.io/gh/huggingface/transformers/pull/6999/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9hY3RpdmF0aW9ucy5weQ==) | `90.00% <0.00%> (+5.00%)` | :arrow_up: |
| [src/transformers/data/data\_collator.py](https://codecov.io/gh/huggingface/transformers/pull/6999/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9kYXRhL2RhdGFfY29sbGF0b3IucHk=) | `92.30% <0.00%> (+5.66%)` | :arrow_up: |
| [src/transformers/modeling\_marian.py](https://codecov.io/gh/huggingface/transformers/pull/6999/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19tYXJpYW4ucHk=) | `90.00% <0.00%> (+30.00%)` | :arrow_up: |
| ... and [1 more](https://codecov.io/gh/huggingface/transformers/pull/6999/diff?src=pr&el=tree-more) | |
------
[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/6999?src=pr&el=continue).
> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)
> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data`
> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/6999?src=pr&el=footer). Last update [90ec78b...240b7da](https://codecov.io/gh/huggingface/transformers/pull/6999?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).
<|||||>I think this was touched by @jysohn23 recently so pinging him here<|||||>I think the problem is not that we don't call `.item()` but that we don't call `.detach()`, which means some variables are kept forever for a backward pass (that is never called).
The `.item()` calls were removed because that's needed for faster TPU training.<|||||>This ends up calling `.item()` every step, which hurts performance by like 2X. What @sgugger suggests sounds promising. Can we try that out instead?<|||||>I have updated it to create a detached tensor instead of calling item().<|||||>Can you confirm it fixes the memory leak? This is the right fix IMO (@LysandreJik this might be the fix for the TPU memory leak we have in another issue too.)<|||||>> Can you confirm it fixes the memory leak? This is the right fix IMO (@LysandreJik this might be the fix for the TPU memory leak we have in another issue too.)
Great! Yes, I have tested to make sure that this fixes the leak. |
transformers | 6,998 | closed | Fix TF Trainer loss calculation | <!-- This line specifies which issue to close after the pull request is merged. -->
Fixes #6968
## Description
Issue #6968 is about an incorrect loss calculation: the per-example losses are divided by the number of sentences rather than by the number of non-ignored tokens (i.e. label != -100) when the task is a token level task.
## Implementation
Before a whole batch is distributed to replicas, we compute the number of instances in that batch. Depending on the task type (sentence level or token level), the word `instance` means different things:
- sentence level task: it means examples
- token level task: it means the tokens with label != -100
This information (number of instances) is injected into the global batches. While each replica receives only a small batch, it uses this information to correctly compute the scaled losses.
If no information is provided in the dataset, the default behavior is to use the number of examples in a global batch.
This way, the code change is minimal.
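A rough sketch of the scaling idea for a token level task; the identifier names are illustrative and not necessarily the ones used in the PR:
```python
import tensorflow as tf

def scaled_loss(labels, logits, nb_instances_in_global_batch):
    # per-token losses, ignoring positions labelled -100
    loss_fn = tf.keras.losses.SparseCategoricalCrossentropy(
        from_logits=True, reduction=tf.keras.losses.Reduction.NONE
    )
    labels = tf.reshape(labels, [-1])
    logits = tf.reshape(logits, [-1, tf.shape(logits)[-1]])
    active = tf.not_equal(labels, -100)
    per_token = loss_fn(tf.boolean_mask(labels, active), tf.boolean_mask(logits, active))
    # divide by the instance count of the *global* batch, so that summing the
    # per-replica results reproduces the true mean over all replicas
    return tf.reduce_sum(per_token) / tf.cast(nb_instances_in_global_batch, per_token.dtype)
```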
## Test code
import os
import random
import shutil

shutil.rmtree("./tmp/", ignore_errors=True)
os.mkdir("./tmp/")

nb_sentences = 70
words = ["i", "like", "dogs", "but", "you", "prefer", "cats"]

with open("./tmp/train.txt", "w", encoding="UTF-8") as fp:
    for i in range(nb_sentences):
        if i == 0:
            for word in words:
                fp.write(f"{word} O\n")
        else:
            fp.write(f" \n")
        fp.write("\n")

os.system("cp ./tmp/train.txt ./tmp/dev.txt")

labels = ["O", "B-MISC", "I-MISC", "B-PER", "I-PER", "B-ORG", "I-ORG", "B-LOC", "I-LOC"]

with open("./tmp/test.txt", "w", encoding="UTF-8") as fp:
    for i in range(nb_sentences):
        for word in words:
            label = random.choice(labels)
            fp.write(f"{word} {label}\n")
        fp.write("\n")

command = (
    "python run_tf_ner.py "
    + "--model_name_or_path distilbert-base-uncased "
    + "--data_dir ./tmp/ --seed 2020 --output_dir ./tmp/ "
    + "--overwrite_output_dir --logging_steps 1 "
    + "--do_train --do_eval --do_predict "
    + "--num_train_epochs 1 "
    + f"--per_device_train_batch_size {nb_sentences} "
    + f"--per_device_eval_batch_size {nb_sentences} --labels '' "
    + "--max_seq_length 16"
)
print(command)
os.system(command)
Testing against master, you will see a smaller loss value (~0.3) than when testing against this PR's code (1.0 ~ 2.0), because on master the denominator is `70` (even though 69 of the examples contain only ignored tokens), while with this PR the denominator is `7` (the number of real tokens). | 09-07-2020 19:49:53 | 09-07-2020 19:49:53 | # [Codecov](https://codecov.io/gh/huggingface/transformers/pull/6998?src=pr&el=h1) Report
> Merging [#6998](https://codecov.io/gh/huggingface/transformers/pull/6998?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/0054a48cdd64e7309184a64b399ab2c58d75d4e5?el=desc) will **increase** coverage by `0.19%`.
> The diff coverage is `20.00%`.
[](https://codecov.io/gh/huggingface/transformers/pull/6998?src=pr&el=tree)
```diff
@@ Coverage Diff @@
## master #6998 +/- ##
==========================================
+ Coverage 80.53% 80.72% +0.19%
==========================================
Files 168 168
Lines 32179 32197 +18
==========================================
+ Hits 25915 25991 +76
+ Misses 6264 6206 -58
```
| [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/6998?src=pr&el=tree) | Coverage Δ | |
|---|---|---|
| [src/transformers/trainer\_tf.py](https://codecov.io/gh/huggingface/transformers/pull/6998/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90cmFpbmVyX3RmLnB5) | `12.46% <20.00%> (+0.24%)` | :arrow_up: |
| [src/transformers/modeling\_tf\_flaubert.py](https://codecov.io/gh/huggingface/transformers/pull/6998/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9mbGF1YmVydC5weQ==) | `24.53% <0.00%> (-63.81%)` | :arrow_down: |
| [src/transformers/modeling\_marian.py](https://codecov.io/gh/huggingface/transformers/pull/6998/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19tYXJpYW4ucHk=) | `60.00% <0.00%> (-30.00%)` | :arrow_down: |
| [src/transformers/modeling\_openai.py](https://codecov.io/gh/huggingface/transformers/pull/6998/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19vcGVuYWkucHk=) | `72.25% <0.00%> (-10.00%)` | :arrow_down: |
| [src/transformers/activations.py](https://codecov.io/gh/huggingface/transformers/pull/6998/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9hY3RpdmF0aW9ucy5weQ==) | `85.00% <0.00%> (-5.00%)` | :arrow_down: |
| [src/transformers/modeling\_bart.py](https://codecov.io/gh/huggingface/transformers/pull/6998/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19iYXJ0LnB5) | `93.93% <0.00%> (-0.51%)` | :arrow_down: |
| [src/transformers/generation\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/6998/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9nZW5lcmF0aW9uX3V0aWxzLnB5) | `96.92% <0.00%> (-0.28%)` | :arrow_down: |
| [src/transformers/tokenization\_utils\_base.py](https://codecov.io/gh/huggingface/transformers/pull/6998/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fdXRpbHNfYmFzZS5weQ==) | `93.77% <0.00%> (-0.28%)` | :arrow_down: |
| [src/transformers/file\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/6998/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9maWxlX3V0aWxzLnB5) | `82.41% <0.00%> (-0.26%)` | :arrow_down: |
| [src/transformers/modeling\_tf\_xlm.py](https://codecov.io/gh/huggingface/transformers/pull/6998/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl94bG0ucHk=) | `93.26% <0.00%> (+4.84%)` | :arrow_up: |
| ... and [2 more](https://codecov.io/gh/huggingface/transformers/pull/6998/diff?src=pr&el=tree-more) | |
------
[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/6998?src=pr&el=continue).
> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)
> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data`
> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/6998?src=pr&el=footer). Last update [0054a48...b5c5bdc](https://codecov.io/gh/huggingface/transformers/pull/6998?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).
<|||||>>
>
> Thanks! It still needs some refactoring but overall it looks ok. Can you also try the example scripts for sequence classification and multiple choice to see if there is a drop in performance or not, and make sure it does not introduce any inconsistency.
>
> Optionally, if you have the possibility to run the example scripts on a multi-gpu environment and check the same thing, it would be appreciated; otherwise, no worries, I will do it before merging :)
I will try to (and learn to) run the other scripts later today, but I can only run on a single-gpu environment.<|||||>No problem I can do it in multiple gpu env.<|||||>The trainer will still be task-agnostic, the goal is just to add a new parameter to the training_step function (or possibly a class field) to handle the value that will be used to compute the scaled loss instead. It should work for all the tasks.
This computation cannot be done in the loss, because the loss computation is done over a per replica batch size and not over the global batch size.
I'm not in favor of having different trainers. I don't mind having a few differences between the two trainers as long as the external usage is the same.<|||||>I don't understand your last comment. Having users subclass Trainer for specific behavior is what is indicated in the documentation. We cannot offer everything any user can think of in the training loop, so this is the way of customizing one. There is a `Seq2SeqTrainer` in preparation on the PyTorch side for instance, for code that is relevant to this only.
Opening the door to have task-specific components in the main tf_trainer file will make it unreadable in a few months, when every user will have added their own, and then users that rely on a custom Trainers won't use that class anymore, because they won't understand it.
Tagging @julien-c for his advice.<|||||>@sgugger , in fact, I also suspect that the pytorch trainer, while working with token level tasks, has some inaccurate loss computation - when we have distributed training and/or gradient accumulation. For example, [in DistilBertForTokenClassification](https://github.com/huggingface/transformers/blob/master/src/transformers/modeling_distilbert.py#L820), we use
loss_fct = CrossEntropyLoss()
and
loss = loss_fct(active_logits, active_labels)
[CrossEntropyLoss](https://pytorch.org/docs/stable/generated/torch.nn.CrossEntropyLoss.html#crossentropyloss) has default reduction `mean`, so basically we compute the averaged loss over the tokens (label != -100) on each single batch. Then we accumulate it over `gradient_accumulation_steps` steps, and finally average again by dividing by `gradient_accumulation_steps`, see [here](https://github.com/huggingface/transformers/blob/master/src/transformers/trainer.py#L1041).
However, this is no longer the same as the `per example losses over the global batch divided by the number of tokens on that global batch`. In practice, it may not harm the training, but theoretically it is not exactly what gradient accumulation is meant to compute.
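A tiny numeric illustration of that difference (made-up numbers):
```python
# two accumulation micro-batches with different numbers of active tokens
losses_1 = [1.0, 1.0]        # 2 active tokens
losses_2 = [3.0] * 8         # 8 active tokens

mean_of_means = (sum(losses_1) / 2 + sum(losses_2) / 8) / 2   # 2.0
global_mean = (sum(losses_1) + sum(losses_2)) / (2 + 8)       # 2.6
print(mean_of_means, global_mean)  # the two averaging schemes disagree
```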
But this should be discussed in another thread, not here.<|||||>> Opening the door to have task-specific components in the main tf_trainer file will make it unreadable in a few months, when every user will have added their own, and then users that rely on a custom Trainers won't use that class anymore, because they won't understand it.
The current draft code requires a task to compute the number of `instances` in each `example`. For token level tasks, it will be the number of tokens with labels != -100. For sentence level tasks, in general, it will just be the number of sentences. The tf trainer will just use this information for further calculation, but it doesn't need to know what the tasks are.<|||||>@jplu , this is not a finalized version. However, I want to have some feedback from you about whether this approach is OK. Thanks.<|||||>For the sake of simplicity, I suspect this would be best left as a user-space extension/custom loss (via a subclass of TFTrainer for instance)
If needed we can add/improve extension points in TFTrainer to make it easier.
What do you think?<|||||>@julien-c
My thoughts are:
- Hugging Face's `transformers` has `run_tf_ner.py`, which is assumed to give correct training/evaluation losses
- (maybe I misunderstand the roles of scripts in examples dir?)
- If they are meant to be correct, and let's say the right way for token level tasks is to count the number of tokens (not ignored), then the current script won't give the correct results. In this case, the majority of users who use it won't know they are supposed to subclass TFTrainer class
- Even if they are aware of the necessity of counting tokens rather than examples, in TF it is not easy to get right if it has to work correctly with a distributed strategy and/or gradient accumulation. It is not just about modifying the loss calculation - the token counting has to be done on the global batch, not on a batch already distributed to a single replica. Then, in a single replica, the per example losses (on that small batch) have to be divided by the number of tokens computed on the big batch before it was distributed.
- If the scripts in examples dir have only the purpose of demonstration of the library's usage - leaving users to customize trainers is fine, and adding/improving extension points seems good to me (although I don't know what it looks like for now). Maybe it is also good to have some warning and a brief tutorial to let users know how to do things.
Please let me know the team's decision about this issue (how to continue or if to close it). Thanks.<|||||>@chiapas Now it looks great! I really like it. Did you try it in context of single and multi-replica?
@julien-c @sgugger I think there is a misunderstanding here: this PR is not to add a new feature or any refactoring, this is a **bugfix** in the loss computation, which means that anybody currently using the current version of the trainer gets a wrong loss value for token classification tasks. And as @chiapas said, the way it is computed in the PyTorch trainer might need a fix as well. There are only a few changes and they do not impact readability for TF users.<|||||>>
>
> @chiapas Now it looks great! I really like it. Did you try it in context of single and multi-replica?
@jplu Thanks. I haven't tried testing yet. I preferred to have your feedback about this new way of fix before finalizing it and testing. By the way, for multi-replica, I can only run on Kaggle or Google colab. I will let you know once the test is done.
<|||||>This is ok no worries, with TPU is fine as well 👍 <|||||>@chiapas As far as I can say this PR should also fix the issue #6969 right? As we compute the total number of example at every step.<|||||>>
>
> @chiapas As far as I can say this PR should also fix the issue #6969 right? As we compute the total number of example at every step.
Yes. However, after I opened up that issue and work on this PR, I found that we have
ds = (
    self.train_dataset.repeat()
    .shuffle(self.num_train_examples, seed=self.args.seed)
    .batch(self.total_train_batch_size, drop_remainder=self.args.dataloader_drop_last)
    .prefetch(tf.data.experimental.AUTOTUNE)
)
Since the dataset is repeated first, it has no ending, and the drop_remainder has no actual effect (other than setting the batch shape from None to a fixed number along the batch dimension), so issue #6969 will never occur. However, in this case, `args.dataloader_drop_last` is somehow confusing.<|||||>Indeed, having the `repeat` has the advantage to avoid the potential last partial batch in each epoch, so users don't need to think about scaling the gradients based on the actual batch size and makes the `dataloader_drop_last` useless.<|||||>>
>
> Indeed, having the `repeat` has the advantage to avoid the potential last partial batch in each epoch, so users don't need to think about scaling the gradients based on the actual batch size and makes the `dataloader_drop_last` useless.
Yes, I am ok with this. BTW, `dataloader_drop_last` might still have an effect - if `True`, the batch dimension will be fixed in the compiled graph. If it is set to `False`, even if we repeat the dataset first, the batch dimension will still be `None`. In this case, while working with TPU with the way we do gradient accumulation in trainer_tf.py, we will get an error message that complains TPU can't handle the slice or shape - I don't remember the exact message, but I can reproduce one quickly.<|||||>The default value of `drop_reminder` is False, which results in an unknown batch size because the last batch may not be full; this is exactly why `drop_reminder` on TPU has to be set to `True` if no repeat is applied, otherwise we can leave it as False.
Small example:
```python
>>> dataset = tf.data.Dataset.range(100)
>>> dataset.batch(4)
<BatchDataset shapes: (None,), types: tf.int64>
>>> dataset = tf.data.Dataset.range(100)
>>> dataset.batch(4, drop_remainder=True)
<BatchDataset shapes: (4,), types: tf.int64><|||||>>
>
> The default value of `drop_reminder` is False, which result in an unknown batch size because the last batch may not be full, this is exactly why `drop_reminder` on TPU has to be set to `True` if no repeat is applied otherwise we can let it to False.
otherwise we can let it to False
What I am saying is: we can't set it to `False` even if repeat is applied in `trainer_tf.py`. In general, if `repeat` is used, we don't have to drop. But due to the gradient accumulation implementation, if TPU is used and we set `drop_remainder=False`, even if `repeat` is applied, we will still get
<PrefetchDataset shapes: ((None, 512, 512, 3), (None,)), types: (tf.float32, tf.int32)>
NotFoundError: 3 root error(s) found.
(0) Not found: {{function_node __inference_train_step_1_epoch_192920}} No proto found for key <<NO PROGRAM AS COMPILATION FAILED>>
[[{{node TPUVariableReshard/reshard/_16819633198340046116/_31}}]]
(1) Not found: {{function_node __inference_train_step_1_epoch_192920}} No proto found for key <<NO PROGRAM AS COMPILATION FAILED>>
[[{{node TPUVariableReshard/reshard/_17949385379616849075/_19}}]]
(2) Unimplemented: {{function_node __inference_train_step_1_epoch_192920}} Compilation failure: Dynamic input dimension to reshape that is both splitted and combined is not supported: output: f32[0,512,512,3], input: f32[<=0,512,512,3], input_dim: 0
[[{{node strided_slice_2}}]]
[[while/body/_1/while]]
TPU compilation failed
[[tpu_compile_succeeded_assert/_17625000101377989734/_5]]
0 successful operations.
6 derived errors ignored.
You can test it if you want on
https://www.kaggle.com/yihdarshieh/tpu-gradient-accumulation?scriptVersionId=41324202
by changing one line in
def get_training_dataset(batch_size):<|||||>Hummm nice catch, I haven't tested this case with gradient accumulation, thanks!<|||||>@chiapas for me it looks ok, do you want to add anything else? If no can you switch the PR to open, in order to be able for us to merge it.<|||||>>
>
> @chiapas for me it looks ok, do you want to add anything else? If no can you switch the PR to open, in order to be able for us to merge it.
@jplu , I haven't done any test yet - I pushed the code immediately after writting it (in order to have your feedback), so from my side, I feel more comfortable to check a few things before merged to master. Unless you have done some testings and are eager to merge, maybe wait a bit please? I think tomorrow at some point would be ready. <|||||>I have tested it on single and 4 GPU training with the example script for NER and was ok, but take the time you need the more we can test the better. I will be happy to know how it works on TPU as well, even through a colab.<|||||>@jplu I had to fix bugs, and now I have a working version - I checked some intermediate values to make sure the calculation of number of `instances` is correct and sent to replica(s).
Due to the bugs I found - I think there might be a chance that your test yesterday didn't use my code - probably forgot to uninstall the original `transformers` and reinstall `transformers` that is based on my version? I have this doubt because those bugs would throw errors, and the training wouldn't be successful. Sorry about this. If possible, could you test it again with the latest version on multiple GPU env.? Thanks.
Otherwise, I tested with CPU and 1 GPU on colab. It works fine. You can check here
https://colab.research.google.com/drive/148whpTObbF53qU_ec0bVkyVWUsLlwppn?usp=sharing
However, using TPU with tf 2.2 or 2.3, I had different errors, for which I think irrelevant to this PR code See below for the error messages. We can probably open a bug report and fix it later.
Also, the CI is not green because of some problems from the master branch. From my side, the code is ready to be merged (if you can test on multiple GPU again would be better). Thanks.
Traceback (most recent call last):
File "run_tf_ner.py", line 299, in <module>
main()
File "run_tf_ner.py", line 128, in main
training_args.n_replicas,
File "/content/transformers/examples/token-classification/transformers/src/transformers/file_utils.py", line 926, in wrapper
return func(*args, **kwargs)
File "/content/transformers/examples/token-classification/transformers/src/transformers/training_args_tf.py", line 161, in n_replicas
return self._setup_strategy.num_replicas_in_sync
File "/content/transformers/examples/token-classification/transformers/src/transformers/file_utils.py", line 904, in __get__
cached = self.fget(obj)
File "/content/transformers/examples/token-classification/transformers/src/transformers/file_utils.py", line 926, in wrapper
return func(*args, **kwargs)
File "/content/transformers/examples/token-classification/transformers/src/transformers/training_args_tf.py", line 132, in _setup_strategy
tf.tpu.experimental.initialize_tpu_system(tpu)
File "/usr/local/lib/python3.6/dist-packages/tensorflow/python/tpu/tpu_strategy_util.py", line 103, in initialize_tpu_system
serialized_topology = output.numpy()
File "/usr/local/lib/python3.6/dist-packages/tensorflow/python/framework/ops.py", line 961, in numpy
maybe_arr = self._numpy() # pylint: disable=protected-access
File "/usr/local/lib/python3.6/dist-packages/tensorflow/python/framework/ops.py", line 929, in _numpy
six.raise_from(core._status_to_exception(e.code, e.message), None)
File "<string>", line 3, in raise_from
tensorflow.python.framework.errors_impl.InvalidArgumentError: NodeDef expected inputs 'string' do not match 0 inputs specified; Op<name=_Send; signature=tensor:T -> ; attr=T:type; attr=tensor_name:string; attr=send_device:string ..........
and TPU with tf 2.3 gives different error
Traceback (most recent call last):
File "run_tf_ner.py", line 299, in <module>
main()
File "run_tf_ner.py", line 230, in main
trainer.train()
File "/content/transformers/src/transformers/trainer_tf.py", line 474, in train
train_ds = self.get_train_tfdataset()
File "/content/transformers/src/transformers/trainer_tf.py", line 137, in get_train_tfdataset
self.num_train_examples = tf.data.experimental.cardinality(self.train_dataset).numpy()
File "/usr/local/lib/python3.6/dist-packages/tensorflow/python/framework/ops.py", line 1063, in numpy
maybe_arr = self._numpy() # pylint: disable=protected-access
File "/usr/local/lib/python3.6/dist-packages/tensorflow/python/framework/ops.py", line 1031, in _numpy
six.raise_from(core._status_to_exception(e.code, e.message), None) # pylint: disable=protected-access
File "<string>", line 3, in raise_from
tensorflow.python.framework.errors_impl.UnimplementedError: File system scheme '[local]' not implemented (file: 'runs/Sep11_12-45-28_1d2f5ee8ee35')
Encountered when executing an operation using EagerExecutor. This error cancels all future operations and poisons their output tensors.
Error in atexit._run_exitfuncs:
Traceback (most recent call last):
File "/usr/local/lib/python3.6/dist-packages/tensorflow/python/distribute/tpu_strategy.py", line 540, in async_wait
context.async_wait()
File "/usr/local/lib/python3.6/dist-packages/tensorflow/python/eager/context.py", line 2319, in async_wait
context().sync_executors()
File "/usr/local/lib/python3.6/dist-packages/tensorflow/python/eager/context.py", line 658, in sync_executors
pywrap_tfe.TFE_ContextSyncExecutors(self._context_handle)
tensorflow.python.framework.errors_impl.UnimplementedError: File system scheme '[local]' not implemented (file: 'runs/Sep11_12-45-28_1d2f5ee8ee35')
Encountered when executing an operation using EagerExecutor. This error cancels all future operations and poisons their output tensors.
2020-09-11 12:46:28.864320: W ./tensorflow/core/distributed_runtime/eager/destroy_tensor_handle_node.h:57] Ignoring an error encountered when deleting remote tensors handles: Invalid argument: Unable to find the relevant tensor remote_handle: Op ID: 9416, Output num: 0
Additional GRPC error information from remote target /job:worker/replica:0/task:0:
:{"created":"@1599828388.860894650","description":"Error received from peer ipv4:10.42.193.26:8470","file":"external/com_github_grpc_grpc/src/core/lib/surface/call.cc","file_line":1056,"grpc_message":"Unable to find the relevant tensor remote_handle: Op ID: 9416, Output num: 0","grpc_status":3}<|||||>>Due to the bugs I found - I think there might be a chance that your test yesterday didn't use my code - probably forgot to uninstall the original transformers and reinstall transformers that is based on my version? I have this doubt because those bugs would throw errors, and the training wouldn't be successful. Sorry about this. If possible, could you test it again with the latest version on multiple GPU env.? Thanks.
Ah! Might be possible I forgot to install your version. I will re-test it, to be sure.
> However, using TPU with tf 2.2 or 2.3, I had different errors, for which I think irrelevant to this PR code See below for the error messages. We can probably open a bug report and fix it later.
The second error you get means that you cannot load data from localhost, the files have to be hosted on a GCS to make it works.
> Also, the CI is not green because of some problems from the master branch.
Can you try to rebase on the current master and see if the CI error still occurs?
> From my side, the code is ready to be merged (if you can test on multiple GPU again would be better). Thanks.
I will test that ASAP today and will let you know.<|||||>I have been able to run a NER task over 4 GPUs. Without gradient accumulation:
```
***** Running training *****
Num examples = 24000
Num Epochs = 3
Instantaneous batch size per device = 32
Total train batch size (w. parallel, distributed & accumulation) = 128
Gradient Accumulation steps = 1
Steps per epoch = 188
Total optimization steps = 564
{'loss': 18.6387, 'learning_rate': 4.9113474e-05, 'epoch': 0.05851063829787234, 'step': 10}
{'loss': 14.222743, 'learning_rate': 4.822695e-05, 'epoch': 0.11170212765957446, 'step': 20}
{'loss': 12.079448, 'learning_rate': 4.734042e-05, 'epoch': 0.16489361702127658, 'step': 30}
{'loss': 10.455857, 'learning_rate': 4.6453897e-05, 'epoch': 0.21808510638297873, 'step': 40}
{'loss': 9.295527, 'learning_rate': 4.5567373e-05, 'epoch': 0.2712765957446808, 'step': 50}
{'loss': 8.374706, 'learning_rate': 4.468085e-05, 'epoch': 0.324468085106383, 'step': 60}
{'loss': 7.674021, 'learning_rate': 4.379432e-05, 'epoch': 0.3776595744680851, 'step': 70}
{'loss': 7.106626, 'learning_rate': 4.29078e-05, 'epoch': 0.4308510638297872, 'step': 80}
{'loss': 6.6170573, 'learning_rate': 4.202128e-05, 'epoch': 0.48404255319148937, 'step': 90}
{'loss': 6.226298, 'learning_rate': 4.113475e-05, 'epoch': 0.5372340425531915, 'step': 100}
{'loss': 5.859087, 'learning_rate': 4.0248226e-05, 'epoch': 0.5904255319148937, 'step': 110}
{'loss': 5.560567, 'learning_rate': 3.93617e-05, 'epoch': 0.6436170212765957, 'step': 120}
{'loss': 5.2810636, 'learning_rate': 3.8475173e-05, 'epoch': 0.6968085106382979, 'step': 130}
{'loss': 5.040142, 'learning_rate': 3.758865e-05, 'epoch': 0.75, 'step': 140}
{'loss': 4.830164, 'learning_rate': 3.6702124e-05, 'epoch': 0.8031914893617021, 'step': 150}
{'loss': 4.6353145, 'learning_rate': 3.5815603e-05, 'epoch': 0.8563829787234043, 'step': 160}
{'loss': 4.446635, 'learning_rate': 3.492908e-05, 'epoch': 0.9095744680851063, 'step': 170}
{'loss': 4.300565, 'learning_rate': 3.4042554e-05, 'epoch': 0.9627659574468085, 'step': 180}
2020-09-11 23:08:59.187632: I tensorflow/core/kernels/data/shuffle_dataset_op.cc:172] Filling up shuffle buffer (this may take a while): 23119 of 24000
2020-09-11 23:08:59.564647: I tensorflow/core/kernels/data/shuffle_dataset_op.cc:221] Shuffle buffer filled.
{'loss': 1.8294506, 'learning_rate': 3.3156026e-05, 'epoch': 1.0106382978723405, 'step': 190}
{'loss': 1.576384, 'learning_rate': 3.22695e-05, 'epoch': 1.0638297872340425, 'step': 200}
{'loss': 1.4572238, 'learning_rate': 3.1382977e-05, 'epoch': 1.1170212765957448, 'step': 210}
{'loss': 1.4322406, 'learning_rate': 3.0496454e-05, 'epoch': 1.1702127659574468, 'step': 220}
{'loss': 1.3880556, 'learning_rate': 2.960993e-05, 'epoch': 1.2234042553191489, 'step': 230}
{'loss': 1.3613675, 'learning_rate': 2.8723403e-05, 'epoch': 1.2765957446808511, 'step': 240}
{'loss': 1.3514798, 'learning_rate': 2.7836879e-05, 'epoch': 1.3297872340425532, 'step': 250}
{'loss': 1.3266419, 'learning_rate': 2.6950353e-05, 'epoch': 1.3829787234042552, 'step': 260}
{'loss': 1.3012911, 'learning_rate': 2.606383e-05, 'epoch': 1.4361702127659575, 'step': 270}
{'loss': 1.2993147, 'learning_rate': 2.5177305e-05, 'epoch': 1.4893617021276595, 'step': 280}
{'loss': 1.2913059, 'learning_rate': 2.4290777e-05, 'epoch': 1.5425531914893615, 'step': 290}
{'loss': 1.2822802, 'learning_rate': 2.3404255e-05, 'epoch': 1.5957446808510638, 'step': 300}
{'loss': 1.2839314, 'learning_rate': 2.2517732e-05, 'epoch': 1.648936170212766, 'step': 310}
{'loss': 1.2641081, 'learning_rate': 2.1631204e-05, 'epoch': 1.702127659574468, 'step': 320}
{'loss': 1.2524884, 'learning_rate': 2.0744681e-05, 'epoch': 1.7553191489361701, 'step': 330}
{'loss': 1.2450953, 'learning_rate': 1.9858155e-05, 'epoch': 1.8085106382978724, 'step': 340}
{'loss': 1.2448001, 'learning_rate': 1.897163e-05, 'epoch': 1.8617021276595744, 'step': 350}
{'loss': 1.2407304, 'learning_rate': 1.8085108e-05, 'epoch': 1.9148936170212765, 'step': 360}
{'loss': 1.2282307, 'learning_rate': 1.719858e-05, 'epoch': 1.9680851063829787, 'step': 370}
2020-09-11 23:14:15.677977: I tensorflow/core/kernels/data/shuffle_dataset_op.cc:172] Filling up shuffle buffer (this may take a while): 23228 of 24000
2020-09-11 23:14:16.010139: I tensorflow/core/kernels/data/shuffle_dataset_op.cc:221] Shuffle buffer filled.
{'loss': 0.85539556, 'learning_rate': 1.6312057e-05, 'epoch': 2.021276595744681, 'step': 380}
{'loss': 0.94433624, 'learning_rate': 1.542553e-05, 'epoch': 2.074468085106383, 'step': 390}
{'loss': 0.93659353, 'learning_rate': 1.4539006e-05, 'epoch': 2.127659574468085, 'step': 400}
{'loss': 0.8978215, 'learning_rate': 1.365248e-05, 'epoch': 2.1808510638297873, 'step': 410}
{'loss': 0.9126247, 'learning_rate': 1.27659605e-05, 'epoch': 2.2340425531914896, 'step': 420}
{'loss': 0.9438525, 'learning_rate': 1.1879432e-05, 'epoch': 2.2872340425531914, 'step': 430}
{'loss': 0.9655043, 'learning_rate': 1.0992908e-05, 'epoch': 2.3404255319148937, 'step': 440}
{'loss': 0.9694119, 'learning_rate': 1.0106382e-05, 'epoch': 2.393617021276596, 'step': 450}
{'loss': 0.95613927, 'learning_rate': 9.219858e-06, 'epoch': 2.4468085106382977, 'step': 460}
{'loss': 0.9483009, 'learning_rate': 8.333333e-06, 'epoch': 2.5, 'step': 470}
{'loss': 0.93453395, 'learning_rate': 7.4468076e-06, 'epoch': 2.5531914893617023, 'step': 480}
{'loss': 0.92573655, 'learning_rate': 6.5602835e-06, 'epoch': 2.6063829787234045, 'step': 490}
{'loss': 0.9156919, 'learning_rate': 5.67376e-06, 'epoch': 2.6595744680851063, 'step': 500}
{'loss': 0.9098517, 'learning_rate': 4.787233e-06, 'epoch': 2.7127659574468086, 'step': 510}
{'loss': 0.9110744, 'learning_rate': 3.9007095e-06, 'epoch': 2.7659574468085104, 'step': 520}
{'loss': 0.89820355, 'learning_rate': 3.014183e-06, 'epoch': 2.8191489361702127, 'step': 530}
{'loss': 0.8982555, 'learning_rate': 2.1276592e-06, 'epoch': 2.872340425531915, 'step': 540}
{'loss': 0.8997822, 'learning_rate': 1.2411356e-06, 'epoch': 2.925531914893617, 'step': 550}
{'loss': 0.89611685, 'learning_rate': 3.5460886e-07, 'epoch': 2.978723404255319, 'step': 560}
Training took: 0:17:32.846350
Saving model in /home/jplu/model
Configuration saved in /home/jplu/model/config.json
Model weights saved in /home/jplu/model/tf_model.h5
***** Running Evaluation *****
Num examples = 2200
Batch size = 32
{'eval_loss': 1.4825085626132246, 'eval_precision': 0.8298914945747288, 'eval_recall': 0.8713708195516354, 'eval_f1': 0.8501254930082466, 'epoch': 3.0, 'step': 564}
```
With gradient accumulation:
```
***** Running training *****
Num examples = 24000
Num Epochs = 3
Instantaneous batch size per device = 32
Total train batch size (w. parallel, distributed & accumulation) = 256
Gradient Accumulation steps = 2
Steps per epoch = 94
Total optimization steps = 282
{'loss': 17.029345, 'learning_rate': 4.822695e-05, 'epoch': 0.11702127659574468, 'step': 10}
{'loss': 13.583568, 'learning_rate': 4.6453897e-05, 'epoch': 0.22340425531914893, 'step': 20}
{'loss': 11.413903, 'learning_rate': 4.468085e-05, 'epoch': 0.32978723404255317, 'step': 30}
{'loss': 9.977048, 'learning_rate': 4.29078e-05, 'epoch': 0.43617021276595747, 'step': 40}
{'loss': 8.904137, 'learning_rate': 4.113475e-05, 'epoch': 0.5425531914893617, 'step': 50}
{'loss': 8.056796, 'learning_rate': 3.93617e-05, 'epoch': 0.648936170212766, 'step': 60}
{'loss': 7.339738, 'learning_rate': 3.758865e-05, 'epoch': 0.7553191489361702, 'step': 70}
{'loss': 6.7678766, 'learning_rate': 3.5815603e-05, 'epoch': 0.8617021276595744, 'step': 80}
{'loss': 6.2809086, 'learning_rate': 3.4042554e-05, 'epoch': 0.9680851063829787, 'step': 90}
2020-09-11 23:36:37.142279: I tensorflow/core/kernels/data/shuffle_dataset_op.cc:172] Filling up shuffle buffer (this may take a while): 23596 of 24000
2020-09-11 23:36:37.311979: I tensorflow/core/kernels/data/shuffle_dataset_op.cc:221] Shuffle buffer filled.
{'loss': 2.2556372, 'learning_rate': 3.22695e-05, 'epoch': 1.0638297872340425, 'step': 100}
{'loss': 2.0573573, 'learning_rate': 3.0496454e-05, 'epoch': 1.1702127659574468, 'step': 110}
{'loss': 1.9544038, 'learning_rate': 2.8723403e-05, 'epoch': 1.2765957446808511, 'step': 120}
{'loss': 1.8848253, 'learning_rate': 2.6950353e-05, 'epoch': 1.3829787234042552, 'step': 130}
{'loss': 1.837054, 'learning_rate': 2.5177305e-05, 'epoch': 1.4893617021276595, 'step': 140}
{'loss': 1.7885295, 'learning_rate': 2.3404255e-05, 'epoch': 1.5957446808510638, 'step': 150}
{'loss': 1.7535, 'learning_rate': 2.1631204e-05, 'epoch': 1.702127659574468, 'step': 160}
{'loss': 1.7068337, 'learning_rate': 1.9858155e-05, 'epoch': 1.8085106382978724, 'step': 170}
{'loss': 1.6874169, 'learning_rate': 1.8085108e-05, 'epoch': 1.9148936170212765, 'step': 180}
2020-09-11 23:41:05.123380: I tensorflow/core/kernels/data/shuffle_dataset_op.cc:172] Filling up shuffle buffer (this may take a while): 23596 of 24000
2020-09-11 23:36:05.332080: I tensorflow/core/kernels/data/shuffle_dataset_op.cc:221] Shuffle buffer filled.
{'loss': 1.189175, 'learning_rate': 1.6312057e-05, 'epoch': 2.021276595744681, 'step': 190}
{'loss': 1.2820004, 'learning_rate': 1.4539006e-05, 'epoch': 2.127659574468085, 'step': 200}
{'loss': 1.2558761, 'learning_rate': 1.27659605e-05, 'epoch': 2.2340425531914896, 'step': 210}
{'loss': 1.2865627, 'learning_rate': 1.0992908e-05, 'epoch': 2.3404255319148937, 'step': 220}
{'loss': 1.2853887, 'learning_rate': 9.219858e-06, 'epoch': 2.4468085106382977, 'step': 230}
{'loss': 1.2650005, 'learning_rate': 7.4468076e-06, 'epoch': 2.5531914893617023, 'step': 240}
{'loss': 1.2414478, 'learning_rate': 5.67376e-06, 'epoch': 2.6595744680851063, 'step': 250}
{'loss': 1.243986, 'learning_rate': 3.9007095e-06, 'epoch': 2.7659574468085104, 'step': 260}
{'loss': 1.2295542, 'learning_rate': 2.1276592e-06, 'epoch': 2.872340425531915, 'step': 270}
{'loss': 1.2296557, 'learning_rate': 3.5460886e-07, 'epoch': 2.978723404255319, 'step': 280}
Training took: 0:19:10.070496
Saving model in /home/jplu/model
Configuration saved in /home/jplu/model/config.json
Model weights saved in /home/jplu/model/tf_model.h5
***** Running Evaluation *****
Num examples = 2200
Batch size = 32
{'eval_loss': 1.5663623533387114, 'eval_precision': 0.8107354478912513, 'eval_recall': 0.8548327820654171, 'eval_f1': 0.8322003577817532, 'epoch': 3.0, 'step': 282}
```
Looks ok to me. I have also tested for text classification and question answering without any error.
<|||||>>
>
> I have been able to run a NER task over 4 GPUs. Without gradient accumulation:
>
> ```
> ***** Running training *****
> Num examples = 24000
> Num Epochs = 3
> Instantaneous batch size per device = 32
> Total train batch size (w. parallel, distributed & accumulation) = 128
> Gradient Accumulation steps = 1
> Steps per epoch = 188
> Total optimization steps = 564
> {'loss': 18.6387, 'learning_rate': 4.9113474e-05, 'epoch': 0.05851063829787234, 'step': 10}
> {'loss': 14.222743, 'learning_rate': 4.822695e-05, 'epoch': 0.11170212765957446, 'step': 20}
> {'loss': 12.079448, 'learning_rate': 4.734042e-05, 'epoch': 0.16489361702127658, 'step': 30}
> {'loss': 10.455857, 'learning_rate': 4.6453897e-05, 'epoch': 0.21808510638297873, 'step': 40}
> {'loss': 9.295527, 'learning_rate': 4.5567373e-05, 'epoch': 0.2712765957446808, 'step': 50}
> {'loss': 8.374706, 'learning_rate': 4.468085e-05, 'epoch': 0.324468085106383, 'step': 60}
> {'loss': 7.674021, 'learning_rate': 4.379432e-05, 'epoch': 0.3776595744680851, 'step': 70}
> {'loss': 7.106626, 'learning_rate': 4.29078e-05, 'epoch': 0.4308510638297872, 'step': 80}
> {'loss': 6.6170573, 'learning_rate': 4.202128e-05, 'epoch': 0.48404255319148937, 'step': 90}
> {'loss': 6.226298, 'learning_rate': 4.113475e-05, 'epoch': 0.5372340425531915, 'step': 100}
> {'loss': 5.859087, 'learning_rate': 4.0248226e-05, 'epoch': 0.5904255319148937, 'step': 110}
> {'loss': 5.560567, 'learning_rate': 3.93617e-05, 'epoch': 0.6436170212765957, 'step': 120}
> {'loss': 5.2810636, 'learning_rate': 3.8475173e-05, 'epoch': 0.6968085106382979, 'step': 130}
> {'loss': 5.040142, 'learning_rate': 3.758865e-05, 'epoch': 0.75, 'step': 140}
> {'loss': 4.830164, 'learning_rate': 3.6702124e-05, 'epoch': 0.8031914893617021, 'step': 150}
> {'loss': 4.6353145, 'learning_rate': 3.5815603e-05, 'epoch': 0.8563829787234043, 'step': 160}
> {'loss': 4.446635, 'learning_rate': 3.492908e-05, 'epoch': 0.9095744680851063, 'step': 170}
> {'loss': 4.300565, 'learning_rate': 3.4042554e-05, 'epoch': 0.9627659574468085, 'step': 180}
> 2020-09-11 23:08:59.187632: I tensorflow/core/kernels/data/shuffle_dataset_op.cc:172] Filling up shuffle buffer (this may take a while): 23119 of 24000
> 2020-09-11 23:08:59.564647: I tensorflow/core/kernels/data/shuffle_dataset_op.cc:221] Shuffle buffer filled.
> {'loss': 1.8294506, 'learning_rate': 3.3156026e-05, 'epoch': 1.0106382978723405, 'step': 190}
> {'loss': 1.576384, 'learning_rate': 3.22695e-05, 'epoch': 1.0638297872340425, 'step': 200}
> {'loss': 1.4572238, 'learning_rate': 3.1382977e-05, 'epoch': 1.1170212765957448, 'step': 210}
> {'loss': 1.4322406, 'learning_rate': 3.0496454e-05, 'epoch': 1.1702127659574468, 'step': 220}
> {'loss': 1.3880556, 'learning_rate': 2.960993e-05, 'epoch': 1.2234042553191489, 'step': 230}
> {'loss': 1.3613675, 'learning_rate': 2.8723403e-05, 'epoch': 1.2765957446808511, 'step': 240}
> {'loss': 1.3514798, 'learning_rate': 2.7836879e-05, 'epoch': 1.3297872340425532, 'step': 250}
> {'loss': 1.3266419, 'learning_rate': 2.6950353e-05, 'epoch': 1.3829787234042552, 'step': 260}
> {'loss': 1.3012911, 'learning_rate': 2.606383e-05, 'epoch': 1.4361702127659575, 'step': 270}
> {'loss': 1.2993147, 'learning_rate': 2.5177305e-05, 'epoch': 1.4893617021276595, 'step': 280}
> {'loss': 1.2913059, 'learning_rate': 2.4290777e-05, 'epoch': 1.5425531914893615, 'step': 290}
> {'loss': 1.2822802, 'learning_rate': 2.3404255e-05, 'epoch': 1.5957446808510638, 'step': 300}
> {'loss': 1.2839314, 'learning_rate': 2.2517732e-05, 'epoch': 1.648936170212766, 'step': 310}
> {'loss': 1.2641081, 'learning_rate': 2.1631204e-05, 'epoch': 1.702127659574468, 'step': 320}
> {'loss': 1.2524884, 'learning_rate': 2.0744681e-05, 'epoch': 1.7553191489361701, 'step': 330}
> {'loss': 1.2450953, 'learning_rate': 1.9858155e-05, 'epoch': 1.8085106382978724, 'step': 340}
> {'loss': 1.2448001, 'learning_rate': 1.897163e-05, 'epoch': 1.8617021276595744, 'step': 350}
> {'loss': 1.2407304, 'learning_rate': 1.8085108e-05, 'epoch': 1.9148936170212765, 'step': 360}
> {'loss': 1.2282307, 'learning_rate': 1.719858e-05, 'epoch': 1.9680851063829787, 'step': 370}
> 2020-09-11 23:14:15.677977: I tensorflow/core/kernels/data/shuffle_dataset_op.cc:172] Filling up shuffle buffer (this may take a while): 23228 of 24000
> 2020-09-11 23:14:16.010139: I tensorflow/core/kernels/data/shuffle_dataset_op.cc:221] Shuffle buffer filled.
> {'loss': 0.85539556, 'learning_rate': 1.6312057e-05, 'epoch': 2.021276595744681, 'step': 380}
> {'loss': 0.94433624, 'learning_rate': 1.542553e-05, 'epoch': 2.074468085106383, 'step': 390}
> {'loss': 0.93659353, 'learning_rate': 1.4539006e-05, 'epoch': 2.127659574468085, 'step': 400}
> {'loss': 0.8978215, 'learning_rate': 1.365248e-05, 'epoch': 2.1808510638297873, 'step': 410}
> {'loss': 0.9126247, 'learning_rate': 1.27659605e-05, 'epoch': 2.2340425531914896, 'step': 420}
> {'loss': 0.9438525, 'learning_rate': 1.1879432e-05, 'epoch': 2.2872340425531914, 'step': 430}
> {'loss': 0.9655043, 'learning_rate': 1.0992908e-05, 'epoch': 2.3404255319148937, 'step': 440}
> {'loss': 0.9694119, 'learning_rate': 1.0106382e-05, 'epoch': 2.393617021276596, 'step': 450}
> {'loss': 0.95613927, 'learning_rate': 9.219858e-06, 'epoch': 2.4468085106382977, 'step': 460}
> {'loss': 0.9483009, 'learning_rate': 8.333333e-06, 'epoch': 2.5, 'step': 470}
> {'loss': 0.93453395, 'learning_rate': 7.4468076e-06, 'epoch': 2.5531914893617023, 'step': 480}
> {'loss': 0.92573655, 'learning_rate': 6.5602835e-06, 'epoch': 2.6063829787234045, 'step': 490}
> {'loss': 0.9156919, 'learning_rate': 5.67376e-06, 'epoch': 2.6595744680851063, 'step': 500}
> {'loss': 0.9098517, 'learning_rate': 4.787233e-06, 'epoch': 2.7127659574468086, 'step': 510}
> {'loss': 0.9110744, 'learning_rate': 3.9007095e-06, 'epoch': 2.7659574468085104, 'step': 520}
> {'loss': 0.89820355, 'learning_rate': 3.014183e-06, 'epoch': 2.8191489361702127, 'step': 530}
> {'loss': 0.8982555, 'learning_rate': 2.1276592e-06, 'epoch': 2.872340425531915, 'step': 540}
> {'loss': 0.8997822, 'learning_rate': 1.2411356e-06, 'epoch': 2.925531914893617, 'step': 550}
> {'loss': 0.89611685, 'learning_rate': 3.5460886e-07, 'epoch': 2.978723404255319, 'step': 560}
> Training took: 0:17:32.846350
> Saving model in /home/jplu/model
> Configuration saved in /home/jplu/model/config.json
> Model weights saved in /home/jplu/model/tf_model.h5
> ***** Running Evaluation *****
> Num examples = 2200
> Batch size = 32
> {'eval_loss': 1.4825085626132246, 'eval_precision': 0.8298914945747288, 'eval_recall': 0.8713708195516354, 'eval_f1': 0.8501254930082466, 'epoch': 3.0, 'step': 564}
> ```
>
> With gradient accumulation:
>
> ```
> ***** Running training *****
> Num examples = 24000
> Num Epochs = 3
> Instantaneous batch size per device = 32
> Total train batch size (w. parallel, distributed & accumulation) = 256
> Gradient Accumulation steps = 2
> Steps per epoch = 94
> Total optimization steps = 282
> {'loss': 17.029345, 'learning_rate': 4.822695e-05, 'epoch': 0.11702127659574468, 'step': 10}
> {'loss': 13.583568, 'learning_rate': 4.6453897e-05, 'epoch': 0.22340425531914893, 'step': 20}
> {'loss': 11.413903, 'learning_rate': 4.468085e-05, 'epoch': 0.32978723404255317, 'step': 30}
> {'loss': 9.977048, 'learning_rate': 4.29078e-05, 'epoch': 0.43617021276595747, 'step': 40}
> {'loss': 8.904137, 'learning_rate': 4.113475e-05, 'epoch': 0.5425531914893617, 'step': 50}
> {'loss': 8.056796, 'learning_rate': 3.93617e-05, 'epoch': 0.648936170212766, 'step': 60}
> {'loss': 7.339738, 'learning_rate': 3.758865e-05, 'epoch': 0.7553191489361702, 'step': 70}
> {'loss': 6.7678766, 'learning_rate': 3.5815603e-05, 'epoch': 0.8617021276595744, 'step': 80}
> {'loss': 6.2809086, 'learning_rate': 3.4042554e-05, 'epoch': 0.9680851063829787, 'step': 90}
> 2020-09-11 23:36:37.142279: I tensorflow/core/kernels/data/shuffle_dataset_op.cc:172] Filling up shuffle buffer (this may take a while): 23596 of 24000
> 2020-09-11 23:36:37.311979: I tensorflow/core/kernels/data/shuffle_dataset_op.cc:221] Shuffle buffer filled.
> {'loss': 2.2556372, 'learning_rate': 3.22695e-05, 'epoch': 1.0638297872340425, 'step': 100}
> {'loss': 2.0573573, 'learning_rate': 3.0496454e-05, 'epoch': 1.1702127659574468, 'step': 110}
> {'loss': 1.9544038, 'learning_rate': 2.8723403e-05, 'epoch': 1.2765957446808511, 'step': 120}
> {'loss': 1.8848253, 'learning_rate': 2.6950353e-05, 'epoch': 1.3829787234042552, 'step': 130}
> {'loss': 1.837054, 'learning_rate': 2.5177305e-05, 'epoch': 1.4893617021276595, 'step': 140}
> {'loss': 1.7885295, 'learning_rate': 2.3404255e-05, 'epoch': 1.5957446808510638, 'step': 150}
> {'loss': 1.7535, 'learning_rate': 2.1631204e-05, 'epoch': 1.702127659574468, 'step': 160}
> {'loss': 1.7068337, 'learning_rate': 1.9858155e-05, 'epoch': 1.8085106382978724, 'step': 170}
> {'loss': 1.6874169, 'learning_rate': 1.8085108e-05, 'epoch': 1.9148936170212765, 'step': 180}
> 2020-09-11 23:41:05.123380: I tensorflow/core/kernels/data/shuffle_dataset_op.cc:172] Filling up shuffle buffer (this may take a while): 23596 of 24000
> 2020-09-11 23:36:05.332080: I tensorflow/core/kernels/data/shuffle_dataset_op.cc:221] Shuffle buffer filled.
> {'loss': 1.189175, 'learning_rate': 1.6312057e-05, 'epoch': 2.021276595744681, 'step': 190}
> {'loss': 1.2820004, 'learning_rate': 1.4539006e-05, 'epoch': 2.127659574468085, 'step': 200}
> {'loss': 1.2558761, 'learning_rate': 1.27659605e-05, 'epoch': 2.2340425531914896, 'step': 210}
> {'loss': 1.2865627, 'learning_rate': 1.0992908e-05, 'epoch': 2.3404255319148937, 'step': 220}
> {'loss': 1.2853887, 'learning_rate': 9.219858e-06, 'epoch': 2.4468085106382977, 'step': 230}
> {'loss': 1.2650005, 'learning_rate': 7.4468076e-06, 'epoch': 2.5531914893617023, 'step': 240}
> {'loss': 1.2414478, 'learning_rate': 5.67376e-06, 'epoch': 2.6595744680851063, 'step': 250}
> {'loss': 1.243986, 'learning_rate': 3.9007095e-06, 'epoch': 2.7659574468085104, 'step': 260}
> {'loss': 1.2295542, 'learning_rate': 2.1276592e-06, 'epoch': 2.872340425531915, 'step': 270}
> {'loss': 1.2296557, 'learning_rate': 3.5460886e-07, 'epoch': 2.978723404255319, 'step': 280}
> Training took: 0:19:10.070496
> Saving model in /home/jplu/model
> Configuration saved in /home/jplu/model/config.json
> Model weights saved in /home/jplu/model/tf_model.h5
> ***** Running Evaluation *****
> Num examples = 2200
> Batch size = 32
> {'eval_loss': 1.5663623533387114, 'eval_precision': 0.8107354478912513, 'eval_recall': 0.8548327820654171, 'eval_f1': 0.8322003577817532, 'epoch': 3.0, 'step': 282}
> ```
>
> Looks ok to me. I have also tested for text classification and question answering without any error.
Wow! Thank you @jplu, a test and a reply on Friday night :)
I might need to get some gpu though if I continue to contribute - can't let you do all such tests all the time.
Great to see it works.<|||||>Ahah no worries it is ok, not everybody can have such setup.
@LysandreJik looks ok to merge. |
transformers | 6,997 | closed | run_squad.py not working on 3.1.0 version | ## Environment info
<!-- You can run the command `transformers-cli env` and copy-and-paste its output below.
Don't forget to fill out the missing fields in that output! -->
- `transformers` version: 3.0.2
- Platform: Linux-4.15.0-91-generic-x86_64-with-debian-buster-sid
- Python version: 3.7.6
- PyTorch version (GPU?): 1.5.0 (False)
- Tensorflow version (GPU?): 2.1.0 (False)
- Using GPU in script?: True
- Using distributed or parallel set-up in script?: True
### Who can help
<!-- Your issue will be replied to more quickly if you can figure out the right person to tag with @
If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**.
Please tag fewer than 3 people.
albert, bert, GPT2, XLM: @LysandreJik
tokenizers: @mfuntowicz
Trainer: @sgugger
Speed and Memory Benchmarks: @patrickvonplaten
Model Cards: @julien-c
Translation: @sshleifer
Summarization: @sshleifer
TextGeneration: @TevenLeScao
examples/distillation: @VictorSanh
nlp datasets: [different repo](https://github.com/huggingface/nlp)
rust tokenizers: [different repo](https://github.com/huggingface/tokenizers)
Text Generation: @TevenLeScao
blenderbot: @mariamabarham
Bart: @sshleifer
Marian: @sshleifer
T5: @patrickvonplaten
Longformer/Reformer: @patrickvonplaten
TransfoXL/XLNet: @TevenLeScao
examples/seq2seq: @sshleifer
examples/bert-loses-patience: @JetRunner
tensorflow: @jplu
examples/token-classification: @stefan-it
documentation: @sgugger
-->@LysandreJik @sshleifer
## Information
Model I am using (Bert, XLNet ...): Bert
The problem arises when using:
* [x] the official example scripts: (give details below)
run_squad.py works with 2.9.1, but when I update it to 3.1.0 it gives an error
* [ ] my own modified scripts: (give details below)
The tasks I am working on is:
* [x] an official GLUE/SQUaD task: SQUaD
* [ ] my own task or dataset: (give details below)
## To reproduce
Steps to reproduce the behavior:
1. !pip install transformers==3.1.0
It works when using 2.9.1 but not with this
2.
```shell
!mkdir dataset \
  && cd dataset \
  && wget https://rajpurkar.github.io/SQuAD-explorer/dataset/train-v2.0.json \
  && wget https://rajpurkar.github.io/SQuAD-explorer/dataset/dev-v2.0.json
```
3.
```shell
!export SQUAD_DIR=/content/dataset \
  && python transformers/examples/run_squad.py \
    --model_type bert \
    --model_name_or_path bert-base-uncased \
    --do_train \
    --do_eval \
    --do_lower_case \
    --train_file $SQUAD_DIR/train-v2.0.json \
    --predict_file $SQUAD_DIR/dev-v2.0.json \
    --per_gpu_train_batch_size 12 \
    --learning_rate 3e-5 \
    --num_train_epochs 1.0 \
    --max_seq_length 384 \
    --doc_stride 128 \
    --output_dir /content/model_output \
    --save_steps 1000 \
    --threads 4 \
    --version_2_with_negative
```
Error message is:
```
2020-09-07 19:33:26.850641: I tensorflow/stream_executor/platform/default/dso_loader.cc:48] Successfully opened dynamic library libcudart.so.10.1
Traceback (most recent call last):
  File "transformers/examples/run_squad.py", line 74, in <module>
    (),
  File "transformers/examples/run_squad.py", line 73, in <genexpr>
    (tuple(conf.pretrained_config_archive_map.keys()) for conf in (BertConfig, RobertaConfig, XLNetConfig, XLMConfig)),
AttributeError: type object 'BertConfig' has no attribute 'pretrained_config_archive_map'
```
## Expected behavior
It should start training without error, just like older versions
| 09-07-2020 19:36:36 | 09-07-2020 19:36:36 | Indeed, it seems it hasn't been up to date. Did you try running `run_squad_trainer.py`? It should be more up to date.
@sgugger we should probably deprecate `run_squad.py` now that we have a Trainer-based SQuAD script.<|||||>The missing part was the eval which shouldn't be too hard to add (https://github.com/huggingface/transformers/pull/4829#issuecomment-645994130)
And then we can rename files as in https://github.com/huggingface/transformers/pull/5582<|||||>> Indeed, it seems it hasn't been up to date. Did you try running `run_squad_trainer.py`? It should be more up to date.
>
> @sgugger we should probably deprecate `run_squad.py` now that we have a Trainer-based SQuAD script.
No, run_squad_trainer.py also gives an error in 3.1.0
`!python transformers/examples/question-answering/run_squad_trainer.py --help`
The error is:
`python3: can't open file 'transformers/examples/question-answering/run_squad_trainer.py': [Errno 2] No such file or directory`<|||||>@deepanshu650 the file exists, it's [here](https://github.com/huggingface/transformers/blob/master/examples/question-answering/run_squad_trainer.py). Are you sure you cloned the v3.1.0 repo?<|||||>Yes it's working , initially I was working on copy of someone's notebook which was installing v2.3.0 and I had to pip install v3.1.0, so after changing to new notebook it starts running.
But it doesn't give evaluation results (which `run_squad.py` does), since `run_squad_trainer.py` doesn't call trainer.evaluate(), even though it builds an eval_dataset.
So how do I evaluate? Thanks for replying.<|||||>This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.
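(Editorial note: a rough, hedged sketch of what wiring evaluation into `run_squad_trainer.py` could look like. `trainer`, `eval_dataset` and `training_args` are the objects that script already builds; the SQuAD post-processing that turns logits into EM/F1 is not shown and would still be needed.)
```python
# hypothetical addition near the end of main() in examples/question-answering/run_squad_trainer.py
import os
import torch

if training_args.do_eval:
    prediction_output = trainer.predict(eval_dataset)
    # raw start/end logits for every example; converting them into exact-match / F1
    # scores still requires the official SQuAD post-processing, which is omitted here
    torch.save(prediction_output.predictions, os.path.join(training_args.output_dir, "eval_logits.pt"))
```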
|
transformers | 6,996 | closed | [generation] decoder priority for choosing decoder_start_token_id value | `config.decoder` needs to be checked first before model's `config` to set `decoder_start_token_id`.
This is needed for https://github.com/huggingface/transformers/pull/6940 where I think for the first time there is an actual `config.decoder`
| 09-07-2020 18:27:44 | 09-07-2020 18:27:44 | # [Codecov](https://codecov.io/gh/huggingface/transformers/pull/6996?src=pr&el=h1) Report
> Merging [#6996](https://codecov.io/gh/huggingface/transformers/pull/6996?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/90ec78b5140251f093f658ebd4d2925e8c03f5e6?el=desc) will **decrease** coverage by `0.55%`.
> The diff coverage is `100.00%`.
[](https://codecov.io/gh/huggingface/transformers/pull/6996?src=pr&el=tree)
```diff
@@ Coverage Diff @@
## master #6996 +/- ##
==========================================
- Coverage 80.58% 80.03% -0.56%
==========================================
Files 161 161
Lines 30123 30123
==========================================
- Hits 24276 24109 -167
- Misses 5847 6014 +167
```
| [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/6996?src=pr&el=tree) | Coverage Δ | |
|---|---|---|
| [src/transformers/generation\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/6996/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9nZW5lcmF0aW9uX3V0aWxzLnB5) | `97.21% <100.00%> (ø)` | |
| [src/transformers/modeling\_tf\_electra.py](https://codecov.io/gh/huggingface/transformers/pull/6996/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9lbGVjdHJhLnB5) | `25.13% <0.00%> (-73.83%)` | :arrow_down: |
| [src/transformers/modeling\_tf\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/6996/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl91dGlscy5weQ==) | `86.97% <0.00%> (-0.33%)` | :arrow_down: |
| [src/transformers/tokenization\_utils\_base.py](https://codecov.io/gh/huggingface/transformers/pull/6996/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fdXRpbHNfYmFzZS5weQ==) | `93.63% <0.00%> (-0.14%)` | :arrow_down: |
| [src/transformers/data/data\_collator.py](https://codecov.io/gh/huggingface/transformers/pull/6996/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9kYXRhL2RhdGFfY29sbGF0b3IucHk=) | `91.90% <0.00%> (+5.26%)` | :arrow_up: |
| [src/transformers/modeling\_tf\_flaubert.py](https://codecov.io/gh/huggingface/transformers/pull/6996/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9mbGF1YmVydC5weQ==) | `88.34% <0.00%> (+63.80%)` | :arrow_up: |
------
[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/6996?src=pr&el=continue).
> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)
> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data`
> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/6996?src=pr&el=footer). Last update [90ec78b...fd199a5](https://codecov.io/gh/huggingface/transformers/pull/6996?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).
<|||||>Shouldn't this be part of #6996? Or would this PR be needed either way?<|||||>> Shouldn't this be part of #6996? Or would this PR be needed either way?
Did you mean "part of " https://github.com/huggingface/transformers/pull/6940?
It's there already, and yes it's required to work. I just thought that it's the best to change core functions in separate PRs and not as a part of a much larger PR. Please advise if this is not the right approach.
In this particular case `config.decoder` is more specific than `config` and therefore should be checked first. I don't think there is currently any model that actively uses `config.decoder` (grep didn't find any), therefore it must have been untested since it was added in the first place, perhaps?
<|||||>Don't really agree with this PR - I think `bos_token_id` should have higher priority than `model.config.decoder.bos_token_id`. The `model.config.decoder.bos_token_id` was mainly added because of the `EncoderDecoderModel` framework<|||||>Yes I meant "part of " #6940?
If two PRs do not exist/make sense without each other I think they should be together.
Otherwise we can merge one without the other and have either broken or dead code<|||||>@stas00 why can't you use `decoder_start_token_id` for FSMT?<|||||>> why can't you use `decoder_start_token_id` for FSMT?
That works - thank you for the suggestion, @sshleifer
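(Editorial note: a minimal illustration of the workaround mentioned above - setting `decoder_start_token_id` explicitly instead of relying on a nested `config.decoder`. BART is used purely as a stand-in seq2seq config here.)
```python
from transformers import AutoConfig

config = AutoConfig.from_pretrained("facebook/bart-large")
# make the choice explicit on the top-level config instead of a nested config.decoder:
config.decoder_start_token_id = config.eos_token_id
# generate() also accepts the same value directly as a keyword argument, e.g.
# model.generate(input_ids, decoder_start_token_id=config.decoder_start_token_id)
```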
<|||||>> Don't really agree with this PR - I think `bos_token_id` should have higher priority than `model.config.decoder.bos_token_id`. The `model.config.decoder.bos_token_id` was mainly added because of the `EncoderDecoderModel` framework
The way I read the intention is that if it goes:
```
if self.config.is_encoder_decoder:
```
we are in encoder-decoder zone and as such `.decoder` should take priority
I guess Bart and friends are semi-encoder-decoder as far as the framework goes.
Thank you for the feedback, @patrickvonplaten
I suppose in the ideal world we should have tests that validate such scenarios. |
transformers | 6,995 | closed | [from_pretrained] Allow tokenizer_type ≠ model_type | For an example usage of this PR, see the `tokenizer_class` attribute in this config.json: https://s3.amazonaws.com/models.huggingface.co/bert/julien-c/dummy-diff-tokenizer/config.json
Instead of a class, we could have used a `tokenizer_type` belonging to the set of all `model_type`s, like `"bert"`, etc. but it feels more restrictive, especially in case we start having tokenizer classes that are not obviously linked to a "model", like a potential "TweetTokenizer"
Context: https://github.com/huggingface/transformers/pull/6129
**Update: documented by @sgugger in https://github.com/huggingface/transformers/pull/8152** | 09-07-2020 17:15:30 | 09-07-2020 17:15:30 | > Not sure I fully understand the use case, but nothing against the principle of it.
The idea is to prevent combinatorial explosion of "model types" when only the tokenizer is different (e.g. Flaubert, CamemBERT if we wanted to support them today)
In the future we might even want to have a few model-agnostic tokenizer classes like ByteLevelBPETokenizer (basically RobertaTokenizer), as they can be initialized pretty exhaustively from the init args stored in `tokenizer_config.json`
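(Editorial note: a small sketch of what this looks like from the user side, using the dummy repo mentioned in the PR description; which tokenizer class that repo actually declares is an assumption left to the printout.)
```python
from transformers import AutoConfig, AutoTokenizer

config = AutoConfig.from_pretrained("julien-c/dummy-diff-tokenizer")
print(config.tokenizer_class)  # the class name stored in that repo's config.json

# AutoTokenizer honours tokenizer_class even though it differs from the model_type
tokenizer = AutoTokenizer.from_pretrained("julien-c/dummy-diff-tokenizer")
print(type(tokenizer).__name__)
```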
<|||||>Documented by @sgugger in https://github.com/huggingface/transformers/pull/8152 |
transformers | 6,994 | closed | Fix typo |
Fixes #{issue number}
| 09-07-2020 16:38:05 | 09-07-2020 16:38:05 | # [Codecov](https://codecov.io/gh/huggingface/transformers/pull/6994?src=pr&el=h1) Report
> Merging [#6994](https://codecov.io/gh/huggingface/transformers/pull/6994?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/90ec78b5140251f093f658ebd4d2925e8c03f5e6?el=desc) will **decrease** coverage by `0.54%`.
> The diff coverage is `n/a`.
[](https://codecov.io/gh/huggingface/transformers/pull/6994?src=pr&el=tree)
```diff
@@ Coverage Diff @@
## master #6994 +/- ##
==========================================
- Coverage 80.58% 80.04% -0.55%
==========================================
Files 161 161
Lines 30123 30123
==========================================
- Hits 24276 24111 -165
- Misses 5847 6012 +165
```
| [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/6994?src=pr&el=tree) | Coverage Δ | |
|---|---|---|
| [src/transformers/modeling\_tf\_electra.py](https://codecov.io/gh/huggingface/transformers/pull/6994/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9lbGVjdHJhLnB5) | `25.13% <0.00%> (-73.83%)` | :arrow_down: |
| [src/transformers/modeling\_tf\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/6994/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl91dGlscy5weQ==) | `86.97% <0.00%> (-0.33%)` | :arrow_down: |
| [src/transformers/file\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/6994/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9maWxlX3V0aWxzLnB5) | `82.41% <0.00%> (-0.26%)` | :arrow_down: |
| [src/transformers/data/data\_collator.py](https://codecov.io/gh/huggingface/transformers/pull/6994/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9kYXRhL2RhdGFfY29sbGF0b3IucHk=) | `92.71% <0.00%> (+6.07%)` | :arrow_up: |
| [src/transformers/modeling\_tf\_flaubert.py](https://codecov.io/gh/huggingface/transformers/pull/6994/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9mbGF1YmVydC5weQ==) | `88.34% <0.00%> (+63.80%)` | :arrow_up: |
------
[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/6994?src=pr&el=continue).
> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)
> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data`
> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/6994?src=pr&el=footer). Last update [90ec78b...09dda6e](https://codecov.io/gh/huggingface/transformers/pull/6994?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).
|
transformers | 6,993 | closed | PegasusForConditionalGeneration stops at unknown token | Hi all,
When following the code snippet from the [huggingface documentation](https://huggingface.co/transformers/master/model_doc/pegasus.html) but replacing the text, I have found that the summary stops when it reaches an unknown token. Is there a way around this?
```python
src_text = [
    """As a child, the documentary maker, who was born in the Indian state of Kerala but now lives in Toronto, saw ceremonial elephants being paraded and thought they were beautiful. Later, she learned about the ordeal the animals are subjected to. "So many elephants had ghastly wounds on their hips, massive tumours and blood oozing out of their ankles, because chains had cut into their flesh and many of them were blind," Iyer told the BBC. She has made a documentary, Gods in Shackles, in an attempt to draw attention to the treatment of temple elephants she saw in India. "They were so helpless and the chains were so heavy," she said. "It was absolutely heart-breaking for me to witness this."""
]
```
Using this sample text, for example, my summary is:
As a child, the documentary maker, who was born in the Indian state of Kerala but now lives in Toronto, saw ceremonial elephants being paraded and thought they were beautiful, but later, she learned about the ordeal the animals are subjected to in India, in an attempt to draw attention to the treatment of templeunk_9 | 09-07-2020 13:43:28 | 09-07-2020 13:43:28 | The easiest way I can think of is to avoid generating the unk token altogether.
add the following method to `PegasusForConditionalGeneration`
```python
def adjust_logits_during_generation(self, logits, cur_len, max_length):
# Note, this will break with a tokenizer that is not PegasusTokenizer
logits[:, list(range(2, 105))] = float("-inf") # never predict unk tokens
if cur_len == max_length - 1 and self.config.eos_token_id is not None:
self._force_token_ids_generation(logits, self.config.eos_token_id)
return logits
```
Let me know if that helps!
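(Editorial note: one way to try the suggestion without editing the library is to patch the method onto the class at runtime - a hedged sketch, not an officially supported approach.)
```python
from transformers import PegasusForConditionalGeneration, PegasusTokenizer

def adjust_logits_during_generation(self, logits, cur_len, max_length):
    # Note, this will break with a tokenizer that is not PegasusTokenizer
    logits[:, list(range(2, 105))] = float("-inf")  # never predict unk tokens
    if cur_len == max_length - 1 and self.config.eos_token_id is not None:
        self._force_token_ids_generation(logits, self.config.eos_token_id)
    return logits

# monkey-patch the class before generating
PegasusForConditionalGeneration.adjust_logits_during_generation = adjust_logits_during_generation

tokenizer = PegasusTokenizer.from_pretrained("google/pegasus-xsum")
model = PegasusForConditionalGeneration.from_pretrained("google/pegasus-xsum")
batch = tokenizer(["some long article text ..."], truncation=True, padding="longest", return_tensors="pt")
summary_ids = model.generate(**batch)
print(tokenizer.batch_decode(summary_ids, skip_special_tokens=True))
```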
<|||||>> The easiest way I can think of is to avoid generating the unk token altogether.
>
> add the following method to `PegasusForConditionalGeneration`
>
> ```python
> def adjust_logits_during_generation(self, logits, cur_len, max_length):
> # Note, this will break with a tokenizer that is not PegasusTokenizer
> logits[:, list(range(2, 105))] = float("-inf") # never predict unk tokens
> if cur_len == max_length - 1 and self.config.eos_token_id is not None:
> self._force_token_ids_generation(logits, self.config.eos_token_id)
> return logits
> ```
>
> Let me know if that helps!
Thank you for the reply!
I've added the code and checked it's being run, but unfortunately the output still stops at an unknown token.<|||||>I can't replicate.
got
> As a child, the documentary maker, who was born in the Indian state of Kerala but now lives in Toronto, saw ceremonial elephants being paraded and thought they were beautiful.
on the branch of #7014
<|||||>Setting the min_length parameter to 100 yields the problem (as I should have mentioned). Might this be an issue with the minimum length being too long relative to the size of the input?<|||||>Yeah. the `xsum` model especially is trained to generate very short summaries.
`pegasus-arxiv` for example, can generate up to 256 tokens.
you can see each available checkpoint and its maximum input and output sizes [here](https://github.com/huggingface/transformers/blob/0f58903bb62870342eae52f5a02c9105ec6f9b1e/src/transformers/configuration_pegasus.py#L50)
+ `max_length`: max length to generate
+ `max_position_embeddings`: max input size. |
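(Editorial note: the per-checkpoint limits mentioned above can be read straight from each config - a small illustrative snippet.)
```python
from transformers import AutoConfig

for name in ["google/pegasus-xsum", "google/pegasus-arxiv"]:
    cfg = AutoConfig.from_pretrained(name)
    print(name, "max generated length:", cfg.max_length, "max input length:", cfg.max_position_embeddings)
```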
transformers | 6,992 | closed | Mobile Bert Tiny model | # 🚀 Feature request
Can you add support for variants of MobileBERT?
## Motivation
The package currently provides various variants of BERT - 'bert-base-cased', 'bert-base-uncased', 'bert-large'...
Can you also similarly provide Mobile Bert Tiny as well?
| 09-07-2020 12:30:56 | 09-07-2020 12:30:56 | MobileBERT is supported, see [here](https://github.com/huggingface/transformers/blob/master/src/transformers/modeling_mobilebert.py) for the code, and [here](https://huggingface.co/transformers/model_doc/mobilebert.html) for the docs. [Here](https://huggingface.co/models?search=mobilebert) are all the available mobilebert models on the hub. |
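(Editorial note: loading one of those hub checkpoints works like any other model - a brief sketch using the Google checkpoint that is on the hub.)
```python
from transformers import AutoTokenizer, AutoModel

tokenizer = AutoTokenizer.from_pretrained("google/mobilebert-uncased")
model = AutoModel.from_pretrained("google/mobilebert-uncased")

inputs = tokenizer("MobileBERT is a compact BERT variant.", return_tensors="pt")
outputs = model(**inputs)
print(outputs[0].shape)  # last hidden states
```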
transformers | 6,991 | closed | Conversion scripts shouldn't have relative imports | 09-07-2020 12:30:53 | 09-07-2020 12:30:53 | https://github.com/huggingface/transformers/blob/0203ad43bcd0b29423dec6ca1a58ed58300f0d61/src/transformers/convert_mbart_original_checkpoint_to_pytorch.py#L7
Hello! Does this line also need to be changed? |
|
transformers | 6,990 | closed | README for HooshvareLab/bert-fa-base-uncased | ParsBERT v2.0 is a fine-tuned and vocab-reconstructed version of ParsBERT, and it's able to be used in other scopes!
Some features:
- We added some unused-vocab for use in summarization and other scopes.
- We fine-tuned the model on vast styles of writing in the Persian language.
| 09-07-2020 12:04:59 | 09-07-2020 12:04:59 | |
transformers | 6,989 | closed | TypeError: __init__() got an unexpected keyword argument 'cache_dir' |
I'm fine-tuning with the example `run_language_modeling.py` as follows.
```shell
python run_language_modeling.py --output_dir=output_dir --model_type gpt2 --model_name_or_path distilgpt2 --do_train --train_data_file=xxx.data.txt
```
It failed with the following error:
```shell
Traceback (most recent call last):
File "run_language_modeling.py", line 313, in <module>
main()
File "run_language_modeling.py", line 242, in main
get_dataset(data_args, tokenizer=tokenizer, cache_dir=model_args.cache_dir) if training_args.do_train else None
File "run_language_modeling.py", line 143, in get_dataset
cache_dir=cache_dir,
TypeError: __init__() got an unexpected keyword argument 'cache_dir'
```
Could you please tell me how to use it? Thanks a lot.
| 09-07-2020 11:53:05 | 09-07-2020 11:53:05 | Solved with https://github.com/huggingface/transformers/issues/319 |
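(Editorial note: this error is the usual symptom of running a newer example script against an older installed `transformers`. A hedged workaround - not a quote from the linked issue - is to either install the library from source so it matches the script, or simply drop the keyword the older `TextDataset` does not know about.)
```python
from transformers import AutoTokenizer, TextDataset

tokenizer = AutoTokenizer.from_pretrained("distilgpt2")
# older TextDataset versions do not accept cache_dir, so it is simply omitted here
dataset = TextDataset(
    tokenizer=tokenizer,
    file_path="xxx.data.txt",  # the training file from the command above
    block_size=512,            # any block size that fits the model's context window
)
```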
transformers | 6,988 | closed | t5 embed_tokens | ## Environment info
- `transformers` version:
- Platform:
- Python version:
- PyTorch version (GPU?):
- Tensorflow version (GPU?):
- Using GPU in script?:
- Using distributed or parallel set-up in script?:
### Who can help
## Information
Model I am using (Bert, XLNet ...):
The problem arises when using:
* [ ] the official example scripts: (give details below)
* [ ] my own modified scripts: (give details below)
The tasks I am working on is:
* [ ] an official GLUE/SQUaD task: (give the name)
* [ ] my own task or dataset: (give details below)
## To reproduce
Steps to reproduce the behavior:
1.
2.
3.
## Expected behavior
| 09-07-2020 10:41:11 | 09-07-2020 10:41:11 | |
transformers | 6,987 | closed | DefaultCPUAllocator: can't allocate memory: you tried to allocate 100663296 bytes | # ❓ Questions & Help
## Details
I'm using `run_language_modeling.py` to fine-tune the model `distilgpt2`. The command I used is as follows:
```shell
python run_language_modeling.py --output_dir=output_dir --model_type gpt2 --model_name_or_path distilgpt2 --do_train --train_data_file=data/data.txt --overwrite_output_dir
```
But it crashes with the following error:
```shell
File "/data1/xxx/transformers/src/transformers/activations.py", line 30, in gelu_new
return 0.5 * x * (1.0 + torch.tanh(math.sqrt(2.0 / math.pi) * (x + 0.044715 * torch.pow(x, 3.0))))
RuntimeError: [enforce fail at CPUAllocator.cpp:64] . DefaultCPUAllocator: can't allocate memory: you tried to allocate 100663296 bytes. Error code 12 (Cannot allocate memory)
```
Does it crash due to insufficient memory on my machine? Hoping for your help. Thanks a lot.
**A link to original question on the forum/Stack Overflow**: | 09-07-2020 10:00:19 | 09-07-2020 10:00:19 | This happens because you don't have sufficient memory in your machine, indeed. You can try reducing the batch size.<|||||>> This happens because you don't have sufficient memory in your machine, indeed. You can try reducing the batch size.
In my opinion, the problem may occur because my dataset is too large (relative to the memory of my machine). But the data-loading part could be optimized, as discussed in the following issue.
https://stackoverflow.com/questions/51444059/how-to-iterate-over-two-dataloaders-simultaneously-using-pytorch/57890309#57890309 <|||||>There are two questions:
(1) I've switched to a different machine to run the code. It started running normally, but quits midway through training. Is this also related to my machine's memory?
This is the data of my machine during training.

This is the exit interface. I don't know what's the matter.

(2) In addition, how can I use the GPU to run `run_language_modeling.py`? Thanks a lot.
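(Editorial note on question (2): the example script picks the device up automatically through `Trainer`, so the main thing to verify is that a CUDA build of PyTorch can actually see the GPU - a small hedged check.)
```python
import torch

print(torch.cuda.is_available())  # must be True for the Trainer to run on the GPU
print(torch.cuda.device_count())

# to pin the run to one card, export CUDA_VISIBLE_DEVICES=0 before launching
# run_language_modeling.py; no extra command-line flag is needed for GPU use
```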
<|||||>I have a similar problem. Was there any solution in your case?
<|||||>same problem<|||||>same problem<|||||>Were there any solutions to this? I am encountering the exact same issue trying to train Dolly on the original dataset. |
transformers | 6,986 | closed | Demoing LXMERT with raw images by incorporating the FRCNN model for roi-pooled extraction and bounding-box predction on the GQA answer set. | This is a follow up PR from initially incorporating LXMERT. This PR includes the Faster-RCNN code to convert raw-images into usable roi-pooled features downstream in lxmert or any other suitable vision model. | 09-07-2020 09:27:54 | 09-07-2020 09:27:54 | |
transformers | 6,985 | closed | Enhance a MarianMT pretrained model from HuggingFace with more training data | # ❓ Questions & Help
## Details
**A link to original question on the forum/Stack Overflow**: https://stackoverflow.com/questions/63774619/enhance-a-marianmt-pretrained-model-from-huggingface-with-more-training-data | 09-07-2020 09:13:00 | 09-07-2020 09:13:00 | Have you tried the finetune.sh script shown [here](https://github.com/huggingface/transformers/blob/master/examples/seq2seq/finetune.sh)? In addition to the short list of CLI flags listed there, you could try adding:
```
--src_lang "en" \
--tgt_lang "de" \
--num_train_epochs 400 \
--warmup_steps 20 \
--train_batch_size 32 \
--eval_batch_size 32 \
--data_dir "/data/dir" \
--output_dir "/path/to/store/model/etc" \
--cache_dir "/path/for/misc/files" \
--max_source_length 128 \
--max_target_length 128 \
--val_max_target_length 128 \
--test_max_target_length 128 \
--model_name_or_path "</path/to/pretrained>"
```
where the "/path/to/pretrained" could be either a local path on your machine or MarianMT model (Opus-en-de or equivalent). The "data/dir" has a "train.source" and "train.target" for the source & target languages, such that line number x of the target is a translation of line x in the source (and same with "val.source" and "val.target"). I have changed the finetune.py script [here](https://github.com/huggingface/transformers/blob/77cd0e13d2d09f60d2f6d8fb8b08f493d7ca51fe/examples/seq2seq/finetune.py#L415) to
```
parser = TranslationModule.add_model_specific_args(parser, os.getcwd())
```
and then ran the finetune.sh script.
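(Editorial note: a tiny illustrative helper for producing the `train.source`/`train.target` layout described above from a list of sentence pairs; the file-name convention comes from the reply, everything else is an assumption.)
```python
from pathlib import Path

def write_parallel(pairs, data_dir, split="train"):
    # pairs: iterable of (source_sentence, target_sentence); line i of *.target
    # must be the translation of line i of *.source, as finetune.py expects
    data_dir = Path(data_dir)
    data_dir.mkdir(parents=True, exist_ok=True)
    with open(data_dir / f"{split}.source", "w", encoding="utf-8") as src, \
         open(data_dir / f"{split}.target", "w", encoding="utf-8") as tgt:
        for s, t in pairs:
            src.write(s.strip() + "\n")
            tgt.write(t.strip() + "\n")

write_parallel([("I am hungry.", "Ich habe Hunger.")], "/data/dir", split="train")
```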
Note: The gradients blew up when I used the "fp16" flag (with Pytorch 1.6), so I had removed it. Also, you might want to check on the "val_check_interval", "check_val_every_n_epoch", and probably check [this issue](https://github.com/huggingface/transformers/issues/3447) on how to save multiple checkpoints.<|||||>This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.
|
transformers | 6,984 | closed | Cannot index `None` |
Fixes #6950
| 09-07-2020 08:39:25 | 09-07-2020 08:39:25 | # [Codecov](https://codecov.io/gh/huggingface/transformers/pull/6984?src=pr&el=h1) Report
> Merging [#6984](https://codecov.io/gh/huggingface/transformers/pull/6984?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/995a958dd18d4326e608efc3bfc4005acfef8e56?el=desc) will **increase** coverage by `0.26%`.
> The diff coverage is `100.00%`.
[](https://codecov.io/gh/huggingface/transformers/pull/6984?src=pr&el=tree)
```diff
@@ Coverage Diff @@
## master #6984 +/- ##
==========================================
+ Coverage 80.03% 80.30% +0.26%
==========================================
Files 161 161
Lines 30122 30123 +1
==========================================
+ Hits 24108 24190 +82
+ Misses 6014 5933 -81
```
| [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/6984?src=pr&el=tree) | Coverage Δ | |
|---|---|---|
| [src/transformers/modeling\_bert.py](https://codecov.io/gh/huggingface/transformers/pull/6984/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19iZXJ0LnB5) | `88.44% <100.00%> (+0.01%)` | :arrow_up: |
| [src/transformers/modeling\_tf\_albert.py](https://codecov.io/gh/huggingface/transformers/pull/6984/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9hbGJlcnQucHk=) | `21.47% <0.00%> (-69.44%)` | :arrow_down: |
| [src/transformers/modeling\_tf\_flaubert.py](https://codecov.io/gh/huggingface/transformers/pull/6984/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9mbGF1YmVydC5weQ==) | `24.53% <0.00%> (-63.81%)` | :arrow_down: |
| [src/transformers/tokenization\_albert.py](https://codecov.io/gh/huggingface/transformers/pull/6984/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fYWxiZXJ0LnB5) | `28.84% <0.00%> (-58.66%)` | :arrow_down: |
| [src/transformers/modeling\_lxmert.py](https://codecov.io/gh/huggingface/transformers/pull/6984/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19seG1lcnQucHk=) | `70.01% <0.00%> (-20.75%)` | :arrow_down: |
| [src/transformers/generation\_tf\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/6984/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9nZW5lcmF0aW9uX3RmX3V0aWxzLnB5) | `85.46% <0.00%> (-1.26%)` | :arrow_down: |
| [src/transformers/modeling\_tf\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/6984/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl91dGlscy5weQ==) | `87.29% <0.00%> (+0.32%)` | :arrow_up: |
| [src/transformers/tokenization\_utils\_base.py](https://codecov.io/gh/huggingface/transformers/pull/6984/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fdXRpbHNfYmFzZS5weQ==) | `94.03% <0.00%> (+0.40%)` | :arrow_up: |
| [src/transformers/modeling\_bart.py](https://codecov.io/gh/huggingface/transformers/pull/6984/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19iYXJ0LnB5) | `94.41% <0.00%> (+0.50%)` | :arrow_up: |
| [src/transformers/activations.py](https://codecov.io/gh/huggingface/transformers/pull/6984/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9hY3RpdmF0aW9ucy5weQ==) | `90.00% <0.00%> (+5.00%)` | :arrow_up: |
| ... and [3 more](https://codecov.io/gh/huggingface/transformers/pull/6984/diff?src=pr&el=tree-more) | |
------
[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/6984?src=pr&el=continue).
> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)
> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data`
> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/6984?src=pr&el=footer). Last update [995a958...8ecbd15](https://codecov.io/gh/huggingface/transformers/pull/6984?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).
|
transformers | 6,983 | closed | [generation] multiple eos/pad asserts/ifs in generate search functions | In `_generate_no_beam_search` `eos_token_id` is required: https://github.com/huggingface/transformers/blob/master/src/transformers/generation_utils.py#L731 (that code always gets hit)
```
assert (
eos_token_id is not None and pad_token_id is not None
), "generated beams >= num_beams -> eos_token_id and pad_token have to be defined"
```
Why do we assert and check `eos_token_id is not None` multiple times throughout the code? Why not assert once at the top of the function and then just use it?
Moreover, all those `if eos_token_id is not None` checks could then be removed (or reduced, if there are other parts to them).
Also a larger question - is there a model where `eos_token_id` is not defined? If there is none, then why not assert once at the top of `generate` and then just use it everywhere in sub-calls without testing its definition?
Oh, I also see `pad_token_id` is used in `_generate_no_beam_search` w/o testing whether it's defined: https://github.com/huggingface/transformers/blob/master/src/transformers/generation_utils.py#L571
```
tokens_to_add = next_token * unfinished_sents + (pad_token_id) * (1 - unfinished_sents)
```
Is it the same situation as `eos_token_id` - that is, is it always needed?
I see it may be defined here: https://github.com/huggingface/transformers/blob/master/src/transformers/generation_utils.py#L355 but only if `eos_token_id` is defined.
```
if pad_token_id is None and eos_token_id is not None:
logger.warning(
"Setting `pad_token_id` to {} (first `eos_token_id`) to generate sequence".format(eos_token_id)
)
pad_token_id = eos_token_id
```
My thinking is that if this has worked until now for all models, it's another indication that `eos_token_id` effectively has to be required.
In `_generate_no_beam_search`, `pad_token_id` is required and, similarly to `eos_token_id`, can be asserted once at the top instead of being checked multiple times throughout the code.
Thank you for reviewing my observations. It's possible that some (all?) are incorrect if I missed something. | 09-07-2020 07:07:55 | 09-07-2020 07:07:55 | I think eos is always defined, but I think this (or just checking `pad_token_id` was one of Patrick's first PRs. He would know more.<|||||>Thank you for that feedback, @sshleifer.
If it makes things simpler, I could re-work both functions wrt these 2 tokens' definition checks and you can review a PR instead.
I just wanted to validate that the issue is real and I'm not missing something obvious before I invest time into doing that.<|||||>Hey @stas00,
This is definitely part of the code that should be refactored :D Super hard to follow the logic there :-/
As a start, this PR is probably quite useful for context: https://github.com/huggingface/transformers/pull/2885. So there are a couple of models where EOS token is not defined and I'm quite sure that the code you linked does not always get hit. It can very well be that we apply beam search to `OpenAIGPT` - with a given `max_length`. `OpenAIGPT` does not have an EOS token, but beam search should work nevertheless.
It's quite a tricky pad token / eos token / ... logic that is implemented there. I think we have to be super careful to not break anything here - even if all the slow tests pass, it might not be enough (`OpenAIGPT` beam search is not integration tested...)
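(Editorial aside: a hedged sketch of the untested case just described - beam search on a model that defines no EOS token, so generation simply runs until `max_length`.)
```python
from transformers import OpenAIGPTLMHeadModel, OpenAIGPTTokenizer

tokenizer = OpenAIGPTTokenizer.from_pretrained("openai-gpt")
model = OpenAIGPTLMHeadModel.from_pretrained("openai-gpt")

input_ids = tokenizer("the weather today is", return_tensors="pt").input_ids
# openai-gpt has neither eos_token_id nor pad_token_id, so beam search cannot
# finish early on an EOS token and decodes all the way to max_length
out = model.generate(input_ids, num_beams=4, max_length=20)
print(tokenizer.decode(out[0]))
```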
Also, I'm currently working on refactoring the generate function, will ping you guys in a couple of days with a first design proposition. My idea is to pull apart beam search + greedy / beam search + sampling / no beam search + greedy / no beam searh + greedy to make everything more readable. I'm not sure whether it's worth diving deep into the generate() logic before we have a more readable code<|||||>That sounds like a fantastic plan, @patrickvonplaten!
> So there are a couple of models where EOS token is not defined and I'm quite sure that the code you linked does not always get hit.
I stand corrected, that's good to know, thank you!.
That means that the code is very tricky, since a reader will expect that at some point the generation should be complete and `done` set to True, which currently absolutely requires eos. I haven't considered the case where it'll go through that loop and not hit done. If I follow it carefully it only happens if `max_length` is reached and there is no `done` yet, and moreover it has to be that the hypos are exactly of the same length. if they aren't the same, eos is almost always required.
As you are saying there isn't really a test that covers that (odd?) case. Actually, PR https://github.com/huggingface/transformers/pull/6982 is very likely to break it then, since now it requires eos for both situations where hypos are of the same length and are not. But if it breaks that very special case, then the issue lies elsewhere and it just happened to work. (As I suggested I changed "is" for "was" in an input and suddenly eos was gone from all of the hypos.)
Note: I have only run the code in my head and haven't validated that in fact it'd break something. It's possible that you're talking about a completely different case.
<|||||>I think your PR is fine because if no `eos_token_id` is defined, this condition can never happen: `sent_lengths[i] < max_length:`.
What I mean is that if no `eos_token_id` is defined no matter what `generate()` method is used, all sent_length will always be == `max_length` and the condition will not be hit.<|||||>ah, yes, you're absolutely correct, Patrick - you definitely have been holding that generation code in your head for much longer than I - I don't have the full coverage yet :)<|||||>Reopen if this was a mistake! |
transformers | 6,982 | closed | [generation] consistently add eos tokens | Currently beam search returns inconsistent outputs - if hypos have different lengths we get eos, if they are the same - we don't. I found a sentence where if I change one letter in one of the input words the beam search outputs all suddenly lack eos, which is an inconsistent behavior.
This PR makes the output more consistent. (but not 100%, please see below)
---------
Also why not replace:
```
if sent_lengths[i] < max_length:
decoded[i, sent_lengths[i]] = eos_token_id
```
with:
```
decoded[i, sent_lengths[i]] = eos_token_id
```
Shouldn't eos always be there? If generated data gets truncated, the caller needs to use a larger `max_length`.
Currently, if the hypos lengths are on the cusp of `max_length`, some of them will get eos, whereas others won't, which is again inconsistent.
Please correct me if my logic is flawed.
-----
I also looked at `_generate_no_beam_search` - there the eos-adding logic is somewhat different.
Should the two functions (beam/no_beam) be consistent eos-injection wise?
| 09-07-2020 06:19:50 | 09-07-2020 06:19:50 | IMO, `eos` should not always be there. The reason is that if the user defines `max_length=30` and the EOS token was not generated by the model, then no EOS token should be added. I think EOS token should only be added if it is produced by the model. *E.g.* a generated sentence like "I will go to the office and" should not have an added EOS token at the end.<|||||>> [...] _E.g._ a generated sentence like "I will go to the office and" should not have an added EOS token at the end.
Thank you for explaining that, @patrickvonplaten.
Could you also please review this PR, as it's unrelated to max_length or undefined EOS, that was a related question that I wasn't sure about.
<|||||>@stas00
Have you run the slow tests that might be effected? (will take 10-30 mins)
```
run_generation_integration_tests () {
# assumes USE_CUDA is exported, rather than passed
RUN_SLOW=1 pytest tests/test_modeling_pegasus.py
RUN_SLOW=1 pytest tests/test_modeling_bart.py
RUN_SLOW=1 pytest tests/test_modeling_t5.py
RUN_SLOW=1 pytest tests/test_modeling_marian.py
RUN_SLOW=1 pytest tests/test_modeling_mbart.py
RUN_SLOW=1 pytest tests/test_modeling_encoder_decoder.py
RUN_SLOW=1 pytest tests/test_pipelines.py
RUN_SLOW=1 pytest tests/test_modeling_gpt2.py
}
```<|||||>Good call, @sshleifer! (I edited the last one to `tests/test_modeling_gpt2.py`)
```
RUN_SLOW=1 pytest --disable-warnings tests/test_modeling_pegasus.py tests/test_modeling_bart.py tests/test_modeling_t5.py tests/test_modeling_marian.py tests/test_modeling_mbart.py tests/test_modeling_encoder_decoder.py tests/test_pipelines.py tests/test_modeling_gpt2.py
====================================================================== test session starts =======================================================================
platform linux -- Python 3.7.5, pytest-5.4.1, py-1.8.1, pluggy-0.13.1
rootdir: /mnt/nvme1/code/huggingface/transformers
plugins: hypothesis-5.5.4, timeout-1.4.2, filter-subpackage-0.1.1, arraydiff-0.3, flaky-3.6.1, ipynb-1.1.1.dev0, cov-2.10.0, astropy-header-0.1.2, forked-1.2.0, doctestplus-0.5.0, openfiles-0.4.0, remotedata-0.3.2, xdist-1.32.0, repeat-0.8.0, flakefinder-1.0.0
collected 211 items
tests/test_modeling_pegasus.py .. [ 0%]
tests/test_modeling_bart.py ..............s....s............................ [ 23%]
tests/test_modeling_t5.py ......................s..s........... [ 41%]
tests/test_modeling_marian.py ................ [ 48%]
tests/test_modeling_mbart.py s...s. [ 51%]
tests/test_modeling_encoder_decoder.py ............................... [ 66%]
tests/test_pipelines.py ....................................... [ 84%]
tests/test_modeling_gpt2.py .......................s........ [100%]
==================================================== 204 passed, 7 skipped, 45 warnings in 980.34s (0:16:20) =====================================================
```<|||||># [Codecov](https://codecov.io/gh/huggingface/transformers/pull/6982?src=pr&el=h1) Report
> Merging [#6982](https://codecov.io/gh/huggingface/transformers/pull/6982?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/ce37be9d94da57897cce9c49b3421e6a8a927d4a?el=desc) will **increase** coverage by `2.39%`.
> The diff coverage is `100.00%`.
[](https://codecov.io/gh/huggingface/transformers/pull/6982?src=pr&el=tree)
```diff
@@ Coverage Diff @@
## master #6982 +/- ##
==========================================
+ Coverage 77.60% 80.00% +2.39%
==========================================
Files 161 161
Lines 30120 30119 -1
==========================================
+ Hits 23374 24096 +722
+ Misses 6746 6023 -723
```
| [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/6982?src=pr&el=tree) | Coverage Δ | |
|---|---|---|
| [src/transformers/generation\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/6982/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9nZW5lcmF0aW9uX3V0aWxzLnB5) | `97.20% <100.00%> (-0.01%)` | :arrow_down: |
| [src/transformers/modeling\_tf\_electra.py](https://codecov.io/gh/huggingface/transformers/pull/6982/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9lbGVjdHJhLnB5) | `25.13% <0.00%> (-73.83%)` | :arrow_down: |
| [src/transformers/modeling\_marian.py](https://codecov.io/gh/huggingface/transformers/pull/6982/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19tYXJpYW4ucHk=) | `60.00% <0.00%> (-30.00%)` | :arrow_down: |
| [src/transformers/activations.py](https://codecov.io/gh/huggingface/transformers/pull/6982/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9hY3RpdmF0aW9ucy5weQ==) | `85.00% <0.00%> (-5.00%)` | :arrow_down: |
| [src/transformers/modeling\_bart.py](https://codecov.io/gh/huggingface/transformers/pull/6982/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19iYXJ0LnB5) | `93.90% <0.00%> (-0.51%)` | :arrow_down: |
| [src/transformers/modeling\_tf\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/6982/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl91dGlscy5weQ==) | `86.97% <0.00%> (-0.33%)` | :arrow_down: |
| [src/transformers/tokenization\_utils\_base.py](https://codecov.io/gh/huggingface/transformers/pull/6982/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fdXRpbHNfYmFzZS5weQ==) | `93.76% <0.00%> (-0.28%)` | :arrow_down: |
| [src/transformers/generation\_tf\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/6982/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9nZW5lcmF0aW9uX3RmX3V0aWxzLnB5) | `85.21% <0.00%> (+1.00%)` | :arrow_up: |
| [src/transformers/modeling\_lxmert.py](https://codecov.io/gh/huggingface/transformers/pull/6982/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19seG1lcnQucHk=) | `90.76% <0.00%> (+20.74%)` | :arrow_up: |
| [src/transformers/modeling\_tf\_flaubert.py](https://codecov.io/gh/huggingface/transformers/pull/6982/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9mbGF1YmVydC5weQ==) | `88.34% <0.00%> (+63.80%)` | :arrow_up: |
| ... and [2 more](https://codecov.io/gh/huggingface/transformers/pull/6982/diff?src=pr&el=tree-more) | |
------
[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/6982?src=pr&el=continue).
> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)
> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data`
> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/6982?src=pr&el=footer). Last update [ce37be9...85dd09d](https://codecov.io/gh/huggingface/transformers/pull/6982?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).
|
transformers | 6,981 | closed | LongformerForQuestionAnswering sample code error | ## Environment info
- `transformers` version: 2.11.0
- Platform: Linux-4.19.112+-x86_64-with-debian-buster-sid
- Python version: 3.7.6
- PyTorch version (GPU?): 1.5.1 (False)
- Tensorflow version (GPU?): 2.3.0 (False)
- Using GPU in script?: No
- Using distributed or parallel set-up in script?: No
Longformer/Reformer: @patrickvonplaten
## Information
Model I am using LongformerForQuestionAnswering
The problem arises when using:
[https://huggingface.co/transformers/v2.11.0/model_doc/longformer.html#longformerforquestionanswering](url)
## To reproduce
Steps to reproduce the behavior:
1. I ran the following code on Kaggle kernel
```python
from transformers import LongformerTokenizer, LongformerForQuestionAnswering
import torch

tokenizer = LongformerTokenizer.from_pretrained("allenai/longformer-large-4096-finetuned-triviaqa")
model = LongformerForQuestionAnswering.from_pretrained("allenai/longformer-large-4096-finetuned-triviaqa", return_dict=True)

question, text = "Who was Jim Henson?", "Jim Henson was a nice puppet"
encoding = tokenizer(question, text, return_tensors="pt")
input_ids = encoding["input_ids"]
attention_mask = encoding["attention_mask"]

outputs = model(input_ids, attention_mask=attention_mask)
start_logits = outputs.start_logits
end_logits = outputs.end_logits
all_tokens = tokenizer.convert_ids_to_tokens(input_ids[0].tolist())

answer_tokens = all_tokens[torch.argmax(start_logits) :torch.argmax(end_logits)+1]
answer = tokenizer.decode(tokenizer.convert_tokens_to_ids(answer_tokens))  # remove space prepending space token
```
**Following error occurred**
```
---------------------------------------------------------------------------
TypeError                                 Traceback (most recent call last)
<ipython-input-7-70f3a4bf4161> in <module>
      2 import torch
      3 tokenizer = LongformerTokenizer.from_pretrained("allenai/longformer-large-4096-finetuned-triviaqa")
----> 4 model = LongformerForQuestionAnswering.from_pretrained("allenai/longformer-large-4096-finetuned-triviaqa", return_dict=True)
      5 question, text = "Who was Jim Henson?", "Jim Henson was a nice puppet"
      6 encoding = tokenizer(question, text, return_tensors="pt")

/opt/conda/lib/python3.7/site-packages/transformers/modeling_utils.py in from_pretrained(cls, pretrained_model_name_or_path, *model_args, **kwargs)
    653
    654         # Instantiate model.
--> 655         model = cls(config, *model_args, **model_kwargs)
    656
    657         if state_dict is None and not from_tf:

TypeError: __init__() got an unexpected keyword argument 'return_dict'
```
> and after doing the following changes:
_encoding = tokenizer(question, text, return_tensors="pt") -> encoding = tokenizer(question, text)_
the following error occurred
```
---------------------------------------------------------------------------
TypeError                                 Traceback (most recent call last)
<ipython-input-10-54aabf4ba7c0> in <module>
      1 question, text = "Who was Jim Henson?", "Jim Henson was a nice puppet"
----> 2 encoding = tokenizer(question, text, return_tensors="pt")

TypeError: 'LongformerTokenizer' object is not callable
```
| 09-07-2020 04:51:49 | 09-07-2020 04:51:49 | `tokenizer` is not a callable in v.2.11.0, which is why you are getting this error. Also the example that you posted is from stable docs not from 2.11.0, upgrading to latest stable version should resolve the issue.<|||||>@patil-suraj
Sorry for asking this naive question, but how do I upgrade to the latest stable version?
and
What about the first error?
TypeError: init() got an unexpected keyword argument 'return_dict'
Can you please point to some reference or any link?<|||||>Upgrading to latest version should also resolve the first issue. To upgrade
`pip install -U transformers`<|||||>Thank You. It worked. |
transformers | 6,980 | closed | [gen utils] missing else case | 1. `else` is missing - I hit that case while porting a model. Probably needs to assert there?
2. also the comment on top seems to be outdated (just vocab_size is being set there)
| 09-07-2020 02:50:42 | 09-07-2020 02:50:42 | # [Codecov](https://codecov.io/gh/huggingface/transformers/pull/6980?src=pr&el=h1) Report
> Merging [#6980](https://codecov.io/gh/huggingface/transformers/pull/6980?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/ce37be9d94da57897cce9c49b3421e6a8a927d4a?el=desc) will **increase** coverage by `0.39%`.
> The diff coverage is `n/a`.
[](https://codecov.io/gh/huggingface/transformers/pull/6980?src=pr&el=tree)
```diff
@@ Coverage Diff @@
## master #6980 +/- ##
==========================================
+ Coverage 77.60% 77.99% +0.39%
==========================================
Files 161 161
Lines 30120 30120
==========================================
+ Hits 23374 23492 +118
+ Misses 6746 6628 -118
```
| [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/6980?src=pr&el=tree) | Coverage Δ | |
|---|---|---|
| [src/transformers/generation\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/6980/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9nZW5lcmF0aW9uX3V0aWxzLnB5) | `97.21% <ø> (ø)` | |
| [src/transformers/modeling\_tf\_longformer.py](https://codecov.io/gh/huggingface/transformers/pull/6980/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9sb25nZm9ybWVyLnB5) | `16.37% <0.00%> (-82.31%)` | :arrow_down: |
| [src/transformers/configuration\_lxmert.py](https://codecov.io/gh/huggingface/transformers/pull/6980/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9jb25maWd1cmF0aW9uX2x4bWVydC5weQ==) | `20.00% <0.00%> (-80.00%)` | :arrow_down: |
| [src/transformers/modeling\_tf\_lxmert.py](https://codecov.io/gh/huggingface/transformers/pull/6980/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9seG1lcnQucHk=) | `22.49% <0.00%> (-71.63%)` | :arrow_down: |
| [src/transformers/tokenization\_albert.py](https://codecov.io/gh/huggingface/transformers/pull/6980/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fYWxiZXJ0LnB5) | `28.84% <0.00%> (-58.66%)` | :arrow_down: |
| [src/transformers/modeling\_lxmert.py](https://codecov.io/gh/huggingface/transformers/pull/6980/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19seG1lcnQucHk=) | `23.50% <0.00%> (-46.52%)` | :arrow_down: |
| [src/transformers/modeling\_roberta.py](https://codecov.io/gh/huggingface/transformers/pull/6980/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19yb2JlcnRhLnB5) | `77.73% <0.00%> (-19.35%)` | :arrow_down: |
| [src/transformers/generation\_tf\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/6980/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9nZW5lcmF0aW9uX3RmX3V0aWxzLnB5) | `86.46% <0.00%> (+2.25%)` | :arrow_up: |
| [src/transformers/modeling\_tf\_flaubert.py](https://codecov.io/gh/huggingface/transformers/pull/6980/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9mbGF1YmVydC5weQ==) | `88.34% <0.00%> (+63.80%)` | :arrow_up: |
| [src/transformers/modeling\_tf\_mobilebert.py](https://codecov.io/gh/huggingface/transformers/pull/6980/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9tb2JpbGViZXJ0LnB5) | `96.91% <0.00%> (+72.35%)` | :arrow_up: |
| ... and [2 more](https://codecov.io/gh/huggingface/transformers/pull/6980/diff?src=pr&el=tree-more) | |
------
[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/6980?src=pr&el=continue).
> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)
> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data`
> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/6980?src=pr&el=footer). Last update [ce37be9...2b9171e](https://codecov.io/gh/huggingface/transformers/pull/6980?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).
|
transformers | 6,979 | closed | RunTime Error: CUDA out of memory when running trainer.train() | ## Environment info
- `transformers` version: 3.1.0
- Platform: Linux-4.19.112+-x86_64-with-Ubuntu-18.04-bionic
- Python version: 3.6.9
- PyTorch version (GPU?): 1.6.0+cu101 (True)
- Tensorflow version (GPU?): not installed (NA)
- Using GPU in script?: P100 GPU instance (Google Colab)
- Using distributed or parallel set-up in script?: No
### Who can help
@sgugger @julien-c @LysandreJik
## Information
Model I am using (Bert, XLNet ...): RoBERTa with Byte-Pair Encoder (loading from the checkpointed pre-trained model on HuggingFace model hub).
The problem arises when using:
* [x] the official example scripts: (give details below)
* [x ] my own modified scripts: (give details below)
I modify a variant of the [`01_how_to_train.ipynb`,](https://colab.research.google.com/github/huggingface/blog/blob/master/notebooks/01_how_to_train.ipynb) replacing the `LinebyLineTextDataset` as it results in Out of Memory Issues with my text corpus. I built a hugging face NLP dataset which tokenizes the corpus.
The tasks I am working on is:
* [ ] an official GLUE/SQUaD task: (give the name)
* [x] my own task or dataset: (give details below): the PubChem 1M SELFIES set, a set of one million SELFIES Strings. "SELFIES" is a 100% chemically valid molecular string representation. You can view the library [here](https://github.com/aspuru-guzik-group/selfies) (I'm one of the developers).
## To reproduce
Steps to reproduce the behavior:
Reproducing requires a copy of the `shard_00_selfies.txt` dataset (click [here](https://drive.google.com/file/d/1DRq8UgBaKSNfyYtqNMyQ4WimG64sSOFR/view?usp=sharing) for a drive link), as well as the tokenizer's files which can be loaded from the HuggingFace hub with the following name: `seyonec/BPE_SELFIES_PubChem_shard00_50k`
From there, you can just run a variant of the following colab file, with modified file paths of course: https://colab.research.google.com/drive/1a4edCW1b2rSVA_bkqEaywhvGMM_nQzRC?usp=sharing
## Expected behavior
Strangely, the Trainer doesn't output a link to the Weights and Biases run page like it normally did before (using a near-identical script only a couple of days ago). It throws a CUDA out of memory error once I run the `trainer.train()` command:

<img width="1200" alt="Screen Shot 2020-09-06 at 10 45 58 PM" src="https://user-images.githubusercontent.com/46096704/92343851-d8b73c00-f092-11ea-8c89-a0034191975d.png">
Thanks for the help! Any advice or help is desperately welcome, I've been stuck for the past days with various memory issues with tokenization, and now with running the trainer class for some reason. 😄
You can check out the main public repository for this project, alongside with the abstract, and more [here](https://github.com/seyonechithrananda/bert-loves-chemistry)! | 09-07-2020 02:48:13 | 09-07-2020 02:48:13 | Hi @seyonechithrananda , I'm facing the same problem, how did you solve this issue?<|||||>Ping here about that, I'm having the same problem<|||||>I am suffering from the same problem, would like to know the solution if there is any<|||||>Same problem here. When I run this code on my machine:
https://github.com/huggingface/notebooks/blob/master/examples/token_classification.ipynb
https://colab.research.google.com/drive/1-JIJlao4dI-Ilww_NnTc0rxtp-ymgDgM?usp=sharing
I get:
RuntimeError: CUDA out of memory. Tried to allocate 192.00 MiB (GPU 0; 1.96 GiB total capacity; 785.01 MiB already allocated; 111.25 MiB free; 832.00 MiB reserved in total by PyTorch)
```
+-----------------------------------------------------------------------------+
| NVIDIA-SMI 460.32.03    Driver Version: 460.32.03    CUDA Version: 11.2     |
|-------------------------------+----------------------+----------------------+
| GPU  Name        Persistence-M| Bus-Id        Disp.A | Volatile Uncorr. ECC |
| Fan  Temp  Perf  Pwr:Usage/Cap|         Memory-Usage | GPU-Util  Compute M. |
|                               |                      |               MIG M. |
|===============================+======================+======================|
|   0  GeForce MX250       Off  | 00000000:02:00.0 Off |                  N/A |
| N/A   56C    P3    N/A /  N/A |   1897MiB /  2002MiB |      0%      Default |
|                               |                      |                  N/A |
+-------------------------------+----------------------+----------------------+
```
<|||||>Hi, I'm also facing this issue. Has any solution been found?<|||||>I am having the same issue and it keeps recurring. Any solution? |
transformers | 6,978 | closed | [gen utils] missing else case | This code: https://github.com/huggingface/transformers/blob/master/src/transformers/generation_utils.py#L361-L369
```
# current position and vocab size
if hasattr(self.config, "vocab_size"):
vocab_size = self.config.vocab_size
elif (
self.config.is_encoder_decoder
and hasattr(self.config, "decoder")
and hasattr(self.config.decoder, "vocab_size")
):
vocab_size = self.config.decoder.vocab_size
```
1. The `else` case is missing (I hit it while porting a model). It should probably raise an error there (full block sketched below):
```
raise ValueError("either self.config.vocab_size or self.config.decoder.vocab_size need to be defined")
```
2. Also, the comment at the top seems outdated (only `vocab_size` is set there).
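Put together, the whole block could look roughly like this (a standalone sketch of point 1, using the attribute names quoted above):

```python
def _get_vocab_size(config):
    if hasattr(config, "vocab_size"):
        return config.vocab_size
    if (
        config.is_encoder_decoder
        and hasattr(config, "decoder")
        and hasattr(config.decoder, "vocab_size")
    ):
        return config.decoder.vocab_size
    raise ValueError(
        "either self.config.vocab_size or self.config.decoder.vocab_size needs to be defined"
    )
```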
| 09-07-2020 02:46:53 | 09-07-2020 02:46:53 | sent PR https://github.com/huggingface/transformers/pull/6980 |
transformers | 6,977 | closed | [s2s] warn if --fp16 for torch 1.6 | <!-- This line specifies which issue to close after the pull request is merged. -->
Fixes #{issue number}
| 09-06-2020 23:28:32 | 09-06-2020 23:28:32 | # [Codecov](https://codecov.io/gh/huggingface/transformers/pull/6977?src=pr&el=h1) Report
> Merging [#6977](https://codecov.io/gh/huggingface/transformers/pull/6977?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/f72fe1f31aca235c7f675680832cc364efe4088e?el=desc) will **increase** coverage by `0.59%`.
> The diff coverage is `n/a`.
[](https://codecov.io/gh/huggingface/transformers/pull/6977?src=pr&el=tree)
```diff
@@ Coverage Diff @@
## master #6977 +/- ##
==========================================
+ Coverage 79.45% 80.04% +0.59%
==========================================
Files 161 161
Lines 30120 30120
==========================================
+ Hits 23931 24109 +178
+ Misses 6189 6011 -178
```
| [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/6977?src=pr&el=tree) | Coverage Δ | |
|---|---|---|
| [src/transformers/modeling\_tf\_flaubert.py](https://codecov.io/gh/huggingface/transformers/pull/6977/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9mbGF1YmVydC5weQ==) | `24.53% <0.00%> (-63.81%)` | :arrow_down: |
| [src/transformers/modeling\_tf\_t5.py](https://codecov.io/gh/huggingface/transformers/pull/6977/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl90NS5weQ==) | `26.05% <0.00%> (-63.52%)` | :arrow_down: |
| [src/transformers/modeling\_lxmert.py](https://codecov.io/gh/huggingface/transformers/pull/6977/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19seG1lcnQucHk=) | `70.01% <0.00%> (-20.75%)` | :arrow_down: |
| [src/transformers/configuration\_t5.py](https://codecov.io/gh/huggingface/transformers/pull/6977/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9jb25maWd1cmF0aW9uX3Q1LnB5) | `85.71% <0.00%> (-10.72%)` | :arrow_down: |
| [src/transformers/modeling\_t5.py](https://codecov.io/gh/huggingface/transformers/pull/6977/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190NS5weQ==) | `71.02% <0.00%> (-5.69%)` | :arrow_down: |
| [src/transformers/generation\_tf\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/6977/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9nZW5lcmF0aW9uX3RmX3V0aWxzLnB5) | `82.95% <0.00%> (-3.76%)` | :arrow_down: |
| [src/transformers/modeling\_tf\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/6977/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl91dGlscy5weQ==) | `85.01% <0.00%> (-2.29%)` | :arrow_down: |
| [src/transformers/data/data\_collator.py](https://codecov.io/gh/huggingface/transformers/pull/6977/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9kYXRhL2RhdGFfY29sbGF0b3IucHk=) | `91.90% <0.00%> (-0.41%)` | :arrow_down: |
| [src/transformers/file\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/6977/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9maWxlX3V0aWxzLnB5) | `82.41% <0.00%> (-0.26%)` | :arrow_down: |
| [src/transformers/modeling\_bart.py](https://codecov.io/gh/huggingface/transformers/pull/6977/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19iYXJ0LnB5) | `94.41% <0.00%> (+0.16%)` | :arrow_up: |
| ... and [11 more](https://codecov.io/gh/huggingface/transformers/pull/6977/diff?src=pr&el=tree-more) | |
------
[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/6977?src=pr&el=continue).
> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)
> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data`
> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/6977?src=pr&el=footer). Last update [f72fe1f...6b88230](https://codecov.io/gh/huggingface/transformers/pull/6977?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).
|
transformers | 6,976 | closed | LXMERT imports | # ❓ Questions & Help
Hello, I was very happy to see that LXMERT is being integrated into this library and I wanted to try it out. I got a KeyError in configuration_auto.py, as CONFIG_MAPPING had no 'lxmert' entry. I then re-installed transformers from source. The KeyError went away, but this time I encountered the following issue in modeling_auto.py:
```python
model = AutoModelWithLMHead.from_pretrained("unc-nlp/lxmert-base-uncased")
```

```
/transformers/src/transformers/modeling_auto.py", line 841, in from_pretrained
    raise ValueError(
ValueError: Unrecognized configuration class <class 'transformers.configuration_lxmert.LxmertConfig'> for this kind of AutoModel: AutoModelWithLMHead.
Model type should be one of T5Config, DistilBertConfig, AlbertConfig, CamembertConfig, XLMRobertaConfig, MarianConfig, BartConfig, LongformerConfig, RobertaConfig, BertConfig, OpenAIGPTConfig, GPT2Config, MobileBertConfig, TransfoXLConfig, XLNetConfig, FlaubertConfig, XLMConfig, CTRLConfig, ElectraConfig, EncoderDecoderConfig, ReformerConfig.
```
In the example [here](https://huggingface.co/unc-nlp/lxmert-base-uncased), AutoModelWithLMHead is imported. However, because of the error above, I also tried LxmertPreTrainedModel, which led to some other error concerning the initialization of the weights. Importing LxmertModel instead seems to work out without errors. Would that be the correct way if I want to extract features from a pretrained model?
I would appreciate any help regarding this issue!
Best regards,
Ece | 09-06-2020 20:49:08 | 09-06-2020 20:49:08 | Hey @ecekt,
Thanks for your issue!
Yes, you are correct: `"unc-nlp/lxmert-base-uncased"` should only be loaded with the `AutoModel` or `LxmertModel` class.
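For example, a minimal sketch of the loading step (note that the model's forward pass also expects visual features and positions in addition to the text inputs):

```python
from transformers import LxmertTokenizer, LxmertModel

tokenizer = LxmertTokenizer.from_pretrained("unc-nlp/lxmert-base-uncased")
model = LxmertModel.from_pretrained("unc-nlp/lxmert-base-uncased")
```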
@julien-c - the config looks correct; I'm not sure why it states `AutoModelWithLMHead` in the example here: https://huggingface.co/unc-nlp/lxmert-base-uncased<|||||>Hi @patrickvonplaten, thank you for the reply! I will explore its use with the LxmertModel import then.
Best.<|||||>This is now fixed on https://huggingface.co/unc-nlp/lxmert-base-uncased, thanks for the heads up |
transformers | 6,975 | closed | Created README for labse_bert model card | <!-- This line specifies which issue to close after the pull request is merged. -->
Fixes #{issue number}
| 09-06-2020 19:09:44 | 09-06-2020 19:09:44 | # [Codecov](https://codecov.io/gh/huggingface/transformers/pull/6975?src=pr&el=h1) Report
> Merging [#6975](https://codecov.io/gh/huggingface/transformers/pull/6975?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/f72fe1f31aca235c7f675680832cc364efe4088e?el=desc) will **increase** coverage by `0.56%`.
> The diff coverage is `n/a`.
[](https://codecov.io/gh/huggingface/transformers/pull/6975?src=pr&el=tree)
```diff
@@ Coverage Diff @@
## master #6975 +/- ##
==========================================
+ Coverage 79.45% 80.01% +0.56%
==========================================
Files 161 161
Lines 30120 30120
==========================================
+ Hits 23931 24102 +171
+ Misses 6189 6018 -171
```
| [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/6975?src=pr&el=tree) | Coverage Δ | |
|---|---|---|
| [src/transformers/modeling\_tf\_electra.py](https://codecov.io/gh/huggingface/transformers/pull/6975/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9lbGVjdHJhLnB5) | `25.13% <0.00%> (-73.83%)` | :arrow_down: |
| [src/transformers/modeling\_tf\_mobilebert.py](https://codecov.io/gh/huggingface/transformers/pull/6975/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9tb2JpbGViZXJ0LnB5) | `24.55% <0.00%> (-72.36%)` | :arrow_down: |
| [src/transformers/modeling\_marian.py](https://codecov.io/gh/huggingface/transformers/pull/6975/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19tYXJpYW4ucHk=) | `60.00% <0.00%> (-30.00%)` | :arrow_down: |
| [src/transformers/activations.py](https://codecov.io/gh/huggingface/transformers/pull/6975/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9hY3RpdmF0aW9ucy5weQ==) | `85.00% <0.00%> (-5.00%)` | :arrow_down: |
| [src/transformers/data/data\_collator.py](https://codecov.io/gh/huggingface/transformers/pull/6975/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9kYXRhL2RhdGFfY29sbGF0b3IucHk=) | `91.90% <0.00%> (-0.41%)` | :arrow_down: |
| [src/transformers/modeling\_bart.py](https://codecov.io/gh/huggingface/transformers/pull/6975/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19iYXJ0LnB5) | `93.90% <0.00%> (-0.34%)` | :arrow_down: |
| [src/transformers/modeling\_tf\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/6975/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl91dGlscy5weQ==) | `86.97% <0.00%> (-0.33%)` | :arrow_down: |
| [src/transformers/file\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/6975/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9maWxlX3V0aWxzLnB5) | `82.41% <0.00%> (-0.26%)` | :arrow_down: |
| [src/transformers/tokenization\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/6975/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fdXRpbHMucHk=) | `89.84% <0.00%> (+0.39%)` | :arrow_up: |
| [src/transformers/tokenization\_bert.py](https://codecov.io/gh/huggingface/transformers/pull/6975/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fYmVydC5weQ==) | `92.41% <0.00%> (+0.44%)` | :arrow_up: |
| ... and [10 more](https://codecov.io/gh/huggingface/transformers/pull/6975/diff?src=pr&el=tree-more) | |
------
[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/6975?src=pr&el=continue).
> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)
> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data`
> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/6975?src=pr&el=footer). Last update [f72fe1f...b65c486](https://codecov.io/gh/huggingface/transformers/pull/6975?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).
<|||||>Thanks for sharing this is great! |
transformers | 6,974 | closed | Create README.md | <!-- This line specifies which issue to close after the pull request is merged. -->
Model Card for https://huggingface.co/akhooli/mbart-large-cc25-ar-en
| 09-06-2020 15:29:55 | 09-06-2020 15:29:55 | # [Codecov](https://codecov.io/gh/huggingface/transformers/pull/6974?src=pr&el=h1) Report
> Merging [#6974](https://codecov.io/gh/huggingface/transformers/pull/6974?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/f72fe1f31aca235c7f675680832cc364efe4088e?el=desc) will **increase** coverage by `0.57%`.
> The diff coverage is `n/a`.
[](https://codecov.io/gh/huggingface/transformers/pull/6974?src=pr&el=tree)
```diff
@@ Coverage Diff @@
## master #6974 +/- ##
==========================================
+ Coverage 79.45% 80.02% +0.57%
==========================================
Files 161 161
Lines 30120 30120
==========================================
+ Hits 23931 24105 +174
+ Misses 6189 6015 -174
```
| [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/6974?src=pr&el=tree) | Coverage Δ | |
|---|---|---|
| [src/transformers/modeling\_tf\_electra.py](https://codecov.io/gh/huggingface/transformers/pull/6974/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9lbGVjdHJhLnB5) | `25.13% <0.00%> (-73.83%)` | :arrow_down: |
| [src/transformers/modeling\_tf\_mobilebert.py](https://codecov.io/gh/huggingface/transformers/pull/6974/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9tb2JpbGViZXJ0LnB5) | `24.55% <0.00%> (-72.36%)` | :arrow_down: |
| [src/transformers/modeling\_marian.py](https://codecov.io/gh/huggingface/transformers/pull/6974/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19tYXJpYW4ucHk=) | `60.00% <0.00%> (-30.00%)` | :arrow_down: |
| [src/transformers/activations.py](https://codecov.io/gh/huggingface/transformers/pull/6974/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9hY3RpdmF0aW9ucy5weQ==) | `85.00% <0.00%> (-5.00%)` | :arrow_down: |
| [src/transformers/modeling\_bart.py](https://codecov.io/gh/huggingface/transformers/pull/6974/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19iYXJ0LnB5) | `93.90% <0.00%> (-0.34%)` | :arrow_down: |
| [src/transformers/modeling\_tf\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/6974/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl91dGlscy5weQ==) | `86.97% <0.00%> (-0.33%)` | :arrow_down: |
| [src/transformers/generation\_tf\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/6974/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9nZW5lcmF0aW9uX3RmX3V0aWxzLnB5) | `86.46% <0.00%> (-0.26%)` | :arrow_down: |
| [src/transformers/tokenization\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/6974/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fdXRpbHMucHk=) | `89.84% <0.00%> (+0.39%)` | :arrow_up: |
| [src/transformers/data/data\_collator.py](https://codecov.io/gh/huggingface/transformers/pull/6974/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9kYXRhL2RhdGFfY29sbGF0b3IucHk=) | `92.71% <0.00%> (+0.40%)` | :arrow_up: |
| [src/transformers/tokenization\_bert.py](https://codecov.io/gh/huggingface/transformers/pull/6974/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fYmVydC5weQ==) | `92.41% <0.00%> (+0.44%)` | :arrow_up: |
| ... and [10 more](https://codecov.io/gh/huggingface/transformers/pull/6974/diff?src=pr&el=tree-more) | |
------
[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/6974?src=pr&el=continue).
> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)
> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data`
> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/6974?src=pr&el=footer). Last update [f72fe1f...efa8495](https://codecov.io/gh/huggingface/transformers/pull/6974?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).
|
transformers | 6,973 | closed | Fixed the default number of attention heads in Reformer Configuration | <!-- This line specifies which issue to close after the pull request is merged. -->
Just a simple fix. The default number of attention heads was 2 instead of 12.
| 09-06-2020 09:45:18 | 09-06-2020 09:45:18 | I'm a bit indifferent to this change, but I'm ok with setting it to `12` |
transformers | 6,972 | closed | The configuration of 3.0.2 and 3.1.0 is not compatible | The configuration of **3.0.2** and **3.1.0** is not compatible.
For example, in https://github.com/huggingface/transformers/blob/master/src/transformers/modeling_bert.py#L388, `config.chunk_size_feed_forward` should change to `getattr(config, 'chunk_size_feed_forward', 0)`, because there is no `chunk_size_feed_forward` entry in configs from the former version.
```
class BertLayer(nn.Module):
def __init__(self, config):
super().__init__()
# self.chunk_size_feed_forward = config.chunk_size_feed_forward
self.chunk_size_feed_forward = getattr(config, 'chunk_size_feed_forward', 0)  # default to 0 when the attribute is missing
self.seq_len_dim = 1
self.attention = BertAttention(config)
self.is_decoder = config.is_decoder
self.add_cross_attention = config.add_cross_attention
if self.add_cross_attention:
assert self.is_decoder, f"{self} should be used as a decoder model if cross attention is added"
self.crossattention = BertAttention(config)
self.intermediate = BertIntermediate(config)
self.output = BertOutput(config)
``` | 09-06-2020 08:25:34 | 09-06-2020 08:25:34 | Issue #6950 reports the same problem.<|||||>I see what you mean! From `3.1.0` onwards every configuration has a `config.chunk_size_feed_forward` parameter. So as far as I can see, whenever a config is loaded (whether via `.from_pretrained()` or with `BertConfig(...)`), this parameter is part of the config. Can you give me an example where this would not be the case?<|||||>@patrickvonplaten Thanks for your reply.
Yes, I load the pretrained model with my own code, which causes this problem.
For users who only use part of the structures/classes in Transformers, is it necessary to maintain a certain level of compatibility?<|||||>> I see what you mean! From `3.1.0` onwards every configuration has a `config.chunk_size_feed_forward` parameter. So as far as I can see, whenever a config is loaded (whether via `.from_pretrained()` or with `BertConfig(...)`), this parameter is part of the config. Can you give me an example where this would not be the case?
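For reference, a quick check of the new default (just a sketch confirming what you describe):

```python
from transformers import BertConfig

config = BertConfig()
print(config.chunk_size_feed_forward)  # 0 by default in 3.1.0
```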
I saw the `chunk_size_feed_forward` in `BertConfig()` only after opening this issue. 🤪<|||||>@patrickvonplaten
In the future, I will call the modules in a more standard way.
Thanks for your reply.
I close this issue. |
transformers | 6,971 | closed | distilled bart-large/bart-base | @sshleifer Is there a distilled BART (not CNN/XSUM) model available?
Thanks! | 09-06-2020 02:32:04 | 09-06-2020 02:32:04 | Hi @kkissmart
Yes, DistilBART models are available [here](https://huggingface.co/sshleifer/distilbart-cnn-6-6)<|||||>@patil-suraj I meant a distilled BART (a pre-trained model), not a summarization model. Do I misunderstand the name?<|||||>No, we don't have that.<|||||>Ohh, AFAIK there is no pre-trained DistilBART like DistilBERT.
There are two types of distillation
1. No-teacher distillation: copies alternate layers from the pre-trained model to create a smaller student model.
2. With-teacher distillation: enforces that the student and teacher produce similar encoder_outputs, logits, and hidden_states.
You can easily create a student (no-teacher) model using the scripts [here](https://github.com/huggingface/transformers/tree/master/examples/seq2seq#no-teacher-distillation); you'll just need to use `bart-large` instead of `bart-large-cnn`.
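In case it helps, here is a rough sketch of what the no-teacher route boils down to (illustrative only; the actual script also handles position embeddings, layer norms, and other details):

```python
from transformers import BartConfig, BartForConditionalGeneration

teacher = BartForConditionalGeneration.from_pretrained("facebook/bart-large")

# Student with half the decoder layers (6 instead of 12).
student_config = BartConfig.from_pretrained("facebook/bart-large", decoder_layers=6)
student = BartForConditionalGeneration(student_config)

# Copy the shared embeddings and the full encoder, then every other decoder layer.
student.model.shared.load_state_dict(teacher.model.shared.state_dict())
student.model.encoder.load_state_dict(teacher.model.encoder.state_dict())
for student_idx, teacher_idx in enumerate(range(0, 12, 2)):
    student.model.decoder.layers[student_idx].load_state_dict(
        teacher.model.decoder.layers[teacher_idx].state_dict()
    )
student.save_pretrained("student-bart-12-6")
```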
Pre-training a DistilBART is still not included; however, you can train the large model on the downstream task and then do with-teacher distillation to obtain a smaller distilled model.<|||||>Also, consider asking such non-bug questions on the forum https://discuss.huggingface.co/ :-)<|||||>Is there a distilBART base model that does not have pretraining weights?<|||||>There is no distilbart-base model.
There are only distilled models fine-tuned on summarization tasks.<|||||>This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.
<|||||>@sshleifer @patil-suraj
```
You can easily create a student (no teacher) model using the scripts here , you'll just need to use bart-large instead of bart-large-cnn.
https://github.com/huggingface/transformers/tree/master/examples/seq2seq#no-teacher-distillation or
https://github.com/huggingface/transformers/tree/master/examples/legacy/seq2seq#no-teacher-distillation ?
```
I want to make a DistilBART model from my Japanese BART-large model, but there is no script at this link. Has the script been kept private? I would like to see it.
<|||||>Hi @hisashi-ito
The seq2seq distillation scripts are now moved under `examples/research_projects/seq2seq-distillation` directory. You can find them here.
https://github.com/huggingface/transformers/tree/master/examples/research_projects/seq2seq-distillation<|||||>Hi @patil-suraj
Thank you for teaching !! |
transformers | 6,970 | closed | Error installing transformers 3.1.0 | ## Environment info
<!-- You can run the command `transformers-cli env` and copy-and-paste its output below.
Don't forget to fill out the missing fields in that output! -->
- `transformers` version: 3.1.0
- Platform: Darwin-19.6.0-x86_64-i386-64bit
- Python version: 3.7.7
- PyTorch version (GPU?): 1.6.0 (False)
- Tensorflow version (GPU?): not installed (NA)
- Using GPU in script?: No
- Using distributed or parallel set-up in script?: No
### Who can help
<!-- Your issue will be replied to more quickly if you can figure out the right person to tag with @
If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**.
Please tag fewer than 3 people.
albert, bert, GPT2, XLM: @LysandreJik
tokenizers: @mfuntowicz
Trainer: @sgugger
Speed and Memory Benchmarks: @patrickvonplaten
Model Cards: @julien-c
Translation: @sshleifer
Summarization: @sshleifer
TextGeneration: @TevenLeScao
examples/distillation: @VictorSanh
nlp datasets: [different repo](https://github.com/huggingface/nlp)
rust tokenizers: [different repo](https://github.com/huggingface/tokenizers)
Text Generation: @TevenLeScao
blenderbot: @mariamabarham
Bart: @sshleifer
Marian: @sshleifer
T5: @patrickvonplaten
Longformer/Reformer: @patrickvonplaten
TransfoXL/XLNet: @TevenLeScao
examples/seq2seq: @sshleifer
examples/bert-loses-patience: @JetRunner
tensorflow: @jplu
examples/token-classification: @stefan-it
documentation: @sgugger
-->
## Information
Model I am using (Bert, XLNet ...):
pipeline("zero-shot-classification")
The problem arises when using:
pip3 install transformers=="3.1.0"
The tasks I am working on is:
Just installing the package to use zero-shot-classification
## To reproduce
Steps to reproduce the behavior:
1. pip3 install transformers=="3.1.0"
Alternatively:
1. pip3 install tokenizers=="0.8.1.rc2"
Notes:
It seems that the tokenizers version '0.8.1.rc2' is the issue. I can install just fine on different systems by changing the version to '0.8.0' in transformers/setup.py. Alternatively, `pip3 install transformers=="3.1.0" tokenizers=="0.8.0"` seems to be a working method of installation, but tokenizers version "0.8.1.rc2" still has the error.
<!-- If you have code snippets, error messages, stack traces please provide them here as well.
Important! Use code tags to correctly format your code. See https://help.github.com/en/github/writing-on-github/creating-and-highlighting-code-blocks#syntax-highlighting
Do not use screenshots, as they are hard to read and (more importantly) don't allow others to copy-and-paste your code.-->
error: build failed
/tmp/pip-build-env-mey29riz/overlay/lib/python3.6/site-packages/setuptools/dist.py:452: UserWarning: Normalizing '0.8.1.rc2' to '0.8.1rc2'
warnings.warn(tmpl.format(**locals()))
cargo rustc --lib --manifest-path Cargo.toml --features pyo3/extension-module --release --verbose -- --crate-type cdylib
error: cargo failed with code: 101
ERROR: Failed building wheel for tokenizers
Running setup.py clean for tokenizers
Failed to build tokenizers
ERROR: Could not build wheels for tokenizers which use PEP 517 and cannot be installed directly
## Expected behavior
<!-- A clear and concise description of what you would expect to happen. -->
The expected behaviour would be to install properly without error. | 09-05-2020 22:29:00 | 09-05-2020 22:29:00 | Yeah the latest release of tokenizers is 0.8.1 but the current code still references the rc2:
https://github.com/huggingface/transformers/blob/master/setup.py#L113<|||||>I was able to get this working by installing tokenizers 0.8.1. I then
installed transformers 3.1.0 without dependencies using --no-dependencies
flag (had to install a few other dependencies manually).
Your mileage may vary.
<|||||>I simply installed the transformer 3.0.0 version until they fix this problem.
`python3 -m pip install transformers==3.0.0`<|||||>> I simply installed the transformer 3.0.0 version until they fix this problem.
> `python3 -m pip install transformers==3.0.0`
I need version 3.1.0 for the latest 0-shot pipeline. But the following fixed the problem that @alexuadler mentioned:
pip3 install tokenizers=="0.8.1"
pip3 install transformers=="3.1.0" --no-dependencies<|||||>This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.
|
transformers | 6,969 | closed | Incorrect loss calculation for the last batch in TFTrainer if dataloader_drop_last is False | ## Environment info
<!-- You can run the command `transformers-cli env` and copy-and-paste its output below.
Don't forget to fill out the missing fields in that output! -->
- `transformers` version: 3.1.0
- Platform: Linux-4.15.0-115-generic-x86_64-with-debian-buster-sid
- Python version: 3.6.7
- PyTorch version (GPU?): 1.5.1+cpu (False)
- Tensorflow version (GPU?): 2.2.0 (False)
- Using GPU in script?: No
- Using distributed or parallel set-up in script?: No
### Who can help
<!-- Your issue will be replied to more quickly if you can figure out the right person to tag with @
If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**.
Please tag fewer than 3 people.
-->
tensorflow: @jplu
## Description
In [training_step()](https://github.com/huggingface/transformers/blob/master/src/transformers/trainer_tf.py#L595) in `trainer_tf.py`, we have
scaled_loss = per_example_loss / self.total_train_batch_size
However, if `dataloader_drop_last=False`, the last batch (before being distributed to replicas) won't necessarily have `self.total_train_batch_size` examples. If we allow `dataloader_drop_last=False`, we need a way to dynamically calculate the actual number of examples in a global batch and pass this information in some way to the replicas.
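A rough sketch of the kind of change I have in mind (the function name is illustrative, not the actual `TFTrainer` code):

```python
import tensorflow as tf

def scale_loss(per_example_loss, nb_instances_in_global_batch):
    # Divide by the *actual* number of examples in the global batch
    # (e.g. tf.shape(labels)[0] computed before the batch is distributed
    # to the replicas), not by the nominal total_train_batch_size.
    return per_example_loss / tf.cast(
        nb_instances_in_global_batch, dtype=per_example_loss.dtype
    )
```

The key point is that the count has to be computed once per global batch, before distribution, and then travel with the batch into each replica's step.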
| 09-05-2020 21:53:51 | 09-05-2020 21:53:51 | Thanks @chiapas! Indeed if the batch size becomes lower for the last step we divide by a wrong number. I will investigate this to better handle this edge case.<|||||>This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.
|
transformers | 6,968 | closed | Potential incorrect loss calculation for TFTokenClassification in TFTrainer | ## Environment info
<!-- You can run the command `transformers-cli env` and copy-and-paste its output below.
Don't forget to fill out the missing fields in that output! -->
- `transformers` version: 3.1.0
- Platform: Linux-4.15.0-115-generic-x86_64-with-debian-buster-sid
- Python version: 3.6.7
- PyTorch version (GPU?): 1.5.1+cpu (False)
- Tensorflow version (GPU?): 2.2.0 (False)
- Using GPU in script?: No
- Using distributed or parallel set-up in script?: No
### Who can help
<!-- Your issue will be replied to more quickly if you can figure out the right person to tag with @
If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**.
-->
Trainer: @sgugger
tensorflow: @jplu
examples/token-classification: @stefan-it
Mostly for @jplu, potentially for @stefan-it (because the workaround I have in mind requires a bit change in the token classification dataset).
## Information
The problem arises when using:
* [x] The official example scripts:
The involved scripts are:
- https://github.com/huggingface/transformers/blob/master/src/transformers/trainer_tf.py
- https://github.com/huggingface/transformers/blob/master/examples/token-classification/run_tf_ner.py
However, in order to demonstrate the issue in a more clear way, I use a minimal example which doesn't use directly these two scripts. See the description and code snippet below.
The tasks I am working on is:
* [x] Official token classification task in TensorFlow
## Description
In [trainer_tf.py](https://github.com/huggingface/transformers/blob/master/src/transformers/trainer_tf.py#L595), the loss calculation is calculated from `per_example_loss` divided by `total_train_batch_size`.
per_example_loss, _ = self.run_model(features, labels, True)
scaled_loss = per_example_loss / self.total_train_batch_size
Here `total_train_batch_size` is the size of a whole batch that will be distributed to (potentially) different replicas and optionally consisting of several smaller batches for accumulation steps.
For sentence level tasks, where each example (i.e. sentence) corresponds to a label (for example, sentence classification), the above loss calculation is correct.
However, for token level tasks like token classification, the the above loss seems incorrect to me. For such tasks, the loss should be the per example losses **divided by the number of real tokens involved in the batch**.
In [utils_ner](https://github.com/huggingface/transformers/blob/master/examples/token-classification/utils_ner.py#L75), `convert_examples_to_features` set labels to `-100` for padding tokens and other special tokens (`[CLS]`, `[SEP]`, etc), which are the places to be ignored for loss calculation. Therefore, the loss calculation should be the per example losses **divided by the number of labels that are not -100 in the \*_batch_\***.
By **\*_batch_\***, it should be careful that it is not the batch received by a single replica, and neither the smaller batch in a single accumulation step. It means `the whole batch that will be distributed to (potentially) different replicas and optionally consisting of several smaller batches for accumulation steps.` More precisely, it means a batch passed to [distributed_training_steps()](https://github.com/huggingface/transformers/blob/master/src/transformers/trainer_tf.py#L651) - for the same reason as we divide per example losses by `total_train_batch_size` for sentence level tasks, rather than dividing it by the size of batch received by a single replica.
In order to calculate the correct loss values, we have to pass the global information - the number of labels that are not `-100` in a `global batch` to each replica. I don't know a clean way to do it, but for my own personal projects, I inject this extra information into global batches as a constant, and each replica receiving a distributed smaller batch will have this information to calculate the correct scaled losses.
(I have a notebook showing how to perform it, if you want to look it, let me know.)
## Code Snippets
<!-- If you have code snippets, error messages, stack traces please provide them here as well.
Important! Use code tags to correctly format your code. See https://help.github.com/en/github/writing-on-github/creating-and-highlighting-code-blocks#syntax-highlighting
Do not use screenshots, as they are hard to read and (more importantly) don't allow others to copy-and-paste your code.-->
Here is a minimal example to demonstrate the issue.
Here, we have only one real example (sentence) and `n_empty_string` empty sentences.
Each empty sentence will give only [CLS], [SEP] and [PAD] tokens that will be ignored for token classification.
import os
os.environ['TF_DETERMINISTIC_OPS'] = '1'
SEED = 42
name = 'distilbert-base-uncased'
seq_len = 8
num_labels = 2
n_empty_string = 10
import tensorflow as tf
tf.random.set_seed(SEED)
strategy = tf.distribute.OneDeviceStrategy(device="/cpu:0")
from transformers import TFTrainer, AutoConfig, AutoTokenizer, TFAutoModelForTokenClassification
from transformers.training_args_tf import TFTrainingArguments
text = [
'My dog is cute'
]
text.extend([''] * n_empty_string)
n_examples = len(text)
config = AutoConfig.from_pretrained(
name,
num_labels=num_labels
)
tokenizer = AutoTokenizer.from_pretrained(name)
model = TFAutoModelForTokenClassification.from_pretrained(
name
)
training_args = TFTrainingArguments(
output_dir='./tmp/',
per_device_train_batch_size=n_examples,
gradient_accumulation_steps=1,
seed=SEED
)
# Initialize our Trainer
trainer = TFTrainer(
model=model,
args=training_args,
train_dataset=None,
eval_dataset=None,
compute_metrics=None
)
trainer.total_train_batch_size = strategy.num_replicas_in_sync \
* training_args.per_device_train_batch_size \
* training_args.gradient_accumulation_steps
trainer.train_loss = tf.keras.metrics.Sum()
features = tokenizer.batch_encode_plus(text, max_length=seq_len, padding='max_length', return_tensors='tf')
# Set all labels to `1`, except for special tokens: cls/sep/pad, where the labels are `-100`.
labels = tf.constant(1, shape=[n_examples, seq_len])
for token_id in [tokenizer.pad_token_id] + tokenizer.all_special_ids:
labels = labels * tf.cast(features['input_ids'] != token_id, dtype=tf.int32) + \
-100 * tf.cast(features['input_ids'] == token_id, dtype=tf.int32)
# Only the first example `features[0]` has real tokens, the other examples have only [PAD].
print(features['input_ids'])
# Only the first example has labels that won't be ignored.
print(labels)
# Copy from:
# https://github.com/huggingface/transformers/blob/master/src/transformers/trainer_tf.py#L601
per_example_loss, _ = trainer.run_model(features, labels, True)
scaled_loss = per_example_loss / trainer.total_train_batch_size
print(scaled_loss)
## Expected behavior
<!-- A clear and concise description of what you would expect to happen. -->
When `n_empty_string = 0`, we get `scaled_loss`
tf.Tensor([0.56047076 0.46507886 0.51456743 0.50131255], shape=(4,), dtype=float32)
When `n_empty_string = 9`, we get `scaled_loss`
tf.Tensor([0.05604707 0.04650789 0.05145674 0.05013125], shape=(4,), dtype=float32)
However, in both cases we should get the same value, which should be
tf.Tensor([0.56047076 0.46507886 0.51456743 0.50131255], shape=(4,), dtype=float32) | 09-05-2020 20:59:01 | 09-05-2020 20:59:01 | Hello @chiapas!
Thanks for investigating this! Have you checked the way we compute the loss for token classification? Right here https://github.com/huggingface/transformers/blob/master/src/transformers/modeling_tf_utils.py#L161
We are already ignoring all the tokens that have -100 as label. So if there is an issue it might come from somewhere else.
<|||||>Hi, @jplu,
Yes, I checked that. But the issue in this bug report is not about ignoring -100 or not. The problem is that the loss is calculated from the per example losses and then divided by `total_train_batch_size`. However, for token level tasks, it should be divided by `the number of actual tokens (i.e. tokens not ignored) in the global batch (i.e. the batch that has size total_train_batch_size)`.<|||||>I don't get what you mean by the number of actual tokens?
You mean the number of batches that contain actual tokens, no? In your example, just 1?<|||||>If you want support for my claim about the denominator, we can look at the PyTorch implementation of the token classification loss in [DistilBert](https://github.com/huggingface/transformers/blob/master/src/transformers/modeling_distilbert.py#L820); you will see
loss_fct = CrossEntropyLoss()
and
loss = loss_fct(active_logits, active_labels)
And from [torch's doc](https://pytorch.org/docs/stable/generated/torch.nn.CrossEntropyLoss.html#crossentropyloss), the default reduction is `mean`. So this corresponds to the per example losses (-100 ignored) divided by the number of actual tokens (i.e. -100 ignored again).
<|||||>>
>
> I don't get what you mean by the number of actual token?
>
> You mean the number of batches that contain actual tokens, no? In your example, just 1?
For any global batch (that has size `total_train_batch_size`), after it is distributed to the replicas, we compute the per example losses on the smaller batches (where the tokens with label -100 are ignored). Then, in the current implementation, these per example losses are divided by `total_train_batch_size`.
By the actual tokens, I mean `the tokens in a global batch with labels != -100`. And what I said is that the per example losses should be divided by the number of tokens with label != -100 in that global batch.
This number of `actual` tokens will vary from one global batch to another, however.
In the example in the code snippet, there are only `4` actual tokens. But when `n_empty_string = 9`, the current implementation divides the per example losses by `10` (1 + 9 dummy sentences).
<|||||>So basically what you propose is to return a mean reduction in the `call` function instead of the effective per example loss? Something like to replace:
```
loss = None if labels is None else self.compute_loss(labels, logits)
```
By
```
loss = None if labels is None else tf.reduce_mean(self.compute_loss(labels, logits))
```
And then dividing the result by the number of replicas:
```
per_example_loss, _ = self.run_model(features, labels, True)
scaled_loss = per_example_loss / self.args.n_replicas
```
Otherwise, please show me your notebook, because I still don't get it.<|||||>@jplu ,
I will explain a bit more and also show my notebook later. But no, I am not suggesting using
tf.reduce_mean(self.compute_loss(labels, logits)) # If we do so, the average occurs on the small batch in each replica.
I mentioned the pytorch version just to show that the loss should be averaged over the tokens, not over the sentences in the batch. However, the average shouldn't be over the small batches received by each replica, it should be over the global batches.<|||||>@jplu
If you want to look at the code directly, here is my (Kaggle) notebook:
[Masked, My Dear Watson - MLM with TPU](https://www.kaggle.com/yihdarshieh/masked-my-dear-watson-mlm-with-tpu#MLM-loss-calculation). Please check `def mlm_fine_tune_step(batch):` just below that markdown cell, which has
loss_mlm = loss_fn(
labels_at_masked_tokens,
logits_at_masked_tokens
)
# divide by the number of masked tokens in the global batch, i.e. the whole batch that is distributed to the different replicas.
loss_mlm = loss_mlm / tf.cast(nb_tokens_masked[0], dtype=tf.float32)
If you prefer, I can work on this and make a PR. But I think it is better for us to agree that the loss calculation should be corrected. So I will try to explain:
1. For token level tasks, the loss value is the per example (and here, example = tokens) losses in a batch, divided by the number of tokens in that batch.
2. If we have tokens being ignored for loss calculation, the denominator above becomes the number of tokens not ignored in that batch.
3. By `batch`, it is the whole set used for 1 parameter update by gradients - which is called a `global batch`.
4. Since we use a distributed strategy, and optionally gradient accumulation, when `training_step()` processes a batch it is a small batch (i.e. a batch for `only 1 gradient accumulation step` on a `single replica`). However, the denominator in step `1.` or `2.` should be `the number of tokens, not being ignored, in a global batch`, even though the per example losses are still based on the small batch received by a replica (see the sketch after this list).
5. Since gradient accumulation will add the gradients, and distributed strategy will sync across replicas by summing the gradients before applying gradients, the above steps will give us the `averaged losses over the tokens (not ignored) in a global batch`.
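Putting the points above together, a minimal sketch of the scaling I have in mind (the helper names, and the way the token count reaches each replica, are illustrative rather than the actual `TFTrainer` API):

```python
import tensorflow as tf

def nb_active_tokens(labels):
    # Number of labels != -100 in the *global* batch, computed before the
    # batch is distributed to the replicas (points 3 and 4 above).
    return tf.reduce_sum(tf.cast(tf.not_equal(labels, -100), tf.int32))

def scale_loss(per_example_loss, nb_active_tokens_in_global_batch):
    # Point 5: summing these scaled losses over accumulation steps and
    # replicas yields the average over the non-ignored tokens.
    return per_example_loss / tf.cast(
        nb_active_tokens_in_global_batch, dtype=per_example_loss.dtype
    )
```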
Hope it makes things a bit clearer.<|||||>OK now with an example and the explanation I got it. Thank you very much!
I'd prefer you do a PR so that you get the credit for this fix :) And if you can tag me as reviewer I will be able to help you if needed, as there is certainly a nicer way to do it. Maybe with a class field?
Thanks again, waiting for your PR ^^<|||||>I am not able to assign a reviewer, since I am not a collaborator on the transformers repository yet.<|||||>Nice! I will review it carefully tomorrow. I have assigned two other people and myself as reviewers.<|||||>Hi, sorry, I created the pull request as a draft. It is not ready for review yet. I will let you know when it is ready.<|||||>No problem! Take the time you need and let me know. |
transformers | 6,967 | closed | hack to extract cross attention for bart decoder | <!-- This line specifies which issue to close after the pull request is merged. -->
Fixes #{issue number}
| 09-05-2020 20:23:49 | 09-05-2020 20:23:49 | |
transformers | 6,966 | closed | SPM Tokenizer confusion with fairseq Roberta | Hi,
I have pre-trained a custom Roberta Model from scratch with a unigram sentencepiece model (also trained from scratch). I have converted the model from fairseq to huggingface with this [script](https://github.com/huggingface/transformers/blob/master/src/transformers/convert_roberta_original_pytorch_checkpoint_to_pytorch.py) successfully.
I have tried loading the model in huggingface, which was successful, but the issue lies with the tokenizer. I tried using the RoBERTa tokenizer, but it complained because it was looking for merges and vocab files. I then loaded my spm model with AlbertTokenizer, but when I test it with a simple fill_mask, the answer tokens are incorrect. How do I correctly use my SPM model with RoBERTa? I also have dict.txt from fairseq.
Any help would be appreciated! | 09-05-2020 20:23:09 | 09-05-2020 20:23:09 | Hi @hichiaty ,
in that case I would try the CamemBERT tokenizer or the one from XLM-RoBERTa (the latter contains a hacky workaround to align the fairseq vocab with SPM...) :)<|||||>hey @stefan-it thanks for the advice! I am still slightly confused though: I loaded my spm model with CamemBERT, but for some reason it still doesn't match the tokens from fairseq's roberta.encode.
Using Fairseq's encode I get:
```
roberta.encode('HELLO')
>tensor([0, 7, 4, 6, 2])
```
With CamemBERT I get:
```
tokenizer('HELLO')
>{'input_ids': [5, 45, 36, 3863, 3595, 3595, 19, 5], 'attention_mask': [1, 1, 1, 1, 1, 1, 1, 1]}
```
I'm just trying to figure out how fairseq works at this point, because I don't even pass the spm model to it, only a dict.txt file.<|||||>Solved: I created a custom tokenizer (based on CamemBERT) that uses SPM to encode text as pieces and then fairseq's dict.txt to get the ids. |
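For anyone hitting the same thing, here is a rough sketch of the idea (the file names and the fairseq special-token offset are assumptions, not a drop-in tokenizer):

```python
import sentencepiece as spm

sp = spm.SentencePieceProcessor()
sp.Load("sentencepiece.unigram.model")  # the SPM model trained from scratch

# fairseq's dict.txt lists pieces in the order the model was trained with;
# ids 0-3 are reserved for <s>, <pad>, </s>, <unk> in a default fairseq Dictionary.
piece_to_id = {}
with open("dict.txt", encoding="utf-8") as f:
    for idx, line in enumerate(f):
        piece_to_id[line.split()[0]] = idx + 4

def encode(text, bos_id=0, eos_id=2, unk_id=3):
    pieces = sp.EncodeAsPieces(text)
    return [bos_id] + [piece_to_id.get(p, unk_id) for p in pieces] + [eos_id]
```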
transformers | 6,965 | closed | transformers-cli upload individual files simplification | Currently it's not possible to upload an individual file in a simple:
```
transformers-cli upload fsmt-wmt19-ru-en/vocab-src.json
```
Getting error:
```
Filename invalid, every file must be nested inside a "model_name" folder.
```
but, instead, have to do:
```
transformers-cli upload fsmt-wmt19-ru-en/vocab-src.json --filename fsmt-wmt19-ru-en/vocab-src.json
```
But this is silly, as the exact same input is repeated twice and I'm more likely to make an error while copy-n-pasting when providing an explicit destination filename. Why not look at the relative path and use that? And only give that error when there is no "model_name" folder in the args, i.e., definitely give an error on:
```
transformers-cli upload vocab.json
```
I understand that `--filename` is useful for renaming, but there is no renaming here.
Additionally, if the first suggestion is acceptable, would it be OK to support multiple filenames? I want to be able to update several files in one go (just the config files, but not the whole folder, since the model is huge):
```
transformers-cli upload fsmt-wmt19-ru-en/vocab-*.json
```
gives error:
```
Transformers CLI tool: error: unrecognized arguments: fsmt-wmt19-ru-en/vocab-ru.json fsmt-wmt19-ru-en/vocab-src.json fsmt-wmt19-ru-en/vocab-tgt.json
```
Thanks. | 09-05-2020 20:10:03 | 09-05-2020 20:10:03 | Well, I wrote a little script to generate the long commands that are required now - perhaps it'd be useful to someone:
```
perl -le 'for $f (@ARGV) { print qq[yes Y | transformers-cli upload $_/$f --filename $_/$f] for map { "fsmt-wmt19-$_" } ("en-ru", "ru-en", "de-en", "en-de")}' vocab-src.json vocab-tgt.json tokenizer_config.json
```
generated:
```
yes Y | transformers-cli upload fsmt-wmt19-en-ru/vocab-src.json --filename fsmt-wmt19-en-ru/vocab-src.json
yes Y | transformers-cli upload fsmt-wmt19-ru-en/vocab-src.json --filename fsmt-wmt19-ru-en/vocab-src.json
yes Y | transformers-cli upload fsmt-wmt19-de-en/vocab-src.json --filename fsmt-wmt19-de-en/vocab-src.json
yes Y | transformers-cli upload fsmt-wmt19-en-de/vocab-src.json --filename fsmt-wmt19-en-de/vocab-src.json
yes Y | transformers-cli upload fsmt-wmt19-en-ru/vocab-tgt.json --filename fsmt-wmt19-en-ru/vocab-tgt.json
yes Y | transformers-cli upload fsmt-wmt19-ru-en/vocab-tgt.json --filename fsmt-wmt19-ru-en/vocab-tgt.json
yes Y | transformers-cli upload fsmt-wmt19-de-en/vocab-tgt.json --filename fsmt-wmt19-de-en/vocab-tgt.json
yes Y | transformers-cli upload fsmt-wmt19-en-de/vocab-tgt.json --filename fsmt-wmt19-en-de/vocab-tgt.json
yes Y | transformers-cli upload fsmt-wmt19-en-ru/tokenizer_config.json --filename fsmt-wmt19-en-ru/tokenizer_config.json
yes Y | transformers-cli upload fsmt-wmt19-ru-en/tokenizer_config.json --filename fsmt-wmt19-ru-en/tokenizer_config.json
yes Y | transformers-cli upload fsmt-wmt19-de-en/tokenizer_config.json --filename fsmt-wmt19-de-en/tokenizer_config.json
yes Y | transformers-cli upload fsmt-wmt19-en-de/tokenizer_config.json --filename fsmt-wmt19-en-de/tokenizer_config.json
```
As I have an easy workaround that works well, unless others feel the suggested improvements in the OP would be useful, I'd be happy to close this ticket.<|||||>pinging @julien-c <|||||>@julien-c, Should this be closed or fixed? Thanks.<|||||>Closing as we are migrating to a new system anyways (more info soon) |
transformers | 6,964 | closed | Create README.md model card | <!-- This line specifies which issue to close after the pull request is merged. -->
Fixes #{issue number}
model card for https://huggingface.co/rjbownes/Magic-The-Generating?text=Once+upon+a+time%2C | 09-05-2020 18:44:50 | 09-05-2020 18:44:50 | # [Codecov](https://codecov.io/gh/huggingface/transformers/pull/6964?src=pr&el=h1) Report
> Merging [#6964](https://codecov.io/gh/huggingface/transformers/pull/6964?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/d31031f603043281d4fbac6cbdcfb6497fd500ab?el=desc) will **decrease** coverage by `4.23%`.
> The diff coverage is `n/a`.
[](https://codecov.io/gh/huggingface/transformers/pull/6964?src=pr&el=tree)
```diff
@@ Coverage Diff @@
## master #6964 +/- ##
==========================================
- Coverage 80.03% 75.80% -4.24%
==========================================
Files 161 161
Lines 30120 30120
==========================================
- Hits 24108 22833 -1275
- Misses 6012 7287 +1275
```
| [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/6964?src=pr&el=tree) | Coverage Δ | |
|---|---|---|
| [src/transformers/configuration\_lxmert.py](https://codecov.io/gh/huggingface/transformers/pull/6964/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9jb25maWd1cmF0aW9uX2x4bWVydC5weQ==) | `20.00% <0.00%> (-80.00%)` | :arrow_down: |
| [src/transformers/modeling\_tf\_lxmert.py](https://codecov.io/gh/huggingface/transformers/pull/6964/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9seG1lcnQucHk=) | `22.49% <0.00%> (-71.63%)` | :arrow_down: |
| [src/transformers/pipelines.py](https://codecov.io/gh/huggingface/transformers/pull/6964/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9waXBlbGluZXMucHk=) | `25.30% <0.00%> (-55.16%)` | :arrow_down: |
| [src/transformers/optimization.py](https://codecov.io/gh/huggingface/transformers/pull/6964/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9vcHRpbWl6YXRpb24ucHk=) | `34.28% <0.00%> (-48.00%)` | :arrow_down: |
| [src/transformers/modeling\_lxmert.py](https://codecov.io/gh/huggingface/transformers/pull/6964/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19seG1lcnQucHk=) | `23.50% <0.00%> (-46.52%)` | :arrow_down: |
| [src/transformers/optimization\_tf.py](https://codecov.io/gh/huggingface/transformers/pull/6964/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9vcHRpbWl6YXRpb25fdGYucHk=) | `33.33% <0.00%> (-24.33%)` | :arrow_down: |
| [src/transformers/modeling\_roberta.py](https://codecov.io/gh/huggingface/transformers/pull/6964/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19yb2JlcnRhLnB5) | `77.00% <0.00%> (-20.08%)` | :arrow_down: |
| [src/transformers/modeling\_tf\_auto.py](https://codecov.io/gh/huggingface/transformers/pull/6964/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9hdXRvLnB5) | `48.79% <0.00%> (-18.08%)` | :arrow_down: |
| [src/transformers/data/processors/squad.py](https://codecov.io/gh/huggingface/transformers/pull/6964/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9kYXRhL3Byb2Nlc3NvcnMvc3F1YWQucHk=) | `13.76% <0.00%> (-14.38%)` | :arrow_down: |
| [src/transformers/modeling\_auto.py](https://codecov.io/gh/huggingface/transformers/pull/6964/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19hdXRvLnB5) | `64.57% <0.00%> (-14.29%)` | :arrow_down: |
| ... and [15 more](https://codecov.io/gh/huggingface/transformers/pull/6964/diff?src=pr&el=tree-more) | |
------
[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/6964?src=pr&el=continue).
> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)
> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data`
> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/6964?src=pr&el=footer). Last update [d31031f...4ff71ec](https://codecov.io/gh/huggingface/transformers/pull/6964?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).
<|||||>This is great! Added some custom prompts for the inference widget. Thanks for sharing! |
transformers | 6,963 | closed | Longformer config - vocabulary size | ## Environment info
- `transformers` version: 3.1.0
### Who can help
Longformer/Reformer: @patrickvonplaten
## Information
Why does the LongformerConfig's vocab_size default to 30522 while the Longformer has an embedding matrix with first dimension 50265? It reuses the same config as RoBERTa, which also has embeddings of shape (50265, 768) due to the BPE tokenization (and whose config also defaults to 30522).
```python
LongformerConfig {
"attention_probs_dropout_prob": 0.1,
"attention_window": 512,
"bos_token_id": 0,
"eos_token_id": 2,
"gradient_checkpointing": false,
"hidden_act": "gelu",
"hidden_dropout_prob": 0.1,
"hidden_size": 768,
"initializer_range": 0.02,
"intermediate_size": 3072,
"layer_norm_eps": 1e-12,
"max_position_embeddings": 512,
"model_type": "longformer",
"num_attention_heads": 12,
"num_hidden_layers": 12,
"pad_token_id": 1,
"sep_token_id": 2,
"type_vocab_size": 2,
"vocab_size": 30522
}
``` | 09-05-2020 17:28:59 | 09-05-2020 17:28:59 | Hey @blawok,
which model are you referring to exactly? This longformer model: https://s3.amazonaws.com/models.huggingface.co/bert/allenai/longformer-base-4096/config.json has vocab_size set to 50265 <|||||>Thanks for the answer @patrickvonplaten :)
Using the code from the docs (https://huggingface.co/transformers/model_doc/longformer.html#longformerconfig):
```python
from transformers import LongformerConfig, LongformerModel
# Initializing a Longformer configuration
configuration = LongformerConfig()
# Initializing a model from the configuration
model = LongformerModel(configuration)
# Accessing the model configuration
configuration = model.config
print(configuration)
```
I am getting this output:
```
LongformerConfig {
"attention_probs_dropout_prob": 0.1,
"attention_window": [
512,
512,
512,
512,
512,
512,
512,
512,
512,
512,
512,
512
],
"bos_token_id": 0,
"eos_token_id": 2,
"gradient_checkpointing": false,
"hidden_act": "gelu",
"hidden_dropout_prob": 0.1,
"hidden_size": 768,
"initializer_range": 0.02,
"intermediate_size": 3072,
"layer_norm_eps": 1e-12,
"max_position_embeddings": 512,
"model_type": "longformer",
"num_attention_heads": 12,
"num_hidden_layers": 12,
"pad_token_id": 1,
"sep_token_id": 2,
"type_vocab_size": 2,
"vocab_size": 30522
}
```
It resulted in an error when trying to train:
```python
config = LongformerConfig()
model = TFLongformerModel.from_pretrained('allenai/longformer-base-4096', config=config)
```
LongformerTokenizer correctly used the 50265-token vocab, but the model expected it to be 30522. However, I don't get this error when I don't specify the configuration.
Example to reproduce:
```python
from transformers import LongformerTokenizer, TFLongformerModel, LongformerConfig
tokenizer = LongformerTokenizer.from_pretrained('allenai/longformer-base-4096')
input = tokenizer('Hello world')
config = LongformerConfig()
model = TFLongformerModel.from_pretrained('allenai/longformer-base-4096',
config=config)
```
Error I am getting with the code above:
```python
---------------------------------------------------------------------------
ValueError Traceback (most recent call last)
<ipython-input-4-80be6efb2343> in <module>()
6 config = LongformerConfig()
7 model = TFLongformerModel.from_pretrained('allenai/longformer-base-4096',
----> 8 config=config)
9 print(model(input))
2 frames
/usr/local/lib/python3.6/dist-packages/tensorflow/python/keras/saving/hdf5_format.py in load_weights_from_hdf5_group_by_name(f, layers, skip_mismatch)
784 symbolic_weights[i])) +
785 ', but the saved weight has shape ' +
--> 786 str(weight_values[i].shape) + '.')
787
788 else:
ValueError: Layer #0 (named "longformer"), weight <tf.Variable 'tf_longformer_model/longformer/embeddings/word_embeddings/weight:0' shape=(30522, 768) dtype=float32, numpy=
array([[-0.01930627, -0.01715518, -0.00557071, ..., -0.01202598,
0.01007012, -0.00184635],
[ 0.00633493, 0.00123013, -0.0134872 , ..., -0.01304915,
-0.00157391, 0.00082429],
[-0.01581489, 0.01005882, -0.01242067, ..., 0.00555116,
0.02116241, 0.03123646],
...,
[-0.01625618, 0.01438301, 0.03368756, ..., -0.02742909,
0.00300512, 0.00728624],
[-0.0078434 , -0.01735217, 0.00178284, ..., -0.01191203,
-0.01451435, 0.03031485],
[-0.00814894, 0.01228636, 0.00573935, ..., 0.01143655,
-0.00131886, -0.03910364]], dtype=float32)> has shape (30522, 768), but the saved weight has shape (50265, 768).
```<|||||>Hi @blawok, you're initializing a configuration using the default parameters, which may not be the same as the checkpoint's parameters you're initializing (it's not the case here).
You should initialize the configuration from the checkpoint here too:
```py
config = LongformerConfig.from_pretrained('allenai/longformer-base-4096')
model = TFLongformerModel.from_pretrained('allenai/longformer-base-4096', config=config)
```<|||||>Great, thank you for the explanation @LysandreJik :)
I am closing this issue. |
transformers | 6,962 | closed | Tokenizers became slow compared to 2.8.0 | ## Environment info
- `transformers` version: 3.1.0
- Platform: Ubuntu 20.04
- Python version: 3.7.9
- PyTorch version (GPU?): No
- Tensorflow version (GPU?): No
- Using GPU in script?: No
- Using distributed or parallel set-up in script?: No
### Who can help
tokenizers: @mfuntowicz
## To reproduce
Steps to reproduce the behavior:
```python
import timeit
import numpy as np
from transformers import __version__ as trans_version
from transformers import (
CTRLTokenizer,
GPT2Tokenizer,
RobertaTokenizer,
XLMTokenizer,
XLNetTokenizer
)
Tok_CLASSES = {
"gpt2": GPT2Tokenizer,
"ctrl": CTRLTokenizer,
"roberta-large": RobertaTokenizer,
"xlnet-large-cased": XLNetTokenizer,
"xlm-mlm-100-1280": XLMTokenizer,
}
print(trans_version)
for k, v in Tok_CLASSES.items():
tokenizer_class = v
tokenizer = tokenizer_class.from_pretrained(k)
text = """<|endoftext|>🤗 Transformers (formerly known as pytorch-transformers and pytorch-pretrained-bert) provides
general-purpose
architectures (BERT, GPT-2, RoBERTa, XLM, DistilBert, XLNet…) for Natural Language Understanding (NLU)
and Natural Language Generation (NLG) with over 32+ pretrained models in 100+ languages and deep
interoperability between TensorFlow 2.0 and PyTorch.</s> <eos>"""
print(tokenizer.__class__)
r = timeit.repeat(stmt="tokenizer.encode(text)", repeat=100, number=500, globals=globals())
rounding = 4
print('Time taken (mean ± 3std):',
str(np.round(np.mean(r), rounding)) + '±' + str(np.round(3 * np.std(r), rounding)))
```
In 3.1.0 output is:
```
3.1.0
<class 'transformers.tokenization_gpt2.GPT2Tokenizer'>
Time taken (mean ± 3std): 0.1808±0.0115
<class 'transformers.tokenization_ctrl.CTRLTokenizer'>
Time taken (mean ± 3std): 0.0678±0.0015
<class 'transformers.tokenization_roberta.RobertaTokenizer'>
Time taken (mean ± 3std): 0.2051±0.0024
<class 'transformers.tokenization_xlnet.XLNetTokenizer'>
Time taken (mean ± 3std): 0.1567±0.002
<class 'transformers.tokenization_xlm.XLMTokenizer'>
Time taken (mean ± 3std): 0.3601±0.0248
```
## Expected behavior
In 2.8.0 the output is (and I hope even these times can be improved without using the Fast versions):
```
<class 'transformers.tokenization_gpt2.GPT2Tokenizer'>
Time taken (mean ± 3std): 0.1808±0.0115
<class 'transformers.tokenization_ctrl.CTRLTokenizer'>
Time taken (mean ± 3std): 0.0678±0.0015
<class 'transformers.tokenization_roberta.RobertaTokenizer'>
Time taken (mean ± 3std): 0.2051±0.0024
<class 'transformers.tokenization_xlnet.XLNetTokenizer'>
Time taken (mean ± 3std): 0.1567±0.002
<class 'transformers.tokenization_xlm.XLMTokenizer'>
Time taken (mean ± 3std): 0.3601±0.0248
```
## TLDR
With GPT2 and CTRL, `tokenizer.encode` (with default options) in 3.1.0 takes ~1.3x the time of the 2.8.0 code. I think this can be improved or solved if code like https://github.com/huggingface/transformers/blob/master/src/transformers/tokenization_utils.py#L254-L256 were not executed on every tokenization call but, for example, only once when tokens are added (with the result stored in self.all_special_tokens_extended). Maybe this is not the only place with unnecessary per-tokenization calculations.
For example, `self.encoder.get(self.unk_token)` from https://github.com/huggingface/transformers/blob/master/src/transformers/tokenization_gpt2.py#L244 could be computed once and stored in a property that is updated whenever `unk_token` changes, so it does not have to be looked up on every token-to-id conversion.
The same storage idea applies to https://github.com/huggingface/transformers/blob/master/src/transformers/tokenization_utils.py#L266
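A minimal sketch of the caching idea (not actual library code; the class and attribute names are made up purely for illustration):
```python
class TokenizerSketch:
    def __init__(self, encoder: dict, unk_token: str):
        self.encoder = encoder
        self._unk_token = unk_token
        self._unk_token_id = encoder.get(unk_token)  # computed once

    @property
    def unk_token(self):
        return self._unk_token

    @unk_token.setter
    def unk_token(self, value):
        self._unk_token = value
        self._unk_token_id = self.encoder.get(value)  # refreshed only when the token changes

    def convert_token_to_id(self, token: str) -> int:
        # hot path: a plain attribute access instead of re-looking up unk_token on every call
        return self.encoder.get(token, self._unk_token_id)


tok = TokenizerSketch({"hello": 0, "<unk>": 1}, "<unk>")
assert tok.convert_token_to_id("hello") == 0
assert tok.convert_token_to_id("missing") == 1
```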
| 09-05-2020 15:43:18 | 09-05-2020 15:43:18 | Working with special tokens attributes also became slower:
```python
import timeit
import numpy as np
from transformers import __version__ as trans_version
from transformers import (
CTRLTokenizer,
GPT2Tokenizer,
RobertaTokenizer,
XLMTokenizer,
XLNetTokenizer
)
Tok_CLASSES = {
"gpt2": GPT2Tokenizer,
"ctrl": CTRLTokenizer,
"roberta-large": RobertaTokenizer,
"xlnet-large-cased": XLNetTokenizer,
"xlm-mlm-100-1280": XLMTokenizer,
}
print(trans_version)
rounding = 4
for k, v in Tok_CLASSES.items():
tokenizer_class = v
tokenizer = tokenizer_class.from_pretrained(k, verbose=False)
print(tokenizer.__class__)
# print(tokenizer.encode(text))
if tokenizer.eos_token is not None:
r_eos_token = timeit.repeat(stmt="tokenizer.eos_token", repeat=100, number=100000, globals=globals())
r_eos_token_id = timeit.repeat(stmt="tokenizer.eos_token_id", repeat=100, number=100000, globals=globals())
print('Get eos_token, time taken (mean ± 3std):',
str(np.round(np.mean(r_eos_token), rounding)) + '±' + str(np.round(3 * np.std(r_eos_token), rounding)))
print('Get eos_token_id, time taken (mean ± 3std):',
str(np.round(np.mean(r_eos_token_id), rounding)) + '±' + str(
np.round(3 * np.std(r_eos_token_id), rounding)))
if tokenizer.bos_token is not None:
r_bos_token = timeit.repeat(stmt="tokenizer.bos_token", repeat=100, number=100000, globals=globals())
r_bos_token_id = timeit.repeat(stmt="tokenizer.bos_token_id", repeat=100, number=100000, globals=globals())
print('Get bos_token, time taken (mean ± 3std):',
str(np.round(np.mean(r_bos_token), rounding)) + '±' + str(np.round(3 * np.std(r_bos_token), rounding)))
print('Get bos_token_id, time taken (mean ± 3std):',
str(np.round(np.mean(r_bos_token_id), rounding)) + '±' + str(
np.round(3 * np.std(r_bos_token_id), rounding)))
if tokenizer.unk_token is not None:
r_unk_token = timeit.repeat(stmt="tokenizer.unk_token", repeat=100, number=100000, globals=globals())
r_unk_token_id = timeit.repeat(stmt="tokenizer.unk_token_id", repeat=100, number=100000, globals=globals())
print('Get unk_token, time taken (mean ± 3std):',
str(np.round(np.mean(r_unk_token), rounding)) + '±' + str(np.round(3 * np.std(r_unk_token), rounding)))
print('Get unk_token_id, time taken (mean ± 3std):',
str(np.round(np.mean(r_unk_token_id), rounding)) + '±' + str(
np.round(3 * np.std(r_unk_token_id), rounding)))
```
gives for 2.8.0
```
2.8.0
<class 'transformers.tokenization_gpt2.GPT2Tokenizer'>
Get eos_token, time taken (mean ± 3std): 0.0104±0.0003
Get eos_token_id, time taken (mean ± 3std): 0.0715±0.0059
Get bos_token, time taken (mean ± 3std): 0.0101±0.0021
Get bos_token_id, time taken (mean ± 3std): 0.0687±0.0155
Get unk_token, time taken (mean ± 3std): 0.0101±0.0001
Get unk_token_id, time taken (mean ± 3std): 0.0632±0.0004
<class 'transformers.tokenization_ctrl.CTRLTokenizer'>
Get unk_token, time taken (mean ± 3std): 0.0098±0.0002
Get unk_token_id, time taken (mean ± 3std): 0.0639±0.0008
<class 'transformers.tokenization_roberta.RobertaTokenizer'>
Get eos_token, time taken (mean ± 3std): 0.0099±0.0003
Get eos_token_id, time taken (mean ± 3std): 0.0644±0.0017
Get bos_token, time taken (mean ± 3std): 0.0093±0.0001
Get bos_token_id, time taken (mean ± 3std): 0.064±0.0003
Get unk_token, time taken (mean ± 3std): 0.0094±0.0002
Get unk_token_id, time taken (mean ± 3std): 0.0727±0.0039
<class 'transformers.tokenization_xlnet.XLNetTokenizer'>
Get eos_token, time taken (mean ± 3std): 0.01±0.0002
Get eos_token_id, time taken (mean ± 3std): 0.0848±0.0021
Get bos_token, time taken (mean ± 3std): 0.0104±0.0003
Get bos_token_id, time taken (mean ± 3std): 0.0847±0.0072
Get unk_token, time taken (mean ± 3std): 0.0097±0.0001
Get unk_token_id, time taken (mean ± 3std): 0.084±0.0007
<class 'transformers.tokenization_xlm.XLMTokenizer'>
Get bos_token, time taken (mean ± 3std): 0.01±0.0001
Get bos_token_id, time taken (mean ± 3std): 0.0646±0.001
Get unk_token, time taken (mean ± 3std): 0.0098±0.0001
Get unk_token_id, time taken (mean ± 3std): 0.0639±0.0004
```
and for 3.1.0 (2x...4x slower):
```
3.1.0
<class 'transformers.tokenization_gpt2.GPT2Tokenizer'>
Get eos_token, time taken (mean ± 3std): 0.0422±0.0004
Get eos_token_id, time taken (mean ± 3std): 0.1465±0.0015
Get bos_token, time taken (mean ± 3std): 0.0418±0.0005
Get bos_token_id, time taken (mean ± 3std): 0.1453±0.0009
Get unk_token, time taken (mean ± 3std): 0.0417±0.0002
Get unk_token_id, time taken (mean ± 3std): 0.1519±0.0186
<class 'transformers.tokenization_ctrl.CTRLTokenizer'>
Get unk_token, time taken (mean ± 3std): 0.0163±0.0003
Get unk_token_id, time taken (mean ± 3std): 0.0821±0.0006
<class 'transformers.tokenization_roberta.RobertaTokenizer'>
Get eos_token, time taken (mean ± 3std): 0.0419±0.0029
Get eos_token_id, time taken (mean ± 3std): 0.1462±0.004
Get bos_token, time taken (mean ± 3std): 0.042±0.0004
Get bos_token_id, time taken (mean ± 3std): 0.1544±0.0311
Get unk_token, time taken (mean ± 3std): 0.0449±0.0016
Get unk_token_id, time taken (mean ± 3std): 0.1511±0.006
<class 'transformers.tokenization_xlnet.XLNetTokenizer'>
Get eos_token, time taken (mean ± 3std): 0.0165±0.0004
Get eos_token_id, time taken (mean ± 3std): 0.0918±0.0043
Get bos_token, time taken (mean ± 3std): 0.0164±0.0003
Get bos_token_id, time taken (mean ± 3std): 0.0931±0.0034
Get unk_token, time taken (mean ± 3std): 0.0166±0.0002
Get unk_token_id, time taken (mean ± 3std): 0.0933±0.0004
<class 'transformers.tokenization_xlm.XLMTokenizer'>
Get bos_token, time taken (mean ± 3std): 0.0162±0.0003
Get bos_token_id, time taken (mean ± 3std): 0.0801±0.0008
Get unk_token, time taken (mean ± 3std): 0.016±0.0003
Get unk_token_id, time taken (mean ± 3std): 0.0827±0.0002
```<|||||>This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.
|
transformers | 6,961 | closed | adding TRANSFORMERS_VERBOSITY env var | Per discussion at https://github.com/huggingface/transformers/pull/6816#issuecomment-686347433, this PR:
- adds `TRANSFORMERS_VERBOSITY` env var
- docs
- tests
- new test utils
I'm open to a different name if that one doesn't work.
Thank you.
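For context, the intended usage from the shell looks roughly like this (a quick sketch; `my_script.py` is a placeholder and the accepted values are the proposed log level names):
```bash
# silence everything below ERROR for a single run
TRANSFORMERS_VERBOSITY=error python my_script.py

# or export it for the whole shell session
export TRANSFORMERS_VERBOSITY=info
```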
| 09-05-2020 06:38:01 | 09-05-2020 06:38:01 | # [Codecov](https://codecov.io/gh/huggingface/transformers/pull/6961?src=pr&el=h1) Report
> Merging [#6961](https://codecov.io/gh/huggingface/transformers/pull/6961?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/56742e9f610231d7b28fe2387770dc56014b79de?el=desc) will **increase** coverage by `0.65%`.
> The diff coverage is `96.42%`.
[](https://codecov.io/gh/huggingface/transformers/pull/6961?src=pr&el=tree)
```diff
@@ Coverage Diff @@
## master #6961 +/- ##
==========================================
+ Coverage 80.00% 80.65% +0.65%
==========================================
Files 161 161
Lines 30120 30147 +27
==========================================
+ Hits 24097 24315 +218
+ Misses 6023 5832 -191
```
| [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/6961?src=pr&el=tree) | Coverage Δ | |
|---|---|---|
| [src/transformers/testing\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/6961/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90ZXN0aW5nX3V0aWxzLnB5) | `69.17% <94.11%> (+3.28%)` | :arrow_up: |
| [src/transformers/utils/logging.py](https://codecov.io/gh/huggingface/transformers/pull/6961/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy91dGlscy9sb2dnaW5nLnB5) | `85.89% <100.00%> (+10.89%)` | :arrow_up: |
| [src/transformers/modeling\_tf\_flaubert.py](https://codecov.io/gh/huggingface/transformers/pull/6961/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9mbGF1YmVydC5weQ==) | `24.53% <0.00%> (-63.81%)` | :arrow_down: |
| [src/transformers/modeling\_tf\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/6961/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl91dGlscy5weQ==) | `87.29% <0.00%> (+0.32%)` | :arrow_up: |
| [src/transformers/data/data\_collator.py](https://codecov.io/gh/huggingface/transformers/pull/6961/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9kYXRhL2RhdGFfY29sbGF0b3IucHk=) | `92.71% <0.00%> (+0.80%)` | :arrow_up: |
| [src/transformers/generation\_tf\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/6961/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9nZW5lcmF0aW9uX3RmX3V0aWxzLnB5) | `86.46% <0.00%> (+1.25%)` | :arrow_up: |
| [src/transformers/modeling\_tf\_electra.py](https://codecov.io/gh/huggingface/transformers/pull/6961/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9lbGVjdHJhLnB5) | `98.95% <0.00%> (+73.82%)` | :arrow_up: |
------
[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/6961?src=pr&el=continue).
> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)
> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data`
> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/6961?src=pr&el=footer). Last update [56742e9...3c194b7](https://codecov.io/gh/huggingface/transformers/pull/6961?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).
<|||||>All the requested changes have been done.<|||||>Great, thanks @stas00 |
transformers | 6,960 | closed | create model card for astroGPT | 09-05-2020 01:25:43 | 09-05-2020 01:25:43 | # [Codecov](https://codecov.io/gh/huggingface/transformers/pull/6960?src=pr&el=h1) Report
> Merging [#6960](https://codecov.io/gh/huggingface/transformers/pull/6960?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/4ebb52afdb4dc4bcd599e7cb503763e5d4afc962?el=desc) will **increase** coverage by `1.50%`.
> The diff coverage is `n/a`.
[](https://codecov.io/gh/huggingface/transformers/pull/6960?src=pr&el=tree)
```diff
@@ Coverage Diff @@
## master #6960 +/- ##
==========================================
+ Coverage 77.81% 79.31% +1.50%
==========================================
Files 157 157
Lines 28853 28853
==========================================
+ Hits 22452 22885 +433
+ Misses 6401 5968 -433
```
| [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/6960?src=pr&el=tree) | Coverage Δ | |
|---|---|---|
| [src/transformers/modeling\_tf\_flaubert.py](https://codecov.io/gh/huggingface/transformers/pull/6960/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9mbGF1YmVydC5weQ==) | `24.53% <0.00%> (-63.81%)` | :arrow_down: |
| [src/transformers/trainer.py](https://codecov.io/gh/huggingface/transformers/pull/6960/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90cmFpbmVyLnB5) | `13.54% <0.00%> (-41.13%)` | :arrow_down: |
| [src/transformers/training\_args.py](https://codecov.io/gh/huggingface/transformers/pull/6960/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90cmFpbmluZ19hcmdzLnB5) | `66.66% <0.00%> (-25.00%)` | :arrow_down: |
| [src/transformers/trainer\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/6960/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90cmFpbmVyX3V0aWxzLnB5) | `59.57% <0.00%> (-19.15%)` | :arrow_down: |
| [src/transformers/modeling\_t5.py](https://codecov.io/gh/huggingface/transformers/pull/6960/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190NS5weQ==) | `76.70% <0.00%> (-6.07%)` | :arrow_down: |
| [src/transformers/data/datasets/glue.py](https://codecov.io/gh/huggingface/transformers/pull/6960/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9kYXRhL2RhdGFzZXRzL2dsdWUucHk=) | `83.58% <0.00%> (-2.99%)` | :arrow_down: |
| [src/transformers/data/data\_collator.py](https://codecov.io/gh/huggingface/transformers/pull/6960/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9kYXRhL2RhdGFfY29sbGF0b3IucHk=) | `91.90% <0.00%> (-0.81%)` | :arrow_down: |
| [src/transformers/modeling\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/6960/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ191dGlscy5weQ==) | `87.50% <0.00%> (-0.37%)` | :arrow_down: |
| [src/transformers/tokenization\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/6960/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fdXRpbHMucHk=) | `89.84% <0.00%> (+0.39%)` | :arrow_up: |
| [src/transformers/generation\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/6960/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9nZW5lcmF0aW9uX3V0aWxzLnB5) | `97.21% <0.00%> (+0.83%)` | :arrow_up: |
| ... and [14 more](https://codecov.io/gh/huggingface/transformers/pull/6960/diff?src=pr&el=tree-more) | |
------
[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/6960?src=pr&el=continue).
> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)
> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data`
> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/6960?src=pr&el=footer). Last update [4ebb52a...402e26a](https://codecov.io/gh/huggingface/transformers/pull/6960?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).
<|||||>really cool model, thanks for sharing:
<img width="736" alt="Screenshot 2020-09-05 at 18 49 27" src="https://user-images.githubusercontent.com/326577/92309755-4883d480-ef76-11ea-90e5-27a97e7b2746.png">
<|||||>ヘ( ^o^)ノ\(^_^ ) thanks @julien-c |
|
transformers | 6,959 | closed | typo | there is no var `decoder_input_ids`, but there is `input_ids` for decoder :)
| 09-05-2020 00:25:07 | 09-05-2020 00:25:07 | # [Codecov](https://codecov.io/gh/huggingface/transformers/pull/6959?src=pr&el=h1) Report
> Merging [#6959](https://codecov.io/gh/huggingface/transformers/pull/6959?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/56742e9f610231d7b28fe2387770dc56014b79de?el=desc) will **decrease** coverage by `0.01%`.
> The diff coverage is `n/a`.
[](https://codecov.io/gh/huggingface/transformers/pull/6959?src=pr&el=tree)
```diff
@@ Coverage Diff @@
## master #6959 +/- ##
==========================================
- Coverage 80.00% 79.98% -0.02%
==========================================
Files 161 161
Lines 30120 30120
==========================================
- Hits 24097 24092 -5
- Misses 6023 6028 +5
```
| [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/6959?src=pr&el=tree) | Coverage Δ | |
|---|---|---|
| [src/transformers/generation\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/6959/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9nZW5lcmF0aW9uX3V0aWxzLnB5) | `97.21% <ø> (ø)` | |
| [src/transformers/generation\_tf\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/6959/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9nZW5lcmF0aW9uX3RmX3V0aWxzLnB5) | `83.45% <0.00%> (-1.76%)` | :arrow_down: |
| [src/transformers/file\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/6959/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9maWxlX3V0aWxzLnB5) | `82.66% <0.00%> (+0.25%)` | :arrow_up: |
| [src/transformers/data/data\_collator.py](https://codecov.io/gh/huggingface/transformers/pull/6959/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9kYXRhL2RhdGFfY29sbGF0b3IucHk=) | `92.30% <0.00%> (+0.40%)` | :arrow_up: |
------
[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/6959?src=pr&el=continue).
> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)
> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data`
> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/6959?src=pr&el=footer). Last update [56742e9...12a1792](https://codecov.io/gh/huggingface/transformers/pull/6959?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).
|
transformers | 6,958 | closed | [testing] add dependency: parametrize | unittest doesn't support pytest's super-handy `@pytest.mark.parametrize`. I researched this and there are many proposed workarounds, most of them tedious at best. If we include https://pypi.org/project/parameterized/ in the dev testing dependencies, it will provide a very easy way to write parameterized tests. It provides the same functionality as pytest's fixture, plus quite a few other ways.
Example:
```python
import math

from nose.tools import assert_equal
from parameterized import parameterized


@parameterized([
    (2, 2, 4),
    (2, 3, 8),
    (1, 9, 1),
    (0, 9, 0),
])
def test_pow(base, exponent, expected):
    assert_equal(math.pow(base, exponent), expected)
```
(add an extra `self` argument if the test lives inside a test class; see the sketch right below)
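A minimal sketch of the class-based variant, using the library's `parameterized.expand` helper:
```python
import unittest

from parameterized import parameterized


class TestPow(unittest.TestCase):
    @parameterized.expand([
        (2, 2, 4),
        (2, 3, 8),
    ])
    def test_pow(self, base, exponent, expected):
        self.assertEqual(base ** exponent, expected)
```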
To remind the pytest style is slightly different:
```python
import pytest


@pytest.mark.parametrize("test_input,expected", [("3+5", 8), ("2+4", 6), ("6*9", 42)])
def test_eval(test_input, expected):
    assert eval(test_input) == expected
```
More examples here: https://pypi.org/project/parameterized
May I suggest that it will make it much easier to write some types of tests?
And I have an immediate use for it in the current PR I'm working on, so it's not just a nice-to-have request.
Thank you. | 09-05-2020 00:02:48 | 09-05-2020 00:02:48 | # [Codecov](https://codecov.io/gh/huggingface/transformers/pull/6958?src=pr&el=h1) Report
> Merging [#6958](https://codecov.io/gh/huggingface/transformers/pull/6958?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/56742e9f610231d7b28fe2387770dc56014b79de?el=desc) will **increase** coverage by `0.30%`.
> The diff coverage is `n/a`.
[](https://codecov.io/gh/huggingface/transformers/pull/6958?src=pr&el=tree)
```diff
@@ Coverage Diff @@
## master #6958 +/- ##
==========================================
+ Coverage 80.00% 80.30% +0.30%
==========================================
Files 161 161
Lines 30120 30120
==========================================
+ Hits 24097 24189 +92
+ Misses 6023 5931 -92
```
| [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/6958?src=pr&el=tree) | Coverage Δ | |
|---|---|---|
| [src/transformers/modeling\_tf\_albert.py](https://codecov.io/gh/huggingface/transformers/pull/6958/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9hbGJlcnQucHk=) | `21.47% <0.00%> (-69.44%)` | :arrow_down: |
| [src/transformers/modeling\_tf\_flaubert.py](https://codecov.io/gh/huggingface/transformers/pull/6958/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9mbGF1YmVydC5weQ==) | `24.53% <0.00%> (-63.81%)` | :arrow_down: |
| [src/transformers/tokenization\_albert.py](https://codecov.io/gh/huggingface/transformers/pull/6958/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fYWxiZXJ0LnB5) | `28.84% <0.00%> (-58.66%)` | :arrow_down: |
| [src/transformers/modeling\_lxmert.py](https://codecov.io/gh/huggingface/transformers/pull/6958/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19seG1lcnQucHk=) | `70.01% <0.00%> (-20.75%)` | :arrow_down: |
| [src/transformers/tokenization\_utils\_base.py](https://codecov.io/gh/huggingface/transformers/pull/6958/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fdXRpbHNfYmFzZS5weQ==) | `94.03% <0.00%> (+0.27%)` | :arrow_up: |
| [src/transformers/modeling\_tf\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/6958/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl91dGlscy5weQ==) | `87.29% <0.00%> (+0.32%)` | :arrow_up: |
| [src/transformers/data/data\_collator.py](https://codecov.io/gh/huggingface/transformers/pull/6958/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9kYXRhL2RhdGFfY29sbGF0b3IucHk=) | `92.30% <0.00%> (+0.40%)` | :arrow_up: |
| [src/transformers/modeling\_bart.py](https://codecov.io/gh/huggingface/transformers/pull/6958/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19iYXJ0LnB5) | `94.41% <0.00%> (+0.50%)` | :arrow_up: |
| [src/transformers/generation\_tf\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/6958/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9nZW5lcmF0aW9uX3RmX3V0aWxzLnB5) | `86.71% <0.00%> (+1.50%)` | :arrow_up: |
| [src/transformers/activations.py](https://codecov.io/gh/huggingface/transformers/pull/6958/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9hY3RpdmF0aW9ucy5weQ==) | `90.00% <0.00%> (+5.00%)` | :arrow_up: |
| ... and [3 more](https://codecov.io/gh/huggingface/transformers/pull/6958/diff?src=pr&el=tree-more) | |
------
[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/6958?src=pr&el=continue).
> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)
> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data`
> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/6958?src=pr&el=footer). Last update [56742e9...6a043c8](https://codecov.io/gh/huggingface/transformers/pull/6958?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).
|
transformers | 6,957 | closed | PRETRAINED_INIT_CONFIGURATION for local model path | Tokenizers have a special dict `PRETRAINED_INIT_CONFIGURATION`, which tells tokenization_utils_base which extra args to pass to the tokenizer's `__init__`, except it doesn't work for a local model path, since the dict is keyed by the online s3 model names.
I have:
```
PRETRAINED_INIT_CONFIGURATION = {
"stas/fsmt-wmt19-ru-en": {
"langs": ["ru", "en"],
},
"stas/fsmt-wmt19-en-ru": {
"langs": ["en", "ru"],
},
"stas/fsmt-wmt19-de-en": {
"langs": ["de", "en"],
},
"stas/fsmt-wmt19-en-de": {
"langs": ["en", "de"],
},
}
```
So in my own code I use
```
if LOCAL:
path = "/code/huggingface/transformers-fair-wmt/data/fsmt-wmt19-ru-en/"
mname = path
mname_tok = f"stas/fsmt-wmt19-{src}-{tgt}"
tokenizer = FSMTTokenizer.from_pretrained(mname_tok)
model = FSMTForConditionalGeneration.from_pretrained(mname)
else:
# # s3 uploaded model
mname = f"stas/fsmt-wmt19-{src}-{tgt}"
tokenizer = FSMTTokenizer.from_pretrained(mname)
model = FSMTForConditionalGeneration.from_pretrained(mname)
```
So `mname_tok` overrides to look up the dict above, since it fails to find that entry using local path.
This, however, doesn't work in tools that aren't under my control, `run_eval.py` in seq2seq for example.
**edit**, more to it - it doesn't pass `PRETRAINED_VOCAB_FILES_MAP` args either for the same reason - it fails to look up those entries for local path. I need to change them all.
Any suggestions on how to fix this problem? | 09-04-2020 21:26:28 | 09-04-2020 21:26:28 | I found a sort of band-aid, I added this code in the model's tokenization code, right after init of `PRETRAINED_INIT_CONFIGURATION, PRETRAINED_POSITIONAL_EMBEDDINGS_SIZES`:
```
LOCALIZE=1
if LOCALIZE:
old, new = ("stas/", "/mnt/nvme1/code/huggingface/transformers-fair-wmt/data/")
def localize(buf): return buf.replace(old, new)
for d in [PRETRAINED_INIT_CONFIGURATION, PRETRAINED_POSITIONAL_EMBEDDINGS_SIZES]:
for k, v in d.copy().items():
d[localize(k)] = v
for d in [PRETRAINED_VOCAB_FILES_MAP]:
for tk, tv in d.items():
for k, v in tv.copy().items():
tv[localize(k)] = v
```
It's still not great, since now I can't commit this file to the repo, which makes it awkward to commit other changes to this file. But at least I can move forward.<|||||>I dug deeper and found how to solve it - I needed to create `model_dir/tokenizer_config.json` and put the special init params there, and make a few more tweaks so that the vocab files don't include the language names in the filename but use the generic 'vocab-src.txt' and 'vocab-tgt.txt' instead.
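For illustration, a minimal `tokenizer_config.json` could look roughly like this (the exact keys depend on the tokenizer's `__init__` signature; `model_max_length` is only an example entry):
```json
{
  "langs": ["ru", "en"],
  "model_max_length": 1024
}
```
The file lives in the local model directory next to the weights and vocab files. |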
transformers | 6,956 | closed | [doc] remove the implied defaults to :obj:`None`, s/True/ :obj:`True/, etc. | as discussed at https://github.com/huggingface/transformers/pull/6932#issuecomment-687362952
**edit**: I also threw in :obj:`True/False - anything else?
```
find . -type d -name ".git" -prune -o -type f -exec perl -pi -e 's|, defaults to :obj:.None.||' {} \;
find . -type d -name ".git" -prune -o -type f -exec perl -pi -e 's|, defaults to True|, defaults to :obj:`True`|' {} \;
find . -type d -name ".git" -prune -o -type f -exec perl -pi -e 's|, defaults to False|, defaults to :obj:`False`|' {} \;
```
@sgugger | 09-04-2020 20:32:25 | 09-04-2020 20:32:25 | # [Codecov](https://codecov.io/gh/huggingface/transformers/pull/6956?src=pr&el=h1) Report
> Merging [#6956](https://codecov.io/gh/huggingface/transformers/pull/6956?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/eff274d629c95fca459969b530b4ad0da5563918?el=desc) will **decrease** coverage by `6.41%`.
> The diff coverage is `n/a`.
[](https://codecov.io/gh/huggingface/transformers/pull/6956?src=pr&el=tree)
```diff
@@ Coverage Diff @@
## master #6956 +/- ##
==========================================
- Coverage 80.02% 73.61% -6.42%
==========================================
Files 161 161
Lines 30120 30120
==========================================
- Hits 24105 22172 -1933
- Misses 6015 7948 +1933
```
| [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/6956?src=pr&el=tree) | Coverage Δ | |
|---|---|---|
| [src/transformers/configuration\_bart.py](https://codecov.io/gh/huggingface/transformers/pull/6956/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9jb25maWd1cmF0aW9uX2JhcnQucHk=) | `94.00% <ø> (ø)` | |
| [src/transformers/configuration\_bert.py](https://codecov.io/gh/huggingface/transformers/pull/6956/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9jb25maWd1cmF0aW9uX2JlcnQucHk=) | `100.00% <ø> (ø)` | |
| [src/transformers/configuration\_electra.py](https://codecov.io/gh/huggingface/transformers/pull/6956/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9jb25maWd1cmF0aW9uX2VsZWN0cmEucHk=) | `100.00% <ø> (ø)` | |
| [src/transformers/configuration\_flaubert.py](https://codecov.io/gh/huggingface/transformers/pull/6956/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9jb25maWd1cmF0aW9uX2ZsYXViZXJ0LnB5) | `100.00% <ø> (ø)` | |
| [src/transformers/configuration\_gpt2.py](https://codecov.io/gh/huggingface/transformers/pull/6956/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9jb25maWd1cmF0aW9uX2dwdDIucHk=) | `97.29% <ø> (ø)` | |
| [src/transformers/configuration\_lxmert.py](https://codecov.io/gh/huggingface/transformers/pull/6956/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9jb25maWd1cmF0aW9uX2x4bWVydC5weQ==) | `20.00% <ø> (-80.00%)` | :arrow_down: |
| [src/transformers/configuration\_mobilebert.py](https://codecov.io/gh/huggingface/transformers/pull/6956/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9jb25maWd1cmF0aW9uX21vYmlsZWJlcnQucHk=) | `97.05% <ø> (ø)` | |
| [src/transformers/configuration\_openai.py](https://codecov.io/gh/huggingface/transformers/pull/6956/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9jb25maWd1cmF0aW9uX29wZW5haS5weQ==) | `97.14% <ø> (ø)` | |
| [src/transformers/configuration\_reformer.py](https://codecov.io/gh/huggingface/transformers/pull/6956/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9jb25maWd1cmF0aW9uX3JlZm9ybWVyLnB5) | `21.62% <ø> (-78.38%)` | :arrow_down: |
| [src/transformers/configuration\_retribert.py](https://codecov.io/gh/huggingface/transformers/pull/6956/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9jb25maWd1cmF0aW9uX3JldHJpYmVydC5weQ==) | `34.78% <ø> (ø)` | |
| ... and [89 more](https://codecov.io/gh/huggingface/transformers/pull/6956/diff?src=pr&el=tree-more) | |
------
[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/6956?src=pr&el=continue).
> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)
> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data`
> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/6956?src=pr&el=footer). Last update [eff274d...e93aa69](https://codecov.io/gh/huggingface/transformers/pull/6956?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).
<|||||>Great, thanks a lot! |
transformers | 6,955 | closed | [WIP] Language modeling example for TF Trainer | To support language modeling for TF Trainer.
| 09-04-2020 19:26:40 | 09-04-2020 19:26:40 | # [Codecov](https://codecov.io/gh/huggingface/transformers/pull/6955?src=pr&el=h1) Report
> Merging [#6955](https://codecov.io/gh/huggingface/transformers/pull/6955?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/c5d43a872f0e85ce069e921c5bda02374e5b9cbf?el=desc) will **decrease** coverage by `0.01%`.
> The diff coverage is `n/a`.
[](https://codecov.io/gh/huggingface/transformers/pull/6955?src=pr&el=tree)
```diff
@@ Coverage Diff @@
## master #6955 +/- ##
==========================================
- Coverage 80.02% 80.00% -0.02%
==========================================
Files 161 161
Lines 30120 30120
==========================================
- Hits 24104 24098 -6
- Misses 6016 6022 +6
```
| [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/6955?src=pr&el=tree) | Coverage Δ | |
|---|---|---|
| [src/transformers/generation\_tf\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/6955/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9nZW5lcmF0aW9uX3RmX3V0aWxzLnB5) | `85.21% <0.00%> (-1.26%)` | :arrow_down: |
| [src/transformers/file\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/6955/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9maWxlX3V0aWxzLnB5) | `82.41% <0.00%> (-0.26%)` | :arrow_down: |
| [src/transformers/data/data\_collator.py](https://codecov.io/gh/huggingface/transformers/pull/6955/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9kYXRhL2RhdGFfY29sbGF0b3IucHk=) | `92.30% <0.00%> (ø)` | |
------
[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/6955?src=pr&el=continue).
> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)
> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data`
> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/6955?src=pr&el=footer). Last update [c5d43a8...8e24159](https://codecov.io/gh/huggingface/transformers/pull/6955?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).
<|||||>This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.
|
transformers | 6,954 | closed | How to insert a hidden output from GPT2 model directly into a BERT layer? | Hello,
I am trying to do the following:
1. Feed in the input_ids into the embedding layer of the pre-trained GPT2 model, and get the resulting embedding.
2. Directly feed the embedding from step 1 to each layer of the pre-trained GPT2, by using `GPT2_model.transformer.h`.
Store the resulting hidden output from each layer of the GPT-2 in a tensor named `layer_hidden_state`.
3. Directly input the `layer_hidden_state` into the 1st layer (which is on the top of the embedding layer) of the pre-trained BertModel, and let BertModel process the `layer_hidden_state` until it reaches the uppermost layer of BertModel.
My attempts for carrying out the above steps are shown below.
But I am getting an error when I try to do step 3. How can I fix this error? The error is shown at the bottom of my code.
Thank you for the help,
```Python
# turn on the evaluation mode
# (to prevent the dropout for evaluation purpose).
gpt2DoubleHeadsModel.eval()
len_input_ids = len(input_ids)
# get the hidden state vector from the embedding layer.
# we will use this hidden state vector as an input to each layer.
input_hidden_state = gpt2DoubleHeadsModel(input_ids=input_ids,
mc_token_ids = mc_token_ids,
token_type_ids = token_type_ids,
attention_mask = attention_mask)[3][0][:,:,:].detach()
for j in range(num_layer_gpt2):
# directly feed in the embedding hidden state vector into each layer of the GPT2DoubleHeadsModel,
# and retrieve the resulting hidden state vector from each layer.
layer_hidden_state = \
gpt2DoubleHeadsModel.transformer.h[j](input_hidden_state)[0][:,(len_input_ids-1),:]
# store the hidden state vectors of the last token from each layer in `last_hidden_output_tensor`.
last_hidden_output_tensor[:,j,:] = layer_hidden_state
last_hidden_output_tensor = tuple(last_hidden_output_tensor)
best_model_bert = BertModel.from_pretrained('bert-large-uncased', output_hidden_states=True)
# an error is generated here; the error is shown below:
for k in range(nlayer_bert):
last_hidden_output_tensor = best_model_bert.encoder.layer[k]((last_hidden_output_tensor)[0])
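# note: after the `tuple(...)` call above, `last_hidden_output_tensor[0]` is a 2D
# (num_layer_gpt2, hidden_size) tensor, but BertLayer.forward expects a 3D
# (batch_size, seq_len, hidden_size) input; BERT's self-attention reshapes its input
# and then calls permute(0, 2, 1, 3), which is what raises the error shown below.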
"""
error:
File "/Users/hyunjindominiquecho/opt/anaconda3/lib/python3.7/site-packages/transformers/modeling_bert.py", line 239, in transpose_for_scores
return x.permute(0, 2, 1, 3)
RuntimeError: number of dims don't match in permute
"""
``` | 09-04-2020 19:22:39 | 09-04-2020 19:22:39 | how do you fix it ? |
transformers | 6,953 | closed | [s2s] run_eval supports --prefix clarg. | - Useful for multilingual models.
- `model.config.prefix` will still be used if prefix not passed. This prefix is added to the beginning of each example from the source document before calling `generate`.
- `decoder_start_token_id` is a different thing and unaffected.
Usage:
```bash
export dd=wmt_en_de
python run_eval.py Helsinki-NLP/opus-mt-en-gem \
$dd/val.source \
$dd/marian_multi_val_gens.txt \
--reference_path $dd/val.target \
--task translation --fp16 --bs 128 \
--score_path $dd/marian_multi_val_bleu.json \
--prefix ">>deu<<"
``` | 09-04-2020 18:50:44 | 09-04-2020 18:50:44 | |
transformers | 6,952 | closed | typo | 09-04-2020 18:47:34 | 09-04-2020 18:47:34 | # [Codecov](https://codecov.io/gh/huggingface/transformers/pull/6952?src=pr&el=h1) Report
> Merging [#6952](https://codecov.io/gh/huggingface/transformers/pull/6952?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/a4fc0c80b11e14aaf6a9ec7c6fa5e6dab54261e4?el=desc) will **decrease** coverage by `2.08%`.
> The diff coverage is `n/a`.
[](https://codecov.io/gh/huggingface/transformers/pull/6952?src=pr&el=tree)
```diff
@@ Coverage Diff @@
## master #6952 +/- ##
==========================================
- Coverage 80.02% 77.94% -2.09%
==========================================
Files 161 161
Lines 30120 30120
==========================================
- Hits 24104 23477 -627
- Misses 6016 6643 +627
```
| [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/6952?src=pr&el=tree) | Coverage Δ | |
|---|---|---|
| [src/transformers/tokenization\_utils\_base.py](https://codecov.io/gh/huggingface/transformers/pull/6952/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fdXRpbHNfYmFzZS5weQ==) | `86.58% <ø> (-7.19%)` | :arrow_down: |
| [src/transformers/configuration\_reformer.py](https://codecov.io/gh/huggingface/transformers/pull/6952/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9jb25maWd1cmF0aW9uX3JlZm9ybWVyLnB5) | `21.62% <0.00%> (-78.38%)` | :arrow_down: |
| [src/transformers/modeling\_reformer.py](https://codecov.io/gh/huggingface/transformers/pull/6952/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19yZWZvcm1lci5weQ==) | `16.87% <0.00%> (-77.64%)` | :arrow_down: |
| [src/transformers/modeling\_tf\_openai.py](https://codecov.io/gh/huggingface/transformers/pull/6952/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9vcGVuYWkucHk=) | `22.58% <0.00%> (-72.26%)` | :arrow_down: |
| [src/transformers/tokenization\_bart.py](https://codecov.io/gh/huggingface/transformers/pull/6952/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fYmFydC5weQ==) | `42.10% <0.00%> (-57.90%)` | :arrow_down: |
| [src/transformers/tokenization\_roberta.py](https://codecov.io/gh/huggingface/transformers/pull/6952/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fcm9iZXJ0YS5weQ==) | `76.71% <0.00%> (-21.92%)` | :arrow_down: |
| [src/transformers/tokenization\_utils\_fast.py](https://codecov.io/gh/huggingface/transformers/pull/6952/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fdXRpbHNfZmFzdC5weQ==) | `92.14% <0.00%> (-2.15%)` | :arrow_down: |
| [src/transformers/tokenization\_openai.py](https://codecov.io/gh/huggingface/transformers/pull/6952/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fb3BlbmFpLnB5) | `82.57% <0.00%> (-1.52%)` | :arrow_down: |
| [src/transformers/modeling\_openai.py](https://codecov.io/gh/huggingface/transformers/pull/6952/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19vcGVuYWkucHk=) | `80.96% <0.00%> (-1.30%)` | :arrow_down: |
| ... and [6 more](https://codecov.io/gh/huggingface/transformers/pull/6952/diff?src=pr&el=tree-more) | |
------
[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/6952?src=pr&el=continue).
> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)
> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data`
> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/6952?src=pr&el=footer). Last update [a4fc0c8...865e3b2](https://codecov.io/gh/huggingface/transformers/pull/6952?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).
|
|
transformers | 6,951 | closed | How to enable grad_fn when calling the generate() method of a T5 model | I was trying to attribute a prediction by a T5 model to the words of an input by gradient method. For a ```T5_model```, I call ```T5_model(**inputs)``` when training, and call ```T5_model.generate(**inputs)``` when doing inference. In training, the grad_fn is enabled for the loss but not in inference.
So how do I enable grad_fn when calling the ```generate()``` method to generate a prediction? So that I can get the gradient of the prediction with respect to each word of the input sentence. | 09-04-2020 18:17:44 | 09-04-2020 18:17:44 | In fact, in the implementation file of T5, [modeling_t5.py](https://github.com/huggingface/transformers/blob/master/src/transformers/modeling_t5.py), I did not see ```with torch.no_grad:```. So I was wondering how ```grad_fn``` is disabled in the ```generate()``` method.<|||||>Although I think it has a low chance to work, I tried to add ```torch.set_grad_enabled(True)``` before calling the ```generate()``` method, but the ```grad_fn``` is still not enabled.<|||||>It turned out there is a decorator before the [generate() method](https://github.com/huggingface/transformers/blob/a4fc0c80b11e14aaf6a9ec7c6fa5e6dab54261e4/src/transformers/generation_utils.py#L110). I was looking for it in the body of the method.<|||||>> It turned out there is a decorator before the [generate() method](https://github.com/huggingface/transformers/blob/a4fc0c80b11e14aaf6a9ec7c6fa5e6dab54261e4/src/transformers/generation_utils.py#L110). I was looking for it in the body of the method.
Did you solve this problem? I ran into the same problem and need a solution. Thanks! |
transformers | 6,950 | closed | head_mask in modeling_bert.py | Should change https://github.com/huggingface/transformers/blob/master/src/transformers/modeling_bert.py#L487 `head_mask[i]` to `head_mask[i] if head_mask is not None else None`.
full code:
```
class BertEncoder(nn.Module):
def __init__(self, config):
super().__init__()
self.config = config
self.layer = nn.ModuleList([BertLayer(config) for _ in range(config.num_hidden_layers)])
def forward(
self,
hidden_states,
attention_mask=None,
head_mask=None,
encoder_hidden_states=None,
encoder_attention_mask=None,
output_attentions=False,
output_hidden_states=False,
return_dict=False,
):
all_hidden_states = () if output_hidden_states else None
all_attentions = () if output_attentions else None
for i, layer_module in enumerate(self.layer):
if output_hidden_states:
all_hidden_states = all_hidden_states + (hidden_states,)
if getattr(self.config, "gradient_checkpointing", False):
def create_custom_forward(module):
def custom_forward(*inputs):
return module(*inputs, output_attentions)
return custom_forward
layer_outputs = torch.utils.checkpoint.checkpoint(
create_custom_forward(layer_module),
hidden_states,
attention_mask,
head_mask[i] if head_mask is not None else None,
encoder_hidden_states,
encoder_attention_mask,
)
else:
layer_outputs = layer_module(
hidden_states,
attention_mask,
head_mask[i] if head_mask is not None else None,
encoder_hidden_states,
encoder_attention_mask,
output_attentions,
)
hidden_states = layer_outputs[0]
if output_attentions:
all_attentions = all_attentions + (layer_outputs[1],)
if output_hidden_states:
all_hidden_states = all_hidden_states + (hidden_states,)
if not return_dict:
return tuple(v for v in [hidden_states, all_hidden_states, all_attentions] if v is not None)
return BaseModelOutput(
last_hidden_state=hidden_states, hidden_states=all_hidden_states, attentions=all_attentions
)
``` | 09-04-2020 18:14:25 | 09-04-2020 18:14:25 | |
transformers | 6,949 | closed | Refactoring the generate() function | ## Generation refactor
This is a possible design that IMO would make generate much more readable and flexible for future code changes.
The code shown in this PR is more or less pseudo code, but I'm sure that this design is fully backwards compatible (for the `generate()` function; I don't plan on keeping `beam_search_generation` and `_no_beam_search_generation` -> I don't think anybody directly accessed these functions).
It's probably better to look at the code directly, here: https://github.com/huggingface/transformers/blob/3c8a12a3bf44aa7845675852e28c47d0a9cb808e/src/transformers/generation_utils.py rather than at the diff.
The following major changes are made:
### Split the generate method into four generate functions:
a) `greedy_search` (corresponds to num_beams = 1, do_sample=False)
b) `sample` (corresponds to num_beams = 1, do_sample=True)
c) `beam_search` (corresponds to num_beams > 1, do_sample = False)
d) `beam_sample` (corresponds to num_beams > 1, do_sample = True)
It is split in a way that the functions can be used on their own and don't necessarily have to be accessed by the `generate()` method. This allows for much more flexibility for the user - he can decide what kind of distribution warper he wants to use and what kind of "beam scorer" he wants to use (more on "distribution warper" and "beam scorer" later).
Also, a model mostly uses only one of these methods: `EncoderDecoder` models usually use `beam_search()`, while `...ForCausalLM` models usually use `sample()`. Because `generate` is now split into the corresponding functions relevant for a model, the code becomes much more readable: users will mostly only need to look into one of the four functions.
Splitting `generate()` into four functions removes **a lot** of `if-else` statements
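To illustrate the mapping above, the top-level dispatch inside `generate()` would then reduce to something like this (a minimal sketch, not the actual PR code):
```python
def pick_generation_mode(num_beams: int, do_sample: bool) -> str:
    """Which of the four sub-methods generate() delegates to."""
    if num_beams == 1:
        return "sample" if do_sample else "greedy_search"
    return "beam_sample" if do_sample else "beam_search"


assert pick_generation_mode(1, False) == "greedy_search"
assert pick_generation_mode(4, True) == "beam_sample"
```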
### Create `LogitsProcessor` and `LogitsWarper` objects.
Instead of adding each of these "logit warpers / processors", such as `bad_token_words`, with `if-else` statements, a list of these objects is created in the beginning and then called in the code. This is largely copied from this very nice PR: https://github.com/huggingface/transformers/pull/5420 . The advantages are that these objects are 1) easier to test and 2) more flexible, because users can easily add their own "logit warpers / processors".
Note that we need both a `pre_processor` and a `dist_warper` list to make `beam_sample` work correctly. This comment explains why in more detail: https://github.com/huggingface/transformers/pull/5420#discussion_r449779867
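As a rough sketch (illustrative only; names and signatures may differ from the final implementation), such a processor is just a callable that takes the current `input_ids` and the next-token `scores` and returns modified scores:
```python
import torch


class MinLengthProcessor:
    """Forbid EOS until at least `min_length` tokens have been generated."""

    def __init__(self, min_length: int, eos_token_id: int):
        self.min_length = min_length
        self.eos_token_id = eos_token_id

    def __call__(self, input_ids: torch.Tensor, scores: torch.Tensor) -> torch.Tensor:
        if input_ids.shape[-1] < self.min_length:
            scores[:, self.eos_token_id] = -float("inf")
        return scores


# the list of processors is applied in order at every generation step
pre_processor = [MinLengthProcessor(min_length=5, eos_token_id=2)]
scores = torch.zeros(1, 10)            # dummy (batch_size, vocab_size) scores
input_ids = torch.tensor([[0, 4, 7]])  # 3 tokens generated so far
for processor in pre_processor:
    scores = processor(input_ids, scores)
print(scores[0, 2])  # -inf: EOS is blocked until min_length is reached
```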
### Create `beam_scorer` class.
The `generate_beam_search` function has become extremely hard to read because most of the beam search logic is directly written into the function. I would propose to move this code into a `beam_scorer` class that would also replace the `BeamHypotheses` class. The class would essentially expose the following functions:
```python
beam_scorer.update(next_scores, next_tokens)
next_beam_scores = beam_scorer.get_next_scores()
next_beam_tokens = beam_scorer.get_next_tokens()
next_beam_idx = beam_scorer.get_next_beam_idx()
beam_scorer.is_done()
```
and IMO all the beam search relevant code can be handled in the `beam_scorer.update(...)` function.
Besides better readability, such a class could also be replaced by another "beam search scorer" logic, which makes the beam search code much easier to extend IMO.
The con of this PR is obviously that some code would be copy-pasted across the four functions (but it should not be too much).
Let me know if something is not clear in the design. Would be super happy to hear your feedback @LysandreJik @thomwolf @sshleifer @yjernite @JetRunner @sgugger
## 1st Review:
The current state shows what the refactored code would look like. All important tests are passing now (the slow GPT2, Bart & T5 tests), but the PR is not at all finished yet. It would be nice if @sshleifer @yjernite @thomwolf (and @LysandreJik @sgugger) could take a first look at the complete "new" architecture and give some feedback. If you guys are ok with the new design, I will add a bunch of tests and clean up the code more to make sure we have 100% backward compatibility.
## TODO:
- [x] Make all slow tests pass
- [x] Rename according to discussion
- [x] Add tests for all processors
- [x] Add more and better generation tests
- [x] Do speed comparison
- [x] Add docstring
- [x] Final thoughts about design
## Final review:
I'm happy with the changes I've done now. Better tests (more aggressive and should also be faster since only 3 tokens are generated instead of 10 now) have been added and docstrings have hopefully become more understandable.
A complete explanation of new "generation" philosophy is described in the forum: https://discuss.huggingface.co/t/big-generate-refactor/1857 | 09-04-2020 17:22:00 | 09-04-2020 17:22:00 | I think this is a massive improvement and really hope it gets merged. Everything is much better encapsulated and it's super easy to find what you want.
I have some naming ideas, but the code seems to not be finished so I will wait.<|||||>I am very unfamiliar with the generate code because the code has scared me way, this one is way more inviting to read and I can actually understand it so I think that's a very nice achievement :-)
I don't know if what's hidden in `# add necessary encoder decoder code` and `# add all post processing functions` is different for the four versions of decode, but might be nice to refactor it in some helper methods if possible.<|||||>Seconding Sam and Sylvain. Really excited for the improved legibility but we should also make sure that code isn't copy-pasted four times or we'll start having inconsistencies soon :)
We should probably allow `dist_warper` in the beam search too: the current ones wouldn't make a difference but some use cases will (e.g. noisy channel which uses a backward distribution to reweigh the next token scores)<|||||>Related: https://github.com/huggingface/transformers/issues/7626<|||||>Can you please add this functionality too https://github.com/huggingface/transformers/issues/5164?<|||||>This is really cool! I love the additional clarity and think you're building a really strong base for the generation code. Here's what I think we should still chat about before finalizing the PR:
1. We still have inconsistencies in how we handle the end of generation. I really think that `max_length` should be handled the same way as `min_length` with a sampler in `pre_processor ` which forces a Dirac on `eos_token_id` when `cur_len==max_length`
2. Similarly, we should get rid of `adjust_logits_during_generation` and have e.g. a Bart-specific sampler in `pre_processor `
3. We're missing out on supporting some really interesting research by not giving a good option to return and backprop through the generation scores. Also, I think we should extend self-documenting outputs to the `generate` function (cc @sgugger ). My proposal would be to add a `return_dict` argument, with an `output_generation_scores` (and possibly `with_grad`) option. (And we just return the generated ids if `return_dict=False` to stay backwards compatible)
4. Generation with `decoder_prefix_ids` for encoder-decoder models :D ! That can be a future PR though, and can also be handled with `pre_processor` to force the output (a bit wasteful but will do in a pinch)
I also agree with @sgugger that we should come up with a better name than `Sampler`, will think about it<|||||>@yjernite has great feature requests, but this PR is already huge and I don't see why they need to be handled here.<|||||>> @yjernite has great feature requests, but this PR is already huge and I don't see why they need to be handled here.
Good point, we can definitely look at 3. and 4. later, and I know from experience that 2. is probably the wrong kind of rabbit hole to get into right now.
I am a little concerned about 1. though. The samplers will definitely need to consider more than just `input_ids` and `scores` in the future, and we should make sure that we don't need to rebuild them from the ground up when that happens.
@patrickvonplaten what are your thoughts on changing e.g. [generation_utils.py#L451](https://github.com/huggingface/transformers/blob/5df79e2c41bf4b47ab4a36f903be163252714fe3/src/transformers/generation_utils.py#L451) when we need to also pass `cur_len` to enforce the max length or if we want the samples to look back at the `encoder_input_ids` ?<|||||>Thanks for the feedback! @yjernite - regarding your points:
1) I think I see your point that `max_length` should be treated the same way as `min_length`. If we follow this appoarch, we would replace the `while cur_len < max_length:` with `while True:` and then break if `max_length` is hit. I'm a bit worried the people that will use one of the four functions directly, such as:
```python
pre_processor = # create your list of pre_processors here <= THIS MUST INCLUDE max_length
dist_warper = # create your dist warper here
outputs = model.sample(input_ids, pre_processor, dist_warper, max_length, pad_token_id, eos_token_id, **model_kwargs)
```
will forget to put a `max_length` "processor" in `pre_processor` and then the `while True:` loop would run forever.
For me, the difference between `max_length` and and *e.g.* `min_length` is that `max_length` is a mandatory parameter for generation, which is why I left it as an input to the "sub" generation functions.
For you, what would be the big advantage of moving `max_length` to a preprocessor item (besides consistency?).
I'm 100% fine with extending the pre-processors or warpers (trying to not use the word samplers anymore :D) to accept more input arguments, but I think we could also do this in a future PR as it would not break backwards compatibility.
Happy to discuss what opinions the others have on this!
2) Agree - will have to see how to make that non-breaking, but should be possible!
3) Agree to add a `ModelOutputs` class to `generate()` that would include `attentions` and `hidden_states`. Regarding being able to backprop through `generate()` - I don't really think that this is super important. It would probably also require a lot of tweaking the way generate() is done currently so I'd prefer if ppl would just use the own fork/branch for these kind of things
4) This should already be possible I think. `decoder_input_ids` can be passed to generate.<|||||>## Speed comparison:
sample search / greedy search yields equivalent results (TESTED on GPT2)
beam search yields ~5 % speed up thanks to the use of tensors instead of lists in `beam_scorer` (TESTED on BART and T5)<|||||>All slow tests that pass on master, also pass in this PR now. Ran all those tests:
```
run_generation_integration_tests () {
RUN_SLOW=1 pytest tests/test_modeling_pegasus.py
RUN_SLOW=1 pytest tests/test_modeling_bart.py
RUN_SLOW=1 pytest tests/test_modeling_t5.py
RUN_SLOW=1 pytest tests/test_modeling_reformer.py
RUN_SLOW=1 pytest tests/test_modeling_marian.py
RUN_SLOW=1 pytest tests/test_modeling_mbart.py
RUN_SLOW=1 pytest tests/test_modeling_prophetnet.py
RUN_SLOW=1 pytest tests/test_modeling_xlm_prophetnet.py
RUN_SLOW=1 pytest tests/test_modeling_encoder_decoder.py
RUN_PIPELINE_TESTS=1 RUN_SLOW=1 pytest tests/test_pipelines_conversational.py
RUN_PIPELINE_TESTS=1 RUN_SLOW=1 pytest tests/test_pipelines_text2text_generation.py
RUN_PIPELINE_TESTS=1 RUN_SLOW=1 pytest tests/test_pipelines_text_generation.py
RUN_PIPELINE_TESTS=1 RUN_SLOW=1 pytest tests/test_pipelines_summarization.py
RUN_PIPELINE_TESTS=1 RUN_SLOW=1 pytest tests/test_pipelines_translation.py
RUN_PIPELINE_TESTS=1 RUN_SLOW=1 pytest tests/test_pipelines_dialog.py
RUN_SLOW=1 pytest tests/test_modeling_gpt2.py
RUN_SLOW=1 pytest tests/test_modeling_xlnet.py
RUN_SLOW=1 pytest tests/test_modeling_transfo_xl.py
RUN_SLOW=1 pytest tests/test_modeling_rag.py
RUN_SLOW=1 pytest tests/test_modeling_fsmt.py
RUN_SLOW=1 pytest tests/test_modeling_blenderbot.py
}
```
Think I ran all of the important ones (cc @sshleifer )<|||||>> I don't know if this PR was ready for review or not but I still went ahead ;-). I still like this a lot, I added suggestions for the docs and I think it would be great if all your awesome work was documented. `GenerationMixin` is already documented in `main_classes/model` so all its public methods should have nice doc.
>
> Then I think all you added warrants an `internal/generation` where we could document all the tools you added.
>
> Last thing: I'm not super fan of `DistProcessor` as a name, mainly because I don't know that dist means distribution (I think distributed but I'm obsessed with Trainer ;-) ). `DistributionProcessor` might be a bit long. The class seems to be mainly doing some preprocessing on the logits though, so why not `LogitsProcessor`?
Changed to `LogitsProcessor` - like that name! Yeah the docs weren't ready yet, but thanks for the feedback :-) <|||||>Hmmm OK so after thinking about it for a bit I realize that having a `max_length` logits processor would change the current beam search behavior in some cases.
Currently, the beam gets re-ordered at `max_length` without penalizing sequences that aren't "well-formed" (haven't reached EOS within the allotted time). If we switched to a `LogitsProcessor` however, the beam search would up-weight sequences that were more likely to lead to an EOS at time step (`max_length`-1)
I think this new behavior is better and definitely would like the option to implement it (@srush would love your input on that especially), but I'm not sure how breaking it is (conversely, it might also account for some of the different scores we've had from other libs). We can also just add a `BeamSearchMaxLengthLogitsProcessor` now or later.
What do you think @thomwolf @patrickvonplaten @sshleifer ?<|||||>> We can also just add a BeamSearchMaxLengthLogitsProcessor now or later.
This sounds useful! Bart forces all the generations of length `max_length -1` to end in EOS, which helps performance. Might help other models too. Would lean towards adding it later. <|||||>> This all looks great to me. I have final nits on the docs, some general rules not to forget:
>
> * no abbreviation in the documentation as we have all the space we want to explain things to the user
> * no lines > 119 pretty please, the script takes care of everything except the examples and some of the examples have veeeeeery long lines.
Noo, I broke the 119 rule again - I thought the script would now save me from everything :D Will correct this! Should I also add the `>>>` for the example code or is this just for the model's forward function? <|||||>> Should I also add the `>>>` for the example code or is this just for the model's forward function?
It's only used as a marker for doctests, so you should do this for examples that are not slow. We still need to resuscitate the doctests with @LysandreJik for it to have any use, though.
<|||||>The doctests run as slow tests so you can still add them even if they're slow!<|||||>> The doctests run as slow tests so you can still add them even if they're slow!
Okay, added `>>>` to all examples and made them pretty<|||||>Hi, thanks for creating this PR and the code looks great! Is it possible to decode and, at the same time, return the token probabilities? It would be very helpful in the following scenarios:
I. Calculating perplexities of generated texts
II. Reinforcement learning for text generation.<|||||>Is it possible to backpropagate through the model parameters using a loss defined on the output of this .generate() function?<|||||>Hi @patrickvonplaten
Can you share how to calculate the output probabilities of each token given by generate()? |
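Regarding the per-token probability questions above, a rough sketch of one way to get them. This assumes a version of transformers where the refactored `generate()` accepts `return_dict_in_generate=True` and `output_scores=True` (as later releases do); note that `generate()` typically runs under `torch.no_grad()`, so this is for inspection and perplexity rather than backprop:

```python
import torch
from transformers import AutoModelForSeq2SeqLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("t5-small")
model = AutoModelForSeq2SeqLM.from_pretrained("t5-small")

batch = tokenizer("translate English to German: The house is small.", return_tensors="pt")
out = model.generate(
    **batch,
    max_length=20,
    do_sample=False,
    return_dict_in_generate=True,  # assumed available after the generate() refactor
    output_scores=True,            # one (batch, vocab) logits tensor per generated step
)

# Align per-step scores with the generated tokens; sequences starts with the decoder
# start token, so drop it. With beam search the scores are beam-processed, so this
# simple alignment is cleanest for greedy or sampling.
step_logprobs = torch.stack(out.scores, dim=1).log_softmax(dim=-1)  # (batch, steps, vocab)
gen_tokens = out.sequences[:, 1:]                                   # (batch, steps)
token_logprobs = step_logprobs.gather(-1, gen_tokens.unsqueeze(-1)).squeeze(-1)
print(token_logprobs.exp())  # per-token probabilities of the generated tokens
```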
transformers | 6,948 | closed | [s2s] run_eval.py parses generate_kwargs | You can now run
```bash
td=test_data/wmt_en_ro
python run_eval.py t5-base $td/val.source preds.txt --reference_path $td/val.target \
--score_path metrics.json --length_penalty 0.2 \
--task translation_en_to_ro --num_beams 2 --n_obs 2 --bs 1 --length_penalty 0.6
```
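Under the hood, forwarding such extra flags to `model.generate()` can be done by collecting the unrecognized command-line arguments into a kwargs dict. A hedged sketch of the idea, not necessarily the exact implementation in `run_eval.py`:

```python
import argparse


def unknown_args_to_kwargs(unknown):
    """Turn leftover ["--num_beams", "2", "--length_penalty", "0.6"] pairs into
    {"num_beams": 2, "length_penalty": 0.6}, casting values where possible."""
    kwargs = {}
    for key, value in zip(unknown[::2], unknown[1::2]):
        key = key.lstrip("-")
        if value.lower() in ("true", "false"):
            kwargs[key] = value.lower() == "true"
            continue
        for cast in (int, float):
            try:
                kwargs[key] = cast(value)
                break
            except ValueError:
                continue
        else:
            kwargs[key] = value
    return kwargs


parser = argparse.ArgumentParser()
parser.add_argument("model_name")
parser.add_argument("input_path")
parser.add_argument("save_path")
args, unknown = parser.parse_known_args()
generate_kwargs = unknown_args_to_kwargs(unknown)
# later: model.generate(input_ids, attention_mask=attention_mask, **generate_kwargs)
```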
h/t @stas00 in #6369 | 09-04-2020 17:12:49 | 09-04-2020 17:12:49 | # [Codecov](https://codecov.io/gh/huggingface/transformers/pull/6948?src=pr&el=h1) Report
> Merging [#6948](https://codecov.io/gh/huggingface/transformers/pull/6948?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/6078b12098337bcb98c0540b07a623223ffdd1c8?el=desc) will **increase** coverage by `0.52%`.
> The diff coverage is `n/a`.
```diff
@@ Coverage Diff @@
## master #6948 +/- ##
==========================================
+ Coverage 80.01% 80.54% +0.52%
==========================================
Files 161 161
Lines 30120 30120
==========================================
+ Hits 24102 24259 +157
+ Misses 6018 5861 -157
```
| [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/6948?src=pr&el=tree) | Coverage Δ | |
|---|---|---|
| [src/transformers/modeling\_tf\_flaubert.py](https://codecov.io/gh/huggingface/transformers/pull/6948/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9mbGF1YmVydC5weQ==) | `24.53% <0.00%> (-63.81%)` | :arrow_down: |
| [src/transformers/data/data\_collator.py](https://codecov.io/gh/huggingface/transformers/pull/6948/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9kYXRhL2RhdGFfY29sbGF0b3IucHk=) | `86.63% <0.00%> (-5.27%)` | :arrow_down: |
| [src/transformers/generation\_tf\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/6948/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9nZW5lcmF0aW9uX3RmX3V0aWxzLnB5) | `84.21% <0.00%> (-2.26%)` | :arrow_down: |
| [src/transformers/modeling\_tf\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/6948/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl91dGlscy5weQ==) | `87.29% <0.00%> (+0.32%)` | :arrow_up: |
| [src/transformers/modeling\_tf\_electra.py](https://codecov.io/gh/huggingface/transformers/pull/6948/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9lbGVjdHJhLnB5) | `98.95% <0.00%> (+73.82%)` | :arrow_up: |
------
[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/6948?src=pr&el=continue).
> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)
> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data`
> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/6948?src=pr&el=footer). Last update [6078b12...2f212d3](https://codecov.io/gh/huggingface/transformers/pull/6948?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).
<|||||>Yes, please and thank you!!! Needing it badly!<|||||>Here you go! |
transformers | 6,947 | closed | Training script for other language (except English) | Dear friends, I have found this project very nice and great.
I would like to ask: is there any script which can be used to train a language model for another language?
And what is the minimum GPU power required for such training?
If you can guide me, I would appreciate it. I am thinking of testing this for the German language.
| 09-04-2020 14:36:42 | 09-04-2020 14:36:42 | You could try either https://huggingface.co/blog/how-to-train or https://discuss.huggingface.co
^^ The forum is better for open-ended questions like these. |
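For reference, a rough sketch of the pipeline the how-to-train blog describes, adapted to German. Names, file paths, and hyperparameters below are placeholders and illustrative only; a small model like this is typically trainable on a single modern GPU, though more data and compute help:

```python
from transformers import (
    DataCollatorForLanguageModeling,
    LineByLineTextDataset,
    RobertaConfig,
    RobertaForMaskedLM,
    RobertaTokenizerFast,
    Trainer,
    TrainingArguments,
)

# assumes you already trained a byte-level BPE tokenizer on your German corpus
tokenizer = RobertaTokenizerFast.from_pretrained("./german-tokenizer")

config = RobertaConfig(vocab_size=tokenizer.vocab_size, max_position_embeddings=514)
model = RobertaForMaskedLM(config)

dataset = LineByLineTextDataset(tokenizer=tokenizer, file_path="./german_corpus.txt", block_size=128)
data_collator = DataCollatorForLanguageModeling(tokenizer=tokenizer, mlm=True, mlm_probability=0.15)

training_args = TrainingArguments(
    output_dir="./german-roberta",
    num_train_epochs=1,
    per_device_train_batch_size=8,
    save_steps=10_000,
)
trainer = Trainer(model=model, args=training_args, data_collator=data_collator, train_dataset=dataset)
trainer.train()
```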
transformers | 6,946 | closed | [LXMERT] Fix tests on gpu | Some tensors and models were created in tests without specifying the device.
| 09-04-2020 14:00:18 | 09-04-2020 14:00:18 | # [Codecov](https://codecov.io/gh/huggingface/transformers/pull/6946?src=pr&el=h1) Report
> Merging [#6946](https://codecov.io/gh/huggingface/transformers/pull/6946?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/a75e31981915cbd072be7c4050a4b58c63ca6d33?el=desc) will **increase** coverage by `1.25%`.
> The diff coverage is `n/a`.
```diff
@@ Coverage Diff @@
## master #6946 +/- ##
==========================================
+ Coverage 80.01% 81.27% +1.25%
==========================================
Files 161 161
Lines 30120 30120
==========================================
+ Hits 24102 24479 +377
+ Misses 6018 5641 -377
```
| [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/6946?src=pr&el=tree) | Coverage Δ | |
|---|---|---|
| [src/transformers/modeling\_tf\_flaubert.py](https://codecov.io/gh/huggingface/transformers/pull/6946/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9mbGF1YmVydC5weQ==) | `24.53% <0.00%> (-63.81%)` | :arrow_down: |
| [src/transformers/generation\_tf\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/6946/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9nZW5lcmF0aW9uX3RmX3V0aWxzLnB5) | `86.46% <0.00%> (-0.26%)` | :arrow_down: |
| [src/transformers/tokenization\_utils\_base.py](https://codecov.io/gh/huggingface/transformers/pull/6946/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fdXRpbHNfYmFzZS5weQ==) | `93.76% <0.00%> (+0.13%)` | :arrow_up: |
| [src/transformers/data/data\_collator.py](https://codecov.io/gh/huggingface/transformers/pull/6946/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9kYXRhL2RhdGFfY29sbGF0b3IucHk=) | `92.30% <0.00%> (+0.40%)` | :arrow_up: |
| [src/transformers/modeling\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/6946/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ191dGlscy5weQ==) | `88.05% <0.00%> (+0.55%)` | :arrow_up: |
| [src/transformers/modeling\_t5.py](https://codecov.io/gh/huggingface/transformers/pull/6946/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190NS5weQ==) | `82.76% <0.00%> (+6.06%)` | :arrow_up: |
| [src/transformers/modeling\_tf\_mobilebert.py](https://codecov.io/gh/huggingface/transformers/pull/6946/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9tb2JpbGViZXJ0LnB5) | `96.91% <0.00%> (+72.35%)` | :arrow_up: |
------
[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/6946?src=pr&el=continue).
> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)
> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data`
> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/6946?src=pr&el=footer). Last update [a75e319...b6fd572](https://codecov.io/gh/huggingface/transformers/pull/6946?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).
|
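For illustration, the kind of change the LXMERT test fix above implies: create the test inputs on the target device and move the model there explicitly. The config and inputs below are only a rough, self-contained example, not the actual test code:

```python
import torch
from transformers import LxmertConfig, LxmertModel

# the transformers test suite exposes a `torch_device` constant; mimicked here
torch_device = "cuda" if torch.cuda.is_available() else "cpu"

config = LxmertConfig()  # default config, randomly initialized weights
model = LxmertModel(config).to(torch_device).eval()

input_ids = torch.tensor([[101, 2023, 2003, 1037, 3231, 102]], device=torch_device)
visual_feats = torch.zeros((1, 10, config.visual_feat_dim), device=torch_device)
visual_pos = torch.zeros((1, 10, config.visual_pos_dim), device=torch_device)

with torch.no_grad():
    outputs = model(input_ids=input_ids, visual_feats=visual_feats, visual_pos=visual_pos)
```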
transformers | 6,945 | closed | Restoring ELECTRA-Small checkpoint doesn't work properly | ## Environment info
- `transformers` version: 3.1.0
- Platform: Colab
- Python version: 3.6.9
- PyTorch version (GPU?): default Colab
- Tensorflow version (GPU?): default Colab (checkpoint from 1.15)
- Using GPU in script?: No
- Using distributed or parallel set-up in script?: No
### Who can help
Seems like nobody
## Information
Hello, I am trying to load the ELECTRA-small checkpoint from Google Research (https://github.com/google-research/electra) into HuggingFace's ElectraForMaskedLM object. There were several different ways I tried to achieve that (rough invocations of both are sketched below):
- Converted the checkpoint with the help of the CLI script **convert_electra_original_tf_checkpoint_to_pytorch.py**
- Converted the checkpoint with the help of the .from_pretrained() method with the config.json provided here: https://s3.amazonaws.com/models.huggingface.co/bert/google/electra-small-generator/config.json
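For reference, roughly what those two invocations look like. All paths and checkpoint file names here are placeholders, and the CLI flag names are quoted from memory, so check the script's `--help` for your version:

```python
# (a) conversion script, run from the command line, e.g.:
#   python convert_electra_original_tf_checkpoint_to_pytorch.py \
#       --tf_checkpoint_path ./electra_small/electra_small \
#       --config_file ./electra_small/config.json \
#       --pytorch_dump_path ./electra_small_pt \
#       --discriminator_or_generator generator

# (b) loading the original TF checkpoint directly with from_pretrained
from transformers import ElectraConfig, ElectraForMaskedLM

config = ElectraConfig.from_json_file("./electra_small/config.json")
model = ElectraForMaskedLM.from_pretrained(
    "./electra_small/electra_small.ckpt.index",  # path to the TF checkpoint's .index file (placeholder name)
    from_tf=True,
    config=config,
)
```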
Both worked without any exceptions. The first one didn't write anything to the output except for the contents of the config.json file and the path the model would be saved to. The second one wrote lots of information about skipping several variables and initialising others:
`Initialize PyTorch weight ['discriminator_predictions', 'dense', 'bias'] discriminator_predictions/dense/bias
Initialize PyTorch weight ['discriminator_predictions', 'dense', 'kernel'] discriminator_predictions/dense/kernel
Initialize PyTorch weight ['discriminator_predictions', 'dense_prediction', 'bias'] discriminator_predictions/dense_1/bias
Initialize PyTorch weight ['discriminator_predictions', 'dense_prediction', 'kernel'] discriminator_predictions/dense_1/kernel
Initialize PyTorch weight ['electra', 'embeddings', 'LayerNorm', 'beta'] electra/embeddings/LayerNorm/beta
Initialize PyTorch weight ['electra', 'embeddings', 'LayerNorm', 'gamma'] electra/embeddings/LayerNorm/gamma
Initialize PyTorch weight ['electra', 'embeddings', 'position_embeddings'] electra/embeddings/position_embeddings
Initialize PyTorch weight ['electra', 'embeddings', 'token_type_embeddings'] electra/embeddings/token_type_embeddings
Initialize PyTorch weight ['electra', 'embeddings', 'word_embeddings'] electra/embeddings/word_embeddings
Initialize PyTorch weight ['electra', 'embeddings_project', 'bias'] electra/embeddings_project/bias
Initialize PyTorch weight ['electra', 'embeddings_project', 'kernel'] electra/embeddings_project/kernel
Initialize PyTorch weight ['electra', 'encoder', 'layer_0', 'attention', 'output', 'LayerNorm', 'beta'] electra/encoder/layer_0/attention/output/LayerNorm/beta
Initialize PyTorch weight ['electra', 'encoder', 'layer_0', 'attention', 'output', 'LayerNorm', 'gamma'] electra/encoder/layer_0/attention/output/LayerNorm/gamma
Initialize PyTorch weight ['electra', 'encoder', 'layer_0', 'attention', 'output', 'dense', 'bias'] electra/encoder/layer_0/attention/output/dense/bias
Initialize PyTorch weight ['electra', 'encoder', 'layer_0', 'attention', 'output', 'dense', 'kernel'] electra/encoder/layer_0/attention/output/dense/kernel
Initialize PyTorch weight ['electra', 'encoder', 'layer_0', 'attention', 'self', 'key', 'bias'] electra/encoder/layer_0/attention/self/key/bias
Initialize PyTorch weight ['electra', 'encoder', 'layer_0', 'attention', 'self', 'key', 'kernel'] electra/encoder/layer_0/attention/self/key/kernel
Initialize PyTorch weight ['electra', 'encoder', 'layer_0', 'attention', 'self', 'query', 'bias'] electra/encoder/layer_0/attention/self/query/bias
Initialize PyTorch weight ['electra', 'encoder', 'layer_0', 'attention', 'self', 'query', 'kernel'] electra/encoder/layer_0/attention/self/query/kernel
Initialize PyTorch weight ['electra', 'encoder', 'layer_0', 'attention', 'self', 'value', 'bias'] electra/encoder/layer_0/attention/self/value/bias
Initialize PyTorch weight ['electra', 'encoder', 'layer_0', 'attention', 'self', 'value', 'kernel'] electra/encoder/layer_0/attention/self/value/kernel
Initialize PyTorch weight ['electra', 'encoder', 'layer_0', 'intermediate', 'dense', 'bias'] electra/encoder/layer_0/intermediate/dense/bias
Initialize PyTorch weight ['electra', 'encoder', 'layer_0', 'intermediate', 'dense', 'kernel'] electra/encoder/layer_0/intermediate/dense/kernel
Initialize PyTorch weight ['electra', 'encoder', 'layer_0', 'output', 'LayerNorm', 'beta'] electra/encoder/layer_0/output/LayerNorm/beta
Initialize PyTorch weight ['electra', 'encoder', 'layer_0', 'output', 'LayerNorm', 'gamma'] electra/encoder/layer_0/output/LayerNorm/gamma
Initialize PyTorch weight ['electra', 'encoder', 'layer_0', 'output', 'dense', 'bias'] electra/encoder/layer_0/output/dense/bias
Initialize PyTorch weight ['electra', 'encoder', 'layer_0', 'output', 'dense', 'kernel'] electra/encoder/layer_0/output/dense/kernel
Initialize PyTorch weight ['electra', 'encoder', 'layer_1', 'attention', 'output', 'LayerNorm', 'beta'] electra/encoder/layer_1/attention/output/LayerNorm/beta
Initialize PyTorch weight ['electra', 'encoder', 'layer_1', 'attention', 'output', 'LayerNorm', 'gamma'] electra/encoder/layer_1/attention/output/LayerNorm/gamma
Initialize PyTorch weight ['electra', 'encoder', 'layer_1', 'attention', 'output', 'dense', 'bias'] electra/encoder/layer_1/attention/output/dense/bias
Initialize PyTorch weight ['electra', 'encoder', 'layer_1', 'attention', 'output', 'dense', 'kernel'] electra/encoder/layer_1/attention/output/dense/kernel
Initialize PyTorch weight ['electra', 'encoder', 'layer_1', 'attention', 'self', 'key', 'bias'] electra/encoder/layer_1/attention/self/key/bias
Initialize PyTorch weight ['electra', 'encoder', 'layer_1', 'attention', 'self', 'key', 'kernel'] electra/encoder/layer_1/attention/self/key/kernel
Initialize PyTorch weight ['electra', 'encoder', 'layer_1', 'attention', 'self', 'query', 'bias'] electra/encoder/layer_1/attention/self/query/bias
Initialize PyTorch weight ['electra', 'encoder', 'layer_1', 'attention', 'self', 'query', 'kernel'] electra/encoder/layer_1/attention/self/query/kernel
Initialize PyTorch weight ['electra', 'encoder', 'layer_1', 'attention', 'self', 'value', 'bias'] electra/encoder/layer_1/attention/self/value/bias
Initialize PyTorch weight ['electra', 'encoder', 'layer_1', 'attention', 'self', 'value', 'kernel'] electra/encoder/layer_1/attention/self/value/kernel
Initialize PyTorch weight ['electra', 'encoder', 'layer_1', 'intermediate', 'dense', 'bias'] electra/encoder/layer_1/intermediate/dense/bias
Initialize PyTorch weight ['electra', 'encoder', 'layer_1', 'intermediate', 'dense', 'kernel'] electra/encoder/layer_1/intermediate/dense/kernel
Initialize PyTorch weight ['electra', 'encoder', 'layer_1', 'output', 'LayerNorm', 'beta'] electra/encoder/layer_1/output/LayerNorm/beta
Initialize PyTorch weight ['electra', 'encoder', 'layer_1', 'output', 'LayerNorm', 'gamma'] electra/encoder/layer_1/output/LayerNorm/gamma
Initialize PyTorch weight ['electra', 'encoder', 'layer_1', 'output', 'dense', 'bias'] electra/encoder/layer_1/output/dense/bias
Initialize PyTorch weight ['electra', 'encoder', 'layer_1', 'output', 'dense', 'kernel'] electra/encoder/layer_1/output/dense/kernel
Initialize PyTorch weight ['electra', 'encoder', 'layer_10', 'attention', 'output', 'LayerNorm', 'beta'] electra/encoder/layer_10/attention/output/LayerNorm/beta
Initialize PyTorch weight ['electra', 'encoder', 'layer_10', 'attention', 'output', 'LayerNorm', 'gamma'] electra/encoder/layer_10/attention/output/LayerNorm/gamma
Initialize PyTorch weight ['electra', 'encoder', 'layer_10', 'attention', 'output', 'dense', 'bias'] electra/encoder/layer_10/attention/output/dense/bias
Initialize PyTorch weight ['electra', 'encoder', 'layer_10', 'attention', 'output', 'dense', 'kernel'] electra/encoder/layer_10/attention/output/dense/kernel
Initialize PyTorch weight ['electra', 'encoder', 'layer_10', 'attention', 'self', 'key', 'bias'] electra/encoder/layer_10/attention/self/key/bias
Initialize PyTorch weight ['electra', 'encoder', 'layer_10', 'attention', 'self', 'key', 'kernel'] electra/encoder/layer_10/attention/self/key/kernel
Initialize PyTorch weight ['electra', 'encoder', 'layer_10', 'attention', 'self', 'query', 'bias'] electra/encoder/layer_10/attention/self/query/bias
Initialize PyTorch weight ['electra', 'encoder', 'layer_10', 'attention', 'self', 'query', 'kernel'] electra/encoder/layer_10/attention/self/query/kernel
Initialize PyTorch weight ['electra', 'encoder', 'layer_10', 'attention', 'self', 'value', 'bias'] electra/encoder/layer_10/attention/self/value/bias
Initialize PyTorch weight ['electra', 'encoder', 'layer_10', 'attention', 'self', 'value', 'kernel'] electra/encoder/layer_10/attention/self/value/kernel
Initialize PyTorch weight ['electra', 'encoder', 'layer_10', 'intermediate', 'dense', 'bias'] electra/encoder/layer_10/intermediate/dense/bias
Initialize PyTorch weight ['electra', 'encoder', 'layer_10', 'intermediate', 'dense', 'kernel'] electra/encoder/layer_10/intermediate/dense/kernel
Initialize PyTorch weight ['electra', 'encoder', 'layer_10', 'output', 'LayerNorm', 'beta'] electra/encoder/layer_10/output/LayerNorm/beta
Initialize PyTorch weight ['electra', 'encoder', 'layer_10', 'output', 'LayerNorm', 'gamma'] electra/encoder/layer_10/output/LayerNorm/gamma
Initialize PyTorch weight ['electra', 'encoder', 'layer_10', 'output', 'dense', 'bias'] electra/encoder/layer_10/output/dense/bias
Initialize PyTorch weight ['electra', 'encoder', 'layer_10', 'output', 'dense', 'kernel'] electra/encoder/layer_10/output/dense/kernel
Initialize PyTorch weight ['electra', 'encoder', 'layer_11', 'attention', 'output', 'LayerNorm', 'beta'] electra/encoder/layer_11/attention/output/LayerNorm/beta
Initialize PyTorch weight ['electra', 'encoder', 'layer_11', 'attention', 'output', 'LayerNorm', 'gamma'] electra/encoder/layer_11/attention/output/LayerNorm/gamma
Initialize PyTorch weight ['electra', 'encoder', 'layer_11', 'attention', 'output', 'dense', 'bias'] electra/encoder/layer_11/attention/output/dense/bias
Initialize PyTorch weight ['electra', 'encoder', 'layer_11', 'attention', 'output', 'dense', 'kernel'] electra/encoder/layer_11/attention/output/dense/kernel
Initialize PyTorch weight ['electra', 'encoder', 'layer_11', 'attention', 'self', 'key', 'bias'] electra/encoder/layer_11/attention/self/key/bias
Initialize PyTorch weight ['electra', 'encoder', 'layer_11', 'attention', 'self', 'key', 'kernel'] electra/encoder/layer_11/attention/self/key/kernel
Initialize PyTorch weight ['electra', 'encoder', 'layer_11', 'attention', 'self', 'query', 'bias'] electra/encoder/layer_11/attention/self/query/bias
Initialize PyTorch weight ['electra', 'encoder', 'layer_11', 'attention', 'self', 'query', 'kernel'] electra/encoder/layer_11/attention/self/query/kernel
Initialize PyTorch weight ['electra', 'encoder', 'layer_11', 'attention', 'self', 'value', 'bias'] electra/encoder/layer_11/attention/self/value/bias
Initialize PyTorch weight ['electra', 'encoder', 'layer_11', 'attention', 'self', 'value', 'kernel'] electra/encoder/layer_11/attention/self/value/kernel
Initialize PyTorch weight ['electra', 'encoder', 'layer_11', 'intermediate', 'dense', 'bias'] electra/encoder/layer_11/intermediate/dense/bias
Initialize PyTorch weight ['electra', 'encoder', 'layer_11', 'intermediate', 'dense', 'kernel'] electra/encoder/layer_11/intermediate/dense/kernel
Initialize PyTorch weight ['electra', 'encoder', 'layer_11', 'output', 'LayerNorm', 'beta'] electra/encoder/layer_11/output/LayerNorm/beta
Initialize PyTorch weight ['electra', 'encoder', 'layer_11', 'output', 'LayerNorm', 'gamma'] electra/encoder/layer_11/output/LayerNorm/gamma
Initialize PyTorch weight ['electra', 'encoder', 'layer_11', 'output', 'dense', 'bias'] electra/encoder/layer_11/output/dense/bias
Initialize PyTorch weight ['electra', 'encoder', 'layer_11', 'output', 'dense', 'kernel'] electra/encoder/layer_11/output/dense/kernel
Initialize PyTorch weight ['electra', 'encoder', 'layer_2', 'attention', 'output', 'LayerNorm', 'beta'] electra/encoder/layer_2/attention/output/LayerNorm/beta
Initialize PyTorch weight ['electra', 'encoder', 'layer_2', 'attention', 'output', 'LayerNorm', 'gamma'] electra/encoder/layer_2/attention/output/LayerNorm/gamma
Initialize PyTorch weight ['electra', 'encoder', 'layer_2', 'attention', 'output', 'dense', 'bias'] electra/encoder/layer_2/attention/output/dense/bias
Initialize PyTorch weight ['electra', 'encoder', 'layer_2', 'attention', 'output', 'dense', 'kernel'] electra/encoder/layer_2/attention/output/dense/kernel
Initialize PyTorch weight ['electra', 'encoder', 'layer_2', 'attention', 'self', 'key', 'bias'] electra/encoder/layer_2/attention/self/key/bias
Initialize PyTorch weight ['electra', 'encoder', 'layer_2', 'attention', 'self', 'key', 'kernel'] electra/encoder/layer_2/attention/self/key/kernel
Initialize PyTorch weight ['electra', 'encoder', 'layer_2', 'attention', 'self', 'query', 'bias'] electra/encoder/layer_2/attention/self/query/bias
Initialize PyTorch weight ['electra', 'encoder', 'layer_2', 'attention', 'self', 'query', 'kernel'] electra/encoder/layer_2/attention/self/query/kernel
Initialize PyTorch weight ['electra', 'encoder', 'layer_2', 'attention', 'self', 'value', 'bias'] electra/encoder/layer_2/attention/self/value/bias
Initialize PyTorch weight ['electra', 'encoder', 'layer_2', 'attention', 'self', 'value', 'kernel'] electra/encoder/layer_2/attention/self/value/kernel
Initialize PyTorch weight ['electra', 'encoder', 'layer_2', 'intermediate', 'dense', 'bias'] electra/encoder/layer_2/intermediate/dense/bias
Initialize PyTorch weight ['electra', 'encoder', 'layer_2', 'intermediate', 'dense', 'kernel'] electra/encoder/layer_2/intermediate/dense/kernel
Initialize PyTorch weight ['electra', 'encoder', 'layer_2', 'output', 'LayerNorm', 'beta'] electra/encoder/layer_2/output/LayerNorm/beta
Initialize PyTorch weight ['electra', 'encoder', 'layer_2', 'output', 'LayerNorm', 'gamma'] electra/encoder/layer_2/output/LayerNorm/gamma
Initialize PyTorch weight ['electra', 'encoder', 'layer_2', 'output', 'dense', 'bias'] electra/encoder/layer_2/output/dense/bias
Initialize PyTorch weight ['electra', 'encoder', 'layer_2', 'output', 'dense', 'kernel'] electra/encoder/layer_2/output/dense/kernel
Initialize PyTorch weight ['electra', 'encoder', 'layer_3', 'attention', 'output', 'LayerNorm', 'beta'] electra/encoder/layer_3/attention/output/LayerNorm/beta
Initialize PyTorch weight ['electra', 'encoder', 'layer_3', 'attention', 'output', 'LayerNorm', 'gamma'] electra/encoder/layer_3/attention/output/LayerNorm/gamma
Initialize PyTorch weight ['electra', 'encoder', 'layer_3', 'attention', 'output', 'dense', 'bias'] electra/encoder/layer_3/attention/output/dense/bias
Initialize PyTorch weight ['electra', 'encoder', 'layer_3', 'attention', 'output', 'dense', 'kernel'] electra/encoder/layer_3/attention/output/dense/kernel
Initialize PyTorch weight ['electra', 'encoder', 'layer_3', 'attention', 'self', 'key', 'bias'] electra/encoder/layer_3/attention/self/key/bias
Initialize PyTorch weight ['electra', 'encoder', 'layer_3', 'attention', 'self', 'key', 'kernel'] electra/encoder/layer_3/attention/self/key/kernel
Initialize PyTorch weight ['electra', 'encoder', 'layer_3', 'attention', 'self', 'query', 'bias'] electra/encoder/layer_3/attention/self/query/bias
Initialize PyTorch weight ['electra', 'encoder', 'layer_3', 'attention', 'self', 'query', 'kernel'] electra/encoder/layer_3/attention/self/query/kernel
Initialize PyTorch weight ['electra', 'encoder', 'layer_3', 'attention', 'self', 'value', 'bias'] electra/encoder/layer_3/attention/self/value/bias
Initialize PyTorch weight ['electra', 'encoder', 'layer_3', 'attention', 'self', 'value', 'kernel'] electra/encoder/layer_3/attention/self/value/kernel
Initialize PyTorch weight ['electra', 'encoder', 'layer_3', 'intermediate', 'dense', 'bias'] electra/encoder/layer_3/intermediate/dense/bias
Initialize PyTorch weight ['electra', 'encoder', 'layer_3', 'intermediate', 'dense', 'kernel'] electra/encoder/layer_3/intermediate/dense/kernel
Initialize PyTorch weight ['electra', 'encoder', 'layer_3', 'output', 'LayerNorm', 'beta'] electra/encoder/layer_3/output/LayerNorm/beta
Initialize PyTorch weight ['electra', 'encoder', 'layer_3', 'output', 'LayerNorm', 'gamma'] electra/encoder/layer_3/output/LayerNorm/gamma
Initialize PyTorch weight ['electra', 'encoder', 'layer_3', 'output', 'dense', 'bias'] electra/encoder/layer_3/output/dense/bias
Initialize PyTorch weight ['electra', 'encoder', 'layer_3', 'output', 'dense', 'kernel'] electra/encoder/layer_3/output/dense/kernel
Initialize PyTorch weight ['electra', 'encoder', 'layer_4', 'attention', 'output', 'LayerNorm', 'beta'] electra/encoder/layer_4/attention/output/LayerNorm/beta
Initialize PyTorch weight ['electra', 'encoder', 'layer_4', 'attention', 'output', 'LayerNorm', 'gamma'] electra/encoder/layer_4/attention/output/LayerNorm/gamma
Initialize PyTorch weight ['electra', 'encoder', 'layer_4', 'attention', 'output', 'dense', 'bias'] electra/encoder/layer_4/attention/output/dense/bias
Initialize PyTorch weight ['electra', 'encoder', 'layer_4', 'attention', 'output', 'dense', 'kernel'] electra/encoder/layer_4/attention/output/dense/kernel
Initialize PyTorch weight ['electra', 'encoder', 'layer_4', 'attention', 'self', 'key', 'bias'] electra/encoder/layer_4/attention/self/key/bias
Initialize PyTorch weight ['electra', 'encoder', 'layer_4', 'attention', 'self', 'key', 'kernel'] electra/encoder/layer_4/attention/self/key/kernel
Initialize PyTorch weight ['electra', 'encoder', 'layer_4', 'attention', 'self', 'query', 'bias'] electra/encoder/layer_4/attention/self/query/bias
Initialize PyTorch weight ['electra', 'encoder', 'layer_4', 'attention', 'self', 'query', 'kernel'] electra/encoder/layer_4/attention/self/query/kernel
Initialize PyTorch weight ['electra', 'encoder', 'layer_4', 'attention', 'self', 'value', 'bias'] electra/encoder/layer_4/attention/self/value/bias
Initialize PyTorch weight ['electra', 'encoder', 'layer_4', 'attention', 'self', 'value', 'kernel'] electra/encoder/layer_4/attention/self/value/kernel
Initialize PyTorch weight ['electra', 'encoder', 'layer_4', 'intermediate', 'dense', 'bias'] electra/encoder/layer_4/intermediate/dense/bias
Initialize PyTorch weight ['electra', 'encoder', 'layer_4', 'intermediate', 'dense', 'kernel'] electra/encoder/layer_4/intermediate/dense/kernel
Initialize PyTorch weight ['electra', 'encoder', 'layer_4', 'output', 'LayerNorm', 'beta'] electra/encoder/layer_4/output/LayerNorm/beta
Initialize PyTorch weight ['electra', 'encoder', 'layer_4', 'output', 'LayerNorm', 'gamma'] electra/encoder/layer_4/output/LayerNorm/gamma
Initialize PyTorch weight ['electra', 'encoder', 'layer_4', 'output', 'dense', 'bias'] electra/encoder/layer_4/output/dense/bias
Initialize PyTorch weight ['electra', 'encoder', 'layer_4', 'output', 'dense', 'kernel'] electra/encoder/layer_4/output/dense/kernel
Initialize PyTorch weight ['electra', 'encoder', 'layer_5', 'attention', 'output', 'LayerNorm', 'beta'] electra/encoder/layer_5/attention/output/LayerNorm/beta
Initialize PyTorch weight ['electra', 'encoder', 'layer_5', 'attention', 'output', 'LayerNorm', 'gamma'] electra/encoder/layer_5/attention/output/LayerNorm/gamma
Initialize PyTorch weight ['electra', 'encoder', 'layer_5', 'attention', 'output', 'dense', 'bias'] electra/encoder/layer_5/attention/output/dense/bias
Initialize PyTorch weight ['electra', 'encoder', 'layer_5', 'attention', 'output', 'dense', 'kernel'] electra/encoder/layer_5/attention/output/dense/kernel
Initialize PyTorch weight ['electra', 'encoder', 'layer_5', 'attention', 'self', 'key', 'bias'] electra/encoder/layer_5/attention/self/key/bias
Initialize PyTorch weight ['electra', 'encoder', 'layer_5', 'attention', 'self', 'key', 'kernel'] electra/encoder/layer_5/attention/self/key/kernel
Initialize PyTorch weight ['electra', 'encoder', 'layer_5', 'attention', 'self', 'query', 'bias'] electra/encoder/layer_5/attention/self/query/bias
Initialize PyTorch weight ['electra', 'encoder', 'layer_5', 'attention', 'self', 'query', 'kernel'] electra/encoder/layer_5/attention/self/query/kernel
Initialize PyTorch weight ['electra', 'encoder', 'layer_5', 'attention', 'self', 'value', 'bias'] electra/encoder/layer_5/attention/self/value/bias
Initialize PyTorch weight ['electra', 'encoder', 'layer_5', 'attention', 'self', 'value', 'kernel'] electra/encoder/layer_5/attention/self/value/kernel
Initialize PyTorch weight ['electra', 'encoder', 'layer_5', 'intermediate', 'dense', 'bias'] electra/encoder/layer_5/intermediate/dense/bias
Initialize PyTorch weight ['electra', 'encoder', 'layer_5', 'intermediate', 'dense', 'kernel'] electra/encoder/layer_5/intermediate/dense/kernel
Initialize PyTorch weight ['electra', 'encoder', 'layer_5', 'output', 'LayerNorm', 'beta'] electra/encoder/layer_5/output/LayerNorm/beta
Initialize PyTorch weight ['electra', 'encoder', 'layer_5', 'output', 'LayerNorm', 'gamma'] electra/encoder/layer_5/output/LayerNorm/gamma
Initialize PyTorch weight ['electra', 'encoder', 'layer_5', 'output', 'dense', 'bias'] electra/encoder/layer_5/output/dense/bias
Initialize PyTorch weight ['electra', 'encoder', 'layer_5', 'output', 'dense', 'kernel'] electra/encoder/layer_5/output/dense/kernel
Initialize PyTorch weight ['electra', 'encoder', 'layer_6', 'attention', 'output', 'LayerNorm', 'beta'] electra/encoder/layer_6/attention/output/LayerNorm/beta
Initialize PyTorch weight ['electra', 'encoder', 'layer_6', 'attention', 'output', 'LayerNorm', 'gamma'] electra/encoder/layer_6/attention/output/LayerNorm/gamma
Initialize PyTorch weight ['electra', 'encoder', 'layer_6', 'attention', 'output', 'dense', 'bias'] electra/encoder/layer_6/attention/output/dense/bias
Initialize PyTorch weight ['electra', 'encoder', 'layer_6', 'attention', 'output', 'dense', 'kernel'] electra/encoder/layer_6/attention/output/dense/kernel
Initialize PyTorch weight ['electra', 'encoder', 'layer_6', 'attention', 'self', 'key', 'bias'] electra/encoder/layer_6/attention/self/key/bias
Initialize PyTorch weight ['electra', 'encoder', 'layer_6', 'attention', 'self', 'key', 'kernel'] electra/encoder/layer_6/attention/self/key/kernel
Initialize PyTorch weight ['electra', 'encoder', 'layer_6', 'attention', 'self', 'query', 'bias'] electra/encoder/layer_6/attention/self/query/bias
Initialize PyTorch weight ['electra', 'encoder', 'layer_6', 'attention', 'self', 'query', 'kernel'] electra/encoder/layer_6/attention/self/query/kernel
Initialize PyTorch weight ['electra', 'encoder', 'layer_6', 'attention', 'self', 'value', 'bias'] electra/encoder/layer_6/attention/self/value/bias
Initialize PyTorch weight ['electra', 'encoder', 'layer_6', 'attention', 'self', 'value', 'kernel'] electra/encoder/layer_6/attention/self/value/kernel
Initialize PyTorch weight ['electra', 'encoder', 'layer_6', 'intermediate', 'dense', 'bias'] electra/encoder/layer_6/intermediate/dense/bias
Initialize PyTorch weight ['electra', 'encoder', 'layer_6', 'intermediate', 'dense', 'kernel'] electra/encoder/layer_6/intermediate/dense/kernel
Initialize PyTorch weight ['electra', 'encoder', 'layer_6', 'output', 'LayerNorm', 'beta'] electra/encoder/layer_6/output/LayerNorm/beta
Initialize PyTorch weight ['electra', 'encoder', 'layer_6', 'output', 'LayerNorm', 'gamma'] electra/encoder/layer_6/output/LayerNorm/gamma
Initialize PyTorch weight ['electra', 'encoder', 'layer_6', 'output', 'dense', 'bias'] electra/encoder/layer_6/output/dense/bias
Initialize PyTorch weight ['electra', 'encoder', 'layer_6', 'output', 'dense', 'kernel'] electra/encoder/layer_6/output/dense/kernel
Initialize PyTorch weight ['electra', 'encoder', 'layer_7', 'attention', 'output', 'LayerNorm', 'beta'] electra/encoder/layer_7/attention/output/LayerNorm/beta
Initialize PyTorch weight ['electra', 'encoder', 'layer_7', 'attention', 'output', 'LayerNorm', 'gamma'] electra/encoder/layer_7/attention/output/LayerNorm/gamma
Initialize PyTorch weight ['electra', 'encoder', 'layer_7', 'attention', 'output', 'dense', 'bias'] electra/encoder/layer_7/attention/output/dense/bias
Initialize PyTorch weight ['electra', 'encoder', 'layer_7', 'attention', 'output', 'dense', 'kernel'] electra/encoder/layer_7/attention/output/dense/kernel
Initialize PyTorch weight ['electra', 'encoder', 'layer_7', 'attention', 'self', 'key', 'bias'] electra/encoder/layer_7/attention/self/key/bias
Initialize PyTorch weight ['electra', 'encoder', 'layer_7', 'attention', 'self', 'key', 'kernel'] electra/encoder/layer_7/attention/self/key/kernel
Initialize PyTorch weight ['electra', 'encoder', 'layer_7', 'attention', 'self', 'query', 'bias'] electra/encoder/layer_7/attention/self/query/bias
Initialize PyTorch weight ['electra', 'encoder', 'layer_7', 'attention', 'self', 'query', 'kernel'] electra/encoder/layer_7/attention/self/query/kernel
Initialize PyTorch weight ['electra', 'encoder', 'layer_7', 'attention', 'self', 'value', 'bias'] electra/encoder/layer_7/attention/self/value/bias
Initialize PyTorch weight ['electra', 'encoder', 'layer_7', 'attention', 'self', 'value', 'kernel'] electra/encoder/layer_7/attention/self/value/kernel
Initialize PyTorch weight ['electra', 'encoder', 'layer_7', 'intermediate', 'dense', 'bias'] electra/encoder/layer_7/intermediate/dense/bias
Initialize PyTorch weight ['electra', 'encoder', 'layer_7', 'intermediate', 'dense', 'kernel'] electra/encoder/layer_7/intermediate/dense/kernel
Initialize PyTorch weight ['electra', 'encoder', 'layer_7', 'output', 'LayerNorm', 'beta'] electra/encoder/layer_7/output/LayerNorm/beta
Initialize PyTorch weight ['electra', 'encoder', 'layer_7', 'output', 'LayerNorm', 'gamma'] electra/encoder/layer_7/output/LayerNorm/gamma
Initialize PyTorch weight ['electra', 'encoder', 'layer_7', 'output', 'dense', 'bias'] electra/encoder/layer_7/output/dense/bias
Initialize PyTorch weight ['electra', 'encoder', 'layer_7', 'output', 'dense', 'kernel'] electra/encoder/layer_7/output/dense/kernel
Initialize PyTorch weight ['electra', 'encoder', 'layer_8', 'attention', 'output', 'LayerNorm', 'beta'] electra/encoder/layer_8/attention/output/LayerNorm/beta
Initialize PyTorch weight ['electra', 'encoder', 'layer_8', 'attention', 'output', 'LayerNorm', 'gamma'] electra/encoder/layer_8/attention/output/LayerNorm/gamma
Initialize PyTorch weight ['electra', 'encoder', 'layer_8', 'attention', 'output', 'dense', 'bias'] electra/encoder/layer_8/attention/output/dense/bias
Initialize PyTorch weight ['electra', 'encoder', 'layer_8', 'attention', 'output', 'dense', 'kernel'] electra/encoder/layer_8/attention/output/dense/kernel
Initialize PyTorch weight ['electra', 'encoder', 'layer_8', 'attention', 'self', 'key', 'bias'] electra/encoder/layer_8/attention/self/key/bias
Initialize PyTorch weight ['electra', 'encoder', 'layer_8', 'attention', 'self', 'key', 'kernel'] electra/encoder/layer_8/attention/self/key/kernel
Initialize PyTorch weight ['electra', 'encoder', 'layer_8', 'attention', 'self', 'query', 'bias'] electra/encoder/layer_8/attention/self/query/bias
Initialize PyTorch weight ['electra', 'encoder', 'layer_8', 'attention', 'self', 'query', 'kernel'] electra/encoder/layer_8/attention/self/query/kernel
Initialize PyTorch weight ['electra', 'encoder', 'layer_8', 'attention', 'self', 'value', 'bias'] electra/encoder/layer_8/attention/self/value/bias
Initialize PyTorch weight ['electra', 'encoder', 'layer_8', 'attention', 'self', 'value', 'kernel'] electra/encoder/layer_8/attention/self/value/kernel
Initialize PyTorch weight ['electra', 'encoder', 'layer_8', 'intermediate', 'dense', 'bias'] electra/encoder/layer_8/intermediate/dense/bias
Initialize PyTorch weight ['electra', 'encoder', 'layer_8', 'intermediate', 'dense', 'kernel'] electra/encoder/layer_8/intermediate/dense/kernel
Initialize PyTorch weight ['electra', 'encoder', 'layer_8', 'output', 'LayerNorm', 'beta'] electra/encoder/layer_8/output/LayerNorm/beta
Initialize PyTorch weight ['electra', 'encoder', 'layer_8', 'output', 'LayerNorm', 'gamma'] electra/encoder/layer_8/output/LayerNorm/gamma
Initialize PyTorch weight ['electra', 'encoder', 'layer_8', 'output', 'dense', 'bias'] electra/encoder/layer_8/output/dense/bias
Initialize PyTorch weight ['electra', 'encoder', 'layer_8', 'output', 'dense', 'kernel'] electra/encoder/layer_8/output/dense/kernel
Initialize PyTorch weight ['electra', 'encoder', 'layer_9', 'attention', 'output', 'LayerNorm', 'beta'] electra/encoder/layer_9/attention/output/LayerNorm/beta
Initialize PyTorch weight ['electra', 'encoder', 'layer_9', 'attention', 'output', 'LayerNorm', 'gamma'] electra/encoder/layer_9/attention/output/LayerNorm/gamma
Initialize PyTorch weight ['electra', 'encoder', 'layer_9', 'attention', 'output', 'dense', 'bias'] electra/encoder/layer_9/attention/output/dense/bias
Initialize PyTorch weight ['electra', 'encoder', 'layer_9', 'attention', 'output', 'dense', 'kernel'] electra/encoder/layer_9/attention/output/dense/kernel
Initialize PyTorch weight ['electra', 'encoder', 'layer_9', 'attention', 'self', 'key', 'bias'] electra/encoder/layer_9/attention/self/key/bias
Initialize PyTorch weight ['electra', 'encoder', 'layer_9', 'attention', 'self', 'key', 'kernel'] electra/encoder/layer_9/attention/self/key/kernel
Initialize PyTorch weight ['electra', 'encoder', 'layer_9', 'attention', 'self', 'query', 'bias'] electra/encoder/layer_9/attention/self/query/bias
Initialize PyTorch weight ['electra', 'encoder', 'layer_9', 'attention', 'self', 'query', 'kernel'] electra/encoder/layer_9/attention/self/query/kernel
Initialize PyTorch weight ['electra', 'encoder', 'layer_9', 'attention', 'self', 'value', 'bias'] electra/encoder/layer_9/attention/self/value/bias
Initialize PyTorch weight ['electra', 'encoder', 'layer_9', 'attention', 'self', 'value', 'kernel'] electra/encoder/layer_9/attention/self/value/kernel
Initialize PyTorch weight ['electra', 'encoder', 'layer_9', 'intermediate', 'dense', 'bias'] electra/encoder/layer_9/intermediate/dense/bias
Initialize PyTorch weight ['electra', 'encoder', 'layer_9', 'intermediate', 'dense', 'kernel'] electra/encoder/layer_9/intermediate/dense/kernel
Initialize PyTorch weight ['electra', 'encoder', 'layer_9', 'output', 'LayerNorm', 'beta'] electra/encoder/layer_9/output/LayerNorm/beta
Initialize PyTorch weight ['electra', 'encoder', 'layer_9', 'output', 'LayerNorm', 'gamma'] electra/encoder/layer_9/output/LayerNorm/gamma
Initialize PyTorch weight ['electra', 'encoder', 'layer_9', 'output', 'dense', 'bias'] electra/encoder/layer_9/output/dense/bias
Initialize PyTorch weight ['electra', 'encoder', 'layer_9', 'output', 'dense', 'kernel'] electra/encoder/layer_9/output/dense/kernel
Skipping generator/embeddings_project/bias ['generator', 'embeddings_project', 'bias'] 'ElectraForPreTraining' object has no attribute 'generator'
Skipping generator/embeddings_project/kernel ['generator', 'embeddings_project', 'kernel'] 'ElectraForPreTraining' object has no attribute 'generator'
Skipping generator/encoder/layer_0/attention/output/LayerNorm/beta ['generator', 'encoder', 'layer_0', 'attention', 'output', 'LayerNorm', 'beta'] 'ElectraForPreTraining' object has no attribute 'generator'
Skipping generator/encoder/layer_0/attention/output/LayerNorm/gamma ['generator', 'encoder', 'layer_0', 'attention', 'output', 'LayerNorm', 'gamma'] 'ElectraForPreTraining' object has no attribute 'generator'
Skipping generator/encoder/layer_0/attention/output/dense/bias ['generator', 'encoder', 'layer_0', 'attention', 'output', 'dense', 'bias'] 'ElectraForPreTraining' object has no attribute 'generator'
Skipping generator/encoder/layer_0/attention/output/dense/kernel ['generator', 'encoder', 'layer_0', 'attention', 'output', 'dense', 'kernel'] 'ElectraForPreTraining' object has no attribute 'generator'
Skipping generator/encoder/layer_0/attention/self/key/bias ['generator', 'encoder', 'layer_0', 'attention', 'self', 'key', 'bias'] 'ElectraForPreTraining' object has no attribute 'generator'
Skipping generator/encoder/layer_0/attention/self/key/kernel ['generator', 'encoder', 'layer_0', 'attention', 'self', 'key', 'kernel'] 'ElectraForPreTraining' object has no attribute 'generator'
Skipping generator/encoder/layer_0/attention/self/query/bias ['generator', 'encoder', 'layer_0', 'attention', 'self', 'query', 'bias'] 'ElectraForPreTraining' object has no attribute 'generator'
Skipping generator/encoder/layer_0/attention/self/query/kernel ['generator', 'encoder', 'layer_0', 'attention', 'self', 'query', 'kernel'] 'ElectraForPreTraining' object has no attribute 'generator'
Skipping generator/encoder/layer_0/attention/self/value/bias ['generator', 'encoder', 'layer_0', 'attention', 'self', 'value', 'bias'] 'ElectraForPreTraining' object has no attribute 'generator'
Skipping generator/encoder/layer_0/attention/self/value/kernel ['generator', 'encoder', 'layer_0', 'attention', 'self', 'value', 'kernel'] 'ElectraForPreTraining' object has no attribute 'generator'
Skipping generator/encoder/layer_0/intermediate/dense/bias ['generator', 'encoder', 'layer_0', 'intermediate', 'dense', 'bias'] 'ElectraForPreTraining' object has no attribute 'generator'
Skipping generator/encoder/layer_0/intermediate/dense/kernel ['generator', 'encoder', 'layer_0', 'intermediate', 'dense', 'kernel'] 'ElectraForPreTraining' object has no attribute 'generator'
Skipping generator/encoder/layer_0/output/LayerNorm/beta ['generator', 'encoder', 'layer_0', 'output', 'LayerNorm', 'beta'] 'ElectraForPreTraining' object has no attribute 'generator'
Skipping generator/encoder/layer_0/output/LayerNorm/gamma ['generator', 'encoder', 'layer_0', 'output', 'LayerNorm', 'gamma'] 'ElectraForPreTraining' object has no attribute 'generator'
Skipping generator/encoder/layer_0/output/dense/bias ['generator', 'encoder', 'layer_0', 'output', 'dense', 'bias'] 'ElectraForPreTraining' object has no attribute 'generator'
Skipping generator/encoder/layer_0/output/dense/kernel ['generator', 'encoder', 'layer_0', 'output', 'dense', 'kernel'] 'ElectraForPreTraining' object has no attribute 'generator'
Skipping generator/encoder/layer_1/attention/output/LayerNorm/beta ['generator', 'encoder', 'layer_1', 'attention', 'output', 'LayerNorm', 'beta'] 'ElectraForPreTraining' object has no attribute 'generator'
Skipping generator/encoder/layer_1/attention/output/LayerNorm/gamma ['generator', 'encoder', 'layer_1', 'attention', 'output', 'LayerNorm', 'gamma'] 'ElectraForPreTraining' object has no attribute 'generator'
Skipping generator/encoder/layer_1/attention/output/dense/bias ['generator', 'encoder', 'layer_1', 'attention', 'output', 'dense', 'bias'] 'ElectraForPreTraining' object has no attribute 'generator'
Skipping generator/encoder/layer_1/attention/output/dense/kernel ['generator', 'encoder', 'layer_1', 'attention', 'output', 'dense', 'kernel'] 'ElectraForPreTraining' object has no attribute 'generator'
Skipping generator/encoder/layer_1/attention/self/key/bias ['generator', 'encoder', 'layer_1', 'attention', 'self', 'key', 'bias'] 'ElectraForPreTraining' object has no attribute 'generator'
Skipping generator/encoder/layer_1/attention/self/key/kernel ['generator', 'encoder', 'layer_1', 'attention', 'self', 'key', 'kernel'] 'ElectraForPreTraining' object has no attribute 'generator'
Skipping generator/encoder/layer_1/attention/self/query/bias ['generator', 'encoder', 'layer_1', 'attention', 'self', 'query', 'bias'] 'ElectraForPreTraining' object has no attribute 'generator'
Skipping generator/encoder/layer_1/attention/self/query/kernel ['generator', 'encoder', 'layer_1', 'attention', 'self', 'query', 'kernel'] 'ElectraForPreTraining' object has no attribute 'generator'
Skipping generator/encoder/layer_1/attention/self/value/bias ['generator', 'encoder', 'layer_1', 'attention', 'self', 'value', 'bias'] 'ElectraForPreTraining' object has no attribute 'generator'
Skipping generator/encoder/layer_1/attention/self/value/kernel ['generator', 'encoder', 'layer_1', 'attention', 'self', 'value', 'kernel'] 'ElectraForPreTraining' object has no attribute 'generator'
Skipping generator/encoder/layer_1/intermediate/dense/bias ['generator', 'encoder', 'layer_1', 'intermediate', 'dense', 'bias'] 'ElectraForPreTraining' object has no attribute 'generator'
Skipping generator/encoder/layer_1/intermediate/dense/kernel ['generator', 'encoder', 'layer_1', 'intermediate', 'dense', 'kernel'] 'ElectraForPreTraining' object has no attribute 'generator'
Skipping generator/encoder/layer_1/output/LayerNorm/beta ['generator', 'encoder', 'layer_1', 'output', 'LayerNorm', 'beta'] 'ElectraForPreTraining' object has no attribute 'generator'
Skipping generator/encoder/layer_1/output/LayerNorm/gamma ['generator', 'encoder', 'layer_1', 'output', 'LayerNorm', 'gamma'] 'ElectraForPreTraining' object has no attribute 'generator'
Skipping generator/encoder/layer_1/output/dense/bias ['generator', 'encoder', 'layer_1', 'output', 'dense', 'bias'] 'ElectraForPreTraining' object has no attribute 'generator'
Skipping generator/encoder/layer_1/output/dense/kernel ['generator', 'encoder', 'layer_1', 'output', 'dense', 'kernel'] 'ElectraForPreTraining' object has no attribute 'generator'
Skipping generator/encoder/layer_10/attention/output/LayerNorm/beta ['generator', 'encoder', 'layer_10', 'attention', 'output', 'LayerNorm', 'beta'] 'ElectraForPreTraining' object has no attribute 'generator'
Skipping generator/encoder/layer_10/attention/output/LayerNorm/gamma ['generator', 'encoder', 'layer_10', 'attention', 'output', 'LayerNorm', 'gamma'] 'ElectraForPreTraining' object has no attribute 'generator'
Skipping generator/encoder/layer_10/attention/output/dense/bias ['generator', 'encoder', 'layer_10', 'attention', 'output', 'dense', 'bias'] 'ElectraForPreTraining' object has no attribute 'generator'
Skipping generator/encoder/layer_10/attention/output/dense/kernel ['generator', 'encoder', 'layer_10', 'attention', 'output', 'dense', 'kernel'] 'ElectraForPreTraining' object has no attribute 'generator'
Skipping generator/encoder/layer_10/attention/self/key/bias ['generator', 'encoder', 'layer_10', 'attention', 'self', 'key', 'bias'] 'ElectraForPreTraining' object has no attribute 'generator'
Skipping generator/encoder/layer_10/attention/self/key/kernel ['generator', 'encoder', 'layer_10', 'attention', 'self', 'key', 'kernel'] 'ElectraForPreTraining' object has no attribute 'generator'
Skipping generator/encoder/layer_10/attention/self/query/bias ['generator', 'encoder', 'layer_10', 'attention', 'self', 'query', 'bias'] 'ElectraForPreTraining' object has no attribute 'generator'
Skipping generator/encoder/layer_10/attention/self/query/kernel ['generator', 'encoder', 'layer_10', 'attention', 'self', 'query', 'kernel'] 'ElectraForPreTraining' object has no attribute 'generator'
Skipping generator/encoder/layer_10/attention/self/value/bias ['generator', 'encoder', 'layer_10', 'attention', 'self', 'value', 'bias'] 'ElectraForPreTraining' object has no attribute 'generator'
Skipping generator/encoder/layer_10/attention/self/value/kernel ['generator', 'encoder', 'layer_10', 'attention', 'self', 'value', 'kernel'] 'ElectraForPreTraining' object has no attribute 'generator'
Skipping generator/encoder/layer_10/intermediate/dense/bias ['generator', 'encoder', 'layer_10', 'intermediate', 'dense', 'bias'] 'ElectraForPreTraining' object has no attribute 'generator'
Skipping generator/encoder/layer_10/intermediate/dense/kernel ['generator', 'encoder', 'layer_10', 'intermediate', 'dense', 'kernel'] 'ElectraForPreTraining' object has no attribute 'generator'
Skipping generator/encoder/layer_10/output/LayerNorm/beta ['generator', 'encoder', 'layer_10', 'output', 'LayerNorm', 'beta'] 'ElectraForPreTraining' object has no attribute 'generator'
Skipping generator/encoder/layer_10/output/LayerNorm/gamma ['generator', 'encoder', 'layer_10', 'output', 'LayerNorm', 'gamma'] 'ElectraForPreTraining' object has no attribute 'generator'
Skipping generator/encoder/layer_10/output/dense/bias ['generator', 'encoder', 'layer_10', 'output', 'dense', 'bias'] 'ElectraForPreTraining' object has no attribute 'generator'
Skipping generator/encoder/layer_10/output/dense/kernel ['generator', 'encoder', 'layer_10', 'output', 'dense', 'kernel'] 'ElectraForPreTraining' object has no attribute 'generator'
Skipping generator/encoder/layer_11/attention/output/LayerNorm/beta ['generator', 'encoder', 'layer_11', 'attention', 'output', 'LayerNorm', 'beta'] 'ElectraForPreTraining' object has no attribute 'generator'
Skipping generator/encoder/layer_11/attention/output/LayerNorm/gamma ['generator', 'encoder', 'layer_11', 'attention', 'output', 'LayerNorm', 'gamma'] 'ElectraForPreTraining' object has no attribute 'generator'
Skipping generator/encoder/layer_11/attention/output/dense/bias ['generator', 'encoder', 'layer_11', 'attention', 'output', 'dense', 'bias'] 'ElectraForPreTraining' object has no attribute 'generator'
Skipping generator/encoder/layer_11/attention/output/dense/kernel ['generator', 'encoder', 'layer_11', 'attention', 'output', 'dense', 'kernel'] 'ElectraForPreTraining' object has no attribute 'generator'
Skipping generator/encoder/layer_11/attention/self/key/bias ['generator', 'encoder', 'layer_11', 'attention', 'self', 'key', 'bias'] 'ElectraForPreTraining' object has no attribute 'generator'
Skipping generator/encoder/layer_11/attention/self/key/kernel ['generator', 'encoder', 'layer_11', 'attention', 'self', 'key', 'kernel'] 'ElectraForPreTraining' object has no attribute 'generator'
Skipping generator/encoder/layer_11/attention/self/query/bias ['generator', 'encoder', 'layer_11', 'attention', 'self', 'query', 'bias'] 'ElectraForPreTraining' object has no attribute 'generator'
Skipping generator/encoder/layer_11/attention/self/query/kernel ['generator', 'encoder', 'layer_11', 'attention', 'self', 'query', 'kernel'] 'ElectraForPreTraining' object has no attribute 'generator'
Skipping generator/encoder/layer_11/attention/self/value/bias ['generator', 'encoder', 'layer_11', 'attention', 'self', 'value', 'bias'] 'ElectraForPreTraining' object has no attribute 'generator'
Skipping generator/encoder/layer_11/attention/self/value/kernel ['generator', 'encoder', 'layer_11', 'attention', 'self', 'value', 'kernel'] 'ElectraForPreTraining' object has no attribute 'generator'
Skipping generator/encoder/layer_11/intermediate/dense/bias ['generator', 'encoder', 'layer_11', 'intermediate', 'dense', 'bias'] 'ElectraForPreTraining' object has no attribute 'generator'
Skipping generator/encoder/layer_11/intermediate/dense/kernel ['generator', 'encoder', 'layer_11', 'intermediate', 'dense', 'kernel'] 'ElectraForPreTraining' object has no attribute 'generator'
Skipping generator/encoder/layer_11/output/LayerNorm/beta ['generator', 'encoder', 'layer_11', 'output', 'LayerNorm', 'beta'] 'ElectraForPreTraining' object has no attribute 'generator'
Skipping generator/encoder/layer_11/output/LayerNorm/gamma ['generator', 'encoder', 'layer_11', 'output', 'LayerNorm', 'gamma'] 'ElectraForPreTraining' object has no attribute 'generator'
Skipping generator/encoder/layer_11/output/dense/bias ['generator', 'encoder', 'layer_11', 'output', 'dense', 'bias'] 'ElectraForPreTraining' object has no attribute 'generator'
Skipping generator/encoder/layer_11/output/dense/kernel ['generator', 'encoder', 'layer_11', 'output', 'dense', 'kernel'] 'ElectraForPreTraining' object has no attribute 'generator'
Skipping generator/encoder/layer_2/attention/output/LayerNorm/beta ['generator', 'encoder', 'layer_2', 'attention', 'output', 'LayerNorm', 'beta'] 'ElectraForPreTraining' object has no attribute 'generator'
Skipping generator/encoder/layer_2/attention/output/LayerNorm/gamma ['generator', 'encoder', 'layer_2', 'attention', 'output', 'LayerNorm', 'gamma'] 'ElectraForPreTraining' object has no attribute 'generator'
Skipping generator/encoder/layer_2/attention/output/dense/bias ['generator', 'encoder', 'layer_2', 'attention', 'output', 'dense', 'bias'] 'ElectraForPreTraining' object has no attribute 'generator'
Skipping generator/encoder/layer_2/attention/output/dense/kernel ['generator', 'encoder', 'layer_2', 'attention', 'output', 'dense', 'kernel'] 'ElectraForPreTraining' object has no attribute 'generator'
Skipping generator/encoder/layer_2/attention/self/key/bias ['generator', 'encoder', 'layer_2', 'attention', 'self', 'key', 'bias'] 'ElectraForPreTraining' object has no attribute 'generator'
Skipping generator/encoder/layer_2/attention/self/key/kernel ['generator', 'encoder', 'layer_2', 'attention', 'self', 'key', 'kernel'] 'ElectraForPreTraining' object has no attribute 'generator'
Skipping generator/encoder/layer_2/attention/self/query/bias ['generator', 'encoder', 'layer_2', 'attention', 'self', 'query', 'bias'] 'ElectraForPreTraining' object has no attribute 'generator'
Skipping generator/encoder/layer_2/attention/self/query/kernel ['generator', 'encoder', 'layer_2', 'attention', 'self', 'query', 'kernel'] 'ElectraForPreTraining' object has no attribute 'generator'
Skipping generator/encoder/layer_2/attention/self/value/bias ['generator', 'encoder', 'layer_2', 'attention', 'self', 'value', 'bias'] 'ElectraForPreTraining' object has no attribute 'generator'
Skipping generator/encoder/layer_2/attention/self/value/kernel ['generator', 'encoder', 'layer_2', 'attention', 'self', 'value', 'kernel'] 'ElectraForPreTraining' object has no attribute 'generator'
Skipping generator/encoder/layer_2/intermediate/dense/bias ['generator', 'encoder', 'layer_2', 'intermediate', 'dense', 'bias'] 'ElectraForPreTraining' object has no attribute 'generator'
Skipping generator/encoder/layer_2/intermediate/dense/kernel ['generator', 'encoder', 'layer_2', 'intermediate', 'dense', 'kernel'] 'ElectraForPreTraining' object has no attribute 'generator'
Skipping generator/encoder/layer_2/output/LayerNorm/beta ['generator', 'encoder', 'layer_2', 'output', 'LayerNorm', 'beta'] 'ElectraForPreTraining' object has no attribute 'generator'
Skipping generator/encoder/layer_2/output/LayerNorm/gamma ['generator', 'encoder', 'layer_2', 'output', 'LayerNorm', 'gamma'] 'ElectraForPreTraining' object has no attribute 'generator'
Skipping generator/encoder/layer_2/output/dense/bias ['generator', 'encoder', 'layer_2', 'output', 'dense', 'bias'] 'ElectraForPreTraining' object has no attribute 'generator'
Skipping generator/encoder/layer_2/output/dense/kernel ['generator', 'encoder', 'layer_2', 'output', 'dense', 'kernel'] 'ElectraForPreTraining' object has no attribute 'generator'
Skipping generator/encoder/layer_3/attention/output/LayerNorm/beta ['generator', 'encoder', 'layer_3', 'attention', 'output', 'LayerNorm', 'beta'] 'ElectraForPreTraining' object has no attribute 'generator'
Skipping generator/encoder/layer_3/attention/output/LayerNorm/gamma ['generator', 'encoder', 'layer_3', 'attention', 'output', 'LayerNorm', 'gamma'] 'ElectraForPreTraining' object has no attribute 'generator'
Skipping generator/encoder/layer_3/attention/output/dense/bias ['generator', 'encoder', 'layer_3', 'attention', 'output', 'dense', 'bias'] 'ElectraForPreTraining' object has no attribute 'generator'
Skipping generator/encoder/layer_3/attention/output/dense/kernel ['generator', 'encoder', 'layer_3', 'attention', 'output', 'dense', 'kernel'] 'ElectraForPreTraining' object has no attribute 'generator'
Skipping generator/encoder/layer_3/attention/self/key/bias ['generator', 'encoder', 'layer_3', 'attention', 'self', 'key', 'bias'] 'ElectraForPreTraining' object has no attribute 'generator'
Skipping generator/encoder/layer_3/attention/self/key/kernel ['generator', 'encoder', 'layer_3', 'attention', 'self', 'key', 'kernel'] 'ElectraForPreTraining' object has no attribute 'generator'
Skipping generator/encoder/layer_3/attention/self/query/bias ['generator', 'encoder', 'layer_3', 'attention', 'self', 'query', 'bias'] 'ElectraForPreTraining' object has no attribute 'generator'
Skipping generator/encoder/layer_3/attention/self/query/kernel ['generator', 'encoder', 'layer_3', 'attention', 'self', 'query', 'kernel'] 'ElectraForPreTraining' object has no attribute 'generator'
Skipping generator/encoder/layer_3/attention/self/value/bias ['generator', 'encoder', 'layer_3', 'attention', 'self', 'value', 'bias'] 'ElectraForPreTraining' object has no attribute 'generator'
Skipping generator/encoder/layer_3/attention/self/value/kernel ['generator', 'encoder', 'layer_3', 'attention', 'self', 'value', 'kernel'] 'ElectraForPreTraining' object has no attribute 'generator'
Skipping generator/encoder/layer_3/intermediate/dense/bias ['generator', 'encoder', 'layer_3', 'intermediate', 'dense', 'bias'] 'ElectraForPreTraining' object has no attribute 'generator'
Skipping generator/encoder/layer_3/intermediate/dense/kernel ['generator', 'encoder', 'layer_3', 'intermediate', 'dense', 'kernel'] 'ElectraForPreTraining' object has no attribute 'generator'
Skipping generator/encoder/layer_3/output/LayerNorm/beta ['generator', 'encoder', 'layer_3', 'output', 'LayerNorm', 'beta'] 'ElectraForPreTraining' object has no attribute 'generator'
Skipping generator/encoder/layer_3/output/LayerNorm/gamma ['generator', 'encoder', 'layer_3', 'output', 'LayerNorm', 'gamma'] 'ElectraForPreTraining' object has no attribute 'generator'
Skipping generator/encoder/layer_3/output/dense/bias ['generator', 'encoder', 'layer_3', 'output', 'dense', 'bias'] 'ElectraForPreTraining' object has no attribute 'generator'
Skipping generator/encoder/layer_3/output/dense/kernel ['generator', 'encoder', 'layer_3', 'output', 'dense', 'kernel'] 'ElectraForPreTraining' object has no attribute 'generator'
Skipping generator/encoder/layer_4/attention/output/LayerNorm/beta ['generator', 'encoder', 'layer_4', 'attention', 'output', 'LayerNorm', 'beta'] 'ElectraForPreTraining' object has no attribute 'generator'
Skipping generator/encoder/layer_4/attention/output/LayerNorm/gamma ['generator', 'encoder', 'layer_4', 'attention', 'output', 'LayerNorm', 'gamma'] 'ElectraForPreTraining' object has no attribute 'generator'
Skipping generator/encoder/layer_4/attention/output/dense/bias ['generator', 'encoder', 'layer_4', 'attention', 'output', 'dense', 'bias'] 'ElectraForPreTraining' object has no attribute 'generator'
Skipping generator/encoder/layer_4/attention/output/dense/kernel ['generator', 'encoder', 'layer_4', 'attention', 'output', 'dense', 'kernel'] 'ElectraForPreTraining' object has no attribute 'generator'
Skipping generator/encoder/layer_4/attention/self/key/bias ['generator', 'encoder', 'layer_4', 'attention', 'self', 'key', 'bias'] 'ElectraForPreTraining' object has no attribute 'generator'
Skipping generator/encoder/layer_4/attention/self/key/kernel ['generator', 'encoder', 'layer_4', 'attention', 'self', 'key', 'kernel'] 'ElectraForPreTraining' object has no attribute 'generator'
Skipping generator/encoder/layer_4/attention/self/query/bias ['generator', 'encoder', 'layer_4', 'attention', 'self', 'query', 'bias'] 'ElectraForPreTraining' object has no attribute 'generator'
Skipping generator/encoder/layer_4/attention/self/query/kernel ['generator', 'encoder', 'layer_4', 'attention', 'self', 'query', 'kernel'] 'ElectraForPreTraining' object has no attribute 'generator'
Skipping generator/encoder/layer_4/attention/self/value/bias ['generator', 'encoder', 'layer_4', 'attention', 'self', 'value', 'bias'] 'ElectraForPreTraining' object has no attribute 'generator'
Skipping generator/encoder/layer_4/attention/self/value/kernel ['generator', 'encoder', 'layer_4', 'attention', 'self', 'value', 'kernel'] 'ElectraForPreTraining' object has no attribute 'generator'
Skipping generator/encoder/layer_4/intermediate/dense/bias ['generator', 'encoder', 'layer_4', 'intermediate', 'dense', 'bias'] 'ElectraForPreTraining' object has no attribute 'generator'
Skipping generator/encoder/layer_4/intermediate/dense/kernel ['generator', 'encoder', 'layer_4', 'intermediate', 'dense', 'kernel'] 'ElectraForPreTraining' object has no attribute 'generator'
Skipping generator/encoder/layer_4/output/LayerNorm/beta ['generator', 'encoder', 'layer_4', 'output', 'LayerNorm', 'beta'] 'ElectraForPreTraining' object has no attribute 'generator'
Skipping generator/encoder/layer_4/output/LayerNorm/gamma ['generator', 'encoder', 'layer_4', 'output', 'LayerNorm', 'gamma'] 'ElectraForPreTraining' object has no attribute 'generator'
Skipping generator/encoder/layer_4/output/dense/bias ['generator', 'encoder', 'layer_4', 'output', 'dense', 'bias'] 'ElectraForPreTraining' object has no attribute 'generator'
Skipping generator/encoder/layer_4/output/dense/kernel ['generator', 'encoder', 'layer_4', 'output', 'dense', 'kernel'] 'ElectraForPreTraining' object has no attribute 'generator'
Skipping generator/encoder/layer_5/attention/output/LayerNorm/beta ['generator', 'encoder', 'layer_5', 'attention', 'output', 'LayerNorm', 'beta'] 'ElectraForPreTraining' object has no attribute 'generator'
Skipping generator/encoder/layer_5/attention/output/LayerNorm/gamma ['generator', 'encoder', 'layer_5', 'attention', 'output', 'LayerNorm', 'gamma'] 'ElectraForPreTraining' object has no attribute 'generator'
Skipping generator/encoder/layer_5/attention/output/dense/bias ['generator', 'encoder', 'layer_5', 'attention', 'output', 'dense', 'bias'] 'ElectraForPreTraining' object has no attribute 'generator'
Skipping generator/encoder/layer_5/attention/output/dense/kernel ['generator', 'encoder', 'layer_5', 'attention', 'output', 'dense', 'kernel'] 'ElectraForPreTraining' object has no attribute 'generator'
Skipping generator/encoder/layer_5/attention/self/key/bias ['generator', 'encoder', 'layer_5', 'attention', 'self', 'key', 'bias'] 'ElectraForPreTraining' object has no attribute 'generator'
Skipping generator/encoder/layer_5/attention/self/key/kernel ['generator', 'encoder', 'layer_5', 'attention', 'self', 'key', 'kernel'] 'ElectraForPreTraining' object has no attribute 'generator'
Skipping generator/encoder/layer_5/attention/self/query/bias ['generator', 'encoder', 'layer_5', 'attention', 'self', 'query', 'bias'] 'ElectraForPreTraining' object has no attribute 'generator'
Skipping generator/encoder/layer_5/attention/self/query/kernel ['generator', 'encoder', 'layer_5', 'attention', 'self', 'query', 'kernel'] 'ElectraForPreTraining' object has no attribute 'generator'
Skipping generator/encoder/layer_5/attention/self/value/bias ['generator', 'encoder', 'layer_5', 'attention', 'self', 'value', 'bias'] 'ElectraForPreTraining' object has no attribute 'generator'
Skipping generator/encoder/layer_5/attention/self/value/kernel ['generator', 'encoder', 'layer_5', 'attention', 'self', 'value', 'kernel'] 'ElectraForPreTraining' object has no attribute 'generator'
Skipping generator/encoder/layer_5/intermediate/dense/bias ['generator', 'encoder', 'layer_5', 'intermediate', 'dense', 'bias'] 'ElectraForPreTraining' object has no attribute 'generator'
Skipping generator/encoder/layer_5/intermediate/dense/kernel ['generator', 'encoder', 'layer_5', 'intermediate', 'dense', 'kernel'] 'ElectraForPreTraining' object has no attribute 'generator'
Skipping generator/encoder/layer_5/output/LayerNorm/beta ['generator', 'encoder', 'layer_5', 'output', 'LayerNorm', 'beta'] 'ElectraForPreTraining' object has no attribute 'generator'
Skipping generator/encoder/layer_5/output/LayerNorm/gamma ['generator', 'encoder', 'layer_5', 'output', 'LayerNorm', 'gamma'] 'ElectraForPreTraining' object has no attribute 'generator'
Skipping generator/encoder/layer_5/output/dense/bias ['generator', 'encoder', 'layer_5', 'output', 'dense', 'bias'] 'ElectraForPreTraining' object has no attribute 'generator'
Skipping generator/encoder/layer_5/output/dense/kernel ['generator', 'encoder', 'layer_5', 'output', 'dense', 'kernel'] 'ElectraForPreTraining' object has no attribute 'generator'
Skipping generator/encoder/layer_6/attention/output/LayerNorm/beta ['generator', 'encoder', 'layer_6', 'attention', 'output', 'LayerNorm', 'beta'] 'ElectraForPreTraining' object has no attribute 'generator'
Skipping generator/encoder/layer_6/attention/output/LayerNorm/gamma ['generator', 'encoder', 'layer_6', 'attention', 'output', 'LayerNorm', 'gamma'] 'ElectraForPreTraining' object has no attribute 'generator'
Skipping generator/encoder/layer_6/attention/output/dense/bias ['generator', 'encoder', 'layer_6', 'attention', 'output', 'dense', 'bias'] 'ElectraForPreTraining' object has no attribute 'generator'
Skipping generator/encoder/layer_6/attention/output/dense/kernel ['generator', 'encoder', 'layer_6', 'attention', 'output', 'dense', 'kernel'] 'ElectraForPreTraining' object has no attribute 'generator'
Skipping generator/encoder/layer_6/attention/self/key/bias ['generator', 'encoder', 'layer_6', 'attention', 'self', 'key', 'bias'] 'ElectraForPreTraining' object has no attribute 'generator'
Skipping generator/encoder/layer_6/attention/self/key/kernel ['generator', 'encoder', 'layer_6', 'attention', 'self', 'key', 'kernel'] 'ElectraForPreTraining' object has no attribute 'generator'
Skipping generator/encoder/layer_6/attention/self/query/bias ['generator', 'encoder', 'layer_6', 'attention', 'self', 'query', 'bias'] 'ElectraForPreTraining' object has no attribute 'generator'
Skipping generator/encoder/layer_6/attention/self/query/kernel ['generator', 'encoder', 'layer_6', 'attention', 'self', 'query', 'kernel'] 'ElectraForPreTraining' object has no attribute 'generator'
Skipping generator/encoder/layer_6/attention/self/value/bias ['generator', 'encoder', 'layer_6', 'attention', 'self', 'value', 'bias'] 'ElectraForPreTraining' object has no attribute 'generator'
Skipping generator/encoder/layer_6/attention/self/value/kernel ['generator', 'encoder', 'layer_6', 'attention', 'self', 'value', 'kernel'] 'ElectraForPreTraining' object has no attribute 'generator'
Skipping generator/encoder/layer_6/intermediate/dense/bias ['generator', 'encoder', 'layer_6', 'intermediate', 'dense', 'bias'] 'ElectraForPreTraining' object has no attribute 'generator'
Skipping generator/encoder/layer_6/intermediate/dense/kernel ['generator', 'encoder', 'layer_6', 'intermediate', 'dense', 'kernel'] 'ElectraForPreTraining' object has no attribute 'generator'
Skipping generator/encoder/layer_6/output/LayerNorm/beta ['generator', 'encoder', 'layer_6', 'output', 'LayerNorm', 'beta'] 'ElectraForPreTraining' object has no attribute 'generator'
Skipping generator/encoder/layer_6/output/LayerNorm/gamma ['generator', 'encoder', 'layer_6', 'output', 'LayerNorm', 'gamma'] 'ElectraForPreTraining' object has no attribute 'generator'
Skipping generator/encoder/layer_6/output/dense/bias ['generator', 'encoder', 'layer_6', 'output', 'dense', 'bias'] 'ElectraForPreTraining' object has no attribute 'generator'
Skipping generator/encoder/layer_6/output/dense/kernel ['generator', 'encoder', 'layer_6', 'output', 'dense', 'kernel'] 'ElectraForPreTraining' object has no attribute 'generator'
Skipping generator/encoder/layer_7/attention/output/LayerNorm/beta ['generator', 'encoder', 'layer_7', 'attention', 'output', 'LayerNorm', 'beta'] 'ElectraForPreTraining' object has no attribute 'generator'
Skipping generator/encoder/layer_7/attention/output/LayerNorm/gamma ['generator', 'encoder', 'layer_7', 'attention', 'output', 'LayerNorm', 'gamma'] 'ElectraForPreTraining' object has no attribute 'generator'
Skipping generator/encoder/layer_7/attention/output/dense/bias ['generator', 'encoder', 'layer_7', 'attention', 'output', 'dense', 'bias'] 'ElectraForPreTraining' object has no attribute 'generator'
Skipping generator/encoder/layer_7/attention/output/dense/kernel ['generator', 'encoder', 'layer_7', 'attention', 'output', 'dense', 'kernel'] 'ElectraForPreTraining' object has no attribute 'generator'
Skipping generator/encoder/layer_7/attention/self/key/bias ['generator', 'encoder', 'layer_7', 'attention', 'self', 'key', 'bias'] 'ElectraForPreTraining' object has no attribute 'generator'
Skipping generator/encoder/layer_7/attention/self/key/kernel ['generator', 'encoder', 'layer_7', 'attention', 'self', 'key', 'kernel'] 'ElectraForPreTraining' object has no attribute 'generator'
Skipping generator/encoder/layer_7/attention/self/query/bias ['generator', 'encoder', 'layer_7', 'attention', 'self', 'query', 'bias'] 'ElectraForPreTraining' object has no attribute 'generator'
Skipping generator/encoder/layer_7/attention/self/query/kernel ['generator', 'encoder', 'layer_7', 'attention', 'self', 'query', 'kernel'] 'ElectraForPreTraining' object has no attribute 'generator'
Skipping generator/encoder/layer_7/attention/self/value/bias ['generator', 'encoder', 'layer_7', 'attention', 'self', 'value', 'bias'] 'ElectraForPreTraining' object has no attribute 'generator'
Skipping generator/encoder/layer_7/attention/self/value/kernel ['generator', 'encoder', 'layer_7', 'attention', 'self', 'value', 'kernel'] 'ElectraForPreTraining' object has no attribute 'generator'
Skipping generator/encoder/layer_7/intermediate/dense/bias ['generator', 'encoder', 'layer_7', 'intermediate', 'dense', 'bias'] 'ElectraForPreTraining' object has no attribute 'generator'
Skipping generator/encoder/layer_7/intermediate/dense/kernel ['generator', 'encoder', 'layer_7', 'intermediate', 'dense', 'kernel'] 'ElectraForPreTraining' object has no attribute 'generator'
Skipping generator/encoder/layer_7/output/LayerNorm/beta ['generator', 'encoder', 'layer_7', 'output', 'LayerNorm', 'beta'] 'ElectraForPreTraining' object has no attribute 'generator'
Skipping generator/encoder/layer_7/output/LayerNorm/gamma ['generator', 'encoder', 'layer_7', 'output', 'LayerNorm', 'gamma'] 'ElectraForPreTraining' object has no attribute 'generator'
Skipping generator/encoder/layer_7/output/dense/bias ['generator', 'encoder', 'layer_7', 'output', 'dense', 'bias'] 'ElectraForPreTraining' object has no attribute 'generator'
Skipping generator/encoder/layer_7/output/dense/kernel ['generator', 'encoder', 'layer_7', 'output', 'dense', 'kernel'] 'ElectraForPreTraining' object has no attribute 'generator'
Skipping generator/encoder/layer_8/attention/output/LayerNorm/beta ['generator', 'encoder', 'layer_8', 'attention', 'output', 'LayerNorm', 'beta'] 'ElectraForPreTraining' object has no attribute 'generator'
Skipping generator/encoder/layer_8/attention/output/LayerNorm/gamma ['generator', 'encoder', 'layer_8', 'attention', 'output', 'LayerNorm', 'gamma'] 'ElectraForPreTraining' object has no attribute 'generator'
Skipping generator/encoder/layer_8/attention/output/dense/bias ['generator', 'encoder', 'layer_8', 'attention', 'output', 'dense', 'bias'] 'ElectraForPreTraining' object has no attribute 'generator'
Skipping generator/encoder/layer_8/attention/output/dense/kernel ['generator', 'encoder', 'layer_8', 'attention', 'output', 'dense', 'kernel'] 'ElectraForPreTraining' object has no attribute 'generator'
Skipping generator/encoder/layer_8/attention/self/key/bias ['generator', 'encoder', 'layer_8', 'attention', 'self', 'key', 'bias'] 'ElectraForPreTraining' object has no attribute 'generator'
Skipping generator/encoder/layer_8/attention/self/key/kernel ['generator', 'encoder', 'layer_8', 'attention', 'self', 'key', 'kernel'] 'ElectraForPreTraining' object has no attribute 'generator'
Skipping generator/encoder/layer_8/attention/self/query/bias ['generator', 'encoder', 'layer_8', 'attention', 'self', 'query', 'bias'] 'ElectraForPreTraining' object has no attribute 'generator'
Skipping generator/encoder/layer_8/attention/self/query/kernel ['generator', 'encoder', 'layer_8', 'attention', 'self', 'query', 'kernel'] 'ElectraForPreTraining' object has no attribute 'generator'
Skipping generator/encoder/layer_8/attention/self/value/bias ['generator', 'encoder', 'layer_8', 'attention', 'self', 'value', 'bias'] 'ElectraForPreTraining' object has no attribute 'generator'
Skipping generator/encoder/layer_8/attention/self/value/kernel ['generator', 'encoder', 'layer_8', 'attention', 'self', 'value', 'kernel'] 'ElectraForPreTraining' object has no attribute 'generator'
Skipping generator/encoder/layer_8/intermediate/dense/bias ['generator', 'encoder', 'layer_8', 'intermediate', 'dense', 'bias'] 'ElectraForPreTraining' object has no attribute 'generator'
Skipping generator/encoder/layer_8/intermediate/dense/kernel ['generator', 'encoder', 'layer_8', 'intermediate', 'dense', 'kernel'] 'ElectraForPreTraining' object has no attribute 'generator'
Skipping generator/encoder/layer_8/output/LayerNorm/beta ['generator', 'encoder', 'layer_8', 'output', 'LayerNorm', 'beta'] 'ElectraForPreTraining' object has no attribute 'generator'
Skipping generator/encoder/layer_8/output/LayerNorm/gamma ['generator', 'encoder', 'layer_8', 'output', 'LayerNorm', 'gamma'] 'ElectraForPreTraining' object has no attribute 'generator'
Skipping generator/encoder/layer_8/output/dense/bias ['generator', 'encoder', 'layer_8', 'output', 'dense', 'bias'] 'ElectraForPreTraining' object has no attribute 'generator'
Skipping generator/encoder/layer_8/output/dense/kernel ['generator', 'encoder', 'layer_8', 'output', 'dense', 'kernel'] 'ElectraForPreTraining' object has no attribute 'generator'
Skipping generator/encoder/layer_9/attention/output/LayerNorm/beta ['generator', 'encoder', 'layer_9', 'attention', 'output', 'LayerNorm', 'beta'] 'ElectraForPreTraining' object has no attribute 'generator'
Skipping generator/encoder/layer_9/attention/output/LayerNorm/gamma ['generator', 'encoder', 'layer_9', 'attention', 'output', 'LayerNorm', 'gamma'] 'ElectraForPreTraining' object has no attribute 'generator'
Skipping generator/encoder/layer_9/attention/output/dense/bias ['generator', 'encoder', 'layer_9', 'attention', 'output', 'dense', 'bias'] 'ElectraForPreTraining' object has no attribute 'generator'
Skipping generator/encoder/layer_9/attention/output/dense/kernel ['generator', 'encoder', 'layer_9', 'attention', 'output', 'dense', 'kernel'] 'ElectraForPreTraining' object has no attribute 'generator'
Skipping generator/encoder/layer_9/attention/self/key/bias ['generator', 'encoder', 'layer_9', 'attention', 'self', 'key', 'bias'] 'ElectraForPreTraining' object has no attribute 'generator'
Skipping generator/encoder/layer_9/attention/self/key/kernel ['generator', 'encoder', 'layer_9', 'attention', 'self', 'key', 'kernel'] 'ElectraForPreTraining' object has no attribute 'generator'
Skipping generator/encoder/layer_9/attention/self/query/bias ['generator', 'encoder', 'layer_9', 'attention', 'self', 'query', 'bias'] 'ElectraForPreTraining' object has no attribute 'generator'
Skipping generator/encoder/layer_9/attention/self/query/kernel ['generator', 'encoder', 'layer_9', 'attention', 'self', 'query', 'kernel'] 'ElectraForPreTraining' object has no attribute 'generator'
Skipping generator/encoder/layer_9/attention/self/value/bias ['generator', 'encoder', 'layer_9', 'attention', 'self', 'value', 'bias'] 'ElectraForPreTraining' object has no attribute 'generator'
Skipping generator/encoder/layer_9/attention/self/value/kernel ['generator', 'encoder', 'layer_9', 'attention', 'self', 'value', 'kernel'] 'ElectraForPreTraining' object has no attribute 'generator'
Skipping generator/encoder/layer_9/intermediate/dense/bias ['generator', 'encoder', 'layer_9', 'intermediate', 'dense', 'bias'] 'ElectraForPreTraining' object has no attribute 'generator'
Skipping generator/encoder/layer_9/intermediate/dense/kernel ['generator', 'encoder', 'layer_9', 'intermediate', 'dense', 'kernel'] 'ElectraForPreTraining' object has no attribute 'generator'
Skipping generator/encoder/layer_9/output/LayerNorm/beta ['generator', 'encoder', 'layer_9', 'output', 'LayerNorm', 'beta'] 'ElectraForPreTraining' object has no attribute 'generator'
Skipping generator/encoder/layer_9/output/LayerNorm/gamma ['generator', 'encoder', 'layer_9', 'output', 'LayerNorm', 'gamma'] 'ElectraForPreTraining' object has no attribute 'generator'
Skipping generator/encoder/layer_9/output/dense/bias ['generator', 'encoder', 'layer_9', 'output', 'dense', 'bias'] 'ElectraForPreTraining' object has no attribute 'generator'
Skipping generator/encoder/layer_9/output/dense/kernel ['generator', 'encoder', 'layer_9', 'output', 'dense', 'kernel'] 'ElectraForPreTraining' object has no attribute 'generator'
Skipping generator_predictions/LayerNorm/beta ['generator_predictions', 'LayerNorm', 'beta'] 'ElectraForPreTraining' object has no attribute 'generator_predictions'
Skipping generator_predictions/LayerNorm/gamma ['generator_predictions', 'LayerNorm', 'gamma'] 'ElectraForPreTraining' object has no attribute 'generator_predictions'
Skipping generator_predictions/dense/bias ['generator_predictions', 'dense', 'bias'] 'ElectraForPreTraining' object has no attribute 'generator_predictions'
Skipping generator_predictions/dense/kernel ['generator_predictions', 'dense', 'kernel'] 'ElectraForPreTraining' object has no attribute 'generator_predictions'
Skipping generator_predictions/output_bias ['generator_lm_head', 'bias'] 'ElectraForPreTraining' object has no attribute 'generator_lm_head'`
It seems OK, since Google's checkpoint contains both the generator and the discriminator. However, as soon as I try to make a prediction (e.g. "I love reading [MASK]."), the top-5 most likely words are:
- ᵃ
- fulfilled
- sal
- 1809
- drank
which is pretty random, I guess.
On the other hand, as soon as I initialise the ElectraForMaskedLM model directly from https://huggingface.co/google/electra-small-generator , everything works fantastically!
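For reference, this is the kind of direct load that works for me (just a minimal sketch of what I mean, not the failing conversion path):
```python
# Loading the published generator checkpoint directly gives sensible [MASK]
# predictions, unlike my converted checkpoint.
from transformers import ElectraForMaskedLM, ElectraTokenizer, FillMaskPipeline

tokenizer = ElectraTokenizer.from_pretrained("google/electra-small-generator")
model = ElectraForMaskedLM.from_pretrained("google/electra-small-generator")
fill_mask = FillMaskPipeline(model=model, tokenizer=tokenizer)
print(fill_mask("I love reading [MASK]."))
```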
So my hypothesis is that there is a bug in the checkpoint conversion to the HF format. Can anybody tell me how I can load my own checkpoint (or at least Google's, to check whether the whole thing works correctly)?
## To reproduce
Steps to reproduce the behavior:
1. Download the official ELECTRA-small checkpoint
2. Try to run CLI script to convert the TF checkpoint to HF .bin model
3. Run classical prediction and see top-5 words (OR import HF pipeline and run it in "fill-mask" mode)
You will see that the model from HF web works correctly whereas the model from Google's GitHub gives random tokens.
## Expected behavior
I expected to see that the model is capable of making basic predictions, so that I know that it has been restored and reformatted correctly.
| 09-04-2020 13:58:28 | 09-04-2020 13:58:28 | > Who can help
> Seems like nobody
Haha hopefully we can help you with that :-)
Pinging our ELECTRA master @LysandreJik <|||||>Hello! Indeed I think I can help you :)
I don't think I'm seeing what happened with the first case? Your first option was correct, you should use the conversion script. This should create a directory in which there is a `pytorch_model.bin` and a `config.json`.
However, you should note that ELECTRA contains both a *discriminator* and a *generator*. Only the generator may be used for MLM, as the discriminator is trained with the ELECTRA objective and would output gibberish if used for MLM.
If you used the script with the option `--discriminator_or_generator=discriminator`, then you should load your checkpoint in `ElectraForPreTraining`. If you used the script with the option `--discriminator_or_generator=generator`, then you can load your checkpoint in `ElectraForMaskedLM` and should see sensible output when using it for MLM tasks.<|||||>@LysandreJik Thank you for your reply! As I've said, I used the conversion script and specified generator as needed for the MLM. However, I still get gibberish results as shown above. That's why I guess there is a bug in the way how the weights are transferred from Tensorflow checkpoints to HF model. <|||||>Oh, okay. Let me check.<|||||>I just did the exact following steps and got it to work:
```bash
# Link from the official google repo
wget https://storage.googleapis.com/electra-data/electra_small.zip
unzip electra_small.zip
cd electra_small
# If you're converting a different model you should make your own config.json file
wget https://s3.amazonaws.com/models.huggingface.co/bert/google/electra-small-generator/config.json
# Use the conversion script
python transformers/src/transformers/convert_electra_original_tf_checkpoint_to_pytorch.py \
--tf_checkpoint_path=electra_small/electra_small \
--config_file=electra_small/config.json \
--pytorch_dump_path=electra_small/pytorch_model.bin \
--discriminator_or_generator=generator
```
The last command outputs:
```
[...]
Initialize PyTorch weight ['generator_predictions', 'LayerNorm', 'beta'] generator_predictions/LayerNorm/beta
Initialize PyTorch weight ['generator_predictions', 'LayerNorm', 'gamma'] generator_predictions/LayerNorm/gamma
Initialize PyTorch weight ['generator_predictions', 'dense', 'bias'] generator_predictions/dense/bias
Initialize PyTorch weight ['generator_predictions', 'dense', 'kernel'] generator_predictions/dense/kernel
Initialize PyTorch weight ['generator_lm_head', 'bias'] generator_predictions/output_bias
Skipping generator_predictions/temperature
Skipping global_step
Save PyTorch model to electra_small/pytorch_model.bin
```
You can then load the model you exported:
```py
from transformers import FillMaskPipeline, ElectraForMaskedLM, ElectraTokenizer
fill_mask = FillMaskPipeline(
model=ElectraForMaskedLM.from_pretrained("/path/to/model/and/config/electra_small"),
tokenizer=ElectraTokenizer.from_pretrained("google/electra-small-generator")
)
print(fill_mask("Filling the blanks using a pipeline is an [MASK] thing to do."))
```
which returns
```
[{'sequence': '[CLS] filling the blanks using a pipeline is an easy thing to do. [SEP]',
'score': 0.8874430060386658,
'token': 3733,
'token_str': 'easy'},
{'sequence': '[CLS] filling the blanks using a pipeline is an easier thing to do. [SEP]',
'score': 0.024068119004368782,
'token': 6082,
'token_str': 'easier'},
{'sequence': '[CLS] filling the blanks using a pipeline is an interesting thing to do. [SEP]',
'score': 0.016776252537965775,
'token': 5875,
'token_str': 'interesting'},
{'sequence': '[CLS] filling the blanks using a pipeline is an important thing to do. [SEP]',
'score': 0.014077582396566868,
'token': 2590,
'token_str': 'important'},
{'sequence': '[CLS] filling the blanks using a pipeline is an expensive thing to do. [SEP]',
'score': 0.012089359574019909,
'token': 6450,
'token_str': 'expensive'}]
```<|||||>Closing as the issue is resolved. |
transformers | 6,944 | closed | Finetuning XLM-Roberta-2XLM-Roberta on custom dataset gives the following error: | `Evaluation: 100% 30/30 [00:45<00:00, 1.53s/it]
[libprotobuf FATAL /sentencepiece/src/../third_party/protobuf-lite/google/protobuf/repeated_field.h:1505] CHECK failed: (index) >= (0):
terminate called after throwing an instance of 'google::protobuf::FatalException'
what(): CHECK failed: (index) >= (0): `
I am using the following script: https://huggingface.co/patrickvonplaten/bert2bert-cnn_dailymail-fp16
Appreciate any help. Thank you. | 09-04-2020 09:14:14 | 09-04-2020 09:14:14 | This looks like an issue with tokenizer in evaluation.
If you don't need generative metrics during training, then set `predict_from_generate` to `False` and don't pass a `compute_metrics` function to the `Trainer`.
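A rough sketch of what I mean (this assumes the experimental branch's `predict_from_generate` flag; the datasets below are placeholders, not your actual setup):
```python
# Hypothetical sketch - `predict_from_generate` only exists on the experimental
# branch discussed here; the datasets are placeholders you would replace.
from transformers import EncoderDecoderModel, Trainer, TrainingArguments

model = EncoderDecoderModel.from_encoder_decoder_pretrained(
    "xlm-roberta-base", "xlm-roberta-base"
)

training_args = TrainingArguments(output_dir="./outputs", do_train=True, do_eval=True)
training_args.predict_from_generate = False  # skip generation during evaluation

trainer = Trainer(
    model=model,
    args=training_args,
    train_dataset=train_dataset,  # placeholder
    eval_dataset=eval_dataset,    # placeholder
    # no compute_metrics passed -> evaluation only reports the loss
)
```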
pinging @patrickvonplaten <|||||>Hi @patil-suraj ,
Did exactly that. This issue didn't show up. But now all generations on the test set are the same regardless of the input.<|||||>Hey @laibamehnaz,
Could you copy paste a working code example, so that we can reproduce the error? :-) Note that you have to use a different tokenizer than the one that is used in `bert2bert-cnn_dailymail-fp16`<|||||>Sure, you can see the code here: [https://colab.research.google.com/drive/1xxBQcPe05bFBQvJQLx6Mw2Qd9YVWPNS9?usp=sharing](url)<|||||>Hmm, I don't get protobuf error when running locally...could you maybe adapt the google colab so that I can get your error above by just clicking "run" :-) ? (Setting the correct transformer pip installs and the correct trainer params, etc...)<|||||>Oh I am so sorry, this script won't give you the error because I have set `predict_from_generate` as `False` and `prediction_loss_only` to `True`. <|||||>Sure, I will share the script.
> Hmm, I don't get protobuf error when running locally...could you maybe adapt the google colab so that I can get your error above by just clicking "run" :-) ? (Setting the correct transformer pip installs and the correct trainer params, etc...)
<|||||>Here you go:
[https://colab.research.google.com/drive/1xxBQcPe05bFBQvJQLx6Mw2Qd9YVWPNS9?usp=sharing](url)
> Hmm, I don't get protobuf error when running locally...could you maybe adapt the google colab so that I can get your error above by just clicking "run" :-) ? (Setting the correct transformer pip installs and the correct trainer params, etc...)
<|||||>I'm sorry the colab does not work for me...I get install errors when running the second cell. Let me merge the "more_general_trainer_metric" PR into master next week and then we can work directly on master.<|||||>Hi @patrickvonplaten , I have fixed the issue in the second cell.
[https://colab.research.google.com/drive/1xxBQcPe05bFBQvJQLx6Mw2Qd9YVWPNS9?usp=sharing](url)<|||||>> I'm sorry the colab does not work for me...I get install errors when running the second cell. Let me merge the "more_general_trainer_metric" PR into master next week and then we can work directly on master.
Sure, that will be great!!<|||||>@patrickvonplaten These types of metrics are included in #6769, and it's almost ready to merge.
How about we support `EncoderDecoder` models in `examples/seq2seq` ?
@sshleifer does that make sense ?<|||||>Depends how much complexity it adds, but on a high level I like that idea a lot!<|||||>There is a more pressing issue of getting incremental decoding/use cache working for Roberta that I would probably prioritize higher.<|||||>@patil-suraj @sshleifer - I like the idea of adding `EncoderDecoder` to your `Seq2SeqTrainer` a lot. This way I won't have to continue my hacky PR here: https://github.com/huggingface/transformers/pull/5840. I would place this actually as more important since people need to be able to quickly fine-tune any `EncoderDecoderModel`.
@patil-suraj - After merging your PR, I'd be happy to work on adding `EncoderDecoder` to the Seq2Seq Trainer.<|||||>This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.
|
transformers | 6,943 | closed | Transformer-XL: Remove unused/unnecessary Parameters | # 🚀 Feature request
The configuration parameters `tgt_len` and `ext_len` in the Transformer-XL implementation are not used anywhere in the source code. I would recommend removing them, since they only confuse users of the model(s).
## Motivation
As already mentioned, the unnecessary parameters are confusing and can likely be removed. The parameter `tgt_len` is determined from the input tensor anyway, and `ext_len` was only an experimental feature (see [issue from the original repo](https://github.com/kimiyoung/transformer-xl/issues/9)).
## Your contribution
I am happy to contribute and open a PR with the requested changes if that's also in your interest? | 09-04-2020 09:08:12 | 09-04-2020 09:08:12 | ping @TevenLeScao |
transformers | 6,942 | closed | Create Readme.MD for KanBERTo | KanBERTo language model readme for the Kannada language, which I trained by following your blog.
Fixes #{issue number}
| 09-04-2020 08:08:23 | 09-04-2020 08:08:23 | # [Codecov](https://codecov.io/gh/huggingface/transformers/pull/6942?src=pr&el=h1) Report
> Merging [#6942](https://codecov.io/gh/huggingface/transformers/pull/6942?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/e95d262f2553859af9bffbfe5f5bc7e362259939?el=desc) will **increase** coverage by `2.32%`.
> The diff coverage is `n/a`.
[](https://codecov.io/gh/huggingface/transformers/pull/6942?src=pr&el=tree)
```diff
@@ Coverage Diff @@
## master #6942 +/- ##
==========================================
+ Coverage 77.70% 80.02% +2.32%
==========================================
Files 161 161
Lines 30119 30119
==========================================
+ Hits 23403 24103 +700
+ Misses 6716 6016 -700
```
| [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/6942?src=pr&el=tree) | Coverage Δ | |
|---|---|---|
| [src/transformers/modeling\_tf\_electra.py](https://codecov.io/gh/huggingface/transformers/pull/6942/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9lbGVjdHJhLnB5) | `25.13% <0.00%> (-73.83%)` | :arrow_down: |
| [src/transformers/modeling\_tf\_mobilebert.py](https://codecov.io/gh/huggingface/transformers/pull/6942/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9tb2JpbGViZXJ0LnB5) | `24.55% <0.00%> (-72.36%)` | :arrow_down: |
| [src/transformers/modeling\_marian.py](https://codecov.io/gh/huggingface/transformers/pull/6942/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19tYXJpYW4ucHk=) | `60.00% <0.00%> (-30.00%)` | :arrow_down: |
| [src/transformers/modeling\_t5.py](https://codecov.io/gh/huggingface/transformers/pull/6942/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190NS5weQ==) | `76.70% <0.00%> (-6.07%)` | :arrow_down: |
| [src/transformers/activations.py](https://codecov.io/gh/huggingface/transformers/pull/6942/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9hY3RpdmF0aW9ucy5weQ==) | `85.00% <0.00%> (-5.00%)` | :arrow_down: |
| [src/transformers/modeling\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/6942/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ191dGlscy5weQ==) | `87.50% <0.00%> (-0.56%)` | :arrow_down: |
| [src/transformers/modeling\_bart.py](https://codecov.io/gh/huggingface/transformers/pull/6942/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19iYXJ0LnB5) | `93.90% <0.00%> (-0.51%)` | :arrow_down: |
| [src/transformers/data/data\_collator.py](https://codecov.io/gh/huggingface/transformers/pull/6942/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9kYXRhL2RhdGFfY29sbGF0b3IucHk=) | `92.30% <0.00%> (-0.41%)` | :arrow_down: |
| [src/transformers/tokenization\_utils\_base.py](https://codecov.io/gh/huggingface/transformers/pull/6942/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fdXRpbHNfYmFzZS5weQ==) | `93.76% <0.00%> (-0.14%)` | :arrow_down: |
| [src/transformers/modeling\_tf\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/6942/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl91dGlscy5weQ==) | `86.97% <0.00%> (+1.95%)` | :arrow_up: |
| ... and [7 more](https://codecov.io/gh/huggingface/transformers/pull/6942/diff?src=pr&el=tree-more) | |
------
[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/6942?src=pr&el=continue).
> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)
> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data`
> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/6942?src=pr&el=footer). Last update [e95d262...f75d263](https://codecov.io/gh/huggingface/transformers/pull/6942?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).
<|||||>please let me know how to mention language code (kn) also so that it will be easy to filter on website.<|||||>> please let me know how to mention language code (kn) also so that it will be easy to filter on website.
There you go!
Thanks for sharing<|||||>https://huggingface.co/models?filter=kn<|||||>>
>
> https://huggingface.co/models?filter=kn
Thanks a lot @julien-c 👍 . |
transformers | 6,941 | closed | match CI's version of flake8 | my flake8 wasn't up-to-date enough, so my system's `make quality` wasn't reporting the same things CI did - this PR adds the actual required version.
Thinking more about some of these minimal versions - CI will always install afresh and thus will always run the latest version. Is there a way to tell pip to always install the latest versions of certain dependencies on `pip install -e ".[dev]"`, rather than hardcoding minimal numbers that quickly become outdated?
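For illustration, the two options look roughly like this in a `setup.py` extras entry (a sketch only - the exact layout of this repo's `setup.py` and the version number shown are illustrative, not confirmed):
```python
# sketch of the trade-off: pin a minimum version that matches CI (this PR's
# approach) vs. leave the dependency unpinned so a fresh install always
# resolves to the newest release
extras_require = {
    "quality": [
        "flake8>=3.8.3",  # minimum version aligned with CI (version is illustrative)
        # "flake8",       # unpinned alternative: always installs the latest release
    ],
}
```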
| 09-04-2020 06:49:49 | 09-04-2020 06:49:49 | # [Codecov](https://codecov.io/gh/huggingface/transformers/pull/6941?src=pr&el=h1) Report
> Merging [#6941](https://codecov.io/gh/huggingface/transformers/pull/6941?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/e95d262f2553859af9bffbfe5f5bc7e362259939?el=desc) will **increase** coverage by `2.62%`.
> The diff coverage is `n/a`.
[](https://codecov.io/gh/huggingface/transformers/pull/6941?src=pr&el=tree)
```diff
@@ Coverage Diff @@
## master #6941 +/- ##
==========================================
+ Coverage 77.70% 80.33% +2.62%
==========================================
Files 161 161
Lines 30119 30119
==========================================
+ Hits 23403 24195 +792
+ Misses 6716 5924 -792
```
| [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/6941?src=pr&el=tree) | Coverage Δ | |
|---|---|---|
| [src/transformers/modeling\_tf\_albert.py](https://codecov.io/gh/huggingface/transformers/pull/6941/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9hbGJlcnQucHk=) | `21.47% <0.00%> (-69.44%)` | :arrow_down: |
| [src/transformers/tokenization\_bart.py](https://codecov.io/gh/huggingface/transformers/pull/6941/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fYmFydC5weQ==) | `42.10% <0.00%> (-57.90%)` | :arrow_down: |
| [src/transformers/tokenization\_ctrl.py](https://codecov.io/gh/huggingface/transformers/pull/6941/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fY3RybC5weQ==) | `78.64% <0.00%> (-17.48%)` | :arrow_down: |
| [src/transformers/tokenization\_xlm\_roberta.py](https://codecov.io/gh/huggingface/transformers/pull/6941/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25feGxtX3JvYmVydGEucHk=) | `84.52% <0.00%> (-10.72%)` | :arrow_down: |
| [src/transformers/modeling\_t5.py](https://codecov.io/gh/huggingface/transformers/pull/6941/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190NS5weQ==) | `76.70% <0.00%> (-6.07%)` | :arrow_down: |
| [src/transformers/tokenization\_dpr.py](https://codecov.io/gh/huggingface/transformers/pull/6941/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fZHByLnB5) | `53.15% <0.00%> (-4.51%)` | :arrow_down: |
| [src/transformers/modeling\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/6941/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ191dGlscy5weQ==) | `87.50% <0.00%> (-0.56%)` | :arrow_down: |
| [src/transformers/data/data\_collator.py](https://codecov.io/gh/huggingface/transformers/pull/6941/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9kYXRhL2RhdGFfY29sbGF0b3IucHk=) | `92.30% <0.00%> (-0.41%)` | :arrow_down: |
| [src/transformers/tokenization\_utils\_base.py](https://codecov.io/gh/huggingface/transformers/pull/6941/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fdXRpbHNfYmFzZS5weQ==) | `94.03% <0.00%> (+0.13%)` | :arrow_up: |
| [src/transformers/modeling\_tf\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/6941/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl91dGlscy5weQ==) | `87.29% <0.00%> (+2.28%)` | :arrow_up: |
| ... and [5 more](https://codecov.io/gh/huggingface/transformers/pull/6941/diff?src=pr&el=tree-more) | |
------
[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/6941?src=pr&el=continue).
> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)
> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data`
> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/6941?src=pr&el=footer). Last update [e95d262...a0fcde2](https://codecov.io/gh/huggingface/transformers/pull/6941?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).
|
transformers | 6,940 | closed | [ported model] FSMT (FairSeq MachineTranslation) | This PR implements the spec specified at https://github.com/huggingface/transformers/issues/5419
The new model is FSMT (aka FairSeqMachineTranslation): `FSMTForConditionalGeneration` which comes with 4 models:
* "facebook/wmt19-ru-en"
* "facebook/wmt19-en-ru"
* "facebook/wmt19-de-en"
* "facebook/wmt19-en-de"
This is a ported version of [fairseq wmt19 transformer](https://github.com/pytorch/fairseq/blob/master/examples/wmt19/README.md) which includes 3 languages and 4 pairs.
For more details on the original, please see [Facebook FAIR's WMT19 News Translation Task Submission](https://arxiv.org/abs/1907.06616).
**Huge, huge thanks to @sshleifer, who has been incredibly supportive of this very difficult, yet, fun learning experience! Thank you, Sam!**
**And many thanks to all those who wrote all the existing transformers code, so that I just needed to tweak a few things here and there, rather than write from scratch. And, last, but not least, to the fairseq developers, who have done the heavy lifting with the initial training and finetuning, and coding.**
The tokenizer is a tweaked XLM tokenizer and the model is a tweaked Bart model. There were too many differences to simply subclass either of the two - having two unmerged dictionaries of different sizes being the main cause - but there were quite a few other nuances as well; please see the porting notes in the code.
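(A quick way to see the two-dictionary point - just a sketch, and the exact config attribute names are my assumption about this implementation:)
```python
# sketch: FSMT keeps separate source and target vocabularies, which is the main
# reason a plain Bart subclass (single shared vocab) didn't fit
from transformers.configuration_fsmt import FSMTConfig

config = FSMTConfig.from_pretrained("facebook/wmt19-en-ru")
print(config.src_vocab_size, config.tgt_vocab_size)  # differently sized dictionaries
```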
There are a few more things to complete, in particular we currently don't have support for model ensemble, which is used by fairseq - they run eval on an ensemble of 4 model checkpoints. This implementation currently uses only the first checkpoint.
And then more work on matching fairseq outputs is needed - without beam search the outputs match perfectly, but with beam search there are some small differences - I was encouraged to release the model and continue working on improving it.
I'm still a few points behind on the BLEU score - most likely because fairseq evaluates an ensemble while this port uses a single checkpoint - but since I am not able to reproduce fairseq's reported scores, I'm not sure how to evaluate against a single model. See the [issue](https://github.com/pytorch/fairseq/issues/2544). I added the current and the expected scores in the model cards. If one of you has already started working on ensemble support, please let me know.
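(Until proper ensemble support exists, one thing that could be experimented with is plain checkpoint weight-averaging - note this is *not* the logit-level ensemble fairseq uses, just a rough, untested sketch with placeholder paths:)
```python
# sketch: average the weights of the 4 converted checkpoints into a single model;
# paths are placeholders and this only approximates a true ensemble
import torch

paths = [f"converted/model{i}/pytorch_model.bin" for i in range(1, 5)]
states = [torch.load(p, map_location="cpu") for p in paths]

avg = {}
for key, tensor in states[0].items():
    if torch.is_floating_point(tensor):
        avg[key] = sum(sd[key] for sd in states) / len(states)
    else:
        avg[key] = tensor.clone()  # keep integer buffers from the first checkpoint

torch.save(avg, "converted/averaged_pytorch_model.bin")
```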
You will find 'Porting Notes' in `modeling_fsmt.py` and `tokenization_fsmt.py` with what has been done, nuances and what still needs to be done.
The 4 models are up on s3 and can be used already.
Usage:
```python
from transformers.tokenization_fsmt import FSMTTokenizer
from transformers.modeling_fsmt import FSMTForConditionalGeneration
mname = "facebook/wmt19-en-ru"
tokenizer = FSMTTokenizer.from_pretrained(mname)
model = FSMTForConditionalGeneration.from_pretrained(mname)
input = "Machine learning is great, isn't it?
input_ids = tokenizer.encode(input, return_tensors="pt")
outputs = model.generate(input_ids)
decoded = tokenizer.decode(outputs[0], skip_special_tokens=True)
print(decoded) # Машинное обучение - это здорово, не так ли?
```
**edit**: we have 5 more wmt models en/de from https://github.com/jungokasai/deep-shallow/ ready to be added as well, once this is merged.
@sshleifer | 09-04-2020 06:34:31 | 09-04-2020 06:34:31 | # [Codecov](https://codecov.io/gh/huggingface/transformers/pull/6940?src=pr&el=h1) Report
> Merging [#6940](https://codecov.io/gh/huggingface/transformers/pull/6940?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/90cde2e938638e64a8696a12b79ee5f52364b162?el=desc) will **increase** coverage by `2.47%`.
> The diff coverage is `94.60%`.
[](https://codecov.io/gh/huggingface/transformers/pull/6940?src=pr&el=tree)
```diff
@@ Coverage Diff @@
## master #6940 +/- ##
==========================================
+ Coverage 79.62% 82.10% +2.47%
==========================================
Files 168 171 +3
Lines 32284 33044 +760
==========================================
+ Hits 25706 27130 +1424
+ Misses 6578 5914 -664
```
| [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/6940?src=pr&el=tree) | Coverage Δ | |
|---|---|---|
| [src/transformers/modeling\_fsmt.py](https://codecov.io/gh/huggingface/transformers/pull/6940/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19mc210LnB5) | `93.58% <93.58%> (ø)` | |
| [src/transformers/tokenization\_fsmt.py](https://codecov.io/gh/huggingface/transformers/pull/6940/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fZnNtdC5weQ==) | `95.23% <95.23%> (ø)` | |
| [src/transformers/\_\_init\_\_.py](https://codecov.io/gh/huggingface/transformers/pull/6940/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9fX2luaXRfXy5weQ==) | `99.35% <100.00%> (+0.01%)` | :arrow_up: |
| [src/transformers/configuration\_auto.py](https://codecov.io/gh/huggingface/transformers/pull/6940/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9jb25maWd1cmF0aW9uX2F1dG8ucHk=) | `96.15% <100.00%> (+0.04%)` | :arrow_up: |
| [src/transformers/configuration\_fsmt.py](https://codecov.io/gh/huggingface/transformers/pull/6940/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9jb25maWd1cmF0aW9uX2ZzbXQucHk=) | `100.00% <100.00%> (ø)` | |
| [src/transformers/modeling\_auto.py](https://codecov.io/gh/huggingface/transformers/pull/6940/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19hdXRvLnB5) | `82.38% <100.00%> (+0.08%)` | :arrow_up: |
| [src/transformers/tokenization\_auto.py](https://codecov.io/gh/huggingface/transformers/pull/6940/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fYXV0by5weQ==) | `91.93% <100.00%> (+0.13%)` | :arrow_up: |
| [src/transformers/modeling\_tf\_distilbert.py](https://codecov.io/gh/huggingface/transformers/pull/6940/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9kaXN0aWxiZXJ0LnB5) | `64.55% <0.00%> (-34.28%)` | :arrow_down: |
| [src/transformers/modeling\_tf\_gpt2.py](https://codecov.io/gh/huggingface/transformers/pull/6940/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9ncHQyLnB5) | `71.84% <0.00%> (-23.17%)` | :arrow_down: |
| [src/transformers/tokenization\_albert.py](https://codecov.io/gh/huggingface/transformers/pull/6940/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fYWxiZXJ0LnB5) | `70.19% <0.00%> (-23.08%)` | :arrow_down: |
| ... and [15 more](https://codecov.io/gh/huggingface/transformers/pull/6940/diff?src=pr&el=tree-more) | |
------
[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/6940?src=pr&el=continue).
> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)
> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data`
> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/6940?src=pr&el=footer). Last update [90cde2e...1be40e3](https://codecov.io/gh/huggingface/transformers/pull/6940?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).
<|||||>Here is a little paraphrase script to amuse you:
```python
from transformers.tokenization_fsmt import FSMTTokenizer
from transformers.modeling_fsmt import FSMTForConditionalGeneration
text = "Every morning when I wake up, I experience an exquisite joy - the joy of being Salvador Dalí - and I ask myself in rapture: What wonderful things is this Salvador Dalí going to accomplish today?"
def translate(src_lang, tgt_lang, text):
mname = f"facebook/wmt19-{src_lang}-{tgt_lang}"
tokenizer = FSMTTokenizer.from_pretrained(mname)
model = FSMTForConditionalGeneration.from_pretrained(mname)
input_ids = tokenizer.encode(text, return_tensors='pt')
outputs = model.generate(input_ids, num_beams=5, early_stopping=True)
decoded = tokenizer.decode(outputs[0], skip_special_tokens=True)
return decoded
def paraphrase(src_lang, tgt_lang, text):
return translate(tgt_lang, src_lang, translate(src_lang, tgt_lang, text))
print(f"original:\n{text}")
print(f"paraphrased en-ru-en:\n{paraphrase('en', 'ru', text)}")
print(f"paraphrased en-de-en:\n{paraphrase('en', 'de', text)}")
```
* original:
Every morning when I wake up, I experience an exquisite joy - the joy of being Salvador Dalí - and I ask myself in rapture: What wonderful things is this Salvador Dalí going to accomplish today?
* paraphrased en-ru-en:
Every morning when I wake up, I have an amazing joy - the joy of being Salvador Dali - and I ask myself in awe: what wonderful things is this Salvador Dali going to do today?
* paraphrased en-de-en:
Every morning when I wake up, I experience an exquisite joy - the joy of being Salvador Dalí - and I ask myself in ecstasy: what wonderful things will this Salvador Dalí do today?
Dali would have been proud! :)<|||||>Hi, @stas00
Can the models be torchscripted or quantized?
I understand they are from fairseq and are pre-trained. What about optimizations in training a seq2seq model in transfomers?<|||||>Also the integration test fails in my torch 1.5.1 environment: https://gist.github.com/sshleifer/4ba0386e06d2b348c809f80c19f283fd
<|||||>Super excited about this!<|||||>> Hi, @stas00
> Can the models be torchscripted or quantized?
> I understand they are from fairseq and are pre-trained. What about optimizations in training a seq2seq model in transfomers?
The first step is just to make things work and have a similar BLEU performance. At a later stage we can work on more goals. The plan is to polish this PR, have it merged and then I plan to post to the forums and then you guys can experiment, report problems, ask for things, etc. How does that sound?
<|||||>> Have not read modeling.py yet, but left some other nitpicks.
Thank you very much, @sshleifer - I will address those later today.
> More importantly, I couldn't replicate `run_eval.py` results from this branch.
I know why. I uploaded an experimental version of the models last night, thought I forced the caching off, as the models were re-downloaded, but just now while re-running run_eval I got suddenly a re-download and blue is 0.1. So the experimental model didn't work. :(
So I still need to sort out the caching issue: https://github.com/huggingface/transformers/issues/6916
I'm reverting the models - takes a while to upload 5GB. I will update once this is complete and then you can re-eval.
----
I'm also thinking - needing an actual run_eval quality test, which can be run as a part of the test suite. perhaps on a small sample, maybe 100 instead of 2000 and a smallish beam size? then it can be slow, but not too slow?
----
Also, as I mentioned earlier there is no way to override `num_beans` in run_eval so one has to manually change it in configuration_fsmt.py.
So you were running it with `num_beans=8`.
Here are the results that I get for `PAIR=en-ru`:
```
# 15:
# {'bleu': 31.2512, 'n_obs': 1997, 'runtime': 521, 'seconds_per_sample': 0.2609}
# 50:
# {'bleu': 31.2695, 'n_obs': 1997, 'runtime': 1692, 'seconds_per_sample': 0.8473}
```
I will rebase, once this is merged https://github.com/huggingface/transformers/pull/6948 - thank you!
<|||||>**edit**: CDN has been updated so you're good to go to eval the model.
So models have been updated, but I can't figure out how to bypass caching, so still getting the old versions - might have to wait 24h :( See this issue: https://github.com/huggingface/transformers/issues/6916#issuecomment-687321087
So until this caching issue is sorted out (or 24h have passed) please don't waste your time on trying to eval this model. It won't work.<|||||>I wrote a bash script that `run_eval.py`s each of 4 checkpoints separately for each pair. So let's see which is the winner and use that one for now (could be different for different models):
```
export BS=8
# set to 5 for a quick test run, set to 2000 to eval all available records
export OBJS=2000
# at the end we want NUM_BEAMS=50 (as that's what fairseq used in their eval)
export NUM_BEAMS=50
pairs=(ru-en en-ru en-de de-en)
for pair in "${pairs[@]}"
do
export PAIR=$pair
export DATA_DIR=data/$PAIR
export SAVE_DIR=data/$PAIR
mkdir -p $DATA_DIR
sacrebleu -t wmt19 -l $PAIR --echo src | head -$OBJS > $DATA_DIR/val.source
sacrebleu -t wmt19 -l $PAIR --echo ref | head -$OBJS > $DATA_DIR/val.target
if [[ $pair =~ "ru" ]]
then
subdir=ensemble # ru folders
else
subdir=joined-dict.ensemble # de data folders are different
fi
END=4;
for i in $(seq 1 $END);
do
model=model$i.pt;
CHKPT=$model PYTHONPATH="src" python src/transformers/convert_fsmt_original_pytorch_checkpoint_to_pytorch.py --fsmt_checkpoint_path data/wmt19.$PAIR.$subdir --pytorch_dump_folder_path data/fsmt-wmt19-$PAIR > log.$PAIR-$model 2>&1
echo "###" $PAIR $model num_beams=$NUM_BEAMS objs=$OBJS
PYTHONPATH="src:examples/seq2seq" python examples/seq2seq/run_eval.py /code/huggingface/transformers-fair-wmt/data/fsmt-wmt19-$PAIR $DATA_DIR/val.source $SAVE_DIR/test_translations.txt --reference_path $DATA_DIR/val.target --score_path $SAVE_DIR/test_bleu.json --bs $BS --task translation 2> /dev/null
done
echo
echo
done
```
If someone decides to run this, you have to modify `convert_fsmt_original_pytorch_checkpoint_to_pytorch.py` to get the checkpoint name from `os.getenv("CHKPT")`<|||||>Results:
```
### ru-en model1.pt num_beams=50 objs=2000
{'bleu': 38.8222, 'n_obs': 2000, 'runtime': 233, 'seconds_per_sample': 0.1165}
### ru-en model2.pt num_beams=50 objs=2000
{'bleu': 38.4053, 'n_obs': 2000, 'runtime': 233, 'seconds_per_sample': 0.1165}
### ru-en model3.pt num_beams=50 objs=2000
{'bleu': 38.8222, 'n_obs': 2000, 'runtime': 234, 'seconds_per_sample': 0.117}
### ru-en model4.pt num_beams=50 objs=2000
{'bleu': 38.933, 'n_obs': 2000, 'runtime': 236, 'seconds_per_sample': 0.118}
### en-ru model1.pt num_beams=50 objs=2000
{'bleu': 31.2898, 'n_obs': 1997, 'runtime': 295, 'seconds_per_sample': 0.1477}
### en-ru model2.pt num_beams=50 objs=2000
{'bleu': 31.4669, 'n_obs': 1997, 'runtime': 293, 'seconds_per_sample': 0.1467}
### en-ru model3.pt num_beams=50 objs=2000
{'bleu': 33.4736, 'n_obs': 1997, 'runtime': 289, 'seconds_per_sample': 0.1447}
### en-ru model4.pt num_beams=50 objs=2000
{'bleu': 33.2873, 'n_obs': 1997, 'runtime': 296, 'seconds_per_sample': 0.1482}
### en-de model1.pt num_beams=50 objs=2000
{'bleu': 40.7906, 'n_obs': 1997, 'runtime': 304, 'seconds_per_sample': 0.1522}
### en-de model2.pt num_beams=50 objs=2000
{'bleu': 40.7677, 'n_obs': 1997, 'runtime': 307, 'seconds_per_sample': 0.1537}
### en-de model3.pt num_beams=50 objs=2000
{'bleu': 40.7677, 'n_obs': 1997, 'runtime': 306, 'seconds_per_sample': 0.1532}
### en-de model4.pt num_beams=50 objs=2000
{'bleu': 42.7892, 'n_obs': 1997, 'runtime': 305, 'seconds_per_sample': 0.1527}
### de-en model1.pt num_beams=50 objs=2000
{'bleu': 39.4096, 'n_obs': 2000, 'runtime': 238, 'seconds_per_sample': 0.119}
### de-en model2.pt num_beams=50 objs=2000
{'bleu': 39.3009, 'n_obs': 2000, 'runtime': 238, 'seconds_per_sample': 0.119}
### de-en model3.pt num_beams=50 objs=2000
{'bleu': 38.9375, 'n_obs': 2000, 'runtime': 238, 'seconds_per_sample': 0.119}
### de-en model4.pt num_beams=50 objs=2000
{'bleu': 41.1808, 'n_obs': 2000, 'runtime': 237, 'seconds_per_sample': 0.1185}
```
So the differences between checkpoints are quite significant, clearly the 4th checkpoint outperforms them all for each pair.<|||||>Here is where we are at right now BLEU score-wise: (w/ `num_beams=50`) (after switching to using the 4th checkpoint file):
pair | fairseq | transformers
-------|----|----------
"en-ru"|36.4| 33.29
"ru-en"|41.3| 38.93
"de-en"|42.3| 41.18
"en-de"|43.1| 42.79
We are very close on de-en/en-de, but 2-3 points below on ru-en/en-ru.
So wrt the model ensemble - as was suggested, transformers currently won't support that mechanism - do we just stop here and release this ported version with a slight handicap?
I'm going to work on the remaining divergence in beam search and maybe score a little bit more. But I doubt we will get to the same level w/o the ensemble.
p.s. I'm uploading new models, so in about 11 hours the CDN cache should update, if you want to validate these numbers.<|||||>1) We can definitely merge this PR without ensemble, smart move checking each .pt file.
2) Would be good to figure out how to eval 1 model.pt file with `fairseq-generate` so that we can figure out whether the discrepancy is from anything besides ensembling.
<|||||>> Would be good to figure out how to eval 1 model.pt file with `fairseq-generate` so that we can figure out whether the discrepancy is from anything besides ensembling.
I agree. That would help a lot! But no word yet from @edunov: https://github.com/pytorch/fairseq/issues/2544
<|||||>wrt shrinking it and reusing more from bart: probably it would take modifying bart to work with two vocabs and fall back on one, e.g. `if tgt_vocab is None: tgt_vocab_size = src_vocab_size`? That's the major reason for the "fork".
Then we can definitely fold most of it back with a few extra flags.<|||||>Here is an update on fairseq bleu scores validation. Got a [reply with great instructions](https://github.com/pytorch/fairseq/issues/2544#issuecomment-688054859) from @edunov and as a result I was able to get 35.7 with 4 models and 36.0 with model4 (en-ru pair). Sergei suggests that one more step is needed to re-rank the scores to reach the reported in the paper 36.4 score. Our best score at the moment is 33.29 for this pair. (other pairs are much closer to the goal than this one).
So now I know we are comparing apples to apples and I have more figuring out to do.<|||||>@sshleifer, I added a new test `test_bleu_scores` that evals bleu score on a small batch, which I think is very useful for regression testing - as it's now built into the test suite. it's the same speed as other integration tests (model loading still takes much longer). Surely, it gives about 2/3rd of the best score, but it's enough to detect a regression in the model.
I added caching so now it should be almost as fast to have many more integration tests.
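(The caching is roughly along these lines - not the exact code, just the idea of building each model/tokenizer once and reusing it across tests:)
```python
import functools

from transformers import FSMTForConditionalGeneration, FSMTTokenizer


@functools.lru_cache(maxsize=None)
def get_tokenizer(mname):
    return FSMTTokenizer.from_pretrained(mname)


@functools.lru_cache(maxsize=None)
def get_model(mname):
    # built once per checkpoint name, then shared by all the integration tests
    return FSMTForConditionalGeneration.from_pretrained(mname)
```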
Question: currently I had to hack `sys.path` to get to the code in `examples/seq2seq`.
```
# XXX: make calculate_bleu accessible to integration tests?
examples_dir = os.path.abspath(os.path.join(os.path.dirname(__file__), "..", "examples"))
sys.path.insert(0, examples_dir)
from seq2seq.utils import calculate_bleu # noqa
```
Is it time we make this and similar functions available to our "normal" integration tests? Thoughts?
**edit**: I ended up just copying the function as it's just 1 line.
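(For reference, the copied helper is essentially a thin wrapper around `sacrebleu` - roughly this, not the exact source:)
```python
from sacrebleu import corpus_bleu


def calculate_bleu(output_lns, refs_lns):
    # corpus-level BLEU of the generated lines against the reference lines
    return {"bleu": round(corpus_bleu(output_lns, [refs_lns]).score, 4)}
```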
but then of course CI doesn't have `sacrebleu` installed.
Where is the information on how/where slow tests are run - i.e. not on CI? I understand they are being run, just don't know where and how to see the status. Then need to add `sacrebleu` to the requirements file of that special CI.<|||||>Tried to answer your CI Q: https://discuss.huggingface.co/t/circleci-github-actions-which-tests-run-where-and-when/1042
In terms of moving the scorers, you will encounter some resistance because of the need to add dependencies.
You could just put your test in `examples/seq2seq/test_fsmt_bleu_score.py`.
`sacrebleu` is in `examples/requirements.txt` so the self-scheduled nightly CI will run it (hopefully).<|||||>OK, the model has been decoupled from Bart and a lot of the unneeded code removed.
Please let me know if anything else is needed. Thank you.<|||||>Finally, before merging this, let's discuss the model naming with thinking into the future.
What we give user is not fairseq, but a wmtXX-based/trained model, so perhaps fairseq shouldn't be anywhere in the name.
fairseq have done other wmt datasets in the past. And so very likely to do wmt20 and others. If such future versions of the model are not much different perhaps those too could be folded into this model, therefore it shouldn't be hardwired to wmt19.
They call their line of "products" related to wmt: `transformer.wmt\d\d.\w\w-\w\w` (`transformer.wmt19.en-ru`, `transformer.wmt20.de-en`, etc.
Perhaps therefore `TWMT` is most fitting then as the base name? As in `TransformerWMT`, but shorter? So we end up with:
* `TWMTTokenizer`
* `TWMTForConditionalGeneration`
Thoughts?
p.s. for better context, the loading code for fairseq wmt is:
```
model = torch.hub.load('pytorch/fairseq', "transformer.wmt19.en-ru",
checkpoint_file='model1.pt:model2.pt:model3.pt:model4.pt', tokenizer='moses', bpe='fastbpe')
```<|||||>- transformer is unhelpful -- all models in this lib are transformers.
- `FairseqMTModel` works for me -- its analogous to `MarianMTModel`.
@thomwolf: this is good from my side. You may have opinions on naming/nonstandard use of `DecoderConfig`
<|||||>> transformer is unhelpful -- all models in this lib are transformers.
agreed!
> FairseqMTModel works for me -- its analogous to MarianMTModel.
Too big of a scope? this one is wmt-specific.
Perhaps `FWMT`? as in FairseqWMT
and lowercased `fairseqmt` where it's needed sucks readability-wise, full abbreviation `fmt` works much better - i guess that's why it's `modeling_marian.py` and not `modeling_marianmt.py`.
Or just `WMT` from `wmtxx` series?<|||||>> nonstandard use of `DecoderConfig`
this is a non-standard model with 2 vocabs of different sizes, that if I'm correct is the first one in the family, so it calls for a non-standard solution in lieu of changing the core functions to support such models.
There are at least 3 other hacks I had to add in the tokenizer and the model/config to fit into the current world of "same size src/tgt vocab". And there is at least one core function (resize) that will most likely break on this model, since it resizes to the same size, but we haven't had a need for it so far.
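Just to make the two-vocab point concrete, the config ends up carrying both sizes - roughly like this (field names follow the current state of the PR, the numbers are made up):
```python
from transformers import FSMTConfig

config = FSMTConfig(
    src_vocab_size=31_000,  # source-side (e.g. en) vocab - made-up number
    tgt_vocab_size=29_000,  # target-side (e.g. ru) vocab - made-up number, can differ from the source
)
```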
p.s. Oddly enough their en-de/de-en models have the same-size merged vocabs, but ru-en/en-ru do not.
<|||||>I wrote a script that translates with fairseq/model4 and this model based on model4 side by side and comparing the outputs.
I fed it all of sacrebleu eval text, so out of 8000 sentences there were ~10 mismatches - the rest matches up perfectly on the top ranking beam choice (beams=5). Excellent!
Yet, we are still behind on the bleu scores.
We don't have (1) the model ensemble, and also (2) the re-ranking algorithm that they use, which is responsible for the extra points.<|||||>Bringing some of the insights from porting allen nlp models at https://github.com/huggingface/transformers/issues/7049, I tried to re-run eval with `len_penalty=0.6` (until now we used the default `len_penalty=1.0`).
And yes, for 3 out of 4 models we get a significant improvement.
| pair | fairseq +rerank | fairseq -rerank | transformers |
| ------- | --------- | --------- | ------------- |
| "ru-en" | 41.3 | 38.55 | 38.14/39.05 |
| "en-ru" | 36.4 | 31.26 | 32.76 |
| "en-de" | 43.1 | 40.88 | 42.23 |
| "de-en" | 42.3 | 39.38 | 40.71 |
We score higher on ru-en with `len_penalty=1.0` 38.8524, vs `38.13` with `len_penalty=0.6`. Rerunning with `len_penalty=1.1`, I get `39.0498` - almost a point higher!
I'm not sure how to get the `len_penalty` used by fairseq - this data is not being shared, other than the paper alluding that they found the best fit by searching the space.
I suppose we could search too, but how are we to know that finding the length penalty that leads to the highest bleu score on 2000 items is generic enough to lead to the best translation quality for any other input?
What do you think?
And we now beat fairseq's results on a single model with no re-ranking.<|||||>Using the new `run_eval_search.py` script https://github.com/huggingface/transformers/pull/7109 I run an extensive search last night and got some extra score!
```
PAIR=en-de
--search="num_beams=5:8:11:15 length_penalty=0.6:0.7:0.8:0.9:1.0:1.1 early_stopping=true:false"
```
Here is just the top results.
```
bleu | num_beams | length_penalty | early_stopping
----- | --------- | -------------- | --------------
42.83 | 15 | 1.0 | 0
42.79 | 8 | 1.0 | 0
42.79 | 15 | 0.9 | 0
42.79 | 15 | 1.1 | 1
42.77 | 5 | 1.0 | 0
42.76 | 8 | 0.8 | 0
```
I think I will run it for all others and use the best reasonable hparam set as the default. Here it'd be: `5 | 1 | False` - the user can of course override these during `generate`.<|||||>> This is great, impressive work @stas00!
Thank you for the kind words, @thomwolf and doing the review!
> Regarding naming I like both `FSMTModel` and `FairseqMTModel` with a preference for the later more explicit naming option.
The only issue I see with `FairseqMTModel` is that when we have to use the lowercased version of it in the code: `fairseqmt` it doesn't lend to readability `fsmt` on the other hand reads easily.
Also when typing it out often I couldn't remember whether to use FairSeq or Fairseq. This was my initial name, and it was quite painful working with it. Once I switched to FSMT I experienced much more flow.
So based on these 2 points my vote goes for `FSMTModel`<|||||>Sounds like `FSMTModel` wins!<|||||>> Sounds like `FSMTModel` wins!
Excellent! So once @LysandreJik and @sgugger get a chance to review we can finally merge it!<|||||>Hmm, since we removed the `fsmt-` prefix in model names, it is no longer possible to identify all models for this arch:
https://huggingface.co/models?search=wmt
gives models from other arch as well.
@sshleifer - do you have any Ideas how to solve this?
Restore the `fsmt-` prefix?<|||||>Your yaml front matter allows filters!
Is this page correct: https://huggingface.co/models?filter=fsmt ?<|||||>> Your yaml front matter allows filters!
> Is this page correct: https://huggingface.co/models?filter=fsmt ?
Oh, fantastic! All is good then! Thank you, @sshleifer <|||||>FYI, I moved all the data-prep-convert/eval/card writing scripts into their own place: https://github.com/huggingface/transformers/pull/7155 so the convert script got much shorter.<|||||>I think we freaked out github, it stopped reporting checks.
@LysandreJik - this is good to go - thanks a lot for your feedback.<|||||>> I think you can do `Add suggestion to batch` to prevent the issue where you can't find the comments anymore from happening!
Oh, that was a super-helpful hint. I wish I knew about it 2 days ago. Thanks a lot!
> Otherwise, feel free to just to the modifications yourself, I don't need to be co-author!
It's a team work ;) Thank you for your contribution, @LysandreJik!
|
transformers | 6,939 | closed | PyTorch (with GPU) Trainer leaks CPU memory on Google Colab | ## Environment info
- `transformers` version: 3.1.0
- Platform: Linux-4.19.112+-x86_64-with-Ubuntu-18.04-bionic
- Python version: 3.6.9
- PyTorch version (GPU?): 1.6.0+cu101 (True)
- Tensorflow version (GPU?): not installed (NA)
- Using GPU in script?: Yes
- Using distributed or parallel set-up in script?: No
### Who can help
Trainer: @sgugger
## Information
Model I am using `RoBERTaForMaskedLM`:
```python
config = RobertaConfig(
    vocab_size=32_000,
    max_position_embeddings=256+2,
    num_attention_heads=12,
    num_hidden_layers=6,
    type_vocab_size=1,
)
```
The problem arises when using:
* [ ] [my own modified scripts](https://colab.research.google.com/drive/1bVv6V9IIhTNXvWbpUvQyPyRpK3LOgFfz?usp=sharing): I am trying to follow the official blog tutorial on training a language model from scratch. I have made a few changes from the official script (using Marathi OSCAR corpus, changed model config, and vocabulary size, and am fetching the sentences in the dataset on the fly
The tasks I am working on is:
* [ ] my own task or dataset: Masked Language Modelling with RoBERTa on Marathi Oscar Corpus
## To reproduce
Steps to reproduce the behavior:
1. Run this [colab notebook](https://colab.research.google.com/drive/1bVv6V9IIhTNXvWbpUvQyPyRpK3LOgFfz?usp=sharing).
## Expected behavior
The RAM consumption starts rising towards the end of the first epoch, ultimately crashing the entire session due to full memory consumption (12.72GB of Colab RAM)
| 09-04-2020 05:02:44 | 09-04-2020 05:02:44 | |
transformers | 6,938 | closed | The downloading url of GermEval 2014 dataset is out dated. | ## Environment info
- `transformers` version:
- Platform:
- Python version:
- PyTorch version (GPU?):
- Tensorflow version (GPU?):
- Using GPU in script?:
- Using distributed or parallel set-up in script?:
### Who can help
## Information
The download URLs for the GermEval 2014 dataset are outdated in the README file at https://github.com/huggingface/transformers/tree/master/examples/token-classification. The URLs should be replaced with the ones used in run.sh.
Model I am using (Bert, XLNet ...):
The problem arises when using:
* [x] the official example scripts: (give details below)
* [ ] my own modified scripts: (give details below)
The tasks I am working on is:
* [x] an official GLUE/SQUaD task: (give the name)
* [ ] my own task or dataset: (give details below)
## To reproduce
Steps to reproduce the behavior:
1.
2.
3.
## Expected behavior
| 09-04-2020 03:25:25 | 09-04-2020 03:25:25 | This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.
|
transformers | 6,937 | closed | Finetune other models for sentence-classification | # ❓ Questions & Help
I want to use run_glue.py to finetune pre-trained models for classification, but I find that the script can only be used for BERT, XLM, XLNet and RoBERTa. I want to finetune other models like Longformer - what should I do?
| 09-04-2020 03:16:27 | 09-04-2020 03:16:27 | Hi, there is a `LongformerForSequenceClassification` so you should be able to use `run_glue.py` with that model.
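For instance, something like this should work (the model id is just an example):
```python
from transformers import LongformerForSequenceClassification, LongformerTokenizer

tokenizer = LongformerTokenizer.from_pretrained("allenai/longformer-base-4096")
model = LongformerForSequenceClassification.from_pretrained("allenai/longformer-base-4096", num_labels=2)
```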
I've removed the misleading dosctring in https://github.com/huggingface/transformers/commit/1650130b0fd71eb80380c47c8ffed89d49ff3481.<|||||>thank you.I can finetune all models for downstream tasks<|||||>sorry to interrupt you, I have a question for new version of transformers.
I once used the old version: when I ran "run_glue.py", I found that the time to load different models to the GPU was different (the code is model.to_device()) - the more layers the model has, the more time it takes to load it to the GPU.
However, when I use the new version and run "run_glue.py", the time to load different models to the GPU is nearly the same. (I think the new script loads the model to the GPU when the Trainer is initialized.)
Can you explain the reason?
|
transformers | 6,936 | closed | Load BERT+GPT2 in EncoderDecoder | # ❓ Questions & Help
## Details
I am working on modelling an EncoderDecoderModel using the weights of BERT and GPT2. After going through lots of repos and issues I found that currently it's not possible, but I found a model card that has used this BERT+GPT2 model on the cnn-dailymail dataset [here](https://huggingface.co/patrickvonplaten/bert2gpt2-cnn_dailymail-fp16). I would like to know in which version of transformers that is possible. One thing more: there was an attribute passed to the `TrainingArguments` module, `predict_from_generate`, that I can't find in `transformers` 3.1.0, 3.0.2 or 2.11.0 - please clarify in which version these parameters exist.
@patrickvonplaten Please answer my query
| 09-04-2020 03:10:18 | 09-04-2020 03:10:18 | Hi @AmbiTyga ,
Bert2GPT2 is available in the latest release, however `predict_from_generate` is not yet added to `Trainer`.
You can set `predict_from_generate` to `False` and `compute_metrics` to `None` if you don't need generative metrics (ROUGE etc.) at training time.
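For reference, a minimal sketch of the warm-started Bert2GPT2 setup (the model names are just examples):
```python
from transformers import EncoderDecoderModel

# warm-start the encoder from BERT and the decoder from GPT2
model = EncoderDecoderModel.from_encoder_decoder_pretrained("bert-base-uncased", "gpt2")
```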
If you want to use `predict_from_generate`, then you'll need to install transformers from this fork:
https://github.com/huggingface/transformers/tree/more_general_trainer_metric<|||||>This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.
|
transformers | 6,935 | closed | Replaced torch.load for loading the pretrained vocab of TransformerXL tokenizer to pickle.load | TransformerXL tokenizer requires torch to work because it uses torch.load to load the vocabulary. This means that if I'm using the TF2 implementation, I have to add torch as a dependency just for that. So I replaced the call to a call to pickle.load (which is what torch.load internally uses) to solve the issue.
Tested an all the TransformerXL related tests (also the slow ones) and they all passed. | 09-04-2020 01:53:56 | 09-04-2020 01:53:56 | Hi @w4nderlust that's a good idea!
The CI seems not happy with the change though and it seems related to your changes.
Do you think you could take a look?<|||||>@thomwolf trying to see the details of the failing tests, but circleci wat me to login with github and grant access to all my repos and orgs, I prefer to avoid it.
If you can point out the failing tests I'm happy to take a look at it.<|||||>@thomwolf I inspected further and this is what i discovered:
1. i was running the `modeling_transfo_xl.py` tests, but I should have been running the `tokenization_transfo_xl.py` test. I Imagine the errors in CI are coming from there.
2. upon further inspection, I noticed that inside `tokenization_transfo_xl.py` torch is used everywhere. There probably was a design decision that led to that which I'm not aware of, but as a user i would ask you to reconsider, because if the tokenizer uses torch, the TF version of TransformerXL can neve be used without installing torch. The extent to which torch is used goes beyond my current familiarity with the library, so i will refrain to propose modifications to it, a part from what i propose in the next point.
3. in my commit I replaced the loading of the vocab, but upon inspection i realized that, yes, torch uses pickle, but it does that in a way that is peculiar, including magic numbers and protocol versions and some custom logic that will take some time to reverse engineer ( https://github.com/pytorch/pytorch/blob/0c01f136f3c8d16f221d641befcb5a74142bbeb1/torch/serialization.py#L764-L774 ). It doesn't seem you can directly load the vocab dictionary without re-implementing quite some load code from torch, plus this doesn't sound like a sound approach because PyTorch can itself start using a new load mechanism in the future. So, what I tried to do is to replace ALSO `torch.save` usages within the context of vocabulary save with `pickle.dump`.( line 260-262) in my last commit). The effect is that now all `test_tokenize_transfo_cl.py` tests pass, the vocab can be saved and loaded, but, because the vocab that ships with the pretrained models was saved originally with torch, If it try to load from pretrained model, loading doesn't work (what is loaded is just the torch magic number). So here I guess you have to make a call about what you want to do: if you want to use pickle to load and save vocab, this PR does it for you, but you have to change the TransformerXL pretrained model that you ship by replacing the vocab file saved with PyTorch with one saved with pickle (the code to do it from the current vocab file is straightforward `pickle.dump(torch.load(vocab_file), vocab_file)`).
As I realized the issue is bigger than I originally thought, it would be great if someone could look at it in more detail from the HF side.<|||||>Hi @w4nderlust ok, I'm reaching this PR now.
So the original tokenizer for Transformer-XL was copied from the original research work to be able to import the trained checkpoints. The reliance on PyTorch is thus not really a design decision of us but more of the original author.
We can definitely reconsider it and if you don't mind, I'll try to build upon your PR to relax this reliance on PyTorch while keeping backward compatibility if possible.<|||||># [Codecov](https://codecov.io/gh/huggingface/transformers/pull/6935?src=pr&el=h1) Report
> Merging [#6935](https://codecov.io/gh/huggingface/transformers/pull/6935?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/aba4e22944f0c985bebdcde51d47a565dd4f551d?el=desc) will **increase** coverage by `1.96%`.
> The diff coverage is `79.16%`.
[](https://codecov.io/gh/huggingface/transformers/pull/6935?src=pr&el=tree)
```diff
@@ Coverage Diff @@
## master #6935 +/- ##
==========================================
+ Coverage 74.71% 76.67% +1.96%
==========================================
Files 194 181 -13
Lines 39407 35738 -3669
==========================================
- Hits 29441 27401 -2040
+ Misses 9966 8337 -1629
```
| [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/6935?src=pr&el=tree) | Coverage Δ | |
|---|---|---|
| [src/transformers/file\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/6935/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9maWxlX3V0aWxzLnB5) | `83.09% <60.00%> (+0.17%)` | :arrow_up: |
| [src/transformers/tokenization\_transfo\_xl.py](https://codecov.io/gh/huggingface/transformers/pull/6935/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fdHJhbnNmb194bC5weQ==) | `42.48% <84.21%> (+0.74%)` | :arrow_up: |
| [src/transformers/configuration\_reformer.py](https://codecov.io/gh/huggingface/transformers/pull/6935/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9jb25maWd1cmF0aW9uX3JlZm9ybWVyLnB5) | `21.62% <0.00%> (-78.38%)` | :arrow_down: |
| [src/transformers/modeling\_reformer.py](https://codecov.io/gh/huggingface/transformers/pull/6935/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19yZWZvcm1lci5weQ==) | `16.71% <0.00%> (-77.89%)` | :arrow_down: |
| [src/transformers/modeling\_tf\_xlm.py](https://codecov.io/gh/huggingface/transformers/pull/6935/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl94bG0ucHk=) | `18.73% <0.00%> (-74.53%)` | :arrow_down: |
| [src/transformers/modeling\_tf\_flaubert.py](https://codecov.io/gh/huggingface/transformers/pull/6935/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9mbGF1YmVydC5weQ==) | `21.53% <0.00%> (-68.15%)` | :arrow_down: |
| [src/transformers/tokenization\_marian.py](https://codecov.io/gh/huggingface/transformers/pull/6935/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fbWFyaWFuLnB5) | `32.75% <0.00%> (-66.38%)` | :arrow_down: |
| [src/transformers/trainer\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/6935/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90cmFpbmVyX3V0aWxzLnB5) | `60.63% <0.00%> (-20.14%)` | :arrow_down: |
| [src/transformers/trainer.py](https://codecov.io/gh/huggingface/transformers/pull/6935/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90cmFpbmVyLnB5) | `55.70% <0.00%> (-15.11%)` | :arrow_down: |
| [src/transformers/integrations.py](https://codecov.io/gh/huggingface/transformers/pull/6935/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9pbnRlZ3JhdGlvbnMucHk=) | `29.00% <0.00%> (-5.66%)` | :arrow_down: |
| ... and [71 more](https://codecov.io/gh/huggingface/transformers/pull/6935/diff?src=pr&el=tree-more) | |
------
[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/6935?src=pr&el=continue).
> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)
> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data`
> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/6935?src=pr&el=footer). Last update [aba4e22...cd57922](https://codecov.io/gh/huggingface/transformers/pull/6935?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).
<|||||>Thank you for the work on this! Much appreciated! :) |
transformers | 6,934 | closed | non-interactive transformers-cli upload? | # 🚀 Feature request
Currently, `transformers-cli upload` works only interactively due to its prompt:
`Proceed? [Y/n]`
After running the updated model conversion, I would like to be able to do:
```
cd data
transformers-cli upload fsmt-wmt19-ru-en
transformers-cli upload fsmt-wmt19-en-ru
transformers-cli upload fsmt-wmt19-de-en
transformers-cli upload fsmt-wmt19-en-de
cd -
```
But this won't work unattended because of the interactive prompt.
Would it be possible to add a `-y` override?
Alternatively, would it be possible to give it all dirs to upload in one command?
```
transformers-cli upload fsmt-wmt19-ru-en fsmt-wmt19-en-ru fsmt-wmt19-de-en fsmt-wmt19-en-de
```
## Motivation
I have been re-uploading 4 x 1.1GB models on a relatively slow connection, and I have to be around to hit Y for each one of them, which is very counter-productive, as I have to go back and re-check whether each upload has been completed. I can probably code some shell expect script to feed it automatically, but this defeats the purpose.
Thank you!
| 09-04-2020 00:46:46 | 09-04-2020 00:46:46 | FWIW, I found a workaround (god bless Stackoverflow):
```
cd data
yes Y | transformers-cli upload fsmt-wmt19-ru-en
yes Y | transformers-cli upload fsmt-wmt19-en-ru
yes Y | transformers-cli upload fsmt-wmt19-de-en
yes Y | transformers-cli upload fsmt-wmt19-en-de
cd -
```<|||||>Ah, nice find :)
I think a `-y` flag would be reasonable if you want to open a PR<|||||>Will do. Thank you.<|||||>Done: https://github.com/huggingface/transformers/pull/7035 |
transformers | 6,933 | closed | [docstring] missing arg | add the missing `tie_word_embeddings` entry
| 09-03-2020 22:14:07 | 09-03-2020 22:14:07 | # [Codecov](https://codecov.io/gh/huggingface/transformers/pull/6933?src=pr&el=h1) Report
> Merging [#6933](https://codecov.io/gh/huggingface/transformers/pull/6933?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/e95d262f2553859af9bffbfe5f5bc7e362259939?el=desc) will **increase** coverage by `1.66%`.
> The diff coverage is `n/a`.
[](https://codecov.io/gh/huggingface/transformers/pull/6933?src=pr&el=tree)
```diff
@@ Coverage Diff @@
## master #6933 +/- ##
==========================================
+ Coverage 77.70% 79.36% +1.66%
==========================================
Files 161 161
Lines 30119 30119
==========================================
+ Hits 23403 23905 +502
+ Misses 6716 6214 -502
```
| [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/6933?src=pr&el=tree) | Coverage Δ | |
|---|---|---|
| [src/transformers/configuration\_reformer.py](https://codecov.io/gh/huggingface/transformers/pull/6933/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9jb25maWd1cmF0aW9uX3JlZm9ybWVyLnB5) | `100.00% <ø> (+78.37%)` | :arrow_up: |
| [src/transformers/modeling\_tf\_openai.py](https://codecov.io/gh/huggingface/transformers/pull/6933/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9vcGVuYWkucHk=) | `22.58% <0.00%> (-72.26%)` | :arrow_down: |
| [src/transformers/modeling\_tf\_albert.py](https://codecov.io/gh/huggingface/transformers/pull/6933/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9hbGJlcnQucHk=) | `21.47% <0.00%> (-69.44%)` | :arrow_down: |
| [src/transformers/tokenization\_xlm\_roberta.py](https://codecov.io/gh/huggingface/transformers/pull/6933/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25feGxtX3JvYmVydGEucHk=) | `48.80% <0.00%> (-46.43%)` | :arrow_down: |
| [src/transformers/tokenization\_mbart.py](https://codecov.io/gh/huggingface/transformers/pull/6933/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fbWJhcnQucHk=) | `57.14% <0.00%> (-39.69%)` | :arrow_down: |
| [src/transformers/tokenization\_marian.py](https://codecov.io/gh/huggingface/transformers/pull/6933/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fbWFyaWFuLnB5) | `67.79% <0.00%> (-31.36%)` | :arrow_down: |
| [src/transformers/modeling\_transfo\_xl\_utilities.py](https://codecov.io/gh/huggingface/transformers/pull/6933/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190cmFuc2ZvX3hsX3V0aWxpdGllcy5weQ==) | `52.98% <0.00%> (-13.44%)` | :arrow_down: |
| [src/transformers/modeling\_transfo\_xl.py](https://codecov.io/gh/huggingface/transformers/pull/6933/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190cmFuc2ZvX3hsLnB5) | `67.10% <0.00%> (-12.67%)` | :arrow_down: |
| [src/transformers/modeling\_xlm.py](https://codecov.io/gh/huggingface/transformers/pull/6933/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ194bG0ucHk=) | `88.77% <0.00%> (-2.55%)` | :arrow_down: |
| [src/transformers/modeling\_openai.py](https://codecov.io/gh/huggingface/transformers/pull/6933/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19vcGVuYWkucHk=) | `80.96% <0.00%> (-1.30%)` | :arrow_down: |
| ... and [8 more](https://codecov.io/gh/huggingface/transformers/pull/6933/diff?src=pr&el=tree-more) | |
------
[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/6933?src=pr&el=continue).
> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)
> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data`
> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/6933?src=pr&el=footer). Last update [e95d262...ca0f022](https://codecov.io/gh/huggingface/transformers/pull/6933?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).
<|||||>Perhaps this is not the place to ask, but why do we use rst in some docs and md in others? I am yet to use rst, so I don't know what the cons/pros are. Perhaps it has to do with sphinx's preferred format for its linking features? <|||||>We need rst in the docstrings because that's the format sphinx uses. Then we need to use rst in the doc files that want to link to some functions/classes to be able to leverage sphinx autolinking features. Markdown is also supported, but you can't automatically link to a class/function in it, so I prefer using rst.
In the source docs most of the files are in rst apart from some symlinks to some READMEs (that need to be in Markdown to properly display on GitHub), the CONTRIBUTING and one file about migration (this one could be converted to rst if we really wanted to). For a new file, I'd strongly encourage rst unless there is a reason to use Markdown.<|||||>Excellent. I didn't know any of this. Will be adding .rst for new files in the future (though can't help but notice that markdown seems a way easier/more intuitive to write). |
transformers | 6,932 | closed | [docstring] misc arg doc corrections | - fix docstring s/int/bool/
- correct arg description
- fix num_labels to match reality | 09-03-2020 22:07:34 | 09-03-2020 22:07:34 | # [Codecov](https://codecov.io/gh/huggingface/transformers/pull/6932?src=pr&el=h1) Report
> Merging [#6932](https://codecov.io/gh/huggingface/transformers/pull/6932?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/e95d262f2553859af9bffbfe5f5bc7e362259939?el=desc) will **increase** coverage by `1.83%`.
> The diff coverage is `n/a`.
[](https://codecov.io/gh/huggingface/transformers/pull/6932?src=pr&el=tree)
```diff
@@ Coverage Diff @@
## master #6932 +/- ##
==========================================
+ Coverage 77.70% 79.53% +1.83%
==========================================
Files 161 161
Lines 30119 30119
==========================================
+ Hits 23403 23956 +553
+ Misses 6716 6163 -553
```
| [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/6932?src=pr&el=tree) | Coverage Δ | |
|---|---|---|
| [src/transformers/configuration\_bart.py](https://codecov.io/gh/huggingface/transformers/pull/6932/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9jb25maWd1cmF0aW9uX2JhcnQucHk=) | `94.00% <ø> (ø)` | |
| [src/transformers/modeling\_tf\_albert.py](https://codecov.io/gh/huggingface/transformers/pull/6932/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9hbGJlcnQucHk=) | `21.47% <0.00%> (-69.44%)` | :arrow_down: |
| [src/transformers/tokenization\_xlm.py](https://codecov.io/gh/huggingface/transformers/pull/6932/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25feGxtLnB5) | `16.26% <0.00%> (-66.67%)` | :arrow_down: |
| [src/transformers/modeling\_tf\_gpt2.py](https://codecov.io/gh/huggingface/transformers/pull/6932/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9ncHQyLnB5) | `71.84% <0.00%> (-23.17%)` | :arrow_down: |
| [src/transformers/modeling\_xlnet.py](https://codecov.io/gh/huggingface/transformers/pull/6932/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ194bG5ldC5weQ==) | `60.81% <0.00%> (-22.62%)` | :arrow_down: |
| [src/transformers/modeling\_longformer.py](https://codecov.io/gh/huggingface/transformers/pull/6932/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19sb25nZm9ybWVyLnB5) | `71.55% <0.00%> (-20.48%)` | :arrow_down: |
| [src/transformers/data/data\_collator.py](https://codecov.io/gh/huggingface/transformers/pull/6932/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9kYXRhL2RhdGFfY29sbGF0b3IucHk=) | `86.63% <0.00%> (-6.08%)` | :arrow_down: |
| [src/transformers/modeling\_tf\_xlm.py](https://codecov.io/gh/huggingface/transformers/pull/6932/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl94bG0ucHk=) | `88.42% <0.00%> (-4.85%)` | :arrow_down: |
| [src/transformers/file\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/6932/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9maWxlX3V0aWxzLnB5) | `82.41% <0.00%> (-0.26%)` | :arrow_down: |
| [src/transformers/modeling\_bert.py](https://codecov.io/gh/huggingface/transformers/pull/6932/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19iZXJ0LnB5) | `88.26% <0.00%> (-0.17%)` | :arrow_down: |
| ... and [9 more](https://codecov.io/gh/huggingface/transformers/pull/6932/diff?src=pr&el=tree-more) | |
------
[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/6932?src=pr&el=continue).
> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)
> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data`
> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/6932?src=pr&el=footer). Last update [e95d262...1c08fdb](https://codecov.io/gh/huggingface/transformers/pull/6932?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).
<|||||>Thanks. Will in these fixes don't hesitate to replace the True or False by :obj:\`True\` and :obj:\`False\` for consistency (also don't hesitate to tag me on doc PR for quicker reviews :-) )<|||||>Understood!
Do we have a model of perhaps one largish module that we can use a reference for how the rest should be done? So that you polish the hell out of it, and then this will be the model to follow.
<|||||>`tokenization_utils_base` is a good example for instance, all other utils modules too. `config_utils` has an example of how to split parameters in several subgroups if you ever need a model of that. Rules are the usual sphinx-like and some more personal nits are:
- not writing "defaults to :obj:\`None\`" for optional things that have default (it's implied)
- using :obj:\`foo\` syntax for objects (like False, True, all strings) or mention to other arguments
but not numbers (like 0, 1.0...)
- using italics for optional<|||||>Great tips on the model docs and the small specifics. I see some are already here:
https://github.com/huggingface/transformers/tree/master/docs#writing-documentation---specification
add the others too?
Loving params subgroup docs - it's very helpful. I'd organize the params in the function in the same groups too.
Thank you for sharing all these, @sgugger!<|||||>Yes we could add those general rules to that section of the docs README. (I am unsure people actually read that so did not take the time to properly update it :-) )<|||||>I didn't know it was there, but now that I do, I'd definitely per-use it - so yes, please update it!<|||||>> * not writing "defaults to :obj:`None`" for optional things that have default (it's implied)
```
grep -r "defaults to :obj:.None." src | wc -l
```
```
580
```
might be easy to replace in one swoop.
<|||||>Feel free to do it in one PR :-)<|||||>Done: https://github.com/huggingface/transformers/pull/6956 |
transformers | 6,931 | closed | remove arg that is not being used | `extra_pos_embeddings` is passed but not being used anywhere, so deleting it.
| 09-03-2020 21:56:37 | 09-03-2020 21:56:37 | # [Codecov](https://codecov.io/gh/huggingface/transformers/pull/6931?src=pr&el=h1) Report
> Merging [#6931](https://codecov.io/gh/huggingface/transformers/pull/6931?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/e95d262f2553859af9bffbfe5f5bc7e362259939?el=desc) will **increase** coverage by `0.42%`.
> The diff coverage is `n/a`.
[](https://codecov.io/gh/huggingface/transformers/pull/6931?src=pr&el=tree)
```diff
@@ Coverage Diff @@
## master #6931 +/- ##
==========================================
+ Coverage 77.70% 78.12% +0.42%
==========================================
Files 161 161
Lines 30119 30119
==========================================
+ Hits 23403 23530 +127
+ Misses 6716 6589 -127
```
| [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/6931?src=pr&el=tree) | Coverage Δ | |
|---|---|---|
| [src/transformers/configuration\_bart.py](https://codecov.io/gh/huggingface/transformers/pull/6931/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9jb25maWd1cmF0aW9uX2JhcnQucHk=) | `94.00% <ø> (ø)` | |
| [src/transformers/data/data\_collator.py](https://codecov.io/gh/huggingface/transformers/pull/6931/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9kYXRhL2RhdGFfY29sbGF0b3IucHk=) | `92.30% <0.00%> (-0.41%)` | :arrow_down: |
| [src/transformers/file\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/6931/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9maWxlX3V0aWxzLnB5) | `82.41% <0.00%> (-0.26%)` | :arrow_down: |
| [src/transformers/tokenization\_utils\_base.py](https://codecov.io/gh/huggingface/transformers/pull/6931/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fdXRpbHNfYmFzZS5weQ==) | `94.03% <0.00%> (+0.13%)` | :arrow_up: |
| [src/transformers/modeling\_lxmert.py](https://codecov.io/gh/huggingface/transformers/pull/6931/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19seG1lcnQucHk=) | `90.76% <0.00%> (+20.74%)` | :arrow_up: |
------
[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/6931?src=pr&el=continue).
> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)
> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data`
> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/6931?src=pr&el=footer). Last update [e95d262...716864a](https://codecov.io/gh/huggingface/transformers/pull/6931?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).
<|||||>I think the optimal solution here is the deletion and then hardcode
`self.extra_pos_embeddings = 2` lower in the file.
Otherwise LGTM.<|||||>But it's there already in a different form: https://github.com/huggingface/transformers/blob/master/src/transformers/configuration_bart.py#L197
```
self.extra_pos_embeddings = self.pad_token_id + 1
```
just to validate, you're suggesting to replace ` = self.pad_token_id + 1` with `= 2`, yes?<|||||>@stas00 yes! you can even delete the config attribute. and replace it with 2 in `modeling_bart.py` code.
github wont let me suggest cause too low in file :)<|||||>I didnt know blenderbot was active. We may need this.
Don't merge yet pls.<|||||>I think I will need this for blenderbot, otherwise I'll reopen. |
transformers | 6,930 | closed | Trainer with grad accum | As mentioned on the forum, the behavior of `Trainer` can be confusing when using gradient accumulation as the count of steps becomes the count of update steps, not the count of training examples seen. This PR adds a warning in the doc. | 09-03-2020 20:54:23 | 09-03-2020 20:54:23 | # [Codecov](https://codecov.io/gh/huggingface/transformers/pull/6930?src=pr&el=h1) Report
> Merging [#6930](https://codecov.io/gh/huggingface/transformers/pull/6930?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/207ed8cb78ceb4980e40c89f867b06202e660395?el=desc) will **decrease** coverage by `3.53%`.
> The diff coverage is `n/a`.
[](https://codecov.io/gh/huggingface/transformers/pull/6930?src=pr&el=tree)
```diff
@@ Coverage Diff @@
## master #6930 +/- ##
==========================================
- Coverage 80.60% 77.07% -3.54%
==========================================
Files 161 161
Lines 30119 30119
==========================================
- Hits 24278 23214 -1064
- Misses 5841 6905 +1064
```
| [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/6930?src=pr&el=tree) | Coverage Δ | |
|---|---|---|
| [src/transformers/training\_args.py](https://codecov.io/gh/huggingface/transformers/pull/6930/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90cmFpbmluZ19hcmdzLnB5) | `91.66% <ø> (ø)` | |
| [src/transformers/training\_args\_tf.py](https://codecov.io/gh/huggingface/transformers/pull/6930/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90cmFpbmluZ19hcmdzX3RmLnB5) | `47.45% <ø> (ø)` | |
| [src/transformers/configuration\_lxmert.py](https://codecov.io/gh/huggingface/transformers/pull/6930/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9jb25maWd1cmF0aW9uX2x4bWVydC5weQ==) | `20.00% <0.00%> (-80.00%)` | :arrow_down: |
| [src/transformers/modeling\_tf\_lxmert.py](https://codecov.io/gh/huggingface/transformers/pull/6930/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9seG1lcnQucHk=) | `22.49% <0.00%> (-71.63%)` | :arrow_down: |
| [src/transformers/modeling\_tf\_albert.py](https://codecov.io/gh/huggingface/transformers/pull/6930/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9hbGJlcnQucHk=) | `21.47% <0.00%> (-69.44%)` | :arrow_down: |
| [src/transformers/modeling\_lxmert.py](https://codecov.io/gh/huggingface/transformers/pull/6930/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19seG1lcnQucHk=) | `23.50% <0.00%> (-67.27%)` | :arrow_down: |
| [src/transformers/tokenization\_albert.py](https://codecov.io/gh/huggingface/transformers/pull/6930/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fYWxiZXJ0LnB5) | `28.84% <0.00%> (-58.66%)` | :arrow_down: |
| [src/transformers/modeling\_xlnet.py](https://codecov.io/gh/huggingface/transformers/pull/6930/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ194bG5ldC5weQ==) | `60.81% <0.00%> (-22.62%)` | :arrow_down: |
| [src/transformers/tokenization\_transfo\_xl.py](https://codecov.io/gh/huggingface/transformers/pull/6930/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fdHJhbnNmb194bC5weQ==) | `20.53% <0.00%> (-21.21%)` | :arrow_down: |
| [src/transformers/modeling\_transfo\_xl\_utilities.py](https://codecov.io/gh/huggingface/transformers/pull/6930/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190cmFuc2ZvX3hsX3V0aWxpdGllcy5weQ==) | `52.98% <0.00%> (-13.44%)` | :arrow_down: |
| ... and [15 more](https://codecov.io/gh/huggingface/transformers/pull/6930/diff?src=pr&el=tree-more) | |
------
[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/6930?src=pr&el=continue).
> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)
> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data`
> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/6930?src=pr&el=footer). Last update [207ed8c...68c12f3](https://codecov.io/gh/huggingface/transformers/pull/6930?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).
|
transformers | 6,929 | closed | replace torch.triu with onnx compatible code | <!-- This line specifies which issue to close after the pull request is merged. -->
Fixes #5075
There was a [draft pull request](https://github.com/huggingface/transformers/pull/6334) to this effect a few months ago but the author withdrew it. I'm not sure why. It resolves the _torch.triu_ issue with ONNX. It gives the same output in my tests and runs at the same speed. Entirely possible that I've missed something though!
| 09-03-2020 20:49:15 | 09-03-2020 20:49:15 | # [Codecov](https://codecov.io/gh/huggingface/transformers/pull/6929?src=pr&el=h1) Report
> Merging [#6929](https://codecov.io/gh/huggingface/transformers/pull/6929?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/207ed8cb78ceb4980e40c89f867b06202e660395?el=desc) will **decrease** coverage by `0.58%`.
> The diff coverage is `100.00%`.
[](https://codecov.io/gh/huggingface/transformers/pull/6929?src=pr&el=tree)
```diff
@@ Coverage Diff @@
## master #6929 +/- ##
==========================================
- Coverage 80.60% 80.02% -0.59%
==========================================
Files 161 161
Lines 30119 30122 +3
==========================================
- Hits 24278 24105 -173
- Misses 5841 6017 +176
```
| [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/6929?src=pr&el=tree) | Coverage Δ | |
|---|---|---|
| [src/transformers/modeling\_bart.py](https://codecov.io/gh/huggingface/transformers/pull/6929/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19iYXJ0LnB5) | `93.93% <100.00%> (+0.03%)` | :arrow_up: |
| [src/transformers/modeling\_tf\_electra.py](https://codecov.io/gh/huggingface/transformers/pull/6929/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9lbGVjdHJhLnB5) | `25.13% <0.00%> (-73.83%)` | :arrow_down: |
| [src/transformers/modeling\_tf\_mobilebert.py](https://codecov.io/gh/huggingface/transformers/pull/6929/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9tb2JpbGViZXJ0LnB5) | `24.55% <0.00%> (-72.36%)` | :arrow_down: |
| [src/transformers/modeling\_t5.py](https://codecov.io/gh/huggingface/transformers/pull/6929/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190NS5weQ==) | `76.70% <0.00%> (-6.07%)` | :arrow_down: |
| [src/transformers/modeling\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/6929/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ191dGlscy5weQ==) | `87.50% <0.00%> (-0.56%)` | :arrow_down: |
| [src/transformers/file\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/6929/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9maWxlX3V0aWxzLnB5) | `82.41% <0.00%> (-0.26%)` | :arrow_down: |
| [src/transformers/modeling\_tf\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/6929/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl91dGlscy5weQ==) | `86.97% <0.00%> (+1.95%)` | :arrow_up: |
| [src/transformers/data/data\_collator.py](https://codecov.io/gh/huggingface/transformers/pull/6929/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9kYXRhL2RhdGFfY29sbGF0b3IucHk=) | `91.90% <0.00%> (+5.26%)` | :arrow_up: |
| [src/transformers/generation\_tf\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/6929/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9nZW5lcmF0aW9uX3RmX3V0aWxzLnB5) | `86.71% <0.00%> (+5.26%)` | :arrow_up: |
| [src/transformers/modeling\_roberta.py](https://codecov.io/gh/huggingface/transformers/pull/6929/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19yb2JlcnRhLnB5) | `97.08% <0.00%> (+19.34%)` | :arrow_up: |
| ... and [2 more](https://codecov.io/gh/huggingface/transformers/pull/6929/diff?src=pr&el=tree-more) | |
------
[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/6929?src=pr&el=continue).
> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)
> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data`
> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/6929?src=pr&el=footer). Last update [207ed8c...82d9234](https://codecov.io/gh/huggingface/transformers/pull/6929?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).
<|||||>This is great, want to check in your tests?<|||||>I'm not quite sure what the most appropriate way of including them would be to be honest!
Currently I have a little script that looks like this
```python
import torch
from transformers.modeling_bart import fill_with_neg_inf
def test_upper_right_triangle(torch_device):
    tgt_len = 512
    causal_mask_dtype = torch.float32
    causal_mask_short = torch.triu(
        fill_with_neg_inf(torch.zeros(tgt_len, tgt_len)), 1
    ).to(dtype=torch.float32, device=torch_device)
    tmp = fill_with_neg_inf(torch.zeros(tgt_len, tgt_len))
    mask = torch.arange(tmp.size(-1))
    tmp.masked_fill_(mask < (mask + 1).view(tmp.size(-1), 1), 0)
    causal_mask_long = tmp.to(dtype=causal_mask_dtype, device=torch_device)
    assert torch.all(torch.eq(causal_mask_short, causal_mask_long))


if __name__ == "__main__":
    test_upper_right_triangle("cpu")
```
as well as the fact that when I run the `convert` function, the output gives the same predictions at the same speed.
Since this would be testing one version of the code against another possible version, as opposed to some external ground truth or expected value, it feels a bit self-referential?<|||||>you're right, LGTM @LysandreJik !
transformers | 6,928 | closed | onnx-export example notebook is failing for TF | hi, I'm using the latest 3.1.0 release.
When I run
```
from transformers.convert_graph_to_onnx import convert
# Tensorflow
convert(framework="tf", model="bert-base-cased", output="onnx/bert-base-cased.onnx", opset=11)
```
as shown in https://github.com/huggingface/transformers/blob/master/notebooks/04-onnx-export.ipynb,
the following error occurs
```
---------------------------------------------------------------------------
AttributeError Traceback (most recent call last)
<ipython-input-1-bc5982e91176> in <module>
7
8 # Tensorflow
----> 9 convert(framework="tf", model="bert-base-cased", output="onnx/bert-base-cased.onnx", opset=11)
/nix/store/w8xw8jng4dfjcqijfjw1sps8pim669kj-python3.7-transformers-3.1.0/lib/python3.7/site-packages/transformers/convert_graph_to_onnx.py in convert(framework, model, output, opset, tokenizer, use_external_format, pipeline_name)
335 nlp = load_graph_from_args(pipeline_name, framework, model, tokenizer)
336
--> 337 if not output.parent.exists():
338 print(f"Creating folder {output.parent}")
339 makedirs(output.parent.as_posix())
AttributeError: 'str' object has no attribute 'parent'
``` | 09-03-2020 18:50:26 | 09-03-2020 18:50:26 | The following should work:
```
from pathlib import Path
from transformers.convert_graph_to_onnx import convert
# Tensorflow
convert(framework="tf", model="bert-base-cased", output=Path("onnx/bert-base-cased.onnx"), opset=11)
```<|||||>@subho406 thanks! I thought the error was about the model output since using string in output path had worked in the previous version. |
transformers | 6,927 | closed | [s2s] support early stopping based on loss, rather than rouge | <!-- This line specifies which issue to close after the pull request is merged. -->
Fixes #{issue number}
| 09-03-2020 17:05:53 | 09-03-2020 17:05:53 | # [Codecov](https://codecov.io/gh/huggingface/transformers/pull/6927?src=pr&el=h1) Report
> Merging [#6927](https://codecov.io/gh/huggingface/transformers/pull/6927?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/207ed8cb78ceb4980e40c89f867b06202e660395?el=desc) will **decrease** coverage by `3.98%`.
> The diff coverage is `n/a`.
[](https://codecov.io/gh/huggingface/transformers/pull/6927?src=pr&el=tree)
```diff
@@ Coverage Diff @@
## master #6927 +/- ##
==========================================
- Coverage 80.60% 76.61% -3.99%
==========================================
Files 161 161
Lines 30119 30119
==========================================
- Hits 24278 23077 -1201
- Misses 5841 7042 +1201
```
| [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/6927?src=pr&el=tree) | Coverage Δ | |
|---|---|---|
| [src/transformers/configuration\_reformer.py](https://codecov.io/gh/huggingface/transformers/pull/6927/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9jb25maWd1cmF0aW9uX3JlZm9ybWVyLnB5) | `21.62% <0.00%> (-78.38%)` | :arrow_down: |
| [src/transformers/modeling\_reformer.py](https://codecov.io/gh/huggingface/transformers/pull/6927/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19yZWZvcm1lci5weQ==) | `16.87% <0.00%> (-77.64%)` | :arrow_down: |
| [src/transformers/modeling\_tf\_mobilebert.py](https://codecov.io/gh/huggingface/transformers/pull/6927/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9tb2JpbGViZXJ0LnB5) | `24.55% <0.00%> (-72.36%)` | :arrow_down: |
| [src/transformers/configuration\_mobilebert.py](https://codecov.io/gh/huggingface/transformers/pull/6927/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9jb25maWd1cmF0aW9uX21vYmlsZWJlcnQucHk=) | `26.47% <0.00%> (-70.59%)` | :arrow_down: |
| [src/transformers/modeling\_mobilebert.py](https://codecov.io/gh/huggingface/transformers/pull/6927/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19tb2JpbGViZXJ0LnB5) | `23.49% <0.00%> (-65.97%)` | :arrow_down: |
| [src/transformers/tokenization\_xlnet.py](https://codecov.io/gh/huggingface/transformers/pull/6927/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25feGxuZXQucHk=) | `66.66% <0.00%> (-23.43%)` | :arrow_down: |
| [src/transformers/modeling\_transfo\_xl\_utilities.py](https://codecov.io/gh/huggingface/transformers/pull/6927/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190cmFuc2ZvX3hsX3V0aWxpdGllcy5weQ==) | `52.98% <0.00%> (-13.44%)` | :arrow_down: |
| [src/transformers/modeling\_transfo\_xl.py](https://codecov.io/gh/huggingface/transformers/pull/6927/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190cmFuc2ZvX3hsLnB5) | `67.10% <0.00%> (-12.67%)` | :arrow_down: |
| [src/transformers/tokenization\_auto.py](https://codecov.io/gh/huggingface/transformers/pull/6927/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fYXV0by5weQ==) | `95.65% <0.00%> (-2.18%)` | :arrow_down: |
| [src/transformers/configuration\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/6927/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9jb25maWd1cmF0aW9uX3V0aWxzLnB5) | `95.97% <0.00%> (-0.68%)` | :arrow_down: |
| ... and [10 more](https://codecov.io/gh/huggingface/transformers/pull/6927/diff?src=pr&el=tree-more) | |
------
[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/6927?src=pr&el=continue).
> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)
> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data`
> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/6927?src=pr&el=footer). Last update [207ed8c...b1d4604](https://codecov.io/gh/huggingface/transformers/pull/6927?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).
|
transformers | 6,926 | closed | [s2s] use --eval_beams command line arg | <!-- This line specifies which issue to close after the pull request is merged. -->
Fixes #{issue number}
| 09-03-2020 16:19:35 | 09-03-2020 16:19:35 | # [Codecov](https://codecov.io/gh/huggingface/transformers/pull/6926?src=pr&el=h1) Report
> Merging [#6926](https://codecov.io/gh/huggingface/transformers/pull/6926?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/0f360d3d1c606d6d79cdf1efa53c3d719249573d?el=desc) will **increase** coverage by `0.32%`.
> The diff coverage is `n/a`.
[](https://codecov.io/gh/huggingface/transformers/pull/6926?src=pr&el=tree)
```diff
@@ Coverage Diff @@
## master #6926 +/- ##
==========================================
+ Coverage 80.23% 80.56% +0.32%
==========================================
Files 161 161
Lines 30119 30119
==========================================
+ Hits 24167 24265 +98
+ Misses 5952 5854 -98
```
| [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/6926?src=pr&el=tree) | Coverage Δ | |
|---|---|---|
| [src/transformers/modeling\_tf\_lxmert.py](https://codecov.io/gh/huggingface/transformers/pull/6926/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9seG1lcnQucHk=) | `22.49% <0.00%> (-71.63%)` | :arrow_down: |
| [src/transformers/tokenization\_albert.py](https://codecov.io/gh/huggingface/transformers/pull/6926/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fYWxiZXJ0LnB5) | `28.84% <0.00%> (-58.66%)` | :arrow_down: |
| [src/transformers/modeling\_mobilebert.py](https://codecov.io/gh/huggingface/transformers/pull/6926/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19tb2JpbGViZXJ0LnB5) | `79.21% <0.00%> (-10.25%)` | :arrow_down: |
| [src/transformers/modeling\_tf\_xlm.py](https://codecov.io/gh/huggingface/transformers/pull/6926/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl94bG0ucHk=) | `88.42% <0.00%> (-4.85%)` | :arrow_down: |
| [src/transformers/configuration\_bart.py](https://codecov.io/gh/huggingface/transformers/pull/6926/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9jb25maWd1cmF0aW9uX2JhcnQucHk=) | `90.00% <0.00%> (-4.00%)` | :arrow_down: |
| [src/transformers/modeling\_bart.py](https://codecov.io/gh/huggingface/transformers/pull/6926/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19iYXJ0LnB5) | `94.24% <0.00%> (-0.17%)` | :arrow_down: |
| [src/transformers/tokenization\_utils\_base.py](https://codecov.io/gh/huggingface/transformers/pull/6926/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fdXRpbHNfYmFzZS5weQ==) | `94.03% <0.00%> (+0.13%)` | :arrow_up: |
| [src/transformers/data/data\_collator.py](https://codecov.io/gh/huggingface/transformers/pull/6926/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9kYXRhL2RhdGFfY29sbGF0b3IucHk=) | `92.30% <0.00%> (+0.40%)` | :arrow_up: |
| [src/transformers/configuration\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/6926/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9jb25maWd1cmF0aW9uX3V0aWxzLnB5) | `96.64% <0.00%> (+0.67%)` | :arrow_up: |
| [src/transformers/generation\_tf\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/6926/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9nZW5lcmF0aW9uX3RmX3V0aWxzLnB5) | `85.21% <0.00%> (+1.00%)` | :arrow_up: |
| ... and [6 more](https://codecov.io/gh/huggingface/transformers/pull/6926/diff?src=pr&el=tree-more) | |
------
[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/6926?src=pr&el=continue).
> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)
> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data`
> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/6926?src=pr&el=footer). Last update [0f360d3...90aec7a](https://codecov.io/gh/huggingface/transformers/pull/6926?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).
|
transformers | 6,925 | closed | Reopen: Unable to use run_squad with xla_spawn.py on TPU | ## Environment info
- `transformers` version: 3.1.0
- Platform: Linux-4.19.112+-x86_64-with-Ubuntu-18.04-bionic
- Python version: 3.6.9
- PyTorch version (GPU?): 1.7.0a0+ab76067 (False)
- Tensorflow version (GPU?): 2.3.0 (False)
- Using GPU in script? TPU
- Using distributed or parallel set-up in script?: YES
### Who can help
@LysandreJik
## Information
I see there is an issue (#5470) that was closed in July because the SQuAD example didn't have Trainer support yet, but it seems that now it does, according to the table (https://github.com/huggingface/transformers/tree/master/examples#the-big-table-of-tasks)
Model I am using (Bert, XLNet ...):
BERT
The problem arises when using:
* [X] the official example scripts: (give details below)
the official example scripts: RUN_squad.py + xla_spawn.py
* [ ] my own modified scripts: (give details below)
The tasks I am working on is:
* [X] an official GLUE/SQUaD task: (give the name)
SQuAD v2.0
* [ ] my own task or dataset: (give details below)
## To reproduce
Steps to reproduce the behavior:
Install pytorch-xla on colab using:
```
VERSION = "20200325" #@param ["1.5" , "20200325", "nightly"]
!curl https://raw.githubusercontent.com/pytorch/xla/master/contrib/scripts/env-setup.py -o pytorch-xla-env-setup.py
!python pytorch-xla-env-setup.py --version $VERSION
```
Trying to run run_squad.py on Colab TPUs using xla_spawn.py:
```bash
python examples/xla_spawn.py --num_cores 8 \
  examples/question-answering/run_squad.py \
  --model_type electra \
  --model_name_or_path google/electra-base-discriminator \
  --do_train \
  --do_eval \
  --do_lower_case \
  --train_file "/content/drive/My Drive/bert/train.json" \
  --predict_file "/content/drive/My Drive/bert/val.json" \
  --per_gpu_train_batch_size 12 \
  --learning_rate 3e-5 \
  --num_train_epochs 2.0 \
  --max_seq_length 384 \
  --doc_stride 128 \
  --output_dir "/content/drive/My Drive/bert/newdir6"
```
The following error is thrown:
```
Traceback (most recent call last):
  File "examples/xla_spawn.py", line 72, in <module>
    main()
  File "examples/xla_spawn.py", line 68, in main
    xmp.spawn(mod._mp_fn, args=(), nprocs=args.num_cores)
AttributeError: module 'run_squad' has no attribute '_mp_fn'
```
## Expected behavior
Training should run properly using xla_spawn.py
| 09-03-2020 14:55:36 | 09-03-2020 14:55:36 | hi @christian-janiake-movile ,
`run_squad` won't work with `xla_spawn` since it doesn't use `Trainer`. You can use `run_squad_trainer.py` with `xla_spawn.py` if you want to fine-tune on TPU<|||||>This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.
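For later readers, a rough sketch of that Trainer-based invocation on Colab TPU. The exact flags accepted by `run_squad_trainer.py` are assumptions here and should be checked against the script's `--help`; notably, it reads SQuAD-format files from `--data_dir` rather than taking `--train_file`/`--predict_file`:
```bash
python examples/xla_spawn.py --num_cores 8 \
  examples/question-answering/run_squad_trainer.py \
  --model_name_or_path google/electra-base-discriminator \
  --data_dir "/content/drive/My Drive/bert" \
  --do_train \
  --do_eval \
  --max_seq_length 384 \
  --doc_stride 128 \
  --per_device_train_batch_size 12 \
  --learning_rate 3e-5 \
  --num_train_epochs 2 \
  --output_dir "/content/drive/My Drive/bert/newdir6"
```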
|
transformers | 6,924 | closed | AttributeError: 'list' object has no attribute 'clone' with BartTokenizer | ## Environment info
<!-- You can run the command `transformers-cli env` and copy-and-paste its output below.
Don't forget to fill out the missing fields in that output! -->
- `transformers` version: 3.1.0
- Platform: MacOS
- Python version: 3.7.6
- PyTorch version (GPU?): 1.6.0 (No GPU)
- Tensorflow version (GPU?): 2.3.0 (No GPU)
- Using GPU in script?: No
- Using distributed or parallel set-up in script?: No
### Who can help
- Summarization & Bart: @sshleifer
## Information
- Model I am using (Bert, XLNet ...): **`BartTokenizer, BartForConditionalGeneration`**
- I'm loading the model from a local directory: I initially loaded it from `'facebook/bart-large-cnn'` and later saved it with the `.save_pretrained(tmp_model_dir)` command.
The problem arises when using:
* [x] example scripts: (give details below)
The tasks I am working on is:
* [x] summarization task: (give the name)
## To reproduce
Steps to reproduce the behavior:
```python
from transformers import BartTokenizer, BartForConditionalGeneration
model = BartTokenizer.from_pretrained('/Downloads/facebook-bart-large-cnn')
tokenizer = BartForConditionalGeneration.from_pretrained('/Downloads/facebook-bart-large-cnn')
raw_text = """ New York (CNN)When Liana Barrientos was 23 years old, she got married in Westchester County, New York.
A year later, she got married again in Westchester County, but to a different man and without divorcing her first husband.
Only 18 days after that marriage, she got hitched yet again. Then, Barrientos declared "I do" five more times, sometimes only within two weeks of each other.
In 2010, she married once more, this time in the Bronx. In an application for a marriage license, she stated it was her "first and only" marriage.
Barrientos, now 39, is facing two criminal counts of "offering a false instrument for filing in the first degree," referring to her false statements on the
2010 marriage license application, according to court documents.
Prosecutors said the marriages were part of an immigration scam.
On Friday, she pleaded not guilty at State Supreme Court in the Bronx, according to her attorney, Christopher Wright, who declined to comment further.
After leaving court, Barrientos was arrested and charged with theft of service and criminal trespass for allegedly sneaking into the New York subway through an emergency exit, said Detective
Annette Markowski, a police spokeswoman. In total, Barrientos has been married 10 times, with nine of her marriages occurring between 1999 and 2002.
All occurred either in Westchester County, Long Island, New Jersey or the Bronx. She is believed to still be married to four men, and at one time, she was married to eight men at once, prosecutors say.
Prosecutors said the immigration scam involved some of her husbands, who filed for permanent residence status shortly after the marriages.
Any divorces happened only after such filings were approved. It was unclear whether any of the men will be prosecuted.
The case was referred to the Bronx District Attorney\'s Office by Immigration and Customs Enforcement and the Department of Homeland Security\'s
Investigation Division. Seven of the men are from so-called "red-flagged" countries, including Egypt, Turkey, Georgia, Pakistan and Mali.
Her eighth husband, Rashid Rajput, was deported in 2006 to his native Pakistan after an investigation by the Joint Terrorism Task Force.
If convicted, Barrientos faces up to four years in prison. Her next court appearance is scheduled for May 18.
"""
inputs = tokenizer([raw_text], max_length=1024, return_tensors='pt', truncation=True)
```
- Error:
```python
---------------------------------------------------------------------------
AttributeError Traceback (most recent call last)
<ipython-input-4-74e209dd3da0> in <module>
----> 1 inputs = tokenizer([raw_text], max_length=1024, return_tensors='pt', truncation=True)
~/anaconda3/lib/python3.7/site-packages/torch/nn/modules/module.py in _call_impl(self, *input, **kwargs)
720 result = self._slow_forward(*input, **kwargs)
721 else:
--> 722 result = self.forward(*input, **kwargs)
723 for hook in itertools.chain(
724 _global_forward_hooks.values(),
~/anaconda3/lib/python3.7/site-packages/transformers/modeling_bart.py in forward(self, input_ids, attention_mask, encoder_outputs, decoder_input_ids, decoder_attention_mask, past_key_values, labels, use_cache, output_attentions, output_hidden_states, return_dict, **unused)
1074 output_attentions=output_attentions,
1075 output_hidden_states=output_hidden_states,
-> 1076 return_dict=return_dict,
1077 )
1078 lm_logits = F.linear(outputs[0], self.model.shared.weight, bias=self.final_logits_bias)
~/anaconda3/lib/python3.7/site-packages/torch/nn/modules/module.py in _call_impl(self, *input, **kwargs)
720 result = self._slow_forward(*input, **kwargs)
721 else:
--> 722 result = self.forward(*input, **kwargs)
723 for hook in itertools.chain(
724 _global_forward_hooks.values(),
~/anaconda3/lib/python3.7/site-packages/transformers/modeling_bart.py in forward(self, input_ids, attention_mask, decoder_input_ids, encoder_outputs, decoder_attention_mask, past_key_values, use_cache, output_attentions, output_hidden_states, return_dict, **kwargs)
904 decoder_input_ids=decoder_input_ids,
905 decoder_padding_mask=decoder_attention_mask,
--> 906 causal_mask_dtype=self.shared.weight.dtype,
907 )
908 else:
~/anaconda3/lib/python3.7/site-packages/transformers/modeling_bart.py in _prepare_bart_decoder_inputs(config, input_ids, decoder_input_ids, decoder_padding_mask, causal_mask_dtype)
146 pad_token_id = config.pad_token_id
147 if decoder_input_ids is None:
--> 148 decoder_input_ids = shift_tokens_right(input_ids, pad_token_id)
149 bsz, tgt_len = decoder_input_ids.size()
150 if decoder_padding_mask is None:
~/anaconda3/lib/python3.7/site-packages/transformers/modeling_bart.py in shift_tokens_right(input_ids, pad_token_id)
204 def shift_tokens_right(input_ids, pad_token_id):
205 """Shift input ids one token to the right, and wrap the last non pad token (usually <eos>)."""
--> 206 prev_output_tokens = input_ids.clone()
207 index_of_eos = (input_ids.ne(pad_token_id).sum(dim=1) - 1).unsqueeze(-1)
208 prev_output_tokens[:, 0] = input_ids.gather(1, index_of_eos).squeeze()
AttributeError: 'list' object has no attribute 'clone'
```
| 09-03-2020 14:52:29 | 09-03-2020 14:52:29 | I think you flipped model and tokenizer at the beginning. It should be
```python
from transformers import BartTokenizer, BartForConditionalGeneration
tokenizer = BartTokenizer.from_pretrained('/Downloads/facebook-bart-large-cnn')
model = BartForConditionalGeneration.from_pretrained('/Downloads/facebook-bart-large-cnn')
```<|||||>Pls reopen if there is another issue!<|||||>Damn, this was an embarrassing bug on my end. Thank you! 🍻 |
transformers | 6,923 | closed | [s2s] allow task_specific_params=summarization_xsum | <!-- This line specifies which issue to close after the pull request is merged. -->
Fixes #{issue number}
| 09-03-2020 14:10:38 | 09-03-2020 14:10:38 | # [Codecov](https://codecov.io/gh/huggingface/transformers/pull/6923?src=pr&el=h1) Report
> Merging [#6923](https://codecov.io/gh/huggingface/transformers/pull/6923?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/4ebb52afdb4dc4bcd599e7cb503763e5d4afc962?el=desc) will **increase** coverage by `1.90%`.
> The diff coverage is `n/a`.
[](https://codecov.io/gh/huggingface/transformers/pull/6923?src=pr&el=tree)
```diff
@@ Coverage Diff @@
## master #6923 +/- ##
==========================================
+ Coverage 77.81% 79.72% +1.90%
==========================================
Files 157 157
Lines 28853 28853
==========================================
+ Hits 22452 23002 +550
+ Misses 6401 5851 -550
```
| [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/6923?src=pr&el=tree) | Coverage Δ | |
|---|---|---|
| [src/transformers/modeling\_tf\_openai.py](https://codecov.io/gh/huggingface/transformers/pull/6923/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9vcGVuYWkucHk=) | `22.58% <0.00%> (-72.26%)` | :arrow_down: |
| [src/transformers/configuration\_openai.py](https://codecov.io/gh/huggingface/transformers/pull/6923/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9jb25maWd1cmF0aW9uX29wZW5haS5weQ==) | `34.28% <0.00%> (-62.86%)` | :arrow_down: |
| [src/transformers/modeling\_openai.py](https://codecov.io/gh/huggingface/transformers/pull/6923/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19vcGVuYWkucHk=) | `23.87% <0.00%> (-58.39%)` | :arrow_down: |
| [src/transformers/modeling\_tf\_bert.py](https://codecov.io/gh/huggingface/transformers/pull/6923/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9iZXJ0LnB5) | `69.06% <0.00%> (-29.32%)` | :arrow_down: |
| [src/transformers/tokenization\_roberta.py](https://codecov.io/gh/huggingface/transformers/pull/6923/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fcm9iZXJ0YS5weQ==) | `87.67% <0.00%> (-10.96%)` | :arrow_down: |
| [src/transformers/tokenization\_utils\_base.py](https://codecov.io/gh/huggingface/transformers/pull/6923/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fdXRpbHNfYmFzZS5weQ==) | `86.72% <0.00%> (-7.19%)` | :arrow_down: |
| [src/transformers/modeling\_t5.py](https://codecov.io/gh/huggingface/transformers/pull/6923/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190NS5weQ==) | `76.70% <0.00%> (-6.07%)` | :arrow_down: |
| [src/transformers/configuration\_bart.py](https://codecov.io/gh/huggingface/transformers/pull/6923/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9jb25maWd1cmF0aW9uX2JhcnQucHk=) | `90.00% <0.00%> (-4.00%)` | :arrow_down: |
| [src/transformers/tokenization\_auto.py](https://codecov.io/gh/huggingface/transformers/pull/6923/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fYXV0by5weQ==) | `95.55% <0.00%> (-2.23%)` | :arrow_down: |
| [src/transformers/tokenization\_openai.py](https://codecov.io/gh/huggingface/transformers/pull/6923/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fb3BlbmFpLnB5) | `82.57% <0.00%> (-1.52%)` | :arrow_down: |
| ... and [25 more](https://codecov.io/gh/huggingface/transformers/pull/6923/diff?src=pr&el=tree-more) | |
------
[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/6923?src=pr&el=continue).
> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)
> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data`
> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/6923?src=pr&el=footer). Last update [4ebb52a...eaef0cb](https://codecov.io/gh/huggingface/transformers/pull/6923?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).
|
transformers | 6,922 | closed | inference over onnx output | # ❓ inference over onnx output
<!-- The GitHub issue tracker is primarly intended for bugs, feature requests,
new models and benchmarks, and migration questions. For all other questions,
we direct you to the Hugging Face forum: https://discuss.huggingface.co/ .
You can also try Stack Overflow (SO) where a whole community of PyTorch and
Tensorflow enthusiast can help you out. In this case, make sure to tag your
question with the right deep learning framework as well as the
huggingface-transformers tag:
https://stackoverflow.com/questions/tagged/huggingface-transformers
-->
## Details
How to do decoding on the output obtained from onnx inference
I am trying to use onnx runtime for inferring on CPU by following the https://github.com/huggingface/transformers/blob/d822ab636b6a14ed50f7bca0797c1de42c19de61/notebooks/04-onnx-export.ipynb
I have a Marian MT hindi to english fine tuned [model ](https://huggingface.co/Helsinki-NLP/opus-mt-hi-en)which i have managed to convert using conver_graph_to_onnx.py script.
On calling `sequence, pooled = cpu_model.run(None, inputs_onnx)`
I guess the pooled is the encoder output, and sequence is the final decoder output. Please correct if wrong
How can i use the api to get the final tokens (by greedy/beamsearch). For the normal way, we call the `generate` function. Is there any helper function to get the final decoded output form onnx output. Any other guidelines ?
Thanks!
| 09-03-2020 13:58:31 | 09-03-2020 13:58:31 | This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.
<|||||>I have the same issue too! Please some guidelines ?<|||||>Same |
transformers | 6,921 | closed | [model_cards] Fixed some typing mistakes in usage sections in model cards. | Loodos model cards had errors in their "Usage" sections; these have been fixed. Also, the "electra-base-turkish-uncased" model was removed from S3 and re-uploaded as "electra-base-turkish-uncased-discriminator", and its README was added.
| 09-03-2020 13:00:30 | 09-03-2020 13:00:30 | |
transformers | 6,920 | closed | (ONNX) Error while converting the model: bad allocation | I was trying to convert gpt2-xl model to onnx model using convert_graph_to_onnx.py.
It ran for a while and stopped with some errors:
`TracerWarning: Converting a tensor to a Python float might cause the trace to be incorrect. We can't record the data flow of Python values, so this value will be treated as a constant in the future. This means that the trace might not generalize to other inputs!`
` w = w / (float(v.size(-1)) ** 0.5)` (in modeling_gpt2.py:151)
`TracerWarning: Converting a tensor to a Python index might cause the trace to be incorrect. We can't record the data flow of Python values, so this value will be treated as a constant in the future. This means that the trace might not generalize to other inputs!`
`mask = self.bias[:, :, ns - nd : ns, :ns]` (in modeling_gpt2.py:151)
And the last one:
`Error while converting the model: bad allocation`
I googled about this problem but there was no effective solution (for me) at all.
Please help me, thank you in advance. | 09-03-2020 10:07:20 | 09-03-2020 10:07:20 | Ok, I know it was my fault. I didn't add the argument `--use-external-format` (gpt2-xl is more than 2GB)
Actually, I had to open the convert_graph_to_onnx.py file and read each argument's description.
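For anyone hitting the same `bad allocation`, a minimal sketch of the call with the external-data format enabled (the output path below is just a placeholder):
```python
from pathlib import Path
from transformers.convert_graph_to_onnx import convert

# gpt2-xl exceeds the 2GB protobuf limit, so the weights have to be stored
# in ONNX's external-data format instead of inside a single file
convert(
    framework="pt",
    model="gpt2-xl",
    output=Path("onnx/gpt2-xl.onnx"),  # placeholder path
    opset=11,
    use_external_format=True,
)
```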
Thanks again, I'm closing the issue now. |
transformers | 6,919 | closed | tweak tar command in readme | 09-03-2020 03:39:18 | 09-03-2020 03:39:18 | # [Codecov](https://codecov.io/gh/huggingface/transformers/pull/6919?src=pr&el=h1) Report
> Merging [#6919](https://codecov.io/gh/huggingface/transformers/pull/6919?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/dfa10a41ba3fd9c5289bebd3baeff8792b1b2281?el=desc) will **decrease** coverage by `0.20%`.
> The diff coverage is `n/a`.
[](https://codecov.io/gh/huggingface/transformers/pull/6919?src=pr&el=tree)
```diff
@@ Coverage Diff @@
## master #6919 +/- ##
==========================================
- Coverage 80.02% 79.82% -0.21%
==========================================
Files 157 157
Lines 28586 28586
==========================================
- Hits 22876 22818 -58
- Misses 5710 5768 +58
```
| [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/6919?src=pr&el=tree) | Coverage Δ | |
|---|---|---|
| [src/transformers/configuration\_openai.py](https://codecov.io/gh/huggingface/transformers/pull/6919/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9jb25maWd1cmF0aW9uX29wZW5haS5weQ==) | `34.28% <0.00%> (-62.86%)` | :arrow_down: |
| [src/transformers/tokenization\_mbart.py](https://codecov.io/gh/huggingface/transformers/pull/6919/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fbWJhcnQucHk=) | `36.50% <0.00%> (-60.32%)` | :arrow_down: |
| [src/transformers/tokenization\_albert.py](https://codecov.io/gh/huggingface/transformers/pull/6919/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fYWxiZXJ0LnB5) | `28.84% <0.00%> (-58.66%)` | :arrow_down: |
| [src/transformers/modeling\_openai.py](https://codecov.io/gh/huggingface/transformers/pull/6919/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19vcGVuYWkucHk=) | `23.87% <0.00%> (-57.10%)` | :arrow_down: |
| [src/transformers/modeling\_tf\_distilbert.py](https://codecov.io/gh/huggingface/transformers/pull/6919/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9kaXN0aWxiZXJ0LnB5) | `64.47% <0.00%> (-34.36%)` | :arrow_down: |
| [src/transformers/configuration\_bart.py](https://codecov.io/gh/huggingface/transformers/pull/6919/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9jb25maWd1cmF0aW9uX2JhcnQucHk=) | `90.00% <0.00%> (-4.00%)` | :arrow_down: |
| [src/transformers/generation\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/6919/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9nZW5lcmF0aW9uX3V0aWxzLnB5) | `96.66% <0.00%> (-0.28%)` | :arrow_down: |
| [src/transformers/modeling\_tf\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/6919/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl91dGlscy5weQ==) | `87.29% <0.00%> (+0.32%)` | :arrow_up: |
| [src/transformers/modeling\_bart.py](https://codecov.io/gh/huggingface/transformers/pull/6919/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19iYXJ0LnB5) | `95.40% <0.00%> (+0.34%)` | :arrow_up: |
| [src/transformers/tokenization\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/6919/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fdXRpbHMucHk=) | `89.84% <0.00%> (+0.39%)` | :arrow_up: |
| ... and [9 more](https://codecov.io/gh/huggingface/transformers/pull/6919/diff?src=pr&el=tree-more) | |
------
[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/6919?src=pr&el=continue).
> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)
> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data`
> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/6919?src=pr&el=footer). Last update [dfa10a4...252c784](https://codecov.io/gh/huggingface/transformers/pull/6919?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).
|
|
transformers | 6,918 | closed | RuntimeError: Internal: /sentencepiece/src/sentencepiece_processor.cc(818) [model_proto->ParseFromArray(serialized.data(), serialized.size())] | # ❓ When I use transformers in Jupyter, the tokenizer can't load the vocab, but AlbertModel.from_pretrained works
**CODE:**
```python
from transformers import AlbertTokenizer, AlbertModel
import torch
tokenizer = AlbertTokenizer.from_pretrained("./albert-v1/vocab.txt")
```
**The following error occurs:**
```
Calling AlbertTokenizer.from_pretrained() with the path to a single file or url is deprecated
------------------------------------------------------------------
RuntimeError Traceback (most recent call last)
<ipython-input-10-bf78623a6e4a> in <module>
----> 1 tokenizer = AlbertTokenizer.from_pretrained("./albert-v1/vocab.txt")
~/anaconda3/lib/python3.7/site-packages/transformers/tokenization_utils_base.py in from_pretrained(cls, *inputs, **kwargs)
1138
1139 """
-> 1140 return cls._from_pretrained(*inputs, **kwargs)
1141
1142 @classmethod
~/anaconda3/lib/python3.7/site-packages/transformers/tokenization_utils_base.py in _from_pretrained(cls, pretrained_model_name_or_path, *init_inputs, **kwargs)
1285 # Instantiate tokenizer.
1286 try:
-> 1287 tokenizer = cls(*init_inputs, **init_kwargs)
1288 except OSError:
1289 raise OSError(
~/anaconda3/lib/python3.7/site-packages/transformers/tokenization_albert.py in __init__(self, vocab_file, do_lower_case, remove_space, keep_accents, bos_token, eos_token, unk_token, sep_token, pad_token, cls_token, mask_token, **kwargs)
153
154 self.sp_model = spm.SentencePieceProcessor()
--> 155 self.sp_model.Load(vocab_file)
156
157 @property
~/anaconda3/lib/python3.7/site-packages/sentencepiece.py in Load(self, model_file, model_proto)
365 if model_proto:
366 return self.LoadFromSerializedProto(model_proto)
--> 367 return self.LoadFromFile(model_file)
368
369
~/anaconda3/lib/python3.7/site-packages/sentencepiece.py in LoadFromFile(self, arg)
175
176 def LoadFromFile(self, arg):
--> 177 return _sentencepiece.SentencePieceProcessor_LoadFromFile(self, arg)
178
179 def Init(self,
RuntimeError: Internal: /sentencepiece/src/sentencepiece_processor.cc(818) [model_proto->ParseFromArray(serialized.data(), serialized.size())]
```

| 09-03-2020 02:41:03 | 09-03-2020 02:41:03 | The `AlbertTokenizer` in `transformers` is a SentencePiece based tokenizer, so it cannot load `vocab.txt`. You could try loading it in `BertTokenizer`, as it seems to be a wordpiece tokenizer vocabulary.<|||||>This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.
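A minimal sketch of that suggestion (it assumes `./albert-v1/vocab.txt` really is a WordPiece vocabulary file):
```python
from transformers import BertTokenizer

# a WordPiece vocab.txt loads with the BERT-style tokenizer;
# AlbertTokenizer expects a SentencePiece .model file instead
tokenizer = BertTokenizer.from_pretrained("./albert-v1/vocab.txt")
print(tokenizer.tokenize("hello world"))
```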
|
transformers | 6,917 | closed | T5 Tokenizer fails to decode correctly and prints ⁇ | The T5 Tokenizer tokenizes the following sequence to
```
>>> from transformers import T5Tokenizer
>>> tokenizer = T5Tokenizer.from_pretrained("t5-base")
>>> print(tokenizer.tokenize("My phone number is 1-${phone.number}"))
['▁My', '▁phone', '▁number', '▁is', '▁1-', '$', '{', 'phone', '.', 'num', 'ber', '}']
```
So far so good but when we decode the above sequence back, we get weird ⁇ symbols.
```
>>> print(tokenizer.decode(tokenizer.encode("My phone number is 1-${phone.number}")))
My phone number is 1-$ ⁇ phone.number ⁇
```
This along with the bug https://github.com/huggingface/transformers/issues/6150 shows that T5 Tokenizer
- Is not cycle consistent
- Ignores multiple whitespaces
## Environment info
<!-- You can run the command `transformers-cli env` and copy-and-paste its output below.
Don't forget to fill out the missing fields in that output! -->
- `transformers` version: 3.0.2
- Platform: Linux-4.9.0-12-amd64-x86_64-with-Ubuntu-18.04-bionic
- Python version: 3.6.9
- PyTorch version (GPU?): 1.5.1+cpu (False)
- Tensorflow version (GPU?): not installed (NA)
- Using GPU in script?: no
- Using distributed or parallel set-up in script?: no
### Who can help
<!-- Your issue will be replied to more quickly if you can figure out the right person to tag with @
If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**.
Please tag fewer than 3 people.
-->
T5: @patrickvonplaten
## Information
Model I am using (Bert, XLNet ...):
The problem arises when using:
* [ ] the official example scripts: (give details below)
* [ ] my own modified scripts: (give details below)
The tasks I am working on is:
* [ ] an official GLUE/SQUaD task: (give the name)
* [ ] my own task or dataset: (give details below)
## To reproduce
Shown above
<!-- If you have code snippets, error messages, stack traces please provide them here as well.
Important! Use code tags to correctly format your code. See https://help.github.com/en/github/writing-on-github/creating-and-highlighting-code-blocks#syntax-highlighting
Do not use screenshots, as they are hard to read and (more importantly) don't allow others to copy-and-paste your code.-->
## Expected behavior
<!-- A clear and concise description of what you would expect to happen. -->
A cycle consistent T5 Tokenizer that works on a variety of inputs | 09-03-2020 01:56:47 | 09-03-2020 01:56:47 | @mfuntowicz - Since T5 relies on Google's sentencepiece tokenizer for now, can we do anything about it before our own sentencepiece tokenizer is implemented? <|||||>Verified that this is a problem with the original T5 sentencepiece tokenizer. Opened an issue in Google's T5 repository. https://github.com/google-research/text-to-text-transfer-transformer/issues/390<|||||>Closing this issue, quoting from the T5 GitHub issue
> > { is OOV because we intentionally removed any pages with { or } from C4 to avoid pre-training on anything other than natural language. So, it gets encoded to ??. SentencePiece has a byte fallback feature but it was not available when we trained our sentencepiece model. |
transformers | 6,916 | closed | [model weights caching] model upload doesn't check model weights hash | I have re-uploaded model weights via `transformers-cli upload` and noticed that when I tried to use it - it didn't get re-downloaded, and instead continued to use the cached version.
The problem seems to come from the fact that the other uploaded files haven't changed, only the model weights.
I double checked that the md5sum of the old weights file is different from the new one.
I re-uploaded the whole folder using:
```
transformers-cli upload fsmt-wmt19-en-de
```
If I hunt down the cached files (not an easy task), and delete those, it does re-download the new version.
If I diff the cached weights file and the updated cache file, which gets re-downloaded if I move away the original cached file, they aren't the same.:
```
Binary files
before/d97352d9f1f96ee4c6055f203812035b4597258a837db1f4f0803a2932cc3071.53ce64c7097bfcd85418af04a21b4a897c78c8440de3af078e577727ad9de3a0
and
after/d97352d9f1f96ee4c6055f203812035b4597258a837db1f4f0803a2932cc3071.53ce64c7097bfcd85418af04a21b4a897c78c8440de3af078e577727ad9de3a0
differ
```
Could we please include the model weights file in the hash calculation?
Thank you.
| 09-02-2020 21:47:07 | 09-02-2020 21:47:07 | I can confirm it was previously checking the model weights and re-downloading if the weights had been changed. Investigating.<|||||>This is due to the CDN caching files, with a 24 hour delay. After 24 hours it should download your file, but if you want it now you can use the `use_cdn` flag and set it to `False`. You can see the documentation for this [here](https://github.com/huggingface/transformers/blob/master/src/transformers/file_utils.py#L573-L585).<|||||>Thank you for the hint, @LysandreJik. So `from_pretrained(mname, use_cdn=False)`
But that might be tricky for end users who won't know that the code base has changed yet the model weights they get are out of sync.
Is there a way to signal CDN to invalidate the cache for some files? It could then be done from the upload util.
<|||||>FWIW, I wrote a one-liner to force a cache update for the 4 models I'm working on at the moment.
```
PYTHONPATH="src" python -c 'from transformers import AutoModel; [AutoModel.from_pretrained("stas/fsmt-wmt19-"+p, use_cdn=False) for p in ["en-ru","ru-en","en-de","de-en"]]'
```
I now have that in my script, so I don't need to think about it.<|||||>@LysandreJik, unfortunately this doesn't solve the issue
`AutoModel.from_pretrained(mname, use_cdn=False)`
Indeed forces a download of the recently updated model - but then if this flag is no longer used in the application - it still downloads the CDN cached version and ends up using the wrong version.
So, basically, this results in 2 copies (different hashes) sitting in the cache dir.
And normal usage w/o using `use_cdn=False` looks up the old version and not the new one. (so things like `run_eval.py` still use the old one)
Thanks.
<|||||>can you run `AutoModel.from_pretrained(mname, use_cdn=False)` in a debugger and check whether the downloaded url is a `https://cdn.huggingface.co` or a `https://s3.amazonaws.com/models.huggingface.co` url?<|||||>I can do that, but I already checked that it downloads the updated model w/ `use_cdn=False`. But then if you run it again w/o `use_cdn=False` it ignores the new download and uses the old model again (if I delete the cached version, it redownloads the old cached version w/o `use_cdn=False` ).<|||||>Oh yeah ok, I see. Can you `run_eval.py` on a local folder path then?<|||||>> Can you `run_eval.py` on a local folder path then?
Yes. Except others can't as they don't have my local copy.
e.g. @sshleifer wants to eval my PR https://github.com/huggingface/transformers/pull/6940, but now has to wait till tomorrow for CDN to expire (or hack around it).
Last night I uploaded an experimental model, which proved to be invalid; I thought I had re-downloaded it OK since it appeared to work, and made a PR, except I was testing against the non-current cached version, which was a good one.<|||||>Can we please re-open this ticket? It hasn't been resolved.<|||||>Can we add a `--no_cdn` boolean flag to `run_eval.py` that would then call `AutoModelForSeq2SeqLM.from_pretrained(use_cdn=False)`?
In our dev workflow we mostly don't use the cdn while the files are still in-flux. Cloudfront invalidation comes with its own set of issues so it's better to view cdn as a means to distribute permanent files. (for this reason we don't serve config.json files from Cloudfront)<|||||>> Can we add a `--no_cdn` boolean flag to `run_eval.py` that would then call `AutoModelForSeq2SeqLM.from_pretrained(use_cdn=False)`?
It could be done. I have a feeling there will then be others.
Perhaps an alternative solution would be to introduce an env var, that would transparently override cdn cache in any situation w/o needing to change every script? `TRANSFORMERS_USE_CDN=False`?
> In our dev workflow we mostly don't use the cdn while the files are still in-flux. Cloudfront invalidation comes with its own set of issues so it's better to view cdn as a means to distribute permanent files. (for this reason we don't serve config.json files from Cloudfront)
Understood!
How do you let others test the model files? By putting them on Dropbox or something and sharing the link?
<|||||>No, just S3 links!<|||||>This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.
<|||||>https://github.com/huggingface/transformers/pull/8324 should resolve this. |
transformers | 6,915 | closed | Fix mixed precision issue in TF DistilBert | Fix mixed precision issue in TF DistilBert by removing hard-coded uses of float32.
<!-- This line specifies which issue to close after the pull request is merged. -->
Fixes #6858
| 09-02-2020 20:54:10 | 09-02-2020 20:54:10 | # [Codecov](https://codecov.io/gh/huggingface/transformers/pull/6915?src=pr&el=h1) Report
> Merging [#6915](https://codecov.io/gh/huggingface/transformers/pull/6915?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/4ebb52afdb4dc4bcd599e7cb503763e5d4afc962?el=desc) will **increase** coverage by `2.01%`.
> The diff coverage is `100.00%`.
[](https://codecov.io/gh/huggingface/transformers/pull/6915?src=pr&el=tree)
```diff
@@ Coverage Diff @@
## master #6915 +/- ##
==========================================
+ Coverage 77.81% 79.83% +2.01%
==========================================
Files 157 157
Lines 28853 28853
==========================================
+ Hits 22452 23034 +582
+ Misses 6401 5819 -582
```
| [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/6915?src=pr&el=tree) | Coverage Δ | |
|---|---|---|
| [src/transformers/modeling\_tf\_distilbert.py](https://codecov.io/gh/huggingface/transformers/pull/6915/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9kaXN0aWxiZXJ0LnB5) | `98.82% <100.00%> (ø)` | |
| [src/transformers/modeling\_tf\_electra.py](https://codecov.io/gh/huggingface/transformers/pull/6915/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9lbGVjdHJhLnB5) | `25.13% <0.00%> (-73.83%)` | :arrow_down: |
| [src/transformers/modeling\_tf\_openai.py](https://codecov.io/gh/huggingface/transformers/pull/6915/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9vcGVuYWkucHk=) | `22.58% <0.00%> (-72.26%)` | :arrow_down: |
| [src/transformers/tokenization\_marian.py](https://codecov.io/gh/huggingface/transformers/pull/6915/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fbWFyaWFuLnB5) | `32.20% <0.00%> (-66.95%)` | :arrow_down: |
| [src/transformers/tokenization\_pegasus.py](https://codecov.io/gh/huggingface/transformers/pull/6915/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fcGVnYXN1cy5weQ==) | `46.03% <0.00%> (-49.21%)` | :arrow_down: |
| [src/transformers/modeling\_marian.py](https://codecov.io/gh/huggingface/transformers/pull/6915/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19tYXJpYW4ucHk=) | `60.00% <0.00%> (-30.00%)` | :arrow_down: |
| [src/transformers/modeling\_t5.py](https://codecov.io/gh/huggingface/transformers/pull/6915/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190NS5weQ==) | `76.70% <0.00%> (-6.07%)` | :arrow_down: |
| [src/transformers/activations.py](https://codecov.io/gh/huggingface/transformers/pull/6915/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9hY3RpdmF0aW9ucy5weQ==) | `85.00% <0.00%> (-5.00%)` | :arrow_down: |
| [src/transformers/tokenization\_auto.py](https://codecov.io/gh/huggingface/transformers/pull/6915/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fYXV0by5weQ==) | `95.55% <0.00%> (-2.23%)` | :arrow_down: |
| [src/transformers/modeling\_openai.py](https://codecov.io/gh/huggingface/transformers/pull/6915/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19vcGVuYWkucHk=) | `80.96% <0.00%> (-1.30%)` | :arrow_down: |
| ... and [23 more](https://codecov.io/gh/huggingface/transformers/pull/6915/diff?src=pr&el=tree-more) | |
------
[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/6915?src=pr&el=continue).
> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)
> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data`
> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/6915?src=pr&el=footer). Last update [4ebb52a...481baa3](https://codecov.io/gh/huggingface/transformers/pull/6915?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).
<|||||>I ran a test with this change on my Ubuntu 18.04 machine with a 2080Ti GPU, tensorflow-gpu 2.2.0:
```
from tensorflow.keras.layers import Input, Embedding, Bidirectional, GRU, Dense
from tensorflow.keras.models import Model
from transformers import TFDistilBertModel
from tensorflow.keras.mixed_precision import experimental as mixed_precision
policy = mixed_precision.Policy('mixed_float16')
mixed_precision.set_policy(policy)
bert = TFDistilBertModel.from_pretrained('distilbert-base-uncased')
inputs = Input(shape=(None,), dtype='int32')
bert_out = bert(inputs)[0]
output = Dense(9, activation='softmax', dtype='float32')(bert_out)
model = Model(inputs, output)
model.compile(loss='sparse_categorical_crossentropy', optimizer='adam', metrics=['accuracy'])
model.summary()
x = [[5, 2, 3] * 3] * 100
y = [[1, 2, 3] * 3] * 100
model.fit(x=x, y=y, epochs=20, batch_size=16)
```
And get error info:
```
Traceback (most recent call last):
File "test.py", line 8, in <module>
bert = TFDistilBertModel.from_pretrained('distilbert-base-uncased')
File "/home/xingya/transformers/src/transformers/modeling_tf_utils.py", line 602, in from_pretrained
model(model.dummy_inputs, training=False) # build the network with dummy inputs
File "/home/xingya/.conda/envs/transformers/lib/python3.8/site-packages/tensorflow/python/keras/engine/base_layer.py", line 968, in __call__
outputs = self.call(cast_inputs, *args, **kwargs)
File "/home/xingya/transformers/src/transformers/modeling_tf_distilbert.py", line 615, in call
outputs = self.distilbert(inputs, **kwargs)
File "/home/xingya/.conda/envs/transformers/lib/python3.8/site-packages/tensorflow/python/keras/engine/base_layer.py", line 968, in __call__
outputs = self.call(cast_inputs, *args, **kwargs)
File "/home/xingya/transformers/src/transformers/modeling_tf_distilbert.py", line 508, in call
tfmr_output = self.transformer(
File "/home/xingya/.conda/envs/transformers/lib/python3.8/site-packages/tensorflow/python/keras/engine/base_layer.py", line 968, in __call__
outputs = self.call(cast_inputs, *args, **kwargs)
File "/home/xingya/transformers/src/transformers/modeling_tf_distilbert.py", line 401, in call
layer_outputs = layer_module(hidden_state, attn_mask, head_mask[i], output_attentions, training=training)
File "/home/xingya/.conda/envs/transformers/lib/python3.8/site-packages/tensorflow/python/keras/engine/base_layer.py", line 968, in __call__
outputs = self.call(cast_inputs, *args, **kwargs)
File "/home/xingya/transformers/src/transformers/modeling_tf_distilbert.py", line 355, in call
ffn_output = self.ffn(sa_output, training=training) # (bs, seq_length, dim)
File "/home/xingya/.conda/envs/transformers/lib/python3.8/site-packages/tensorflow/python/keras/engine/base_layer.py", line 968, in __call__
outputs = self.call(cast_inputs, *args, **kwargs)
File "/home/xingya/transformers/src/transformers/modeling_tf_distilbert.py", line 304, in call
x = self.activation(x)
File "/home/xingya/.conda/envs/transformers/lib/python3.8/site-packages/tensorflow/python/keras/engine/base_layer.py", line 968, in __call__
outputs = self.call(cast_inputs, *args, **kwargs)
File "/home/xingya/.conda/envs/transformers/lib/python3.8/site-packages/tensorflow/python/keras/layers/core.py", line 420, in call
return self.activation(inputs)
File "/home/xingya/transformers/src/transformers/modeling_tf_distilbert.py", line 79, in gelu
cdf = 0.5 * (1.0 + tf.math.erf(x / tf.math.sqrt(2.0)))
File "/home/xingya/.conda/envs/transformers/lib/python3.8/site-packages/tensorflow/python/ops/math_ops.py", line 984, in binary_op_wrapper
return func(x, y, name=name)
File "/home/xingya/.conda/envs/transformers/lib/python3.8/site-packages/tensorflow/python/ops/math_ops.py", line 1081, in _truediv_python3
raise TypeError("x and y must have the same dtype, got %r != %r" %
TypeError: x and y must have the same dtype, got tf.float16 != tf.float32
```
I made a modification to L299:
```python
self.activation = (
    tf.keras.layers.Activation(gelu, dtype='float32') if config.activation == "gelu" else tf.keras.activations.relu
)
```
And then the model began to train; however, the loss doesn't decrease and the accuracy is always 0:
```
7/7 [==============================] - 0s 28ms/step - loss: 2.1972 - accuracy: 0.0000e+00
Epoch 2/20
7/7 [==============================] - 0s 29ms/step - loss: 2.1972 - accuracy: 0.0000e+00
Epoch 3/20
7/7 [==============================] - 0s 30ms/step - loss: 2.1972 - accuracy: 0.0000e+00
Epoch 4/20
7/7 [==============================] - 0s 31ms/step - loss: 2.1972 - accuracy: 0.0000e+00
```
I have tried this code in float32 precision, and it works.
```
Epoch 1/20
7/7 [==============================] - 0s 31ms/step - loss: 2.5418 - accuracy: 0.2800
Epoch 2/20
7/7 [==============================] - 0s 33ms/step - loss: 1.2452 - accuracy: 0.3356
Epoch 3/20
7/7 [==============================] - 0s 31ms/step - loss: 1.1438 - accuracy: 0.3267
Epoch 4/20
7/7 [==============================] - 0s 33ms/step - loss: 1.1219 - accuracy: 0.3400
```<|||||>@xuxingya, the accuracy not improving during training is due to this line:
> scores = scores - 1e30 * (1.0 - mask)
since `1e30` in `half precision` overflows and causes `nan` values. I am still trying to figure out a way to deal with it.<|||||>@xuxingya Would you mind running the test on your side again, please? I tested it with your example, and it is fine now.<|||||>@chiapas Yes, I ran the test and now it's fine. |
transformers | 6,914 | closed | Template updates | When adding the Funnel Transformer, I noticed a few things wrong in the template. This PR fixes those.
- using `transformers.testing_utils` instead of `.utils`
- remove xxx from names in tests (as @patrickvonplaten has done recently on Bert)
- add multiple choice model test
- fix label names in masked lm model
- remove the mention to add to pipelines.py in the checklist since there is nothing to do there
| 09-02-2020 19:57:03 | 09-02-2020 19:57:03 | # [Codecov](https://codecov.io/gh/huggingface/transformers/pull/6914?src=pr&el=h1) Report
> Merging [#6914](https://codecov.io/gh/huggingface/transformers/pull/6914?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/4ebb52afdb4dc4bcd599e7cb503763e5d4afc962?el=desc) will **increase** coverage by `1.21%`.
> The diff coverage is `n/a`.
[](https://codecov.io/gh/huggingface/transformers/pull/6914?src=pr&el=tree)
```diff
@@ Coverage Diff @@
## master #6914 +/- ##
==========================================
+ Coverage 77.81% 79.03% +1.21%
==========================================
Files 157 157
Lines 28853 28853
==========================================
+ Hits 22452 22804 +352
+ Misses 6401 6049 -352
```
| [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/6914?src=pr&el=tree) | Coverage Δ | |
|---|---|---|
| [src/transformers/modeling\_tf\_mobilebert.py](https://codecov.io/gh/huggingface/transformers/pull/6914/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9tb2JpbGViZXJ0LnB5) | `24.55% <0.00%> (-72.36%)` | :arrow_down: |
| [src/transformers/tokenization\_pegasus.py](https://codecov.io/gh/huggingface/transformers/pull/6914/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fcGVnYXN1cy5weQ==) | `30.15% <0.00%> (-65.08%)` | :arrow_down: |
| [src/transformers/modeling\_roberta.py](https://codecov.io/gh/huggingface/transformers/pull/6914/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19yb2JlcnRhLnB5) | `77.73% <0.00%> (-19.35%)` | :arrow_down: |
| [src/transformers/data/data\_collator.py](https://codecov.io/gh/huggingface/transformers/pull/6914/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9kYXRhL2RhdGFfY29sbGF0b3IucHk=) | `92.30% <0.00%> (-0.41%)` | :arrow_down: |
| [src/transformers/modeling\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/6914/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ191dGlscy5weQ==) | `88.05% <0.00%> (+0.18%)` | :arrow_up: |
| [src/transformers/tokenization\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/6914/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fdXRpbHMucHk=) | `89.84% <0.00%> (+0.39%)` | :arrow_up: |
| [src/transformers/generation\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/6914/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9nZW5lcmF0aW9uX3V0aWxzLnB5) | `97.21% <0.00%> (+0.83%)` | :arrow_up: |
| [src/transformers/configuration\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/6914/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9jb25maWd1cmF0aW9uX3V0aWxzLnB5) | `96.64% <0.00%> (+1.34%)` | :arrow_up: |
| [src/transformers/modeling\_distilbert.py](https://codecov.io/gh/huggingface/transformers/pull/6914/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19kaXN0aWxiZXJ0LnB5) | `97.84% <0.00%> (+1.61%)` | :arrow_up: |
| [src/transformers/modelcard.py](https://codecov.io/gh/huggingface/transformers/pull/6914/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGNhcmQucHk=) | `85.18% <0.00%> (+2.46%)` | :arrow_up: |
| ... and [12 more](https://codecov.io/gh/huggingface/transformers/pull/6914/diff?src=pr&el=tree-more) | |
------
[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/6914?src=pr&el=continue).
> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)
> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data`
> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/6914?src=pr&el=footer). Last update [4ebb52a...408286d](https://codecov.io/gh/huggingface/transformers/pull/6914?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).
|
transformers | 6,913 | closed | Small bug on website | Hi, I am not sure if this is the correct place to report this but all the web pages of https://huggingface.co/transformers/master/index.html are having some issues with scrolling. Scrolling the main text (right side) also scrolls the table of contents(left part) | 09-02-2020 19:49:36 | 09-02-2020 19:49:36 | Hi! Yes, this isn't an issue, this is the intended behavior. It's the standard behavior with Sphinx/ReadTheDocs. You can see a similar example with the [PyTorch docs](https://pytorch.org/docs/stable/tensors.html). |
transformers | 6,912 | closed | batch_encode_plus does not lead to the same predictions as encode_plus | I use batch_encode_plus to speed up the predictions but it leads to different results compared to "encode_plus"
```
tokenizer = AutoTokenizer.from_pretrained("deepset/roberta-base-squad2")
inputs = tokenizer.encode_plus(QA_input['question'], QA_input['context'], padding = True, add_special_tokens=True, return_tensors="pt")
```
and
```
tokenizer = AutoTokenizer.from_pretrained("deepset/roberta-base-squad2")
questions = [(q['question'],q['context']) for q in q_dict]
inputs = tokenizer.batch_encode_plus(questions, padding=True, add_special_tokens=True, return_tensors="pt")
```
Most of the time the predictions are the same, but sometimes they are different:
```
def predict_batch_using_model(model, model_name, q_dict):
    tokenizer = AutoTokenizer.from_pretrained(model_name)
    questions = [(q['question'], q['context']) for q in q_dict]
    inputs = tokenizer.batch_encode_plus(questions, padding=True, add_special_tokens=True, return_tensors="pt")
    logger.debug('inputs batch_encode_plus: %s\n', inputs)
    if torch.cuda.is_available():
        inputs.to('cuda')
    answer_start_scores, answer_end_scores = model(**inputs)
    # a list of (answer, probs_start)
    answer_probs_start_batch = []
    for i in range(len(q_dict)):
        input_ids = inputs["input_ids"].tolist()[i]
        text_tokens = tokenizer.convert_ids_to_tokens(input_ids)
        answer_start = torch.argmax(
            answer_start_scores[i]
        )  # Get the most likely beginning of answer with the argmax of the score
        answer_end = torch.argmax(answer_end_scores[i]) + 1  # Get the most likely end of answer with the argmax of the score
        logger.debug('answer_start, answer_end: %d %d %d\n', i, answer_start, answer_end)
        answer = tokenizer.convert_tokens_to_string(text_tokens[answer_start:answer_end])
        total_scores = answer_start_scores[i].add_(answer_end_scores[i])  # in place addition
        total_scores = total_scores.cpu().data.numpy()
        probs = _compute_softmax(total_scores)
        answer_probs_start_batch.append((answer, probs[answer_start]))
    return answer_probs_start_batch


def predict_using_model(model, model_name, QA_input):
    tokenizer = AutoTokenizer.from_pretrained(model_name)
    inputs = tokenizer.encode_plus(QA_input['question'], QA_input['context'], padding=True, add_special_tokens=True, return_tensors="pt")
    logger.debug('inputs encode_plus: %s\n', inputs)
    if torch.cuda.is_available():
        inputs.to('cuda')
    input_ids = inputs["input_ids"].tolist()[0]
    text_tokens = tokenizer.convert_ids_to_tokens(input_ids)
    answer_start_scores, answer_end_scores = model(**inputs)
    answer_start = torch.argmax(
        answer_start_scores
    )  # Get the most likely beginning of answer with the argmax of the score
    answer_end = torch.argmax(answer_end_scores) + 1  # Get the most likely end of answer with the argmax of the score
    logger.debug('answer_start, answer_end: %d %d %d\n', 0, answer_start, answer_end)
    answer = tokenizer.convert_tokens_to_string(text_tokens[answer_start:answer_end])
    total_scores = answer_start_scores.add_(answer_end_scores)  # in place addition
    total_scores = total_scores.cpu().data.numpy()
    probs = _compute_softmax(total_scores)
    return answer, probs[answer_start]
```
the input dictionary
`
{"q": "what is color of thomas train", "gt_answer": "blue", "results": [{"system": "Google KG", "response": [], "latency": 350}, {"system": "Google CSE", "response": [{"source": "www.strasburgrailroad.com", "title": "15 Fun Facts About Thomas the Tank Engine - Strasburg Rail Road", "snippet": "Aug 15, 2017 ... Thomas' iconic blue color is also the official color of the North Western Railway. \nBefore Thomas was blue he was originally teal green with\u00a0..."}, {"source": "www.youtube.com", "title": "Learn Colors with My First Railways | Playing Around with Thomas ...", "snippet": "Oct 21, 2017 ... About Thomas & Friends: Based on a series of children's books, \"Thomas & \nFriends\" features Thomas the Tank Engine adventures with other\u00a0..."}, {"source": "en.wikipedia.org", "title": "Thomas the Tank Engine - Wikipedia", "snippet": "Thomas the Tank Engine is an anthropomorphised fictional steam locomotive in \nThe Railway ... In The Adventure Begins which is a retelling of Thomas's early \ndays on Sodor, he is a bluish-green colour when he first arrives on Sodor, his \ntanks\u00a0..."}, {"source": "play.thomasandfriends.com", "title": "Meet the Thomas & Friends Engines | Thomas & Friends", "snippet": "Discover all the engines from Sodor! Thomas & Friends fans can learn about all \ntheir favorite characters from the Thomas & Friends books, TV series and\u00a0..."}, {"source": "www.theguardian.com", "title": "Thomas the Tank Engine had to shut the hell up to save children ...", "snippet": "Jul 22, 2014 ... Thomas the Tank Engine had to shut the hell up to save children everywhere. \nThis article is more than 6 years old. Tracy Van Slyke. Classism\u00a0..."}, {"source": "www.amazon.com", "title": "RoomMates RMK1035SCS Thomas & Friends Peel ... - Amazon.com", "snippet": "RoomMates RMK1035SCS Thomas & Friends Peel and Stick Wall Decals ,Multi \ncolor. +. RoomMates RMK1831SCS Thomas The Tank Engine Peel and Stick\u00a0..."}, {"source": "ttte.fandom.com", "title": "Nia | Thomas the Tank Engine Wikia | Fandom", "snippet": "Nia is a Kenyan tank engine who befriended and accompanied Thomas on his \njourney ... Noticing how heavy his train was getting, she offered to help, but \nThomas ... of the Steam Team to have a snowplough that is not the same colour \nas her."}, {"source": "www.amazon.com", "title": "Thomas The Tank Engine Color Block Cotton Hand ... - Amazon.com", "snippet": "Buy Thomas The Tank Engine Color Block Cotton Hand Towel: Home & Kitchen - \nAmazon.com \u2713 FREE DELIVERY possible on eligible purchases."}, {"source": "ttte.fandom.com", "title": "Thomas the Tank Engine Colors", "snippet": "Thomas the Tank Engine: Colors is a book. Characters Thomas, Edward, Henry, \nJames, Percy, Bill..."}, {"source": "www.pinterest.com", "title": "Train cake, Thomas train cake, Thomas the train", "snippet": "Fondant Train Topper with Mini Train Cupcake Toppers. Each Topper is made to \norder and can be customized to suit your color scheme. Lot comes\u00a0..."}], "latency": 663}, {"system": "Bing entity", "response": [], "latency": 698}, {"system": "Bing web", "response": [{"source": "www.youtube.com", "title": "What Color Was Thomas the Tank Engine? | The Earl's Quiz ...", "snippet": "Based on a series of children's books, \"Thomas & Friends\" features Thomas the Tank Engine adventures with other locomotives on the island of Sodor. 
Thomas often gets into trouble, but never gives ..."}, {"source": "british-learning.com", "title": "Thomas The Train Color Pages To Print \u2013 Learning How to Read", "snippet": "Thomas and friends coloring pages 55 thomas and friends pictures to print and color. 55 thomas and friends printable coloring pages for kids. 30 free printable thomas the train coloring pages. For boys and girls kids and adults teenagers and toddlers preschoolers and older kids at school."}, {"source": "www.hometalk.com", "title": "Does anybody know what color blue is used for Thomas the ...", "snippet": "Here is a step by step YouTube guide to painting Thomas The Tank Engine and midway through, the blue used is referred to as a medium blue. Lighter than Navy, darker than Sky, maybe like a colonial blue? https://www.youtube.com/watch?v=MU8L6tIHk08"}], "latency": 879}], "dt": "2020-08-14T15:06:39.638346+00:00"}`
we observe a difference in the prediction of the tenth context. Any reason for that?
| 09-02-2020 19:34:48 | 09-02-2020 19:34:48 | Hi, are you sure your issue comes from the tokenizer? If you encode your text using `encode_plus` and `batch_encode_plus`, do you see a difference in the tokens generated?<|||||>I only use encode_plus and batch_encode_plus and call model inference. I do not think the model inference is the problem as you see in the function calls. so I think it is coming from encode_plus and batch_encode_plus. Regarding your question, I see that that batch_encode_plus add ones at the end of the list " 1, 1, 1, 1, 1, 1]". and I thought this is this difference may be a reason for the problem.<|||||>This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.
|
transformers | 6,911 | closed | [s2s]: script to convert pl checkpoints to hf checkpoints | 09-02-2020 19:15:41 | 09-02-2020 19:15:41 | # [Codecov](https://codecov.io/gh/huggingface/transformers/pull/6911?src=pr&el=h1) Report
> Merging [#6911](https://codecov.io/gh/huggingface/transformers/pull/6911?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/4ebb52afdb4dc4bcd599e7cb503763e5d4afc962?el=desc) will **increase** coverage by `2.25%`.
> The diff coverage is `n/a`.
[](https://codecov.io/gh/huggingface/transformers/pull/6911?src=pr&el=tree)
```diff
@@ Coverage Diff @@
## master #6911 +/- ##
==========================================
+ Coverage 77.81% 80.06% +2.25%
==========================================
Files 157 157
Lines 28853 28853
==========================================
+ Hits 22452 23102 +650
+ Misses 6401 5751 -650
```
| [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/6911?src=pr&el=tree) | Coverage Δ | |
|---|---|---|
| [src/transformers/modeling\_tf\_mobilebert.py](https://codecov.io/gh/huggingface/transformers/pull/6911/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9tb2JpbGViZXJ0LnB5) | `24.55% <0.00%> (-72.36%)` | :arrow_down: |
| [src/transformers/modeling\_tf\_flaubert.py](https://codecov.io/gh/huggingface/transformers/pull/6911/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9mbGF1YmVydC5weQ==) | `24.53% <0.00%> (-63.81%)` | :arrow_down: |
| [src/transformers/modeling\_marian.py](https://codecov.io/gh/huggingface/transformers/pull/6911/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19tYXJpYW4ucHk=) | `60.00% <0.00%> (-30.00%)` | :arrow_down: |
| [src/transformers/modeling\_t5.py](https://codecov.io/gh/huggingface/transformers/pull/6911/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190NS5weQ==) | `76.70% <0.00%> (-6.07%)` | :arrow_down: |
| [src/transformers/activations.py](https://codecov.io/gh/huggingface/transformers/pull/6911/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9hY3RpdmF0aW9ucy5weQ==) | `85.00% <0.00%> (-5.00%)` | :arrow_down: |
| [src/transformers/modeling\_bart.py](https://codecov.io/gh/huggingface/transformers/pull/6911/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19iYXJ0LnB5) | `93.90% <0.00%> (-0.51%)` | :arrow_down: |
| [src/transformers/data/data\_collator.py](https://codecov.io/gh/huggingface/transformers/pull/6911/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9kYXRhL2RhdGFfY29sbGF0b3IucHk=) | `92.30% <0.00%> (-0.41%)` | :arrow_down: |
| [src/transformers/modeling\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/6911/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ191dGlscy5weQ==) | `87.50% <0.00%> (-0.37%)` | :arrow_down: |
| [src/transformers/tokenization\_utils\_base.py](https://codecov.io/gh/huggingface/transformers/pull/6911/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fdXRpbHNfYmFzZS5weQ==) | `93.76% <0.00%> (-0.14%)` | :arrow_down: |
| [src/transformers/file\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/6911/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9maWxlX3V0aWxzLnB5) | `82.66% <0.00%> (+0.25%)` | :arrow_up: |
| ... and [18 more](https://codecov.io/gh/huggingface/transformers/pull/6911/diff?src=pr&el=tree-more) | |
------
[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/6911?src=pr&el=continue).
> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)
> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data`
> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/6911?src=pr&el=footer). Last update [4ebb52a...87055d8](https://codecov.io/gh/huggingface/transformers/pull/6911?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).
|
|
transformers | 6,910 | closed | adding additional additional_special_tokens to tokenizer has inconsistent behavior | ## Environment info
- `transformers` version: 3.1.0
- Platform: Linux-5.4.0-45-generic-x86_64-with-debian-bullseye-sid
- Python version: 3.6.10
- PyTorch version (GPU?): 1.6.0 (True)
- Tensorflow version (GPU?): not installed (NA)
- Using GPU in script?: no
- Using distributed or parallel set-up in script?: no
### Who can help
@mfuntowicz
## Information
affected: all tokenizers based on `SpecialTokensMixin`
The behavior of the `add_special_tokens()` method seems irregular to me, when adding `additional_special_tokens` to a tokenizer that already holds a list of `additional_special_tokens`. In this case the value of `self._additional_special_tokens` will simply be replaced, while the previous additional special tokens still remain in `PreTrainedTokenizer.added_tokens_encoder` and `PreTrainedTokenizer.added_tokens_decoder`.
## To reproduce
Steps to reproduce the behavior:
```python
from transformers import GPT2Tokenizer
def print_special_tokens():
print(tokenizer.all_special_tokens)
print(tokenizer.all_special_ids)
print(tokenizer.additional_special_tokens)
print(tokenizer.additional_special_tokens_ids)
print(tokenizer.special_tokens_map)
print(tokenizer.added_tokens_encoder)
print(tokenizer.added_tokens_decoder)
tokenizer = GPT2Tokenizer.from_pretrained(
'gpt2',
pad_token='[PAD]',
additional_special_tokens=['<A>', '<B>', '<C>']
)
print_special_tokens()
tokenizer.add_special_tokens({
'cls_token': '[CLS]',
'additional_special_tokens': ['<B>', '<X>', '<X>']
})
print('-'*50)
print_special_tokens()
```
Output:
```
['<|endoftext|>', '<|endoftext|>', '<|endoftext|>', '[PAD]', '<A>', '<B>', '<C>']
[50256, 50256, 50256, 50257, 50258, 50259, 50260]
['<A>', '<B>', '<C>']
[50258, 50259, 50260]
{'bos_token': '<|endoftext|>', 'eos_token': '<|endoftext|>', 'unk_token': '<|endoftext|>', 'pad_token': '[PAD]', 'additional_special_tokens': "['<A>', '<B>', '<C>']"}
{'[PAD]': 50257, '<A>': 50258, '<B>': 50259, '<C>': 50260}
{50257: '[PAD]', 50258: '<A>', 50259: '<B>', 50260: '<C>'}
--------------------------------------------------
['<|endoftext|>', '<|endoftext|>', '<|endoftext|>', '[PAD]', '[CLS]', '<B>', '<X>']
[50256, 50256, 50256, 50257, 50261, 50259, 50262]
['<B>', '<X>', '<X>']
[50259, 50262, 50262]
{'bos_token': '<|endoftext|>', 'eos_token': '<|endoftext|>', 'unk_token': '<|endoftext|>', 'pad_token': '[PAD]', 'cls_token': '[CLS]', 'additional_special_tokens': "['<B>', '<X>', '<X>']"}
{'[PAD]': 50257, '<A>': 50258, '<B>': 50259, '<C>': 50260, '[CLS]': 50261, '<X>': 50262}
{50257: '[PAD]', 50258: '<A>', 50259: '<B>', 50260: '<C>', 50261: '[CLS]', 50262: '<X>'}
```
## Expected behavior
Additional special tokens added by `add_special_tokens()` should be appended to the existing `_additional_special_tokens` list and not replace them. Also, there should be some deduplication logic.
The following code change in `SpecialTokensMixin.add_special_tokens()` will do exactly this:
```python
for key, value in special_tokens_dict.items():
assert key in self.SPECIAL_TOKENS_ATTRIBUTES, f"Key {key} is not a special token"
if key == "additional_special_tokens":
assert isinstance(value, (list, tuple)) and all(
isinstance(t, (str, AddedToken)) for t in value
), f"Tokens {value} for key {key} should all be str or AddedToken instances"
if self.verbose:
logger.info("Adding %s to `additional_special_tokens`", value)
for token in value:
if token not in self.additional_special_tokens:
self._additional_special_tokens.append(token)
added_tokens += self.add_tokens(value, special_tokens=True)
else:
assert isinstance(
value, (str, AddedToken)
), f"Token {value} for key {key} should be a str or an AddedToken instance"
if self.verbose:
logger.info("Assigning %s to the %s key of the tokenizer", value, key)
setattr(self, key, value)
added_tokens += self.add_tokens([value], special_tokens=True)
```
Now, when running the above code example the output is as expected (imho):
```
['<|endoftext|>', '<|endoftext|>', '<|endoftext|>', '[PAD]', '<A>', '<B>', '<C>']
[50256, 50256, 50256, 50257, 50258, 50259, 50260]
['<A>', '<B>', '<C>']
[50258, 50259, 50260]
{'bos_token': '<|endoftext|>', 'eos_token': '<|endoftext|>', 'unk_token': '<|endoftext|>', 'pad_token': '[PAD]', 'additional_special_tokens': "['<A>', '<B>', '<C>']"}
{'[PAD]': 50257, '<A>': 50258, '<B>': 50259, '<C>': 50260}
{50257: '[PAD]', 50258: '<A>', 50259: '<B>', 50260: '<C>'}
--------------------------------------------------
['<|endoftext|>', '<|endoftext|>', '<|endoftext|>', '[PAD]', '[CLS]', '<A>', '<B>', '<C>', '<X>']
[50256, 50256, 50256, 50257, 50261, 50258, 50259, 50260, 50262]
['<A>', '<B>', '<C>', '<X>']
[50258, 50259, 50260, 50262]
{'bos_token': '<|endoftext|>', 'eos_token': '<|endoftext|>', 'unk_token': '<|endoftext|>', 'pad_token': '[PAD]', 'cls_token': '[CLS]', 'additional_special_tokens': "['<A>', '<B>', '<C>', '<X>']"}
{'[PAD]': 50257, '<A>': 50258, '<B>': 50259, '<C>': 50260, '[CLS]': 50261, '<X>': 50262}
{50257: '[PAD]', 50258: '<A>', 50259: '<B>', 50260: '<C>', 50261: '[CLS]', 50262: '<X>'}
```
I could open a PR if you agree that this is indeed the expected behavior. | 09-02-2020 18:40:55 | 09-02-2020 18:40:55 | This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.
|
transformers | 6,909 | closed | [style] automate reformatting with pre-commit hooks | # 🚀 Feature request
I was just reading how `make style` can be automated with pre-commit hooks. Noticing how often I run - and even more often forget to run - `make style` before committing, perhaps others are in the same boat, and therefore I thought to propose to the dev community to (mostly) automate this process. The only con is that each dev will still have to run `pre-commit install` once after cloning the project. This is a security feature of git, so it won't run anything automatically unless you take action to enable it.
If I understand it correctly, if an individual dev doesn't run `pre-commit install` inside the repo, things stay just as they are now: that dev will simply run `make style` manually. In other words, the proposed feature is optional for those who want it.
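For anyone who wants to try it, the one-time setup after cloning would just be something like (a sketch):
```console
$ pip install pre-commit
$ pre-commit install
```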
I read about it [here](https://www.mattlayman.com/blog/2018/python-code-black/), please scroll down to the section: "Black as a Git pre-commit hook". And it links to the whole detailed website: https://pre-commit.com/ | 09-02-2020 17:38:07 | 09-02-2020 17:38:07 | I personally wouldn't like having a pre-commit hook change all my commits without me being able to see the end result.
On my setup, I have a pre-push hook that aborts a push if make quality fails. I think if we had an install script, we could handle both options?<|||||>This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.
<|||||>Hi! Bringing this back because I would like to suggest pre-commit instead of `make ...`.
With pre-commit, we can see the results/modifications. For example:
`git add .`
`git commit -m "any"` **this will run the pre-commit**
- if everything is OK in the pre-commit pipeline, the commit will be created
- otherwise, if a hook modifies something (like the black or style hook), the commit will not be created and the files will be changed
- when this occurs, we can see what pre-commit changed with git diff, or just use the `--show-diff-on-failure` flag when running pre-commit.
Not everybody needs to use pre-commit; we could keep both options (the current workflow of manually running `make ...` and also using pre-commit) – but maybe that doesn't make sense because it would duplicate things?
Here is a little setup for pre-commit that I have tested:
Add `.pre-commit-config.yaml`:
```yml
repos:
- repo: https://github.com/psf/black
rev: 22.1.0
hooks:
- id: black
- repo: https://github.com/pycqa/isort
rev: 5.10.1
hooks:
- id: isort
name: isort (python)
- repo: https://github.com/PyCQA/flake8
rev: 4.0.1
hooks:
- id: flake8
- repo: local
hooks:
- id: autogenerate_code
name: autogenerate_code
entry: python setup.py deps_table_update
language: python
types: [python]
pass_filenames: false
- id: extra_style_checks
name: extra_style_checks
entry: make extra_style_checks
language: system
```
Note:
- The hooks _autogenerate_code_ and _extra_style_checks_ can be called using the make command or by running the Python command directly.
Install pre-commit:
`pre-commit install`
Modify src/transformers/activations.py:
```diff
@@ -31,7 +31,8 @@ class NewGELUActivation(nn.Module):
"""
def forward(self, input: Tensor) -> Tensor:
- return 0.5 * input * (1.0 + torch.tanh(math.sqrt(2.0 / math.pi) * (input + 0.044715 * torch.pow(input, 3.0))))
+ return 0.5 * input * (1.0 + torch.tanh(math.sqrt(2.0 /
+ math.pi) * (input + 0.044715 * torch.pow(input, 3.0))))
```
```console
$ git add -u
$ git commit -m "test pre-commit pipeline"
black....................................................................Failed
- hook id: black
- files were modified by this hook
reformatted src/transformers/activations.py
All done! ✨ 🍰 ✨
1 file reformatted.
isort (python)...........................................................Passed
flake8...................................................................Passed
autogenerate_code........................................................Passed
extra_style_checks.......................................................Passed
$ git status
On branch master
Your branch is up to date with 'origin/master'.
Changes to be committed:
(use "git restore --staged <file>..." to unstage)
modified: src/transformers/activations.py
Changes not staged for commit:
(use "git add <file>..." to update what will be committed)
(use "git restore <file>..." to discard changes in working directory)
modified: src/transformers/activations.py
$ git diff
--- a/src/transformers/activations.py
+++ b/src/transformers/activations.py
@@ -31,8 +31,7 @@ class NewGELUActivation(nn.Module):
"""
def forward(self, input: Tensor) -> Tensor:
- return 0.5 * input * (1.0 + torch.tanh(math.sqrt(2.0 /
- math.pi) * (input + 0.044715 * torch.pow(input, 3.0))))
+ return 0.5 * input * (1.0 + torch.tanh(math.sqrt(2.0 / math.pi) * (input + 0.044715 * torch.pow(input, 3.0))))
```
To show the git diff automatically after a pre-commit failure, we can add:
```yml
- repo: local
hooks:
- id: git-diff
name: git diff
entry: git diff --exit-code
language: system
pass_filenames: false
always_run: true
```
<|||||>Even though I originally created this thread, 1.5 years later I now agree with @sgugger that I don't want format changes done while pushing - I need to see what has been changed, since sometimes the autoformatter messes things up badly and I need to rewrite things to make the end result readable.
If this can be done as an option and not a requirement then I'm not against it, but there needs to be a way to validate/reformat files before git is involved.
BTW, `pre-commit` can be run manually as well, not via git, which doesn't require `pre-commit install`:
```
pre-commit run --all-files
```
And we have 2 ways to reformat files: `fixup` (fast - only modified files) and `style` (slow).<|||||>Yes, using pre-commit doesn't make sense if you don't want to always run the pipeline...
About `fixup` and `style`, I think the same can be achieved: by default pre-commit runs only on the modified files (the files in the commit), and if you want to run it on all files you can do it as shown above.
For me, by default, it makes sense to always run only on the modified files. If the autoformatter messes something up we can see it, and if we prefer not to use some hook (like an autoformatter that has messed something up), we can for example run again with `SKIP=black ...`
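For example, skipping the black hook for one commit would look something like this (a sketch, using the hook ids from the config above):
```console
$ SKIP=black git commit -m "keep my manual formatting"
```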
And the pre-commit tool will not let the commit be created if something fails; if the dev wants to "force" past a failed hook, they will need to add `SKIP=hook ...` before the commit command.<|||||>(I personally agree with @sgugger that local hooks are best left as user-level tooling) |
transformers | 6,908 | closed | Funnel transformer | <!-- This line specifies which issue to close after the pull request is merged. -->
Fixes #4844
This PR adds the Funnel Transformer architecture in PyTorch. For now, I have uploaded two of the ten checkpoints for this model; I will convert and upload the other ones while this PR is under review and add them before it's merged.
Due to the fact that there are two versions of the Funnel model (one that returns hidden states with a sequence length divided by 4 and one that returns hidden states with the same sequence length, which has 2 more layers), I had to make two different testers in the test file (because the expected number of hidden states / attentions changes depending on which model is used). I adapted the script that checks all models are tested to account for that.
| 09-02-2020 16:21:06 | 09-02-2020 16:21:06 | # [Codecov](https://codecov.io/gh/huggingface/transformers/pull/6908?src=pr&el=h1) Report
> Merging [#6908](https://codecov.io/gh/huggingface/transformers/pull/6908?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/0f360d3d1c606d6d79cdf1efa53c3d719249573d?el=desc) will **increase** coverage by `0.71%`.
> The diff coverage is `87.71%`.
[](https://codecov.io/gh/huggingface/transformers/pull/6908?src=pr&el=tree)
```diff
@@ Coverage Diff @@
## master #6908 +/- ##
==========================================
+ Coverage 80.23% 80.95% +0.71%
==========================================
Files 161 164 +3
Lines 30119 30925 +806
==========================================
+ Hits 24167 25035 +868
+ Misses 5952 5890 -62
```
| [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/6908?src=pr&el=tree) | Coverage Δ | |
|---|---|---|
| [src/transformers/commands/convert.py](https://codecov.io/gh/huggingface/transformers/pull/6908/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9jb21tYW5kcy9jb252ZXJ0LnB5) | `26.98% <20.00%> (-0.61%)` | :arrow_down: |
| [src/transformers/modeling\_funnel.py](https://codecov.io/gh/huggingface/transformers/pull/6908/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19mdW5uZWwucHk=) | `86.76% <86.76%> (ø)` | |
| [src/transformers/tokenization\_funnel.py](https://codecov.io/gh/huggingface/transformers/pull/6908/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fZnVubmVsLnB5) | `97.67% <97.67%> (ø)` | |
| [src/transformers/\_\_init\_\_.py](https://codecov.io/gh/huggingface/transformers/pull/6908/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9fX2luaXRfXy5weQ==) | `99.31% <100.00%> (+0.01%)` | :arrow_up: |
| [src/transformers/configuration\_auto.py](https://codecov.io/gh/huggingface/transformers/pull/6908/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9jb25maWd1cmF0aW9uX2F1dG8ucHk=) | `93.47% <100.00%> (+0.14%)` | :arrow_up: |
| [src/transformers/configuration\_funnel.py](https://codecov.io/gh/huggingface/transformers/pull/6908/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9jb25maWd1cmF0aW9uX2Z1bm5lbC5weQ==) | `100.00% <100.00%> (ø)` | |
| [src/transformers/modeling\_auto.py](https://codecov.io/gh/huggingface/transformers/pull/6908/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19hdXRvLnB5) | `78.97% <100.00%> (+0.12%)` | :arrow_up: |
| [src/transformers/tokenization\_auto.py](https://codecov.io/gh/huggingface/transformers/pull/6908/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fYXV0by5weQ==) | `97.87% <100.00%> (+2.22%)` | :arrow_up: |
| [src/transformers/modeling\_tf\_electra.py](https://codecov.io/gh/huggingface/transformers/pull/6908/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9lbGVjdHJhLnB5) | `25.13% <0.00%> (-73.83%)` | :arrow_down: |
| [src/transformers/modeling\_openai.py](https://codecov.io/gh/huggingface/transformers/pull/6908/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19vcGVuYWkucHk=) | `72.25% <0.00%> (-10.00%)` | :arrow_down: |
| ... and [15 more](https://codecov.io/gh/huggingface/transformers/pull/6908/diff?src=pr&el=tree-more) | |
------
[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/6908?src=pr&el=continue).
> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)
> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data`
> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/6908?src=pr&el=footer). Last update [0f360d3...8c684cc](https://codecov.io/gh/huggingface/transformers/pull/6908?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).
<|||||>Awesome! The model seems quite complex so I didn't really understand all the functionality.
A couple of things from my side:
1) IMO, it's super useful to have hard coded integration tests in the test file which makes the model a lot easier to maintain (every change can quickly be checked by making sure the model stays mathematically equivalent).
2) I guess a couple of comments and assert statements would be nice to make the code a bit easier to understand
3) Personally, I don't like single letter variables. Search replace commands don't work on such variables and it is very difficult to understand what they mean. <|||||>Thanks for all the comments. I think I replied/addressed all of them except the fast small integration tests, which are going to take a bit more work (starting on this now). Let me know if I missed anything since there are a lot of comments!<|||||>All checkpoints uploaded so I updated the incomplete lists. Also added mention of the model in all indexes, the model summary and the big table of pretrained models (sorry about the diff on that file, Funnel Transformer is one character too long and required to add an extra space on every line).
Should be good to merge at the beginning of next week!<|||||>@sgugger although you've named the models "`funnel-base`", "`funnel-medium`" so on so forth, the paper talks about all this in a different format, could a docstring be added saying `funnel-base` is `B4-4-4H768` and same for the rest. If someone wants to replicate the papers' results that would be great.
edit: my bad, it's there in the comments next to the model name, but it would still be better in a docstring too. Sorry!
|
transformers | 6,907 | closed | Torchscript benchmark measure | This PR is just there to show some benchmarking results of `BertScriptableModel` vs. `BertModel`. It shows the results of running the script: `benchmark_pytorch_scripting.py`.
In a nutshell, the script does the following:
1) Create a list of 500 and 2500 `input_tensors` of `batch_size` 1 with a sequence length varying between 1 and 128 or 1 and 512.
Then take a scripted model `model = torch.jit.script(BertScriptableModel(...))` and loop over all 500 / 2500 `input_tensors` in a standard for loop. The scripted model is warmed up by running the loop 5 times before measuring the time. The loop is run 10 times and the fastest run is taken as a measurement.
2) Create a list of 64 and 512 `input_tensors` of `batch_size` 8 with a sequence length varying between 1 and 128 or 1 and 512.
Then take a scripted model `model = torch.jit.script(BertScriptableModel(...))` and loop over all 64 / 512 `input_tensors` in a standard for loop. The scripted model is warmed up by running the loop 5 times before measuring the time. The loop is run 10 times and the fastest run is taken as a measurement.
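For reference, here is a minimal sketch of what one of these timed loops looks like conceptually - it is an illustration with assumed names (`BertScriptableModel`, `config`), not the exact code of `benchmark_pytorch_scripting.py`:

```python
import random
import timeit

import torch

def run_loop(model, input_tensors):
    with torch.no_grad():
        for input_ids in input_tensors:
            model(input_ids)

# 500 inputs of batch_size 1 with sequence lengths varying between 1 and 128
input_tensors = [torch.randint(0, 30522, (1, random.randint(1, 128))).to("cuda") for _ in range(500)]

# assumed: BertScriptableModel is the script-compatible model from this PR, config its configuration
scripted_model = torch.jit.script(BertScriptableModel(config)).to("cuda").eval()

# warm up 5 times, then take the fastest of 10 measured runs
for _ in range(5):
    run_loop(scripted_model, input_tensors)
runtime = min(timeit.repeat(lambda: run_loop(scripted_model, input_tensors), repeat=10, number=1))
```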
All this was done on the following environment information:
```
==================== ENVIRONMENT INFORMATION ====================
- transformers_version: 3.0.0
- framework: PyTorch
- use_torchscript: True
- framework_version: 1.6.0
- python_version: 3.6.10
- system: Linux
- cpu: x86_64
- architecture: 64bit
- date: 2020-09-02
- time: 16:26:10.562635
- fp16: False
- use_multiprocessing: False
- only_pretrain_model: False
- cpu_ram_mb: 32088
- use_gpu: True
- num_gpus: 1
- gpu: TITAN RTX
- gpu_ram_mb: 24217
- gpu_power_watts: 280.0
- gpu_performance_state: 2
- use_tpu: False
```
=> So only on GPU.
To run this script, one can simply run:
```
./benchmark_pytorch_scripting.py
```
**Important**:
The "for" loop corresponds to the function defined in lines 32 - 37 of the file `benchmark_pytorch_scripting.py`.
This function then overwrites the function that is usually measured in benchmarks, by setting `benchmark._prepare_inference_func = _prepare_inference_func` in line 49.
It would be awesome if @sbrody18 could take a look at the `benchmark_pytorch_scripting.py` file to check if torchscript was used correctly.
| 09-02-2020 15:31:10 | 09-02-2020 15:31:10 | Results for 1):
```
1 / 1
==================== INFERENCE - SPEED - RESULT ====================
--------------------------------------------------------------------------------
Model Name Batch Size Seq Length Time in s
--------------------------------------------------------------------------------
Type: multiple - Script: True 500 128 2.575
Type: multiple - Script: True 500 512 3.898
Type: multiple - Script: True 2500 128 13.173
Type: multiple - Script: True 2500 512 18.263
--------------------------------------------------------------------------------
1 / 1
==================== INFERENCE - SPEED - RESULT ====================
--------------------------------------------------------------------------------
Model Name Batch Size Seq Length Time in s
--------------------------------------------------------------------------------
Type: multiple - Script: False 500 128 3.733
Type: multiple - Script: False 500 512 3.857
Type: multiple - Script: False 2500 128 19.101
Type: multiple - Script: False 2500 512 19.356
--------------------------------------------------------------------------------
```
For the smaller sequence length 128 we can see a significant speed-up (~30%) - for the longer sequence length 512, the speed-up is much smaller (and only for the bigger list of inputs).<|||||>Results for 2)
```
==================== INFERENCE - SPEED - RESULT ====================
--------------------------------------------------------------------------------
Model Name Batch Size Seq Length Time in s
--------------------------------------------------------------------------------
Type: batched - Script: True 512 128 0.819
Type: batched - Script: True 512 512 3.769
Type: batched - Script: True 4096 128 6.705
Type: batched - Script: True 4096 512 26.549
--------------------------------------------------------------------------------
1 / 1
==================== INFERENCE - SPEED - RESULT ====================
--------------------------------------------------------------------------------
Model Name Batch Size Seq Length Time in s
--------------------------------------------------------------------------------
Type: batched - Script: False 512 128 0.837
Type: batched - Script: False 512 512 3.88
Type: batched - Script: False 4096 128 6.75
Type: batched - Script: False 4096 512 27.162
--------------------------------------------------------------------------------
```
Here no clear speed gains can be seen. <|||||>I'm not sure I understand all the interactions in the benchmarking framework, but I think in line 9 (non-script model) we should be returning torch.jit.trace(model, sample_input), not the untraced model. And the sample input would have to be max_length for it to work. That's where most of the gain comes from.
Then the comparison is between using torch.jit.trace() and torch.jit.script(). Or maybe I'm missing some code that does that elsewhere?
<|||||>Okey, yeah that makes sense! I changed the benchmarking script accordingly and have the following results now:
1)
```
==================== INFERENCE - SPEED - RESULT ====================
--------------------------------------------------------------------------------
Model Name Batch Size Seq Length Time in s
--------------------------------------------------------------------------------
Type: multiple - Script: True 500 128 1.793
Type: multiple - Script: True 500 512 3.628
Type: multiple - Script: True 2500 128 8.774
Type: multiple - Script: True 2500 512 19.471
--------------------------------------------------------------------------------
1 / 1
==================== INFERENCE - SPEED - RESULT ====================
--------------------------------------------------------------------------------
Model Name Batch Size Seq Length Time in s
--------------------------------------------------------------------------------
Type: multiple - Trace: True 500 128 1.83
Type: multiple - Trace: True 500 512 3.783
Type: multiple - Trace: True 2500 128 9.083
Type: multiple - Trace: True 2500 512 20.569
--------------------------------------------------------------------------------
```
and
2)
```
==================== INFERENCE - SPEED - RESULT ====================
--------------------------------------------------------------------------------
Model Name Batch Size Seq Length Time in s
--------------------------------------------------------------------------------
Type: batched - Script: True 512 128 1.043
Type: batched - Script: True 512 512 4.913
Type: batched - Script: True 4096 128 8.499
Type: batched - Script: True 4096 512 34.187
--------------------------------------------------------------------------------
1 / 1
==================== INFERENCE - SPEED - RESULT ====================
--------------------------------------------------------------------------------
Model Name Batch Size Seq Length Time in s
--------------------------------------------------------------------------------
Type: batched - Trace: True 512 128 1.046
Type: batched - Trace: True 512 512 4.916
Type: batched - Trace: True 4096 128 8.042
Type: batched - Trace: True 4096 512 30.874
--------------------------------------------------------------------------------
```
=> So my understanding is now that `torch.trace(...)` is much more efficient for dynamic input shapes than not using torch.jit at all, but I also don't see how `torch.script(...)` is better than `torch.trace(...)`. If our models are compatible with `torch.trace(...)`, why do we need to have a model that is compatible with `torch.script(...)`? It is definitely more convenient to just call `torch.script(model)` without having to provide any `input_ids`, but I'm not 100% sure whether it's worth a huge refactoring.
also cc @sgugger @LysandreJik <|||||>We saw different behavior in our experiments a few months ago. Will try to reproduce and update here.<|||||>> We saw different behavior in our experiments a few months ago. Will try to reproduce and update here.
Was `torch.script()` much faster than `torch.trace()` in your experiments?<|||||>In our experiments, using trace(model, example_input) would result in a model that would only accept a sequence of the same length as example_sequence, whereas script(model) had no such restriction. This is the case mentioned in your documentation here: https://huggingface.co/transformers/torchscript.html#dummy-inputs-and-standard-lengths
What that meant in practice is that you needed to trace with an example sequence of length = max_length, and then pad every example of length < max_length with zeros. Since the speed of the model is basically linear in the sequence length, for a set of inputs with varying sequence lengths we got a speed up of avg_len/max_length by using script() instead of trace().
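To make the contrast concrete, here is a toy illustration of the usage difference (a simplified sketch with a dummy module, not our actual benchmark code):

```python
import torch
import torch.nn as nn

class TinyEncoder(nn.Module):
    def forward(self, input_ids: torch.Tensor) -> torch.Tensor:
        return input_ids.float().mean(dim=1)

model = TinyEncoder().eval()

# trace() records one concrete execution, so it needs a representative example;
# with the old torch version we used, later inputs then had to be padded to the
# same max_length as this example
example_input = torch.ones(1, 512, dtype=torch.long)
traced_model = torch.jit.trace(model, example_input)

# script() compiles the forward() code itself: no example input is needed, and
# shorter sequences do not have to be padded to max_length just for the trace
scripted_model = torch.jit.script(model)
```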
Upon further investigation, it looks like when we ran these experiments, several months ago, we were using Torch 1.2. It looks like in Torch 1.3 the fixed-length problem is no longer an issue for your BERT models (we still encounter it with other model architectures we build). So there's no longer a big speed gain from script() vs trace().
There are still some good reasons for preferring script() to trace() - scripting is guaranteed to capture the model codepath logic, whereas tracing might miss a logic branch if the example input doesn't flow through it. Also, currently tracing your models produces several warnings like the one below. But I'm not sure if those on their own are enough of a motivation to make major changes in your code base.
```
TracerWarning: Converting a tensor to a Python index might cause the trace to be incorrect. We can't record the data flow of Python values, so this value will be treated as a constant in the future. This means that the trace might not generalize to other inputs!
```<|||||>> In our experiments, using trace(model, example_input) would result in a model that would only accept a sequence of the same length as example_sequence, whereas script(model) had no such restriction. This is the case mentioned in your documentation here: https://huggingface.co/transformers/torchscript.html#dummy-inputs-and-standard-lengths
>
> What that meant in practice is that you needed to trace with an example sequence of length = max_length, and then pad every example of length < max_length with zeros. Since the speed of the model is basically linear in the sequence length, for a set of inputs with varying sequence lengths we got a speed up of avg_len/max_length by using script() instead of trace().
>
> Upon further investigation, it looks like when we ran these experiments, several months ago, we were using Torch 1.2. It looks like in Torch 1.3 the fixed-length problem is no longer an issue for your BERT models (we still encounter it with other models architectures we build). So there's no longer a big speed gain from script() vs trace().
>
> There are still some good reasons for preferring script() to trace() - scripting is guaranteed to capture the model codepath logic, whereas tracing might miss a logic branch if the example input doesn't flow through it. Also, currently tracing your models produces several warnings like the one below. But I'm not sure if those on their own are enough of a motivation to make major changes in your code base.
>
> ```
> TracerWarning: Converting a tensor to a Python index might cause the trace to be incorrect. We can't record the data flow of Python values, so this value will be treated as a constant in the future. This means that the trace might not generalize to other inputs!
> ```
@sgugger - what are your thoughts on this? <|||||>I think adding the scriptable layers seems cleaner to make sure everything works right with scripting/tracing. Not the approach in this PR but the other linked in a comment (@sbrody18 I don't know if you saw my PR to rebase on master for this branch). It ends up with most changes being helpful to read the code (type annotations and asserts) and a few extra classes for the scriptable layers but not much added code.<|||||>@sgugger I agree - I think the extra benefit of the type and None-checking is really helpful to prevent bugs and makes the code better.
I saw your PR late Friday and didn't have time to look into it. Will try to do so by end of day.<|||||>This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.
|
transformers | 6,906 | closed | Update to the huBERT model card. | Added a link to the thesis. | 09-02-2020 14:27:22 | 09-02-2020 14:27:22 | |
transformers | 6,905 | closed | Changed link to the correct paper in the second paragraph | 09-02-2020 14:02:51 | 09-02-2020 14:02:51 | # [Codecov](https://codecov.io/gh/huggingface/transformers/pull/6905?src=pr&el=h1) Report
> Merging [#6905](https://codecov.io/gh/huggingface/transformers/pull/6905?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/8f2723caf0f1bf7e1f639d28d004f81c96d19bbc?el=desc) will **decrease** coverage by `0.12%`.
> The diff coverage is `n/a`.
[](https://codecov.io/gh/huggingface/transformers/pull/6905?src=pr&el=tree)
```diff
@@ Coverage Diff @@
## master #6905 +/- ##
==========================================
- Coverage 79.81% 79.69% -0.13%
==========================================
Files 157 157
Lines 28853 28853
==========================================
- Hits 23029 22994 -35
- Misses 5824 5859 +35
```
| [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/6905?src=pr&el=tree) | Coverage Δ | |
|---|---|---|
| [src/transformers/modeling\_tf\_electra.py](https://codecov.io/gh/huggingface/transformers/pull/6905/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9lbGVjdHJhLnB5) | `25.13% <0.00%> (-73.83%)` | :arrow_down: |
| [src/transformers/modeling\_tf\_openai.py](https://codecov.io/gh/huggingface/transformers/pull/6905/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9vcGVuYWkucHk=) | `22.58% <0.00%> (-72.26%)` | :arrow_down: |
| [src/transformers/tokenization\_marian.py](https://codecov.io/gh/huggingface/transformers/pull/6905/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fbWFyaWFuLnB5) | `32.20% <0.00%> (-66.95%)` | :arrow_down: |
| [src/transformers/tokenization\_bart.py](https://codecov.io/gh/huggingface/transformers/pull/6905/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fYmFydC5weQ==) | `42.10% <0.00%> (-57.90%)` | :arrow_down: |
| [src/transformers/modeling\_marian.py](https://codecov.io/gh/huggingface/transformers/pull/6905/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19tYXJpYW4ucHk=) | `60.00% <0.00%> (-30.00%)` | :arrow_down: |
| [src/transformers/tokenization\_xlm\_roberta.py](https://codecov.io/gh/huggingface/transformers/pull/6905/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25feGxtX3JvYmVydGEucHk=) | `84.52% <0.00%> (-10.72%)` | :arrow_down: |
| [src/transformers/tokenization\_gpt2.py](https://codecov.io/gh/huggingface/transformers/pull/6905/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fZ3B0Mi5weQ==) | `87.50% <0.00%> (-9.73%)` | :arrow_down: |
| [src/transformers/modeling\_t5.py](https://codecov.io/gh/huggingface/transformers/pull/6905/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190NS5weQ==) | `76.70% <0.00%> (-6.07%)` | :arrow_down: |
| [src/transformers/activations.py](https://codecov.io/gh/huggingface/transformers/pull/6905/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9hY3RpdmF0aW9ucy5weQ==) | `85.00% <0.00%> (-5.00%)` | :arrow_down: |
| [src/transformers/tokenization\_utils\_base.py](https://codecov.io/gh/huggingface/transformers/pull/6905/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fdXRpbHNfYmFzZS5weQ==) | `89.97% <0.00%> (-4.07%)` | :arrow_down: |
| ... and [14 more](https://codecov.io/gh/huggingface/transformers/pull/6905/diff?src=pr&el=tree-more) | |
------
[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/6905?src=pr&el=continue).
> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)
> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data`
> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/6905?src=pr&el=footer). Last update [8f2723c...0037bd4](https://codecov.io/gh/huggingface/transformers/pull/6905?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).
<|||||>Thx for fixing this! |
|
transformers | 6,904 | closed | Greedy decoding for non-beam-search appears to ignore postprocessing | ## Environment info
<!-- You can run the command `transformers-cli env` and copy-and-paste its output below.
Don't forget to fill out the missing fields in that output! -->
- `transformers` version: 3.0.2
- Platform: Linux-4.4.0-18362-Microsoft-x86_64-with-Ubuntu-18.04-bionic
- Python version: 3.6.9
- PyTorch version (GPU?): 1.6.0 (False)
- Tensorflow version (GPU?): 2.2.0 (False)
- Using GPU in script?: No
- Using distributed or parallel set-up in script?: No
### Who can help
<!-- Your issue will be replied to more quickly if you can figure out the right person to tag with @
If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**.
Please tag fewer than 3 people.
albert, bert, GPT2, XLM: @LysandreJik
tokenizers: @mfuntowicz
Trainer: @sgugger
Speed and Memory Benchmarks: @patrickvonplaten
Model Cards: @julien-c
Translation: @sshleifer
Summarization: @sshleifer
TextGeneration: @TevenLeScao
examples/distillation: @VictorSanh
nlp datasets: [different repo](https://github.com/huggingface/nlp)
rust tokenizers: [different repo](https://github.com/huggingface/tokenizers)
Text Generation: @TevenLeScao
blenderbot: @mariamabarham
Bart: @sshleifer
Marian: @sshleifer
T5: @patrickvonplaten
Longformer/Reformer: @patrickvonplaten
TransfoXL/XLNet: @TevenLeScao
examples/seq2seq: @sshleifer
examples/bert-loses-patience: @JetRunner
tensorflow: @jplu
examples/token-classification: @stefan-it
documentation: @sgugger
-->
TextGeneration: @TevenLeScao
## Information
Model I am using (Bert, XLNet ...): Bart
The problem arises when using:
* [ ] the official example scripts: (give details below)
* [x] my own modified scripts: (give details below)
The tasks I am working on is:
* [ ] an official GLUE/SQUaD task: (give the name)
* [x] my own task or dataset: (give details below)
I was experimenting with `generate` method to understand behavior on small cases.
## To reproduce
See https://github.com/huggingface/transformers/blob/8f2723caf0f1bf7e1f639d28d004f81c96d19bbc/src/transformers/generation_utils.py#L535-L566
The last line should probably take `argmax` of post-processed `scores` instead of `next_token_logits`. This should manifest in not respecting minimum length, generating bad words and repeats.
On a more minor note, are `next_token_logscores` really _log_ scores?
<!-- If you have code snippets, error messages, stack traces please provide them here as well.
Important! Use code tags to correctly format your code. See https://help.github.com/en/github/writing-on-github/creating-and-highlighting-code-blocks#syntax-highlighting
Do not use screenshots, as they are hard to read and (more importantly) don't allow others to copy-and-paste your code.-->
## Expected behavior
<!-- A clear and concise description of what you would expect to happen. -->
| 09-02-2020 12:41:06 | 09-02-2020 12:41:06 | Didn't realize that `postprocess_next_token_scores` mutates its argument. |
transformers | 6,903 | closed | Output attention takes an s | <!-- This line specifies which issue to close after the pull request is merged. -->
Fixes #6902
Stas would have come with a nice Perl magic command but I did a regex search (`output_attention[^s]`) to fix all those misspelled args. In the process, I noticed a few examples were missing a line so added that too. | 09-02-2020 11:58:40 | 09-02-2020 11:58:40 | # [Codecov](https://codecov.io/gh/huggingface/transformers/pull/6903?src=pr&el=h1) Report
> Merging [#6903](https://codecov.io/gh/huggingface/transformers/pull/6903?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/485da7222f7f9ca9854db1a6df027b00d348d017?el=desc) will **increase** coverage by `0.29%`.
> The diff coverage is `n/a`.
[](https://codecov.io/gh/huggingface/transformers/pull/6903?src=pr&el=tree)
```diff
@@ Coverage Diff @@
## master #6903 +/- ##
==========================================
+ Coverage 79.30% 79.59% +0.29%
==========================================
Files 157 157
Lines 28853 28853
==========================================
+ Hits 22882 22966 +84
+ Misses 5971 5887 -84
```
| [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/6903?src=pr&el=tree) | Coverage Δ | |
|---|---|---|
| [src/transformers/configuration\_auto.py](https://codecov.io/gh/huggingface/transformers/pull/6903/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9jb25maWd1cmF0aW9uX2F1dG8ucHk=) | `93.18% <ø> (ø)` | |
| [src/transformers/configuration\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/6903/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9jb25maWd1cmF0aW9uX3V0aWxzLnB5) | `96.64% <ø> (+0.67%)` | :arrow_up: |
| [src/transformers/modelcard.py](https://codecov.io/gh/huggingface/transformers/pull/6903/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGNhcmQucHk=) | `85.18% <ø> (ø)` | |
| [src/transformers/modeling\_auto.py](https://codecov.io/gh/huggingface/transformers/pull/6903/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19hdXRvLnB5) | `78.73% <ø> (ø)` | |
| [src/transformers/modeling\_distilbert.py](https://codecov.io/gh/huggingface/transformers/pull/6903/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19kaXN0aWxiZXJ0LnB5) | `97.84% <ø> (ø)` | |
| [src/transformers/modeling\_encoder\_decoder.py](https://codecov.io/gh/huggingface/transformers/pull/6903/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19lbmNvZGVyX2RlY29kZXIucHk=) | `92.00% <ø> (ø)` | |
| [src/transformers/modeling\_tf\_auto.py](https://codecov.io/gh/huggingface/transformers/pull/6903/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9hdXRvLnB5) | `66.86% <ø> (ø)` | |
| [src/transformers/modeling\_tf\_distilbert.py](https://codecov.io/gh/huggingface/transformers/pull/6903/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9kaXN0aWxiZXJ0LnB5) | `64.47% <ø> (-34.36%)` | :arrow_down: |
| [src/transformers/modeling\_tf\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/6903/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl91dGlscy5weQ==) | `87.29% <ø> (+0.32%)` | :arrow_up: |
| [src/transformers/modeling\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/6903/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ191dGlscy5weQ==) | `87.50% <ø> (ø)` | |
| ... and [23 more](https://codecov.io/gh/huggingface/transformers/pull/6903/diff?src=pr&el=tree-more) | |
------
[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/6903?src=pr&el=continue).
> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)
> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data`
> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/6903?src=pr&el=footer). Last update [485da72...e8fd79c](https://codecov.io/gh/huggingface/transformers/pull/6903?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).
|