repo | number | state | title | body | created_at | closed_at | comments
---|---|---|---|---|---|---|---
transformers | 6,902 | closed | Example config code uses invalid 'output_attention' rather than 'output_attentions' | Looks like a documentation-only bug: `output_attention` is used rather than `output_attentions`.
It occurs in multiple places in the repo.
Maybe linked to #2985
## Environment info
- `transformers` version: 3.1.0
- Platform: Darwin-19.6.0-x86_64-i386-64bit
- Python version: 3.7.4
- PyTorch version (GPU?): 1.5.1 (False)
- Tensorflow version (GPU?): not installed (NA)
- Using GPU in script?: No
- Using distributed or parallel set-up in script?: No
### Who can help
documentation: @sgugger
## Information
Model I am using (Bert, XLNet ...):
The problem arises when using:
* [x] the official example scripts: (give details below)
* [ ] my own modified scripts: (give details below)
The tasks I am working on is:
* [ ] an official GLUE/SQUaD task: (give the name)
* [x] my own task or dataset: (give details below)
## To reproduce
Steps to reproduce the behavior:
From docs L4-L5 of AutoConfig example:
[docs](https://huggingface.co/transformers/model_doc/auto.html#transformers.AutoConfig)
or
[code](https://github.com/huggingface/transformers/blob/master/src/transformers/configuration_auto.py#L264)
```python
config = AutoConfig.from_pretrained('bert-base-uncased', output_attention=True, foo=False)
assert config.output_attention == True
```
causes:
```
AttributeError: 'BertConfig' object has no attribute 'output_attention'
```
## Expected behavior
The assertion given in documentation passes.
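For reference, a minimal corrected sketch of the documented call once the keyword is spelled `output_attentions` (the extra `foo` kwarg from the original snippet is dropped here, since unknown kwargs are simply returned as unused):

```python
from transformers import AutoConfig

config = AutoConfig.from_pretrained('bert-base-uncased', output_attentions=True)
assert config.output_attentions == True
```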
| 09-02-2020 11:25:31 | 09-02-2020 11:25:31 | Thanks for reporting! The PR mentioned above should fix all of those. |
transformers | 6,901 | closed | Relaxing `PreTrainedModel` requirement in _save | # 🚀 Feature request
It's great to see that `Trainer` is becoming more flexible. Each function seems to be more self-contained now, making inheritance easier. I've experimented with many custom models. For instance:
```python
class Model(nn.Module):
    def __init__(self, ..):
        super().__init__()
        self.encoder = AutoModel.from_pretrained(..)
        self.custom_modules = ..

    def forward(self, **kwargs):
        output = self.encoder(**kwargs)
        # some custom operations
```
Many users need to create custom models if they want more than just a simple `SequenceClassification` head. In all such cases, I have to override the `_save` method because of [this line](https://github.com/huggingface/transformers/blob/d822ab636b6a14ed50f7bca0797c1de42c19de61/src/transformers/trainer.py#L1097), which explicitly restricts `Trainer` to models that inherit from `PreTrainedModel`. It would be good to relax this requirement and emit a warning about not using `PreTrainedModel` instead.
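For concreteness, a minimal sketch of the kind of relaxation being proposed (illustrative only; the attribute names mirror the current `Trainer._save`, and the exact behaviour would be up to the maintainers):

```python
import logging
import os

import torch
from transformers import PreTrainedModel

logger = logging.getLogger(__name__)

def _save(self, output_dir=None):
    output_dir = output_dir if output_dir is not None else self.args.output_dir
    os.makedirs(output_dir, exist_ok=True)
    if isinstance(self.model, PreTrainedModel):
        self.model.save_pretrained(output_dir)
    else:
        # Relaxed path (hypothetical): warn instead of raising a ValueError
        logger.warning("Trainer.model is not a `PreTrainedModel`; saving only its state_dict.")
        torch.save(self.model.state_dict(), os.path.join(output_dir, "pytorch_model.bin"))
    torch.save(self.args, os.path.join(output_dir, "training_args.bin"))
```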
## Your contribution
I'll open a PR if I get approval.
| 09-02-2020 06:59:02 | 09-02-2020 06:59:02 | I don't see anything blocking with this. Wdyt @sgugger @julien-c ?<|||||>We can give a warning but then the rest of the method will fail. Are you thinking of aborting the save entirely for models that are not `PretrainedModel`s? Also, why are you not inheriting from `PretrainedModel` in your example? Is there something limiting?
Note that Trainer is not supposed to be a generic training loop, but we can surely make it a bit more flexible.<|||||>Yes, `Trainer` is not a general loop, but it works for custom models as far as I've tried. The majority of its parts are generalized. `PreTrainedModel` also inherits from `nn.Module`, so users can do that, although it's quite common for users to inherit from `nn.Module` directly. I'm not sure how the method will fail? We can just add a warning instead of raising a `ValueError`. The reason I'm saying this is that users often want to do more than what `transformers` provides out of the box (for instance, using `AutoModel` rather than the `SequenceClassification` models; I'm seeing a growing interest in such models). I think `nlp` is heading in that direction (making everything general). This works fine for all cases, I guess:
```python
import logging
import os
from types import MethodType
from typing import Optional

import torch

logger = logging.getLogger(__name__)

def _save(self, output_dir: Optional[str] = None):
    output_dir = output_dir if output_dir is not None else self.args.output_dir
    os.makedirs(output_dir, exist_ok=True)
    logger.info("Saving model checkpoint to %s", output_dir)
    torch.save(
        {"model_state_dict": self.model.state_dict()},
        os.path.join(output_dir, "pytorch_model.bin"),
    )
    # Good practice: save your training arguments together with the trained model
    torch.save(self.args, os.path.join(output_dir, "training_args.bin"))

trainer._save = MethodType(_save, trainer)
```
Where do you think this approach may not work? After the warning, it's up to users if they want to make further changes by overriding this method (they would know that `transformers` is not responsible anymore since it's not a `PreTrainedModel`). The current method completely breaks training due to the `ValueError`.
This is optional; I just felt it would be useful to have. I'll open a PR if you approve.<|||||>`save_pretrained` does more than the method you mention, but we could probably refactor the code inside to work with all models. I don't see any place where it uses anything specific to `PretrainedModel`. The thing we don't want is to add and maintain overly generic code, but if it's easy enough I see no objection.
You didn't tell me why subclassing `PreTrainedModel` did not work, however ;-) That is what I would expect a user building a custom model with transformers to do.<|||||>`PreTrainedModel` is a generic class shared by all models in `transformers`; all classes derived from it comply with the methods it provides and can use functionality such as `init_weights` and `prune_heads`. These might not work for custom models. For instance, some methods require a `.config` attribute which custom models may not directly have. I guess one could define a custom model to be exactly what `PreTrainedModel` requires (I haven't looked into that), but that would mean asking users to read through what `PreTrainedModel` expects, or specifying it in the docs. It's totally up to you what you expect users to do when they use custom models.<|||||>After some internal discussion with @julien-c we will lower the requirement from `PreTrainedModel` to some lower-level abstract class/protocol so the user knows exactly what they have to implement for their model to work seamlessly with `Trainer`. I will work on this at the end of this week or the beginning of next.<|||||>Sounds good. I'll look forward to that part then. |
transformers | 6,900 | closed | Can DistilBert.forward() support token_type_ids ? | I am using DistilBert to distill a pretrained Bert model. That is Bert -> DistilBert.
The input of Bert is a sentence pair: [CLS] Hello word [SEP] Hello Python [SEP].
But DistilBert does not support sentence pair inputs.
Can DistilBert support sentence pair-like inputs? | 09-02-2020 06:50:04 | 09-02-2020 06:50:04 | DistilBERT can support sentence pair-like inputs but does not make use of token type IDs. It detects sentence pairs according to the special tokens. cc @VictorSanh <|||||>@Yusifu Did you find a solution for this problem? I'm also doing sentence-pair classification (NLI) with Distilbert.<|||||>This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.
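For reference, a minimal sketch of feeding a sentence pair to DistilBERT (assuming the standard `distilbert-base-uncased` checkpoint; the tokenizer inserts the special tokens itself and no `token_type_ids` are passed):

```python
from transformers import DistilBertModel, DistilBertTokenizer

tokenizer = DistilBertTokenizer.from_pretrained("distilbert-base-uncased")
model = DistilBertModel.from_pretrained("distilbert-base-uncased")

# Passing two texts builds "[CLS] sentence A [SEP] sentence B [SEP]" automatically;
# DistilBERT simply takes no token_type_ids argument.
inputs = tokenizer("Hello world", "Hello Python", return_tensors="pt")
outputs = model(input_ids=inputs["input_ids"], attention_mask=inputs["attention_mask"])
```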
|
transformers | 6,899 | closed | Can the GPT2 of Transformers receive output hidden_states from external Encoder? | # ❓ Questions & Help
## Details
I want GPT2 to receive the output hidden_states from BERT and use them in its attention computation. How can I do this?
Thanks.
**A link to original question on the forum/Stack Overflow**: | 09-02-2020 04:41:52 | 09-02-2020 04:41:52 | Hey @wulaoshi - I don't fully understand your question. Could you maybe post such a higher level question on the forum at `discuss.huggingface.co` ? :-) <|||||>This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.
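As a hedged pointer (not discussed further in this thread), the generic `EncoderDecoderModel` wrapper is one way to feed BERT hidden states into a decoder; whether GPT2 can act as the decoder with cross-attention depends on the installed transformers version, so treat this as a sketch under that assumption:

```python
from transformers import BertTokenizer, EncoderDecoderModel

# Assumption: the installed version supports GPT2 as a decoder
# (cross-attention layers are added by the wrapper when needed).
model = EncoderDecoderModel.from_encoder_decoder_pretrained("bert-base-uncased", "gpt2")
tokenizer = BertTokenizer.from_pretrained("bert-base-uncased")

inputs = tokenizer("Hello world", return_tensors="pt")
# The decoder attends over the BERT encoder's hidden states internally.
outputs = model(input_ids=inputs["input_ids"], decoder_input_ids=inputs["input_ids"])
```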
|
transformers | 6,898 | closed | [testing] fix ambiguous test | Since `generate()` does:
```
num_beams = num_beams if num_beams is not None else self.config.num_beams
```
This test fails if `model.config.num_beams > 1` (which is the case in the model I'm porting).
This fix makes the test setup unambiguous by passing an explicit `num_beams=1` to `generate()`.
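A minimal sketch of the pattern (illustrative; `model` and `input_ids` stand in for whatever the test constructs):

```python
# Ambiguous: falls back to model.config.num_beams, which may be greater than 1
output_default = model.generate(input_ids, do_sample=False)

# Unambiguous: greedy decoding regardless of the config default
output_greedy = model.generate(input_ids, do_sample=False, num_beams=1)
```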
Thanks. | 09-02-2020 04:31:31 | 09-02-2020 04:31:31 | # [Codecov](https://codecov.io/gh/huggingface/transformers/pull/6898?src=pr&el=h1) Report
> Merging [#6898](https://codecov.io/gh/huggingface/transformers/pull/6898?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/d822ab636b6a14ed50f7bca0797c1de42c19de61?el=desc) will **increase** coverage by `1.00%`.
> The diff coverage is `n/a`.
[](https://codecov.io/gh/huggingface/transformers/pull/6898?src=pr&el=tree)
```diff
@@ Coverage Diff @@
## master #6898 +/- ##
==========================================
+ Coverage 79.61% 80.62% +1.00%
==========================================
Files 157 157
Lines 28826 28826
==========================================
+ Hits 22951 23241 +290
+ Misses 5875 5585 -290
```
| [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/6898?src=pr&el=tree) | Coverage Δ | |
|---|---|---|
| [src/transformers/modeling\_tf\_flaubert.py](https://codecov.io/gh/huggingface/transformers/pull/6898/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9mbGF1YmVydC5weQ==) | `24.53% <0.00%> (-63.81%)` | :arrow_down: |
| [src/transformers/tokenization\_bart.py](https://codecov.io/gh/huggingface/transformers/pull/6898/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fYmFydC5weQ==) | `42.10% <0.00%> (-57.90%)` | :arrow_down: |
| [src/transformers/modeling\_marian.py](https://codecov.io/gh/huggingface/transformers/pull/6898/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19tYXJpYW4ucHk=) | `60.00% <0.00%> (-30.00%)` | :arrow_down: |
| [src/transformers/activations.py](https://codecov.io/gh/huggingface/transformers/pull/6898/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9hY3RpdmF0aW9ucy5weQ==) | `85.00% <0.00%> (-5.00%)` | :arrow_down: |
| [src/transformers/tokenization\_utils\_base.py](https://codecov.io/gh/huggingface/transformers/pull/6898/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fdXRpbHNfYmFzZS5weQ==) | `90.10% <0.00%> (-3.93%)` | :arrow_down: |
| [src/transformers/tokenization\_auto.py](https://codecov.io/gh/huggingface/transformers/pull/6898/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fYXV0by5weQ==) | `95.55% <0.00%> (-2.23%)` | :arrow_down: |
| [src/transformers/configuration\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/6898/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9jb25maWd1cmF0aW9uX3V0aWxzLnB5) | `95.97% <0.00%> (-0.68%)` | :arrow_down: |
| [src/transformers/modeling\_bart.py](https://codecov.io/gh/huggingface/transformers/pull/6898/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19iYXJ0LnB5) | `93.90% <0.00%> (-0.34%)` | :arrow_down: |
| [src/transformers/file\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/6898/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9maWxlX3V0aWxzLnB5) | `82.41% <0.00%> (-0.26%)` | :arrow_down: |
| [src/transformers/generation\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/6898/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9nZW5lcmF0aW9uX3V0aWxzLnB5) | `97.21% <0.00%> (+0.27%)` | :arrow_up: |
| ... and [7 more](https://codecov.io/gh/huggingface/transformers/pull/6898/diff?src=pr&el=tree-more) | |
------
[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/6898?src=pr&el=continue).
> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)
> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data`
> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/6898?src=pr&el=footer). Last update [d822ab6...6b67e49](https://codecov.io/gh/huggingface/transformers/pull/6898?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).
|
transformers | 6,897 | closed | Update modeling_bert.py | outptus -> outputs in example of BertForPreTraining
Fixes #{issue number}
| 09-02-2020 03:40:44 | 09-02-2020 03:40:44 | # [Codecov](https://codecov.io/gh/huggingface/transformers/pull/6897?src=pr&el=h1) Report
> Merging [#6897](https://codecov.io/gh/huggingface/transformers/pull/6897?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/d822ab636b6a14ed50f7bca0797c1de42c19de61?el=desc) will **increase** coverage by `0.77%`.
> The diff coverage is `n/a`.
[](https://codecov.io/gh/huggingface/transformers/pull/6897?src=pr&el=tree)
```diff
@@ Coverage Diff @@
## master #6897 +/- ##
==========================================
+ Coverage 79.61% 80.39% +0.77%
==========================================
Files 157 157
Lines 28826 28826
==========================================
+ Hits 22951 23174 +223
+ Misses 5875 5652 -223
```
| [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/6897?src=pr&el=tree) | Coverage Δ | |
|---|---|---|
| [src/transformers/modeling\_bert.py](https://codecov.io/gh/huggingface/transformers/pull/6897/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19iZXJ0LnB5) | `88.42% <ø> (ø)` | |
| [src/transformers/tokenization\_bart.py](https://codecov.io/gh/huggingface/transformers/pull/6897/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fYmFydC5weQ==) | `42.10% <0.00%> (-57.90%)` | :arrow_down: |
| [src/transformers/modeling\_tf\_roberta.py](https://codecov.io/gh/huggingface/transformers/pull/6897/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9yb2JlcnRhLnB5) | `45.41% <0.00%> (-47.81%)` | :arrow_down: |
| [src/transformers/modeling\_roberta.py](https://codecov.io/gh/huggingface/transformers/pull/6897/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19yb2JlcnRhLnB5) | `57.29% <0.00%> (-39.79%)` | :arrow_down: |
| [src/transformers/tokenization\_roberta.py](https://codecov.io/gh/huggingface/transformers/pull/6897/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fcm9iZXJ0YS5weQ==) | `76.71% <0.00%> (-21.92%)` | :arrow_down: |
| [src/transformers/tokenization\_utils\_base.py](https://codecov.io/gh/huggingface/transformers/pull/6897/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fdXRpbHNfYmFzZS5weQ==) | `86.85% <0.00%> (-7.19%)` | :arrow_down: |
| [src/transformers/tokenization\_utils\_fast.py](https://codecov.io/gh/huggingface/transformers/pull/6897/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fdXRpbHNfZmFzdC5weQ==) | `92.14% <0.00%> (-2.15%)` | :arrow_down: |
| [src/transformers/tokenization\_openai.py](https://codecov.io/gh/huggingface/transformers/pull/6897/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fb3BlbmFpLnB5) | `82.57% <0.00%> (-1.52%)` | :arrow_down: |
| [src/transformers/tokenization\_bert.py](https://codecov.io/gh/huggingface/transformers/pull/6897/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fYmVydC5weQ==) | `91.96% <0.00%> (-0.45%)` | :arrow_down: |
| [src/transformers/data/data\_collator.py](https://codecov.io/gh/huggingface/transformers/pull/6897/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9kYXRhL2RhdGFfY29sbGF0b3IucHk=) | `92.30% <0.00%> (-0.41%)` | :arrow_down: |
| ... and [13 more](https://codecov.io/gh/huggingface/transformers/pull/6897/diff?src=pr&el=tree-more) | |
------
[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/6897?src=pr&el=continue).
> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)
> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data`
> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/6897?src=pr&el=footer). Last update [d822ab6...b6c59a1](https://codecov.io/gh/huggingface/transformers/pull/6897?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).
|
transformers | 6,896 | closed | The result of the translation task on en-zh is not good, especially for short text | # ❓ Questions & Help
## Details
**A link to original question on the forum/Stack Overflow**: | 09-02-2020 03:26:33 | 09-02-2020 03:26:33 | This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.
|
transformers | 6,895 | closed | Create README.md |
Fixes #{issue number}
| 09-02-2020 03:04:27 | 09-02-2020 03:04:27 | |
transformers | 6,894 | closed | Getting import error | ```python
from transformers import BertPreTrainedModel, RobertaModel
import torch

class RobertaForMD(BertPreTrainedModel):  # Metaphor Detection, modified from BertForTokenClassification
    def __init__(self, config):
        super().__init__(config)
        self.num_labels = config.num_labels
        self.bert = RobertaModel(config)
        self.dropout = torch.nn.Dropout(config.hidden_dropout_prob)
        self.classifier = torch.nn.Linear(config.hidden_size, self.config.num_labels)
        # self.loss = torch.nn.BCEWithLogitsLoss(pos_weight=torch.tensor([3], dtype=torch.float32))
        self.loss = torch.nn.BCEWithLogitsLoss()
        self.init_weights()

    def forward(
        self,
        input_ids=None,
        attention_mask=None,
        token_type_ids=None,
        position_ids=None,
        head_mask=None,
        inputs_embeds=None,
        labels=None,
        word_posi=None
    ):
        outputs = self.bert(
            input_ids,
            attention_mask=attention_mask,
            token_type_ids=token_type_ids,
            position_ids=position_ids,
            head_mask=head_mask,
            inputs_embeds=inputs_embeds,
        )
        last_hidden_state = outputs[0]
        last_hidden_state = self.dropout(last_hidden_state)
        batch_size = input_ids.shape[0]
        word_state = torch.empty((0, last_hidden_state.shape[2]), dtype=torch.float32).cuda()
        for i in range(batch_size):
            word_state = torch.cat((word_state, last_hidden_state[i][word_posi[i]].unsqueeze(0)))
        logits = self.classifier(word_state)
        outputs = (logits,) + outputs[2:]  # add hidden states and attention if they are here
        if labels is not None:
            loss = self.loss(logits.view(-1), labels.to(torch.float32))
            outputs = (loss,) + outputs
        return outputs  # (loss), logits, (hidden_states), (attentions)
```
I am calling this using
model = RobertaForMD.from_pretrained(model_name, num_labels=1)
Name: transformers
Version: 2.7.0
File "main.py", line 276, in main
model = RobertaForMD.from_pretrained(model_name, num_labels=1)
File "/nas/home/tuhinc/miniconda3/lib/python3.7/site-packages/transformers/modeling_utils.py", line 438, in from_pretrained
**kwargs,
File "/nas/home/tuhinc/miniconda3/lib/python3.7/site-packages/transformers/configuration_utils.py", line 199, in from_pretrained
config_dict, kwargs = cls.get_config_dict(pretrained_model_name_or_path, **kwargs)
File "/nas/home/tuhinc/miniconda3/lib/python3.7/site-packages/transformers/configuration_utils.py", line 269, in get_config_dict
raise EnvironmentError(msg)
OSError: Can't load 'roberta-large'. Make sure that:
- 'roberta-large' is a correct model identifier listed on 'https://huggingface.co/models'
- or 'roberta-large' is the correct path to a directory containing a 'config.json' file
| 09-01-2020 23:47:16 | 09-01-2020 23:47:16 | You can try the following changes:
```python
from transformers import BertPreTrainedModel, RobertaModel, ROBERTA_PRETRAINED_MODEL_ARCHIVE_LIST, RobertaConfig

class RobertaForMD(BertPreTrainedModel):  # Metaphor Detection, modified from BertForTokenClassification
    config_class = RobertaConfig
    pretrained_model_archive_map = ROBERTA_PRETRAINED_MODEL_ARCHIVE_LIST
    base_model_prefix = "roberta"

    def __init__(self, config):
        super().__init__(config)
        self.num_labels = config.num_labels
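
# Hedged usage sketch (not part of the original reply): with the class attributes
# above set, the failing call from the report should then resolve the checkpoint:
# model = RobertaForMD.from_pretrained("roberta-large", num_labels=1)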
```<|||||>This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.
|
transformers | 6,893 | closed | Model card for huBERT | 09-01-2020 23:14:07 | 09-01-2020 23:14:07 | # [Codecov](https://codecov.io/gh/huggingface/transformers/pull/6893?src=pr&el=h1) Report
> Merging [#6893](https://codecov.io/gh/huggingface/transformers/pull/6893?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/d822ab636b6a14ed50f7bca0797c1de42c19de61?el=desc) will **increase** coverage by `0.46%`.
> The diff coverage is `n/a`.
[](https://codecov.io/gh/huggingface/transformers/pull/6893?src=pr&el=tree)
```diff
@@ Coverage Diff @@
## master #6893 +/- ##
==========================================
+ Coverage 79.61% 80.08% +0.46%
==========================================
Files 157 157
Lines 28826 28826
==========================================
+ Hits 22951 23086 +135
+ Misses 5875 5740 -135
```
| [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/6893?src=pr&el=tree) | Coverage Δ | |
|---|---|---|
| [...c/transformers/modeling\_tf\_transfo\_xl\_utilities.py](https://codecov.io/gh/huggingface/transformers/pull/6893/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl90cmFuc2ZvX3hsX3V0aWxpdGllcy5weQ==) | `10.00% <0.00%> (-76.00%)` | :arrow_down: |
| [src/transformers/modeling\_tf\_transfo\_xl.py](https://codecov.io/gh/huggingface/transformers/pull/6893/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl90cmFuc2ZvX3hsLnB5) | `19.85% <0.00%> (-68.29%)` | :arrow_down: |
| [src/transformers/modeling\_tf\_flaubert.py](https://codecov.io/gh/huggingface/transformers/pull/6893/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9mbGF1YmVydC5weQ==) | `24.53% <0.00%> (-63.81%)` | :arrow_down: |
| [src/transformers/modeling\_marian.py](https://codecov.io/gh/huggingface/transformers/pull/6893/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19tYXJpYW4ucHk=) | `60.00% <0.00%> (-30.00%)` | :arrow_down: |
| [src/transformers/modeling\_roberta.py](https://codecov.io/gh/huggingface/transformers/pull/6893/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19yb2JlcnRhLnB5) | `77.73% <0.00%> (-19.35%)` | :arrow_down: |
| [src/transformers/tokenization\_roberta.py](https://codecov.io/gh/huggingface/transformers/pull/6893/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fcm9iZXJ0YS5weQ==) | `87.67% <0.00%> (-10.96%)` | :arrow_down: |
| [src/transformers/tokenization\_utils\_base.py](https://codecov.io/gh/huggingface/transformers/pull/6893/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fdXRpbHNfYmFzZS5weQ==) | `86.44% <0.00%> (-7.59%)` | :arrow_down: |
| [src/transformers/activations.py](https://codecov.io/gh/huggingface/transformers/pull/6893/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9hY3RpdmF0aW9ucy5weQ==) | `85.00% <0.00%> (-5.00%)` | :arrow_down: |
| [src/transformers/tokenization\_openai.py](https://codecov.io/gh/huggingface/transformers/pull/6893/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fb3BlbmFpLnB5) | `82.57% <0.00%> (-1.52%)` | :arrow_down: |
| [src/transformers/tokenization\_utils\_fast.py](https://codecov.io/gh/huggingface/transformers/pull/6893/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fdXRpbHNfZmFzdC5weQ==) | `92.85% <0.00%> (-1.43%)` | :arrow_down: |
| ... and [17 more](https://codecov.io/gh/huggingface/transformers/pull/6893/diff?src=pr&el=tree-more) | |
------
[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/6893?src=pr&el=continue).
> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)
> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data`
> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/6893?src=pr&el=footer). Last update [d822ab6...3979cda](https://codecov.io/gh/huggingface/transformers/pull/6893?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).
<|||||>👍 |
|
transformers | 6,892 | closed | [t5] Missing requirements in examples/seq2seq | Hi! This is a very small fix. But it seems like some requirements for examples/seq2seq are missing? Namely, rouge-score, gitpython, sacrebleu. Is this intentional (conflicts with other example requirements)? Of course, would be happy to open a PR
## Environment info
- `transformers` version: 8b884dadc6bd70600c98bb35b522beb0005e3f28
- Platform: Linux
- Python version: 3.8
- PyTorch version (GPU?): GPU, '1.6.0+cu101'
- Tensorflow version (GPU?): n/a
- Using GPU in script?: Yes
- Using distributed or parallel set-up in script?: No
Summarization: @sshleifer
## Information
Model I am using (Bert, XLNet ...): T5
The problem arises when using:
* [x] the official example scripts: (give details below)
* [ ] my own modified scripts: (give details below)
The tasks I am working on is:
* [x] an official GLUE/SQUaD task: (give the name) CNN Summary
* [ ] my own task or dataset: (give details below)
## To reproduce
Steps to reproduce the behavior:
1. Create new `conda` env
2. Install requirements.txt in /examples
3. Try running finetune_t5.sh
## Expected behavior
Finetune should run correctly. | 09-01-2020 22:49:09 | 09-01-2020 22:49:09 | Those requirements are all in [here](https://github.com/huggingface/transformers/blob/master/examples/requirements.txt). Are you sure you ran `pip install -r ./examples/requirements.txt` as mentioned in the [README of all examples](https://github.com/huggingface/transformers/tree/master/examples#important-note)?
They are not, and won't be requirements of the main library since they are only used for some specific tasks.<|||||>Huh, probably just a local issue then. Thanks! |
transformers | 6,891 | closed | AttributeError: 'DistilBertConfig' object has no attribute 'return_dict' | ## Environment info
- `transformers` version: 3.1.0
- Platform: Linux-4.19.112+-x86_64-with-Ubuntu-18.04-bionic
- Python version: 3.6.9
- PyTorch version (GPU?): 1.6.0+cu101 (False)
- Tensorflow version (GPU?): 2.3.0 (False)
- Using GPU in script?: <True>
- Using distributed or parallel set-up in script?: <False>
### Who can help
Trainer: @sgugger
nlp datasets: [different repo](https://github.com/huggingface/nlp)
Bart: @sshleifer
examples/bert-loses-patience: @JetRunner
examples/token-classification: @stefan-it
## Information
Model I am using (Bert, XLNet ...):
The problem arises when using:
* [ ] the official example scripts: (give details below)
* [ ] my own modified scripts: (give details below)
```python
def flair_lstm(text):
    sentence = flair.data.Sentence(text)
    flair_sent.predict(sentences=sentence)
    total_sent = sentence.labels
    for label in total_sent:
        value = label.value
        score = label.score
    return '1' if value == 'POSITIVE' else '-1'
```
The tasks I am working on is:
* I'm working with flair to get classification polarities, but the issue seems to refer to transformers
## To reproduce
Steps to reproduce the behavior:
1. write
```python
import flair
flair_sent = flair.models.TextClassifier.load('en-sentiment')
def flair_lstm(text):
    sentence = flair.data.Sentence(text)
    flair_sent.predict(sentences=sentence)
    total_sent = sentence.labels
    for label in total_sent:
        value = label.value
        score = label.score
    return '1' if value == 'POSITIVE' else '-1'
df_test = "some test dataframe"
df_test['flair'] = df_test['word'].apply(lambda x: flair_lstm(x))
```
2. See error:
## Traceback:
```python
---------------------------------------------------------------------------
AttributeError Traceback (most recent call last)
<ipython-input-26-1ee39d7138b3> in <module>()
----> 1 df_test['flair'] = df_test['word'].apply(lambda x: flair_lstm(x))
10 frames
pandas/_libs/lib.pyx in pandas._libs.lib.map_infer()
/usr/local/lib/python3.6/dist-packages/transformers/configuration_utils.py in use_return_dict(self)
217 """
218 # If torchscript is set, force `return_dict=False` to avoid jit errors
--> 219 return self.return_dict and not self.torchscript
220
221 @property
AttributeError: 'DistilBertConfig' object has no attribute 'return_dict'
``` | 09-01-2020 21:40:28 | 09-01-2020 21:40:28 | I don't see the transformers code that creates this bug. In 3.1.0, `DistilBertConfig` definitely has a 'return_dict' `attribute`. I tried to use your code to investigate the error, but it fails on the line `flair_sent = flair.models.TextClassifier.load('en-sentiment')` for me.
Happy to investigate a code sample that uses transformers and creates the bug, but this looks like a problem to report on the flair GitHub. <|||||>> I don't see the transformers code that creates this bug. In 3.1.0, `DistilBertConfig` definitely has a 'return_dict' `attribute`. I tried to use your code to investigate the error, but it fails on the line `flair_sent = flair.models.TextClassifier.load('en-sentiment')` for me.
>
> Happy to investigate a code sample that uses transformers and creates the bug, but this looks like a problem to report on the flair GitHub.
I already did; I just wanted to report the bug here as well, in case. Thank you anyway!<|||||>Don't hesitate to reopen if it ends up being on our side, with a small repro using only transformers ideally.<|||||>It ended up being on flair's side. I'll attach the link here for future reference: [/flairNLP/flair/issues/1841](https://github.com/flairNLP/flair/issues/1841)
transformers | 6,890 | closed | [Docs, Examples] Fix QA example for PT | Fixes #6738.
@sgugger - PyTorch QA example was wrong IMO.
| 09-01-2020 20:24:27 | 09-01-2020 20:24:27 | # [Codecov](https://codecov.io/gh/huggingface/transformers/pull/6890?src=pr&el=h1) Report
> Merging [#6890](https://codecov.io/gh/huggingface/transformers/pull/6890?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/3726754a6c646adcf9cb2135ab7f72dffe074473?el=desc) will **decrease** coverage by `0.49%`.
> The diff coverage is `n/a`.
[](https://codecov.io/gh/huggingface/transformers/pull/6890?src=pr&el=tree)
```diff
@@ Coverage Diff @@
## master #6890 +/- ##
==========================================
- Coverage 80.05% 79.56% -0.50%
==========================================
Files 157 157
Lines 28822 28822
==========================================
- Hits 23074 22932 -142
- Misses 5748 5890 +142
```
| [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/6890?src=pr&el=tree) | Coverage Δ | |
|---|---|---|
| [src/transformers/file\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/6890/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9maWxlX3V0aWxzLnB5) | `82.66% <ø> (ø)` | |
| [src/transformers/modeling\_tf\_openai.py](https://codecov.io/gh/huggingface/transformers/pull/6890/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9vcGVuYWkucHk=) | `22.58% <0.00%> (-72.26%)` | :arrow_down: |
| [src/transformers/configuration\_openai.py](https://codecov.io/gh/huggingface/transformers/pull/6890/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9jb25maWd1cmF0aW9uX29wZW5haS5weQ==) | `34.28% <0.00%> (-62.86%)` | :arrow_down: |
| [src/transformers/tokenization\_albert.py](https://codecov.io/gh/huggingface/transformers/pull/6890/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fYWxiZXJ0LnB5) | `28.84% <0.00%> (-58.66%)` | :arrow_down: |
| [src/transformers/modeling\_openai.py](https://codecov.io/gh/huggingface/transformers/pull/6890/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19vcGVuYWkucHk=) | `23.87% <0.00%> (-48.39%)` | :arrow_down: |
| [src/transformers/modeling\_tf\_distilbert.py](https://codecov.io/gh/huggingface/transformers/pull/6890/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9kaXN0aWxiZXJ0LnB5) | `64.47% <0.00%> (-34.36%)` | :arrow_down: |
| [src/transformers/tokenization\_roberta.py](https://codecov.io/gh/huggingface/transformers/pull/6890/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fcm9iZXJ0YS5weQ==) | `87.67% <0.00%> (-10.96%)` | :arrow_down: |
| [src/transformers/tokenization\_utils\_base.py](https://codecov.io/gh/huggingface/transformers/pull/6890/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fdXRpbHNfYmFzZS5weQ==) | `86.85% <0.00%> (-7.05%)` | :arrow_down: |
| [src/transformers/tokenization\_openai.py](https://codecov.io/gh/huggingface/transformers/pull/6890/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fb3BlbmFpLnB5) | `82.57% <0.00%> (-1.52%)` | :arrow_down: |
| [src/transformers/tokenization\_utils\_fast.py](https://codecov.io/gh/huggingface/transformers/pull/6890/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fdXRpbHNfZmFzdC5weQ==) | `92.85% <0.00%> (-1.43%)` | :arrow_down: |
| ... and [7 more](https://codecov.io/gh/huggingface/transformers/pull/6890/diff?src=pr&el=tree-more) | |
------
[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/6890?src=pr&el=continue).
> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)
> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data`
> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/6890?src=pr&el=footer). Last update [3726754...eb044f1](https://codecov.io/gh/huggingface/transformers/pull/6890?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).
|
transformers | 6,889 | closed | minor docs grammar fixes | just some minor document edits
| 09-01-2020 19:50:08 | 09-01-2020 19:50:08 | # [Codecov](https://codecov.io/gh/huggingface/transformers/pull/6889?src=pr&el=h1) Report
> Merging [#6889](https://codecov.io/gh/huggingface/transformers/pull/6889?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/311992630cfd6c776bc2672d94dcd81624ad023b?el=desc) will **increase** coverage by `0.64%`.
> The diff coverage is `n/a`.
[](https://codecov.io/gh/huggingface/transformers/pull/6889?src=pr&el=tree)
```diff
@@ Coverage Diff @@
## master #6889 +/- ##
==========================================
+ Coverage 79.06% 79.71% +0.64%
==========================================
Files 157 157
Lines 28823 28823
==========================================
+ Hits 22789 22976 +187
+ Misses 6034 5847 -187
```
| [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/6889?src=pr&el=tree) | Coverage Δ | |
|---|---|---|
| [src/transformers/modeling\_tf\_electra.py](https://codecov.io/gh/huggingface/transformers/pull/6889/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9lbGVjdHJhLnB5) | `25.13% <0.00%> (-73.83%)` | :arrow_down: |
| [src/transformers/tokenization\_marian.py](https://codecov.io/gh/huggingface/transformers/pull/6889/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fbWFyaWFuLnB5) | `32.20% <0.00%> (-66.95%)` | :arrow_down: |
| [src/transformers/tokenization\_t5.py](https://codecov.io/gh/huggingface/transformers/pull/6889/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fdDUucHk=) | `59.43% <0.00%> (-35.85%)` | :arrow_down: |
| [src/transformers/modeling\_marian.py](https://codecov.io/gh/huggingface/transformers/pull/6889/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19tYXJpYW4ucHk=) | `60.00% <0.00%> (-30.00%)` | :arrow_down: |
| [src/transformers/data/data\_collator.py](https://codecov.io/gh/huggingface/transformers/pull/6889/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9kYXRhL2RhdGFfY29sbGF0b3IucHk=) | `87.04% <0.00%> (-5.27%)` | :arrow_down: |
| [src/transformers/activations.py](https://codecov.io/gh/huggingface/transformers/pull/6889/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9hY3RpdmF0aW9ucy5weQ==) | `85.00% <0.00%> (-5.00%)` | :arrow_down: |
| [src/transformers/generation\_tf\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/6889/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9nZW5lcmF0aW9uX3RmX3V0aWxzLnB5) | `83.95% <0.00%> (-1.51%)` | :arrow_down: |
| [src/transformers/modeling\_bart.py](https://codecov.io/gh/huggingface/transformers/pull/6889/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19iYXJ0LnB5) | `93.90% <0.00%> (-0.51%)` | :arrow_down: |
| [src/transformers/tokenization\_utils\_base.py](https://codecov.io/gh/huggingface/transformers/pull/6889/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fdXRpbHNfYmFzZS5weQ==) | `93.63% <0.00%> (-0.41%)` | :arrow_down: |
| [src/transformers/file\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/6889/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9maWxlX3V0aWxzLnB5) | `82.41% <0.00%> (-0.26%)` | :arrow_down: |
| ... and [9 more](https://codecov.io/gh/huggingface/transformers/pull/6889/diff?src=pr&el=tree-more) | |
------
[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/6889?src=pr&el=continue).
> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)
> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data`
> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/6889?src=pr&el=footer). Last update [3119926...e119e42](https://codecov.io/gh/huggingface/transformers/pull/6889?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).
|
transformers | 6,888 | closed | Create README.md | Add language meta attribute
Fixes #{issue number}
| 09-01-2020 18:27:28 | 09-01-2020 18:27:28 | # [Codecov](https://codecov.io/gh/huggingface/transformers/pull/6888?src=pr&el=h1) Report
> Merging [#6888](https://codecov.io/gh/huggingface/transformers/pull/6888?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/311992630cfd6c776bc2672d94dcd81624ad023b?el=desc) will **decrease** coverage by `0.84%`.
> The diff coverage is `n/a`.
[](https://codecov.io/gh/huggingface/transformers/pull/6888?src=pr&el=tree)
```diff
@@ Coverage Diff @@
## master #6888 +/- ##
==========================================
- Coverage 79.06% 78.22% -0.85%
==========================================
Files 157 157
Lines 28823 28823
==========================================
- Hits 22789 22546 -243
- Misses 6034 6277 +243
```
| [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/6888?src=pr&el=tree) | Coverage Δ | |
|---|---|---|
| [src/transformers/modeling\_tf\_albert.py](https://codecov.io/gh/huggingface/transformers/pull/6888/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9hbGJlcnQucHk=) | `21.47% <0.00%> (-69.44%)` | :arrow_down: |
| [src/transformers/pipelines.py](https://codecov.io/gh/huggingface/transformers/pull/6888/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9waXBlbGluZXMucHk=) | `25.63% <0.00%> (-54.32%)` | :arrow_down: |
| [src/transformers/tokenization\_pegasus.py](https://codecov.io/gh/huggingface/transformers/pull/6888/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fcGVnYXN1cy5weQ==) | `46.03% <0.00%> (-49.21%)` | :arrow_down: |
| [src/transformers/optimization.py](https://codecov.io/gh/huggingface/transformers/pull/6888/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9vcHRpbWl6YXRpb24ucHk=) | `34.28% <0.00%> (-48.00%)` | :arrow_down: |
| [src/transformers/optimization\_tf.py](https://codecov.io/gh/huggingface/transformers/pull/6888/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9vcHRpbWl6YXRpb25fdGYucHk=) | `33.33% <0.00%> (-24.33%)` | :arrow_down: |
| [src/transformers/modeling\_tf\_auto.py](https://codecov.io/gh/huggingface/transformers/pull/6888/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9hdXRvLnB5) | `48.79% <0.00%> (-18.08%)` | :arrow_down: |
| [src/transformers/data/processors/squad.py](https://codecov.io/gh/huggingface/transformers/pull/6888/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9kYXRhL3Byb2Nlc3NvcnMvc3F1YWQucHk=) | `13.76% <0.00%> (-14.38%)` | :arrow_down: |
| [src/transformers/modeling\_auto.py](https://codecov.io/gh/huggingface/transformers/pull/6888/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19hdXRvLnB5) | `64.36% <0.00%> (-14.37%)` | :arrow_down: |
| [src/transformers/modeling\_mobilebert.py](https://codecov.io/gh/huggingface/transformers/pull/6888/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19tb2JpbGViZXJ0LnB5) | `79.21% <0.00%> (-10.25%)` | :arrow_down: |
| [src/transformers/data/data\_collator.py](https://codecov.io/gh/huggingface/transformers/pull/6888/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9kYXRhL2RhdGFfY29sbGF0b3IucHk=) | `87.04% <0.00%> (-5.27%)` | :arrow_down: |
| ... and [21 more](https://codecov.io/gh/huggingface/transformers/pull/6888/diff?src=pr&el=tree-more) | |
------
[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/6888?src=pr&el=continue).
> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)
> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data`
> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/6888?src=pr&el=footer). Last update [3119926...157d717](https://codecov.io/gh/huggingface/transformers/pull/6888?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).
<|||||>Thanks @mrm8488 , cc @dccuchile |
transformers | 6,887 | closed | Create README.md | Add language meta attribute
Fixes #{issue number}
| 09-01-2020 18:26:16 | 09-01-2020 18:26:16 | |
transformers | 6,886 | closed | Create README.md |
model card for akhooli/xlm-r-large-arabic-sent
| 09-01-2020 18:16:35 | 09-01-2020 18:16:35 | # [Codecov](https://codecov.io/gh/huggingface/transformers/pull/6886?src=pr&el=h1) Report
> Merging [#6886](https://codecov.io/gh/huggingface/transformers/pull/6886?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/311992630cfd6c776bc2672d94dcd81624ad023b?el=desc) will **increase** coverage by `1.04%`.
> The diff coverage is `n/a`.
[](https://codecov.io/gh/huggingface/transformers/pull/6886?src=pr&el=tree)
```diff
@@ Coverage Diff @@
## master #6886 +/- ##
==========================================
+ Coverage 79.06% 80.10% +1.04%
==========================================
Files 157 157
Lines 28823 28823
==========================================
+ Hits 22789 23089 +300
+ Misses 6034 5734 -300
```
| [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/6886?src=pr&el=tree) | Coverage Δ | |
|---|---|---|
| [src/transformers/modeling\_tf\_electra.py](https://codecov.io/gh/huggingface/transformers/pull/6886/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9lbGVjdHJhLnB5) | `25.13% <0.00%> (-73.83%)` | :arrow_down: |
| [src/transformers/tokenization\_bart.py](https://codecov.io/gh/huggingface/transformers/pull/6886/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fYmFydC5weQ==) | `42.10% <0.00%> (-57.90%)` | :arrow_down: |
| [src/transformers/modeling\_marian.py](https://codecov.io/gh/huggingface/transformers/pull/6886/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19tYXJpYW4ucHk=) | `60.00% <0.00%> (-30.00%)` | :arrow_down: |
| [src/transformers/activations.py](https://codecov.io/gh/huggingface/transformers/pull/6886/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9hY3RpdmF0aW9ucy5weQ==) | `85.00% <0.00%> (-5.00%)` | :arrow_down: |
| [src/transformers/modeling\_bart.py](https://codecov.io/gh/huggingface/transformers/pull/6886/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19iYXJ0LnB5) | `93.90% <0.00%> (-0.51%)` | :arrow_down: |
| [src/transformers/tokenization\_utils\_base.py](https://codecov.io/gh/huggingface/transformers/pull/6886/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fdXRpbHNfYmFzZS5weQ==) | `93.76% <0.00%> (-0.28%)` | :arrow_down: |
| [src/transformers/file\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/6886/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9maWxlX3V0aWxzLnB5) | `82.41% <0.00%> (-0.26%)` | :arrow_down: |
| [src/transformers/modeling\_bert.py](https://codecov.io/gh/huggingface/transformers/pull/6886/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19iZXJ0LnB5) | `88.42% <0.00%> (+0.32%)` | :arrow_up: |
| [src/transformers/modeling\_tf\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/6886/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl91dGlscy5weQ==) | `86.97% <0.00%> (+0.65%)` | :arrow_up: |
| [src/transformers/generation\_tf\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/6886/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9nZW5lcmF0aW9uX3RmX3V0aWxzLnB5) | `86.21% <0.00%> (+0.75%)` | :arrow_up: |
| ... and [6 more](https://codecov.io/gh/huggingface/transformers/pull/6886/diff?src=pr&el=tree-more) | |
------
[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/6886?src=pr&el=continue).
> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)
> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data`
> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/6886?src=pr&el=footer). Last update [3119926...0e167b5](https://codecov.io/gh/huggingface/transformers/pull/6886?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).
|
transformers | 6,885 | closed | Create README.md |
Model card for akhooli/mbart-large-cc25-en-ar | 09-01-2020 17:44:33 | 09-01-2020 17:44:33 | @sshleifer want to update the inference API so that the correct pipeline shows up at https://huggingface.co/akhooli/mbart-large-cc25-en-ar ? (cc @mfuntowicz)<|||||>Seems fixed?
https://huggingface.co/akhooli/mbart-large-cc25-en-ar

<|||||>>
>
> Seems fixed?
> https://huggingface.co/akhooli/mbart-large-cc25-en-ar
> 
Sure, just after the model card was merged. Not sure if it was due to the 'translation' tag in the card or some other magic done by your team.<|||||>Just uploaded https://huggingface.co/akhooli/mbart-large-cc25-ar-en and it seems inference type is not recognized automatically. It defaults to fill-mask (model card submitted).<|||||>model card merged. |
transformers | 6,884 | closed | [Electra] fix warning for position ids |
~Fixes 6882~ (might only be partly)
| 09-01-2020 17:34:02 | 09-01-2020 17:34:02 | # [Codecov](https://codecov.io/gh/huggingface/transformers/pull/6884?src=pr&el=h1) Report
> Merging [#6884](https://codecov.io/gh/huggingface/transformers/pull/6884?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/3726754a6c646adcf9cb2135ab7f72dffe074473?el=desc) will **decrease** coverage by `0.22%`.
> The diff coverage is `100.00%`.
[](https://codecov.io/gh/huggingface/transformers/pull/6884?src=pr&el=tree)
```diff
@@ Coverage Diff @@
## master #6884 +/- ##
==========================================
- Coverage 80.05% 79.83% -0.23%
==========================================
Files 157 157
Lines 28822 28823 +1
==========================================
- Hits 23074 23010 -64
- Misses 5748 5813 +65
```
| [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/6884?src=pr&el=tree) | Coverage Δ | |
|---|---|---|
| [src/transformers/modeling\_electra.py](https://codecov.io/gh/huggingface/transformers/pull/6884/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19lbGVjdHJhLnB5) | `82.18% <100.00%> (+0.05%)` | :arrow_up: |
| [src/transformers/modeling\_tf\_electra.py](https://codecov.io/gh/huggingface/transformers/pull/6884/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9lbGVjdHJhLnB5) | `25.13% <0.00%> (-73.83%)` | :arrow_down: |
| [src/transformers/modeling\_tf\_openai.py](https://codecov.io/gh/huggingface/transformers/pull/6884/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9vcGVuYWkucHk=) | `22.58% <0.00%> (-72.26%)` | :arrow_down: |
| [src/transformers/tokenization\_bart.py](https://codecov.io/gh/huggingface/transformers/pull/6884/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fYmFydC5weQ==) | `42.10% <0.00%> (-57.90%)` | :arrow_down: |
| [src/transformers/modeling\_marian.py](https://codecov.io/gh/huggingface/transformers/pull/6884/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19tYXJpYW4ucHk=) | `60.00% <0.00%> (-30.00%)` | :arrow_down: |
| [src/transformers/tokenization\_roberta.py](https://codecov.io/gh/huggingface/transformers/pull/6884/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fcm9iZXJ0YS5weQ==) | `76.71% <0.00%> (-21.92%)` | :arrow_down: |
| [src/transformers/tokenization\_utils\_base.py](https://codecov.io/gh/huggingface/transformers/pull/6884/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fdXRpbHNfYmFzZS5weQ==) | `86.58% <0.00%> (-7.32%)` | :arrow_down: |
| [src/transformers/activations.py](https://codecov.io/gh/huggingface/transformers/pull/6884/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9hY3RpdmF0aW9ucy5weQ==) | `85.00% <0.00%> (-5.00%)` | :arrow_down: |
| [src/transformers/tokenization\_auto.py](https://codecov.io/gh/huggingface/transformers/pull/6884/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fYXV0by5weQ==) | `95.55% <0.00%> (-2.23%)` | :arrow_down: |
| [src/transformers/tokenization\_utils\_fast.py](https://codecov.io/gh/huggingface/transformers/pull/6884/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fdXRpbHNfZmFzdC5weQ==) | `92.14% <0.00%> (-2.15%)` | :arrow_down: |
| ... and [16 more](https://codecov.io/gh/huggingface/transformers/pull/6884/diff?src=pr&el=tree-more) | |
------
[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/6884?src=pr&el=continue).
> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)
> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data`
> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/6884?src=pr&el=footer). Last update [3726754...8f97406](https://codecov.io/gh/huggingface/transformers/pull/6884?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).
|
transformers | 6,883 | closed | Create README.md | Adds model card for Longformer2Roberta | 09-01-2020 17:23:57 | 09-01-2020 17:23:57 | |
transformers | 6,882 | closed | Bert Checkpoint Breaks 3.0.2 -> 3.1.0 due to new buffer in BertEmbeddings | Hi,
Thanks for the great library. I noticed this line being added (https://github.com/huggingface/transformers/blob/v3.1.0/src/transformers/modeling_bert.py#L190) in the latest update.
It breaks checkpoints that were saved when this line wasn't there.
```
Missing key(s) in state_dict: "generator_model.electra.embeddings.position_ids", "discriminator_model.electra.embeddings.position_ids".
```
| 09-01-2020 17:22:02 | 09-01-2020 17:22:02 | I understand it makes the code slightly cleaner; in terms of speed it is most likely negligible (compared to the embedding lookup, for example).
But not sure what to do now as all the pretrained models (that used a lot of compute to pretrain) don't work anymore in the new update.<|||||>Hey @Laksh1997 - note that this line does not break anything. You can neglect warnings about `position_ids` since those are created at instantiation. Will open a PR to fix the warning<|||||>@patrickvonplaten seems to break it for me:
```
Traceback (most recent call last):
  File "/opt/conda/envs/py36/bin/transformervae", line 33, in <module>
    sys.exit(load_entry_point('exs-transformervae', 'console_scripts', 'transformervae')())
  File "/opt/conda/envs/py36/lib/python3.6/site-packages/click/core.py", line 829, in __call__
    return self.main(*args, **kwargs)
  File "/opt/conda/envs/py36/lib/python3.6/site-packages/click/core.py", line 782, in main
    rv = self.invoke(ctx)
  File "/opt/conda/envs/py36/lib/python3.6/site-packages/click/core.py", line 1259, in invoke
    return _process_result(sub_ctx.command.invoke(sub_ctx))
  File "/opt/conda/envs/py36/lib/python3.6/site-packages/click/core.py", line 1066, in invoke
    return ctx.invoke(self.callback, **ctx.params)
  File "/opt/conda/envs/py36/lib/python3.6/site-packages/click/core.py", line 610, in invoke
    return callback(*args, **kwargs)
  File "/app/transformervae/cli.py", line 355, in train
    model = model_cls(hparams, pretrained_model=pretrained_model_path_or_config)
  File "/app/transformervae/models/regression.py", line 35, in __init__
    pretrained_model,
  File "/app/transformervae/models/finetuning_model.py", line 37, in __init__
    self.encoder, self.tokenizer = self.load_pretrained_encoder(pretrained_model)
  File "/app/transformervae/models/finetuning_model.py", line 89, in load_pretrained_encoder
    pl_model = AutoModel.load(pretrained_model)
  File "/app/transformervae/models/automodel.py", line 98, in load
    return model_cls.load(path)
  File "/app/transformervae/models/base.py", line 229, in load
    return cls.load_from_checkpoint(filepath)
  File "/opt/conda/envs/py36/lib/python3.6/site-packages/pytorch_lightning/core/saving.py", line 169, in load_from_checkpoint
    model = cls._load_model_state(checkpoint, *args, **kwargs)
  File "/opt/conda/envs/py36/lib/python3.6/site-packages/pytorch_lightning/core/saving.py", line 207, in _load_model_state
    model.load_state_dict(checkpoint['state_dict'])
  File "/opt/conda/envs/py36/lib/python3.6/site-packages/torch/nn/modules/module.py", line 1045, in load_state_dict
    self.__class__.__name__, "\n\t".join(error_msgs)))
RuntimeError: Error(s) in loading state_dict for ElectraLanguageModel:
    Missing key(s) in state_dict: "generator_model.electra.embeddings.position_ids", "discriminator_model.electra.embeddings.position_ids".
```<|||||>Note, `generator_model.electra` is `ElectraModel`, which uses `BertEmbeddings`.<|||||>Can you send me a code snippet so that I can reproduce your error?
<|||||>It's a big library. But I can try to recreate in a Colab. One sec.<|||||>@patrickvonplaten Colab: https://colab.research.google.com/drive/167CwTImG5T-4c9xeIVEkH9Xrracbn30h?usp=sharing
Let me know if you can access?<|||||>It also breaks for me. The attribute embedding.position_ids can't be loaded if the model artifact is trained with v3.0.2. So it will raise a KeyError<|||||>Hey @Laksh1997, I can't access the notebook - could you make it public for everybody to see? :-) <|||||>@patrickvonplaten apologies. Here is the script:
```python
!pip install transformers==3.0.2
from transformers import ElectraModel, ElectraConfig
import torch
import transformers
print(transformers.__version__)
model = ElectraModel(ElectraConfig())
state_dict = model.state_dict()
torch.save(state_dict, 'checkpoint.pt')
```
```python
!pip install transformers==3.1.0
from transformers import ElectraModel, ElectraConfig
import torch
import transformers
print(transformers.__version__)
model = ElectraModel(ElectraConfig())
state_dict = torch.load('checkpoint.pt')
model.load_state_dict(state_dict)
```<|||||>I encountered the same issue. Old checkpoints (3.0.2) can not be loaded in (3.1.0) due to KeyError.<|||||>@Barcavin @easonnie As a temporary fix, I've just reverted back to 3.0.2. @patrickvonplaten I am hoping something can be done !<|||||>Hi, while we work on patching this issue, you can still use version v3.1.0 by using the `from_pretrained` method. Taking @Laksh1997's example, you would do:
1. Save the checkpoint in `saved_model_location/pytorch_model.bin`
```py
from transformers import ElectraModel, ElectraConfig
import torch
import transformers
print(transformers.__version__)
model = ElectraModel(ElectraConfig())
state_dict = model.state_dict()
torch.save(state_dict, 'saved_model_location/pytorch_model.bin')
```
2. Load it using the method `.from_pretrained`
```py
from transformers import ElectraModel, ElectraConfig
import transformers
print(transformers.__version__)
model = ElectraModel.from_pretrained("saved_model_location", config=ElectraConfig())
``` <|||||>You can also use the `load_state_dict` method with the `strict` option set to `False`:
```py
model.load_state_dict(state_dict, strict=False)
```<|||||>The reason this additional buffer is here now is due to this [PR](https://github.com/huggingface/transformers/pull/5773#issue-449530988).
Is there a reason why you would use the `load_state_dict` instead of `from_pretrained`, as `from_pretrained` exists in part to prevent such issues from happening?<|||||>Hi @LysandreJik
Thanks for the proposed solution.
In my case, I am using Pytorch Lightning which has its own saving and loading infrastructure. Thus the `from_pretrained` method can't exactly be used.
The `strict` flag is a good patch for now.
I think, in general, when building on top of the library, for complex projects one cannot rely on `from_pretrained`, especially if using other ecosystems.<|||||>Using the `strict` flag can enable a number of errors to go undetected, so I would refrain from using it. I think the best solution is to use version 3.0.2 for already trained models until the fix comes out.<|||||>Any update on this @LysandreJik @patrickvonplaten ?<|||||>As the `torch.load` method in `strict` mode does not allow unexpected/missing keys, this is an issue that won't be resolved. Three options are available here:
- Use the recommended `from_pretrained` method, which exists specifically to work around this kind of issues
- Use the `torch.load` method with the `strict` flag set to `False`
- Pin to version v3.0.2 if none of these can be applied.
Minor changes in model infrastructure can unfortunately happen as we try to optimize for efficiency, which will lead to this kind of issues. We're internally working on having our models on the hub be versionable, which should solve most of these problems. It's at least a couple of months away, however.<|||||>@LysandreJik That is unfortunate that the library will probably have to be pinned, as the first two options are unviable for reasons described in this thread. Especially because pretraining large models is computationally quite expensive (100s of GPU hours)...<|||||>You can also use the work-around explained [here](https://github.com/huggingface/transformers/issues/6882#issuecomment-685509938) if you want to convert your weights to the updated architecture.<|||||>This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.
<|||||>Just wanted to add that there is another non-trivial reason why `from_pretrained` might not be useful in all cases: fine-tuning. If I fine-tune BERT's weights on a specific dataset, most likely I will have to use `load_state_dict` afterwards to use the new weights, rather than the original weights that `from_pretrained` would load.<|||||>@LysandreJik @Laksh1997 Setting the [persistent flag ](https://pytorch.org/docs/master/generated/torch.jit.ScriptModule.html#torch.jit.ScriptModule.register_buffer)to False when registering the buffer will avoid adding it to the state_dict and can address the BC issue. <|||||>Hello there,
I encountered the same problem. I was using transformers version 4.7.0, but the checkpoint was trained with transformers 3.0.2. I just did `pip uninstall transformers` and then `pip install transformers==3.0.2` for running the training. Presumably, you can try `model.load_state_dict(state_dict, strict=False)` as well. However, I don't feel comfortable with the latter solution, since `position_ids` **might** be used by the model, and putting random values in when it's not present in the pre-trained checkpoint might ruin the performance. So the safer way is to downgrade transformers, in my opinion.
Hope this helps you out!<|||||>Can someone confirm if the `position_ids` are used by the model and by not loading it correctly would it affect the performance of the model in transfer learning or continuing to train or inference? Thank you<|||||>I think it's safe to use `model.load_state_dict(state_dict, strict=False)` if the only missing information is the `position_ids` buffer. This tensor is indeed used by the model, but it's just a constant tensor containing a list of integers from 0 to the maximum number of position embeddings. The tensor is first created in the constructor of the `BertEmbeddings` class, in this line:
https://github.com/huggingface/transformers/blob/fcf83011dffce3f2e8aad906f07c1ec14668f877/src/transformers/models/bert/modeling_bert.py#L182
As such, it's not really part of the optimizable parameters of the model. This means that it doesn't matter if `position_ids` is not available when calling `load_state_dict`, because the line above will create it anyway in the constructor with the required values.<|||||>> I think it's safe to use `model.load_state_dict(state_dict, strict=False)` if the only missing information is the `position_ids` buffer. This tensor is indeed used by the model, but it's just a constant tensor containing a list of integers from 0 to the maximum number of position embeddings. The tensor is first created in the constructor of the `BertEmbeddings` class, in this line:
>
> https://github.com/huggingface/transformers/blob/fcf83011dffce3f2e8aad906f07c1ec14668f877/src/transformers/models/bert/modeling_bert.py#L182
>
> As such, it's not really part of the optimizable parameters of the model. This means that it doesn't matter if `position_ids` is not available when calling `load_state_dict`, because the line above will create it anyway in the constructor with the required values.
Thank you very much @dfdazac for your detailed reply. |
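For illustration of the `persistent=False` suggestion mentioned in the thread above, here is a minimal, self-contained sketch (a toy module, not the actual `BertEmbeddings` code; it assumes a PyTorch version recent enough to support the `persistent` argument of `register_buffer`):
```python
import torch
import torch.nn as nn

class ToyEmbeddings(nn.Module):
    """Toy module showing how a constant position_ids buffer can be kept out of the state_dict."""

    def __init__(self, max_position_embeddings: int = 512):
        super().__init__()
        # persistent=False means the buffer is recreated in __init__ but never serialized,
        # so checkpoints saved without it still load with strict=True.
        self.register_buffer(
            "position_ids",
            torch.arange(max_position_embeddings).expand((1, -1)),
            persistent=False,
        )

module = ToyEmbeddings()
print("position_ids" in module.state_dict())  # False: the buffer is not part of the checkpoint
```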
transformers | 6,881 | closed | 'BertEmbeddings' object has no attribute 'bias' while converting tf checkpoint | When trying to convert the checkpoint of a self pre-trained tensorflow BERT model (using the **[create-pretraining.py][1]** script from google) into a pytorch model using **convert_bert_original_tf_checkpoint_to_pytorch.py**
I always end up with the following error:
**AttributeError: 'BertEmbeddings' object has no attribute 'bias'**
The init_vars names (just the first ones) look like this:
```
['bert/embeddings/layer_normalization/beta', 'bert/embeddings/layer_normalization/beta/adam_m', 'bert/embeddings/layer_normalization/beta/adam_v', 'bert/embeddings/layer_normalization/gamma', 'bert/embeddings/layer_normalization/gamma/adam_m', 'bert/embeddings/layer_normalization/gamma/adam_v']
```
Code that produces the error looks like this:
```
for m_name in name:
if re.fullmatch(r"[A-Za-z]+_\d+", m_name):
scope_names = re.split(r"_(\d+)", m_name)
else:
scope_names = [m_name]
if scope_names[0] == "kernel" or scope_names[0] == "gamma":
pointer = getattr(pointer, "weight")
elif scope_names[0] == "output_bias" or scope_names[0] == "beta":
print(scope_names)
pointer = getattr(pointer, "bias")
elif scope_names[0] == "output_weights":
pointer = getattr(pointer, "weight")
elif scope_names[0] == "squad":
pointer = getattr(pointer, "classifier")
else:
try:
pointer = getattr(pointer, scope_names[0])
except AttributeError:
logger.info("Skipping {}".format("/".join(name)))
```
Going through all the names and getting the right attributes from the model. When it comes to the Layer Normalization in the BertEmbeddings the script produces an error. Did anyone else encounter that error before? How did you fix this? Did I mix something up with the tensorflow versions? Thanks for your help in advance!
Here again the whole stacktrace:
```
Traceback (most recent call last):
File "convert_bert_original_tf_checkpoint_to_pytorch.py", line 62, in <module>
convert_tf_checkpoint_to_pytorch(args.tf_checkpoint_path, args.bert_config_file, args.pytorch_dump_path)
File "convert_bert_original_tf_checkpoint_to_pytorch.py", line 37, in convert_tf_checkpoint_to_pytorch
load_tf_weights_in_bert(model, config, tf_checkpoint_path)
File "/modeling_bert.py", line 136, in load_tf_weights_in_bert
pointer = getattr(pointer, "bias")
File "module.py", line 594, in __getattr__
type(self).__name__, name))
AttributeError: 'BertEmbeddings' object has no attribute 'bias'
```
Bert Config is the following:
```
Building PyTorch model from configuration: BertConfig {
"attention_probs_dropout_prob": 0.1,
"gradient_checkpointing": false,
"hidden_act": "gelu",
"hidden_dropout_prob": 0.1,
"hidden_size": 512,
"initializer_range": 0.02,
"intermediate_size": 2048,
"layer_norm_eps": 1e-12,
"max_position_embeddings": 512,
"model_type": "bert",
"num_attention_heads": 8,
"num_hidden_layers": 8,
"pad_token_id": 0,
"type_vocab_size": 2,
"vocab_size": 30522
}
```
[1]: https://github.com/google-research/bert/blob/master/run_pretraining.py | 09-01-2020 16:14:50 | 09-01-2020 16:14:50 | It was just the naming of "layer_norm" instead of "LayerNorm" I changed the script and now it works.<|||||>@blueberry-cake which script was that naming in? <|||||>@blueberry-cake could you tell me the details of how you solve this problem? I have this problem, too,I do not understand the word "It was just the naming of "layer_norm" instead of "LayerNorm" I changed the script and now it works." Thanks for your help in advance!<|||||>Hi, I encountered the same problem. I spent quite a while googling online but didn't get a solution. Could you please let me know if you get the solution? @blueberry-cake @roxannemiller @ankunw <|||||>> Hi, I encountered the same problem. I spent quite a while googling online but didn't get a solution. Could you please let me know if you get the solution? @blueberry-cake @roxannemiller @ankunw
maybe you could use the latest transformers and have a try<|||||>No, it still doesn't work. Sigh :(<|||||>So I solved this problem with other people's help. Basically, I needed to change the key names in my TF1 checkpoints. Here is the code. For further details, please see: https://colab.research.google.com/github/tensorflow/docs/blob/master/site/en/guide/migrate/migrating_checkpoints.ipynb?hl=id#scrollTo=NPQsXQveuQiC
```
import re
import tensorflow as tf
import tensorflow.compat.v1 as tf1

def change_name(checkpoint_path, output_prefix):
  """Rewrite a TF1 checkpoint so the TF2-style layer-norm variable names match
  what load_tf_weights_in_bert expects.

  Args:
    checkpoint_path: Path to the TF1 checkpoint.
    output_prefix: Path prefix to the converted checkpoint.
  Returns:
    Path to the converted checkpoint.
  """
  vars = {}
  reader = tf.train.load_checkpoint(checkpoint_path)
  dtypes = reader.get_variable_to_dtype_map()
  for key in dtypes.keys():
    new_key = key
    # Rename "layer_normalization*" back to "LayerNorm" in every variable name.
    if key == 'bert/embeddings/layer_normalization/beta' or key == 'bert/embeddings/layer_normalization/gamma':
      new_key = key.replace('layer_normalization', 'LayerNorm')
    elif re.search(r'layer_normalization_+\d+', key):
      new_key = re.sub(r'layer_normalization_+\d+', 'LayerNorm', key)
    elif re.search(r'layer_normalization', key):
      new_key = re.sub(r'layer_normalization', 'LayerNorm', key)
    print(new_key)
    vars[new_key] = tf.Variable(reader.get_tensor(key))
  # Saving the renamed variables with a TF1 Saver writes a checkpoint in the old format.
  return tf1.train.Saver(var_list=vars).save(sess=None, save_path=output_prefix)
```<|||||>> So I solved this problem with other people's help. Basically, I need to change the key name in my tf1 checkpoints. Here is the code. For further details, please see: https://colab.research.google.com/github/tensorflow/docs/blob/master/site/en/guide/migrate/migrating_checkpoints.ipynb?hl=id#scrollTo=NPQsXQveuQiC
>
> ```
> import re
> def change_name(checkpoint_path, output_prefix):
> ckpt = tf.train.Checkpoint(vars={name: variable})
> ckpt.restore(converted_ckpt_path)
> """
> Args:
> checkpoint_path: Path to the TF1 checkpoint.
> output_prefix: Path prefix to the converted checkpoint.
>
> Returns:
> Path to the converted checkpoint.
> """
> vars = {}
> reader = tf.train.load_checkpoint(checkpoint_path)
> dtypes = reader.get_variable_to_dtype_map()
>
> for key in dtypes.keys():
> new_key = key
> if key=='bert/embeddings/layer_normalization/beta' or key=='bert/embeddings/layer_normalization/gamma':
> new_key=key.replace('layer_normalization','LayerNorm')
> elif re.search('layer_normalization_+\d+',key):
> new_key = re.sub('layer_normalization_+\d+','LayerNorm',key)
> elif re.search('layer_normalization',key):
> new_key = re.sub('layer_normalization','LayerNorm',key)
> print(new_key)
> vars[new_key] = tf.Variable(reader.get_tensor(key))
>
> return tf1.train.Saver(var_list=vars).save(sess=None, save_path=output_prefix)
> ```
Dear friend, is there a complete integration of your code in "convert_bert_original_tf_checkpoint_to_pytorch.py"? I don't know how to adjust it using your code. |
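For completeness, a hypothetical usage sketch of the renaming helper defined earlier in this thread; the checkpoint paths below are placeholders, not files from the original report:
```python
# Rewrite the TF2-style variable names, then convert as usual.
converted_prefix = change_name("pretrain_output/model.ckpt-100000", "renamed_ckpt/bert_model.ckpt")
print(converted_prefix)

# The renamed checkpoint can then be fed to the regular conversion script, e.g.:
#   python convert_bert_original_tf_checkpoint_to_pytorch.py \
#       --tf_checkpoint_path renamed_ckpt/bert_model.ckpt \
#       --bert_config_file bert_config.json \
#       --pytorch_dump_path pytorch_model.bin
```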
transformers | 6,880 | closed | Fix TF Trainer for TPU | This PR tries to fix the trainer for TensorFlow by updating the order of some steps (sketched below):
1. Dataset preprocessing
2. Strategy creation
3. Model creation
Instead of
1. Strategy creation
2. Model creation
3. Dataset preprocessing
Fixes #6672
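As a rough illustration of the reordering described above (a sketch under assumptions, not the actual `TFTrainer` code; the dataset and model builders here are trivial stand-ins):
```python
import tensorflow as tf

def build_dataset():
    return tf.data.Dataset.from_tensor_slices(tf.zeros((8, 4))).batch(2)

def create_model():
    return tf.keras.Sequential([tf.keras.layers.Dense(1)])

train_dataset = build_dataset()  # 1. dataset preprocessing happens first

resolver = tf.distribute.cluster_resolver.TPUClusterResolver()
tf.config.experimental_connect_to_cluster(resolver)
tf.tpu.experimental.initialize_tpu_system(resolver)
strategy = tf.distribute.TPUStrategy(resolver)  # 2. then the distribution strategy is created

with strategy.scope():
    model = create_model()  # 3. model variables are created under the strategy

dist_dataset = strategy.experimental_distribute_dataset(train_dataset)
```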
| 09-01-2020 15:40:14 | 09-01-2020 15:40:14 | I have added a model_init function in the PyTorch Trainer to support hp-search. Is it possible to use this instead of changing the `args`? This would make a very big difference between the PT Trainer and TF Trainer.<|||||>OK, I will check this. Thanks.<|||||>This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.
<|||||>Hello did you have any success fixing this? Can I help? I'm on a tight art student collegiate budget and tpu speed would be awesome if not necessary. I've spent like 20-30 hours on fixing the tpu issue myself and no luck. Any help getting run_clm.py on a tpu so I can quickly iterate would be awesome. But generally I'd love to move mainly to tpu but I'm not sure its there yet. New to open source really want to learn as much as possible. Can I help?<|||||>@arccoxx This PR should be closed because we have identified two different issues:
1. The first one doesn't come from Transformers but from TensorFlow. To make it short, TPUs don't handle `tf.data.Dataset.from_generator`; Google is currently working on it and we have to wait until they release a fix.
2. Currently you cannot train an LM from scratch with any TF model. We are currently working on this, and it will be possible in our next release.
So for your project the best solution would be to use the PyTorch version, which works on TPU and lets you train any LM model from scratch.<|||||>> @arccoxx This PR should be closed because we have identified two different issues:
>
> 1. The first one don't come from Transformers but from TensorFlow. To make it short, TPU don't handle `tf.data.Dataset.from_generator`, Google is currently working on it and we have to wait they release the fix once they have one.
> 2. Currently you cannot train a LM from scratch with any TF model. We are currently working on this, and it will be possible in our next release.
>
> So for your project the best solution would be to use the PyTorch version that works on TPU and you can train from scratch any LM model.
I was not able to get any pytorch version to run on xla. Is there any reference notebook that could be linked? I tried finetuning in native pytorch, running (pytorch) tuner, run_language_modeling with multiple transformers library versions 2.1.0-2.9.1, and run_clm with 3.4.0 all with no luck. Ive also tried building a pytorch lightning module and no luck. As the speedup would be that helpful (provided it can handle gpt2 medium) it would be awesome to figure out reduce these compatibility issues. My hope is to then use the tpu in a more complicated model that will use this fine tuned model. Any help would be super appreciated. Thank you! |
transformers | 6,879 | closed | Add cache_dir to save features TextDataset | This is in case the dataset is in a read-only filesystem, which is the case
in tests (GKE TPU tests).
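A hypothetical usage example of the argument added by this PR (the file paths below are placeholders):
```python
from transformers import GPT2Tokenizer, TextDataset

tokenizer = GPT2Tokenizer.from_pretrained("gpt2")
dataset = TextDataset(
    tokenizer=tokenizer,
    file_path="/readonly-data/train.txt",  # the raw text can live on a read-only filesystem
    block_size=128,
    cache_dir="/tmp/feature_cache",        # cached features are written here instead of next to the data
)
```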
| 09-01-2020 15:15:29 | 09-01-2020 15:15:29 | Thanks for the quick reviews @LysandreJik and @sgugger! And yep updating black version did it (I think) thanks. |
transformers | 6,878 | closed | [EncoderDecoder] Add xlm-roberta to encoder decoder | This PR adds `XLM-Roberta` to the EncoderDecoder framework by adding a new `XLMRobertaForCausalLM` to the models.
The XLM-Roberta EncoderDecoder can be used as follows:
```python
from transformers import EncoderDecoderModel
import torch
model = EncoderDecoderModel.from_encoder_decoder_pretrained("xlm-roberta-base", "xlm-roberta-base")
input_ids = torch.tensor([10 * [0]])
outputs = model(input_ids, decoder_input_ids=input_ids, labels=input_ids, return_dict=True)
print("Loss", outputs.loss)
```
| 09-01-2020 14:17:00 | 09-01-2020 14:17:00 | # [Codecov](https://codecov.io/gh/huggingface/transformers/pull/6878?src=pr&el=h1) Report
> Merging [#6878](https://codecov.io/gh/huggingface/transformers/pull/6878?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/3726754a6c646adcf9cb2135ab7f72dffe074473?el=desc) will **decrease** coverage by `3.21%`.
> The diff coverage is `100.00%`.
[](https://codecov.io/gh/huggingface/transformers/pull/6878?src=pr&el=tree)
```diff
@@ Coverage Diff @@
## master #6878 +/- ##
==========================================
- Coverage 80.05% 76.84% -3.22%
==========================================
Files 157 157
Lines 28822 28825 +3
==========================================
- Hits 23074 22150 -924
- Misses 5748 6675 +927
```
| [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/6878?src=pr&el=tree) | Coverage Δ | |
|---|---|---|
| [src/transformers/\_\_init\_\_.py](https://codecov.io/gh/huggingface/transformers/pull/6878/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9fX2luaXRfXy5weQ==) | `99.28% <ø> (ø)` | |
| [src/transformers/modeling\_auto.py](https://codecov.io/gh/huggingface/transformers/pull/6878/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19hdXRvLnB5) | `64.36% <ø> (-14.37%)` | :arrow_down: |
| [src/transformers/modeling\_xlm\_roberta.py](https://codecov.io/gh/huggingface/transformers/pull/6878/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ194bG1fcm9iZXJ0YS5weQ==) | `100.00% <100.00%> (ø)` | |
| [...c/transformers/modeling\_tf\_transfo\_xl\_utilities.py](https://codecov.io/gh/huggingface/transformers/pull/6878/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl90cmFuc2ZvX3hsX3V0aWxpdGllcy5weQ==) | `10.00% <0.00%> (-76.00%)` | :arrow_down: |
| [src/transformers/modeling\_tf\_transfo\_xl.py](https://codecov.io/gh/huggingface/transformers/pull/6878/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl90cmFuc2ZvX3hsLnB5) | `19.85% <0.00%> (-68.29%)` | :arrow_down: |
| [src/transformers/modeling\_transfo\_xl.py](https://codecov.io/gh/huggingface/transformers/pull/6878/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190cmFuc2ZvX3hsLnB5) | `16.25% <0.00%> (-63.52%)` | :arrow_down: |
| [src/transformers/configuration\_transfo\_xl.py](https://codecov.io/gh/huggingface/transformers/pull/6878/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9jb25maWd1cmF0aW9uX3RyYW5zZm9feGwucHk=) | `27.27% <0.00%> (-61.82%)` | :arrow_down: |
| [src/transformers/modeling\_transfo\_xl\_utilities.py](https://codecov.io/gh/huggingface/transformers/pull/6878/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190cmFuc2ZvX3hsX3V0aWxpdGllcy5weQ==) | `6.71% <0.00%> (-59.71%)` | :arrow_down: |
| [src/transformers/pipelines.py](https://codecov.io/gh/huggingface/transformers/pull/6878/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9waXBlbGluZXMucHk=) | `25.63% <0.00%> (-54.32%)` | :arrow_down: |
| [src/transformers/optimization\_tf.py](https://codecov.io/gh/huggingface/transformers/pull/6878/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9vcHRpbWl6YXRpb25fdGYucHk=) | `33.33% <0.00%> (-24.33%)` | :arrow_down: |
| ... and [24 more](https://codecov.io/gh/huggingface/transformers/pull/6878/diff?src=pr&el=tree-more) | |
------
[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/6878?src=pr&el=continue).
> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)
> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data`
> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/6878?src=pr&el=footer). Last update [3726754...06cc500](https://codecov.io/gh/huggingface/transformers/pull/6878?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).
<|||||>cc @laibamehnaz <|||||>> cc @laibamehnaz
Thank you :) |
transformers | 6,877 | closed | [WIP, TF] replace keras dense by keras.layers.DenseEinsum | Fixes #6771.
This PR might speed up TF runtime on GPU.
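To illustrate the general idea (a sketch only, not the PR diff; the shapes and names here are assumptions): a plain `Dense` projection needs an extra reshape to split attention heads, while an einsum-based projection produces the head-split output directly.
```python
import tensorflow as tf

batch, seq, hidden, heads = 2, 16, 768, 12
head_size = hidden // heads
x = tf.random.uniform((batch, seq, hidden))

# Plain Dense projection followed by a reshape into heads:
dense = tf.keras.layers.Dense(hidden)
q = tf.reshape(dense(x), (batch, seq, heads, head_size))

# Einsum-based projection producing the head-split output in one step:
kernel = tf.Variable(tf.random.normal((hidden, heads, head_size)))
q_einsum = tf.einsum("bsh,hnd->bsnd", x, kernel)

print(q.shape, q_einsum.shape)  # both (2, 16, 12, 64)
```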
Note that the change requires TF 2.3.0 | 09-01-2020 11:25:35 | 09-01-2020 11:25:35 | # [Codecov](https://codecov.io/gh/huggingface/transformers/pull/6877?src=pr&el=h1) Report
> Merging [#6877](https://codecov.io/gh/huggingface/transformers/pull/6877?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/a32d85f0d405be53117b96075eef2875d2185892?el=desc) will **increase** coverage by `0.16%`.
> The diff coverage is `100.00%`.
[](https://codecov.io/gh/huggingface/transformers/pull/6877?src=pr&el=tree)
```diff
@@ Coverage Diff @@
## master #6877 +/- ##
==========================================
+ Coverage 80.48% 80.65% +0.16%
==========================================
Files 157 157
Lines 28794 28796 +2
==========================================
+ Hits 23175 23224 +49
+ Misses 5619 5572 -47
```
| [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/6877?src=pr&el=tree) | Coverage Δ | |
|---|---|---|
| [src/transformers/modeling\_tf\_bert.py](https://codecov.io/gh/huggingface/transformers/pull/6877/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9iZXJ0LnB5) | `98.38% <100.00%> (+<0.01%)` | :arrow_up: |
| [src/transformers/modeling\_tf\_openai.py](https://codecov.io/gh/huggingface/transformers/pull/6877/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9vcGVuYWkucHk=) | `22.58% <0.00%> (-72.26%)` | :arrow_down: |
| [src/transformers/modeling\_tf\_flaubert.py](https://codecov.io/gh/huggingface/transformers/pull/6877/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9mbGF1YmVydC5weQ==) | `24.53% <0.00%> (-63.81%)` | :arrow_down: |
| [src/transformers/modeling\_marian.py](https://codecov.io/gh/huggingface/transformers/pull/6877/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19tYXJpYW4ucHk=) | `60.00% <0.00%> (-30.00%)` | :arrow_down: |
| [src/transformers/modeling\_t5.py](https://codecov.io/gh/huggingface/transformers/pull/6877/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190NS5weQ==) | `77.63% <0.00%> (-6.21%)` | :arrow_down: |
| [src/transformers/data/data\_collator.py](https://codecov.io/gh/huggingface/transformers/pull/6877/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9kYXRhL2RhdGFfY29sbGF0b3IucHk=) | `86.63% <0.00%> (-5.27%)` | :arrow_down: |
| [src/transformers/activations.py](https://codecov.io/gh/huggingface/transformers/pull/6877/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9hY3RpdmF0aW9ucy5weQ==) | `85.00% <0.00%> (-5.00%)` | :arrow_down: |
| [src/transformers/tokenization\_auto.py](https://codecov.io/gh/huggingface/transformers/pull/6877/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fYXV0by5weQ==) | `95.55% <0.00%> (-2.23%)` | :arrow_down: |
| [src/transformers/configuration\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/6877/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9jb25maWd1cmF0aW9uX3V0aWxzLnB5) | `95.97% <0.00%> (-0.68%)` | :arrow_down: |
| [src/transformers/modeling\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/6877/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ191dGlscy5weQ==) | `87.50% <0.00%> (-0.56%)` | :arrow_down: |
| ... and [11 more](https://codecov.io/gh/huggingface/transformers/pull/6877/diff?src=pr&el=tree-more) | |
------
[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/6877?src=pr&el=continue).
> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)
> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data`
> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/6877?src=pr&el=footer). Last update [a32d85f...ddbccd8](https://codecov.io/gh/huggingface/transformers/pull/6877?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).
<|||||>As a comparision. When running this line on current `master`:
```
TF_CPP_MIN_LOG_LEVEL=3 python examples/benchmarking/run_benchmark_tf.py --models bert-base-uncased --no_memory --batch_sizes 1 --sequence_lengths 128 256 512
```
one gets the following results:
```
==================== INFERENCE - SPEED - RESULT ====================
--------------------------------------------------------------------------------
Model Name Batch Size Seq Length Time in s
--------------------------------------------------------------------------------
bert-base-uncased 1 128 0.006
bert-base-uncased 1 256 0.009
bert-base-uncased 1 512 0.017
--------------------------------------------------------------------------------
==================== ENVIRONMENT INFORMATION ====================
- transformers_version: 3.0.2
- framework: TensorFlow
- eager_mode: False
- use_xla: False
- framework_version: 2.3.0
- python_version: 3.6.10
- system: Linux
- cpu: x86_64
- architecture: 64bit
- date: 2020-09-01
- time: 11:23:30.836691
- fp16: False
- use_multiprocessing: True
- only_pretrain_model: False
- cpu_ram_mb: 32088
- use_gpu: True
- num_gpus: 1
- gpu: TITAN RTX
- gpu_ram_mb: 24217
- gpu_power_watts: 280.0
- gpu_performance_state: 2
- use_tpu: False
```
for a TITAN RTX GPU.
When running the above line on this branch, one gets the following results:
```
==================== INFERENCE - SPEED - RESULT ====================
--------------------------------------------------------------------------------
Model Name Batch Size Seq Length Time in s
--------------------------------------------------------------------------------
bert-base-uncased 1 128 0.006
bert-base-uncased 1 256 0.008
bert-base-uncased 1 512 0.016
--------------------------------------------------------------------------------
==================== ENVIRONMENT INFORMATION ====================
- transformers_version: 3.0.2
- framework: TensorFlow
- eager_mode: False
- use_xla: False
- framework_version: 2.3.0
- python_version: 3.6.10
- system: Linux
- cpu: x86_64
- architecture: 64bit
- date: 2020-09-01
- time: 11:28:12.021389
- fp16: False
- use_multiprocessing: True
- only_pretrain_model: False
- cpu_ram_mb: 32088
- use_gpu: True
- num_gpus: 1
- gpu: TITAN RTX
- gpu_ram_mb: 24217
- gpu_power_watts: 280.0
- gpu_performance_state: 2
- use_tpu: False
```
So, I cannot see a real difference here :-/ @jlei2<|||||>I will see whether the benchmark results are better on a V100 GPU.
@jlei2 - could you post the code you used to benchmark HF Bert vs. Google Bert? This would help a lot for reproducibility.<|||||>oops didn't see it was [WIP]<|||||>@patrickvonplaten can you update your code with the version given by @jlei2 [here](https://github.com/jlei2/transformers/pull/2) when you have time please. Thanks a lot! |
transformers | 6,876 | closed | [TF T5] Possible Error using TF T5 with Keras | See: https://discuss.huggingface.co/t/how-to-train-tft5forconditionalgeneration-model/888.
| 09-01-2020 10:28:04 | 09-01-2020 10:28:04 | This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.
<|||||>This issue has been stale for 1 month. |
transformers | 6,875 | closed | Restore PaddingStrategy.MAX_LENGTH on QAPipeline while no v2. | QA Pipeline will be slower but will work in all situations.
Need to shift towards pipeline v2 with priority on QA Pipeline to provide a workaround.
Signed-off-by: Morgan Funtowicz <[email protected]>
| 09-01-2020 09:33:26 | 09-01-2020 09:33:26 | |
transformers | 6,874 | closed | gradient_accumulation_steps in trainer_tf | part-1:
```python
self.total_train_batch_size = self.args.train_batch_size * self.args.gradient_accumulation_steps
ds = (
    self.train_dataset.repeat()
    .shuffle(self.num_train_examples, seed=self.args.seed)
    .batch(self.total_train_batch_size, drop_remainder=self.args.dataloader_drop_last)
    .prefetch(tf.data.experimental.AUTOTUNE)
)
```
part-2:
```python
for _ in tf.range(self.args.gradient_accumulation_steps):
    reduced_features = {
        k: ft[: self.args.train_batch_size // self.args.n_replicas] for k, ft in features.items()
    }
    reduced_labels = labels[: self.args.train_batch_size // self.args.n_replicas]
    self.training_step(reduced_features, reduced_labels)
    features = {
        k: tf.concat(
            [ft[self.args.train_batch_size // self.args.n_replicas :], reduced_features[k]],
            axis=0,
        )
        for k, ft in features.items()
    }
    labels = tf.concat(
        [labels[self.args.train_batch_size // self.args.n_replicas :], reduced_labels], axis=0
    )
```
The implementation of gradient accumulation seems unfriendly to users who have small GPU memory.
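For reference, a generic gradient-accumulation pattern that only needs one small sub-batch in memory at a time (a framework-level sketch under assumptions, not the `TFTrainer` implementation; `model`, `optimizer`, `loss_fn`, and `small_batches` are assumed to exist):
```python
import tensorflow as tf

accum_steps = 4
accumulated = [tf.zeros_like(v) for v in model.trainable_variables]

for step, (features, labels) in enumerate(small_batches):
    with tf.GradientTape() as tape:
        loss = loss_fn(labels, model(features, training=True)) / accum_steps
    grads = tape.gradient(loss, model.trainable_variables)
    accumulated = [a + g for a, g in zip(accumulated, grads)]
    if (step + 1) % accum_steps == 0:
        optimizer.apply_gradients(zip(accumulated, model.trainable_variables))
        accumulated = [tf.zeros_like(v) for v in model.trainable_variables]
```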
 | 09-01-2020 09:15:50 | 09-01-2020 09:15:50 | What costs the most is the gradient computation; storing a few predictions is OK in general. I can run a sequence classification training with a batch size of 32 at sequence length 128 and an accumulation of 3 on an 8GB GPU.
Did you encounter during your experiments a memory issue? If yes, let me know and I will look at it.<|||||>This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.
|
transformers | 6,873 | closed | Memory blowup with TPU Trainer in master | ## Environment info
<!-- You can run the command `transformers-cli env` and copy-and-paste its output below.
Don't forget to fill out the missing fields in that output! -->
- `transformers` version: 3.0.2 (master)
- Platform: Linux-4.19.112+-x86_64-with-Ubuntu-18.04-bionic
- Python version: 3.6.9
- PyTorch version (GPU?): 1.7.0a0+8fb7c50 (False)
- Tensorflow version (GPU?): not installed (NA)
- Using GPU in script?: No
- Using distributed or parallel set-up in script?:Yes, TPU v2-8
### Who can help
<!-- Your issue will be replied to more quickly if you can figure out the right person to tag with @
If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**.
Please tag fewer than 3 people.
-->
@sgugger @sshleifer @patrickvonplaten
## Information
Recent changes to the Trainer for TPU have resulted in a memory blowup during training.
On a machine with 208GB of RAM [sic], this was the memory profile with the master branch on 20th August.

This only shows an increase in memory during evaluation (which is another memory leak bug, https://github.com/huggingface/transformers/issues/5509). If you throw enough RAM at the problem, it stays under control.
After the recent changes the memory profile has become this.

Look how quickly the memory blows up even on this huge machine. I have implemented some optimizations to save memory where I am caching only a single copy of features on redis-server but that is not enough now. The most interesting thing to see is that now the memory also increases during training and not just evaluation.
After these changes, Trainer for TPUs has become unusable for training any practical model and I request you to please look into fixing this.
Model I am using (Bert, XLNet ...): T5
The problem arises when using:
* [ ] the official example scripts: (give details below)
* [x] my own modified scripts: (give details below)
The tasks I am working on is:
* [ ] an official GLUE/SQUaD task: (give the name)
* [x] my own task or dataset: (give details below)
## To reproduce
Steps to reproduce the behavior:
Use the TPU example run_language_modelling to reproduce.
<!-- If you have code snippets, error messages, stack traces please provide them here as well.
Important! Use code tags to correctly format your code. See https://help.github.com/en/github/writing-on-github/creating-and-highlighting-code-blocks#syntax-highlighting
Do not use screenshots, as they are hard to read and (more importantly) don't allow others to copy-and-paste your code.-->
## Expected behavior
Memory stays constant with the number of training and evaluation iterations.
<!-- A clear and concise description of what you would expect to happen. -->
| 09-01-2020 07:57:24 | 09-01-2020 07:57:24 | Indeed this seems very problematic. Let's look into it cc @sgugger <|||||>Some hints - The main process takes 3.5x more RAM than the other processes individually.<|||||>Do you have a commit id that gives the first graph, so we can look into the diff?<|||||>I think I'm having a similar issue. I'm using `n1-highmem-16 (16 vCPUs, 104 GB memory)` with `v3-8` TPU for pre-training a RoBERTa model on 24GB text data.
I was able to load the dataset using `nlp` (https://github.com/huggingface/nlp/issues/532), but it eats up all the available memory during training.
<img width="860" alt="Screen Shot 2020-09-01 at 9 19 17 PM" src="https://user-images.githubusercontent.com/20531705/91850804-213cb700-ec99-11ea-853a-2e8a433bfbff.png">
(master branch on Aug 25 installed with `pip install git+https://github.com/huggingface/transformers`. Not sure how to check a commit id...)
<|||||>Same question. I was wondering are there any strategies implemented to save memory?
Something like lazyDataloader?<|||||>@sgugger I retried a run with the commit id 86c07e634f3624cdf3f9e4e81ca53b808c4b22c6 (20 Aug) and it seems to not have this memory blowup that we see on the current master

<|||||>@shizhediao Because the default behavior of Huggingface TPU Trainer is to load features into memory 8 times into all the processes separately, it quickly eats up vast amounts of system memory.
There are two options to save memory-
1. Write a lazy loading Dataset whose `__getitem__` function quickly loads features from disk when provided with the key. This could save the most memory. Even though I haven't tested this I suspect the disk random lookup and IO in the critical path of the training loop could become a bottleneck.
2. Cache the features in memory only once and share them among all the processes. I did this by using an in-memory key value server Redis by dumping all the pickled features to redis server and writing the `__getitem__` function where it loads the key from the redis server when requested. I saw empirically that this made by training about 20% faster on my workload than loading all the features 8 times into memory (probably due to cache thrashing). I used unix sockets to make the lookups even faster.<|||||>Thanks for your reply!
Would you like to share your code or are there any open-sourced code I can refer to?
Thanks!<|||||>Sure, this is in the `__init__` function of my Dataset function. As compared to Huggingface TextDataset, this particular way sped up training by 20% for me while using around 1/7 memory and generating features faster (due to less tail-latency in multiprocessing and not writing and reading features from disk)
```
file_paths_copy = copy.deepcopy(file_paths)
file_paths_copy = sorted(file_paths_copy) #multiprocess env, we want all processes to have the files in the same order
self.redis = redis.Redis(unix_socket_path="/tmp/redis.sock")
self.pipe = self.redis.pipeline()
file_lineno_map = {}
line_counter = 0
for file in file_paths_copy:
num_lines = count_lines(file)
file_lineno_map[file] = line_counter
line_counter += num_lines
# This is so that lines in each file gets assigned a unique line number in a multi-process env
self.num_examples = line_counter
for index, file_path in enumerate(file_paths_copy): # Can be multiple files
if index % xm.xrt_world_size() == xm.get_ordinal():
# If this process is assigned to process the following file, so we can use 8 cpu cores to load data parallely
logger.info("Creating features from dataset file at %s", file_path)
with open(file_path, encoding="utf-8") as f:
for line_num, line in enumerate(f.read().splitlines()): # Text to Text file where each file is an example and source and target is separated by a tab symbol
if (len(line) > 0 and not line.isspace()):
if line.find('\t') == -1:
logger.warning(
f"Encountered a line without tab separator in file {file_path} line {line_num+1}"
)
continue
input, output = line.split('\t')
features = self.text_pair_to_features(input, output)
key = line_num + file_lineno_map[
file_path] if not self.val else "val-" + str(
line_num + file_lineno_map[file_path]) # The name of the redis key
self.pipe.set(key, pickle.dumps(features))
if line_num % self.num_operations_pipelined == 1:
self.pipe.execute() # So that we only dump to redis as a batch, can speed up writing
self.pipe.execute()
if is_torch_tpu_available():
xm.rendezvous(tag="featuresGenerated") # So that the multi-process environment all wait for each other before doing anything else
```
With the `__getitem__` function being
```
def __getitem__(self, i) -> Dict[str, torch.Tensor]:
if self.val:
key = f"val-{i}"
else:
key = i
example = pickle.loads(self.redis.get(key))
return {"input_ids": example[0], "attention_masks": example[1], "labels": example[2]}
```<|||||>Thanks so much!<|||||>Cool dataset!
`Seq2SeqDataset` is also lazy, but no redis. I wonder the speed difference: https://github.com/huggingface/transformers/blob/master/examples/seq2seq/utils.py#L159
@patil-suraj is this going to be an issue for `Seq2SeqTrainer`? We can't read all examples into memory for MT.<|||||>@sshleifer Not sure. I have yet to experiment with `Seq2SeqTrainer` on TPU so can't say much. But I have managed to successfully train t5-base and on TPU using `Trainer` with lazy dataset.<|||||>@sshleifer @patil-suraj I studied the linecache way of doing things and the reasons for not going with linecache for me were
- Our data files are on mounted network disks so first byte access time would be too large.
- Data sharded in multiple files leading to linecache being less effective as compared to just one file.
- I also suspect how much would linecache help because we are not reading lines sequentially where caching would have helped but rather reading random lines where reading a whole block of text from disk would still mean that on average we only use only one line from the block.
- I am also generally wary of involving disks in the critical path of the training loop as disks are very slow. Given that TPU requires higher input feed rate and evidence that Huggingface Trainer only uses a single CPU worker rather than many which could have helped with CPU generating features from disk in parallel while the TPU was working. See https://github.com/huggingface/transformers/issues/6316 . I believe if multiple workers were allowed in DataLoader then loading features from disk would be a valid solution.<|||||>@misrasaurabh1 We just merged a simple fix that was obviously leaking memory for training (non-detached tensors) and that came from a recent change, so it might very well be the source of your leaks. Could you confirm whether or not current master has the leak or not? If so, using the same fix in the evaluation loop should also fix the eval memory leak we currently have.<|||||>Yes, with the latest master the memory leak during training is not there anymore! Memory usage seems to be constant during training.

Although if the same `.detach()` method would fix the evaluation memory leak, that would be huge! I could go down from a 32-CPU 208GB machine I am using right now to something like 16-CPU 64GB machine resulting in big monetary savings over time.<|||||>Will look at the evaluation leak a bit more. From a first read, it looks like everything is properly detached, so it seems like this leak has another cause.
Thanks a lot for checking!<|||||>
> @shizhediao Because the default behavior of Huggingface TPU Trainer is to load features into memory 8 times into all the processes separately, it quickly eats up vast amounts of system memory.
> There are two options to save memory-
>
> 1. Write a lazy loading Dataset whose `__getitem__` function quickly loads features from disk when provided with the key. This could save the most memory. Even though I haven't tested this I suspect the disk random lookup and IO in the critical path of the training loop could become a bottleneck.
> 2. Cache the features in memory only once and share them among all the processes. I did this by using an in-memory key value server Redis by dumping all the pickled features to redis server and writing the `__getitem__` function where it loads the key from the redis server when requested. I saw empirically that this made by training about 20% faster on my workload than loading all the features 8 times into memory (probably due to cache thrashing). I used unix sockets to make the lookups even faster.
Recently I had the same issue and such behavior is on GPU as well. One good solution is to use memory-mapped dataset, which is in spirit similar to Option 1 here. I used the awesome [huggingface/datasets](https://github.com/huggingface/datasets) library which provides memory-mapped dataset class automatically through Apache Arrow and it is fairly easy to use. I reduced my RAM usage from 90G to 6G and it won't grow with the dataset size.<|||||>Is there any update on this? Is the memory leak during evaluation fixed?<|||||>@sgugger Is the memory leak during evaluation fixed by https://github.com/huggingface/transformers/pull/7767 ?<|||||>I don't know, as I have not had time to investigate the leak during evaluation on TPUs yet.<|||||>This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.
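A minimal sketch of the memory-mapped approach mentioned just above, using the `datasets` library (formerly `nlp`); the file name and choice of tokenizer are assumptions:
```python
from datasets import load_dataset
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("roberta-base")
dataset = load_dataset("text", data_files={"train": "train.txt"})["train"]
dataset = dataset.map(
    lambda batch: tokenizer(batch["text"], truncation=True, max_length=512),
    batched=True,
)
dataset.set_format(type="torch", columns=["input_ids", "attention_mask"])
# The Arrow table backing the dataset is memory-mapped from disk, so RAM usage
# stays roughly constant regardless of the dataset size.
```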
|
transformers | 6,872 | closed | transformer multitasking | Doing multitasking using transformers library such as this one https://github.com/JayYip/bert-multitask-learning. cws|NER|weibo_ner&weibo_cws, one problem will be sampled at each turn, say weibo_ner&weibo_cws, then weibo_ner and weibo_cws will trained for this turn together. | 09-01-2020 07:45:56 | 09-01-2020 07:45:56 | This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.
|
transformers | 6,871 | closed | Albert loads model on both CPU and GPU at the same time | # ❓ Questions & Help
<!-- The GitHub issue tracker is primarly intended for bugs, feature requests,
new models and benchmarks, and migration questions. For all other questions,
we direct you to the Hugging Face forum: https://discuss.huggingface.co/ .
You can also try Stack Overflow (SO) where a whole community of PyTorch and
Tensorflow enthusiast can help you out. In this case, make sure to tag your
question with the right deep learning framework as well as the
huggingface-transformers tag:
https://stackoverflow.com/questions/tagged/huggingface-transformers
-->
## Details
```python
import torch
from transformers import AlbertTokenizer, AlbertForQuestionAnswering

MODEL_DEVICE = torch.device('cuda')
MODEL_PATH = './models'
tokenizer = AlbertTokenizer.from_pretrained(MODEL_PATH)
qa_model = AlbertForQuestionAnswering.from_pretrained(MODEL_PATH).to(MODEL_DEVICE)
```
I am using the above code to load the model.
This model occupies both RAM (1.5 GB) and GPU memory (650 MB).
I have specified the torch device as cuda, but it still doesn't behave as expected.
When "cpu" is specified it works well and doesn't load into the GPU, but when cuda is specified it loads into CPU and GPU memory as well.
I tried "cuda" and "cuda:0".
Any solution to this?
| 09-01-2020 07:06:50 | 09-01-2020 07:06:50 | This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.
|
transformers | 6,870 | closed | Updata tokenization_auto.py | Update tokenization_auto.py to handle inherited config classes.
If ConfigB is inherited from ConfigA, then isinstance(ConfigB(), ConfigA) is true. We hope to use TokenizerB, but TokenizerA will be used incorrectly.
| 09-01-2020 06:45:46 | 09-01-2020 06:45:46 | Hi, we have this [test](https://github.com/huggingface/transformers/blob/master/tests/test_configuration_auto.py#L45) to prevent exactly this. In what situation did you face an issue?<|||||>@LysandreJik There is no problem for pretrained models of huggingface transformers, because the config class of them are inherited from "PretrainedConfig". However, for users who want to add new models, their self-defined config class may be inherited from config class of some existing pretrained models. For example, I am trying to add a new model based on "BART" and my NewBartConig is inherited from "BartConig". My new tokenizer will not be used because a “NewBartConig” object is an instance of "BartConig" and bart tokenizer will be used incorrectly.<|||||>Yes, but we have similar issues with models, for example the `RobertaModel` inherits from `BertModel`. The test I mentioned above checks that (the example here is for configurations but we have the same test for models and tokenizers).
Currently the way to make sure your tokenizer is used, and not the one it depends on, is to put your tokenizer above the one it's inheriting from in the mapping. The for loop will then see this one first and use it instead of the next one.<|||||>@LysandreJik Changing the order of items in TOKENIZER_MAPPING can solve the problem indeed. But getting rid of the dependence on mapping order would be more user-friendly, right? Close the pull request if you don't think the PR is necessary. Thanks for the review |
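A toy sketch of the ordering issue being discussed (the classes below are stand-ins defined inline, not the real transformers classes):
```python
from collections import OrderedDict

class BartConfig: ...
class NewBartConfig(BartConfig): ...  # user-defined config inheriting from an existing one

class BartTokenizer: ...
class NewBartTokenizer: ...

TOKENIZER_MAPPING = OrderedDict(
    [
        (NewBartConfig, NewBartTokenizer),  # subclass entry must come before its parent
        (BartConfig, BartTokenizer),
    ]
)

def pick_tokenizer(config):
    # Simplified version of the isinstance-based lookup described above.
    for config_class, tokenizer_class in TOKENIZER_MAPPING.items():
        if isinstance(config, config_class):
            return tokenizer_class
    raise ValueError("No tokenizer found for this config")

assert pick_tokenizer(NewBartConfig()) is NewBartTokenizer
```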
transformers | 6,869 | closed | How does relative distance is computed for cross-attention in T5 model? | Let's assume we have a source sequence of length 7 and a target sequence of length 5. In cross-attention sublayer at each decoder layer, every token in the target sequence attends every token in the input sequence.
In T5 model, we compute the relative distance to compute bias using query-len and key-len as in https://github.com/huggingface/transformers/blob/master/src/transformers/modeling_t5.py#L289.
My question is, how the distance between two tokens are computed if they belong to the source and target sequence. The relative distance matrix (5 x 7) would look like:
```
tensor([[ 0, 1, 2, 3, 4, 5, 6],
[-1, 0, 1, 2, 3, 4, 5],
[-2, -1, 0, 1, 2, 3, 4],
[-3, -2, -1, 0, 1, 2, 3],
[-4, -3, -2, -1, 0, 1, 2]])
```
Once we put the distances into bucket for the cross-attention, it would look like:
```
tensor([[0, 0, 0, 0, 0, 0, 0],
[1, 0, 0, 0, 0, 0, 0],
[2, 1, 0, 0, 0, 0, 0],
[3, 2, 1, 0, 0, 0, 0],
[4, 3, 2, 1, 0, 0, 0]])
```
Given that cross-attention is part of the decoder, the `bidirectional` flag is set to False. So while decoding at step `i`, the decoder will treat all the source tokens at positions `i, i+1, i+2, ...` as having a distance of `0` from the target token at position `i`. Is this correct?
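For reference, a small sketch that reproduces the relative-distance matrix above (a simplified mirror of the logic in `modeling_t5.py`, not the library code itself):
```python
import torch

query_length, key_length = 5, 7  # target length 5, source length 7
context_position = torch.arange(query_length)[:, None]
memory_position = torch.arange(key_length)[None, :]
relative_position = memory_position - context_position
print(relative_position)  # matches the 5 x 7 matrix above

# With bidirectional=False, positive distances are clamped to zero before bucketing,
# which is why every source position at or after the current target step falls into bucket 0.
clamped = -torch.min(relative_position, torch.zeros_like(relative_position))
print(clamped)  # matches the bucketed matrix above
```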
| 09-01-2020 06:18:40 | 09-01-2020 06:18:40 | This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.
|
transformers | 6,868 | closed | MarianMTModel.generate error: Segmentation fault (core dumped) | When I use MarianMTModel to generate sequences, that is running "translated = model.generate(**tokenizer.prepare_seq2seq_batch(src_text))", the python development environment will quit automatically. There is a message, i. e. Segmentation fault (core dumped).
When I run the program in Jupyter, it will also quit. Leave the message like this, "Kernel restarting The kernel appears to have died. It will restart automatically" | 09-01-2020 06:13:06 | 09-01-2020 06:13:06 | I run the same demo program on another server. The program can work properly. |
transformers | 6,867 | closed | [doc] typos | fixed typos
| 09-01-2020 06:09:49 | 09-01-2020 06:09:49 | # [Codecov](https://codecov.io/gh/huggingface/transformers/pull/6867?src=pr&el=h1) Report
> Merging [#6867](https://codecov.io/gh/huggingface/transformers/pull/6867?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/59a6a32a61a87f9a1cccb57c3b4df725384d34ae?el=desc) will **decrease** coverage by `1.75%`.
> The diff coverage is `n/a`.
[](https://codecov.io/gh/huggingface/transformers/pull/6867?src=pr&el=tree)
```diff
@@ Coverage Diff @@
## master #6867 +/- ##
==========================================
- Coverage 79.91% 78.16% -1.76%
==========================================
Files 157 157
Lines 28795 28795
==========================================
- Hits 23012 22508 -504
- Misses 5783 6287 +504
```
| [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/6867?src=pr&el=tree) | Coverage Δ | |
|---|---|---|
| [src/transformers/modeling\_tf\_mobilebert.py](https://codecov.io/gh/huggingface/transformers/pull/6867/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9tb2JpbGViZXJ0LnB5) | `24.55% <0.00%> (-72.36%)` | :arrow_down: |
| [src/transformers/modeling\_tf\_albert.py](https://codecov.io/gh/huggingface/transformers/pull/6867/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9hbGJlcnQucHk=) | `21.47% <0.00%> (-69.44%)` | :arrow_down: |
| [src/transformers/trainer.py](https://codecov.io/gh/huggingface/transformers/pull/6867/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90cmFpbmVyLnB5) | `13.54% <0.00%> (-41.13%)` | :arrow_down: |
| [src/transformers/tokenization\_mbart.py](https://codecov.io/gh/huggingface/transformers/pull/6867/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fbWJhcnQucHk=) | `57.14% <0.00%> (-39.69%)` | :arrow_down: |
| [src/transformers/training\_args.py](https://codecov.io/gh/huggingface/transformers/pull/6867/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90cmFpbmluZ19hcmdzLnB5) | `66.66% <0.00%> (-25.00%)` | :arrow_down: |
| [src/transformers/modeling\_roberta.py](https://codecov.io/gh/huggingface/transformers/pull/6867/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19yb2JlcnRhLnB5) | `77.37% <0.00%> (-19.71%)` | :arrow_down: |
| [src/transformers/trainer\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/6867/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90cmFpbmVyX3V0aWxzLnB5) | `59.57% <0.00%> (-19.15%)` | :arrow_down: |
| [src/transformers/data/datasets/glue.py](https://codecov.io/gh/huggingface/transformers/pull/6867/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9kYXRhL2RhdGFzZXRzL2dsdWUucHk=) | `83.58% <0.00%> (-2.99%)` | :arrow_down: |
| [src/transformers/file\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/6867/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9maWxlX3V0aWxzLnB5) | `82.41% <0.00%> (-0.26%)` | :arrow_down: |
| [src/transformers/generation\_tf\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/6867/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9nZW5lcmF0aW9uX3RmX3V0aWxzLnB5) | `86.46% <0.00%> (+0.25%)` | :arrow_up: |
| ... and [16 more](https://codecov.io/gh/huggingface/transformers/pull/6867/diff?src=pr&el=tree-more) | |
------
[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/6867?src=pr&el=continue).
> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)
> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data`
> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/6867?src=pr&el=footer). Last update [59a6a32...e59e26a](https://codecov.io/gh/huggingface/transformers/pull/6867?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).
|
transformers | 6,866 | closed | test_tf_common: remove un_used mixin class parameters |
Fixes #6590
@sshleifer Is that all for this issue? are there any other cleaning left which I did not understand ❗
Thanks | 09-01-2020 05:36:05 | 09-01-2020 05:36:05 | # [Codecov](https://codecov.io/gh/huggingface/transformers/pull/6866?src=pr&el=h1) Report
> Merging [#6866](https://codecov.io/gh/huggingface/transformers/pull/6866?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/59a6a32a61a87f9a1cccb57c3b4df725384d34ae?el=desc) will **decrease** coverage by `0.24%`.
> The diff coverage is `n/a`.
[](https://codecov.io/gh/huggingface/transformers/pull/6866?src=pr&el=tree)
```diff
@@ Coverage Diff @@
## master #6866 +/- ##
==========================================
- Coverage 79.91% 79.67% -0.25%
==========================================
Files 157 157
Lines 28795 28795
==========================================
- Hits 23012 22942 -70
- Misses 5783 5853 +70
```
| [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/6866?src=pr&el=tree) | Coverage Δ | |
|---|---|---|
| [src/transformers/tokenization\_transfo\_xl.py](https://codecov.io/gh/huggingface/transformers/pull/6866/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fdHJhbnNmb194bC5weQ==) | `20.53% <0.00%> (-21.21%)` | :arrow_down: |
| [src/transformers/generation\_tf\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/6866/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9nZW5lcmF0aW9uX3RmX3V0aWxzLnB5) | `83.95% <0.00%> (-2.26%)` | :arrow_down: |
| [src/transformers/file\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/6866/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9maWxlX3V0aWxzLnB5) | `82.41% <0.00%> (-0.26%)` | :arrow_down: |
| [src/transformers/data/data\_collator.py](https://codecov.io/gh/huggingface/transformers/pull/6866/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9kYXRhL2RhdGFfY29sbGF0b3IucHk=) | `92.30% <0.00%> (+0.40%)` | :arrow_up: |
| [src/transformers/tokenization\_reformer.py](https://codecov.io/gh/huggingface/transformers/pull/6866/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fcmVmb3JtZXIucHk=) | `95.00% <0.00%> (+13.33%)` | :arrow_up: |
| [src/transformers/tokenization\_xlnet.py](https://codecov.io/gh/huggingface/transformers/pull/6866/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25feGxuZXQucHk=) | `90.09% <0.00%> (+23.42%)` | :arrow_up: |
------
[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/6866?src=pr&el=continue).
> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)
> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data`
> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/6866?src=pr&el=footer). Last update [59a6a32...b4f1cad](https://codecov.io/gh/huggingface/transformers/pull/6866?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).
<|||||>The next step is to look for other places those class attributes are defined and remove them:
```bash
$ git grep test_pruning | grep tf
tests/test_modeling_tf_distilbert.py: test_pruning = True
tests/test_modeling_tf_longformer.py: test_pruning = False # pruning is not supported
tests/test_modeling_tf_transfo_xl.py: test_pruning = False
tests/test_modeling_tf_xlnet.py: test_pruning = False
```
```bash
$ git grep test_torchscript | grep tf
tests/test_modeling_tf_common.py: test_torchscript = True
tests/test_modeling_tf_distilbert.py: test_torchscript = True
tests/test_modeling_tf_longformer.py: test_torchscript = False
tests/test_modeling_tf_transfo_xl.py: test_torchscript = False
```
<|||||>Thanks for the support.
I also found `test_head_masking` which was unused. So deleted it too. Let me know if you didn't want that to happen.
```bash
$ git grep -e "test_head" | grep tf
tests/test_modeling_tf_distilbert.py: test_head_masking = True
tests/test_modeling_tf_longformer.py: test_headmasking = False # head masking is not supported
```
Thanks
PS: A suggestion for any other issue I could pick up would be great. I am looking under the `help wanted` label, etc. |
transformers | 6,865 | closed | Is it possible to finetune reformer model for summarization task? | # ❓ Questions & Help
<!-- The GitHub issue tracker is primarly intended for bugs, feature requests,
new models and benchmarks, and migration questions. For all other questions,
we direct you to the Hugging Face forum: https://discuss.huggingface.co/ .
You can also try Stack Overflow (SO) where a whole community of PyTorch and
Tensorflow enthusiast can help you out. In this case, make sure to tag your
question with the right deep learning framework as well as the
huggingface-transformers tag:
https://stackoverflow.com/questions/tagged/huggingface-transformers
-->
## Details
<!-- Description of your issue -->
<!-- You should first ask your question on the forum or SO, and only if
you didn't get an answer ask it here on GitHub. -->
**A link to original question on the forum/Stack Overflow**: | 09-01-2020 04:58:27 | 09-01-2020 04:58:27 | There are no pre-trained reformer weights yet -> so that's a no sadly<|||||>Following this issue for updates.<|||||>This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.
|
transformers | 6,864 | closed | How to save the whole model as SavedModel format for inference? | # ❓ Questions & Help
<!-- The GitHub issue tracker is primarly intended for bugs, feature requests,
new models and benchmarks, and migration questions. For all other questions,
we direct you to the Hugging Face forum: https://discuss.huggingface.co/ .
You can also try Stack Overflow (SO) where a whole community of PyTorch and
Tensorflow enthusiast can help you out. In this case, make sure to tag your
question with the right deep learning framework as well as the
huggingface-transformers tag:
https://stackoverflow.com/questions/tagged/huggingface-transformers
-->
## Details
Hi, I want to save the model in the SavedModel format using model.save(), but when I load it, the input format is fixed and only input_ids can be used. How can I pass inputs like {'input_ids': XX, 'attention_mask': XX}?
Code:

and then it reports:

| 09-01-2020 04:32:44 | 09-01-2020 04:32:44 | The model architecture is simple:

<|||||>Sorry for the late reply. This is because you did not respect the signature of `TFBertMainLayer` in order to properly use it you can do:
```python
import tensorflow as tf
from transformers import TFBertForSequenceClassification
a = tf.constant([[1,2,3,4,5]])
b = tf.constant([[1,1,1,1,1]])
inp = {"input_ids": a, "attention_mask": b}
model = TFBertForSequenceClassification.from_pretrained("bert-base-cased")
model._saved_model_inputs_spec = None
model._set_save_spec(inp)
tf.saved_model.save(model, "/tmp")
model = tf.keras.models.load_model("/tmp")
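# The reloaded model can then be fed the same dict signature it was saved with,
# e.g. (illustrative, not part of the original answer): outputs = model(inp)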
```<|||||>This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.
|
transformers | 6,863 | closed | special token inconsistency for [UNK] token | ## Environment info
- `transformers` version: 3.0.2
- Platform: Linux-4.15.0-108-generic-x86_64-with-Ubuntu-18.04-bionic
- Python version: 3.6.9
### Who can help
tokenizers: @mfuntowicz
## Information
Bert tokenizer treats [UNK] token as **special** during `tokenizer.convert_ids_to_tokens(...,skip_special_tokens=True) `
while `tokenizer(...,return_special_tokens_mask=True)` doesn't (for obvious reasons).
I think it would be better to preserve [UNK] tokens in `convert_ids_to_tokens` for consistency of the term "**special token**".
## To reproduce
```python
from transformers import AutoTokenizer
tokenizer = AutoTokenizer.from_pretrained("bert-base-cased")
# fast tokenizer also has the same problem
# tokenizer = AutoTokenizer.from_pretrained("bert-large-cased",use_fast=True)
text="sentence 한 셔 word"
print('tokens:',tokenizer.tokenize(text))
tokens=tokenizer(text,return_special_tokens_mask=True)
print("input_ids:",tokens["input_ids"])
print("special_token_mask:",tokens["special_tokens_mask"])
no_special=tokenizer.convert_ids_to_tokens(tokens["input_ids"], skip_special_tokens=True)
special=tokenizer.convert_ids_to_tokens(tokens["input_ids"])
print('tokens from ids (skip special): ',no_special)
print('tokens from ids (keep special): ', special)
print('special tokens',tokenizer.all_special_tokens)
```
## Expected behavior
Also, I think keeping [UNK] would be a better behavior: `convert_ids_to_tokens` is used in inference pipelines, and using `skip_special_tokens` to get rid of [CLS]/[SEP]/[PAD] tokens leads to an unintended loss of [UNK] tokens, which is important.
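In the meantime, a small workaround sketch (reusing `tokenizer` and `tokens` from the snippet above) is to filter the ids manually instead of relying on `skip_special_tokens`, so that only the control tokens are dropped:
```python
# drop [CLS]/[SEP]/[PAD] etc. but keep [UNK]
ids_to_drop = set(tokenizer.all_special_ids) - {tokenizer.unk_token_id}
kept_ids = [i for i in tokens["input_ids"] if i not in ids_to_drop]
print(tokenizer.convert_ids_to_tokens(kept_ids))
```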
Related: https://github.com/huggingface/transformers/issues/4391
| 09-01-2020 03:43:31 | 09-01-2020 03:43:31 | I think that's reasonable, the point of `skip_special_tokens` isn't to skip unknown tokens. cf @mfuntowicz @thomwolf @n1t0 <|||||>This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.
|
transformers | 6,862 | closed | cleanup: fix typo chunk_size_feed_forward in configuration_utils.py | Problem:
```python
self.chunk_size_feed_forward = kwargs.pop("chunk_size_feed_forward", 0) # line 178
self.chunk_size_feed_forward = kwargs.pop("chunk_size_feed_forwar", 0)# line 198
```
in https://github.com/huggingface/transformers/blob/master/src/transformers/configuration_utils.py#L178
Solution:
delete L198 | 09-01-2020 03:23:53 | 09-01-2020 03:23:53 | # [Codecov](https://codecov.io/gh/huggingface/transformers/pull/6862?src=pr&el=h1) Report
> Merging [#6862](https://codecov.io/gh/huggingface/transformers/pull/6862?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/367235ee52537ff7cada5e1c5c41cdd78731f092?el=desc) will **increase** coverage by `3.77%`.
> The diff coverage is `n/a`.
[](https://codecov.io/gh/huggingface/transformers/pull/6862?src=pr&el=tree)
```diff
@@ Coverage Diff @@
## master #6862 +/- ##
==========================================
+ Coverage 76.27% 80.04% +3.77%
==========================================
Files 157 157
Lines 28795 28794 -1
==========================================
+ Hits 21963 23049 +1086
+ Misses 6832 5745 -1087
```
| [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/6862?src=pr&el=tree) | Coverage Δ | |
|---|---|---|
| [src/transformers/configuration\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/6862/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9jb25maWd1cmF0aW9uX3V0aWxzLnB5) | `95.97% <ø> (-0.70%)` | :arrow_down: |
| [src/transformers/tokenization\_roberta.py](https://codecov.io/gh/huggingface/transformers/pull/6862/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fcm9iZXJ0YS5weQ==) | `87.67% <0.00%> (-10.96%)` | :arrow_down: |
| [src/transformers/tokenization\_utils\_base.py](https://codecov.io/gh/huggingface/transformers/pull/6862/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fdXRpbHNfYmFzZS5weQ==) | `86.58% <0.00%> (-7.19%)` | :arrow_down: |
| [src/transformers/modeling\_t5.py](https://codecov.io/gh/huggingface/transformers/pull/6862/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190NS5weQ==) | `77.63% <0.00%> (-6.21%)` | :arrow_down: |
| [src/transformers/tokenization\_auto.py](https://codecov.io/gh/huggingface/transformers/pull/6862/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fYXV0by5weQ==) | `95.55% <0.00%> (-2.23%)` | :arrow_down: |
| [src/transformers/tokenization\_openai.py](https://codecov.io/gh/huggingface/transformers/pull/6862/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fb3BlbmFpLnB5) | `82.57% <0.00%> (-1.52%)` | :arrow_down: |
| [src/transformers/generation\_tf\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/6862/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9nZW5lcmF0aW9uX3RmX3V0aWxzLnB5) | `83.95% <0.00%> (-1.51%)` | :arrow_down: |
| [src/transformers/tokenization\_utils\_fast.py](https://codecov.io/gh/huggingface/transformers/pull/6862/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fdXRpbHNfZmFzdC5weQ==) | `92.85% <0.00%> (-1.43%)` | :arrow_down: |
| [src/transformers/modeling\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/6862/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ191dGlscy5weQ==) | `87.50% <0.00%> (-0.56%)` | :arrow_down: |
| ... and [12 more](https://codecov.io/gh/huggingface/transformers/pull/6862/diff?src=pr&el=tree-more) | |
------
[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/6862?src=pr&el=continue).
> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)
> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data`
> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/6862?src=pr&el=footer). Last update [367235e...b1b2b17](https://codecov.io/gh/huggingface/transformers/pull/6862?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).
|
transformers | 6,861 | closed | add a final report to all pytest jobs | we had it added for one job (run_examples_torch), please add it to all pytest jobs - we need the output of what tests were run to debug the codecov issue. thank you!
To remind `pytest -rA` finalizes the test run with a report like this:
```
PASSED examples/seq2seq/test_seq2seq_examples.py::test_seq2seq_dataset_truncation[patrickvonplaten/t5-tiny-random]
PASSED examples/seq2seq/test_seq2seq_examples.py::test_seq2seq_dataset_truncation[sshleifer/bart-tiny-random]
PASSED examples/seq2seq/test_seq2seq_examples.py::test_seq2seq_dataset_truncation[google/pegasus-xsum]
PASSED examples/seq2seq/test_seq2seq_examples.py::test_legacy_dataset_truncation[sshleifer/bart-tiny-random]
PASSED examples/seq2seq/test_seq2seq_examples.py::test_legacy_dataset_truncation[bert-base-cased]
PASSED examples/test_examples.py::ExamplesTests::test_run_language_modeling
PASSED examples/test_examples.py::ExamplesTests::test_run_pl_glue
PASSED examples/test_examples.py::ExamplesTests::test_run_squad
PASSED examples/bert-loses-patience/test_run_glue_with_pabee.py::PabeeTests::test_run_glue
SKIPPED [1] examples/seq2seq/test_bash_script.py:25: too slow to run on CPU
SKIPPED [1] examples/seq2seq/test_bash_script.py:32: too slow to run on CPU
``` | 09-01-2020 02:10:50 | 09-01-2020 02:10:50 | # [Codecov](https://codecov.io/gh/huggingface/transformers/pull/6861?src=pr&el=h1) Report
> Merging [#6861](https://codecov.io/gh/huggingface/transformers/pull/6861?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/431ab19d7a467905018b165bc29b2a1130c1b188?el=desc) will **increase** coverage by `3.38%`.
> The diff coverage is `n/a`.
[](https://codecov.io/gh/huggingface/transformers/pull/6861?src=pr&el=tree)
```diff
@@ Coverage Diff @@
## master #6861 +/- ##
==========================================
+ Coverage 76.81% 80.20% +3.38%
==========================================
Files 157 157
Lines 28795 28795
==========================================
+ Hits 22118 23094 +976
+ Misses 6677 5701 -976
```
| [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/6861?src=pr&el=tree) | Coverage Δ | |
|---|---|---|
| [src/transformers/modeling\_tf\_electra.py](https://codecov.io/gh/huggingface/transformers/pull/6861/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9lbGVjdHJhLnB5) | `25.13% <0.00%> (-73.83%)` | :arrow_down: |
| [src/transformers/modeling\_tf\_openai.py](https://codecov.io/gh/huggingface/transformers/pull/6861/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9vcGVuYWkucHk=) | `22.58% <0.00%> (-72.26%)` | :arrow_down: |
| [src/transformers/tokenization\_bart.py](https://codecov.io/gh/huggingface/transformers/pull/6861/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fYmFydC5weQ==) | `42.10% <0.00%> (-57.90%)` | :arrow_down: |
| [src/transformers/modeling\_marian.py](https://codecov.io/gh/huggingface/transformers/pull/6861/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19tYXJpYW4ucHk=) | `60.00% <0.00%> (-30.00%)` | :arrow_down: |
| [src/transformers/modeling\_t5.py](https://codecov.io/gh/huggingface/transformers/pull/6861/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190NS5weQ==) | `77.63% <0.00%> (-6.02%)` | :arrow_down: |
| [src/transformers/activations.py](https://codecov.io/gh/huggingface/transformers/pull/6861/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9hY3RpdmF0aW9ucy5weQ==) | `85.00% <0.00%> (-5.00%)` | :arrow_down: |
| [src/transformers/tokenization\_dpr.py](https://codecov.io/gh/huggingface/transformers/pull/6861/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fZHByLnB5) | `53.15% <0.00%> (-4.51%)` | :arrow_down: |
| [src/transformers/tokenization\_auto.py](https://codecov.io/gh/huggingface/transformers/pull/6861/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fYXV0by5weQ==) | `95.55% <0.00%> (-2.23%)` | :arrow_down: |
| [src/transformers/modeling\_openai.py](https://codecov.io/gh/huggingface/transformers/pull/6861/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19vcGVuYWkucHk=) | `80.96% <0.00%> (-1.30%)` | :arrow_down: |
| [src/transformers/modeling\_bart.py](https://codecov.io/gh/huggingface/transformers/pull/6861/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19iYXJ0LnB5) | `95.08% <0.00%> (-0.51%)` | :arrow_down: |
| ... and [25 more](https://codecov.io/gh/huggingface/transformers/pull/6861/diff?src=pr&el=tree-more) | |
------
[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/6861?src=pr&el=continue).
> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)
> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data`
> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/6861?src=pr&el=footer). Last update [431ab19...39d237c](https://codecov.io/gh/huggingface/transformers/pull/6861?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).
|
transformers | 6,860 | closed | Support nested data structures for input data | # 🚀 Feature request
Support in `train.py` for more complicated/nested data inputs rather than the current assumed single-layer dictionary structure.
## Motivation
In many applications -- for example, mine, where I wish to implement a semi-supervised learning protocol -- the training input might need to be different than the 1-layer dictionary that `train.py` is currently hard-coded to accept.
For instance, in my case, I need a nested structure that supports:
```
{'supervised_data': {'input_ids':[....], 'labels': [...], 'attention_mask'},
'unsupervised_data': {'input_ids':[...], 'attention_mask'},
'augmented_data': {'input_ids':[...], 'attention_mask'}
```
(I am attempting to implement the following paper, Unsupervised Data Augmentation for Consistency Training https://arxiv.org/abs/1904.12848).
However, I can imagine other use-cases, including MAML, multi-task learning and multi-modal learning, where huggingface would provide a great framework but is currently limited in its data input format.
## Your contribution
I've identified a couple of quick fixes for this:
Lines 962-974 of `train.py`, or the `_prepare_inputs` function should be rewritten as:
```
def _prepare_inputs(self, inputs):
    """
    Prepare :obj:`inputs` before feeding them to the model, converting them to tensors if they are not already and
    handling potential state.
    """

    def map_nested_dicts_modify(ob, func):
        if isinstance(ob, dict):
            return {k: map_nested_dicts_modify(v, func) for k, v in ob.items()}
        if isinstance(ob, list):
            return list(map(lambda x: map_nested_dicts_modify(x, func), ob))
        else:
            return func(ob)

    def to_device(v):
        if isinstance(v, torch.Tensor):
            v = v.to(self.args.device)
        return v

    inputs = map_nested_dicts_modify(inputs, to_device)

    if self.args.past_index >= 0 and self._past is not None:
        inputs["mems"] = self._past

    return inputs
```
Line 1135 of `train.py` should be expanded to:
```
def _finditem(obj, key):
    if key in obj:
        return True
    for k, v in obj.items():
        if isinstance(v, dict) and _finditem(v, key):
            return True
    return False

has_labels = any(_finditem(inputs, k) for k in ["labels", "lm_labels", "masked_lm_labels"])
```
I'm not quite sure how to handle line 1254, or whether it is really that necessary, but one way might be to again use `_finditem` for `["labels", "input_ids"]`.
And that's it -- it's then up to the user to modify their models so that their `DataCollator` generates the expected structure, and `forward` takes in the expected structure!
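For illustration, a rough sketch of what that user-side pairing could look like (every name below is made up for the example; it is not code from the library):
```python
import torch

class SemiSupervisedCollator:
    """Builds the nested batch structure sketched above (illustrative only)."""

    def __init__(self, tokenizer):
        self.tokenizer = tokenizer

    def __call__(self, examples):
        labeled = [e for e in examples if e.get("label") is not None]
        unlabeled = [e for e in examples if e.get("label") is None]
        supervised = dict(self.tokenizer([e["text"] for e in labeled],
                                         padding=True, return_tensors="pt"))
        supervised["labels"] = torch.tensor([e["label"] for e in labeled])
        unsupervised = dict(self.tokenizer([e["text"] for e in unlabeled],
                                           padding=True, return_tensors="pt"))
        return {"supervised_data": supervised, "unsupervised_data": unsupervised}


class SemiSupervisedModel(torch.nn.Module):
    def forward(self, supervised_data=None, unsupervised_data=None):
        # compute the supervised loss plus a consistency loss on the unlabeled batch,
        # and return the combined loss first, as the training loop expects
        ...
```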
Even if you explicitly don't wish to support these different training protocols mentioned above, I do think, from a software engineering perspective, that `train.py` should be more fully abstracted from particulars of the data inputs and model inputs. This feature takes you a step closer to that (although not completely, as the `has_labels` line still expects a certain set of keys _somewhere_ in the data input.)
Better yet than the suggestions here would be to make the data input its own class for full abstraction, but I can see the argument against you doing that, as it is yet another data class for users to learn and code up, and would be a breaking update for all those who have implemented `DataCollators` that adhere to your guidelines.
Alex | 08-31-2020 23:51:59 | 08-31-2020 23:51:59 | Note that there are multiple frameworks that provide generic training loops. The goal of `Trainer` (I'm assuming you're talking about it since there is no `train.py` file) is not to replace them or compete with them but to provide an easy way to train and finetune Transformers models. Those models don't take nested inputs, so Trainer does not support this. Those models are expected to return the loss as the first item of their output, so Trainer expects it too.
Making Trainer more easily customizable by providing better hooks for subclassing (your use case could be done by overriding the two private methods you mention for instance) is something we are working on, but we won't have a base Trainer that is too generic, it will remain customized to the models the library provides.<|||||>Thank you for your consideration and comments! <|||||>This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.
|
transformers | 6,859 | closed | [fix] typo in available in helper function | cc @sgugger will merge on ci passing. | 08-31-2020 20:13:53 | 08-31-2020 20:13:53 | # [Codecov](https://codecov.io/gh/huggingface/transformers/pull/6859?src=pr&el=h1) Report
> Merging [#6859](https://codecov.io/gh/huggingface/transformers/pull/6859?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/bbdba0a76d70ff347884cbe62e0f13de903d84c7?el=desc) will **increase** coverage by `2.94%`.
> The diff coverage is `n/a`.
[](https://codecov.io/gh/huggingface/transformers/pull/6859?src=pr&el=tree)
```diff
@@ Coverage Diff @@
## master #6859 +/- ##
==========================================
+ Coverage 77.22% 80.17% +2.94%
==========================================
Files 157 157
Lines 28793 28793
==========================================
+ Hits 22235 23084 +849
+ Misses 6558 5709 -849
```
| [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/6859?src=pr&el=tree) | Coverage Δ | |
|---|---|---|
| [...c/transformers/modeling\_tf\_transfo\_xl\_utilities.py](https://codecov.io/gh/huggingface/transformers/pull/6859/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl90cmFuc2ZvX3hsX3V0aWxpdGllcy5weQ==) | `10.00% <0.00%> (-76.00%)` | :arrow_down: |
| [src/transformers/modeling\_tf\_transfo\_xl.py](https://codecov.io/gh/huggingface/transformers/pull/6859/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl90cmFuc2ZvX3hsLnB5) | `19.85% <0.00%> (-68.29%)` | :arrow_down: |
| [src/transformers/modeling\_tf\_flaubert.py](https://codecov.io/gh/huggingface/transformers/pull/6859/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9mbGF1YmVydC5weQ==) | `24.53% <0.00%> (-63.81%)` | :arrow_down: |
| [src/transformers/modeling\_marian.py](https://codecov.io/gh/huggingface/transformers/pull/6859/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19tYXJpYW4ucHk=) | `60.00% <0.00%> (-30.00%)` | :arrow_down: |
| [src/transformers/modeling\_roberta.py](https://codecov.io/gh/huggingface/transformers/pull/6859/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19yb2JlcnRhLnB5) | `77.37% <0.00%> (-19.71%)` | :arrow_down: |
| [src/transformers/tokenization\_roberta.py](https://codecov.io/gh/huggingface/transformers/pull/6859/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fcm9iZXJ0YS5weQ==) | `87.67% <0.00%> (-10.96%)` | :arrow_down: |
| [src/transformers/tokenization\_utils\_base.py](https://codecov.io/gh/huggingface/transformers/pull/6859/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fdXRpbHNfYmFzZS5weQ==) | `86.44% <0.00%> (-7.59%)` | :arrow_down: |
| [src/transformers/activations.py](https://codecov.io/gh/huggingface/transformers/pull/6859/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9hY3RpdmF0aW9ucy5weQ==) | `85.00% <0.00%> (-5.00%)` | :arrow_down: |
| [src/transformers/tokenization\_openai.py](https://codecov.io/gh/huggingface/transformers/pull/6859/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fb3BlbmFpLnB5) | `82.57% <0.00%> (-1.52%)` | :arrow_down: |
| [src/transformers/tokenization\_utils\_fast.py](https://codecov.io/gh/huggingface/transformers/pull/6859/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fdXRpbHNfZmFzdC5weQ==) | `92.85% <0.00%> (-1.43%)` | :arrow_down: |
| ... and [17 more](https://codecov.io/gh/huggingface/transformers/pull/6859/diff?src=pr&el=tree-more) | |
------
[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/6859?src=pr&el=continue).
> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)
> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data`
> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/6859?src=pr&el=footer). Last update [bbdba0a...b8eb2a3](https://codecov.io/gh/huggingface/transformers/pull/6859?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).
|
transformers | 6,858 | closed | Remove hard-coded uses of float32 to fix mixed precision use in Distilbert | In this commit [Remove hard-coded uses of float32 to fix mixed precision use](https://github.com/huggingface/transformers/commit/4fca874ea995f3d23ad7062b07b5ed7c4f87c0cd#diff-e3ab4f29f29fe1d243a6b55fafaab097), the mixed precision issue is fixed for [modeling_tf_bert.py](https://github.com/huggingface/transformers/blob/master/src/transformers/modeling_tf_bert.py).
However, for [modeling_tf_distilbert.py](https://github.com/huggingface/transformers/blob/master/src/transformers/modeling_tf_distilbert.py#L171), the line 171 is not fixed yet, and we get
> 173 embeddings = inputs_embeds + position_embeddings # (bs, max_seq_length, dim)
> --> 174 embeddings = self.LayerNorm(embeddings) # (bs, max_seq_length, dim)
> InvalidArgumentError: cannot compute AddV2 as input #1(zero-based) was expected to be a bfloat16 tensor but is a float tensor
while using `mixed_bfloat16` mixed precision on a TPU.
A very quick fix is the same as the fix for `modeling_tf_bert.py`:
position_embeddings = tf.cast(self.position_embeddings(position_ids), inputs_embeds.dtype)
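In context, the patched lines would look roughly like this (a sketch pieced together from the traceback above, not the exact file contents):
```python
position_embeddings = tf.cast(self.position_embeddings(position_ids), inputs_embeds.dtype)
embeddings = inputs_embeds + position_embeddings  # (bs, max_seq_length, dim) - dtypes now match
embeddings = self.LayerNorm(embeddings)           # (bs, max_seq_length, dim)
```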
@schmidek | 08-31-2020 20:10:27 | 08-31-2020 20:10:27 | Indeed! Do you want to open a PR to fix this?<|||||>@LysandreJik
I can do that. However @patrickvonplaten has already self-assigned for this. How do you think, @patrickvonplaten?<|||||>Hey @chiapas, it would be great if you can open a PR for it :-) <|||||>Hi @patrickvonplaten , OK, that would be my first contribution to transformers :) |
transformers | 6,857 | closed | Split hp search methods | Follow-up from #6747.
Cleanly separates the backend-specific code for the two backends (optuna vs Ray), with some small code duplication in the objective function.
| 08-31-2020 18:56:09 | 08-31-2020 18:56:09 | Cool! |
transformers | 6,856 | closed | Changes in Pytorch 1.6 multinomial could break backward compatibility | ## Environment info
- `transformers` version: 3.0.2
- Platform: Linux-5.4.0-42-generic-x86_64-with-debian-bullseye-sid
- Python version: 3.6.10
- PyTorch version (GPU?): 1.6.0 (True)
- Tensorflow version (GPU?): not installed (NA)
- Using GPU in script?: True
- Using distributed or parallel set-up in script?: False
### Who can help
@TevenLeScao
## Information
Model I am using:
Gpt2, but relevant to all language models using sampling.
This is not really a bug report on behalf of transformers, but rather a suggestion to handle a breaking change in the behavior of `torch.multinomial()` that was introduced with v1.6. I've noticed, that the sampling in `GenerationMixin._generate_no_beam_search()` has changed between PT1.5.1 and PT1.6, even when given the same inputs and random state. This is due to a new implementation path of `torch.multinomial()` with `replacement=False`. This new implementation breaks determinism compared to older PT versions. However, there is an easy fix to return to the previous behavior in order to maintain backward compatibility. Since `num_samples` is set to 1, the actual `replacement` value is irrelevant. But setting `replacement` to `True` will use the old sampling implementation and return deterministic results equal to those of earlier versions.
I would therefore recommend changing the call to `multinomial()` in `_generate_no_beam_search()` like this:
```next_token = torch.multinomial(probs, num_samples=1, replacement=True).squeeze(1)```
Best regards,
Andreas | 08-31-2020 18:40:29 | 08-31-2020 18:40:29 | Hey @andifunke,
Thanks a lot for your issue! Could you link the different implementation of `torch.multinomial` between PT v1.5.1 and PT v1.6.0 ?
I understand your argument, but I think setting `replacement=True` is logically false...<|||||>Hi @patrickvonplaten ,
thanks for your reply!
> Could you link the different implementation of torch.multinomial between PT v1.5.1 and PT v1.6.0 ?
Sure. The PR for the implementation is here: https://github.com/pytorch/pytorch/pull/39742 and the merge commit here: https://github.com/pytorch/pytorch/commit/97dfdaaad89c2082c90aebfa9180293847cffd60
> I understand your argument, but I think setting replacement=True is logically false...
I agree, it feels a bit hacky, but let me give you an example, why I think this workaround is justified:
The following code will behave differently in PT1.5.1 vs 1.6:
```python
import torch
torch.manual_seed(0)
t = torch.rand(10, 10)
torch.manual_seed(0)
a = torch.multinomial(t, num_samples=1, replacement=False)
torch.manual_seed(0)
b = torch.multinomial(t, num_samples=1, replacement=True)
torch.__version__, a, b, all(a == b)
```
Pytorch 1.5.1:
```
Out[1]:
('1.5.1',
tensor([[9],
[7],
[3],
[9],
[7],
[6],
[1],
[3],
[5],
[1]]),
tensor([[9],
[7],
[3],
[9],
[7],
[6],
[1],
[3],
[5],
[1]]),
True)
```
Pytorch 1.6:
```
('1.6.0',
tensor([[7],
[7],
[6],
[1],
[6],
[1],
[9],
[5],
[1],
[2]]),
tensor([[9],
[7],
[3],
[9],
[7],
[6],
[1],
[3],
[5],
[1]]),
False)
```
This of course breaks reproducibility between versions when generating text.<|||||>Oh, and here is another option, if `replacement=True` feels irritating:
You could use `torch.distributions.categorical.Categorical` instead, which uses the same sampling approach.
example:
```python
import torch
torch.manual_seed(0)
t = torch.rand(10, 10)
torch.manual_seed(0)
a = torch.distributions.categorical.Categorical(t).sample()
torch.manual_seed(0)
b = torch.multinomial(t, num_samples=1, replacement=True)
torch.__version__, a, b, all(a == b.reshape(10))
```
```
Out[1]:
('1.6.0',
tensor([9, 7, 3, 9, 7, 6, 1, 3, 5, 1]),
tensor([[9],
[7],
[3],
[9],
[7],
[6],
[1],
[3],
[5],
[1]]),
True)
```
<|||||>Hey @andifunke,
Thanks for your detailed comments - this is great! So it seems like the change was made to speed up the `torch.multinomial(do_replacement=False)` function. This is not really of interest to us though as it will never be the bottleneck in the `.generate()` function.
I agree with you that we want to keep backward compatibility here. I think the best option in this case in to use `torch.distributions.categorical.Categorical(t).sample()` in this case.
Will open a PR about it :-) <|||||>Great, thanks!<|||||>Actually, I just noticed that `torch.distributions.categorical.Categorical(...)` uses `torch.multinomial` under the hood with `do_replacement=True` - so that this is not a better option.
I'm not 100% sure how to proceed here now. @LysandreJik, @sgugger - what is your opinion on that?
The problem is the following: because of a change in PyTorch's `torch.multinomial` function in 1.6, our generation method with `do_sample=True` yields different results under the same `torch.manual_seed(0)` between torch < 1.6 and torch 1.6.
As @andifunke pointed out, a simple fix would be to set `do_replacement=True`, which is logically not correct IMO, but it does not make a difference for sampling with `num_beams = 1`. For sampling with `num_beams > 1`, however, it would.
Do you guys think we should go for the simple fix of `do_replacement=True` to keep backward compatibility when using `torch.manual_seed(0)` ?
It seems like backwards compatibility for `num_beams > 1` is broken either way since it would be false to set `do_replacement=True` there. <|||||>Can we copy the old implementation somewhere and just use that or is it hidden in C/CUDA?<|||||>Did we also reach out to the PyTorch team and make sure they are aware of this BC?<|||||>Looks like this is hidden in C/CUDA: https://github.com/pytorch/pytorch/pull/39742/files .
Not sure whether the PyTorch is aware of it...<|||||>This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.
|
transformers | 6,855 | closed | Hugging face - RuntimeError: Caught RuntimeError in replica 0 on device 0 on Azure Databricks | How do I run the run_language_modeling.py script from hugging face using the pretrained roberta case model to fine-tune using my own data on the Azure databricks with a GPU cluster.
Using Transformers version 2.9.1 and 3.0.
Python 3.6
Torch 1.5.0
torchvision 0.6
This is the script I ran below on Azure databricks
`
%run '/dbfs/FileStore/tables/dev/run_language_modeling.py' \
--output_dir='/dbfs/FileStore/tables/final_train/models/roberta_base_reduce_n' \
--model_type=roberta \
--model_name_or_path=roberta-base \
--do_train \
--num_train_epochs 5 \
--train_data_file='/dbfs/FileStore/tables/final_train/train_data/all_data_desc_list_full.txt' \
--mlm
`
This is the error I get after running the above command.
`
RuntimeError Traceback (most recent call last)
/dbfs/FileStore/tables/dev/run_language_modeling.py in <module>
279
280 if __name__ == "__main__":
--> 281 main()
/dbfs/FileStore/tables/dev/run_language_modeling.py in main()
243 else None
244 )
--> 245 trainer.train(model_path=model_path)
246 trainer.save_model()
247 # For convenience, we also re-save the tokenizer to the same directory,
/databricks/python/lib/python3.7/site-packages/transformers/trainer.py in train(self, model_path)
497 continue
498
--> 499 tr_loss += self._training_step(model, inputs, optimizer)
500
501 if (step + 1) % self.args.gradient_accumulation_steps == 0 or (
/databricks/python/lib/python3.7/site-packages/transformers/trainer.py in _training_step(self, model, inputs, optimizer)
620 inputs["mems"] = self._past
621
--> 622 outputs = model(**inputs)
623 loss = outputs[0] # model outputs are always tuple in transformers (see doc)
624
/databricks/python/lib/python3.7/site-packages/torch/nn/modules/module.py in __call__(self, *input, **kwargs)
548 result = self._slow_forward(*input, **kwargs)
549 else:
--> 550 result = self.forward(*input, **kwargs)
551 for hook in self._forward_hooks.values():
552 hook_result = hook(self, input, result)
/databricks/python/lib/python3.7/site-packages/torch/nn/parallel/data_parallel.py in forward(self, *inputs, **kwargs)
153 return self.module(*inputs[0], **kwargs[0])
154 replicas = self.replicate(self.module, self.device_ids[:len(inputs)])
--> 155 outputs = self.parallel_apply(replicas, inputs, kwargs)
156 return self.gather(outputs, self.output_device)
157
/databricks/python/lib/python3.7/site-packages/torch/nn/parallel/data_parallel.py in parallel_apply(self, replicas, inputs, kwargs)
163
164 def parallel_apply(self, replicas, inputs, kwargs):
--> 165 return parallel_apply(replicas, inputs, kwargs, self.device_ids[:len(replicas)])
166
167 def gather(self, outputs, output_device):
/databricks/python/lib/python3.7/site-packages/torch/nn/parallel/parallel_apply.py in parallel_apply(modules, inputs, kwargs_tup, devices)
83 output = results[i]
84 if isinstance(output, ExceptionWrapper):
---> 85 output.reraise()
86 outputs.append(output)
87 return outputs
/databricks/python/lib/python3.7/site-packages/torch/_utils.py in reraise(self)
393 # (https://bugs.python.org/issue2651), so we work around it.
394 msg = KeyErrorMessage(msg)
--> 395 raise self.exc_type(msg)
RuntimeError: Caught RuntimeError in replica 0 on device 0.
Original Traceback (most recent call last):
File "/databricks/python/lib/python3.7/site-packages/torch/nn/parallel/parallel_apply.py", line 60, in _worker
output = module(*input, **kwargs)
File "/databricks/python/lib/python3.7/site-packages/torch/nn/modules/module.py", line 550, in __call__
result = self.forward(*input, **kwargs)
File "/databricks/python/lib/python3.7/site-packages/transformers/modeling_roberta.py", line 239, in forward
output_hidden_states=output_hidden_states,
File "/databricks/python/lib/python3.7/site-packages/torch/nn/modules/module.py", line 550, in __call__
result = self.forward(*input, **kwargs)
File "/databricks/python/lib/python3.7/site-packages/transformers/modeling_bert.py", line 762, in forward
output_hidden_states=output_hidden_states,
File "/databricks/python/lib/python3.7/site-packages/torch/nn/modules/module.py", line 550, in __call__
result = self.forward(*input, **kwargs)
File "/databricks/python/lib/python3.7/site-packages/transformers/modeling_bert.py", line 439, in forward
output_attentions,
File "/databricks/python/lib/python3.7/site-packages/torch/nn/modules/module.py", line 550, in __call__
result = self.forward(*input, **kwargs)
File "/databricks/python/lib/python3.7/site-packages/transformers/modeling_bert.py", line 371, in forward
hidden_states, attention_mask, head_mask, output_attentions=output_attentions,
File "/databricks/python/lib/python3.7/site-packages/torch/nn/modules/module.py", line 550, in __call__
result = self.forward(*input, **kwargs)
File "/databricks/python/lib/python3.7/site-packages/transformers/modeling_bert.py", line 315, in forward
hidden_states, attention_mask, head_mask, encoder_hidden_states, encoder_attention_mask, output_attentions,
File "/databricks/python/lib/python3.7/site-packages/torch/nn/modules/module.py", line 550, in __call__
result = self.forward(*input, **kwargs)
File "/databricks/python/lib/python3.7/site-packages/transformers/modeling_bert.py", line 240, in forward
attention_scores = attention_scores / math.sqrt(self.attention_head_size)
RuntimeError: CUDA out of memory. Tried to allocate 96.00 MiB (GPU 0; 11.17 GiB total capacity; 10.68 GiB already allocated; 95.31 MiB free; 10.77 GiB reserved in total by PyTorch)
`
Please how do I resolve this | 08-31-2020 18:38:05 | 08-31-2020 18:38:05 | Try smaller batch sizes and/or bigger GPUs<|||||>This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.
|
transformers | 6,854 | closed | Fix marian slow test | Fix slow failing test that depended on old seq2seq_batch logic.
| 08-31-2020 18:34:02 | 08-31-2020 18:34:02 | # [Codecov](https://codecov.io/gh/huggingface/transformers/pull/6854?src=pr&el=h1) Report
> Merging [#6854](https://codecov.io/gh/huggingface/transformers/pull/6854?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/61b7ba93f5f4dfcef795e20a9fb11b2d4ee7608e?el=desc) will **decrease** coverage by `0.13%`.
> The diff coverage is `n/a`.
[](https://codecov.io/gh/huggingface/transformers/pull/6854?src=pr&el=tree)
```diff
@@ Coverage Diff @@
## master #6854 +/- ##
==========================================
- Coverage 79.94% 79.80% -0.14%
==========================================
Files 157 157
Lines 28739 28739
==========================================
- Hits 22974 22936 -38
- Misses 5765 5803 +38
```
| [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/6854?src=pr&el=tree) | Coverage Δ | |
|---|---|---|
| [src/transformers/tokenization\_roberta.py](https://codecov.io/gh/huggingface/transformers/pull/6854/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fcm9iZXJ0YS5weQ==) | `87.67% <0.00%> (-10.96%)` | :arrow_down: |
| [src/transformers/tokenization\_utils\_base.py](https://codecov.io/gh/huggingface/transformers/pull/6854/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fdXRpbHNfYmFzZS5weQ==) | `86.85% <0.00%> (-7.19%)` | :arrow_down: |
| [src/transformers/tokenization\_openai.py](https://codecov.io/gh/huggingface/transformers/pull/6854/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fb3BlbmFpLnB5) | `82.57% <0.00%> (-1.52%)` | :arrow_down: |
| [src/transformers/tokenization\_utils\_fast.py](https://codecov.io/gh/huggingface/transformers/pull/6854/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fdXRpbHNfZmFzdC5weQ==) | `92.85% <0.00%> (-1.43%)` | :arrow_down: |
| [src/transformers/tokenization\_bert.py](https://codecov.io/gh/huggingface/transformers/pull/6854/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fYmVydC5weQ==) | `91.96% <0.00%> (-0.45%)` | :arrow_down: |
| [src/transformers/tokenization\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/6854/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fdXRpbHMucHk=) | `89.45% <0.00%> (-0.40%)` | :arrow_down: |
| [src/transformers/generation\_tf\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/6854/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9nZW5lcmF0aW9uX3RmX3V0aWxzLnB5) | `86.46% <0.00%> (+0.25%)` | :arrow_up: |
| [src/transformers/data/data\_collator.py](https://codecov.io/gh/huggingface/transformers/pull/6854/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9kYXRhL2RhdGFfY29sbGF0b3IucHk=) | `92.71% <0.00%> (+0.40%)` | :arrow_up: |
| [src/transformers/tokenization\_dpr.py](https://codecov.io/gh/huggingface/transformers/pull/6854/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fZHByLnB5) | `57.65% <0.00%> (+4.50%)` | :arrow_up: |
| [src/transformers/tokenization\_bart.py](https://codecov.io/gh/huggingface/transformers/pull/6854/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fYmFydC5weQ==) | `100.00% <0.00%> (+57.89%)` | :arrow_up: |
------
[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/6854?src=pr&el=continue).
> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)
> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data`
> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/6854?src=pr&el=footer). Last update [61b7ba9...a83ab56](https://codecov.io/gh/huggingface/transformers/pull/6854?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).
|
transformers | 6,853 | closed | FAILED tests/test_modeling_marian.py::TestMarian_EN_DE_More::test_forward | 08-31-2020 18:24:17 | 08-31-2020 18:24:17 | ||
transformers | 6,852 | closed | Logging doc | Adds documentation for the new centralized logger.
| 08-31-2020 17:47:24 | 08-31-2020 17:47:24 | # [Codecov](https://codecov.io/gh/huggingface/transformers/pull/6852?src=pr&el=h1) Report
> Merging [#6852](https://codecov.io/gh/huggingface/transformers/pull/6852?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/02d09c8fcc6bda2c345c84cec53289abbe7532ac?el=desc) will **increase** coverage by `1.00%`.
> The diff coverage is `n/a`.
[](https://codecov.io/gh/huggingface/transformers/pull/6852?src=pr&el=tree)
```diff
@@ Coverage Diff @@
## master #6852 +/- ##
==========================================
+ Coverage 79.01% 80.01% +1.00%
==========================================
Files 157 157
Lines 28739 28739
==========================================
+ Hits 22707 22995 +288
+ Misses 6032 5744 -288
```
| [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/6852?src=pr&el=tree) | Coverage Δ | |
|---|---|---|
| [src/transformers/utils/logging.py](https://codecov.io/gh/huggingface/transformers/pull/6852/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy91dGlscy9sb2dnaW5nLnB5) | `75.00% <ø> (ø)` | |
| [src/transformers/configuration\_bart.py](https://codecov.io/gh/huggingface/transformers/pull/6852/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9jb25maWd1cmF0aW9uX2JhcnQucHk=) | `90.00% <0.00%> (-4.00%)` | :arrow_down: |
| [src/transformers/file\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/6852/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9maWxlX3V0aWxzLnB5) | `82.41% <0.00%> (-0.26%)` | :arrow_down: |
| [src/transformers/modeling\_bart.py](https://codecov.io/gh/huggingface/transformers/pull/6852/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19iYXJ0LnB5) | `95.40% <0.00%> (-0.18%)` | :arrow_down: |
| [src/transformers/data/datasets/glue.py](https://codecov.io/gh/huggingface/transformers/pull/6852/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9kYXRhL2RhdGFzZXRzL2dsdWUucHk=) | `86.56% <0.00%> (+2.98%)` | :arrow_up: |
| [src/transformers/trainer\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/6852/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90cmFpbmVyX3V0aWxzLnB5) | `84.44% <0.00%> (+20.00%)` | :arrow_up: |
| [src/transformers/training\_args.py](https://codecov.io/gh/huggingface/transformers/pull/6852/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90cmFpbmluZ19hcmdzLnB5) | `91.66% <0.00%> (+25.00%)` | :arrow_up: |
| [src/transformers/trainer.py](https://codecov.io/gh/huggingface/transformers/pull/6852/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90cmFpbmVyLnB5) | `53.23% <0.00%> (+40.12%)` | :arrow_up: |
------
[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/6852?src=pr&el=continue).
> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)
> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data`
> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/6852?src=pr&el=footer). Last update [02d09c8...e45ca17](https://codecov.io/gh/huggingface/transformers/pull/6852?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).
<|||||>How do we get `transformers.logging.*`? There is either `transformers.utils.logging.*` or `logging.*` if the latter was imported.
Unrelated, also has the default just changed from INFO to WARN? I rebased my copy and noticed this change. Ah, yes, it was https://github.com/huggingface/transformers/commit/4561f05c5fafc2b636a2fc1d0dded9057d439745<|||||>You get `transformerts.logging.*` after doing `import transformers`. logging is imported in the project init, so there is no need to add the .utils.<|||||>Ah, I see - the test I was working on was doing `from transformers import logging`. If we follow this in docs it leads to a shorter:
`logging.set_verbosity(logging.INFO)`
and it matches the actual `logging.INFO` from the logging package.
.... but then `from transformers import logging` makes it hard to do `import logging`... same `logging` name. So then:
```
import transformers
transformers.logging.set_verbosity(transformers.logging.INFO)
```
while being quite verbose, has no collision with the normal `logging` package
Thank you for expanding the docs, @sgugger - this is awesome!<|||||>Note that you have the shortcut
```
transformers.logging.set_verbosity_info()
```
but yes, importing logging directly will create a conflict with the logging module.<|||||>You meant `transformers.logging.set_verbosity_{info|warning|...}` (must be a typo in `login` :)
Yes, this is good!<|||||>Oops, fixed my comment. |
transformers | 6,851 | closed | Distill marian | <!-- This line specifies which issue to close after the pull request is merged. -->
Fixes #{issue number}
| 08-31-2020 16:48:40 | 08-31-2020 16:48:40 | |
transformers | 6,850 | closed | move wandb/comet logger init to train() to allow parallel logging | Moving the logger setup to the `train()` function allows parallel runs (e.g. in hyperparameter search) to log each run individually.
Alternative to #6791
| 08-31-2020 15:49:26 | 08-31-2020 15:49:26 | # [Codecov](https://codecov.io/gh/huggingface/transformers/pull/6850?src=pr&el=h1) Report
> Merging [#6850](https://codecov.io/gh/huggingface/transformers/pull/6850?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/b8e4906c974101d328bdd01245bc1695f9b07088?el=desc) will **increase** coverage by `0.17%`.
> The diff coverage is `78.57%`.
[](https://codecov.io/gh/huggingface/transformers/pull/6850?src=pr&el=tree)
```diff
@@ Coverage Diff @@
## master #6850 +/- ##
==========================================
+ Coverage 80.44% 80.61% +0.17%
==========================================
Files 161 161
Lines 30113 30119 +6
==========================================
+ Hits 24224 24281 +57
+ Misses 5889 5838 -51
```
| [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/6850?src=pr&el=tree) | Coverage Δ | |
|---|---|---|
| [src/transformers/trainer.py](https://codecov.io/gh/huggingface/transformers/pull/6850/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90cmFpbmVyLnB5) | `54.95% <78.57%> (+0.27%)` | :arrow_up: |
| [src/transformers/modeling\_tf\_mobilebert.py](https://codecov.io/gh/huggingface/transformers/pull/6850/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9tb2JpbGViZXJ0LnB5) | `24.55% <0.00%> (-72.36%)` | :arrow_down: |
| [src/transformers/modeling\_tf\_flaubert.py](https://codecov.io/gh/huggingface/transformers/pull/6850/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9mbGF1YmVydC5weQ==) | `24.53% <0.00%> (-63.81%)` | :arrow_down: |
| [src/transformers/modeling\_marian.py](https://codecov.io/gh/huggingface/transformers/pull/6850/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19tYXJpYW4ucHk=) | `60.00% <0.00%> (-30.00%)` | :arrow_down: |
| [src/transformers/modeling\_t5.py](https://codecov.io/gh/huggingface/transformers/pull/6850/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190NS5weQ==) | `76.70% <0.00%> (-6.07%)` | :arrow_down: |
| [src/transformers/activations.py](https://codecov.io/gh/huggingface/transformers/pull/6850/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9hY3RpdmF0aW9ucy5weQ==) | `85.00% <0.00%> (-5.00%)` | :arrow_down: |
| [src/transformers/modeling\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/6850/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ191dGlscy5weQ==) | `87.50% <0.00%> (-0.56%)` | :arrow_down: |
| [src/transformers/modeling\_bart.py](https://codecov.io/gh/huggingface/transformers/pull/6850/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19iYXJ0LnB5) | `93.90% <0.00%> (-0.51%)` | :arrow_down: |
| [src/transformers/tokenization\_utils\_base.py](https://codecov.io/gh/huggingface/transformers/pull/6850/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fdXRpbHNfYmFzZS5weQ==) | `93.76% <0.00%> (-0.28%)` | :arrow_down: |
| [src/transformers/generation\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/6850/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9nZW5lcmF0aW9uX3V0aWxzLnB5) | `97.21% <0.00%> (+0.27%)` | :arrow_up: |
| ... and [8 more](https://codecov.io/gh/huggingface/transformers/pull/6850/diff?src=pr&el=tree-more) | |
------
[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/6850?src=pr&el=continue).
> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)
> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data`
> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/6850?src=pr&el=footer). Last update [b8e4906...a27cdd3](https://codecov.io/gh/huggingface/transformers/pull/6850?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).
<|||||>I don't think you want to do the logger setup within `train` as users can call `Trainer` for evaluation only as well.
It probably needs to stay within `__init__` but should also go into a hyperparameter search function, maybe `_objective`.
What's important is for loggers to setup at `__init__` and at each parameter search.
However @sgugger will have a better idea in how to organize this function.<|||||>Yes, the train method could be called several times on the same Trainer, or the Trainer could be used for evaluation only, and those logging platforms should be setup once only, so the init looks best. Maybe we could add a private attribute `_has_been_setup` that could be checked inside the log method before reporting to wandb/comet and call the setup method if needed? Would that work for the hp search with Ray?<|||||>That sounds good. Should it still be setup in the init then? For hyperparameter search this doesn't really make sense (and creates an "empty" run in wandb), and if it is setup on logging calls anyway we wouldn't necessarily need it there. But happy to leave it there, too.<|||||>We can leave the setup to the first time we try to log something or the first call to train then (if there is a check to the same flag, we can call the setup method several times safely).<|||||>I think the first time we try to log makes sense, and also allow to use `Trainer` in eval only.
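A minimal sketch of the lazy-setup pattern being discussed (method and attribute names here are illustrative, not the final implementation):
```python
class Trainer:
    def log(self, logs):
        # lazily initialize wandb/comet so each hyperparameter-search run gets its own run
        if not getattr(self, "_loggers_initialized", False):
            self._setup_loggers()  # hypothetical helper wrapping the wandb/comet setup
            self._loggers_initialized = True
        ...
```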
If people just want to call multiple times `train`, it would be nice if it was straightforward for them to choose between logging to the same run or logging to a new run. Hyperparameter search would obviously automatically choose to log to a new run.
Note that logging several `train` calls to the same run is actually not currently supported due to `global_step` being reset to 0 [here](https://github.com/huggingface/transformers/blob/54cfefc2ac9e3e1c0968a2ed0dd3c711eee76196/src/transformers/trainer.py#L645) which will cause issues at least in both Tensorboard and W&B.<|||||>I adjusted the PR so the loggers will be initialized on the first call to `log()`. Is this what you had in mind?<|||||>Yes. I just think we should add the line to setup at the beginning of log, so that the loggers get initialized if we try to log something.<|||||>Okay, so the current position is good? (When clicking the "Files changed" link it looks like it's in `_hp_search_setup`, but it's actually right at the beginning of `log`)<|||||>Looks great!<|||||>Oh yeah, sorry I looked too fast. LGTM! |
transformers | 6,849 | closed | Printing probabilities | Hi,
I apologize if that's a stupid question. How can I print probabilities during inference produced by the softmax layer with BertForSequenceClassification? | 08-31-2020 15:47:01 | 08-31-2020 15:47:01 | Hi, you have an example of how to do exactly this in the [documentation](https://huggingface.co/transformers/task_summary.html#sequence-classification):
```py
from transformers import AutoTokenizer, AutoModelForSequenceClassification
import torch
tokenizer = AutoTokenizer.from_pretrained("bert-base-cased-finetuned-mrpc")
model = AutoModelForSequenceClassification.from_pretrained("bert-base-cased-finetuned-mrpc")
classes = ["not paraphrase", "is paraphrase"]
sequence_0 = "The company HuggingFace is based in New York City"
sequence_1 = "Apples are especially bad for your health"
sequence_2 = "HuggingFace's headquarters are situated in Manhattan"
paraphrase = tokenizer(sequence_0, sequence_2, return_tensors="pt")
not_paraphrase = tokenizer(sequence_0, sequence_1, return_tensors="pt")
paraphrase_classification_logits = model(**paraphrase).logits
not_paraphrase_classification_logits = model(**not_paraphrase).logits
paraphrase_results = torch.softmax(paraphrase_classification_logits, dim=1).tolist()[0]
not_paraphrase_results = torch.softmax(not_paraphrase_classification_logits, dim=1).tolist()[0]
# Should be paraphrase
for i in range(len(classes)):
print(f"{classes[i]}: {int(round(paraphrase_results[i] * 100))}%")
# Should not be paraphrase
for i in range(len(classes)):
print(f"{classes[i]}: {int(round(not_paraphrase_results[i] * 100))}%")
``` |
transformers | 6,848 | closed | unexpected behavior on RoBERTa tokenizer when using additional special tokens | ## Environment info
- `transformers` version: 3.0.2
- Platform: Windows-10-10.0.18362-SP0
- Python version: 3.8.3
- PyTorch version (GPU?): 1.6.0 (True)
- Tensorflow version (GPU?): not installed (NA)
- Using GPU in script?: <no
- Using distributed or parallel set-up in script?: no
### Who can help
tokenizers: @mfuntowicz
## Information
When trying to tokenize using the RoBERTa tokenizer with an added special token and add_prefix_space=True, the token following the special token does not get a space.
## To reproduce
Steps to reproduce the behavior:
1. run the following code
```
tokenizer = RobertaTokenizer.from_pretrained("roberta-base")
tokenizer.add_special_tokens('[d_s]')
print(tokenizer('[d_s] test', add_prefix_space=True))
print(tokenizer('test', add_prefix_space=True))
```
## output
{'input_ids': [0, 1296, 2], 'attention_mask': [1, 1, 1]}
{'input_ids': [0, 50271, 21959, 2], 'attention_mask': [1, 1, 1, 1]}
## Expected behavior
{'input_ids': [0, 1296, 2], 'attention_mask': [1, 1, 1]}
{'input_ids': [0, 50271, 1296, 2], 'attention_mask': [1, 1, 1, 1]}
The tokenization of `test` should not change just because a special token precedes it.
| 08-31-2020 15:21:19 | 08-31-2020 15:21:19 | This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.
|
transformers | 6,847 | closed | Fix resuming training for Windows | Fixes #6720
| 08-31-2020 14:59:45 | 08-31-2020 14:59:45 | |
transformers | 6,846 | closed | Separate implementation for Torch-Scriptable BERT model | <!-- This line specifies which issue to close after the pull request is merged. -->
Fixes #5067
Separate (re-)implementation of BERT such that it can be used with TorchScript's script() method (not just trace). This allows for better/more reliable handling of internal model logic and removes the requirement for fixed input size, resulting in large speedups when average input size is << than max input size.
Tests duplicate all the ones on the original BERT model, with the only fundamental difference being the test_torchscript* tests, which now use script() rather than trace().
| 08-31-2020 14:55:51 | 08-31-2020 14:55:51 | Thanks a lot for doing this! It's a great way for us to realize the needed changes to get fully torch-scriptable models. Aside from the tests (we can help fix them once the design is approved), I'd love to see what parts we can reuse from bert (with potential non-harmful modifications) and what parts need to be rewritten because they're not compatible with the rest of our API.
For instance, the change in the embeddings layer is just a type annotation which we could do in bert (it would be a nice addition) and then import that layer. On the other hand, the whole parts with `return_dict` are probably fully incompatible with scripting.
I guess in an ideal world, we would reuse the same internal layers from bert and only change the full models if that is possible.<|||||>As you can see in a [previous comment on the thread](https://github.com/huggingface/transformers/issues/5067#issuecomment-662586999) my initial implementation tried to go the minimal-duplication route. I modified the original models to be scriptable, and then had a thin wrapper around them to transform the output into dictionary form.
So basically, you had BertScriptableModel returning a tuple of fixed size, and BertModel whose forward just ran BertScriptableModel and put the output in a dictionary, to keep the interface.
The main issue with that was that the code kept changing. Other than that, it should be doable.
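A toy sketch of that wrapper pattern (not the actual PR code, just an illustration with made-up module names) could look like this:
```python
import torch
from typing import Dict, Optional, Tuple


class ScriptableCore(torch.nn.Module):
    """Stand-in for a scriptable model: full type annotations, fixed-size tuple output."""

    def __init__(self, hidden: int = 8):
        super().__init__()
        self.linear = torch.nn.Linear(hidden, hidden)

    def forward(
        self, hidden_states: torch.Tensor, attention_mask: Optional[torch.Tensor] = None
    ) -> Tuple[torch.Tensor, torch.Tensor]:
        out = self.linear(hidden_states)
        if attention_mask is not None:
            out = out * attention_mask.unsqueeze(-1)
        return out, out[:, 0]


class DictWrapper(torch.nn.Module):
    """Thin eager-mode wrapper that keeps the dictionary interface."""

    def __init__(self):
        super().__init__()
        self.core = torch.jit.script(ScriptableCore())

    def forward(self, hidden_states, attention_mask=None) -> Dict[str, torch.Tensor]:
        sequence_output, pooled_output = self.core(hidden_states, attention_mask)
        return {"last_hidden_state": sequence_output, "pooler_output": pooled_output}
```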
<|||||>90% of the changes were type annotations, and assertions about Nullity (which would improve the code quality regardless).
The added bonus of the minimal duplication route is that it makes it easier to convert other models that use BERT components, e.g., Albert.<|||||># [Codecov](https://codecov.io/gh/huggingface/transformers/pull/6846?src=pr&el=h1) Report
> Merging [#6846](https://codecov.io/gh/huggingface/transformers/pull/6846?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/2de7ee0385bee4134ca894a208fa3a2aaf7d5371?el=desc) will **decrease** coverage by `0.85%`.
> The diff coverage is `18.92%`.
[](https://codecov.io/gh/huggingface/transformers/pull/6846?src=pr&el=tree)
```diff
@@ Coverage Diff @@
## master #6846 +/- ##
==========================================
- Coverage 80.20% 79.35% -0.86%
==========================================
Files 157 158 +1
Lines 28734 29257 +523
==========================================
+ Hits 23047 23216 +169
- Misses 5687 6041 +354
```
| [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/6846?src=pr&el=tree) | Coverage Δ | |
|---|---|---|
| [src/transformers/modeling\_scriptable\_bert.py](https://codecov.io/gh/huggingface/transformers/pull/6846/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19zY3JpcHRhYmxlX2JlcnQucHk=) | `18.92% <18.92%> (ø)` | |
| [src/transformers/tokenization\_t5.py](https://codecov.io/gh/huggingface/transformers/pull/6846/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fdDUucHk=) | `59.43% <0.00%> (-35.85%)` | :arrow_down: |
| [src/transformers/file\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/6846/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9maWxlX3V0aWxzLnB5) | `82.66% <0.00%> (+0.25%)` | :arrow_up: |
| [src/transformers/tokenization\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/6846/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fdXRpbHMucHk=) | `89.84% <0.00%> (+0.39%)` | :arrow_up: |
| [src/transformers/data/data\_collator.py](https://codecov.io/gh/huggingface/transformers/pull/6846/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9kYXRhL2RhdGFfY29sbGF0b3IucHk=) | `92.30% <0.00%> (+0.40%)` | :arrow_up: |
| [src/transformers/tokenization\_bert.py](https://codecov.io/gh/huggingface/transformers/pull/6846/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fYmVydC5weQ==) | `92.41% <0.00%> (+0.44%)` | :arrow_up: |
| [src/transformers/modeling\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/6846/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ191dGlscy5weQ==) | `88.05% <0.00%> (+0.55%)` | :arrow_up: |
| [src/transformers/configuration\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/6846/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9jb25maWd1cmF0aW9uX3V0aWxzLnB5) | `96.66% <0.00%> (+0.66%)` | :arrow_up: |
| [src/transformers/tokenization\_utils\_fast.py](https://codecov.io/gh/huggingface/transformers/pull/6846/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fdXRpbHNfZmFzdC5weQ==) | `94.28% <0.00%> (+1.42%)` | :arrow_up: |
| ... and [5 more](https://codecov.io/gh/huggingface/transformers/pull/6846/diff?src=pr&el=tree-more) | |
------
[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/6846?src=pr&el=continue).
> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)
> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data`
> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/6846?src=pr&el=footer). Last update [2de7ee0...a2c6c43](https://codecov.io/gh/huggingface/transformers/pull/6846?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).
<|||||>Yes, I can see that clearly now. Sorry for going back and forth with you on this. We definitely want the type annotations in the main bert file, and I think the first implementation is better on that regard. It just misses the `return_dict` argument, which is easy to add with the way you designed things (happy to do it myself if you give me access to a branch).<|||||>The previous implementation is at https://github.com/sbrody18/transformers/tree/scripting
As mentioned, it is a bit behind HEAD and still needs some work.
I sent an invite for access to my repo. Let me know if there's a better way to share the branch.<|||||>Yes, saw the invite and accepted it. I have some stuff to finish but it's on my TODO and I hope to be able to add the missing stuff before the end of the week. Do you prefer a PR or can I directly commit on this branch?<|||||>No rush on my side.
A PR might be better, to make it easier to comment, but direct is fine if that's too much trouble.<|||||>Super cool PR!
I can tweak our benchmarking tools a bit to get some numbers on speed improvements using your scriptable Bert model tomorrow<|||||>@patrickvonplaten That would be great!
The major improvement is expected when running a large set of inputs with varying lengths, individually or in small batches (that's where not having to pad to max_length would come into play)<|||||>> @patrickvonplaten That would be great!
> The major improvement is expected when running a large set of inputs with varying lengths, individually or in small batches (that's where not having to pad to max_length would come into play)
Got it!<|||||>This is great, looking forward to this PR!<|||||>Okey I did some benchmarking, which can be seen here: https://github.com/huggingface/transformers/pull/6907.
@sbrody18 - it would be awesome if you could take a look if I am using the function correctly.<|||||>Ok, after reviewing this PR and the other design in [this diff](https://github.com/huggingface/transformers/compare/clean_scripting?expand=1), along with @patrickvonplaten benchmark results in #6907 we've come to the conclusion that adding scriptable layers is a bit too much for almost no gain, since `script` and `trace` now have the same speed in PyTorch.
All type annotations and asserts are welcome additions on the other hand, if you want to suggest a PR with just those changes.<|||||>Sure. Makes sense. I'll see if I can put one together, but other things might take priority.
Thanks for all the work you've put in to look into this.<|||||>@sbrody18 - Thanks a lot for making us aware of this issue! I learned a lot about the differences between `torch.jit.trace` and `torch.jit.script` thanks to you!<|||||>Yes thanks a lot for all your work on this, I learned a lot on scriptable pytorch modules thanks to the PR!<|||||>I just wanted to point out that, IIUC, a big benefit of making everything scriptable is free reuse from languages other than Python (for example, from the C++ frontend). I know that the prescribed setup is to train in python, trace, then deploy at runtime with a traced TorchScript, but the freedom to train from C++, or even the JVM with a few extra bindings, is a pretty big win. <|||||>This issue has been automatically marked as stale and been closed because it has not had recent activity. Thank you for your contributions.
If you think this still needs to be addressed please comment on this thread. |
transformers | 6,845 | closed | Fix in Adafactor docstrings | 08-31-2020 14:45:46 | 08-31-2020 14:45:46 | # [Codecov](https://codecov.io/gh/huggingface/transformers/pull/6845?src=pr&el=h1) Report
> Merging [#6845](https://codecov.io/gh/huggingface/transformers/pull/6845?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/2de7ee0385bee4134ca894a208fa3a2aaf7d5371?el=desc) will **decrease** coverage by `0.36%`.
> The diff coverage is `n/a`.
[](https://codecov.io/gh/huggingface/transformers/pull/6845?src=pr&el=tree)
```diff
@@ Coverage Diff @@
## master #6845 +/- ##
==========================================
- Coverage 80.20% 79.83% -0.37%
==========================================
Files 157 157
Lines 28734 28734
==========================================
- Hits 23047 22941 -106
- Misses 5687 5793 +106
```
| [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/6845?src=pr&el=tree) | Coverage Δ | |
|---|---|---|
| [src/transformers/optimization.py](https://codecov.io/gh/huggingface/transformers/pull/6845/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9vcHRpbWl6YXRpb24ucHk=) | `82.28% <ø> (ø)` | |
| [src/transformers/tokenization\_marian.py](https://codecov.io/gh/huggingface/transformers/pull/6845/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fbWFyaWFuLnB5) | `32.20% <0.00%> (-66.95%)` | :arrow_down: |
| [src/transformers/tokenization\_transfo\_xl.py](https://codecov.io/gh/huggingface/transformers/pull/6845/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fdHJhbnNmb194bC5weQ==) | `20.53% <0.00%> (-21.21%)` | :arrow_down: |
| [src/transformers/file\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/6845/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9maWxlX3V0aWxzLnB5) | `82.66% <0.00%> (+0.25%)` | :arrow_up: |
| [src/transformers/tokenization\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/6845/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fdXRpbHMucHk=) | `89.84% <0.00%> (+0.39%)` | :arrow_up: |
| [src/transformers/tokenization\_bert.py](https://codecov.io/gh/huggingface/transformers/pull/6845/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fYmVydC5weQ==) | `92.41% <0.00%> (+0.44%)` | :arrow_up: |
| [src/transformers/tokenization\_utils\_fast.py](https://codecov.io/gh/huggingface/transformers/pull/6845/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fdXRpbHNfZmFzdC5weQ==) | `94.28% <0.00%> (+1.42%)` | :arrow_up: |
| [src/transformers/tokenization\_openai.py](https://codecov.io/gh/huggingface/transformers/pull/6845/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fb3BlbmFpLnB5) | `84.09% <0.00%> (+1.51%)` | :arrow_up: |
| [src/transformers/tokenization\_utils\_base.py](https://codecov.io/gh/huggingface/transformers/pull/6845/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fdXRpbHNfYmFzZS5weQ==) | `93.63% <0.00%> (+7.18%)` | :arrow_up: |
| [src/transformers/tokenization\_roberta.py](https://codecov.io/gh/huggingface/transformers/pull/6845/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fcm9iZXJ0YS5weQ==) | `98.63% <0.00%> (+10.95%)` | :arrow_up: |
------
[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/6845?src=pr&el=continue).
> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)
> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data`
> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/6845?src=pr&el=footer). Last update [2de7ee0...9e2f5ef](https://codecov.io/gh/huggingface/transformers/pull/6845?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).
|
|
transformers | 6,844 | closed | Pegasus: replication and distillation results | ### Replication
[link](https://github.com/google-research/pegasus)
mixed & stochastic column of this [table](https://github.com/google-research/pegasus#results-update)
| dataset | Authors | This Repo | best bart | best bart name |
| ---- | ----|----|----|----|
| xsum | 47.60/24.83/39.64| 46.87/24.46/39.15|22.32/37.39|distilbart-xsum-12-6|
| cnn_dailymail | 44.16/21.56/41.30| see comment|21.26/30.59|distilbart-cnn-12-6|
| newsroom | 45.07/33.39/41.28 |41.03/29.83/36.96|
| multi_news | 47.65/18.75/24.95|47.58/19.0/24.77|
| gigaword | 39.65/20.47/36.76|39.79/20.56/36.80|
| wikihow | 46.39/22.12/38.41 *|46.85/23.64/28.73|
| reddit_tifu | 27.99/9.81/22.94|32.75/11.68/24.97|
| big_patent |52.29/33.08/41.66 *|
| arxiv | 44.21/16.95/25.67|44.83/17.34/25.60|
| pubmed | 45.97/20.15/28.25|45.40/19.42/26.93|
| aeslc | 37.68/21.25/36.51|37.09/21.40/35.93|
| billsum | 59.67/41.58/47.59|56.18/39.94/45.39|
+ (* authors' footnote) the numbers for the wikihow and big_patent datasets are not comparable because of changes in tokenization and data
#### Final Update (2020-10-16)
Mission accomplished thanks to the work of @patil-suraj, and @stas00 !
The above table now shows that our results are close enough.
We suspect differences are due to treatment of the `<n>` character that pegasus generates and slightly different beam search implementations.
[Link to Spreadsheet with timing data](https://docs.google.com/spreadsheets/d/1ODfoK-tXOV6TLXDMnujdGLtFhA8oVTy-Cv6Ib6qKgWk/edit?usp=sharing)
Questions about specific results should be asked on the forums/separate issues with @stas00, @patil-suraj, and @sshleifer tagged.
 | 08-31-2020 14:39:14 | 08-31-2020 14:39:14 | If anyone wants to help, evaluate on a dataset where the third column is not filled in.
Steps:
First, download the data from nlp package, save to disk in format described in https://github.com/huggingface/transformers/blob/master/examples/seq2seq/download_wmt.py
Helper function for run_eval
```bash
gen_test_hub_summ () {
# need to add --fp16 and --bs = whatever
model=$1
DATA_DIR=$2
echo $DATA_DIR
save_dir=$3
mkdir -p $save_dir
shift
shift
shift
python run_eval.py $model $DATA_DIR/test.source $save_dir/test_gens.txt --reference_path $DATA_DIR/test.target --score_path $save_dir/test_rouge.json --task summarization $@
}
```
Then Roughly:
```
cd examples/seq2seq
gen_test_hub_summ google/pegasus-{dataset} dataset {dataset}_results --bs 4
```
Leave the results, as well as any observations about truncation produced summaries as a comment in this issue!
<|||||>### CNN Dailymail
One possible reason for the replication issue is that our beam search logic differs from the original's, causing 16% of the summaries to be truncated.
Finetuning with our finetuning code and `--max_target_length=142` partially fixes this issue:
+ Can get a distilled version (16-4) `43.23/21.29/31.3` .436 S/sample (released at `sshleifer/dpx-cnn-16-4`)
+ Can finetune the 16-16 pegasus-cnn checkpoint to get `44.13/21.37/30.94` 1.4S/Sample (0.2 Rouge2 behind published.) ( `sshleifer/pegasus-cnn-ft-v2`)
+ original google/pegasus-cnn_dailymail scored 20.73 Rouge 2.
+ For both of these finetuned models, >99.8% of generations end in punctuation.
### XSUM
`sshleifer/distill-pegasus-xsum-16-4`
```
{"rouge1": 44.942, "rouge2": 23.0412, "rougeL": 37.8579,
"n_obs": 11333, "seconds_per_sample": 0.1972, "batch_size": 16}
```
Teacher metrics (I don't remember batch size):
```
{"rouge1": 46.8773, "rouge2": 24.46, "rougeL": 39.1507,
"n_obs": 11328, "seconds_per_sample": 0.3308}
```
<|||||>I intend to post a writeup on distillation techniques at some point before Oct 15!<|||||>Re: replication, best download strategy maybe to start with
https://github.com/google-research/pegasus/blob/master/pegasus/data/public_datasets_test.py and modify.<|||||>Cnn update:
- I believe we have a preprocessing issue. Ported models generate the `<n>` token at the beginning of sentences, whereas ours do not. The pegasus original code replaces newline symbol with `<n>`. `PegasusTokenizer` should probably do this: https://github.com/huggingface/transformers/issues/7327<|||||>For CNNDM, I can get this score with `google/pegasus-cnn_dailymail` model.
```
ROUGE-1:
rouge_1_f_score: 0.4436 with confidence interval (0.4413, 0.4459)
rouge_1_recall: 0.4825 with confidence interval (0.4797, 0.4853)
rouge_1_precision: 0.4368 with confidence interval (0.4339, 0.4395)
ROUGE-2:
rouge_2_f_score: 0.2145 with confidence interval (0.2120, 0.2170)
rouge_2_recall: 0.2323 with confidence interval (0.2297, 0.2350)
rouge_2_precision: 0.2124 with confidence interval (0.2097, 0.2150)
ROUGE-l:
rouge_l_f_score: 0.4141 with confidence interval (0.4118, 0.4165)
rouge_l_recall: 0.4501 with confidence interval (0.4474, 0.4530)
rouge_l_precision: 0.4079 with confidence interval (0.4051, 0.4106)
```
Script I run:
```
./run_eval.py google/pegasus-cnn_dailymail /home/ffajri/Data/huggingface/cnn_dm/test.source pred_cnndm_pegasus.txt \
--reference_path /home/ffajri/Data/huggingface/cnn_dm/test.target \
--score_path cnn_rouge.json \
--task summarization \
--device cuda \
--max_source_length 512 \
--max_target_length 128 \
--bs 4
```
I notice the first R1 output from the transformer is 43.xx something, but I recalculate ROUGE (to get the scores above) as follows:
1) First, I replace `<n>` with `\n` in the decoding results. (as you said above)
2) I don't use the gold summary provided by `huggingface` because sentences are not separated by the newline character. I think it's necessary to separate sentences in the gold summary. So I use the original gold test set from See et al., 2017 to compute ROUGE.
3) I lower case all decoded and gold summaries (but not sure if it really affects the ROUGE score)
4) I calculate ROUGE with the `pyrouge` code (not the ROUGE in transformers)
Hope it can help the fix.
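A rough Python sketch of that post-processing (file names are placeholders, and the lowercasing is kept since it was part of the steps above):
```python
def postprocess(pred: str) -> str:
    # Turn Pegasus' sentence separator back into real newlines and lowercase,
    # so every sentence sits on its own line before ROUGE is computed.
    return pred.strip().replace("<n>", "\n").lower()

with open("pred_cnndm_pegasus.txt", encoding="utf-8") as f:
    predictions = [postprocess(line) for line in f]
```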
<|||||>Would you be willing to share a few lines of
`cnn_dm/test.source`, `pred_cnndm_pegasus.txt`, and `cnn_dm/test.target`
Thanks!<|||||>Hi, for inference, I use the same set from `huggingface`
**`test.source`**
``
Marseille, France (CNN)The French prosecutor leading an investigation into the crash of Germanwings Flight 9525 insisted Wednesday that he was not aware of any video footage from on board the plane. Marseille prosecutor Brice Robin told CNN that "so far no videos were used in the crash investigation." He added, "A person who has such a video needs to immediately give it to the investigators." ............
``
**`test.target`**
``
Marseille prosecutor says "so far no videos were used in the crash investigation" despite media reports . Journalists at Bild and Paris Match are "very confident" the video clip is real, an editor says . Andreas Lubitz had informed his Lufthansa training school of an episode of severe depression, airline says .
``
**`pred_cnndm_pegasus.txt`** (Result)
``
"A person who has such a video needs to immediately give it to the investigators," prosecutor says .<n>"It is a very disturbing scene," editor-in-chief of Bild online tells "Erin Burnett: Outfront"
``
Then, I got R1 = 43.xx (as the `./run_eval.py` output)
To get the R1 = 44.xx, I separately calculate ROUGE (pyrouge) with:
**`test.target`** from [See et al., 2017 ](https://github.com/abisee/pointer-generator)
``
marseille prosecutor says '' so far no videos were used in the crash investigation '' despite media reports .\njournalists at bild and paris match are '' very confident '' the video clip is real , an editor says .\nandreas lubitz had informed his lufthansa training school of an episode of severe depression , airline says .
``
_updated_ **`pred_cnndm_pegasus.txt`**
``
"a person who has such a video needs to immediately give it to the investigators," prosecutor says .\n"it is a very disturbing scene," editor-in-chief of bild online tells "erin burnett: outfront"
``
Both now have `\n` which I think is necessary for calculating ROUGE.<|||||>We fixed our `calculate_rouge_score` to address the `\n` issue and now we are getting
44.31/21.53/41.15 for `sshleifer/pegasus-cnn-ft-v2`! Thanks for the help!
<|||||>Updated the table in the Issue description with most recent results after the `calculate_rouge_fix`
Moving forward, questions about specific results should be asked on the forums or in a separate issue with @stas00, @patil-suraj, and @sshleifer tagged.<|||||>hi guys :
is there code for pretraining the model on my own data?
Thank you
<|||||>Thank you for reproducing these results!
Regarding the treatment of the \<n\>, newline char "\n" in input text are being replaced by "\<n\>" and vice versa for the output.<|||||>I have tried around 10 sets of hyperparameters and only achieved nearly worse results. (ROUGE-1 ~ 43.9, for CNN/DailyMail) These are options of my experiments:
- Optimizer: Adafactor <-> AdamW
- Learning rate: 5e-4 <-> 1e-4
- Batch size: 4
- Gradient accumulation steps: 1 <-> 8 <-> 64
- Accelerator: dp <-> ddp
- Epochs: 20 - 80 (after around 10 epochs it started to overfit (val loss increases))
- Datasets: both old and new versions (the old version doesn't contain
\<n\> in the target summary)
I don't know what to continue, can someone tell me what my problems are?<|||||>Hi @thongnguyen050999
See if this comment above helps
https://github.com/huggingface/transformers/issues/6844#issuecomment-699499846<|||||>Hi @patil-suraj,
Yes, I did notice that, these are my results:
- Sentence ends with "\<n\>": ROUGE-1: 45.94, ROUGE-L: 32.24
- Sentence ends with "\\n": ROUGE-1: 43.96, ROUGE-L: 40.87<|||||>Are my results reasonable (representing the expected outcome)? :-) <|||||>> Are my results reasonable (representing the expected outcome)? :-)
Hi, can you please tell me a bit about what do you want to achieve? and which pre-trained Pegasus model are you currently using? It seems to me you are not doing only inference but some fine-tuning of the Pegasus model (based on your hyperparameter)?
<|||||>Yes, here is my experiment description:
- Goal: I want to reproduce the results from the Pegasus paper (in the future I might add some changes based upon the baseline 🧑🎓 ), in which I finetuned from the pretrained checkpoint
- Pretrained model I use: google/pegasus-large <|||||>I guess `google/pegasus-large` in `huggingface` is a Mixed & Stochastic model where we expect to have 44.16/21.56/41.30 (which is slightly lower than your current score).
Have you tried to set the hyperparameter of the original implementation? You can check it [here]( https://github.com/google-research/pegasus/blob/939830367bcf411193d2b5eca2f2f90f3f9260ca/pegasus/params/public_params.py).
The primary hyperparameter will be this:
"max_input_len": 1024, --> (longer text)
"max_output_len": 128,
"train_steps": 210000,
"learning_rate": 0.001,
"batch_size": 8,
You probably want to follow their hyperparameter for inference as well (e.g. beam size etc)<|||||>Hi @fajri91, I have tried your suggestion and achieved the following results after 210k steps:
- Huggingface version:
+ ROUGE-1 = 43.2011
+ ROUGE-L = 39.99
- Google version (I ran their default code without modifications)
+ ROUGE-1 = 43.01
+ ROUGE-L = 39.92<|||||>> ### Replication
Hi Sam, I have a quick question about reproducing the results for Gigaword using the "google/pegasus-gigaword" checkpoint provided by Google. Currently, I follow a very simple setup with "google/pegasus-gigaword" and the default Hugging Face generation code for Gigaword summaries. For the dataset, I directly load 'gigaword' from the datasets library without pre-processing, and I use the rouge_score library to compute the ROUGE scores. However, my results on the 1951 test samples of Gigaword deviate by almost 10 ROUGE points (rouge1, rouge2, rougeL: 28, 12 and 25 vs 39.79/20.56/36.80). Could you share your setup for reproducing your experiment?
Thanks in advance!
|
transformers | 6,843 | closed | Adding another translation example | - IWSLT 2017 (should be added to `nlp` via
https://github.com/huggingface/nlp/pull/470#issue-462074344. (Currently
coded file here)
- NL-EN, should hopefully work for any language pair (Could be checked).
- Training loop in lightning, poor training logging, notably no
translation examples, which are probably important.
| 08-31-2020 12:38:08 | 08-31-2020 12:38:08 | # [Codecov](https://codecov.io/gh/huggingface/transformers/pull/6843?src=pr&el=h1) Report
> Merging [#6843](https://codecov.io/gh/huggingface/transformers/pull/6843?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/2de7ee0385bee4134ca894a208fa3a2aaf7d5371?el=desc) will **decrease** coverage by `2.38%`.
> The diff coverage is `n/a`.
[](https://codecov.io/gh/huggingface/transformers/pull/6843?src=pr&el=tree)
```diff
@@ Coverage Diff @@
## master #6843 +/- ##
==========================================
- Coverage 80.20% 77.82% -2.39%
==========================================
Files 157 157
Lines 28734 28734
==========================================
- Hits 23047 22362 -685
- Misses 5687 6372 +685
```
| [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/6843?src=pr&el=tree) | Coverage Δ | |
|---|---|---|
| [src/transformers/modeling\_tf\_longformer.py](https://codecov.io/gh/huggingface/transformers/pull/6843/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9sb25nZm9ybWVyLnB5) | `16.37% <0.00%> (-82.31%)` | :arrow_down: |
| [src/transformers/modeling\_reformer.py](https://codecov.io/gh/huggingface/transformers/pull/6843/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19yZWZvcm1lci5weQ==) | `16.87% <0.00%> (-79.30%)` | :arrow_down: |
| [src/transformers/configuration\_reformer.py](https://codecov.io/gh/huggingface/transformers/pull/6843/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9jb25maWd1cmF0aW9uX3JlZm9ybWVyLnB5) | `21.62% <0.00%> (-78.38%)` | :arrow_down: |
| [src/transformers/modeling\_tf\_flaubert.py](https://codecov.io/gh/huggingface/transformers/pull/6843/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9mbGF1YmVydC5weQ==) | `24.53% <0.00%> (-63.81%)` | :arrow_down: |
| [src/transformers/configuration\_bart.py](https://codecov.io/gh/huggingface/transformers/pull/6843/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9jb25maWd1cmF0aW9uX2JhcnQucHk=) | `90.00% <0.00%> (-4.00%)` | :arrow_down: |
| [src/transformers/generation\_tf\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/6843/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9nZW5lcmF0aW9uX3RmX3V0aWxzLnB5) | `85.46% <0.00%> (-0.76%)` | :arrow_down: |
| [src/transformers/file\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/6843/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9maWxlX3V0aWxzLnB5) | `82.66% <0.00%> (+0.25%)` | :arrow_up: |
| [src/transformers/modeling\_tf\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/6843/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl91dGlscy5weQ==) | `87.29% <0.00%> (+0.32%)` | :arrow_up: |
| [src/transformers/modeling\_bart.py](https://codecov.io/gh/huggingface/transformers/pull/6843/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19iYXJ0LnB5) | `95.40% <0.00%> (+0.34%)` | :arrow_up: |
| [src/transformers/tokenization\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/6843/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fdXRpbHMucHk=) | `89.84% <0.00%> (+0.39%)` | :arrow_up: |
| ... and [13 more](https://codecov.io/gh/huggingface/transformers/pull/6843/diff?src=pr&el=tree-more) | |
------
[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/6843?src=pr&el=continue).
> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)
> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data`
> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/6843?src=pr&el=footer). Last update [2de7ee0...52b62c3](https://codecov.io/gh/huggingface/transformers/pull/6843?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).
<|||||>This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.
<|||||>Stale |
transformers | 6,842 | closed | Update `convert_bart` script to allow loading nonstandard model architectures and custom pretrained Fairseq models. | **edit 30/10/20** - this approach doesn't work anymore given the differences in interface from when I submitted it it now (mostly because of Hydra configs in fairseq). Closed.
# 🚀 Feature request
Allow the `convert_bart` script to load custom BART pre-trained models from fairseq.
## Motivation
The current BART conversion script to migrate from Fairseq to Huggingface assumes that you are using predefined shape/parameter values from a BART model. There might be a usecase where you want to experiment with your own Seq2Seq pretraining and train a smaller BART model (i.e. cutting dimensions by 1/3rd) because of small datasets/pruning/some other reason. There presently isn't a dedicated Fairseq tutorial on training your own BART models but it is possible using the "denoising" task and the appropriate parameters (e.g. [preprocess](https://gist.github.com/tomsherborne/ab1a5a28f9d843cf633d6f7843e96a63) and [train](https://gist.github.com/tomsherborne/ae3529375b7a538a1b03a53f34850234)). This is what I have been trying with a small dataset and a smaller BART model.
Then if you want to use this model within `transformers` - the conversion script doesn't work because (a) `load_xsum_checkpoint` assumes your model should be the shape of `bart.large.cnn` and rejects the incorrect tensor sizes and (b) an additional weight `decoder.output_projection.weight` needs to be ignored.
To convert correctly to `transformers` and pass all the tests - I found you can switch from `torch.hub.load` to `fairseq.models.bart.BARTModel.from_pretrained` which calls `torch.hub` internally and gives you the same output model. This means you can convert local, custom BART models into `transformers` models and use them in downstream tasks or upload them to the models archive.
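For reference, a minimal sketch of that loading step (the paths below are placeholders for a locally trained checkpoint directory containing `dict.txt` and the checkpoint file):
```python
from fairseq.models.bart import BARTModel

checkpoint_dir = "runs/fairseq/bart_base_enqa_small"  # placeholder path
bart = BARTModel.from_pretrained(
    checkpoint_dir,
    checkpoint_file="checkpoint_best.pt",
    data_name_or_path=checkpoint_dir,
)
bart.eval()
```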
## Your contribution
I made my own version of `convert_bart` [here](https://gist.github.com/tomsherborne/e7b629ee9cf0618febb211683a410ce5) which passes the scripts own tests. This could be improved/refactored if this feature is still useful. There could be a better way to do all this that I have missed. There's a hack because I call `torch.load` and `from_pretrained()` on the same weights and I'm not sure if they are both needed.
The execution is a bit ugly because I set `hf_config` to the path of a JSON file created from `BartConfig.from_pretrained()` and then manually adjusted it to the new model size.
```
SCRIPT_PATH="${HOME}/ed/sp/transformers/src/transformers/convert_bart_original_pytorch_checkpoint_to_pytorch.py"
TEMPLATE_ARCHIVE="/Users/tom/ed/sp/pretrain/runs/fairseq/bart_base_enqa_small/bart-mini.tar.gz"
CHECKPOINT_PATH="${HOME}/ed/sp/pretrain/runs/fairseq/bart_base_enqa_small/ckpt/checkpoint_best.pt"
DUMP_PATH="${HOME}/ed/sp/pretrain/runs/hf_fromfairseq/bart-mini"
CONFIG_PATH="${HOME}/ed/sp/pretrain/config/hf_bart_configs/bart-mini/config.json"
source ~/miniconda3/bin/activate $ENV_NAME
python $SCRIPT_PATH --hf_config $CONFIG_PATH --model_template $TEMPLATE_ARCHIVE $CHECKPOINT_PATH $DUMP_PATH
``` | 08-31-2020 10:58:25 | 08-31-2020 10:58:25 | |
transformers | 6,841 | closed | TF Flaubert w/ pre-norm | #5614 fixed TF Flaubert without pre-norm, but didn't fix it with the pre-norm.
Fixes #6084
| 08-31-2020 07:52:24 | 08-31-2020 07:52:24 | # [Codecov](https://codecov.io/gh/huggingface/transformers/pull/6841?src=pr&el=h1) Report
> Merging [#6841](https://codecov.io/gh/huggingface/transformers/pull/6841?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/05c3214153d30245928279724ce2a9b701ec8aab?el=desc) will **decrease** coverage by `0.10%`.
> The diff coverage is `n/a`.
[](https://codecov.io/gh/huggingface/transformers/pull/6841?src=pr&el=tree)
```diff
@@ Coverage Diff @@
## master #6841 +/- ##
==========================================
- Coverage 80.27% 80.16% -0.11%
==========================================
Files 157 157
Lines 28586 28586
==========================================
- Hits 22946 22916 -30
- Misses 5640 5670 +30
```
| [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/6841?src=pr&el=tree) | Coverage Δ | |
|---|---|---|
| [src/transformers/modeling\_tf\_flaubert.py](https://codecov.io/gh/huggingface/transformers/pull/6841/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9mbGF1YmVydC5weQ==) | `88.34% <ø> (ø)` | |
| [src/transformers/tokenization\_ctrl.py](https://codecov.io/gh/huggingface/transformers/pull/6841/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fY3RybC5weQ==) | `78.64% <0.00%> (-17.48%)` | :arrow_down: |
| [src/transformers/tokenization\_xlm\_roberta.py](https://codecov.io/gh/huggingface/transformers/pull/6841/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25feGxtX3JvYmVydGEucHk=) | `84.52% <0.00%> (-10.72%)` | :arrow_down: |
| [src/transformers/generation\_tf\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/6841/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9nZW5lcmF0aW9uX3RmX3V0aWxzLnB5) | `85.21% <0.00%> (-1.01%)` | :arrow_down: |
| [src/transformers/file\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/6841/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9maWxlX3V0aWxzLnB5) | `82.66% <0.00%> (+0.25%)` | :arrow_up: |
------
[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/6841?src=pr&el=continue).
> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)
> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data`
> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/6841?src=pr&el=footer). Last update [05c3214...f863f8e](https://codecov.io/gh/huggingface/transformers/pull/6841?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).
|
transformers | 6,840 | closed | Model cards for loodos | Adds cards for 6 models in the ./models/loodos directory.
albert-base-turkish-uncased
bert-base-turkish-uncased
electra-base-turkish-uncased
electra-base-turkish-64k-uncased-discriminator
electra-small-turkish-cased-discriminator
electra-small-turkish-uncased-discriminator | 08-31-2020 07:26:45 | 08-31-2020 07:26:45 | |
transformers | 6,839 | closed | pegasus-large: Can we have input text descriptions more than the maximum input length of 512? | As mentioned in the Pegasus paper, the authors used sinusoidal positional encoding to make sure the model can work with inputs longer than the default 512.
So with huggingface implementation can we use inputs with longer lengths? | 08-31-2020 06:43:19 | 08-31-2020 06:43:19 | Yes ! https://discuss.huggingface.co/t/pegasus-questions/838/8?u=valhalla<|||||>Thanks |
transformers | 6,838 | closed | fix typo in comments (modeling_bert) | 08-31-2020 03:23:26 | 08-31-2020 03:23:26 | ||
transformers | 6,837 | closed | tokenization_gpt2 save vocabulary is not saving special tokens | ## Environment info
- `transformers` version: 2.8.0
- Platform: linux
- Python version: 3.6.10
- PyTorch version (GPU?): GPU
- Tensorflow version (GPU?): -
- Using GPU in script?: yes
- Using distributed or parallel set-up in script?: no
### Who can help
## Information
Model I am using (Bert, XLNet ...):
GPT2
The problem arises when using:
the official example scripts: (give details below)
The tasks I am working on is:
my own task or dataset: (give details below)
## To reproduce
Steps to reproduce the behavior:
1. Load a pre-trained tokenizer
2. Add your intended special tokens: `tokenizer.add_tokens(SPECIAL_TOKENS_LIST)`
3. Save your tokenizer's vocabulary with: `tokenizer.save_vocabulary(PATH)`
## Expected behavior
With the current `save_vocabulary` function, we are just saving the predefined tokens:
```python
with open(vocab_file, "w", encoding="utf-8") as f:
    f.write(json.dumps(self.encoder, ensure_ascii=False))
```
These lines should be modified as follows to save the special tokens as well:
```python
vocab_with_special_tokens = dict(self.encoder, **self.added_tokens_encoder)
with open(vocab_file, "w", encoding="utf-8") as f:
    f.write(json.dumps(vocab_with_special_tokens, ensure_ascii=False))
```
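For illustration, here is a small sketch of the reported difference (the token name and output directories are placeholders): `save_vocabulary` only writes `vocab.json`/`merges.txt`, while `save_pretrained` also writes `added_tokens.json` with the added tokens.
```python
import os
from transformers import GPT2Tokenizer

tokenizer = GPT2Tokenizer.from_pretrained("gpt2")
tokenizer.add_tokens(["<my_special_token>"])  # placeholder added token

os.makedirs("vocab_only", exist_ok=True)
os.makedirs("full_tokenizer", exist_ok=True)

tokenizer.save_vocabulary("vocab_only")      # vocab.json / merges.txt only
tokenizer.save_pretrained("full_tokenizer")  # also writes added_tokens.json etc.
```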
| 08-31-2020 01:41:13 | 08-31-2020 01:41:13 | Hi! The `save_vocabulary` method, as its name implies and as is explained in its [docstring](https://huggingface.co/transformers/main_classes/tokenizer.html#transformers.PreTrainedTokenizer.save_vocabulary), only saves the vocabulary. If you want to save the entire tokenizer (with special tokens), you should use the `save_pretrained` method. |
transformers | 6,836 | closed | RAM MemoryError | # ❓ Questions & Help
I was wondering whether the RAM MemoryError issue has been solved.
I encountered this issue because my 128 GB RAM memory could not load all 48 GB data.
There are some discussions before, such as #6636, #3388, #1290, #4009
However, I don't see a lazy DataLoader available right now.
Could you provide any hints about how to deal with a large dataset?
Thanks! | 08-31-2020 01:11:42 | 08-31-2020 01:11:42 | Do you solve the problem?......<|||||>This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.
|
transformers | 6,835 | closed | DistributedSortishSampler | In examples/seq2seq/finetune.py,
`--sortish_sampler --gpus 2` will raise an assertion error, but if you remove the assert, it will raise another error. Ideally we should make a Seq2SeqDataset.get_distributed_sortish_sampler method and use it in the relevant case.
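A hypothetical sketch of what such a sampler could look like (class and argument names are made up, and the real sortish ordering adds noise rather than a strict sort):
```python
import torch.distributed as dist
from torch.utils.data import Sampler


class DistributedSortishSampler(Sampler):
    """Give each rank its own shard of indices, then order the shard by source length."""

    def __init__(self, src_lens, num_replicas=None, rank=None):
        self.src_lens = src_lens
        self.num_replicas = num_replicas if num_replicas is not None else dist.get_world_size()
        self.rank = rank if rank is not None else dist.get_rank()
        self.num_samples = len(src_lens) // self.num_replicas

    def __iter__(self):
        shard = list(range(self.rank, len(self.src_lens), self.num_replicas))[: self.num_samples]
        shard.sort(key=lambda i: self.src_lens[i], reverse=True)  # sortish would add randomness here
        return iter(shard)

    def __len__(self):
        return self.num_samples
```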
| 08-30-2020 23:09:30 | 08-30-2020 23:09:30 | |
transformers | 6,834 | closed | [s2s] distill: --normalize_hidden --supervise_forward | This appears to help pegasus and marian distillation.
running bart-large-xsum-12-3 baseline.
- No impact on bart-large-xsum distillation.
- +1 BLEU for Marian
- +20 ROUGE for pegasus (impossible to do anything without.)
- verified that the `torch.stack` math is identical to the old for loop math. | 08-30-2020 21:16:20 | 08-30-2020 21:16:20 | # [Codecov](https://codecov.io/gh/huggingface/transformers/pull/6834?src=pr&el=h1) Report
> Merging [#6834](https://codecov.io/gh/huggingface/transformers/pull/6834?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/c5d43a872f0e85ce069e921c5bda02374e5b9cbf?el=desc) will **decrease** coverage by `2.98%`.
> The diff coverage is `n/a`.
[](https://codecov.io/gh/huggingface/transformers/pull/6834?src=pr&el=tree)
```diff
@@ Coverage Diff @@
## master #6834 +/- ##
==========================================
- Coverage 80.02% 77.04% -2.99%
==========================================
Files 161 161
Lines 30120 30120
==========================================
- Hits 24104 23205 -899
- Misses 6016 6915 +899
```
| [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/6834?src=pr&el=tree) | Coverage Δ | |
|---|---|---|
| [src/transformers/configuration\_reformer.py](https://codecov.io/gh/huggingface/transformers/pull/6834/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9jb25maWd1cmF0aW9uX3JlZm9ybWVyLnB5) | `21.62% <0.00%> (-78.38%)` | :arrow_down: |
| [src/transformers/modeling\_reformer.py](https://codecov.io/gh/huggingface/transformers/pull/6834/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19yZWZvcm1lci5weQ==) | `16.87% <0.00%> (-77.64%)` | :arrow_down: |
| [src/transformers/modeling\_lxmert.py](https://codecov.io/gh/huggingface/transformers/pull/6834/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19seG1lcnQucHk=) | `70.01% <0.00%> (-20.75%)` | :arrow_down: |
| [src/transformers/data/data\_collator.py](https://codecov.io/gh/huggingface/transformers/pull/6834/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9kYXRhL2RhdGFfY29sbGF0b3IucHk=) | `91.90% <0.00%> (-0.41%)` | :arrow_down: |
| [src/transformers/tokenization\_utils\_base.py](https://codecov.io/gh/huggingface/transformers/pull/6834/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fdXRpbHNfYmFzZS5weQ==) | `94.03% <0.00%> (+0.27%)` | :arrow_up: |
| [src/transformers/modeling\_bart.py](https://codecov.io/gh/huggingface/transformers/pull/6834/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19iYXJ0LnB5) | `94.41% <0.00%> (+0.50%)` | :arrow_up: |
| [src/transformers/activations.py](https://codecov.io/gh/huggingface/transformers/pull/6834/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9hY3RpdmF0aW9ucy5weQ==) | `90.00% <0.00%> (+5.00%)` | :arrow_up: |
| [src/transformers/modeling\_marian.py](https://codecov.io/gh/huggingface/transformers/pull/6834/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19tYXJpYW4ucHk=) | `90.00% <0.00%> (+30.00%)` | :arrow_up: |
------
[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/6834?src=pr&el=continue).
> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)
> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data`
> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/6834?src=pr&el=footer). Last update [c5d43a8...7325ecf](https://codecov.io/gh/huggingface/transformers/pull/6834?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).
|
transformers | 6,833 | closed | [s2s] command line args for faster val steps | cc @patil-suraj | 08-30-2020 21:09:26 | 08-30-2020 21:09:26 | # [Codecov](https://codecov.io/gh/huggingface/transformers/pull/6833?src=pr&el=h1) Report
> Merging [#6833](https://codecov.io/gh/huggingface/transformers/pull/6833?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/dfa10a41ba3fd9c5289bebd3baeff8792b1b2281?el=desc) will **decrease** coverage by `1.18%`.
> The diff coverage is `n/a`.
[](https://codecov.io/gh/huggingface/transformers/pull/6833?src=pr&el=tree)
```diff
@@ Coverage Diff @@
## master #6833 +/- ##
==========================================
- Coverage 80.02% 78.84% -1.19%
==========================================
Files 157 157
Lines 28586 28586
==========================================
- Hits 22876 22538 -338
- Misses 5710 6048 +338
```
| [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/6833?src=pr&el=tree) | Coverage Δ | |
|---|---|---|
| [src/transformers/configuration\_openai.py](https://codecov.io/gh/huggingface/transformers/pull/6833/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9jb25maWd1cmF0aW9uX29wZW5haS5weQ==) | `34.28% <0.00%> (-62.86%)` | :arrow_down: |
| [src/transformers/tokenization\_albert.py](https://codecov.io/gh/huggingface/transformers/pull/6833/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fYWxiZXJ0LnB5) | `28.84% <0.00%> (-58.66%)` | :arrow_down: |
| [src/transformers/modeling\_openai.py](https://codecov.io/gh/huggingface/transformers/pull/6833/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19vcGVuYWkucHk=) | `23.87% <0.00%> (-57.10%)` | :arrow_down: |
| [src/transformers/trainer.py](https://codecov.io/gh/huggingface/transformers/pull/6833/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90cmFpbmVyLnB5) | `13.21% <0.00%> (-40.45%)` | :arrow_down: |
| [src/transformers/modeling\_tf\_distilbert.py](https://codecov.io/gh/huggingface/transformers/pull/6833/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9kaXN0aWxiZXJ0LnB5) | `64.47% <0.00%> (-34.36%)` | :arrow_down: |
| [src/transformers/training\_args.py](https://codecov.io/gh/huggingface/transformers/pull/6833/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90cmFpbmluZ19hcmdzLnB5) | `66.66% <0.00%> (-25.00%)` | :arrow_down: |
| [src/transformers/trainer\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/6833/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90cmFpbmVyX3V0aWxzLnB5) | `64.44% <0.00%> (-20.00%)` | :arrow_down: |
| [src/transformers/tokenization\_dpr.py](https://codecov.io/gh/huggingface/transformers/pull/6833/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fZHByLnB5) | `53.15% <0.00%> (-4.51%)` | :arrow_down: |
| [src/transformers/configuration\_bart.py](https://codecov.io/gh/huggingface/transformers/pull/6833/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9jb25maWd1cmF0aW9uX2JhcnQucHk=) | `90.00% <0.00%> (-4.00%)` | :arrow_down: |
| [src/transformers/data/datasets/glue.py](https://codecov.io/gh/huggingface/transformers/pull/6833/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9kYXRhL2RhdGFzZXRzL2dsdWUucHk=) | `83.58% <0.00%> (-2.99%)` | :arrow_down: |
| ... and [15 more](https://codecov.io/gh/huggingface/transformers/pull/6833/diff?src=pr&el=tree-more) | |
------
[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/6833?src=pr&el=continue).
> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)
> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data`
> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/6833?src=pr&el=footer). Last update [dfa10a4...14cdaee](https://codecov.io/gh/huggingface/transformers/pull/6833?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).
|
transformers | 6,832 | closed | Model.fit on GPT2 and TPUs | ## Environment info
- `transformers` version: 3.0.2
- Platform: Linux-4.19.112+-x86_64-with-Ubuntu-18.04-bionic
- Python version: 3.6.9
- PyTorch version (GPU?): 1.6.0+cu101 (False)
- Tensorflow version (GPU?): 2.3.0 (False)
- Using GPU in script?: No
- Using distributed or parallel set-up in script?: Yes
## Information
Model I am using: GPT2
@jplu When using the keras model.fit() method, it looks like there's a problem with logits and tensors: `Compilation failure: logits and labels must have the same first dimension`. Setting `from_logits = False` doesn't seem to resolve the problem. Any suggestion on how to change model compilation or the dataset to fix this?
```
# TPU and Strategy Initialization
resolver = tf.distribute.cluster_resolver.TPUClusterResolver(tpu='grpc://' + os.environ['COLAB_TPU_ADDR'])
tf.config.experimental_connect_to_cluster(resolver)
tf.tpu.experimental.initialize_tpu_system(resolver)
strategy = tf.distribute.TPUStrategy(resolver)
# Load and compile model
with strategy.scope():
model = TFGPT2LMHeadModel.from_pretrained('gpt2-medium')
model.resize_token_embeddings(len(tokenizer))
optimizer = tf.keras.optimizers.Adam(learning_rate=3e-5)
loss = tf.keras.losses.SparseCategoricalCrossentropy(from_logits=True)
model.compile(optimizer = optimizer,
loss = loss)
# Format inputs
input_ids = tf.convert_to_tensor(input_ids)
attention_mask = tf.convert_to_tensor(attention_mask)
labels = tf.convert_to_tensor(labels)
# Create dataset
dataset = tf.data.Dataset.from_tensor_slices(({'input_ids': input_ids, 'attention_mask': attention_mask}, labels))
num_examples = tf.data.experimental.cardinality(dataset).numpy()
train_dataset = dataset.repeat().shuffle(num_examples).batch(8)
# Train model
model.fit(x = train_dataset, epochs = 1, batch_size = 8, steps_per_epoch = num_examples)
---------------------------------------------------------------------------
InvalidArgumentError Traceback (most recent call last)
<ipython-input-10-4efb969ef56e> in <module>()
---> 23 model.fit(x = train_dataset, epochs = 1, batch_size = 8, steps_per_epoch = num_examples)
InvalidArgumentError: 9 root error(s) found.
(0) Invalid argument: {{function_node __inference_train_function_121591}} Compilation failure: logits and labels must have the same first dimension, got logits shape [32768,64] and labels shape [1024]
[[{{node sparse_categorical_crossentropy_24/SparseSoftmaxCrossEntropyWithLogits/SparseSoftmaxCrossEntropyWithLogits}}]]
TPU compilation failed
[[tpu_compile_succeeded_assert/_142128106034484350/_6]]
[[tpu_compile_succeeded_assert/_142128106034484350/_6/_223]]
(1) Invalid argument: {{function_node __inference_train_function_121591}} Compilation failure: logits and labels must have the same first dimension, got logits shape [32768,64] and labels shape [1024]
[[{{node sparse_categorical_crossentropy_24/SparseSoftmaxCrossEntropyWithLogits/SparseSoftmaxCrossEntropyWithLogits}}]]
TPU compilation failed
[[tpu_compile_succeeded_assert/_142128106034484350/_6]]
[[tpu_compile_succeeded_assert/_142128106034484350/_6/_307]]
```
| 08-30-2020 20:23:07 | 08-30-2020 20:23:07 | Humm for me it looks like it is an issue with the dataset creation, but I might be wrong as I don't have the code that creates the features.
Can you try without the `steps_per_epoch` parameter?<|||||>isn't steps_per_epoch equal to `num_examples/batch_size` ? futhermore your labels first dimention (batch) and dataset batch dimention must be equal i.e # rows of dataset/X == # rows of labels `where 32768 !=1024`<|||||>This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.
|
transformers | 6,831 | closed | Update ONNX notebook to include section on quantization. | Added section regarding quantization with performance comparison against PyTorch on CPU.
Signed-off-by: Morgan Funtowicz <[email protected]>
| 08-30-2020 20:16:33 | 08-30-2020 20:16:33 | # [Codecov](https://codecov.io/gh/huggingface/transformers/pull/6831?src=pr&el=h1) Report
> Merging [#6831](https://codecov.io/gh/huggingface/transformers/pull/6831?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/32fe44086c2191c4551b7ff00db7ae1cace9b02e?el=desc) will **increase** coverage by `0.66%`.
> The diff coverage is `n/a`.
[](https://codecov.io/gh/huggingface/transformers/pull/6831?src=pr&el=tree)
```diff
@@ Coverage Diff @@
## master #6831 +/- ##
==========================================
+ Coverage 78.10% 78.77% +0.66%
==========================================
Files 157 157
Lines 28586 28586
==========================================
+ Hits 22328 22519 +191
+ Misses 6258 6067 -191
```
| [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/6831?src=pr&el=tree) | Coverage Δ | |
|---|---|---|
| [src/transformers/configuration\_openai.py](https://codecov.io/gh/huggingface/transformers/pull/6831/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9jb25maWd1cmF0aW9uX29wZW5haS5weQ==) | `34.28% <0.00%> (-62.86%)` | :arrow_down: |
| [src/transformers/tokenization\_albert.py](https://codecov.io/gh/huggingface/transformers/pull/6831/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fYWxiZXJ0LnB5) | `28.84% <0.00%> (-58.66%)` | :arrow_down: |
| [src/transformers/tokenization\_bart.py](https://codecov.io/gh/huggingface/transformers/pull/6831/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fYmFydC5weQ==) | `42.10% <0.00%> (-57.90%)` | :arrow_down: |
| [src/transformers/modeling\_openai.py](https://codecov.io/gh/huggingface/transformers/pull/6831/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19vcGVuYWkucHk=) | `23.87% <0.00%> (-57.10%)` | :arrow_down: |
| [src/transformers/trainer.py](https://codecov.io/gh/huggingface/transformers/pull/6831/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90cmFpbmVyLnB5) | `13.21% <0.00%> (-40.45%)` | :arrow_down: |
| [src/transformers/modeling\_tf\_distilbert.py](https://codecov.io/gh/huggingface/transformers/pull/6831/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9kaXN0aWxiZXJ0LnB5) | `64.47% <0.00%> (-34.36%)` | :arrow_down: |
| [src/transformers/training\_args.py](https://codecov.io/gh/huggingface/transformers/pull/6831/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90cmFpbmluZ19hcmdzLnB5) | `66.66% <0.00%> (-25.00%)` | :arrow_down: |
| [src/transformers/trainer\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/6831/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90cmFpbmVyX3V0aWxzLnB5) | `64.44% <0.00%> (-20.00%)` | :arrow_down: |
| [src/transformers/modeling\_t5.py](https://codecov.io/gh/huggingface/transformers/pull/6831/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190NS5weQ==) | `77.63% <0.00%> (-6.21%)` | :arrow_down: |
| [src/transformers/generation\_tf\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/6831/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9nZW5lcmF0aW9uX3RmX3V0aWxzLnB5) | `81.45% <0.00%> (-5.02%)` | :arrow_down: |
| ... and [20 more](https://codecov.io/gh/huggingface/transformers/pull/6831/diff?src=pr&el=tree-more) | |
------
[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/6831?src=pr&el=continue).
> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)
> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data`
> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/6831?src=pr&el=footer). Last update [32fe440...fb1404a](https://codecov.io/gh/huggingface/transformers/pull/6831?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).
|
transformers | 6,830 | closed | Related to abstractive text summarization | I was wondering if we could use XLM or XLM-R for abstractive text summarization. | 08-30-2020 20:05:44 | 08-30-2020 20:05:44 | If you are looking for summerization in non-English languages you can try using `MBartForConditionalGeneration`, or multilingual Bert using the `EncoderDecoder` framework. Not sure if xlm-r is yet supported in `EncoderDecoder`<|||||>Looking at xlm-r source code it seems that it can be easily added in EncoderDecoder as it subclasses Roberta which is supported in EncoderDecoder<|||||>Alright. I can try mBART and mBERT.
What I was wondering about XLM was, if we could use it in a language modeling setting for this task, like how we use GPT for any seq2seq task.
Sending both text and the summary together and calculating the loss only over the summaries.<|||||>Not sure if that'll work since it's trained with MLM and encoder only with bi-directional attention. What you described above will need a causal LM with unidirectional attention.<|||||>EncoderDecoder class allows you to use encoder only models as both encoder and decoder and fine-tune for seq-2-seq task. Here's an example of Roberta2Roberta fine-tuned on CNN dm https://huggingface.co/patrickvonplaten/roberta2roberta-cnn_dailymail-fp16<|||||>Makes sense. Saw XLMWithLMHead in https://github.com/huggingface/transformers/blob/master/examples/text-generation/run_generation.py, so just got curious.
> Not sure if that'll work since it's trained with MLM and encoder only with bi-directional attention. What you described above will need a causal LM with unidirectional attention.
<|||||>> EncoderDecoder class allows you to use encoder only models as both encoder and decoder and fine-tune for seq-2-seq task. Here's an example of Roberta2Roberta fine-tuned on CNN dm https://huggingface.co/patrickvonplaten/roberta2roberta-cnn_dailymail-fp16
Oh thank you so much!<|||||>Also what you said is doable, xlm-r can be used like a causal LM by configuring the attention mask. Might not give the best results though. See how RobertaForCausalLM is implemented. <|||||>> Makes sense. Saw XLMWithLMHead in https://github.com/huggingface/transformers/blob/master/examples/text-generation/run_generation.py, so just got curious.
>
> > Not sure if that'll work since it's trained with MLM and encoder only with bi-directional attention. What you described above will need a causal LM with unidirectional attention.
Aah Sorry, typo, I meant XLM-R, not xlm<|||||>> Also what you said is doable, xlm-r can be used like a causal LM by configuring the attention mask. Might not give the best results though. See how RobertaForCausalLM is implemented.
Ohh, sure. Will check it out.<|||||>Also, what will be the best way to finetune T5 in a multi-task setting.<|||||>Also, are there any models we can use for code-switched data.<|||||>> Also, are there any models we can use for code-switched data.
Not too familiar with this, but seen few models on model hub and they used Bert.
https://huggingface.co/sagorsarker/codeswitch-hineng-ner-lince
> Also, what will be the best way to finetune T5 in a multi-task setting.
If you can cast all your tasks in text-2-text format then multi-task training can be done simply using task pre-fixes as shown in the paper. Also I think the performance will depend upon the tasks and datasets so some experimentation is necessary. Most important thing when doing multi-task is how you sample examples from different tasks. See section 3.5.2 of T5 paper.
Also the best place to ask this question would be
https://discuss.huggingface.co/t/t5-finetuning-tips/684<|||||>Alright, thank you so much for the help !!<|||||>I tried using Xlmr2Xlmr but seems that regardless of what input I provide I get the same output; I checked to see the is_decoder flag is set to true in the decoder. This issue persists throughout the finetuning process<|||||>This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.
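For reference, a minimal sketch of the `EncoderDecoder` warm-starting pattern mentioned earlier in this thread (the checkpoint names and strings are illustrative assumptions only, and a reasonably recent `transformers` version is assumed):
```python
from transformers import EncoderDecoderModel, RobertaTokenizer

tokenizer = RobertaTokenizer.from_pretrained("roberta-base")
# warm-start both the encoder and the decoder from an encoder-only checkpoint
model = EncoderDecoderModel.from_encoder_decoder_pretrained("roberta-base", "roberta-base")
model.config.decoder_start_token_id = tokenizer.cls_token_id  # needed later for generation
model.config.pad_token_id = tokenizer.pad_token_id

article = "A long article that should be summarized ..."
summary = "A short summary."
inputs = tokenizer(article, return_tensors="pt", truncation=True, max_length=512)
labels = tokenizer(summary, return_tensors="pt", truncation=True, max_length=32).input_ids

# one fine-tuning step: cross-entropy loss over the summary tokens
outputs = model(input_ids=inputs.input_ids, attention_mask=inputs.attention_mask, labels=labels)
outputs.loss.backward()
```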
|
transformers | 6,829 | closed | No attribute '_mp_fn' when fine-tuning mbart for en-ro translation task using TPU | I followed the TPU example in the [examples folder](https://github.com/huggingface/transformers/tree/master/examples) and found xla_spawn.py calls
`xmp.spawn(mod._mp_fn, args=(), nprocs=args.num_cores)`
and [fine-tune.py](https://github.com/huggingface/transformers/blob/master/examples/seq2seq/finetune.py) does not have the "_mp_fn" found in some training scripts.
I get
```
Traceback (most recent call last):
File "examples/xla_spawn.py", line 72, in <module>
main()
File "examples/xla_spawn.py", line 68, in main
xmp.spawn(mod._mp_fn, args=(), nprocs=args.num_cores)
AttributeError: module 'finetune' has no attribute '_mp_fn'
```
Tried to fix it by adding the _mp_fn:
```
#def _mp_fn(index):
# For xla_spawn (TPUs)
# pass
#main()
```
with and without args `main(args)` but neither worked. | 08-30-2020 18:04:17 | 08-30-2020 18:04:17 | @abedkhooli Could I have the command you ran + environment details so that I can try to replicate this?
Thanks!
<|||||>Thanks @sshleifer for looking into this.
TPU type: TPU v2 which is 8 cores, 64 GB (using Google Colab)
```
%%bash
export ENRO_DIR='/content/wmt_en_ro' # Download instructions above
#export WANDB_PROJECT="MT" # optional
export MAX_LEN=32
export BS=8
cd /content/transformers
./mbart_enro.sh
```
mbart_enro.sh:
```
#!/usr/bin/env bash
export PYTHONPATH="../":"${PYTHONPATH}"
python examples/xla_spawn.py --num_cores 8 \
examples/seq2seq/finetune.py \
--learning_rate=3e-5 \
--fp16 \
--do_train \
--val_check_interval=0.25 \
--adam_eps 1e-06 \
--num_train_epochs 1 --src_lang en_XX --tgt_lang ro_RO \
--data_dir $ENRO_DIR \
--max_source_length $MAX_LEN --max_target_length $MAX_LEN --val_max_target_length $MAX_LEN --test_max_target_length $MAX_LEN \
--train_batch_size=$BS --eval_batch_size=$BS \
--task translation \
--warmup_steps 500 \
--freeze_embeds \
--model_name_or_path=facebook/mbart-large-cc25 \
--output_dir enro_finetune_baseline \
--label_smoothing 0.1 \
--fp16_opt_level=O1 --sortish_sampler --n_train 5000 --n_val 500 \
"$@"
```
I believe the issue is adding the correct _mp_fn to examples/seq2seq/finetune.py that matches the main() call (I am not an experienced coder :-)).
<|||||>I see a related [PR#5960](https://github.com/huggingface/transformers/pull/5960) - does that mean moving away from [xla_spawn](https://github.com/huggingface/transformers/blob/master/examples/xla_spawn.py) ?<|||||>That PR is stalled, I am open to using any tpu implementation that works!
<|||||>If using [xla_spawn](https://github.com/huggingface/transformers/blob/master/examples/xla_spawn.py), and adding _mp_fn(..) to [finetune.py](https://github.com/huggingface/transformers/blob/master/examples/seq2seq/finetune.py), how should it (_mp_fn) be defined?<|||||>I don't know, great question. Maybe @LysandreJik would know the answer.<|||||>`_mp_fn(index)` should simply be an entry point to your script that leverages `transformers.Trainer`. You can see examples of it [here](https://github.com/huggingface/transformers/blob/master/examples/text-classification/run_glue.py).
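For illustration, a rough sketch of that pattern (a hypothetical outline; the Trainer-based scripts read their arguments from `sys.argv` inside `main()`, so `_mp_fn` itself needs no extra parameters):
```python
from transformers import HfArgumentParser, TrainingArguments

def main():
    # the real scripts parse model/data/training args from sys.argv here
    parser = HfArgumentParser(TrainingArguments)
    training_args, = parser.parse_args_into_dataclasses()
    # ... build datasets, a model and a transformers.Trainer, then call trainer.train()

def _mp_fn(index):
    # For xla_spawn (TPUs): the process index is unused, everything comes from sys.argv
    main()
```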
Please note that we implemented this to mimic torch's `torch.distributed.launch`. I have no idea how this would work with a `pytorch-lightning` implementation. Doesn't pytorch-lightning have its own way of managing TPU training?<|||||>The main() function in [finetune.py](https://github.com/huggingface/transformers/blob/master/examples/seq2seq/finetune.py#L350) takes arguments, so _mp_fn(index) signature won't work.
```
def _mp_fn(index):
# For xla_spawn (TPUs)
main()
```
`Exception in device=TPU:0: main() missing 1 required positional argument: 'args'`<|||||>Right, but even if you manage to make it work with the args, `finetune.py` is using pytorch-lightning so it won't work with `xla_spawn.py`. You can check the [pytorch-lightning docs](https://pytorch-lightning.readthedocs.io/en/latest/tpu.html) to see how to run on TPU.<|||||>So, [lightning_base.py](https://github.com/huggingface/transformers/blob/master/examples/lightning_base.py#L165) is not ready for TPU yet.<|||||>This is now supported by `Seq2SeqTrainer` which doesn't use PL.
See https://github.com/huggingface/transformers/blob/master/examples/seq2seq/builtin_trainer/finetune_tpu.sh |
transformers | 6,828 | closed | regarding the max token length of longformer | In the encode_plus function of the Tokenizer, there is an argument called max_length whose default value is 4096.
So is it possible to increase the max token length beyond 4096, or is 4096 the maximum value of the max_length argument?
Can anyone clear my doubt
Thanks in advance !!!! | 08-30-2020 13:44:28 | 08-30-2020 13:44:28 | Hi, @rkoystart
I think this notebook will [help](https://github.com/allenai/longformer/blob/master/scripts/convert_model_to_long.ipynb)<|||||>@patil-suraj so it means by default the longformers model provided by huggingface supports maximum tokens of 4096 right ?
and if we want the pretrained model to support sequences even longer than 4096, do we have to follow the instructions in the notebook you mentioned above?
<|||||>This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.
|
transformers | 6,827 | closed | Add model card for singbert lite. | Add model card for singbert lite and update widget for singbert and singbert large.
| 08-30-2020 08:43:48 | 08-30-2020 08:43:48 | # [Codecov](https://codecov.io/gh/huggingface/transformers/pull/6827?src=pr&el=h1) Report
> Merging [#6827](https://codecov.io/gh/huggingface/transformers/pull/6827?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/22933e661fe789874ef58b13d3a9bb2554ba5891?el=desc) will **decrease** coverage by `0.09%`.
> The diff coverage is `n/a`.
[](https://codecov.io/gh/huggingface/transformers/pull/6827?src=pr&el=tree)
```diff
@@ Coverage Diff @@
## master #6827 +/- ##
==========================================
- Coverage 80.02% 79.93% -0.10%
==========================================
Files 157 157
Lines 28586 28586
==========================================
- Hits 22877 22851 -26
- Misses 5709 5735 +26
```
| [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/6827?src=pr&el=tree) | Coverage Δ | |
|---|---|---|
| [src/transformers/tokenization\_transfo\_xl.py](https://codecov.io/gh/huggingface/transformers/pull/6827/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fdHJhbnNmb194bC5weQ==) | `20.53% <0.00%> (-21.21%)` | :arrow_down: |
| [src/transformers/tokenization\_dpr.py](https://codecov.io/gh/huggingface/transformers/pull/6827/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fZHByLnB5) | `53.15% <0.00%> (-4.51%)` | :arrow_down: |
| [src/transformers/generation\_tf\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/6827/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9nZW5lcmF0aW9uX3RmX3V0aWxzLnB5) | `86.21% <0.00%> (-0.51%)` | :arrow_down: |
| [src/transformers/tokenization\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/6827/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fdXRpbHMucHk=) | `89.84% <0.00%> (+0.39%)` | :arrow_up: |
| [src/transformers/tokenization\_bert.py](https://codecov.io/gh/huggingface/transformers/pull/6827/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fYmVydC5weQ==) | `91.51% <0.00%> (+0.44%)` | :arrow_up: |
| [src/transformers/tokenization\_openai.py](https://codecov.io/gh/huggingface/transformers/pull/6827/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fb3BlbmFpLnB5) | `84.09% <0.00%> (+1.51%)` | :arrow_up: |
| [src/transformers/tokenization\_utils\_fast.py](https://codecov.io/gh/huggingface/transformers/pull/6827/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fdXRpbHNfZmFzdC5weQ==) | `94.28% <0.00%> (+2.14%)` | :arrow_up: |
| [src/transformers/tokenization\_utils\_base.py](https://codecov.io/gh/huggingface/transformers/pull/6827/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fdXRpbHNfYmFzZS5weQ==) | `93.49% <0.00%> (+7.18%)` | :arrow_up: |
| [src/transformers/tokenization\_roberta.py](https://codecov.io/gh/huggingface/transformers/pull/6827/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fcm9iZXJ0YS5weQ==) | `98.63% <0.00%> (+21.91%)` | :arrow_up: |
------
[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/6827?src=pr&el=continue).
> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)
> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data`
> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/6827?src=pr&el=footer). Last update [22933e6...ceb655f](https://codecov.io/gh/huggingface/transformers/pull/6827?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).
|
transformers | 6,826 | closed | Loading a converted pytorch model in huggingface transformers properly | # ❓ Questions & Help
<!-- The GitHub issue tracker is primarly intended for bugs, feature requests,
new models and benchmarks, and migration questions. For all other questions,
we direct you to the Hugging Face forum: https://discuss.huggingface.co/ .
You can also try Stack Overflow (SO) where a whole community of PyTorch and
Tensorflow enthusiast can help you out. In this case, make sure to tag your
question with the right deep learning framework as well as the
huggingface-transformers tag:
https://stackoverflow.com/questions/tagged/huggingface-transformers
-->
## Details
<!-- Description of your issue -->
I converted a pre-trained tf model to pytorch using the following function.
```
def convert_tf_checkpoint_to_pytorch(*, tf_checkpoint_path, albert_config_file, pytorch_dump_path):
    # Initialise PyTorch model
    config = AlbertConfig.from_json_file(albert_config_file)
    print("Building PyTorch model from configuration: {}".format(str(config)))
    model = AlbertForPreTraining(config)
    # Load weights from tf checkpoint
    load_tf_weights_in_albert(model, config, tf_checkpoint_path)
    # Save pytorch-model
    print("Save PyTorch model to {}".format(pytorch_dump_path))
    torch.save(model.state_dict(), pytorch_dump_path)
```
I am loading the converted model and encoding sentences in the following way:
```
def vectorize_sentence(text):
    albert_tokenizer = AlbertTokenizer.from_pretrained("albert-base-v2")
    config = AlbertConfig.from_pretrained(config_path, output_hidden_states=True)
    model = TFAlbertModel.from_pretrained(pytorch_dir, config=config, from_pt=True)
    e = albert_tokenizer.encode(text, max_length=512)
    model_input = tf.constant(e)[None, :]  # Batch size 1
    output = model(model_input)
    v = [0] * 768
    # generate sentence vectors by averaging the word vectors
    for i in range(1, len(model_input[0]) - 1):
        v = v + output[0][0][i].numpy()
    vector = v / len(model_input[0])
    return vector
```
However while loading the model, a warning comes up:
> Some weights or buffers of the PyTorch model TFAlbertModel were not
> initialized from the TF 2.0 model and are newly initialized:
> ['predictions.LayerNorm.bias', 'predictions.dense.weight',
> 'predictions.LayerNorm.weight', 'sop_classifier.classifier.bias',
> 'predictions.dense.bias', 'sop_classifier.classifier.weight',
> 'predictions.decoder.bias', 'predictions.bias',
> 'predictions.decoder.weight'] You should probably TRAIN this model on
> a down-stream task to be able to use it for predictions and inference.
Can anyone tell me if I am doing anything wrong? What does the warning mean? I saw issue #5588. Don't know if my issue is the same as this.
<!-- You should first ask your question on the forum or SO, and only if
you didn't get an answer ask it here on GitHub. -->
**A link to original question on the forum/Stack Overflow**:
https://stackoverflow.com/questions/63648380/loading-a-converted-pytorch-model-in-huggingface-transformers-properly | 08-30-2020 06:55:07 | 08-30-2020 06:55:07 | This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.
|
transformers | 6,825 | closed | Fixed open in colab link | Changed non existant link - https://colab.research.google.com/github/huggingface/transformers/blob/master/notebooks/03-pipelines.ipynb
to - https://colab.research.google.com/github/huggingface/transformers/blob/master/notebooks/03-pipelines.ipynb | 08-30-2020 05:56:51 | 08-30-2020 05:56:51 | # [Codecov](https://codecov.io/gh/huggingface/transformers/pull/6825?src=pr&el=h1) Report
> Merging [#6825](https://codecov.io/gh/huggingface/transformers/pull/6825?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/22933e661fe789874ef58b13d3a9bb2554ba5891?el=desc) will **increase** coverage by `0.20%`.
> The diff coverage is `n/a`.
[](https://codecov.io/gh/huggingface/transformers/pull/6825?src=pr&el=tree)
```diff
@@ Coverage Diff @@
## master #6825 +/- ##
==========================================
+ Coverage 80.02% 80.23% +0.20%
==========================================
Files 157 157
Lines 28586 28586
==========================================
+ Hits 22877 22936 +59
+ Misses 5709 5650 -59
```
| [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/6825?src=pr&el=tree) | Coverage Δ | |
|---|---|---|
| [src/transformers/tokenization\_t5.py](https://codecov.io/gh/huggingface/transformers/pull/6825/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fdDUucHk=) | `59.43% <0.00%> (-35.85%)` | :arrow_down: |
| [src/transformers/generation\_tf\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/6825/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9nZW5lcmF0aW9uX3RmX3V0aWxzLnB5) | `86.46% <0.00%> (-0.26%)` | :arrow_down: |
| [src/transformers/tokenization\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/6825/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fdXRpbHMucHk=) | `89.84% <0.00%> (+0.39%)` | :arrow_up: |
| [src/transformers/tokenization\_bert.py](https://codecov.io/gh/huggingface/transformers/pull/6825/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fYmVydC5weQ==) | `91.51% <0.00%> (+0.44%)` | :arrow_up: |
| [src/transformers/tokenization\_openai.py](https://codecov.io/gh/huggingface/transformers/pull/6825/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fb3BlbmFpLnB5) | `84.09% <0.00%> (+1.51%)` | :arrow_up: |
| [src/transformers/tokenization\_utils\_fast.py](https://codecov.io/gh/huggingface/transformers/pull/6825/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fdXRpbHNfZmFzdC5weQ==) | `94.28% <0.00%> (+2.14%)` | :arrow_up: |
| [src/transformers/tokenization\_utils\_base.py](https://codecov.io/gh/huggingface/transformers/pull/6825/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fdXRpbHNfYmFzZS5weQ==) | `93.49% <0.00%> (+7.18%)` | :arrow_up: |
| [src/transformers/tokenization\_roberta.py](https://codecov.io/gh/huggingface/transformers/pull/6825/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fcm9iZXJ0YS5weQ==) | `98.63% <0.00%> (+21.91%)` | :arrow_up: |
| [src/transformers/tokenization\_bart.py](https://codecov.io/gh/huggingface/transformers/pull/6825/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fYmFydC5weQ==) | `100.00% <0.00%> (+57.89%)` | :arrow_up: |
------
[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/6825?src=pr&el=continue).
> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)
> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data`
> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/6825?src=pr&el=footer). Last update [22933e6...747ed9e](https://codecov.io/gh/huggingface/transformers/pull/6825?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).
|
transformers | 6,824 | closed | How to convert '.bin' model to '.onnx' | # ❓ Questions & Help
<!-- The GitHub issue tracker is primarly intended for bugs, feature requests,
new models and benchmarks, and migration questions. For all other questions,
we direct you to the Hugging Face forum: https://discuss.huggingface.co/ .
You can also try Stack Overflow (SO) where a whole community of PyTorch and
Tensorflow enthusiast can help you out. In this case, make sure to tag your
question with the right deep learning framework as well as the
huggingface-transformers tag:
https://stackoverflow.com/questions/tagged/huggingface-transformers
-->
## Details
<!-- Description of your issue -->
<!-- You should first ask your question on the forum or SO, and only if
you didn't get an answer ask it here on GitHub. -->
**A link to original question on the forum/Stack Overflow**: | 08-30-2020 04:57:45 | 08-30-2020 04:57:45 | This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.
|
transformers | 6,823 | closed | How to use encode_plus to force padding to specific length | I am using the following code in the __getitem__() method of my dataset:
```python
class MyDataset(Dataset):
    def __init__(self, myargs):
        # other code here
        self.tokenizer = BertTokenizer.from_pretrained('bert-base-cased')
        self.max_len = 50
```
__getitem__ method:
```python
    def __getitem__(self, idx):
        # other code here
        text_encoded = self.tokenizer.encode_plus(
            text,
            add_special_tokens=True,
            padding=True,
            truncation=True,
            max_length=self.max_len,
            return_token_type_ids=False,
            pad_to_max_length=True,
            return_attention_mask=True,
            return_tensors='pt',
        )
        input_ids = text_encoded['input_ids'].flatten()
        attention_mask = text_encoded['attention_mask'].flatten()

        # >>input_ids: torch.Size([9]), attention_mask: torch.Size([9])
        # >>input_ids: torch.Size([21]), attention_mask: torch.Size([21])
```
Even though I have set padding and truncation to True and set a max_length the returned lengths of the input_ids and attention_mask values in the retured text_encoded dict are variable depending on the input text. Is this normal behaviour? if so how can I ensure that ecery rturned sample is padded out to and truncated at a specific length? | 08-30-2020 01:47:07 | 08-30-2020 01:47:07 | I worked out that if you set padding=False then the text is padded out correctly with input_ids and attention_mask values of 0. This is the opposite setting of what I thought padding to mean but it works.
```
>>input_ids: torch.Size([1, 50]), attention_mask: torch.Size([1, 50]) #pre flatten
``` |
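A minimal sketch of requesting the fixed length explicitly, assuming a recent tokenizer API where `padding` accepts the string `'max_length'`:
```python
from transformers import BertTokenizer

tokenizer = BertTokenizer.from_pretrained('bert-base-cased')
enc = tokenizer.encode_plus(
    "a short example sentence",
    add_special_tokens=True,
    padding='max_length',   # pad every sample up to max_length
    truncation=True,        # and cut longer samples down to it
    max_length=50,
    return_attention_mask=True,
    return_tensors='pt',
)
print(enc['input_ids'].shape, enc['attention_mask'].shape)  # both torch.Size([1, 50])
```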
transformers | 6,822 | closed | [s2s README] link to cnn dataset with empty lines removed | <!-- This line specifies which issue to close after the pull request is merged. -->
Fixes #{issue number}
| 08-29-2020 22:02:00 | 08-29-2020 22:02:00 | |
transformers | 6,821 | closed | How to generate on multiple GPUs? | Hi!
How would I run generation on multiple GPUs at the same time? Running model.generate on a DataParallel layer isn't possible, and model.module.generate run on a single GPU.
Any advice would be appreciated! | 08-29-2020 21:12:11 | 08-29-2020 21:12:11 | Hey @moinnadeem,
Sorry to answer so late...yeah this will require some work I think! Will put it in the generate() project, but not sure when we manage to take a deeper look into this. Feel free to open a PR and tag me if you have some good ideas :-) <|||||>Just added this to `example/seq2seq/`: https://github.com/huggingface/transformers/blob/master/examples/seq2seq/README.md#multi-gpu-evalulation<|||||>+1 on getting `generate()` to run when using nn.dataparallel for evaluation
Is the current finetune.py script (which I think uses DDP) running generate on all GPUs or just on one device?<|||||>it uses all devices.<|||||>Regarding getting generate() to run using nn.dataparallel, it actually probably isn't worth building the functionality. It seems that dataparallel (e.g. with 2 GPUs) is rarely faster (and actually sometimes quite a bit slower) than running with a single GPU, even when the multi-gpu is on the same node.<|||||>Hello,
I want to use generate function with single GPU. Specifically, I fine tuned a GPT-2 model (on GPU) and subsequently, I want to generate text with it.
When I run this
```
input_ids.to(device)
sample_output = model.generate(
input_ids,
do_sample=True,
max_length=150,
top_k=50,
top_p=0.92
)
```
I get this error
`RuntimeError: Input, output and indices must be on the current device`
When i move the model and the input `.to('cpu')`, it works.<|||||>@contribcode this looks like another issue, do you mind opening a new issue with the issue template filled-in? Thank you.<|||||>You are right @LysandreJik, I will open a new issue.<|||||>This issue has been stale for 1 month.<|||||>Gently pinging @stas00 here - is this possible now thanks to your recent PR in `generate()`? <|||||>1. The work I did in `generate`'s search functions is to make those work under deepspeed zero-3+ regime, where all gpus must work in sync to complete, even if some of them finished their sequence early - it uses all gpus because the params are sharded across all gpus and thus all gpus contribute their part to make it happen.
2. But otherwise, the current design is that we already use all gpus. Is it not the case? We just don't do anything with the results from all but rank 0 process.
3. In current examples we for some reason save the generated tokens only from rank0: e.g. in `translation_run.py`:
https://github.com/huggingface/transformers/blob/50f4539b8201b26b18085260bf801cdeadfa6640/examples/pytorch/translation/run_translation.py#L564
I guess it's easier than to try to somehow write them out in non-interleaved way - probably could to add `flock` and append them all to the same file if needed.
4. metrics we calculate on all gpus, but only save metrics from rank0 process
https://github.com/huggingface/transformers/blob/5c00918681d6b4027701eb46cea8f795da0d4064/src/transformers/trainer_pt_utils.py#L924
If I missed something please clarify what is not working in `generate` under multiple gpus or what is the desired functionality?<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored.<|||||>Any work on this? Is there any way to use DDP with generate?<|||||>Also curious about this<|||||>We should definitely enable `generate()` on multiple GPUs - did anybody give it a try with DDP? <|||||>See: https://github.com/huggingface/transformers/issues/6821#issuecomment-825770020<|||||>> Hi!
>
> How would I run generation on multiple GPUs at the same time? Running model.generate on a DataParallel layer isn't possible, and model.module.generate run on a single GPU.
>
> Any advice would be appreciated!
Hello,
Did you resolve this issue? If you remember can you share me the solution for it?
Thanks in advance<|||||>As I already replied earlier `generate` works on multiple gpus including `deepspeed`<|||||>>
@kkavyashankar0009 @JulesGM
Hi, it seems that model.generatre() does not support DP, but it can be used in DDP. Here is some example code:
```
import torch.multiprocessing as mp
import torch.distributed as dist
import argparse
import torch
from transformers import get_linear_schedule_with_warmup, BartConfig, BartTokenizer, BartForConditionalGeneration
parser = argparse.ArgumentParser()
parser.add_argument('--nodes', type=int, default=1) # how many nodes (machines) you have
parser.add_argument('--gpus', type=int, default=-1, help='num gpus per node')
parser.add_argument('--nr', type=int, default=0, help='ranking within the nodes')
args = parser.parse_args()
tokenizer = BartTokenizer.from_pretrained(args.tokenizer_name)
def test_model_generation(local_gpu_rank, args):
set_seed(args.seed)
args.rank = args.nr * args.gpus + local_gpu_rank # compute the rank of the current GPU
dist.init_process_group(backend="nccl", init_method="env://", world_size=args.world_size, rank=args.rank)
test_data = "../data/" + args.dataset + "/test.txt"
print("Processing data: " + test_data, flush=True)
config = BartConfig.from_pretrained(args.config_name)
bart_ctx = BartForConditionalGeneration.from_pretrained(args.model_name_or_path, config=config)
bart_rep = BartForConditionalGeneration.from_pretrained(args.model_name_or_path, config=config)
bart_ctx.resize_token_embeddings(len(tokenizer))
bart_rep.resize_token_embeddings(len(tokenizer))
model = MyModel(bart, config)
model_state_dict = torch.load("../output/model/gen.ddp.pt")
model.load_state_dict(model_state_dict, strict=True) # load model
torch.cuda.set_device(local_gpu_rank)
args.device = torch.device("cuda", local_gpu_rank)
model.to(args.device) # move the model to GPU
# model = torch.nn.parallel.DistributedDataParallel(model, device_ids=[local_gpu_rank], find_unused_parameters=True)
model.eval()
test_dataset = FileDataset(test_data, tokenizer, max_context_len=args.max_ctx_len, max_response_len=args.max_rep_len, dataset=args.dataset)
test_sampler = torch.utils.data.distributed.DistributedSampler(test_dataset, shuffle=False)
test_dataloader = DataLoader(test_dataset, batch_size=args.test_batch_size, num_workers=2, sampler=test_sampler) # the batch size on each GPU
if args.rank == 0:
count = 0
fw = open("../output/test.responses.txt", "w", encoding="utf-8") # open file for writting result
with torch.no_grad():
test_sampler.set_epoch(0) # keep all data the same on all GPUs, it is usually used in training, I'm not sure if it is necessary in inference
for test_data in test_dataloader:
for key in test_data.keys():
test_data[key] = test_data[key].to(args.device)
outputs = model.bart_model.generate(
input_ids=test_data["input_ids"],
attention_mask=test_data["attention_mask"],
max_length=args.max_rep_len,
no_repeat_ngram_size=3,
num_beams=10,
) # my model contains a BART model in self.bart_model,so I use model.module.bart_model to get it
if outputs.size(1) < args.max_rep_len: # need padding because the lengths from different GPUs may be different
batch_pred_padding = torch.ones((outputs.size(0), args.max_rep_len - outputs.size(1)), dtype=outputs.dtype).cuda() # use the padding token of BART, and its token id is 1. Be careful with the data type.
outputs = torch.cat([outputs, batch_pred_padding], dim=1)
batch_pred = [torch.zeros_like(outputs, dtype=outputs.dtype).cuda() for _ in range(args.world_size)] # initialized a list for collecting tensors from all GPUs. Be careful with the data type.
dist.all_gather(batch_pred, outputs) # collect data
batch_pred = torch.stack(batch_pred, dim=1) # use stack, take care of the dimension
batch_pred = batch_pred.reshape(-1, args.max_rep_len)
if args.rank == 0:
batch_out_sentences = tokenizer.batch_decode(batch_pred, skip_special_tokens=True, clean_up_tokenization_spaces=False) # decode the token id to token
for r in batch_out_sentences:
fw.write(r + "\n")
fw.flush()
count += len(batch_out_sentences)
print(count)
if args.rank == 0:
fw.close()
if __name__ == "__main__":
if args.gpus < 0:
args.gpus = torch.cuda.device_count()
args.world_size = args.nodes * args.gpus
os.environ['MASTER_ADDR']='localhost'
os.environ['MASTER_PORT']='8888'
mp.spawn(test_model_generation, nprocs=args.gpus, args=(args, ))
```
Be careful with the data order. It is better to add IDs and write them to the file, which can be used for checking the order.<|||||>@DaoD
Thanks a lot for this example! If I understand correctly, I think you don't need to use `torch.nn.parallel.DistributedDataParallel` wrapper on the model if only running inference though, since we don't care about gradients. I found that only using the distributed data sampler and leaving the model as is (without the DDP wrapper) led to better memory usage. Haven't looked into why, but let's me use larger batch sizes just FYI.<|||||>@bkleiner2 Thanks for your information! I just want to use multiple GPUs for inference, so I don't how it works if DDP is not used. <|||||>Yeah I'm just saying for inference only I don't think you need this line:
`model = torch.nn.parallel.DistributedDataParallel(model, device_ids=[local_gpu_rank], find_unused_parameters=True)`
because we don't need to aggregate gradients. Once you've done this:
` model.to(args.device) # move the model to GPU` you've already copied the model to each device right? Because there is a separate process running for each device. So you can get the data parallelism across devices by simply using `test_sampler = torch.utils.data.distributed.DistributedSampler(test_dataset, shuffle=False)`, as far as I can tell.<|||||>@bkleiner2 Oh, I got it! Thanks for your explanation!<|||||>I think the real problem we are facing is this error when we try to called `model.generate`
``` bash
AttributeError: 'DistributedDataParallel' object has no attribute 'generate'
```
Indeed, it can be solved by distributed sampler or using the huggingface trainer class.
But it could be quite tricky if we don't use them and write our own trainer. Even we have to be careful in using the distributed sampler.<|||||>remove the outside wrapping to get to the `transformers`'s model and it should work,
`model.module.generate()`
You can use the helper that deals with arbitrary number of wrappers.
https://github.com/huggingface/transformers/blob/d90a36d192e2981a41122c30a765c63158dd0557/src/transformers/modeling_utils.py#L3027-L3038
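For example, a minimal sketch of that unwrapping (assumptions: a DDP-wrapped seq2seq model, an already initialised process group, and an illustrative checkpoint name):
```python
import torch
from transformers import AutoModelForSeq2SeqLM, AutoTokenizer
from transformers.modeling_utils import unwrap_model

tokenizer = AutoTokenizer.from_pretrained("sshleifer/distilbart-cnn-6-6")
model = AutoModelForSeq2SeqLM.from_pretrained("sshleifer/distilbart-cnn-6-6").cuda()
ddp_model = torch.nn.parallel.DistributedDataParallel(model)  # assumes init_process_group was called

batch = tokenizer(["a long article ..."], return_tensors="pt").to("cuda")
# generate() lives on the underlying PreTrainedModel, not on the DDP wrapper
generated = unwrap_model(ddp_model).generate(**batch, max_length=60)
print(tokenizer.batch_decode(generated, skip_special_tokens=True))
```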
<|||||>> As I already replied earlier `generate` works on multiple gpus including `deepspeed`
> remove the outside wrapping to get to the `transformers`'s model and it should work,
>
> `model.module.generate()`
>
> You can use the helper that deals with arbitrary number of wrappers.
>
> https://github.com/huggingface/transformers/blob/d90a36d192e2981a41122c30a765c63158dd0557/src/transformers/modeling_utils.py#L3027-L3038
I tried unwraping the model as "model.module.generate" but I guess there's an issue in the generators it's all special tokens' index as decoding returns empty strings.
could you please clarify what model.module is exactly doing?<|||||>@HebaGamalElDin I think your problem is not related to this issue. Or we might need more context here, in your case.
<|||||>> could you please clarify what model.module is exactly do?
Wrappers like DDP, Deepspeed, and others hide the original model inside their objects - usually under `model.module`, so you can get from the wrapped object to the original model using `unwrap_model` which will handle multiple wrappers. If it's just one you can just access the original PreTrainedModel HF subclass model with `model.module`.
The wrappers have no `generate` method, only the PreTrainedModel subclasses have it. That's why if you want to call `generate` you must call it on the HF model and not its wrappers.
If you have issues I suggest you first remove DDP and debug your issue on a single GPU, once working it'd most likely work under DDP..
As @allanj suggested your issue most likely has nothing to do with unwrapping, so anything is possible if you write your own code. Hence I suggest to sort it out on 1-gpu first, then try 1+.<|||||>@stas00 thank you, what I'm wondering is which model version exactly does model.module access or which model version in which GPU exactly?
**It was working already in 1 GPU, since I switched to ml.p3.16xlarge instance It makes this behavior.**
Here's the full training process:
```
import os
import torch
import pandas as pd
import random
import math
from copy import deepcopy
from tqdm import tqdm
import os
import re
import shutil
import tarfile, zipfile
import pickle,json
import numpy as np
import itertools
from PIL import Image
import PIL.ImageOps
import cv2
from torch.utils.data import DataLoader
from transformers import AdamW, TrOCRProcessor, VisionEncoderDecoderModel, get_scheduler
from Data_pipeline import Context, HCRDataset, OCRDataLoad
from Validation_Metrics import getWordLevelError, getCharacterLevelError
from datasets import load_metric
cer_metric = load_metric("cer")
# SageMaker data parallel: Import the library PyTorch API
#round(random.uniform(),2)
import smdistributed.dataparallel.torch.torch_smddp
# SageMaker data parallel: Import PyTorch's distributed API
import torch.distributed as dist
from torch.nn.parallel import DistributedDataParallel as DDP
# SageMaker data parallel: Initialize the process group
dist.init_process_group(backend='smddp')
# LOAD MODEL
def load_model() -> VisionEncoderDecoderModel:
model: VisionEncoderDecoderModel = VisionEncoderDecoderModel.from_pretrained('gagan3012/ArOCRv4')
return model.cuda()
# SETUP MODEL CONFIGUATIONS
def init_model_for_training(model: VisionEncoderDecoderModel, processor: TrOCRProcessor):
model.config.decoder_start_token_id = processor.tokenizer.cls_token_id
model.config.pad_token_id = processor.tokenizer.pad_token_id
model.config.vocab_size = model.config.decoder.vocab_size
model.config.bos_token_id = processor.tokenizer.bos_token_id
model.config.max_length = 162
model.config.decoder.is_decoder = True
model.config.decoder.add_cross_attention = True
torch.cuda.manual_seed_all(42)
def compute_cer(processor, pred_ids, label_ids):
pred_str = processor.batch_decode(pred_ids, skip_special_tokens=True)
label_ids[label_ids == -100] = processor.tokenizer.pad_token_id
label_str = processor.batch_decode(label_ids, skip_special_tokens=True)
cer = cer_metric.compute(predictions=pred_str, references=label_str)
return cer
# LOAD PRE_PROCESSOR
def load_processor() -> TrOCRProcessor:
return TrOCRProcessor.from_pretrained('gagan3012/ArOCRv4')
def train(context: Context, train_epochs, learning_rate):
model = context.model
optimizer = AdamW(model.parameters(), lr=learning_rate)
num_training_steps = train_epochs * len(context.train_dataloader)
lr_scheduler = get_scheduler("linear", optimizer=optimizer, num_warmup_steps=0, num_training_steps=num_training_steps)
train_loss = 0.0
min_cer = 1.0
min_train_loss = 1.0
for epoch in range(train_epochs):
model.train()
for j, batch in enumerate(context.train_dataloader):
inputs: torch.Tensor = batch["input"].cuda(non_blocking=True)
labels: torch.Tensor = batch["label"].cuda(non_blocking=True)
#print(inputs)
#print(labels)
outputs = model(pixel_values=inputs, labels=labels)
#print(outputs)
loss = outputs.loss
loss.backward()
optimizer.step()
lr_scheduler.step()
optimizer.zero_grad(set_to_none=True)
train_loss+=loss
if (loss < min_train_loss) or (min_train_loss==1.0):
min_train_loss = loss
print(f"Epoch {epoch}-----Loss---{train_loss/len(context.train_dataloader)}--------- min-cer: {min_train_loss}")
# evaluate
#model.eval()
valid_cer = 0.0
with torch.no_grad():
for batch in tqdm(context.val_dataloader):
#print(f"INPUT: {batch['input']} ------ LABEL: {batch['label']}")
outputs = model.module.generate(batch["input"].cuda(non_blocking=True))
#print(f"OUTPUTS: {outputs}")
#print(f"OUTPUTS on CPU: {outputs.cpu().numpy()}")
cer = compute_cer(context.processor, pred_ids=outputs.detach(), label_ids=batch["label"])
valid_cer += cer
print("Validation CER:", valid_cer / len(context.val_dataloader))
def main():
batch_size = 64
train_epochs = 100
learning_rate = 0.0001
checkpoints_path = "checkpoints"
# SageMaker data parallel: Scale batch size by world size
batch_size //= dist.get_world_size()
batch_size = max(batch_size, 1)
# Prepare dataset
#train_dataset = torchvision.datasets.MNIST(...)
processor = load_processor()
(x_train,y_train),(x_valid,y_valid),(x_test,y_test) = OCRDataLoad()
train_dataset = HCRDataset(x_train, y_train, processor)
# SageMaker data parallel: Set num_replicas and rank in DistributedSampler
train_sampler = torch.utils.data.distributed.DistributedSampler(
train_dataset,
num_replicas=dist.get_world_size(),
rank=dist.get_rank())
train_dataloader = DataLoader(train_dataset, batch_size, shuffle=False, sampler=train_sampler)
val_dataset = HCRDataset(x_valid, y_valid, processor)
val_sampler = torch.utils.data.distributed.DistributedSampler(
val_dataset,
num_replicas=dist.get_world_size(),
rank=dist.get_rank())
val_dataloader = DataLoader(val_dataset, batch_size, shuffle=False, sampler=val_sampler)
# SageMaker data parallel: Wrap the PyTorch model with the library's DDP
model = load_model()
init_model_for_training(model, processor)
model = DDP(model, find_unused_parameters=True)
context = Context(model, processor, train_dataset, train_dataloader, val_dataset, val_dataloader)
# SageMaker data parallel: Pin each GPU to a single library process.
local_rank = os.environ["LOCAL_RANK"]
torch.cuda.set_device(int(local_rank))
model.cuda(int(local_rank))
train(context, train_epochs, learning_rate)
# SageMaker data parallel: Save model on master node.
if dist.get_rank() == 0:
model.module.save_pretrained(checkpoints_path)
if __name__ == '__main__':
main()
```
@allanj @stas00 <|||||>In the simple case of DDP you basically have multiple GPUs and each gpu just runs its own `generate` on the unwrapped model.
I see that you use SageMaker here, I don't have experience with this environment, perhaps there is something non-standard about it?
I think you want to start testing with a very simple base case reducing your test to just creating DDP and doing `generate` with some hardcoded string - you can remove all the other code including training and dataloaders, etc.
1. create smddp on 2 gpus
2. run `generate` on that model
nothing else.
perhaps smddp does something different from other wrappers? Once you have a simple repro code then we can tag someone who knows SMDDP better.<|||||>> >
>
> @kkavyashankar0009 @JulesGM
>
> Hi, it seems that model.generatre() does not support DP, but it can be used in DDP. Here is some example code:
>
> ```
> import torch.multiprocessing as mp
> import torch.distributed as dist
> import argparse
> import torch
> from transformers import get_linear_schedule_with_warmup, BartConfig, BartTokenizer, BartForConditionalGeneration
>
> parser = argparse.ArgumentParser()
> parser.add_argument('--nodes', type=int, default=1) # how many nodes (machines) you have
> parser.add_argument('--gpus', type=int, default=-1, help='num gpus per node')
> parser.add_argument('--nr', type=int, default=0, help='ranking within the nodes')
> args = parser.parse_args()
> tokenizer = BartTokenizer.from_pretrained(args.tokenizer_name)
>
> def test_model_generation(local_gpu_rank, args):
> set_seed(args.seed)
> args.rank = args.nr * args.gpus + local_gpu_rank # compute the rank of the current GPU
> dist.init_process_group(backend="nccl", init_method="env://", world_size=args.world_size, rank=args.rank)
>
> test_data = "../data/" + args.dataset + "/test.txt"
> print("Processing data: " + test_data, flush=True)
> config = BartConfig.from_pretrained(args.config_name)
> bart_ctx = BartForConditionalGeneration.from_pretrained(args.model_name_or_path, config=config)
> bart_rep = BartForConditionalGeneration.from_pretrained(args.model_name_or_path, config=config)
> bart_ctx.resize_token_embeddings(len(tokenizer))
> bart_rep.resize_token_embeddings(len(tokenizer))
>
> model = MyModel(bart, config)
> model_state_dict = torch.load("../output/model/gen.ddp.pt")
> model.load_state_dict(model_state_dict, strict=True) # load model
> torch.cuda.set_device(local_gpu_rank)
> args.device = torch.device("cuda", local_gpu_rank)
> model.to(args.device) # move the model to GPU
> # model = torch.nn.parallel.DistributedDataParallel(model, device_ids=[local_gpu_rank], find_unused_parameters=True)
> model.eval()
>
> test_dataset = FileDataset(test_data, tokenizer, max_context_len=args.max_ctx_len, max_response_len=args.max_rep_len, dataset=args.dataset)
> test_sampler = torch.utils.data.distributed.DistributedSampler(test_dataset, shuffle=False)
> test_dataloader = DataLoader(test_dataset, batch_size=args.test_batch_size, num_workers=2, sampler=test_sampler) # the batch size on each GPU
>
> if args.rank == 0:
> count = 0
> fw = open("../output/test.responses.txt", "w", encoding="utf-8") # open file for writting result
> with torch.no_grad():
> test_sampler.set_epoch(0) # keep all data the same on all GPUs, it is usually used in training, I'm not sure if it is necessary in inference
> for test_data in test_dataloader:
> for key in test_data.keys():
> test_data[key] = test_data[key].to(args.device)
> outputs = model.bart_model.generate(
> input_ids=test_data["input_ids"],
> attention_mask=test_data["attention_mask"],
> max_length=args.max_rep_len,
> no_repeat_ngram_size=3,
> num_beams=10,
> ) # my model contains a BART model in self.bart_model,so I use model.module.bart_model to get it
> if outputs.size(1) < args.max_rep_len: # need padding because the lengths from different GPUs may be different
> batch_pred_padding = torch.ones((outputs.size(0), args.max_rep_len - outputs.size(1)), dtype=outputs.dtype).cuda() # use the padding token of BART, and its token id is 1. Be careful with the data type.
> outputs = torch.cat([outputs, batch_pred_padding], dim=1)
> batch_pred = [torch.zeros_like(outputs, dtype=outputs.dtype).cuda() for _ in range(args.world_size)] # initialized a list for collecting tensors from all GPUs. Be careful with the data type.
> dist.all_gather(batch_pred, outputs) # collect data
> batch_pred = torch.stack(batch_pred, dim=1) # use stack, take care of the dimension
> batch_pred = batch_pred.reshape(-1, args.max_rep_len)
> if args.rank == 0:
> batch_out_sentences = tokenizer.batch_decode(batch_pred, skip_special_tokens=True, clean_up_tokenization_spaces=False) # decode the token id to token
> for r in batch_out_sentences:
> fw.write(r + "\n")
> fw.flush()
> count += len(batch_out_sentences)
> print(count)
> if args.rank == 0:
> fw.close()
>
> if __name__ == "__main__":
> if args.gpus < 0:
> args.gpus = torch.cuda.device_count()
> args.world_size = args.nodes * args.gpus
> os.environ['MASTER_ADDR']='localhost'
> os.environ['MASTER_PORT']='8888'
> mp.spawn(test_model_generation, nprocs=args.gpus, args=(args, ))
> ```
>
> Be careful with the data order. It is better to add IDs and write them to the file, which can be used for checking the order.
May have two errors:
1. gather
```python
dist.all_gather(batch_pred, outputs) # collect data
batch_pred = torch.stack(batch_pred, dim=1) # use stack, take care of the dimension
batch_pred = batch_pred.reshape(-1, args.max_rep_len)
```
change to
```python
dist.all_gather(batch_pred, outputs) # collect data
batch_pred = torch.cat(batch_pred, dim=0)
```
2. DistributedSampler causes more samples
`batch_out_sentences` need to slice:`batch_out_sentences[:total_examples]`<|||||>@gongel Thanks!
For the first problem, please see the following discussion.
For the second problem, I use DistributedEvalSampler provided in https://github.com/SeungjunNah/DeepDeblur-PyTorch/blob/master/src/data/sampler.py to solve the unbalanced data problem. I think your solution is simpler. Thanks!<|||||>> @gongel Thanks! For the first problem, I do not see the difference between torch.stack + reshape and torch.cat. I think the results are the same. For the second problem, I use DistributedEvalSampler provided in https://github.com/SeungjunNah/DeepDeblur-PyTorch/blob/master/src/data/sampler.py to solve the unbalanced data problem. I think your solution is simpler. Thanks!
For the first problem, you can test the code
```python
import torch
batch_pred = [torch.ones(2,3), torch.zeros(2,3)]
print(batch_pred)
print(torch.stack(batch_pred, dim=1).reshape(-1, 3))
print(torch.cat(batch_pred, dim=0))
```<|||||>@gongel Thanks. I'm sorry for the mistake.
In this case, I think torch.cat is incorrect.
In PyTorch DDP, if there are four samples [0, 1, 2 ,3] and two GPUs, the first GPU will get [0, 2] and the second will get [1, 3]. So, using torch.stack+reshape, you will get [0, 1, 2, 3]. Using torch.cat, you will get [0, 2, 1, 3], which is incorrect.<|||||>@DaoD thanks, PyTorch DDP is different from PaddlePaddle DDP. |
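A tiny self-contained check of that ordering argument, with two fake shards standing in for the per-rank outputs:
```python
import torch

# with DistributedSampler, rank 0 saw samples [0, 2] and rank 1 saw samples [1, 3]
shard_rank0 = torch.tensor([[0], [2]])
shard_rank1 = torch.tensor([[1], [3]])
gathered = [shard_rank0, shard_rank1]

print(torch.cat(gathered, dim=0).flatten().tolist())                   # [0, 2, 1, 3] -> original order lost
print(torch.stack(gathered, dim=1).reshape(-1, 1).flatten().tolist())  # [0, 1, 2, 3] -> original order restored
```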
transformers | 6,820 | closed | Bert transformer issue | # ❓ Questions & Help
<!-- The GitHub issue tracker is primarly intended for bugs, feature requests,
new models and benchmarks, and migration questions. For all other questions,
we direct you to the Hugging Face forum: https://discuss.huggingface.co/ .
You can also try Stack Overflow (SO) where a whole community of PyTorch and
Tensorflow enthusiast can help you out. In this case, make sure to tag your
question with the right deep learning framework as well as the
huggingface-transformers tag:
https://stackoverflow.com/questions/tagged/huggingface-transformers
-->
## Details
<!-- Description of your issue -->
Token indices sequence length is longer than the specified maximum sequence length for this model (5 > 512). Running this sequence through the model will result in indexing errors
<!-- You should first ask your question on the forum or SO, and only if
you didn't get an answer ask it here on GitHub. -->
**A link to original question on the forum/Stack Overflow**: | 08-29-2020 18:46:16 | 08-29-2020 18:46:16 | This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.
|
transformers | 6,819 | closed | Unable to establish Lock on cached tokenizer output from RobertaTokenizer | ## Environment info
<!-- You can run the command `transformers-cli env` and copy-and-paste its output below.
Don't forget to fill out the missing fields in that output! -->
- `transformers` version: 3.0.2
- Platform: Linux-3.10.0-1127.13.1.el7.x86_64-x86_64-with-redhat-7.8-Maipo
- Python version: 3.7.5
- PyTorch version (GPU?): 1.6.0 (True)
- Tensorflow version (GPU?): 2.3.0 (True)
- Using GPU in script?: yes
- Using distributed or parallel set-up in script?: no
### Who can help
<!-- Your issue will be replied to more quickly if you can figure out the right person to tag with @
If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**.
Please tag fewer than 3 people. -->
tokenizers: @mfuntowicz
## Information
Model I am using (Bert, XLNet ...): Roberta (`roberta-large-mnli`)
The problem arises when using:
* [X] the official example scripts: `run_glue.py`
* [ ] my own modified scripts: (give details below)
The tasks I am working on is:
* [X] an official GLUE/SQUaD task: MNLI
* [ ] my own task or dataset: (give details below)
## To reproduce
Steps to reproduce the behavior:
1. download GLUE data using official download script and place it in the root of `transformers`
2. `python run_glue.py --model_name_or_path roberta-large-mnli --data_dir ../glue_data --output_dir tmp --task_name MNLI --do_eval
<!-- If you have code snippets, error messages, stack traces please provide them here as well.
Important! Use code tags to correctly format your code. See https://help.github.com/en/github/writing-on-github/creating-and-highlighting-code-blocks#syntax-highlighting
Do not use screenshots, as they are hard to read and (more importantly) don't allow others to copy-and-paste your code.-->
```bash
Traceback (most recent call last):
File "./run_glue.py", line 247, in <module>
main()
File "./run_glue.py", line 143, in main
if training_args.do_eval
File "/home/USER/anaconda3/envs/nlu/lib/python3.7/site-packages/transformers/data/datasets/glue.py", line 106, in __init__
with FileLock(lock_path):
File "/home/USER/anaconda3/envs/nlu/lib/python3.7/site-packages/filelock.py", line 323, in __enter__
self.acquire()
File "/home/USER/anaconda3/envs/nlu/lib/python3.7/site-packages/filelock.py", line 271, in acquire
self._acquire()
File "/home/USER/anaconda3/envs/nlu/lib/python3.7/site-packages/filelock.py", line 384, in _acquire
fd = os.open(self._lock_file, open_mode)
FileNotFoundError: [Errno 2] No such file or directory: '../glue_data/cached_dev_RobertaTokenizer_128_mnli.lock'
```
## Expected behavior
<!-- A clear and concise description of what you would expect to happen. -->
The script should be able to create the above-mentioned cached file if one doesn't exist, and acquire lock and load it if it does.
| 08-29-2020 18:44:26 | 08-29-2020 18:44:26 | The issue was with the file structur: using the downloaded glue data in the filesystem.<|||||>@aalok-sathe - Could you please explain how you resolved it ?. I am having the same problem with XLNET for glue(STS-B)<|||||>I had the data placed in the wrong location, and I was giving the incorrect path. |
transformers | 6,818 | closed | [tests] fix typos in inputs | fixes a typo in inputs and the corresponding ids | 08-29-2020 18:10:46 | 08-29-2020 18:10:46 | # [Codecov](https://codecov.io/gh/huggingface/transformers/pull/6818?src=pr&el=h1) Report
> Merging [#6818](https://codecov.io/gh/huggingface/transformers/pull/6818?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/5ab21b072fa2a122da930386381d23f95de06e28?el=desc) will **increase** coverage by `0.51%`.
> The diff coverage is `n/a`.
[](https://codecov.io/gh/huggingface/transformers/pull/6818?src=pr&el=tree)
```diff
@@ Coverage Diff @@
## master #6818 +/- ##
==========================================
+ Coverage 79.58% 80.10% +0.51%
==========================================
Files 157 157
Lines 28588 28588
==========================================
+ Hits 22752 22900 +148
+ Misses 5836 5688 -148
```
| [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/6818?src=pr&el=tree) | Coverage Δ | |
|---|---|---|
| [src/transformers/tokenization\_roberta.py](https://codecov.io/gh/huggingface/transformers/pull/6818/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fcm9iZXJ0YS5weQ==) | `87.67% <0.00%> (-10.96%)` | :arrow_down: |
| [src/transformers/tokenization\_utils\_base.py](https://codecov.io/gh/huggingface/transformers/pull/6818/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fdXRpbHNfYmFzZS5weQ==) | `86.31% <0.00%> (-7.19%)` | :arrow_down: |
| [src/transformers/tokenization\_utils\_fast.py](https://codecov.io/gh/huggingface/transformers/pull/6818/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fdXRpbHNfZmFzdC5weQ==) | `92.85% <0.00%> (-1.43%)` | :arrow_down: |
| [src/transformers/tokenization\_bert.py](https://codecov.io/gh/huggingface/transformers/pull/6818/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fYmVydC5weQ==) | `91.07% <0.00%> (-0.45%)` | :arrow_down: |
| [src/transformers/tokenization\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/6818/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fdXRpbHMucHk=) | `89.45% <0.00%> (-0.40%)` | :arrow_down: |
| [src/transformers/tokenization\_openai.py](https://codecov.io/gh/huggingface/transformers/pull/6818/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fb3BlbmFpLnB5) | `82.57% <0.00%> (+11.36%)` | :arrow_up: |
| [src/transformers/tokenization\_reformer.py](https://codecov.io/gh/huggingface/transformers/pull/6818/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fcmVmb3JtZXIucHk=) | `95.00% <0.00%> (+13.33%)` | :arrow_up: |
| [src/transformers/tokenization\_bart.py](https://codecov.io/gh/huggingface/transformers/pull/6818/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fYmFydC5weQ==) | `100.00% <0.00%> (+57.89%)` | :arrow_up: |
| [src/transformers/tokenization\_xlm.py](https://codecov.io/gh/huggingface/transformers/pull/6818/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25feGxtLnB5) | `82.93% <0.00%> (+66.66%)` | :arrow_up: |
------
[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/6818?src=pr&el=continue).
> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)
> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data`
> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/6818?src=pr&el=footer). Last update [5ab21b0...8b59678](https://codecov.io/gh/huggingface/transformers/pull/6818?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).
|
transformers | 6,817 | closed | Tensorflow 2 Finetuning TF T5 using keras fit | ## The Problem
I have been trying to finetune the T5 model using tensorflow and keras. There is no official or community documentation/**notebook** for finetuning T5 in tensorflow. There are a bunch of lines [here](https://huggingface.co/transformers/model_doc/t5.html#tft5forconditionalgeneration) and some finetuning instructions [here](https://huggingface.co/transformers/model_doc/t5.html#tft5forconditionalgeneration); other than that there is nothing for tensorflow.
## Environment info
<!-- You can run the command `transformers-cli env` and copy-and-paste its output below.
Don't forget to fill out the missing fields in that output! -->
- `transformers` version: `3.0.2`
- Platform: `Linux-4.15.0-112-generic-x86_64-with-debian-buster-sid`
- Python version: `Linux-4.15.0-112-generic-x86_64-with-debian-buster-sid`
- PyTorch version (GPU?): `1.6.0 (True)`
- Tensorflow version (GPU?): `2.2.0 (True)`
- Using GPU in script?: `yes`
- Using distributed or parallel set-up in script?: `no`
### Who can help
@patrickvonplaten @jplu
<!-- Your issue will be replied to more quickly if you can figure out the right person to tag with @
If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**.
Please tag fewer than 3 people.
albert, bert, GPT2, XLM: @LysandreJik
tokenizers: @mfuntowicz
Trainer: @sgugger
Speed and Memory Benchmarks: @patrickvonplaten
Model Cards: @julien-c
Translation: @sshleifer
Summarization: @sshleifer
TextGeneration: @TevenLeScao
examples/distillation: @VictorSanh
nlp datasets: [different repo](https://github.com/huggingface/nlp)
rust tokenizers: [different repo](https://github.com/huggingface/tokenizers)
Text Generation: @TevenLeScao
blenderbot: @mariamabarham
Bart: @sshleifer
Marian: @sshleifer
T5: @patrickvonplaten
Longformer/Reformer: @patrickvonplaten
TransfoXL/XLNet: @TevenLeScao
examples/seq2seq: @sshleifer
examples/bert-loses-patience: @JetRunner
tensorflow: @jplu
examples/token-classification: @stefan-it
documentation: @sgugger
-->
## Information
Model I am using (Bert, XLNet ...): `TFT5ForConditionalGeneration (TFAutoModelWithLMHead) pretrained`
The problem arises when using:
* [ ] the official example scripts: (give details below)
* [x] my own modified scripts: (give details below)
The tasks I am working on is:
* [x] an official GLUE/SQUaD task: (SQuad from tfds)
* [ ] my own task or dataset: (give details below)
## To reproduce
```
import tensorflow_datasets as tfds
from transformers import AutoTokenizer, TFAutoModelWithLMHead

model = TFAutoModelWithLMHead.from_pretrained("t5-base")
tokenizer = AutoTokenizer.from_pretrained("t5-base")
train_dataset, info = tfds.load('squad', split='train', with_info=True)
def encode_tf(inputs):
    """Encodes the squad inputs with the tokenizer and returns the model inputs.

    Returns:
        dict: a dictionary with keys 'input_ids', 'attention_mask',
            'decoder_attention_mask' and 'labels', holding the appropriate tensor values.
    """
    pass  # encoding logic omitted in the original report
dataset = train_dataset.map(encode_tf)
dataset = dataset.shuffle(1000)
dataset = dataset.batch(8)
```
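(The body of `encode_tf` was left out above. Purely as an illustration - not the author's actual code - such an encoding step might look roughly like the sketch below, reusing the `tokenizer` defined earlier; the function name, the `question: ... context: ...` prompt format and the fixed length of 200 are assumptions.)
```
# Hypothetical sketch only. It encodes one (question, context, answer) triple;
# inside `tf.data.Dataset.map` the fields arrive as tensors, so in practice this
# would need to be wrapped in `tf.py_function` or run as a pre-processing step.
def encode_example(question, context, answer, max_len=200):
    source = tokenizer(
        "question: %s  context: %s" % (question, context),
        max_length=max_len, padding="max_length", truncation=True,
    )
    target = tokenizer(answer, max_length=max_len, padding="max_length", truncation=True)
    return {
        "input_ids": source["input_ids"],
        "attention_mask": source["attention_mask"],
        "labels": target["input_ids"],
        "decoder_attention_mask": target["attention_mask"],
    }
```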
### Sample data output:
```
data = next(iter(dataset))
data
```
```
{'input_ids': <tf.Tensor: shape=(8, 200), dtype=int32, numpy=
array([[ 987, 834, 7771, ..., 0, 0, 0],
[ 987, 834, 7771, ..., 2749, 3385, 12187],
[ 987, 834, 7771, ..., 0, 0, 0],
...,
[ 987, 834, 7771, ..., 0, 0, 0],
[ 987, 834, 7771, ..., 0, 0, 0],
[ 987, 834, 7771, ..., 6, 30, 8]], dtype=int32)>,
'labels': <tf.Tensor: shape=(8, 200), dtype=int32, numpy=
array([[ 363, 19, 80, ..., 0, 0, 0],
[4504, 149, 186, ..., 0, 0, 0],
[ 571, 54, 3298, ..., 0, 0, 0],
...,
[2645, 2832, 4599, ..., 0, 0, 0],
[ 571, 103, 7000, ..., 0, 0, 0],
[ 366, 410, 8, ..., 0, 0, 0]], dtype=int32)>,
'attention_mask': <tf.Tensor: shape=(8, 200), dtype=int32, numpy=
array([[1, 1, 1, ..., 0, 0, 0],
[1, 1, 1, ..., 1, 1, 1],
[1, 1, 1, ..., 0, 0, 0],
...,
[1, 1, 1, ..., 0, 0, 0],
[1, 1, 1, ..., 0, 0, 0],
[1, 1, 1, ..., 1, 1, 1]], dtype=int32)>,
'decoder_attention_mask': <tf.Tensor: shape=(8, 200), dtype=int32, numpy=
array([[1, 1, 1, ..., 0, 0, 0],
[1, 1, 1, ..., 0, 0, 0],
[1, 1, 1, ..., 0, 0, 0],
...,
[1, 1, 1, ..., 0, 0, 0],
[1, 1, 1, ..., 0, 0, 0],
[1, 1, 1, ..., 0, 0, 0]], dtype=int32)>}
```
### Training
```
import tensorflow as tf

optimizer = tf.keras.optimizers.Adam(learning_rate=3e-5)
loss = tf.keras.losses.SparseCategoricalCrossentropy(from_logits=True)
model.compile(optimizer=optimizer, loss=loss)
model.fit(dataset, epochs=10)
```
`model.fit` results in the following error about **ValueError: No gradients provided for any variable**:
### The Stacktrace
```
Epoch 1/10
---------------------------------------------------------------------------
ValueError Traceback (most recent call last)
<ipython-input-163-f8c5e0c71664> in <module>
----> 1 model.fit(dataset, epochs=10)
~/anaconda3/envs/hugging/lib/python3.7/site-packages/tensorflow/python/keras/engine/training.py in _method_wrapper(self, *args, **kwargs)
64 def _method_wrapper(self, *args, **kwargs):
65 if not self._in_multi_worker_mode(): # pylint: disable=protected-access
---> 66 return method(self, *args, **kwargs)
67
68 # Running inside `run_distribute_coordinator` already.
~/anaconda3/envs/hugging/lib/python3.7/site-packages/tensorflow/python/keras/engine/training.py in fit(self, x, y, batch_size, epochs, verbose, callbacks, validation_split, validation_data, shuffle, class_weight, sample_weight, initial_epoch, steps_per_epoch, validation_steps, validation_batch_size, validation_freq, max_queue_size, workers, use_multiprocessing)
846 batch_size=batch_size):
847 callbacks.on_train_batch_begin(step)
--> 848 tmp_logs = train_function(iterator)
849 # Catch OutOfRangeError for Datasets of unknown size.
850 # This blocks until the batch has finished executing.
~/anaconda3/envs/hugging/lib/python3.7/site-packages/tensorflow/python/eager/def_function.py in __call__(self, *args, **kwds)
578 xla_context.Exit()
579 else:
--> 580 result = self._call(*args, **kwds)
581
582 if tracing_count == self._get_tracing_count():
~/anaconda3/envs/hugging/lib/python3.7/site-packages/tensorflow/python/eager/def_function.py in _call(self, *args, **kwds)
625 # This is the first call of __call__, so we have to initialize.
626 initializers = []
--> 627 self._initialize(args, kwds, add_initializers_to=initializers)
628 finally:
629 # At this point we know that the initialization is complete (or less
~/anaconda3/envs/hugging/lib/python3.7/site-packages/tensorflow/python/eager/def_function.py in _initialize(self, args, kwds, add_initializers_to)
504 self._concrete_stateful_fn = (
505 self._stateful_fn._get_concrete_function_internal_garbage_collected( # pylint: disable=protected-access
--> 506 *args, **kwds))
507
508 def invalid_creator_scope(*unused_args, **unused_kwds):
~/anaconda3/envs/hugging/lib/python3.7/site-packages/tensorflow/python/eager/function.py in _get_concrete_function_internal_garbage_collected(self, *args, **kwargs)
2444 args, kwargs = None, None
2445 with self._lock:
-> 2446 graph_function, _, _ = self._maybe_define_function(args, kwargs)
2447 return graph_function
2448
~/anaconda3/envs/hugging/lib/python3.7/site-packages/tensorflow/python/eager/function.py in _maybe_define_function(self, args, kwargs)
2775
2776 self._function_cache.missed.add(call_context_key)
-> 2777 graph_function = self._create_graph_function(args, kwargs)
2778 self._function_cache.primary[cache_key] = graph_function
2779 return graph_function, args, kwargs
~/anaconda3/envs/hugging/lib/python3.7/site-packages/tensorflow/python/eager/function.py in _create_graph_function(self, args, kwargs, override_flat_arg_shapes)
2665 arg_names=arg_names,
2666 override_flat_arg_shapes=override_flat_arg_shapes,
-> 2667 capture_by_value=self._capture_by_value),
2668 self._function_attributes,
2669 # Tell the ConcreteFunction to clean up its graph once it goes out of
~/anaconda3/envs/hugging/lib/python3.7/site-packages/tensorflow/python/framework/func_graph.py in func_graph_from_py_func(name, python_func, args, kwargs, signature, func_graph, autograph, autograph_options, add_control_dependencies, arg_names, op_return_value, collections, capture_by_value, override_flat_arg_shapes)
979 _, original_func = tf_decorator.unwrap(python_func)
980
--> 981 func_outputs = python_func(*func_args, **func_kwargs)
982
983 # invariant: `func_outputs` contains only Tensors, CompositeTensors,
~/anaconda3/envs/hugging/lib/python3.7/site-packages/tensorflow/python/eager/def_function.py in wrapped_fn(*args, **kwds)
439 # __wrapped__ allows AutoGraph to swap in a converted function. We give
440 # the function a weak reference to itself to avoid a reference cycle.
--> 441 return weak_wrapped_fn().__wrapped__(*args, **kwds)
442 weak_wrapped_fn = weakref.ref(wrapped_fn)
443
~/anaconda3/envs/hugging/lib/python3.7/site-packages/tensorflow/python/framework/func_graph.py in wrapper(*args, **kwargs)
966 except Exception as e: # pylint:disable=broad-except
967 if hasattr(e, "ag_error_metadata"):
--> 968 raise e.ag_error_metadata.to_exception(e)
969 else:
970 raise
ValueError: in user code:
/home/ml/anaconda3/envs/hugging/lib/python3.7/site-packages/tensorflow/python/keras/engine/training.py:571 train_function *
outputs = self.distribute_strategy.run(
/home/ml/anaconda3/envs/hugging/lib/python3.7/site-packages/tensorflow/python/distribute/distribute_lib.py:951 run **
return self._extended.call_for_each_replica(fn, args=args, kwargs=kwargs)
/home/ml/anaconda3/envs/hugging/lib/python3.7/site-packages/tensorflow/python/distribute/distribute_lib.py:2290 call_for_each_replica
return self._call_for_each_replica(fn, args, kwargs)
/home/ml/anaconda3/envs/hugging/lib/python3.7/site-packages/tensorflow/python/distribute/distribute_lib.py:2649 _call_for_each_replica
return fn(*args, **kwargs)
/home/ml/anaconda3/envs/hugging/lib/python3.7/site-packages/tensorflow/python/keras/engine/training.py:541 train_step **
self.trainable_variables)
/home/ml/anaconda3/envs/hugging/lib/python3.7/site-packages/tensorflow/python/keras/engine/training.py:1804 _minimize
trainable_variables))
/home/ml/anaconda3/envs/hugging/lib/python3.7/site-packages/tensorflow/python/keras/optimizer_v2/optimizer_v2.py:521 _aggregate_gradients
filtered_grads_and_vars = _filter_grads(grads_and_vars)
/home/ml/anaconda3/envs/hugging/lib/python3.7/site-packages/tensorflow/python/keras/optimizer_v2/optimizer_v2.py:1219 _filter_grads
([v.name for _, v in grads_and_vars],))
ValueError: No gradients provided for any variable: ['shared/shared/weight:0', 'tf_t5for_conditional_generation/encoder/block_._0/layer_._0/SelfAttention/q/kernel:0', 'tf_t5for_conditional_generation/encoder/block_._0/layer_._0/SelfAttention/k/kernel:0', 'tf_t5for_conditional_generation/encoder/block_._0/layer_._0/SelfAttention/v/kernel:0', 'tf_t5for_conditional_generation/encoder/block_._0/layer_._0/SelfAttention/o/kernel:0', 'tf_t5for_conditional_generation/encoder/block_._0/layer_._0/SelfAttention/relative_attention_bias/embeddings:0', 'tf_t5for_conditional_generation/encoder/block_._0/layer_._0/layer_norm/weight:0', 'tf_t5for_conditional_generation/encoder/block_._0/layer_._1/DenseReluDense/wi/kernel:0', 'tf_t5for_conditional_generation/encoder/block_._0/layer_._1/DenseReluDense/wo/kernel:0', 'tf_t5for_conditional_generation/encoder/block_._0/layer_._1/layer_norm/weight:0', 'tf_t5for_conditional_generation/encoder/block_._1/layer_._0/SelfAttention/q/kernel:0', 'tf_t5for_conditional_generation/encoder/block_._1/layer_._0/SelfAttention/k/kernel:0', 'tf_t5for_conditional_generation/encoder/block_._1/layer_._0/SelfAttention/v/kernel:0', 'tf_t5for_conditional_generation/encoder/block_._1/layer_._0/SelfAttention/o/kernel:0', 'tf_t5for_conditional_generation/encoder/block_._1/layer_._0/layer_norm/weight:0', 'tf_t5for_conditional_generation/encoder/block_._1/layer_._1/DenseReluDense/wi/kernel:0', 'tf_t5for_conditional_generation/encoder/block_._1/layer_._1/DenseReluDense/wo/kernel:0', 'tf_t5for_conditional_generation/encoder/block_._1/layer_._1/layer_norm/weight:0', 'tf_t5for_conditional_generation/encoder/block_._2/layer_._0/SelfAttention/q/kernel:0', 'tf_t5for_conditional_generation/encoder/block_._2/layer_._0/SelfAttention/k/kernel:0', 'tf_t5for_conditional_generation/encoder/block_._2/layer_._0/SelfAttention/v/kernel:0', 'tf_t5for_conditional_generation/encoder/block_._2/layer_._0/SelfAttention/o/kernel:0', 'tf_t5for_conditional_generation/encoder/block_._2/layer_._0/layer_norm/weight:0', 'tf_t5for_conditional_generation/encoder/block_._2/layer_._1/DenseReluDense/wi/kernel:0', 'tf_t5for_conditional_generation/encoder/block_._2/layer_._1/DenseReluDense/wo/kernel:0', 'tf_t5for_conditional_generation/encoder/block_._2/layer_._1/layer_norm/weight:0', 'tf_t5for_conditional_generation/encoder/block_._3/layer_._0/SelfAttention/q/kernel:0', 'tf_t5for_conditional_generation/encoder/block_._3/layer_._0/SelfAttention/k/kernel:0', 'tf_t5for_conditional_generation/encoder/block_._3/layer_._0/SelfAttention/v/kernel:0', 'tf_t5for_conditional_generation/encoder/block_._3/layer_._0/SelfAttention/o/kernel:0', 'tf_t5for_conditional_generation/encoder/block_._3/layer_._0/layer_norm/weight:0', 'tf_t5for_conditional_generation/encoder/block_._3/layer_._1/DenseReluDense/wi/kernel:0', 'tf_t5for_conditional_generation/encoder/block_._3/layer_._1/DenseReluDense/wo/kernel:0', 'tf_t5for_conditional_generation/encoder/block_._3/layer_._1/layer_norm/weight:0', 'tf_t5for_conditional_generation/encoder/block_._4/layer_._0/SelfAttention/q/kernel:0', 'tf_t5for_conditional_generation/encoder/block_._4/layer_._0/SelfAttention/k/kernel:0', 'tf_t5for_conditional_generation/encoder/block_._4/layer_._0/SelfAttention/v/kernel:0', 'tf_t5for_conditional_generation/encoder/block_._4/layer_._0/SelfAttention/o/kernel:0', 'tf_t5for_conditional_generation/encoder/block_._4/layer_._0/layer_norm/weight:0', 'tf_t5for_conditional_generation/encoder/block_._4/layer_._1/DenseReluDense/wi/kernel:0', 
'tf_t5for_conditional_generation/encoder/block_._4/layer_._1/DenseReluDense/wo/kernel:0', 'tf_t5for_conditional_generation/encoder/block_._4/layer_._1/layer_norm/weight:0', 'tf_t5for_conditional_generation/encoder/block_._5/layer_._0/SelfAttention/q/kernel:0', 'tf_t5for_conditional_generation/encoder/block_._5/layer_._0/SelfAttention/k/kernel:0', 'tf_t5for_conditional_generation/encoder/block_._5/layer_._0/SelfAttention/v/kernel:0', 'tf_t5for_conditional_generation/encoder/block_._5/layer_._0/SelfAttention/o/kernel:0', 'tf_t5for_conditional_generation/encoder/block_._5/layer_._0/layer_norm/weight:0', 'tf_t5for_conditional_generation/encoder/block_._5/layer_._1/DenseReluDense/wi/kernel:0', 'tf_t5for_conditional_generation/encoder/block_._5/layer_._1/DenseReluDense/wo/kernel:0', 'tf_t5for_conditional_generation/encoder/block_._5/layer_._1/layer_norm/weight:0', 'tf_t5for_conditional_generation/encoder/block_._6/layer_._0/SelfAttention/q/kernel:0', 'tf_t5for_conditional_generation/encoder/block_._6/layer_._0/SelfAttention/k/kernel:0', 'tf_t5for_conditional_generation/encoder/block_._6/layer_._0/SelfAttention/v/kernel:0', 'tf_t5for_conditional_generation/encoder/block_._6/layer_._0/SelfAttention/o/kernel:0', 'tf_t5for_conditional_generation/encoder/block_._6/layer_._0/layer_norm/weight:0', 'tf_t5for_conditional_generation/encoder/block_._6/layer_._1/DenseReluDense/wi/kernel:0', 'tf_t5for_conditional_generation/encoder/block_._6/layer_._1/DenseReluDense/wo/kernel:0', 'tf_t5for_conditional_generation/encoder/block_._6/layer_._1/layer_norm/weight:0', 'tf_t5for_conditional_generation/encoder/block_._7/layer_._0/SelfAttention/q/kernel:0', 'tf_t5for_conditional_generation/encoder/block_._7/layer_._0/SelfAttention/k/kernel:0', 'tf_t5for_conditional_generation/encoder/block_._7/layer_._0/SelfAttention/v/kernel:0', 'tf_t5for_conditional_generation/encoder/block_._7/layer_._0/SelfAttention/o/kernel:0', 'tf_t5for_conditional_generation/encoder/block_._7/layer_._0/layer_norm/weight:0', 'tf_t5for_conditional_generation/encoder/block_._7/layer_._1/DenseReluDense/wi/kernel:0', 'tf_t5for_conditional_generation/encoder/block_._7/layer_._1/DenseReluDense/wo/kernel:0', 'tf_t5for_conditional_generation/encoder/block_._7/layer_._1/layer_norm/weight:0', 'tf_t5for_conditional_generation/encoder/block_._8/layer_._0/SelfAttention/q/kernel:0', 'tf_t5for_conditional_generation/encoder/block_._8/layer_._0/SelfAttention/k/kernel:0', 'tf_t5for_conditional_generation/encoder/block_._8/layer_._0/SelfAttention/v/kernel:0', 'tf_t5for_conditional_generation/encoder/block_._8/layer_._0/SelfAttention/o/kernel:0', 'tf_t5for_conditional_generation/encoder/block_._8/layer_._0/layer_norm/weight:0', 'tf_t5for_conditional_generation/encoder/block_._8/layer_._1/DenseReluDense/wi/kernel:0', 'tf_t5for_conditional_generation/encoder/block_._8/layer_._1/DenseReluDense/wo/kernel:0', 'tf_t5for_conditional_generation/encoder/block_._8/layer_._1/layer_norm/weight:0', 'tf_t5for_conditional_generation/encoder/block_._9/layer_._0/SelfAttention/q/kernel:0', 'tf_t5for_conditional_generation/encoder/block_._9/layer_._0/SelfAttention/k/kernel:0', 'tf_t5for_conditional_generation/encoder/block_._9/layer_._0/SelfAttention/v/kernel:0', 'tf_t5for_conditional_generation/encoder/block_._9/layer_._0/SelfAttention/o/kernel:0', 'tf_t5for_conditional_generation/encoder/block_._9/layer_._0/layer_norm/weight:0', 'tf_t5for_conditional_generation/encoder/block_._9/layer_._1/DenseReluDense/wi/kernel:0', 
'tf_t5for_conditional_generation/encoder/block_._9/layer_._1/DenseReluDense/wo/kernel:0', 'tf_t5for_conditional_generation/encoder/block_._9/layer_._1/layer_norm/weight:0', 'tf_t5for_conditional_generation/encoder/block_._10/layer_._0/SelfAttention/q/kernel:0', 'tf_t5for_conditional_generation/encoder/block_._10/layer_._0/SelfAttention/k/kernel:0', 'tf_t5for_conditional_generation/encoder/block_._10/layer_._0/SelfAttention/v/kernel:0', 'tf_t5for_conditional_generation/encoder/block_._10/layer_._0/SelfAttention/o/kernel:0', 'tf_t5for_conditional_generation/encoder/block_._10/layer_._0/layer_norm/weight:0', 'tf_t5for_conditional_generation/encoder/block_._10/layer_._1/DenseReluDense/wi/kernel:0', 'tf_t5for_conditional_generation/encoder/block_._10/layer_._1/DenseReluDense/wo/kernel:0', 'tf_t5for_conditional_generation/encoder/block_._10/layer_._1/layer_norm/weight:0', 'tf_t5for_conditional_generation/encoder/block_._11/layer_._0/SelfAttention/q/kernel:0', 'tf_t5for_conditional_generation/encoder/block_._11/layer_._0/SelfAttention/k/kernel:0', 'tf_t5for_conditional_generation/encoder/block_._11/layer_._0/SelfAttention/v/kernel:0', 'tf_t5for_conditional_generation/encoder/block_._11/layer_._0/SelfAttention/o/kernel:0', 'tf_t5for_conditional_generation/encoder/block_._11/layer_._0/layer_norm/weight:0', 'tf_t5for_conditional_generation/encoder/block_._11/layer_._1/DenseReluDense/wi/kernel:0', 'tf_t5for_conditional_generation/encoder/block_._11/layer_._1/DenseReluDense/wo/kernel:0', 'tf_t5for_conditional_generation/encoder/block_._11/layer_._1/layer_norm/weight:0', 'tf_t5for_conditional_generation/encoder/final_layer_norm/weight:0', 'tf_t5for_conditional_generation/decoder/block_._0/layer_._0/SelfAttention/q/kernel:0', 'tf_t5for_conditional_generation/decoder/block_._0/layer_._0/SelfAttention/k/kernel:0', 'tf_t5for_conditional_generation/decoder/block_._0/layer_._0/SelfAttention/v/kernel:0', 'tf_t5for_conditional_generation/decoder/block_._0/layer_._0/SelfAttention/o/kernel:0', 'tf_t5for_conditional_generation/decoder/block_._0/layer_._0/SelfAttention/relative_attention_bias/embeddings:0', 'tf_t5for_conditional_generation/decoder/block_._0/layer_._0/layer_norm/weight:0', 'tf_t5for_conditional_generation/decoder/block_._0/layer_._1/EncDecAttention/q/kernel:0', 'tf_t5for_conditional_generation/decoder/block_._0/layer_._1/EncDecAttention/k/kernel:0', 'tf_t5for_conditional_generation/decoder/block_._0/layer_._1/EncDecAttention/v/kernel:0', 'tf_t5for_conditional_generation/decoder/block_._0/layer_._1/EncDecAttention/o/kernel:0', 'tf_t5for_conditional_generation/decoder/block_._0/layer_._1/EncDecAttention/relative_attention_bias/embeddings:0', 'tf_t5for_conditional_generation/decoder/block_._0/layer_._1/layer_norm/weight:0', 'tf_t5for_conditional_generation/decoder/block_._0/layer_._2/DenseReluDense/wi/kernel:0', 'tf_t5for_conditional_generation/decoder/block_._0/layer_._2/DenseReluDense/wo/kernel:0', 'tf_t5for_conditional_generation/decoder/block_._0/layer_._2/layer_norm/weight:0', 'tf_t5for_conditional_generation/decoder/block_._1/layer_._0/SelfAttention/q/kernel:0', 'tf_t5for_conditional_generation/decoder/block_._1/layer_._0/SelfAttention/k/kernel:0', 'tf_t5for_conditional_generation/decoder/block_._1/layer_._0/SelfAttention/v/kernel:0', 'tf_t5for_conditional_generation/decoder/block_._1/layer_._0/SelfAttention/o/kernel:0', 'tf_t5for_conditional_generation/decoder/block_._1/layer_._0/layer_norm/weight:0', 'tf_t5for_conditional_generation/decoder/block_._1/layer_._1/EncDecAttention/q/kernel:0', 
'tf_t5for_conditional_generation/decoder/block_._1/layer_._1/EncDecAttention/k/kernel:0', 'tf_t5for_conditional_generation/decoder/block_._1/layer_._1/EncDecAttention/v/kernel:0', 'tf_t5for_conditional_generation/decoder/block_._1/layer_._1/EncDecAttention/o/kernel:0', 'tf_t5for_conditional_generation/decoder/block_._1/layer_._1/layer_norm/weight:0', 'tf_t5for_conditional_generation/decoder/block_._1/layer_._2/DenseReluDense/wi/kernel:0', 'tf_t5for_conditional_generation/decoder/block_._1/layer_._2/DenseReluDense/wo/kernel:0', 'tf_t5for_conditional_generation/decoder/block_._1/layer_._2/layer_norm/weight:0', 'tf_t5for_conditional_generation/decoder/block_._2/layer_._0/SelfAttention/q/kernel:0', 'tf_t5for_conditional_generation/decoder/block_._2/layer_._0/SelfAttention/k/kernel:0', 'tf_t5for_conditional_generation/decoder/block_._2/layer_._0/SelfAttention/v/kernel:0', 'tf_t5for_conditional_generation/decoder/block_._2/layer_._0/SelfAttention/o/kernel:0', 'tf_t5for_conditional_generation/decoder/block_._2/layer_._0/layer_norm/weight:0', 'tf_t5for_conditional_generation/decoder/block_._2/layer_._1/EncDecAttention/q/kernel:0', 'tf_t5for_conditional_generation/decoder/block_._2/layer_._1/EncDecAttention/k/kernel:0', 'tf_t5for_conditional_generation/decoder/block_._2/layer_._1/EncDecAttention/v/kernel:0', 'tf_t5for_conditional_generation/decoder/block_._2/layer_._1/EncDecAttention/o/kernel:0', 'tf_t5for_conditional_generation/decoder/block_._2/layer_._1/layer_norm/weight:0', 'tf_t5for_conditional_generation/decoder/block_._2/layer_._2/DenseReluDense/wi/kernel:0', 'tf_t5for_conditional_generation/decoder/block_._2/layer_._2/DenseReluDense/wo/kernel:0', 'tf_t5for_conditional_generation/decoder/block_._2/layer_._2/layer_norm/weight:0', 'tf_t5for_conditional_generation/decoder/block_._3/layer_._0/SelfAttention/q/kernel:0', 'tf_t5for_conditional_generation/decoder/block_._3/layer_._0/SelfAttention/k/kernel:0', 'tf_t5for_conditional_generation/decoder/block_._3/layer_._0/SelfAttention/v/kernel:0', 'tf_t5for_conditional_generation/decoder/block_._3/layer_._0/SelfAttention/o/kernel:0', 'tf_t5for_conditional_generation/decoder/block_._3/layer_._0/layer_norm/weight:0', 'tf_t5for_conditional_generation/decoder/block_._3/layer_._1/EncDecAttention/q/kernel:0', 'tf_t5for_conditional_generation/decoder/block_._3/layer_._1/EncDecAttention/k/kernel:0', 'tf_t5for_conditional_generation/decoder/block_._3/layer_._1/EncDecAttention/v/kernel:0', 'tf_t5for_conditional_generation/decoder/block_._3/layer_._1/EncDecAttention/o/kernel:0', 'tf_t5for_conditional_generation/decoder/block_._3/layer_._1/layer_norm/weight:0', 'tf_t5for_conditional_generation/decoder/block_._3/layer_._2/DenseReluDense/wi/kernel:0', 'tf_t5for_conditional_generation/decoder/block_._3/layer_._2/DenseReluDense/wo/kernel:0', 'tf_t5for_conditional_generation/decoder/block_._3/layer_._2/layer_norm/weight:0', 'tf_t5for_conditional_generation/decoder/block_._4/layer_._0/SelfAttention/q/kernel:0', 'tf_t5for_conditional_generation/decoder/block_._4/layer_._0/SelfAttention/k/kernel:0', 'tf_t5for_conditional_generation/decoder/block_._4/layer_._0/SelfAttention/v/kernel:0', 'tf_t5for_conditional_generation/decoder/block_._4/layer_._0/SelfAttention/o/kernel:0', 'tf_t5for_conditional_generation/decoder/block_._4/layer_._0/layer_norm/weight:0', 'tf_t5for_conditional_generation/decoder/block_._4/layer_._1/EncDecAttention/q/kernel:0', 'tf_t5for_conditional_generation/decoder/block_._4/layer_._1/EncDecAttention/k/kernel:0', 
'tf_t5for_conditional_generation/decoder/block_._4/layer_._1/EncDecAttention/v/kernel:0', 'tf_t5for_conditional_generation/decoder/block_._4/layer_._1/EncDecAttention/o/kernel:0', 'tf_t5for_conditional_generation/decoder/block_._4/layer_._1/layer_norm/weight:0', 'tf_t5for_conditional_generation/decoder/block_._4/layer_._2/DenseReluDense/wi/kernel:0', 'tf_t5for_conditional_generation/decoder/block_._4/layer_._2/DenseReluDense/wo/kernel:0', 'tf_t5for_conditional_generation/decoder/block_._4/layer_._2/layer_norm/weight:0', 'tf_t5for_conditional_generation/decoder/block_._5/layer_._0/SelfAttention/q/kernel:0', 'tf_t5for_conditional_generation/decoder/block_._5/layer_._0/SelfAttention/k/kernel:0', 'tf_t5for_conditional_generation/decoder/block_._5/layer_._0/SelfAttention/v/kernel:0', 'tf_t5for_conditional_generation/decoder/block_._5/layer_._0/SelfAttention/o/kernel:0', 'tf_t5for_conditional_generation/decoder/block_._5/layer_._0/layer_norm/weight:0', 'tf_t5for_conditional_generation/decoder/block_._5/layer_._1/EncDecAttention/q/kernel:0', 'tf_t5for_conditional_generation/decoder/block_._5/layer_._1/EncDecAttention/k/kernel:0', 'tf_t5for_conditional_generation/decoder/block_._5/layer_._1/EncDecAttention/v/kernel:0', 'tf_t5for_conditional_generation/decoder/block_._5/layer_._1/EncDecAttention/o/kernel:0', 'tf_t5for_conditional_generation/decoder/block_._5/layer_._1/layer_norm/weight:0', 'tf_t5for_conditional_generation/decoder/block_._5/layer_._2/DenseReluDense/wi/kernel:0', 'tf_t5for_conditional_generation/decoder/block_._5/layer_._2/DenseReluDense/wo/kernel:0', 'tf_t5for_conditional_generation/decoder/block_._5/layer_._2/layer_norm/weight:0', 'tf_t5for_conditional_generation/decoder/block_._6/layer_._0/SelfAttention/q/kernel:0', 'tf_t5for_conditional_generation/decoder/block_._6/layer_._0/SelfAttention/k/kernel:0', 'tf_t5for_conditional_generation/decoder/block_._6/layer_._0/SelfAttention/v/kernel:0', 'tf_t5for_conditional_generation/decoder/block_._6/layer_._0/SelfAttention/o/kernel:0', 'tf_t5for_conditional_generation/decoder/block_._6/layer_._0/layer_norm/weight:0', 'tf_t5for_conditional_generation/decoder/block_._6/layer_._1/EncDecAttention/q/kernel:0', 'tf_t5for_conditional_generation/decoder/block_._6/layer_._1/EncDecAttention/k/kernel:0', 'tf_t5for_conditional_generation/decoder/block_._6/layer_._1/EncDecAttention/v/kernel:0', 'tf_t5for_conditional_generation/decoder/block_._6/layer_._1/EncDecAttention/o/kernel:0', 'tf_t5for_conditional_generation/decoder/block_._6/layer_._1/layer_norm/weight:0', 'tf_t5for_conditional_generation/decoder/block_._6/layer_._2/DenseReluDense/wi/kernel:0', 'tf_t5for_conditional_generation/decoder/block_._6/layer_._2/DenseReluDense/wo/kernel:0', 'tf_t5for_conditional_generation/decoder/block_._6/layer_._2/layer_norm/weight:0', 'tf_t5for_conditional_generation/decoder/block_._7/layer_._0/SelfAttention/q/kernel:0', 'tf_t5for_conditional_generation/decoder/block_._7/layer_._0/SelfAttention/k/kernel:0', 'tf_t5for_conditional_generation/decoder/block_._7/layer_._0/SelfAttention/v/kernel:0', 'tf_t5for_conditional_generation/decoder/block_._7/layer_._0/SelfAttention/o/kernel:0', 'tf_t5for_conditional_generation/decoder/block_._7/layer_._0/layer_norm/weight:0', 'tf_t5for_conditional_generation/decoder/block_._7/layer_._1/EncDecAttention/q/kernel:0', 'tf_t5for_conditional_generation/decoder/block_._7/layer_._1/EncDecAttention/k/kernel:0', 'tf_t5for_conditional_generation/decoder/block_._7/layer_._1/EncDecAttention/v/kernel:0', 
'tf_t5for_conditional_generation/decoder/block_._7/layer_._1/EncDecAttention/o/kernel:0', 'tf_t5for_conditional_generation/decoder/block_._7/layer_._1/layer_norm/weight:0', 'tf_t5for_conditional_generation/decoder/block_._7/layer_._2/DenseReluDense/wi/kernel:0', 'tf_t5for_conditional_generation/decoder/block_._7/layer_._2/DenseReluDense/wo/kernel:0', 'tf_t5for_conditional_generation/decoder/block_._7/layer_._2/layer_norm/weight:0', 'tf_t5for_conditional_generation/decoder/block_._8/layer_._0/SelfAttention/q/kernel:0', 'tf_t5for_conditional_generation/decoder/block_._8/layer_._0/SelfAttention/k/kernel:0', 'tf_t5for_conditional_generation/decoder/block_._8/layer_._0/SelfAttention/v/kernel:0', 'tf_t5for_conditional_generation/decoder/block_._8/layer_._0/SelfAttention/o/kernel:0', 'tf_t5for_conditional_generation/decoder/block_._8/layer_._0/layer_norm/weight:0', 'tf_t5for_conditional_generation/decoder/block_._8/layer_._1/EncDecAttention/q/kernel:0', 'tf_t5for_conditional_generation/decoder/block_._8/layer_._1/EncDecAttention/k/kernel:0', 'tf_t5for_conditional_generation/decoder/block_._8/layer_._1/EncDecAttention/v/kernel:0', 'tf_t5for_conditional_generation/decoder/block_._8/layer_._1/EncDecAttention/o/kernel:0', 'tf_t5for_conditional_generation/decoder/block_._8/layer_._1/layer_norm/weight:0', 'tf_t5for_conditional_generation/decoder/block_._8/layer_._2/DenseReluDense/wi/kernel:0', 'tf_t5for_conditional_generation/decoder/block_._8/layer_._2/DenseReluDense/wo/kernel:0', 'tf_t5for_conditional_generation/decoder/block_._8/layer_._2/layer_norm/weight:0', 'tf_t5for_conditional_generation/decoder/block_._9/layer_._0/SelfAttention/q/kernel:0', 'tf_t5for_conditional_generation/decoder/block_._9/layer_._0/SelfAttention/k/kernel:0', 'tf_t5for_conditional_generation/decoder/block_._9/layer_._0/SelfAttention/v/kernel:0', 'tf_t5for_conditional_generation/decoder/block_._9/layer_._0/SelfAttention/o/kernel:0', 'tf_t5for_conditional_generation/decoder/block_._9/layer_._0/layer_norm/weight:0', 'tf_t5for_conditional_generation/decoder/block_._9/layer_._1/EncDecAttention/q/kernel:0', 'tf_t5for_conditional_generation/decoder/block_._9/layer_._1/EncDecAttention/k/kernel:0', 'tf_t5for_conditional_generation/decoder/block_._9/layer_._1/EncDecAttention/v/kernel:0', 'tf_t5for_conditional_generation/decoder/block_._9/layer_._1/EncDecAttention/o/kernel:0', 'tf_t5for_conditional_generation/decoder/block_._9/layer_._1/layer_norm/weight:0', 'tf_t5for_conditional_generation/decoder/block_._9/layer_._2/DenseReluDense/wi/kernel:0', 'tf_t5for_conditional_generation/decoder/block_._9/layer_._2/DenseReluDense/wo/kernel:0', 'tf_t5for_conditional_generation/decoder/block_._9/layer_._2/layer_norm/weight:0', 'tf_t5for_conditional_generation/decoder/block_._10/layer_._0/SelfAttention/q/kernel:0', 'tf_t5for_conditional_generation/decoder/block_._10/layer_._0/SelfAttention/k/kernel:0', 'tf_t5for_conditional_generation/decoder/block_._10/layer_._0/SelfAttention/v/kernel:0', 'tf_t5for_conditional_generation/decoder/block_._10/layer_._0/SelfAttention/o/kernel:0', 'tf_t5for_conditional_generation/decoder/block_._10/layer_._0/layer_norm/weight:0', 'tf_t5for_conditional_generation/decoder/block_._10/layer_._1/EncDecAttention/q/kernel:0', 'tf_t5for_conditional_generation/decoder/block_._10/layer_._1/EncDecAttention/k/kernel:0', 'tf_t5for_conditional_generation/decoder/block_._10/layer_._1/EncDecAttention/v/kernel:0', 'tf_t5for_conditional_generation/decoder/block_._10/layer_._1/EncDecAttention/o/kernel:0', 
'tf_t5for_conditional_generation/decoder/block_._10/layer_._1/layer_norm/weight:0', 'tf_t5for_conditional_generation/decoder/block_._10/layer_._2/DenseReluDense/wi/kernel:0', 'tf_t5for_conditional_generation/decoder/block_._10/layer_._2/DenseReluDense/wo/kernel:0', 'tf_t5for_conditional_generation/decoder/block_._10/layer_._2/layer_norm/weight:0', 'tf_t5for_conditional_generation/decoder/block_._11/layer_._0/SelfAttention/q/kernel:0', 'tf_t5for_conditional_generation/decoder/block_._11/layer_._0/SelfAttention/k/kernel:0', 'tf_t5for_conditional_generation/decoder/block_._11/layer_._0/SelfAttention/v/kernel:0', 'tf_t5for_conditional_generation/decoder/block_._11/layer_._0/SelfAttention/o/kernel:0', 'tf_t5for_conditional_generation/decoder/block_._11/layer_._0/layer_norm/weight:0', 'tf_t5for_conditional_generation/decoder/block_._11/layer_._1/EncDecAttention/q/kernel:0', 'tf_t5for_conditional_generation/decoder/block_._11/layer_._1/EncDecAttention/k/kernel:0', 'tf_t5for_conditional_generation/decoder/block_._11/layer_._1/EncDecAttention/v/kernel:0', 'tf_t5for_conditional_generation/decoder/block_._11/layer_._1/EncDecAttention/o/kernel:0', 'tf_t5for_conditional_generation/decoder/block_._11/layer_._1/layer_norm/weight:0', 'tf_t5for_conditional_generation/decoder/block_._11/layer_._2/DenseReluDense/wi/kernel:0', 'tf_t5for_conditional_generation/decoder/block_._11/layer_._2/DenseReluDense/wo/kernel:0', 'tf_t5for_conditional_generation/decoder/block_._11/layer_._2/layer_norm/weight:0', 'tf_t5for_conditional_generation/decoder/final_layer_norm/weight:0'].
```
<!-- If you have code snippets, error messages, stack traces please provide them here as well.
Important! Use code tags to correctly format your code. See https://help.github.com/en/github/writing-on-github/creating-and-highlighting-code-blocks#syntax-highlighting
Do not use screenshots, as they are hard to read and (more importantly) don't allow others to copy-and-paste your code.-->
## Expected behavior
Should be able to run the training loop for the specified epochs. | 08-29-2020 10:34:12 | 08-29-2020 10:34:12 | @patrickvonplaten , @jplu any insights into what could be the problem?<|||||>You should create your model into a strategy.<|||||>> You should create your model into a strategy.
As in tf distributed strategies? but i am using a single gpu at the moment. <|||||>This one https://www.tensorflow.org/api_docs/python/tf/distribute/OneDeviceStrategy<|||||>Device placement strategy works and the error is no longer there. i should point out this is not the usual way to train a model in TF. We normally do not need to place the model explicitly on a device while creating a model. <|||||>Is this method correct?
```
with mirrored_strategy.scope():
...
model.compile(...)
model.fit(...)
```
This still gives me the same error on GPT2LMHeadModel.<|||||>@ksjae Please open a new issue with more detail of your issue. |
transformers | 6,816 | closed | control framework loglevel in scripts and tests | There is too much logging going on at times under `transformers` and friends. One needs to be able to turn the noise off easily. This is a follow up to https://github.com/huggingface/transformers/issues/3050#issuecomment-682167272
**edit**: The default was changed yesterday to `logging.WARNING` (https://github.com/huggingface/transformers/commit/4561f05c5fafc2b636a2fc1d0dded9057d439745), so there is much less noise now.
**edit**: this PR has evolved since it was initially submitted, so this OP has been updated to reflect the current state of things.
This PR introduces the following:
# 1. new function: `set_verbosity_all`
Usage:
a. Override all module-specific loggers to a desired level (except whatever already got logged while the modules were being imported):
```
import everything, you, need
import transformers
transformers.testing_utils.set_verbosity_all(transformers.logging.ERROR)
```
b. If you want to disable specific loggers you can call it with specific top level names:
```
import transformers, torch, ...
transformers.testing_utils.set_verbosity_all(transformers.logging.ERROR, ["transformers", "nlp", "torch", "tensorflow", "tensorboard", "wandb"])
```
Add or remove module name prefixes as needed.
I initially placed it under `transformers.utils.logging`, but since it's beyond the core functionality I moved it to `testing_utils`, which is where we really want it. Please correct me if it would be better placed elsewhere.
# 2. new pytest option: `--log-level-all=error`
When debugging tests, framework-wide logging sometimes gets seriously in the way, e.g. try:
```
RUN_SLOW=1 pytest -sv --disable-warnings tests/test_modeling_bart.py::BartModelIntegrationTests::test_inference_no_head
```
gives a lot of noise. (It was much worse until the recent `s/info/warn` change mentioned above, but there is still noise.)
Now you will be able to turn it off, focusing only on the debug output you want, by adding `--log-level-all=error` to the `pytest` options (or another level of your choice):
```
RUN_SLOW=1 pytest -sv --log-level-all=error --disable-warnings tests/test_modeling_bart.py::BartModelIntegrationTests::test_inference_no_head
```
voila - the noise is gone, while you can still do debug printing, etc.
```
pytest -h
[...]
--log-level-all={debug,info,warning,error,critical}
set global logger level before each test
```
# 3. new test + `CaptureLogger` context manager
While working on this, a few integration tests were added, along with a helper `CaptureLogger` context manager to easily test the logger outputs.
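Usage is roughly along these lines (a minimal sketch, assuming the context manager exposes the captured text on an `.out` attribute):
```
import transformers
from transformers.testing_utils import CaptureLogger

logger = transformers.logging.get_logger("transformers.tokenization_bart")

# run the code that is expected to log, while capturing the logger's output
with CaptureLogger(logger) as cl:
    logger.warning("some message")

assert "some message" in cl.out
```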
# 4. cleaned up one test
Removed a verbosity setting in one test, which impacted other tests because it wasn't resetting the level back to the original.
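In other words, a test that changes the verbosity should restore it, roughly like this (a sketch using the `transformers.logging` helpers):
```
import transformers

level_origin = transformers.logging.get_verbosity()
try:
    transformers.logging.set_verbosity(transformers.logging.DEBUG)
    ...  # whatever needed the changed level
finally:
    # restore the original level so later tests are not affected
    transformers.logging.set_verbosity(level_origin)
```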
-----
Quite a few testing features were added recently; I guess it's time to start a `testing.md` or something.
----
Fixes: https://github.com/huggingface/transformers/issues/3050 | 08-29-2020 04:51:38 | 08-29-2020 04:51:38 | # [Codecov](https://codecov.io/gh/huggingface/transformers/pull/6816?src=pr&el=h1) Report
> Merging [#6816](https://codecov.io/gh/huggingface/transformers/pull/6816?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/367235ee52537ff7cada5e1c5c41cdd78731f092?el=desc) will **increase** coverage by `2.48%`.
> The diff coverage is `67.85%`.
[](https://codecov.io/gh/huggingface/transformers/pull/6816?src=pr&el=tree)
```diff
@@ Coverage Diff @@
## master #6816 +/- ##
==========================================
+ Coverage 76.27% 78.76% +2.48%
==========================================
Files 157 157
Lines 28795 28823 +28
==========================================
+ Hits 21963 22701 +738
+ Misses 6832 6122 -710
```
| [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/6816?src=pr&el=tree) | Coverage Δ | |
|---|---|---|
| [src/transformers/testing\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/6816/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90ZXN0aW5nX3V0aWxzLnB5) | `66.24% <67.85%> (+0.35%)` | :arrow_up: |
| [src/transformers/configuration\_openai.py](https://codecov.io/gh/huggingface/transformers/pull/6816/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9jb25maWd1cmF0aW9uX29wZW5haS5weQ==) | `34.28% <0.00%> (-62.86%)` | :arrow_down: |
| [src/transformers/tokenization\_albert.py](https://codecov.io/gh/huggingface/transformers/pull/6816/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fYWxiZXJ0LnB5) | `28.84% <0.00%> (-58.66%)` | :arrow_down: |
| [src/transformers/modeling\_openai.py](https://codecov.io/gh/huggingface/transformers/pull/6816/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19vcGVuYWkucHk=) | `23.87% <0.00%> (-57.10%)` | :arrow_down: |
| [src/transformers/modeling\_tf\_distilbert.py](https://codecov.io/gh/huggingface/transformers/pull/6816/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9kaXN0aWxiZXJ0LnB5) | `64.47% <0.00%> (-34.36%)` | :arrow_down: |
| [src/transformers/modeling\_t5.py](https://codecov.io/gh/huggingface/transformers/pull/6816/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190NS5weQ==) | `77.63% <0.00%> (-6.21%)` | :arrow_down: |
| [src/transformers/tokenization\_dpr.py](https://codecov.io/gh/huggingface/transformers/pull/6816/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fZHByLnB5) | `53.15% <0.00%> (-4.51%)` | :arrow_down: |
| [src/transformers/configuration\_bart.py](https://codecov.io/gh/huggingface/transformers/pull/6816/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9jb25maWd1cmF0aW9uX2JhcnQucHk=) | `90.00% <0.00%> (-4.00%)` | :arrow_down: |
| [src/transformers/generation\_tf\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/6816/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9nZW5lcmF0aW9uX3RmX3V0aWxzLnB5) | `83.95% <0.00%> (-1.51%)` | :arrow_down: |
| [src/transformers/modeling\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/6816/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ191dGlscy5weQ==) | `87.50% <0.00%> (-0.56%)` | :arrow_down: |
| ... and [13 more](https://codecov.io/gh/huggingface/transformers/pull/6816/diff?src=pr&el=tree-more) | |
------
[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/6816?src=pr&el=continue).
> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)
> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data`
> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/6816?src=pr&el=footer). Last update [367235e...05ec0d0](https://codecov.io/gh/huggingface/transformers/pull/6816?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).
<|||||>[moved to the normal comment from the code review comment, as it might be removed]
> What is the difference between this method and the previous set_verbosity?
> I think we should select just one way to set the verbosity of the library.
@thomwolf, I agree. Now that I added a test I can see that `set_verbosity` is an equivalent of `set_global_logging_level(prefices=["transformers"])` (the proposed function).
So the main question then is this: do we want to provide a util that allows to do the setting not just for `transformers.`? or leave that to the user - sort of contrib library somewhere?
The main reason for setting a global log level not just for `transformers`, but also for `torch`, `wandb`, etc. is to be able to quickly turn off the noise when it's interfering. Currently each of these external libraries that `transformers` uses adds its own noise to the output. When debugging tests it's very helpful to control the noise levels, so having a quick `--logger-be-quiet` switch saves a lot of time.
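For reference, the core of such a helper is just a loop over the already-registered loggers, along these lines (a rough sketch of the idea, not necessarily the exact code in this PR):
```
import logging

def set_global_logging_level(level=logging.ERROR, prefices=("transformers",)):
    # walk all loggers created so far and silence the ones matching a prefix
    for name in logging.root.manager.loggerDict:
        if name.startswith(tuple(prefices)):
            logging.getLogger(name).setLevel(level)
```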
<|||||>I also added:
- a logger setting integration test
- a helper `CaptureLogger` ctx manager<|||||>Could someone please explain why CI gets `logging.ERROR` as the default logging level, when it should be `logging.WARNING` https://github.com/stas00/transformers/blob/loglevels/src/transformers/utils/logging.py#L58 (I rebased this branch to catch that very recent change)
When I run it on my machine, I get `logging.WARNING`.
On CI the failure is:
```
[gw4] linux -- Python 3.7.9 /usr/local/bin/python
self = <tests.test_logging.HfArgumentParserTest testMethod=test_set_level>
def test_set_level(self):
logger = logging.get_logger()
level_origin = logging.get_verbosity()
> self.assertEqual(level_origin, logging.WARNING)
E AssertionError: 40 != 30
```
(`logging.ERROR == 40`, `logging.WARNING == 30`)
**edit**: found the culprit - it was another test not cleaning up after itself. fixed in this PR.<|||||>Thank you all for your excellent feedback. I made changes and updated the first post to reflect the PR's current state of things.<|||||>I'm not sure we really need to control the logging level of all libraries. Since the logging level was changed back to its initial level `WARNING`, do you feel like there are too much logs during tests?<|||||>For Bart tests there is a repetitive warning, which I raised here: https://github.com/huggingface/transformers/issues/6652
If you run others, you will see a bunch still, e.g.:
```RUN_SLOW=1 pytest -sv --disable-warnings tests/test_modeling_t5.py ```
```
tests/test_modeling_t5.py::T5ModelTest::test_generate_with_past_key_value_states You might want to consider setting `use_cache=True` to speed up decoding
You might want to consider setting `use_cache=True` to speed up decoding
You might want to consider setting `use_cache=True` to speed up decoding
You might want to consider setting `use_cache=True` to speed up decoding
[...]
Downloading: 100%|█████████████████████████████████████████████████████████████████████████████████████████████████| 242M/242M [00:06<00:00, 40.1MB/s]
Some weights of T5Model were not initialized from the model checkpoint at t5-small and are newly initialized: ['encoder.embed_tokens.weight', 'decoder.embed_tokens.weight']
You should probably TRAIN this model on a down-stream task to be able to use it for predictions and inference.
PASSED
```
And this is just one test.
Of course, the other approach is to go and fix all those warnings, so that the tests fully under our control are written according to the requirements the library sets and the warnings won't be there :) But see the next comment for a large dump of loggers that aren't `transformers`.
----
Yet another alternative solution: instead of a flag, we could add an env var, `LOG_LEVEL_GLOBAL`.
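Hypothetically, the wiring could be as simple as this (the exact lookup and mapping are just an illustration, using the `set_verbosity_all` helper this PR adds):
```
import os
import transformers
from transformers.testing_utils import set_verbosity_all

level_name = os.getenv("LOG_LEVEL_GLOBAL")
if level_name:
    # e.g. LOG_LEVEL_GLOBAL=error -> transformers.logging.ERROR
    set_verbosity_all(getattr(transformers.logging, level_name.upper()))
```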
<|||||>Here are some more samples of noise coming from outside `transformers` - a lot of it:
```
tests/test_modeling_tf_albert.py::TFAlbertModelTest::test_albert_model 2020-09-02 10:32:56.462871: I tensorflow/stream_executor/platform/default/dso_loader.cc:48] Successfully opened dynamic library libcuda.so.1
2020-09-02 10:32:56.467570: I tensorflow/stream_executor/cuda/cuda_gpu_executor.cc:982] successful NUMA node read from SysFS had negative value (-1), but there must be at least one NUMA node, so returning NUMA node zero
2020-09-02 10:32:56.469326: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1716] Found device 0 with properties:
pciBusID: 0000:01:00.0 name: GeForce GTX TITAN X computeCapability: 5.2
coreClock: 1.2155GHz coreCount: 24 deviceMemorySize: 11.93GiB deviceMemoryBandwidth: 313.37GiB/s
2020-09-02 10:32:56.469400: I tensorflow/stream_executor/cuda/cuda_gpu_executor.cc:982] successful NUMA node read from SysFS had negative value (-1), but there must be at least one NUMA node, so returning NUMA node zero
2020-09-02 10:32:56.470032: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1716] Found device 1 with properties:
pciBusID: 0000:02:00.0 name: GeForce GTX TITAN X computeCapability: 5.2
coreClock: 1.2155GHz coreCount: 24 deviceMemorySize: 11.93GiB deviceMemoryBandwidth: 313.37GiB/s
2020-09-02 10:32:56.470303: I tensorflow/stream_executor/platform/default/dso_loader.cc:48] Successfully opened dynamic library libcudart.so.10.1
2020-09-02 10:32:56.470670: I tensorflow/stream_executor/platform/default/dso_loader.cc:48] Successfully opened dynamic library libcublas.so.10
2020-09-02 10:32:56.470719: I tensorflow/stream_executor/platform/default/dso_loader.cc:48] Successfully opened dynamic library libcufft.so.10
2020-09-02 10:32:56.470752: I tensorflow/stream_executor/platform/default/dso_loader.cc:48] Successfully opened dynamic library libcurand.so.10
2020-09-02 10:32:56.495979: I tensorflow/stream_executor/platform/default/dso_loader.cc:48] Successfully opened dynamic library libcusolver.so.10
2020-09-02 10:32:56.496076: I tensorflow/stream_executor/platform/default/dso_loader.cc:48] Successfully opened dynamic library libcusparse.so.10
2020-09-02 10:32:56.594292: I tensorflow/stream_executor/platform/default/dso_loader.cc:48] Successfully opened dynamic library libcudnn.so.7
2020-09-02 10:32:56.594768: I tensorflow/stream_executor/cuda/cuda_gpu_executor.cc:982] successful NUMA node read from SysFS had negative value (-1), but there must be at least one NUMA node, so returning NUMA node zero
2020-09-02 10:32:56.597007: I tensorflow/stream_executor/cuda/cuda_gpu_executor.cc:982] successful NUMA node read from SysFS had negative value (-1), but there must be at least one NUMA node, so returning NUMA node zero
2020-09-02 10:32:56.599207: I tensorflow/stream_executor/cuda/cuda_gpu_executor.cc:982] successful NUMA node read from SysFS had negative value (-1), but there must be at least one NUMA node, so returning NUMA node zero
2020-09-02 10:32:56.601306: I tensorflow/stream_executor/cuda/cuda_gpu_executor.cc:982] successful NUMA node read from SysFS had negative value (-1), but there must be at least one NUMA node, so returning NUMA node zero
2020-09-02 10:32:56.603994: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1858] Adding visible gpu devices: 0, 1
2020-09-02 10:32:56.612943: I tensorflow/core/platform/cpu_feature_guard.cc:142] This TensorFlow binary is optimized with oneAPI Deep Neural Network Library (oneDNN)to use the following CPU instructions in performance-critical operations: AVX2 FMA
To enable them in other operations, rebuild TensorFlow with the appropriate compiler flags.
2020-09-02 10:32:56.672605: I tensorflow/core/platform/profile_utils/cpu_utils.cc:104] CPU Frequency: 3199980000 Hz
2020-09-02 10:32:56.675701: I tensorflow/compiler/xla/service/service.cc:168] XLA service 0x556ed093a910 initialized for platform Host (this does not guarantee that XLA will be used). Devices:
2020-09-02 10:32:56.675767: I tensorflow/compiler/xla/service/service.cc:176] StreamExecutor device (0): Host, Default Version
2020-09-02 10:32:56.678402: I tensorflow/stream_executor/cuda/cuda_gpu_executor.cc:982] successful NUMA node read from SysFS had negative value (-1), but there must be at least one NUMA node, so returning NUMA node zero
2020-09-02 10:32:56.680525: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1716] Found device 0 with properties:
pciBusID: 0000:01:00.0 name: GeForce GTX TITAN X computeCapability: 5.2
coreClock: 1.2155GHz coreCount: 24 deviceMemorySize: 11.93GiB deviceMemoryBandwidth: 313.37GiB/s
2020-09-02 10:32:56.680889: I tensorflow/stream_executor/cuda/cuda_gpu_executor.cc:982] successful NUMA node read from SysFS had negative value (-1), but there must be at least one NUMA node, so returning NUMA node zero
2020-09-02 10:32:56.683022: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1716] Found device 1 with properties:
pciBusID: 0000:02:00.0 name: GeForce GTX TITAN X computeCapability: 5.2
coreClock: 1.2155GHz coreCount: 24 deviceMemorySize: 11.93GiB deviceMemoryBandwidth: 313.37GiB/s
2020-09-02 10:32:56.683197: I tensorflow/stream_executor/platform/default/dso_loader.cc:48] Successfully opened dynamic library libcudart.so.10.1
2020-09-02 10:32:56.683257: I tensorflow/stream_executor/platform/default/dso_loader.cc:48] Successfully opened dynamic library libcublas.so.10
2020-09-02 10:32:56.683304: I tensorflow/stream_executor/platform/default/dso_loader.cc:48] Successfully opened dynamic library libcufft.so.10
2020-09-02 10:32:56.683346: I tensorflow/stream_executor/platform/default/dso_loader.cc:48] Successfully opened dynamic library libcurand.so.10
2020-09-02 10:32:56.683504: I tensorflow/stream_executor/platform/default/dso_loader.cc:48] Successfully opened dynamic library libcusolver.so.10
2020-09-02 10:32:56.683566: I tensorflow/stream_executor/platform/default/dso_loader.cc:48] Successfully opened dynamic library libcusparse.so.10
2020-09-02 10:32:56.683693: I tensorflow/stream_executor/platform/default/dso_loader.cc:48] Successfully opened dynamic library libcudnn.so.7
2020-09-02 10:32:56.684014: I tensorflow/stream_executor/cuda/cuda_gpu_executor.cc:982] successful NUMA node read from SysFS had negative value (-1), but there must be at least one NUMA node, so returning NUMA node zero
2020-09-02 10:32:56.686245: I tensorflow/stream_executor/cuda/cuda_gpu_executor.cc:982] successful NUMA node read from SysFS had negative value (-1), but there must be at least one NUMA node, so returning NUMA node zero
2020-09-02 10:32:56.688465: I tensorflow/stream_executor/cuda/cuda_gpu_executor.cc:982] successful NUMA node read from SysFS had negative value (-1), but there must be at least one NUMA node, so returning NUMA node zero
2020-09-02 10:32:56.690589: I tensorflow/stream_executor/cuda/cuda_gpu_executor.cc:982] successful NUMA node read from SysFS had negative value (-1), but there must be at least one NUMA node, so returning NUMA node zero
2020-09-02 10:32:56.692497: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1858] Adding visible gpu devices: 0, 1
2020-09-02 10:32:56.692670: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1257] Device interconnect StreamExecutor with strength 1 edge matrix:
2020-09-02 10:32:56.692706: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1263] 0 1
2020-09-02 10:32:56.693071: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1276] 0: N Y
2020-09-02 10:32:56.693135: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1276] 1: Y N
2020-09-02 10:32:56.694784: I tensorflow/stream_executor/cuda/cuda_gpu_executor.cc:982] successful NUMA node read from SysFS had negative value (-1), but there must be at least one NUMA node, so returning NUMA node zero
2020-09-02 10:32:56.696986: I tensorflow/stream_executor/cuda/cuda_gpu_executor.cc:982] successful NUMA node read from SysFS had negative value (-1), but there must be at least one NUMA node, so returning NUMA node zero
2020-09-02 10:32:56.699094: I tensorflow/stream_executor/cuda/cuda_gpu_executor.cc:982] successful NUMA node read from SysFS had negative value (-1), but there must be at least one NUMA node, so returning NUMA node zero
2020-09-02 10:32:56.701214: I tensorflow/stream_executor/cuda/cuda_gpu_executor.cc:982] successful NUMA node read from SysFS had negative value (-1), but there must be at least one NUMA node, so returning NUMA node zero
2020-09-02 10:32:56.703406: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1402] Created TensorFlow device (/job:localhost/replica:0/task:0/device:GPU:0 with 10865 MB memory) -> physical GPU (device: 0, name: GeForce GTX TITAN X, pci bus id: 0000:01:00.0, compute capability: 5.2)
2020-09-02 10:32:56.706854: I tensorflow/stream_executor/cuda/cuda_gpu_executor.cc:982] successful NUMA node read from SysFS had negative value (-1), but there must be at least one NUMA node, so returning NUMA node zero
2020-09-02 10:32:56.709029: I tensorflow/stream_executor/cuda/cuda_gpu_executor.cc:982] successful NUMA node read from SysFS had negative value (-1), but there must be at least one NUMA node, so returning NUMA node zero
2020-09-02 10:32:56.710969: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1402] Created TensorFlow device (/job:localhost/replica:0/task:0/device:GPU:1 with 10856 MB memory) -> physical GPU (device: 1, name: GeForce GTX TITAN X, pci bus id: 0000:02:00.0, compute capability: 5.2)
2020-09-02 10:32:56.718970: I tensorflow/compiler/xla/service/service.cc:168] XLA service 0x556e0ea3c200 initialized for platform CUDA (this does not guarantee that XLA will be used). Devices:
2020-09-02 10:32:56.719031: I tensorflow/compiler/xla/service/service.cc:176] StreamExecutor device (0): GeForce GTX TITAN X, Compute Capability 5.2
2020-09-02 10:32:56.719056: I tensorflow/compiler/xla/service/service.cc:176] StreamExecutor device (1): GeForce GTX TITAN X, Compute Capability 5.2
2020-09-02 10:32:57.410269: W tensorflow/python/util/util.cc:348] Sets are not currently considered sequences, but this may change in the future, so consider avoiding using them.
PASSED
[...]
tests/test_modeling_tf_albert.py::TFAlbertModelTest::test_for_sequence_classification PASSED
tests/test_modeling_tf_albert.py::TFAlbertModelTest::test_graph_mode WARNING:tensorflow:5 out of the last 5 calls to <function TFModelTesterMixin.test_graph_mode.<locals>.run_in_graph_mode at 0x7f273870ba70> triggered tf.function retracing. Tracing is expensive and the excessive number of tracings could be due to (1) creating @tf.function repeatedly in a loop, (2) passing tensors with different shapes, (3) passing Python objects instead of tensors. For (1), please define your @tf.function outside of the loop. For (2), @tf.function has experimental_relax_shapes=True option that relaxes argument shapes that can avoid unnecessary retracing. For (3), please refer to https://www.tensorflow.org/tutorials/customization/performance#python_or_tensor_args and https://www.tensorflow.org/api_docs/python/tf/function for more details.
WARNING:tensorflow:6 out of the last 6 calls to <function TFModelTesterMixin.test_graph_mode.<locals>.run_in_graph_mode at 0x7f2738779ef0> triggered tf.function retracing. Tracing is expensive and the excessive number of tracings could be due to (1) creating @tf.function repeatedly in a loop, (2) passing tensors with different shapes, (3) passing Python objects instead of tensors. For (1), please define your @tf.function outside of the loop. For (2), @tf.function has experimental_relax_shapes=True option that relaxes argument shapes that can avoid unnecessary retracing. For (3), please refer to https://www.tensorflow.org/tutorials/customization/performance#python_or_tensor_args and https://www.tensorflow.org/api_docs/python/tf/function for more details.
WARNING:tensorflow:7 out of the last 7 calls to <function TFModelTesterMixin.test_graph_mode.<locals>.run_in_graph_mode at 0x7f2742cb89e0> triggered tf.function retracing. Tracing is expensive and the excessive number of tracings could be due to (1) creating @tf.function repeatedly in a loop, (2) passing tensors with different shapes, (3) passing Python objects instead of tensors. For (1), please define your @tf.function outside of the loop. For (2), @tf.function has experimental_relax_shapes=True option that relaxes argument shapes that can avoid unnecessary retracing. For (3), please refer to https://www.tensorflow.org/tutorials/customization/performance#python_or_tensor_args and https://www.tensorflow.org/api_docs/python/tf/function for more details.
PASSED
[...]
tests/test_modeling_tf_albert.py::TFAlbertModelTest::test_model_common_attributes PASSED
Downloading: 100%|███████████████████████████████████████████████████████████████████████████████████████████████| 63.0M/63.0M [00:02<00:00, 28.8MB/s]
PASSED
tests/test_modeling_tf_albert.py::TFAlbertModelTest::test_model_outputs_equivalence PASSED
tests/test_modeling_tf_albert.py::TFAlbertModelTest::test_pt_tf_model_equivalence PASSED
tests/test_modeling_tf_albert.py::TFAlbertModelTest::test_resize_token_embeddings PASSED
tests/test_modeling_tf_albert.py::TFAlbertModelTest::test_save_load PASSED
tests/test_modeling_tf_albert.py::TFAlbertModelTest::test_saved_model_with_attentions_output WARNING:tensorflow:Skipping full serialization of Keras layer <tensorflow.python.keras.layers.core.Dropout object at 0x7f273831d450>, because it is not built.
WARNING:tensorflow:From /home/stas/anaconda3/envs/main/lib/python3.7/site-packages/tensorflow/python/training/tracking/tracking.py:111: Model.state_updates (from tensorflow.python.keras.engine.training) is deprecated and will be removed in a future version.
Instructions for updating:
This property should not be used in TensorFlow 2.0, as updates are applied automatically.
WARNING:tensorflow:From /home/stas/anaconda3/envs/main/lib/python3.7/site-packages/tensorflow/python/training/tracking/tracking.py:111: Layer.updates (from tensorflow.python.keras.engine.base_layer) is deprecated and will be removed in a future version.
Instructions for updating:
This property should not be used in TensorFlow 2.0, as updates are applied automatically.
WARNING:tensorflow:No training configuration found in save file, so the model was *not* compiled. Compile it manually.
FAILED
tests/test_modeling_tf_albert.py::TFAlbertModelTest::test_saved_model_with_hidden_states_output WARNING:tensorflow:Skipping full serialization of Keras layer <tensorflow.python.keras.layers.core.Dropout object at 0x7f26cb69b7d0>, because it is not built.
WARNING:tensorflow:No training configuration found in save file, so the model was *not* compiled. Compile it manually.
WARNING:tensorflow:Skipping full serialization of Keras layer <tensorflow.python.keras.layers.core.Dropout object at 0x7f26c625ccd0>, because it is not built.
WARNING:tensorflow:No training configuration found in save file, so the model was *not* compiled. Compile it manually.
FAILED
tests/test_modeling_tf_auto.py::TFAutoModelTest::test_from_identifier_from_model_type PASSED
tests/test_modeling_tf_auto.py::TFAutoModelTest::test_from_pretrained_identifier PASSED
Downloading: 100%|█████████████████████████████████████████████████████████████████████████████████████████████████| 536M/536M [00:18<00:00, 28.9MB/s]
2020-09-02 10:34:15.259729: W tensorflow/core/framework/cpu_allocator_impl.cc:81] Allocation of 93763584 exceeds 10% of free system memory.
2020-09-02 10:34:15.394172: W tensorflow/core/framework/cpu_allocator_impl.cc:81] Allocation of 93763584 exceeds 10% of free system memory.
PASSED
Downloading: 100%|█████████████████████████████████████████████████████████████████████████████████████████████████| 498M/498M [00:12<00:00, 40.0MB/s]
2020-09-02 10:34:29.779196: W tensorflow/core/framework/cpu_allocator_impl.cc:81] Allocation of 154389504 exceeds 10% of free system memory.
2020-09-02 10:34:30.859094: W tensorflow/core/framework/cpu_allocator_impl.cc:81] Allocation of 154389504 exceeds 10% of free system memory.
PASSED
tests/test_modeling_tf_auto.py::TFAutoModelTest::test_model_for_encoder_decoder_lm 2020-09-02 10:34:32.437951: W tensorflow/core/framework/cpu_allocator_impl.cc:81] Allocation of 65798144 exceeds 10% of free system memory.
PASSED
[...]
tests/test_modeling_tf_bert.py::TFBertModelTest::test_for_token_classification PASSED
tests/test_modeling_tf_bert.py::TFBertModelTest::test_graph_mode WARNING:tensorflow:8 out of the last 8 calls to <function TFModelTesterMixin.test_graph_mode.<locals>.run_in_graph_mode at 0x7f26c5ccbef0> triggered tf.function retracing. Tracing is expensive and the excessive number of tracings could be due to (1) creating @tf.function repeatedly in a loop, (2) passing tensors with different shapes, (3) passing Python objects instead of tensors. For (1), please define your @tf.function outside of the loop. For (2), @tf.function has experimental_relax_shapes=True option that relaxes argument shapes that can avoid unnecessary retracing. For (3), please refer to https://www.tensorflow.org/tutorials/customization/performance#python_or_tensor_args and https://www.tensorflow.org/api_docs/python/tf/function for more details.
WARNING:tensorflow:9 out of the last 9 calls to <function TFModelTesterMixin.test_graph_mode.<locals>.run_in_graph_mode at 0x7f26c5db8950> triggered tf.function retracing. Tracing is expensive and the excessive number of tracings could be due to (1) creating @tf.function repeatedly in a loop, (2) passing tensors with different shapes, (3) passing Python objects instead of tensors. For (1), please define your @tf.function outside of the loop. For (2), @tf.function has experimental_relax_shapes=True option that relaxes argument shapes that can avoid unnecessary retracing. For (3), please refer to https://www.tensorflow.org/tutorials/customization/performance#python_or_tensor_args and https://www.tensorflow.org/api_docs/python/tf/function for more details.
WARNING:tensorflow:10 out of the last 10 calls to <function TFModelTesterMixin.test_graph_mode.<locals>.run_in_graph_mode at 0x7f26c5ccbcb0> triggered tf.function retracing. Tracing is expensive and the excessive number of tracings could be due to (1) creating @tf.function repeatedly in a loop, (2) passing tensors with different shapes, (3) passing Python objects instead of tensors. For (1), please define your @tf.function outside of the loop. For (2), @tf.function has experimental_relax_shapes=True option that relaxes argument shapes that can avoid unnecessary retracing. For (3), please refer to https://www.tensorflow.org/tutorials/customization/performance#python_or_tensor_args and https://www.tensorflow.org/api_docs/python/tf/function for more details.
WARNING:tensorflow:11 out of the last 11 calls to <function TFModelTesterMixin.test_graph_mode.<locals>.run_in_graph_mode at 0x7f277c0eb5f0> triggered tf.function retracing. Tracing is expensive and the excessive number of tracings could be due to (1) creating @tf.function repeatedly in a loop, (2) passing tensors with different shapes, (3) passing Python objects instead of tensors. For (1), please define your @tf.function outside of the loop. For (2), @tf.function has experimental_relax_shapes=True option that relaxes argument shapes that can avoid unnecessary retracing. For (3), please refer to https://www.tensorflow.org/tutorials/customization/performance#python_or_tensor_args and https://www.tensorflow.org/api_docs/python/tf/function for more details.
WARNING:tensorflow:11 out of the last 11 calls to <function TFModelTesterMixin.test_graph_mode.<locals>.run_in_graph_mode at 0x7f26c8298dd0> triggered tf.function retracing. Tracing is expensive and the excessive number of tracings could be due to (1) creating @tf.function repeatedly in a loop, (2) passing tensors with different shapes, (3) passing Python objects instead of tensors. For (1), please define your @tf.function outside of the loop. For (2), @tf.function has experimental_relax_shapes=True option that relaxes argument shapes that can avoid unnecessary retracing. For (3), please refer to https://www.tensorflow.org/tutorials/customization/performance#python_or_tensor_args and https://www.tensorflow.org/api_docs/python/tf/function for more details.
WARNING:tensorflow:11 out of the last 11 calls to <function TFModelTesterMixin.test_graph_mode.<locals>.run_in_graph_mode at 0x7f273816d5f0> triggered tf.function retracing. Tracing is expensive and the excessive number of tracings could be due to (1) creating @tf.function repeatedly in a loop, (2) passing tensors with different shapes, (3) passing Python objects instead of tensors. For (1), please define your @tf.function outside of the loop. For (2), @tf.function has experimental_relax_shapes=True option that relaxes argument shapes that can avoid unnecessary retracing. For (3), please refer to https://www.tensorflow.org/tutorials/customization/performance#python_or_tensor_args and https://www.tensorflow.org/api_docs/python/tf/function for more details.
WARNING:tensorflow:11 out of the last 11 calls to <function TFModelTesterMixin.test_graph_mode.<locals>.run_in_graph_mode at 0x7f26c8235560> triggered tf.function retracing. Tracing is expensive and the excessive number of tracings could be due to (1) creating @tf.function repeatedly in a loop, (2) passing tensors with different shapes, (3) passing Python objects instead of tensors. For (1), please define your @tf.function outside of the loop. For (2), @tf.function has experimental_relax_shapes=True option that relaxes argument shapes that can avoid unnecessary retracing. For (3), please refer to https://www.tensorflow.org/tutorials/customization/performance#python_or_tensor_args and https://www.tensorflow.org/api_docs/python/tf/function for more details.
WARNING:tensorflow:11 out of the last 11 calls to <function TFModelTesterMixin.test_graph_mode.<locals>.run_in_graph_mode at 0x7f26caab45f0> triggered tf.function retracing. Tracing is expensive and the excessive number of tracings could be due to (1) creating @tf.function repeatedly in a loop, (2) passing tensors with different shapes, (3) passing Python objects instead of tensors. For (1), please define your @tf.function outside of the loop. For (2), @tf.function has experimental_relax_shapes=True option that relaxes argument shapes that can avoid unnecessary retracing. For (3), please refer to https://www.tensorflow.org/tutorials/customization/performance#python_or_tensor_args and https://www.tensorflow.org/api_docs/python/tf/function for more details.
WARNING:tensorflow:11 out of the last 11 calls to <function TFModelTesterMixin.test_graph_mode.<locals>.run_in_graph_mode at 0x7f26c9da34d0> triggered tf.function retracing. Tracing is expensive and the excessive number of tracings could be due to (1) creating @tf.function repeatedly in a loop, (2) passing tensors with different shapes, (3) passing Python objects instead of tensors. For (1), please define your @tf.function outside of the loop. For (2), @tf.function has experimental_relax_shapes=True option that relaxes argument shapes that can avoid unnecessary retracing. For (3), please refer to https://www.tensorflow.org/tutorials/customization/performance#python_or_tensor_args and https://www.tensorflow.org/api_docs/python/tf/function for more details.
PASSED
```
Thinking about it more while working with other tools, it'd be of great help to have an env var that can set the default logging level for `transformers`. For example, I wanted to change the logging level for `run_eval.py` and I couldn't do that w/o modifying it. If we had an env var, that would have been trivial and much faster to use.
This is regardless of the outcome of this discussion of whether we should have a way to turn non-transformers-related loggers off.<|||||>I understand the issue, and while I agree that some frameworks are extremely log intensive (TensorFlow ...), I wonder if it's such a bad thing to have too many logs during testing. If a test fails, the logs may help to understand the issue quicker when the stack trace isn't helping much. Removing these logs would mean needing to restart the CI with a different logging level to see what's happening in the logs around this error.
Regarding your second point, yes, I think it would be nice to control the default logging level with an environment variable! Would welcome such a PR.<|||||>I would find some more control over logging very useful! A lot of our users are on colab, and warnings waste a ton of screen space there. Same with my debugging workflow -- there are so many logger statements that can't see my full traceback on the screen.<|||||>> I would find some more control over logging very useful! A lot of our users are on colab, and warnings waste a ton of screen space there. Same with my debugging workflow -- there are so many logger statements that can't see my full traceback on the screen.
I wonder whether we should just have an env var `DISABLE_LOGGING=info` that will just do:
```
import logging
logging.disable(logging.INFO) # disable INFO and DEBUG logger everywhere
```
`DISABLE_LOGGING=warning` for WARNING, INFO and DEBUG...
In addition to the transformers-specific one `TRANSFORMERS_VERBOSITY=info...` which I will add.
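To be concrete, the generic hook could be as small as this (just a sketch of the idea - `DISABLE_LOGGING` is only the name proposed above, nothing that exists yet):
```python
import logging
import os

# read the proposed env var and silence everything at or below that severity
level_name = os.environ.get("DISABLE_LOGGING", "").upper()
if level_name in ("DEBUG", "INFO", "WARNING", "ERROR"):
    logging.disable(getattr(logging, level_name))
```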
<|||||>> I understand the issue, and while I agree that some frameworks are extremely log intensive (TensorFlow ...), I wonder if it's such a bad thing to have too many logs during testing. If a test fails, the logs may help to understand the issue quicker when the stack trace isn't helping much. Removing these logs would mean needing to restart the CI with a different logging level to see what's happening in the logs around this error.
In no way am I proposing to impact CI - on the contrary, on CI the more debug info the merrier. I'm only proposing a way for a developer to turn the logging off on their own setup, i.e. we won't be enabling any such features on CI.
Different developers have different needs and for me, for example, noise is very counterproductive for development. When debugging something I only want to see outputs that are relevant to what I'm debugging and nothing else - and seconding @sshleifer's comment - I too want them to fit into the current screen so I don't need to scroll. Especially in complicated situations when I need to look at output numbers. I understand how this can be a total non-issue for others.
> Regarding your second point, yes, I think it would be nice to control the default logging level with an environment variable! Would welcome such a PR.
I will do so. Thank you!<|||||>> I would find some more control over logging very useful! A lot of our users are on colab, and warnings waste a ton of screen space there. Same with my debugging workflow -- there are so many logger statements that can't see my full traceback on the screen.
@sshleifer have you tried the new library-wide control for logging that Lysandre added in #6434?
The doc is here: https://huggingface.co/transformers/master/main_classes/logging.html<|||||>Added the env var to control the transformers verbosity level: https://github.com/huggingface/transformers/pull/6961<|||||>It feels that this proposal is a no go at the moment, so I'm closing it down.
The extended tests and added testing utils which were part of this PR have been merged in https://github.com/huggingface/transformers/pull/6961
Thank you all who contributed to this discussion. |
transformers | 6,815 | closed | make the tmp dir configurable/persistent in tokenizer tests | Currently, debugging tokenizers is difficult since the temp dir is random and it gets wiped out at the end of the test run. It can be done, but it takes so much repetitive work.
This PR uses the recently added [`TestCasePlus`](https://github.com/huggingface/transformers/pull/6494) which automatically sets up temp dirs and optionally doesn't remove them at the end of the test. It makes it very easy to configure the temp dir to be fixed rather than random, and also not delete itself.
As a side-effect of inheriting from `TestCasePlus`, the mixin approach of `FooTest(TokenizerTesterMixin, unittest.TestCase)` doesn't work - as it now tries to run the super-class tests directly and not from within the subclass. Therefore, the code switches to normal sub-classing and instructs `unittest` not to run super-class' tests on its own, using the following machinery as explained [here](https://stackoverflow.com/a/50922971/9201239). Specifically here:
```
from transformers.testing_utils import TestCasePlus
class TokenizerCommonTester(TestCasePlus):
__test__ = False
# and then the sub-class:
from .test_tokenization_common import TokenizerCommonTester
class BartTokenizationTest(TokenizerCommonTester):
__test__ = True
```
The PR makes the code ready to debug by changing just one flag:
```
DEBUG = False
# if you need to debug the contents of the tmpdirname, set DEBUG to True, which will then use
# a hardcoded path and won't delete it at the end of the test
if not DEBUG:
self.tmpdirname = self.get_auto_remove_tmp_dir()
else:
self.tmpdirname = self.get_auto_remove_tmp_dir(tmp_dir="./tmp/token-test", after=False)
```
So just make `DEBUG` `True` and nothing else needs to be tweaked.
I hope this is useful for developers.
There are a few other test mixins that could be improved the same way, but let's see if this approach is welcomed first.
----
## An alternative solution
If mixin is preferable, then let's leave everything as is and do this instead:
```
# transformers/testing_utils.py
from pathlib import Path

def make_dir(path):
    Path(path).resolve().mkdir(parents=True, exist_ok=True)
    return path

# tests/test_tokenization_common.py
import shutil
import tempfile

from transformers.testing_utils import make_dir

DEBUG = True

class TokenizerTesterMixin:

    tokenizer_class = None
    test_rust_tokenizer = False

    def setUp(self):
        if not DEBUG:
            self.tmpdirname = tempfile.mkdtemp()
        else:
            self.tmpdirname = make_dir("./tmp/test-tok")

    def tearDown(self):
        if not DEBUG:
            shutil.rmtree(self.tmpdirname)
```
| 08-29-2020 01:33:58 | 08-29-2020 01:33:58 | # [Codecov](https://codecov.io/gh/huggingface/transformers/pull/6815?src=pr&el=h1) Report
> Merging [#6815](https://codecov.io/gh/huggingface/transformers/pull/6815?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/5ab21b072fa2a122da930386381d23f95de06e28?el=desc) will **decrease** coverage by `1.46%`.
> The diff coverage is `n/a`.
[](https://codecov.io/gh/huggingface/transformers/pull/6815?src=pr&el=tree)
```diff
@@ Coverage Diff @@
## master #6815 +/- ##
==========================================
- Coverage 79.58% 78.11% -1.47%
==========================================
Files 157 157
Lines 28588 28588
==========================================
- Hits 22752 22332 -420
- Misses 5836 6256 +420
```
| [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/6815?src=pr&el=tree) | Coverage Δ | |
|---|---|---|
| [src/transformers/modeling\_tf\_albert.py](https://codecov.io/gh/huggingface/transformers/pull/6815/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9hbGJlcnQucHk=) | `21.47% <0.00%> (-69.44%)` | :arrow_down: |
| [src/transformers/pipelines.py](https://codecov.io/gh/huggingface/transformers/pull/6815/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9waXBlbGluZXMucHk=) | `26.26% <0.00%> (-53.69%)` | :arrow_down: |
| [src/transformers/optimization.py](https://codecov.io/gh/huggingface/transformers/pull/6815/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9vcHRpbWl6YXRpb24ucHk=) | `34.28% <0.00%> (-48.00%)` | :arrow_down: |
| [src/transformers/optimization\_tf.py](https://codecov.io/gh/huggingface/transformers/pull/6815/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9vcHRpbWl6YXRpb25fdGYucHk=) | `33.33% <0.00%> (-24.33%)` | :arrow_down: |
| [src/transformers/modeling\_roberta.py](https://codecov.io/gh/huggingface/transformers/pull/6815/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19yb2JlcnRhLnB5) | `74.81% <0.00%> (-22.27%)` | :arrow_down: |
| [src/transformers/modeling\_longformer.py](https://codecov.io/gh/huggingface/transformers/pull/6815/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19sb25nZm9ybWVyLnB5) | `71.55% <0.00%> (-20.48%)` | :arrow_down: |
| [src/transformers/modeling\_tf\_auto.py](https://codecov.io/gh/huggingface/transformers/pull/6815/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9hdXRvLnB5) | `48.79% <0.00%> (-18.08%)` | :arrow_down: |
| [src/transformers/data/processors/squad.py](https://codecov.io/gh/huggingface/transformers/pull/6815/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9kYXRhL3Byb2Nlc3NvcnMvc3F1YWQucHk=) | `13.76% <0.00%> (-14.38%)` | :arrow_down: |
| [src/transformers/modeling\_auto.py](https://codecov.io/gh/huggingface/transformers/pull/6815/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19hdXRvLnB5) | `64.36% <0.00%> (-14.37%)` | :arrow_down: |
| [src/transformers/modeling\_tf\_xlm.py](https://codecov.io/gh/huggingface/transformers/pull/6815/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl94bG0ucHk=) | `88.42% <0.00%> (-4.85%)` | :arrow_down: |
| ... and [24 more](https://codecov.io/gh/huggingface/transformers/pull/6815/diff?src=pr&el=tree-more) | |
------
[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/6815?src=pr&el=continue).
> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)
> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data`
> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/6815?src=pr&el=footer). Last update [5ab21b0...d984fd8](https://codecov.io/gh/huggingface/transformers/pull/6815?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).
<|||||>After sleeping on this, I'm not quite sure of 2 things.
1. the main switch from mixin to normal subclassing - if it's done it should be done for all other common testing mixins - the benefit would be - having simpler access to `unittest.TestCase` and the extended `unittest.TestCasePlus` features. As I proposed in the alternative solution, it's not at all required, as a different solution can be used for temp dirs during debug.
2. a totally unrelated issue of having debugging code in place. Do we want to gradually make the test suite easier to debug, by leaving `if DEBUG: ...` in strategic places (currently, consisting of just one thing - having a fixed tmp dir and not having it removed, but there are probably others).
For example, I find myself adding a debug message for various asserts, so it's easier to see what's not matching, but those are usually a 2nd/3rd argument to the assert function (or `msg=`), so it's a smooth feature requiring no `if DEBUG`.
i.e. I'd love to hear what others think - if you think this is a useful discussion - I can open 2 unrelated issues if it helps to make discussing these 2 unrelated issues focused.
My inclination right now is to just provide a quick way to make a fixed temp dir w/o it being deleted, i.e. the alt solution in OP, and leave the original PR for maybe some time in the future if we see other benefits to doing so.<|||||>I agree with having a quicker fix for this specific problem and think a bit more about a general way to have a specific debug behavior for our use.<|||||>If you're joining in now, please ignore the proposed code (as it also requires changing from Mixin to a subclass), and what this needs is your feedback on this question: **do we want to have a simple DEBUG flag in tests, that once enabled it would switch to not deleting temp dirs and would use a fixed temp dir path, so that it's easy to monitor?** So instead of needing to manually tweak the tests, we have the debug setup already in place. That's the question.
Let me know if perhaps I should scratch that PR and start a new one discussing just that, so that the initial attempts at solving the issue won't be confusing to you, the readers.
And to quickly give you context, we are talking about:
```
def setUp(self):
self.tmpdirname = tempfile.mkdtemp()
```
and the modified version is:
```
DEBUG=0
[...]
def setUp(self):
super().setUp()
# if you need to debug the contents of the tmpdirname, set DEBUG to True, which will then use
# a hardcoded path and won't delete it at the end of the test
if not DEBUG:
self.tmpdirname = self.get_auto_remove_tmp_dir()
else:
self.tmpdirname = self.get_auto_remove_tmp_dir(tmp_dir="./tmp/token-test", after=False)
```
https://github.com/huggingface/transformers/blob/d984fd82bf940c62700919da5735e60f3f883348/tests/test_tokenization_common.py#L69
except the code itself will be different as we can't make it work with mixins in that way.
If it helps, here is the last time a related issue of working with temp dirs has been worked on with a successful PR merge:
https://github.com/huggingface/transformers/pull/6494 - i.e. this is a continuation of the same to other parts of the test suite.
<|||||>> do we want to have a simple DEBUG flag in tests, that once enabled it would switch to not deleting temp dirs and would use a fixed temp dir path, so that it's easy to monitor?
yes, this would be useful if you can do it in a way that doesn't add overhead for people trying to add new tokenizers.
I didn't look at the code.<|||||>I will close it for now and revisit the next time I deal with this unless someone beats me to it. |
transformers | 6,814 | closed | Create smaller number of heads in attn without pruning using shared parameters | # 🚀 Feature request
Instead of prunning heads or masking heads, create new linear layers with views of the larger linear layers that contains all heads.
## Motivation
For various models (Bert, Distilbert, etc.) I would like to be able to experiment on using separate heads without having to prune the heads. For example, training layers with separate heads. There is a way to masks heads, but the computation is still performed over all of the k, q, v tensors, and then masking is performed.
Is there a way to create views of the heads that you would otherwise prune, so that the extra computation per unused heads is not performed. My understanding is that the attention computation is one of the more expensive operation, since it is quadratic in the seq_len.
For example, with DistilBert, I was thinking of replacing, q_lin, v_lin, and k_lin with a shared linear layer that is a view into the original q_lin, k_line, v_lin.
## Contribution
I am thinking of adapting this code from prune_linear_layer from modeling_utils.py. Do you think this will work? Is there a better way to do this?
``
def share_linear_layer_by_index(layer, index, dim=0):
""" Create a new linear layer (a model parameters) with shared parameters from index of the old layer
Return new layer with requires_grad=True.
"""
index = index.to(layer.weight.device)
W = layer.weight.index_select(dim, index)
if layer.bias is not None:
if dim == 1:
b = layer.bias
else:
b = layer.bias[index]
new_size = list(layer.weight.size())
new_size[dim] = len(index)
new_layer = nn.Linear(new_size[1], new_size[0], bias=layer.bias is not None).to(layer.weight.device)
new_layer.bias.requires_grad = False
new_layer.weight = W
new_layer.weight.requires_grad = True
if layer.bias is not None:
new_layer.bias.requires_grad = False
new_layer.bias = b
new_layer.bias.requires_grad = True
return new_layer
`` | 08-29-2020 01:31:30 | 08-29-2020 01:31:30 | This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.
|
transformers | 6,813 | closed | RAG | # Intro
This pull request implements a RAG-sequence and a RAG-token models, as defined in [the paper](https://arxiv.org/pdf/2005.11401.pdf).
RAG (for Retrieval Augmented Generation) is an architecture combining a retriever with a generator model. During a forward pass, the input sequence is used as a question to the retriever, which surfaces relevant context documents. The documents are then prepended to the input and such contextualized input is passed to the generator. In the paper, we experiment with DPR-based retrieval and BART generator.
# Implementation
RAG is a seq2seq model which encapsulates three core components:
- a retriever - a wrapper around a faiss index of the documents,
- a question encoder which encodes the input sequence before passing it to the retriever,
- a generator which learns to generate the output from the contextualized input
as well as respective tokenizers (we need to be able to decode the input sequence, encode it with the question encoder tokenizer, and then encode contextualized input with the generator tokenizer again).
---
We implement two variants of the model, both presented in the paper:
- `RagSequence`, which uses `DPRQuestionEncoder` as the question encoder. As for the generator, two compatible architectures have been tested: `BartForConditionalGeneration` and `T5ForConditionalGeneration`.
- `RagToken`, which uses `DPRQuestionEncoder` and `BartForConditionalGeneration` as the generator.
---
Key files in the pull request:
- `modeling_rag.py`, `tokenization_rag.py`, `configuration_rag.py` the core model implementation
- `retrieval_rag.py` - a distributed retriever built on top of the `torch.distributed` communication package. The retriever is an interface between the model and the faiss index of the encoded documents. During training, all workers initialize their own instance of the retriever, however, only the main worker loads the index into memory, which prevents OOMs on machines with multiple GPUs (we store the index in RAM). The index itself is based on the `nlp.Datasets`. We also implement a variant compatible with indices built using the original DPR implementation (https://github.com/facebookresearch/DPR)
- `eval_rag.py` - an evaluation script which allows to perform the evaluation end to end (measures the exact match and F1 on the downstream task) as well as the evaluation of the retrieval component alone (measures precision@k).
- `finetune.py` - a training script for finetuning RAG models.
# Testing
We have successfully managed to reproduce original paper results on Natural Questions for a couple of scenarios:
- converting original `fairseq` checkpoints to `HuggingFace` and evaluating them on `HuggingFace`
- converting original `fairseq` checkpoints to `HuggingFace` and continuing fine-tuning on `HuggingFace`
- training from scratch on `HuggingFace`
# Pretrained, ready-to-use models (after PR is merged).
- RagToken: https://huggingface.co/facebook/rag-token-nq
- RagSequence: https://huggingface.co/facebook/rag-sequence-nq
- RagTokenBase: https://huggingface.co/facebook/rag-token-base
- RagSequneceBase: https://huggingface.co/facebook/rag-sequence-base
# Future PR
- [ ] Add and test distributed Pytorch and possible Ray - Retriever @lhoestq
- [ ] Upload more rag model combinations.
- [ ] Clean examples
| 08-28-2020 22:34:26 | 08-28-2020 22:34:26 | # [Codecov](https://codecov.io/gh/huggingface/transformers/pull/6813?src=pr&el=h1) Report
> Merging [#6813](https://codecov.io/gh/huggingface/transformers/pull/6813?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/244e1b5ba331cb4c1ed96d88d0895c252567f7f3?el=desc) will **decrease** coverage by `0.85%`.
> The diff coverage is `82.89%`.
[](https://codecov.io/gh/huggingface/transformers/pull/6813?src=pr&el=tree)
```diff
@@ Coverage Diff @@
## master #6813 +/- ##
==========================================
- Coverage 78.81% 77.95% -0.86%
==========================================
Files 174 178 +4
Lines 33670 34125 +455
==========================================
+ Hits 26537 26603 +66
- Misses 7133 7522 +389
```
| [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/6813?src=pr&el=tree) | Coverage Δ | |
|---|---|---|
| [src/transformers/tokenization\_rag.py](https://codecov.io/gh/huggingface/transformers/pull/6813/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fcmFnLnB5) | `69.76% <69.76%> (ø)` | |
| [src/transformers/modeling\_rag.py](https://codecov.io/gh/huggingface/transformers/pull/6813/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19yYWcucHk=) | `76.98% <76.98%> (ø)` | |
| [src/transformers/testing\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/6813/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90ZXN0aW5nX3V0aWxzLnB5) | `67.28% <77.77%> (+0.40%)` | :arrow_up: |
| [src/transformers/retrieval\_rag.py](https://codecov.io/gh/huggingface/transformers/pull/6813/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9yZXRyaWV2YWxfcmFnLnB5) | `91.27% <91.27%> (ø)` | |
| [src/transformers/\_\_init\_\_.py](https://codecov.io/gh/huggingface/transformers/pull/6813/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9fX2luaXRfXy5weQ==) | `99.37% <100.00%> (+0.01%)` | :arrow_up: |
| [src/transformers/configuration\_auto.py](https://codecov.io/gh/huggingface/transformers/pull/6813/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9jb25maWd1cmF0aW9uX2F1dG8ucHk=) | `96.25% <100.00%> (+0.09%)` | :arrow_up: |
| [src/transformers/configuration\_dpr.py](https://codecov.io/gh/huggingface/transformers/pull/6813/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9jb25maWd1cmF0aW9uX2Rwci5weQ==) | `100.00% <100.00%> (ø)` | |
| [src/transformers/configuration\_rag.py](https://codecov.io/gh/huggingface/transformers/pull/6813/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9jb25maWd1cmF0aW9uX3JhZy5weQ==) | `100.00% <100.00%> (ø)` | |
| [src/transformers/file\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/6813/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9maWxlX3V0aWxzLnB5) | `83.37% <100.00%> (+0.53%)` | :arrow_up: |
| [src/transformers/modeling\_auto.py](https://codecov.io/gh/huggingface/transformers/pull/6813/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19hdXRvLnB5) | `83.96% <100.00%> (+1.58%)` | :arrow_up: |
| ... and [24 more](https://codecov.io/gh/huggingface/transformers/pull/6813/diff?src=pr&el=tree-more) | |
------
[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/6813?src=pr&el=continue).
> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)
> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data`
> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/6813?src=pr&el=footer). Last update [3ebb1b3...db3e5e0](https://codecov.io/gh/huggingface/transformers/pull/6813?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).
<|||||>@lhoestq thanks for the comments! I did consider moving retrieval outside of the model - the benefit of this that I see would be that we would move all training-related logic (e.g. handling distributed processing in Retriever) from `transformers` to `examples`.
That said, I'm still in favor of keeping the call to `contextualize` as part of the forward pass. Here's my thinking:
- retrieval is more than just data pre-processing step, it is a core part of the model's architecture. E.g. we can't pre-compute retrieved docs for a batch of data beforehand as the question encoder will be updated at every step of training, so the set of retrieved docs would be changing dynamically. If we move retrieval outside of the model people may be tempted to do that.
- we would need to call `contextualize` before every forward pass on the model, so not only in finetuning, but also e.g. in evaluation code. On top of that anyone who would want to run the model for demo purposes would have to instantiate the retriever first and remember to call `contextualize`, instead of doing the two simple steps that other HF models require (encoding the sequence and running the model) - we could potentially consider making contextualization a part of tokenizer's `encode` method (not sure this would be intuitive for people used to HF's APIs) - however, the retrieval logic would still remain in `transformers` then
- In terms of flexibility - I think with the current approach it'd still be possible for people to build different retrievers and pass them to the model
What do you think? I'd be curious to know what others think about it, cc @patrick-s-h-lewis, @thomwolf<|||||>So we've been brainstorming with @patrickvonplaten and @lhoestq on this yesterday and here is a proposal.
The general context is that we expect to include more hybrid models with a retrieval component in the future (like REALM, MARGE, knn-LM) so it's nice if we can setup an API which is general enough to include a diversity of models.
Here is the general idea: we can add a new base class to the library (in addition to the `Tokenizer` and `Model`) which we can call a `Retriever` for instance. The `Retriever` class:
- won't include trainable components and will thus be framework independent like the tokenizer,
- does both the retrieval of the documents given an encoding and the post processing (in our case the retokenization),
- it is sub-classed to be model specific (`RAGRetriever`) like the models and tokenizers,
- we can add an `AutoRetriever` version later.
We probably want to keep it's API fairly generic for now since this field is only beginning to be explored. The `Retriever` can just have a `__call__` method (like the tokenizers) which expect an encoding and does retrieval and postprocessing.
Then (borrowing @patrickvonplaten's idea) the plan would be to have a `RetrievalGenerationModel` that comprises the trainable (and PyTorch/TF-specific) elements, i.e. the encoder and generator models. In its forward pass, the `RetrievalGenerationModel` would take, besides the usual `input_ids`, one more input argument which is an instance of the `Retriever` class. The `RetrievalGenerationModel` would just call `Retriever.forward(encoded_input)` and expect an encoding that the `self.generator` could then be conditioned on.
What do you think? We would help you implement this option of course since it impacts more significantly the library.<|||||>I'm just wondering if TF will like a model which accept a class as input. What do you think @jplu @patrickvonplaten?
We could also have a method in the class to set the retriever instead of dynamically providing it. <|||||>Never tried, but I doubt it should be possible in compiled mode. I think what you propose would be a better way to go.<|||||>> Never tried, but I doubt it should be possible in compiled mode. I think what you propose would be a better way to go.
@jplu - I think there was a slight misunderstanding.
With the proposed approach we actually would pass an instantiation of a class as an argument to the forward pass of the `RetrievalGenerationModel` -> so before following this path we should check if this can nicely be done with TF...<|||||>Oh ok! I thought the question was about to pass one class that contains all the arguments. My bad 😢
So, after reading your explanation I can say, yes it is doable!<|||||>Hey @ola13,
Thanks for your comment, this is indeed a very important aspect that I didn't really think of before.
With @lhoestq, we have been brainstorming a bit and thought maybe a slighly different design could make sense:
```python
#!/usr/bin/env python3
class RetrievalGenerationModel(PretrainedModel):
def __init__(self, config: RetrievalGenerationConfig, encoder: PretrainedModel, retrieval_generator: PretrainedModel):
if encoder is not None and retrieval_generator is not None:
self.encoder = encoder
self.retrieval_generator = retrieval_generator
self.config = RetrievalGenerationConfig.from_encoder_generator_config(self.encoder.config, self.retrieval_generator.config)
assert config is not None
super().__init__(config)
if encoder is None:
self.encoder = AutoModel.from_config(config.encoder)
if retrieval_generator is None:
self.retrieval_generator.from_config(config.generator)
@classmethod
def from_pretrained_encoder_generator(cls, encoder_model_id, generator_model_id):
encoder = AutoModel.from_pretrained(...) # load any query encoding model
retrieval_generator = AutoRetrievalGeneratorModel.from_pretrained(...) # this would be a new class that contains any model that can be used as the `retrieval_generator` model.
return cls(encoder=encoder, retrieval_generator=retrieval_generator)
def forward(input_ids, retriever: PretrainedRetriever):
# 1. Every retriever model encodes the query -> any `AutoModel` can be used here
input_ids_encodings = self.encoder(input_ids) # model with weights
# 2. Use costumized retriever (tokenizer-like) class instance, like `RAGRetriever` that
# - query the index
# - reformats the document outputs
# - tokenizes the document outpus
retrieved_docs_input_ids, retrieved_docs_encodings = retriever(input_ids_encodings, input_ids) # tokenizer like postprocessor that returns the tokenized docs input and the docs encodings
# 3. Now the retrieval_generator requires a specific forward pass which accepts at least four kinds of tensors: 1) the input_ids (query), 2) the encoded_input_ids (encoded query), 3) retrieved_docs_input (tokenized context) and 4) retrieved_docs_encodings
output_ids = self.retrieval_generator(input_ids, encoded_query, retrieved_docs_input_ids, retrieved_docs_encodings) # any `AutoRetrievalGeneratorModel` can be used here
class RagRetrievalGenerator(PretrainedModel):
def __init__(self):
self.generator = AutoModelForSeq2Seq.from_pretrained(...) # e.g. Bart
def forward(input_ids, encodings, docs_input_ids, docs_encodings):
doc_scores = torch.bmm(encodings.unsqueeze(1), docs_encodings.transpose(1, 2)).squeeze(1)
....
output_ids = self.generator.generate(...)
class RAGRetriever(PretrainedRetriever)
"""
This retriever is framework independant (for both TF and PT)
similar to a tokenizer
"""
def __init__(self):
self.docs = nlp.load_dataset(...)
...
def __call__(input_ids_encodings, input_ids):
# no tensor operations happen here
...
class DPRRetrivalGenerator(PretrainedModel):
def __init__(self):
self.genator = AutoModelForQuestionsAnswering.from_pretrained(...) # QA model
def forward(input_ids, encodings, docs_input_ids, docs_encodings):
concated_qa_input = torch.cat([input_ids, docs_input_ids], dim=-1)
output_ids = self.generator(concated_qa_input)
class DPRRetriever(PretrainedRetriever)
"""
This retriever is framework independant (for both TF and PT)
similar to a tokenizer
"""
def __init__(self):
self.docs = nlp.load_dataset(...)
...
def __call__(input_ids_encodings, input_ids):
# no tensor operations happen here
...
```
Hopefully this is somewhat understandable @ola13 @thomwolf ...
@lhoestq and I think that for each RetrievalAugmentedModel we need 2 specific parts:
1) A specific Retriever: how documents are retrieved, formatted and tokenized -> e.g. `RAGRetriever`
2) A specific Generator: Here we can also have multiple possibilities: DPR uses a `AutoModelForQuestionAnswering` while RAG uses a `AutoModelForSeq2Seq`
So with this framework we would have to introduce 1 general class that would be used for all RetrievalAugmentedModels, called `RetrievalGenerationModel` (or whatever name fits better), and 2 architecture-specific classes, `RAGRetriever` and `RagRetrievalGenerator`.
Would be keen to hear your thoughts :-) <|||||>Hey @patrickvonplaten, makes sense and in fact it's not very different from how we structured the code already the key differences that I see are:
- we move re-tokenization between query_encoder and generator to the Retriever (so respective tokenizers will be encapsulated by the Retriever not a model class as we currently do it)
- we move retrieval score calculation to the model so that no tensor operations happen in the retriever
which both should be pretty straightforward to implement.
The one thing that I'm still on the fence about is passing a `retriever` to each `forward` pass on a `RetrievalGenerationModel`, instead of making it a member of `RetrievalGenerationModel` class. Why do you feel the former is preferable over the latter?<|||||>Yeah, good point! It's a bit weird to pass a class instance just to make a forward pass with it.
My main reason is the following:
Currently, the library makes a very clear distinction between `config`, `tokenizer` and `model` which are all independent of each other. Each of these classes have a seperate `.from_pretrained()` and `.save_pretrained()` method where as the `PretrainedModel.save_pretrained(...)` and `PretrainedModel.from_pretrained(...)` internally call `PretrainedConfig.save_pretrained(...)` and `PretrainedConfig.save_pretrained(...)`, but **never** the `PretrainedTokenizer.from_pretrained(...)` an d`PretrainedTokenizer.save_pretrained(...)` methods. For a `RetrievalGenerationModel` I would like to reuse `PretrainedModel`'s `from_pretrained(...)` and `save_pretrained(...)` methods which means that a tokenizer instance should not be part of the model because other wise we would have to tweak this function (which I don't think is a good idea).
Also, this will make the `RetrievalGenerationModel` a "clean" and relatively light `Model` object without any string processing logic in it whatsoever which is more in line with other `PretrainedModel` classes. <|||||>@patrickvonplaten, got it, yeah makes sense! We would still want to call `PretrainedTokenizer.from_pretrained(...)` when initializing `RagRetriever` but I guess this should be fine?
Okay, so I would propose to do the following - I will refactor this PR to follow the design we discussed. It seems though that implementing the generic `Retriever` logic as discussed earlier by @thomwolf would require extra effort and time, and is not necessarily within the scope of this PR. In the interest of time, we could land this PR and then proceed with generalizing the retrieval logic? I'm then happy to work with the RAG implementation to make it compatible.<|||||>Exactly! I was thinking that we either create a genereric `PretrainedRetriever` class with a `from_pretrained()` method that calls the tokenizer `from_pretrained()` methods or add `from_pretrained()` method directly to `RagRetriever`. Maybe @lhoestq and @thomwolf have better insight on the "tokenizer" side.
@ola13 maybe we can wait quickly if @lhoestq and @thomwolf are fine with the design as discussed above :-) <|||||>Sounds awesome to me!<|||||>Hey I just refactored the model following suggestions above. One point is that I had to modify `generation_utils.py` to account for a model which takes a `retriever` as an argument to the encoder. Let me know what you think!<|||||>Hi, a question - to use RAG I need a couple of non-standard dependencies (faiss, psutil, nlp) - can I define a special test environment which would install those for rag tests? any pointers on how to handle this?<|||||>> Hey I just refactored the model following suggestions above. One point is that I had to modify `generation_utils.py` to account for a model which takes a `retriever` as an argument to the encoder. Let me know what you think!
Awesome ! I'll take a look. Also cc @patrickvonplaten
> Hi, a question - to use RAG I need a couple of non-standard dependencies (faiss, psutil, nlp) - can I define a special test environment which would install those for rag tests? any pointers on how to handle this?
Maybe @LysandreJik knows more about how to handle tests with dependencies ?
<|||||>Hey @ola13,
I think the general code design is exactly what we have imagined to go for, defining a `RagRetriever` and passing the `retriever` to the forward pass, so this is great! <|||||>Regarding the test dependencies, you can add the libraries here: https://github.com/huggingface/transformers/blob/d6c08b07a087e83915b4b3156bbf464cebc7b9b5/setup.py#L92 and it should automatically be installed for testing on circle ci :-) `psutil` is already in the test dependency<|||||>@ola13 - it would be awesome if you could add one "full" integration test with hardcoded input and output under @slow
By that I mean, *e.g.* hardcoding an input question "Why does it rain", loading a relevant dataset using the `HfIndex` and the full pretrained encoder and generator model and hardcoding the expected output answer in thet test. I think all operations are deterministic (beam search, etc...), so no random seeds have to be set.
This way we have one test where we can be sure that the model works as expected and every change to the model in the future can be checked against that.
The tests you have in `test_modeling_rag.py` so far look great. We could also add a full `RagModel` test by defining a dummy dataset that will be instantiated from a hardcoded dict at test time and instantiating a very light `RagRetriever` at test time this way. But we can manually add those tests later, they are not super important.
In terms of a timeline, it would be be awesome if you manage to make the `test_modeling_rag.py` tests pass and if you could add one "full" integration test showing reasonable results. After this is finished, I think the best idea is if we add some changes on top of your PR (this should take another 1,2 days) and then merge the model into the lib :-)
Thanks a mille for your awesome work so far!!!<|||||>Hey @patrickvonplaten, sounds good! yes definitely adding an integration test was on my agenda, right now having merged the `master` I'm also dealing with some issues arising after the refactor from https://github.com/huggingface/transformers/commit/afc4ece462ad83a090af620ff4da099a0272e171#diff-72b038fcff0de4ae5e094e3cde9471f1 as we were relying on the old structure of `past`. I'm hoping to be done with both of these things by tomorrow :) <|||||>Hi, I just added an integration test for RAG using the dummy variant of `wiki_dpr`. However, I had to locally hack `datasets` to make it run locally, as there seems to be a discrepancy between the dummy index name hardcoded in `wiki_dpr.py` here: https://github.com/huggingface/datasets/blob/37d4840a39eeff5d472beb890c8f850dc7723bb8/datasets/wiki_dpr/wiki_dpr.py#L72 (expecting `dummy.psgs_w100.nq.IndexHNSWFlat-IP-train.faiss`) and what's available on HF's google cloud bucket:
```
~$ gsutil ls -r gs://huggingface-nlp/datasets/wiki_dpr/*
gs://huggingface-nlp/datasets/wiki_dpr/
gs://huggingface-nlp/datasets/wiki_dpr/dummy_psgs_w100_with_nq_embeddings_IndexFlatIP-train.faiss
gs://huggingface-nlp/datasets/wiki_dpr/psgs_w100.nq.IVFPQ4096_HNSW32_PQ64-IP-train.faiss
gs://huggingface-nlp/datasets/wiki_dpr/psgs_w100_with_nq_embeddings_IVFPQ4096_HNSW32,PQ64-IP-train.faiss
```
cc @lhoestq - this would have to be fixed quickly, alternatively I could use full `wiki_dpr` in tests, but that's 78GB, not sure if it makes sense.
Let me know what you think!<|||||>> cc @lhoestq - this would have to be fixed quickly, alternatively I could use full `wiki_dpr` in tests, but that's 78GB, not sure if it makes sense.
I fixed it, dummy.psgs_w100.nq.IndexHNSWFlat-IP-train.faiss is now available on gcs
<|||||>Previous RAG code is now saved in this PR: #7200<|||||>Last fail is due to time-out. All import tests are passing => merging to master. |
transformers | 6,812 | closed | Potential bug in PLM training | There seems to be a bug in `mask_tokens` method of `DataCollatorForPermutationLanguageModeling`. Based on the comment, [this line](https://github.com/huggingface/transformers/blob/master/src/transformers/data/data_collator.py#L294) is supposed to compute mask for non-functional tokens, ie. anything but padding and special tokens. So there should be an OR between the `padding_mask` and `special_tokens_mask`, and not AND. For reference, the [corresponding line](https://github.com/zihangdai/xlnet/blob/master/data_utils.py#L602) in the original XLNet code also has an OR.
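In other words, the fix would be a one-character change along these lines (I'm paraphrasing the relevant line from memory, so the exact surrounding code may differ slightly):
```python
# current: a token is treated as functional only if it is BOTH padding AND special
non_func_mask = ~(padding_mask & special_tokens_mask)

# proposed: a token is functional if it is padding OR special
non_func_mask = ~(padding_mask | special_tokens_mask)
```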
I should acknowledge that I haven't understood the permutation masking code properly yet. But raising an issue, because it seems wrong to me.
-----
Besides the above problem, I'm also getting a very bad perplexity (**296.0**) on evaluating (w/o finetuning) `xlnet-base-cased` PLM model on plain wikitext2 dataset (`wiki.test.raw`). I've used XLNet example from [here](https://github.com/huggingface/transformers/tree/master/examples/language-modeling) (without `--do-train` flag) to get the perplexity.
The PLM code only works if the sequence lengths are even. To work around this, I append a padding token when the sequence length is odd. Concretely, I replaced the [error here](https://github.com/huggingface/transformers/blob/master/src/transformers/data/data_collator.py#L255) with:
```
padding = inputs.new_ones((inputs.size(0), 1))*self.tokenizer.pad_token_id
inputs = torch.cat([inputs, padding], dim=1)
```
For comparison, the perplexity of BERT in this dataset is around 10.
Transformer Version: from master.
@patrickvonplaten @TevenLeScao @LysandreJik @shngt | 08-28-2020 22:29:43 | 08-28-2020 22:29:43 | I think this is correct. It should be replaced by an `|`. Do you get a better perplexity if you change this line?<|||||>Thanks a lot for opening this issue @HarshTrivedi ! I also agree that the logic should be OR and not AND. @shngt - can you maybe comment here as well?<|||||>Thank you for confirming this!
If I remember correctly, changing `&` to `|` didn't fix the high zero-shot perplexity for me. I'll try it again later today or tomorrow and report back the numbers with `&` vs `|`.<|||||>This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.
<|||||>I agree - the logic should be OR and not AND. Could you please confirm if the numbers change @HarshTrivedi?
Sorry for the delay - I missed the notification at the time. I'll submit a PR for AND -> OR fix asap, and try to do some more stringent testing to catch the reason for the perplexity difference. How can I proceed with the latter @patrickvonplaten @LysandreJik ?<|||||>This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.
<|||||>(Resolved by https://github.com/huggingface/transformers/pull/8409 I believe) |
transformers | 6,811 | closed | Pegasus finetune script: add --adafactor | 08-28-2020 21:21:00 | 08-28-2020 21:21:00 | # [Codecov](https://codecov.io/gh/huggingface/transformers/pull/6811?src=pr&el=h1) Report
> Merging [#6811](https://codecov.io/gh/huggingface/transformers/pull/6811?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/5ab21b072fa2a122da930386381d23f95de06e28?el=desc) will **decrease** coverage by `0.10%`.
> The diff coverage is `100.00%`.
[](https://codecov.io/gh/huggingface/transformers/pull/6811?src=pr&el=tree)
```diff
@@ Coverage Diff @@
## master #6811 +/- ##
==========================================
- Coverage 79.58% 79.47% -0.11%
==========================================
Files 157 157
Lines 28588 28586 -2
==========================================
- Hits 22752 22719 -33
- Misses 5836 5867 +31
```
| [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/6811?src=pr&el=tree) | Coverage Δ | |
|---|---|---|
| [src/transformers/configuration\_pegasus.py](https://codecov.io/gh/huggingface/transformers/pull/6811/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9jb25maWd1cmF0aW9uX3BlZ2FzdXMucHk=) | `100.00% <100.00%> (ø)` | |
| [src/transformers/configuration\_openai.py](https://codecov.io/gh/huggingface/transformers/pull/6811/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9jb25maWd1cmF0aW9uX29wZW5haS5weQ==) | `34.28% <0.00%> (-62.86%)` | :arrow_down: |
| [src/transformers/tokenization\_albert.py](https://codecov.io/gh/huggingface/transformers/pull/6811/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fYWxiZXJ0LnB5) | `28.84% <0.00%> (-58.66%)` | :arrow_down: |
| [src/transformers/modeling\_openai.py](https://codecov.io/gh/huggingface/transformers/pull/6811/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19vcGVuYWkucHk=) | `23.87% <0.00%> (-57.10%)` | :arrow_down: |
| [src/transformers/modeling\_tf\_distilbert.py](https://codecov.io/gh/huggingface/transformers/pull/6811/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9kaXN0aWxiZXJ0LnB5) | `64.47% <0.00%> (-34.36%)` | :arrow_down: |
| [src/transformers/tokenization\_marian.py](https://codecov.io/gh/huggingface/transformers/pull/6811/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fbWFyaWFuLnB5) | `67.79% <0.00%> (-31.36%)` | :arrow_down: |
| [src/transformers/tokenization\_transfo\_xl.py](https://codecov.io/gh/huggingface/transformers/pull/6811/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fdHJhbnNmb194bC5weQ==) | `20.53% <0.00%> (-21.21%)` | :arrow_down: |
| [src/transformers/generation\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/6811/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9nZW5lcmF0aW9uX3V0aWxzLnB5) | `96.66% <0.00%> (-0.28%)` | :arrow_down: |
| [src/transformers/tokenization\_utils\_base.py](https://codecov.io/gh/huggingface/transformers/pull/6811/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fdXRpbHNfYmFzZS5weQ==) | `93.76% <0.00%> (+0.27%)` | :arrow_up: |
| [src/transformers/modeling\_tf\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/6811/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl91dGlscy5weQ==) | `87.29% <0.00%> (+0.32%)` | :arrow_up: |
| ... and [8 more](https://codecov.io/gh/huggingface/transformers/pull/6811/diff?src=pr&el=tree-more) | |
------
[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/6811?src=pr&el=continue).
> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)
> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data`
> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/6811?src=pr&el=footer). Last update [5ab21b0...67322db](https://codecov.io/gh/huggingface/transformers/pull/6811?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).
|
|
transformers | 6,810 | closed | [s2s] save first batch to json for debugging purposes | helps debugging and understanding at a very low cost, by writing `text_batch.json` and `tok_batch.json` to `output_dir/` | 08-28-2020 21:08:39 | 08-28-2020 21:08:39 | |
transformers | 6,809 | closed | [s2s] Test hub configs in self-scheduled CI | <!-- This line specifies which issue to close after the pull request is merged. -->
Regression test for #6806
| 08-28-2020 21:04:41 | 08-28-2020 21:04:41 | |
transformers | 6,808 | closed | bart-large-cnn ROUGE-L scores | ## Environment info
### Who can help
BART + Summarization @sshleifer
## Information
Model I am using is BART.
The problem arises when:
verifying accuracy numbers of facebook/bart-large-cnn on CNN+Daily Mail. The paper reports R1, R2, RL of 44.16, 21.28, 40.90 but I can get only 44.05, 21.07, 30.62. I used [this](https://github.com/abisee/cnn-dailymail) to make my dataset. Is this expected?
The tasks I am working on is:
* CNN-Dm summarization task
## To reproduce
Steps to reproduce the behavior:
1. Follow instructions to download dataset
2. Run with `python run_summarization.py --reference_path=data/cnn_dm/test.target data/cnn_dm/test.source results/test.log` | 08-28-2020 20:56:32 | 08-28-2020 20:56:32 | Also noticed this! I have convinced myself that it's a scoring difference because the summaries generated are the same between this model and the fairseq implementation.
<|||||>This might help:
https://github.com/google-research/google-research/issues/168
I used pyrouge and got R1, R2, RL = 44.32, 21.15, 37.53
<|||||>@yxyzzz can you tell me how you're using it? I get similar scores with py-rouge
```python
from typing import Dict, List

import nltk
import rouge  # py-rouge package
from rouge_score import rouge_scorer, scoring

ROUGE_KEYS = ["rouge1", "rouge2", "rougeLsum"]  # assumed; matches the keys printed below

def calculate_rouge(output_lns: List[str], reference_lns: List[str], use_stemmer=True) -> Dict:
    # Score with Google's rouge_score, aggregating bootstrap confidence intervals
    scorer = rouge_scorer.RougeScorer(ROUGE_KEYS, use_stemmer=use_stemmer)
    aggregator = scoring.BootstrapAggregator()
    for reference_ln, output_ln in zip(reference_lns, output_lns):
        scores = scorer.score(reference_ln, output_ln)
        aggregator.add_scores(scores)
    result = aggregator.aggregate()

    # Score the same pairs with py-rouge for comparison
    nltk.download('punkt')
    evaluator = rouge.Rouge(metrics=['rouge-n', 'rouge-l'],
                            max_n=2,
                            limit_length=False,
                            apply_avg=True)
    scores = evaluator.get_scores(reference_lns, output_lns)
    print("py-rogue", scores)
    print("rogue_scorer", {k: round(v.mid.fmeasure * 100, 4) for k, v in result.items()})
```
Results in:
```
py-rogue {'rouge-1': {'f': 0.44335299665102107, 'p': 0.5174289830764615, 'r': 0.40466586165106366}, 'rouge-2': {'f': 0.21133693864752542, 'p': 0.2465209393822732, 'r': 0.19324181648769206}, 'rouge-l': {'f': 0.3073058732169781, 'p': 0.35988134598642835, 'r': 0.2798097075410874}}
rogue_scorer {'rouge1': 44.0698, 'rouge2': 21.0711, 'rougeLsum': 30.6233}
```<|||||>1. rouge_score splits sentences on '\n'. You can add a '\n' between sentences in the summaries and then evaluate. The summary-level rougeL (rougeLsum) should be a lot higher and close to the one reported in the literature.
'{'rouge1': 44.0536, 'rouge2': 21.0711, 'rougeL': 30.6157, 'rougeLsum': 40.9812}'
```python
from nltk.tokenize import sent_tokenize

output_ln2 = []
for o in output_ln:
    s = sent_tokenize(o)  # split each summary into sentences
    output_ln2.append('\n'.join(s))
```
2. Use pyrouge -> https://pypi.org/project/pyrouge/ <|||||>replacing
```
output_lns = [x.rstrip() for x in open(args.save_path).readlines()]
reference_lns = [x.rstrip() for x in open(args.reference_path).readlines()][: len(output_lns)]
```
with the following, which works for rouge_score
```
output_lns = [" . \n".join(x.rstrip().split('. ')) for x in open(args.save_path).readlines()]
reference_lns = [" . \n".join(x.rstrip().split(' . ')) for x in open(args.reference_path).readlines()][: len(output_lns)]
```
Thanks @yxyzzz!<|||||>Should we change run_eval.py?
<|||||>Opened a PR at #7356 that fixes this issue @sshleifer |
transformers | 6,807 | closed | [WIP] Added token_type_id support to GPT2Model | Fixes #6794 #4922 #1339
Added token type embedding support to GPT2Model.
To use `token_type_ids` with `GPT2Model` the user can now supply `type_vocab_size` to the `GPT2Config` constructor and then pass `token_type_ids` as input to the `forward` method as with other models.
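For illustration, a minimal usage sketch of the proposed API (this assumes the PR is applied; `type_vocab_size`-controlled token type embeddings are not part of the released library):
```python
import torch
from transformers import GPT2Config, GPT2Model, GPT2Tokenizer

# type_vocab_size=2 is the switch proposed in this PR
config = GPT2Config(type_vocab_size=2)
model = GPT2Model(config)  # randomly initialised, for illustration only

tokenizer = GPT2Tokenizer.from_pretrained("gpt2")
inputs = tokenizer("segment one segment two", return_tensors="pt")
# Hypothetical segment ids; values must be < type_vocab_size
token_type_ids = torch.zeros_like(inputs["input_ids"])
outputs = model(inputs["input_ids"], token_type_ids=token_type_ids)
```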
Using all the default parameters, the model should continue to work as before, since I have set `type_vocab_size=None` as the default of the `GPT2Config` class, which disables my token type embeddings.
If the user tries to pass `token_type_ids` to the `forward` method when token type embeddings have not been enabled for a model, it will raise an informative error message. The same is true for when the user does not pass `token_type_ids` to the `forward` method when token type embeddings have been enabled for a model. | 08-28-2020 20:32:07 | 08-28-2020 20:32:07 | This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.
|
transformers | 6,806 | closed | distilbart-cnn reproduction | @sshleifer
Unable to read `config.json` for `sshleifer/student_cnn_12_6`
[config.json](https://s3.amazonaws.com/models.huggingface.co/bert/sshleifer/student_cnn_12_6/config.json) Line 64 has a format error:
```
"force_bos_token_to_be_generated", true
}
```
This should be:
```
"force_bos_token_to_be_generated" : true
}
```
This is causing an issue in loading `sshleifer/student_cnn_12_6` using `.from_pretrained()`.
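For illustration, a minimal way to surface the exact parse error after downloading the file (the filename is assumed to be a locally saved copy of the config above):
```python
import json

with open("config.json") as f:
    try:
        json.load(f)
    except json.JSONDecodeError as e:
        # For the file above this reports the comma-instead-of-colon around line 64
        print(f"Invalid JSON at line {e.lineno}, column {e.colno}: {e.msg}")
```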
| 08-28-2020 20:29:08 | 08-28-2020 20:29:08 | Fixed, will add a check.
Are you running distillation experiments!? FYI that model is not trained.<|||||>> Are you running distillation experiments!? FYI that model is not trained.
Yes, I know. Reproducing the results, then planning to run a few experiments with it.
Wasn't able to use `--fp16`; kept getting OOM errors (using 4 2080TIs).<|||||>Cool!
re: fp16:
Are you in torch 1.6?
Try torch 1.5.1 with apex installed.
I haven't run anything successfully in torch 1.6 and am very suspicious of native amp.<|||||>> Try torch 1.5.1 with apex installed.
>
> I haven't run anything successfully in torch 1.6 and am very suspicious of native amp.
Thanks, I will try that.
Also, did you use `run_eval.py` for the results [here](https://docs.google.com/spreadsheets/d/1EkhDMwVO02m8jCD1cG3RoFPLicpcL1GQHTQjfvDYgIM/edit#gid=0)?
I tried using `sshleifer/distilbart-cnn-12-6` as well as one I finetuned from `sshleifer/student_cnn_12_6`, but got comparatively lower results.
<|||||>Yes I did, what were your results?<|||||>Validation - `{'rouge1': 36.902390083382635, 'rouge2': 15.98520126771937, 'rougeL': 25.75566724592724}`
Test - `{'rouge1': 33.980893339399074, 'rouge2': 13.925809496977044, 'rougeL': 23.731267594610095}`<|||||>That's awful! Can I see your command?
<|||||>```
python run_eval.py distilbart-cnn-12-6/best_tfmr $DATA_DIR/val.source dbart_val_generations.txt \
--reference_path $DATA_DIR/val.target \
--score_path distilbart-cnn-12-6/cnn_rouge.json \
--task summarization \
--n_obs 100 \
--device cuda \
    --bs 32
```<|||||>On 100 observations that might not be so bad.
The 21.26 Rouge 2 is from the following command (a few months ago):
```bash
python run_eval.py sshleifer/distilbart-cnn-12-6 \
cnn_dm/test.source \
dbart_cnn_12_6_test_gens.txt \
--reference_path cnn_dm/test.target \
--score_path dbart_cnn_12_6_test_rouge.json \
--task summarization --bs 32 --fp16
```
in torch 1.5.1.
Reran it today (it took an hour):
```
{'rouge1': 44.2503, 'rouge2': 21.2586, 'rougeL': 30.3729, 'n_obs': 11490, 'runtime': 3569, 'seconds_per_sample': 0.3106}
```
<|||||>I had tried with 1000 (based on the [comment](https://docs.google.com/spreadsheets/d/1EkhDMwVO02m8jCD1cG3RoFPLicpcL1GQHTQjfvDYgIM/edit#gid=0)), had similar results. I wouldn't have expected the result to change that much, my bad. Thanks for your help!<|||||>This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.
|
transformers | 6,805 | closed | [WIP] Added token_type_id support to GPT2Model | Fixes #6794 #4922 #1339
Added token type embedding support to GPT2Model. | 08-28-2020 20:04:31 | 08-28-2020 20:04:31 | |
transformers | 6,804 | closed | BART ce loss ignores pad_token_id instead of -100 | cc @ibeltagy who noticed this.
I think it is better to ignore pad_token_id than -100, but this is semi-breaking because old training code might have replaced the pad token id with -100 in the labels.
Maybe I should check for both? | 08-28-2020 18:57:23 | 08-28-2020 18:57:23 | # [Codecov](https://codecov.io/gh/huggingface/transformers/pull/6804?src=pr&el=h1) Report
> Merging [#6804](https://codecov.io/gh/huggingface/transformers/pull/6804?src=pr&el=desc) into [master](https://codecov.io/gh/huggingface/transformers/commit/3cac867fac3f8717b25e3026b97b456a4e748039?el=desc) will **increase** coverage by `0.03%`.
> The diff coverage is `100.00%`.
```diff
@@ Coverage Diff @@
## master #6804 +/- ##
==========================================
+ Coverage 79.21% 79.25% +0.03%
==========================================
Files 157 157
Lines 28588 28588
==========================================
+ Hits 22646 22656 +10
+ Misses 5942 5932 -10
```
| [Impacted Files](https://codecov.io/gh/huggingface/transformers/pull/6804?src=pr&el=tree) | Coverage Δ | |
|---|---|---|
| [src/transformers/modeling\_bart.py](https://codecov.io/gh/huggingface/transformers/pull/6804/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19iYXJ0LnB5) | `95.06% <100.00%> (-0.52%)` | :arrow_down: |
| [src/transformers/modeling\_tf\_electra.py](https://codecov.io/gh/huggingface/transformers/pull/6804/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9lbGVjdHJhLnB5) | `25.13% <0.00%> (-73.83%)` | :arrow_down: |
| [src/transformers/modeling\_tf\_openai.py](https://codecov.io/gh/huggingface/transformers/pull/6804/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ190Zl9vcGVuYWkucHk=) | `22.58% <0.00%> (-72.26%)` | :arrow_down: |
| [src/transformers/tokenization\_bart.py](https://codecov.io/gh/huggingface/transformers/pull/6804/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fYmFydC5weQ==) | `42.10% <0.00%> (-57.90%)` | :arrow_down: |
| [src/transformers/trainer.py](https://codecov.io/gh/huggingface/transformers/pull/6804/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90cmFpbmVyLnB5) | `13.21% <0.00%> (-40.45%)` | :arrow_down: |
| [src/transformers/modeling\_marian.py](https://codecov.io/gh/huggingface/transformers/pull/6804/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9tb2RlbGluZ19tYXJpYW4ucHk=) | `60.00% <0.00%> (-30.00%)` | :arrow_down: |
| [src/transformers/training\_args.py](https://codecov.io/gh/huggingface/transformers/pull/6804/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90cmFpbmluZ19hcmdzLnB5) | `66.66% <0.00%> (-25.00%)` | :arrow_down: |
| [src/transformers/trainer\_utils.py](https://codecov.io/gh/huggingface/transformers/pull/6804/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90cmFpbmVyX3V0aWxzLnB5) | `64.44% <0.00%> (-20.00%)` | :arrow_down: |
| [src/transformers/activations.py](https://codecov.io/gh/huggingface/transformers/pull/6804/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy9hY3RpdmF0aW9ucy5weQ==) | `85.00% <0.00%> (-5.00%)` | :arrow_down: |
| [src/transformers/tokenization\_dpr.py](https://codecov.io/gh/huggingface/transformers/pull/6804/diff?src=pr&el=tree#diff-c3JjL3RyYW5zZm9ybWVycy90b2tlbml6YXRpb25fZHByLnB5) | `53.15% <0.00%> (-4.51%)` | :arrow_down: |
| ... and [14 more](https://codecov.io/gh/huggingface/transformers/pull/6804/diff?src=pr&el=tree-more) | |
------
[Continue to review full report at Codecov](https://codecov.io/gh/huggingface/transformers/pull/6804?src=pr&el=continue).
> **Legend** - [Click here to learn more](https://docs.codecov.io/docs/codecov-delta)
> `Δ = absolute <relative> (impact)`, `ø = not affected`, `? = missing data`
> Powered by [Codecov](https://codecov.io/gh/huggingface/transformers/pull/6804?src=pr&el=footer). Last update [3cac867...7a6bf5a](https://codecov.io/gh/huggingface/transformers/pull/6804?src=pr&el=lastupdated). Read the [comment docs](https://docs.codecov.io/docs/pull-request-comments).
<|||||>What do you mean default in config?
The default `ignore_index` is -100 for CrossEntropyLoss.
`pad_token_id` is overwritten by BartConfig.<|||||>Sorry, mixed things up. This would make `BartForConditionalGeneration` behave differently from all the other models (all ModelForMaskedLM and T5ForConditionalGeneration use -100), so I think this is pretty breaking. Users probably have special code to change padded tokens to -100, plus you may want to mask other things than the padding for loss computation (more relevant for masked LM than seq2seq, but still).
I think this is some preprocessing work to do on the labels, for instance the `DataCollatorForLanguageModeling` replaces all non-masked tokens by -100 in the labels. |
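A minimal sketch of that preprocessing, assuming labels arrive padded with `pad_token_id` and the loss keeps the default `ignore_index=-100` (toy shapes, not BART's real vocabulary handling):
```python
import torch
from torch.nn import CrossEntropyLoss

def mask_padding_for_loss(labels: torch.Tensor, pad_token_id: int) -> torch.Tensor:
    # Replace pad positions with -100 so the default ignore_index skips them
    labels = labels.clone()
    labels[labels == pad_token_id] = -100
    return labels

pad_token_id = 1                          # BART's default pad id
lm_logits = torch.randn(2, 5, 50265)      # (batch, seq_len, vocab), toy sizes
labels = torch.randint(2, 50265, (2, 5))
labels[:, -2:] = pad_token_id             # pretend the last two positions are padding

loss_fct = CrossEntropyLoss()             # ignore_index defaults to -100
loss = loss_fct(lm_logits.view(-1, lm_logits.size(-1)),
                mask_padding_for_loss(labels, pad_token_id).view(-1))
```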
transformers | 6,803 | closed | Fix style | <!-- This line specifies which issue to close after the pull request is merged. -->
Fixes #{issue number}
| 08-28-2020 18:52:52 | 08-28-2020 18:52:52 |