repo | number | state | title | body | created_at | closed_at | comments
---|---|---|---|---|---|---|---|
transformers | 9,210 | closed | MLFlow logger breaks training | I'm running into two issues when I try to train with mlflow installed.
The main one is that transformers logs every single parameter and mlflow has restrictions on parameter size. So when I tried to train an `AutoModelForSequenceClassification` using the code below
```python
from transformers import AutoConfig, AutoModelForSequenceClassification

num_labels = 25
config = AutoConfig.from_pretrained(
    './models/roberta',
    num_labels=num_labels)
model = AutoModelForSequenceClassification.from_pretrained(
    './models/roberta',
    config=config)
```
An exception is thrown by mlflow/utils/validation.py - specifically, that the length of the value of the parameter with key "id2label" is > 250 (this is a character limit, so it's effectively trying to send the dict as a string).
The second issue is that if I resume training after this error, I get a further mlflow exception saying that the run is still in progress - I have to manually call
`mlflow.end_run()`
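(For illustration, a minimal way to guarantee the run gets closed even when training crashes - the `trainer` object here is assumed from the training setup above:)
```python
import mlflow

try:
    trainer.train()
finally:
    # close any half-open mlflow run so a later resume doesn't hit "run still in progress"
    mlflow.end_run()
```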
I'm not sure how to work around this (for now I've uninstalled mlflow). Can I tell the trainer not to log certain parameters?
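(A hedged sketch of a possible workaround - the helper below is purely illustrative and not part of transformers: drop any config entry whose stringified value exceeds mlflow's 250-character limit before it gets logged as a param.)
```python
MLFLOW_PARAM_VALUE_MAX_LEN = 250  # mlflow's per-value character limit

def filter_mlflow_params(params: dict) -> dict:
    # Keep only entries (e.g. not the serialized id2label dict) that mlflow will accept.
    return {
        name: value
        for name, value in params.items()
        if len(str(value)) <= MLFLOW_PARAM_VALUE_MAX_LEN
    }
```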
| 12-19-2020 06:24:57 | 12-19-2020 06:24:57 | This issue has been automatically marked as stale and been closed because it has not had recent activity. Thank you for your contributions.
If you think this still needs to be addressed please comment on this thread. |
transformers | 9,209 | closed | Error while loading model file - .ckpt file :: Missing key(s) in state_dict | @julien-c @patrickvonplaten @thomwolf
transformers version == 4.0.0
I trained a T5 model for classification and I am trying to load the checkpoint model saved after 3 epochs.
While trying to load the model, getting the below error.

The model file is a ckpt file.
Any help is appreciated.
Thanks.
| 12-19-2020 05:56:30 | 12-19-2020 05:56:30 | Hi @adithyaan-creator
Seems like you used `pytorch-lightning` for training. `pl` prefixes every `state_dict` key with `model.`, so to load the `pl` checkpoint in HF format, remove the `model.` prefix from the `state_dict` keys.
```python3
def remove_prefix(text: str, prefix: str):
    if text.startswith(prefix):
        return text[len(prefix) :]
    return text  # or whatever

state_dict = {remove_prefix(k, "model."): v for k, v in state_dict.items()}
hf_model.load_state_dict(state_dict)
hf_model.save_pretrained(save_path)
```
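For completeness, the `state_dict` above would typically come straight out of the Lightning checkpoint file, e.g. (the path is a placeholder):
```python
import torch

# Lightning checkpoints store the weights under the "state_dict" key.
ckpt = torch.load("path/to/your.ckpt", map_location="cpu")
state_dict = ckpt["state_dict"]
```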
Also, it would be nice if you could post a code snippet instead of screenshots :)
<|||||>Hi @patil-suraj,
I tried loading the 'state_dict' item into the model, and I am getting the following error: "Unexpected key(s) in state_dict: "decoder.block.0.layer.1.EncDecAttention.relative_attention_bias.weight".".
The code is:
```python
def remove_prefix(text: str, prefix: str):
    if text.startswith(prefix):
        return text[len(prefix) :]
    return text  # or whatever

ckpt = {remove_prefix(k, "model."): v for k, v in ckpt['state_dict'].items()}
model.load_state_dict(ckpt)
model.save_pretrained("/content/model")
```
<|||||>Hey @adithyaan-creator,
You can ignore this warning, see: https://github.com/huggingface/transformers/pull/9231<|||||>@patrickvonplaten the code that was merged wasn't working out, but since the weight was unnecessary I deleted the weight layer and it worked out.
`del ckpt["decoder.block.0.layer.1.EncDecAttention.relative_attention_bias.weight"]`
Thanks.<|||||>@adithyaan-creator Hi, I am having the same issue now. Could you elaborate on how and where you added `del ckpt["decoder.block.0.layer.1.EncDecAttention.relative_attention_bias.weight"]`? Thank you very much! |
transformers | 9,208 | closed | [docs] outline sharded ddp doc | This PR provides an initial outline of the HF Trainer integration, starting with ZeRO. We have fairscale's Sharded optimizer/gradients supported already and deepspeed is coming.
We won't merge this until fairscale has merged all the required fixes and released a new version, but I thought it'd be good to get the doc going so it's ready when fairscale is ready.
I hope to submit a deepspeed integration shortly as well, so we will extend the doc with deepspeed info then. Edit: https://github.com/huggingface/transformers/pull/9211
@sgugger | 12-19-2020 05:34:13 | 12-19-2020 05:34:13 | |
transformers | 9,207 | closed | Saving Pretrained Tokenizer | I created a custom `tokenizers.Tokenizer` and saved it as follows
```
tokenizer.model.save("./tokenizer")
tokenizer.save("./tokenizer.json")
```
This produces 3 files, merges.txt, vocab.json and tokenizer.json
Then I created a `transformers.RobertaTokenizerFast` and saved it to the same folder
```
tokenizer = RobertaTokenizerFast.from_pretrained("./tokenizer")
tokenizer.save_pretrained("./tokenizer")
```
This adds `special_tokens_map.json` and `tokenizer_config.json`
I then saved it to another folder to simulate what happens when I train my model
```
tokenizer.save_pretrained("./model")
tokenizer = RobertaTokenizerFast.from_pretrained("./model")
```
What I noticed was that `tokenizer_config.json` contains a key `name_or_path` which still points to `./tokenizer`, so what seems to be happening is that `RobertaTokenizerFast.from_pretrained("./model")` is loading files from two places (`./model` and `./tokenizer`).
I'm not sure if this is expected; it seems that the tokenizer_config.json should be updated in save_pretrained, and tokenizer.json should be saved with it?
Or perhaps this is just an issue because I'm training my tokenizer in a subdirectory of the model folder?
| 12-19-2020 04:39:18 | 12-19-2020 04:39:18 | I realised that
`tokenizer.model.save("./tokenizer")`
is unnecessary. I've started saving only the `tokenizer.json`, since this contains not only the merges and vocab but also the pipeline.
And I noticed that `tokenizer.save_pretrained()` has a parameter `legacy_format` which defaults to True. When I set it to False it properly round-trips (i.e. it saves out the unified tokenizer.json rather than the model files, merges.txt and vocab.json).
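For illustration, the non-legacy round trip described above looks roughly like this (the paths are placeholders and `tokenizer` is the one created earlier):
```python
from transformers import RobertaTokenizerFast

tokenizer.save_pretrained("./model", legacy_format=False)    # writes the unified tokenizer.json
tokenizer = RobertaTokenizerFast.from_pretrained("./model")  # loads back from tokenizer.json
```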
The only issue now is if I create a trainer
```
trainer = Trainer(
    model=model,
    tokenizer=tokenizer,
    args=training_args,
    train_dataset=tokenized_datasets["train"],
    eval_dataset=tokenized_datasets["validation"],
    data_collator=data_collator,
)

logger.info("*** Train ***")
trainer.train(model_path=model_path)
trainer.save_model()
```
The last line doesn't accept kwargs so it saves the tokenizer in the legacy format.
A workaround is to do it manually, i.e.
```
trainer = Trainer(
    model=model,
    # tokenizer=tokenizer,
    args=training_args,
    train_dataset=tokenized_datasets["train"],
    eval_dataset=tokenized_datasets["validation"],
    data_collator=data_collator,
)

logger.info("*** Train ***")
trainer.train(model_path=model_path)
trainer.save_model()
tokenizer.save_pretrained(output_dir, legacy_format=False)
```
The only issue is that I don't see a workaround to ensure the checkpoints are created using `legacy_format=False`.
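A hedged sketch of one possible workaround (the callback below is illustrative, not an official usage pattern - it just re-saves the tokenizer in the new format into each checkpoint folder right after the Trainer writes it):
```python
from transformers import TrainerCallback

class SaveFastTokenizerCallback(TrainerCallback):
    # Hypothetical helper: re-save tokenizer.json into every checkpoint directory.
    def __init__(self, tokenizer):
        self.tokenizer = tokenizer

    def on_save(self, args, state, control, **kwargs):
        # Assumes the default "checkpoint-<step>" folder naming convention.
        checkpoint_dir = f"{args.output_dir}/checkpoint-{state.global_step}"
        self.tokenizer.save_pretrained(checkpoint_dir, legacy_format=False)

# usage sketch: trainer.add_callback(SaveFastTokenizerCallback(tokenizer))
```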
<|||||>This issue has been automatically marked as stale and been closed because it has not had recent activity. Thank you for your contributions.
If you think this still needs to be addressed please comment on this thread. |
transformers | 9,206 | closed | DataTrainingArguments: __init__() got an unexpected keyword argument 'evaluate_during_training' | ## Environment info
- `transformers` version: 4.0
- Platform: Google Colab
- Python version: 3.6.9
- PyTorch version (GPU?): 1.7
- Tensorflow version (GPU?):
- Using GPU in script?:
- Using distributed or parallel set-up in script?:
### Who can help
@sgugger
## Information
Model I am using (Bert, XLNet ...): Distilbert
The problem arises when using:
* [x] the official example scripts: (give details below)
* [ ] my own modified scripts: (give details below)
https://colab.research.google.com/github/huggingface/blog/blob/master/notebooks/trainer/01_text_classification.ipynb
The tasks I am working on is:
* [ ] an official GLUE/SQUaD task: (give the name)
* [ ] my own task or dataset: (give details below)
## To reproduce
Steps to reproduce the behavior:
1. Run the HF example Notebook here: https://colab.research.google.com/github/huggingface/blog/blob/master/notebooks/trainer/01_text_classification.ipynb
That Notebook is linked from here: https://huggingface.co/transformers/examples.html
2. At this line:
```python
training_args = TrainingArguments(
    output_dir="./models/model_name",
    overwrite_output_dir=True,
    do_train=True,
    do_eval=True,
    per_gpu_train_batch_size=32,
    per_gpu_eval_batch_size=128,
    num_train_epochs=1,
    logging_steps=500,
    logging_first_step=True,
    save_steps=1000,
    evaluate_during_training=True,
)
```
an error is raised:
```
TypeError Traceback (most recent call last)
<ipython-input-6-e83ba093226a> in <module>()
14 logging_first_step=True,
15 save_steps=1000,
---> 16 evaluate_during_training=True,
17 )
TypeError: __init__() got an unexpected keyword argument 'evaluate_during_training'
```
## Expected behavior
The HF example Notebook should complete successfully. | 12-19-2020 03:12:09 | 12-19-2020 03:12:09 | Indeed, will fix that notebook on Monday. Thanks for reporting!<|||||>Note that this notebook is quite old, so you might prefer the most recent one [here](https://github.com/huggingface/notebooks/blob/master/examples/text_classification.ipynb). I fixed the issue by pinning the transformers version inside it, we will keep it for legacy reasons but not update it.<|||||>Same problem on this notebook: [https://colab.research.google.com/drive/1-JIJlao4dI-Ilww_NnTc0rxtp-ymgDgM?usp=sharing](https://colab.research.google.com/drive/1-JIJlao4dI-Ilww_NnTc0rxtp-ymgDgM?usp=sharing).
This notebook is referenced here: https://huggingface.co/transformers/training.html#additional-resources.
Also, if I understand correctly, `evaluate_during_training=True` should be replaced with `evaluation_strategy='epoch'`, and then it will work fine. <|||||>I think this notebook is community-contributed (it's not in our repos) so it's up to the person who wrote it to port it to the most recent version of transformers. |
transformers | 9,205 | closed | [model_utils] very slow model instantiation | For some reason I'm noticing a very slow model instantiation time.
For example to load `shleifer/distill-mbart-en-ro-12-4` it takes
* 21 secs to instantiate the model
* 0.5sec to `torch.load` its weights.
If I'm not changing how the model is created and want to quickly fast-forward to the area I'm debugging, how could these slow parts be cached and not rebuilt anew again and again?
But it also looks like we are doing a completely wasteful init_weights operation, whose result is immediately overwritten with the pretrained model weights (https://github.com/huggingface/transformers/issues/9205#issuecomment-748741195) (for the pre-trained model use case).
(I initially made a mistake and thought that it was `torch.load` that had an issue, but it's `cls(config, *model_args, **model_kwargs)`) - thank you, @sgugger - so this post has been edited to reflect reality. So if you're joining later you can skip the comments up to https://github.com/huggingface/transformers/issues/9205#issuecomment-748722644 and continue from there)
@patrickvonplaten, @sgugger, @LysandreJik | 12-19-2020 02:44:41 | 12-19-2020 02:44:41 | Doesn't that script also loads and preprocess the data? From what you're reporting, I don't interpret this as "transformers takes a long time to load the model" (since the line that does that takes the same time as a torch load) but as "stuff that happens in that script before the model loading takes a lot of time" (which is probably data preprocessing + the 3s to import transformers if TF is in your env). Or am I missing something?
<|||||>Perhaps my first post is confusing, what I did is bracketing the `torch.load` call in modeling_utils.py:
```
start_time = time.time()
state_dict = torch.load(resolved_archive_file, map_location="cpu")
end_time = time.time() - start_time
```
So all the other stuff isn't being measured, just the `torch.load` call. <|||||>Ah, I understand better. I don't think your comparison is fair: `AutoModel.from_pretrained` does two things: creating a model and filling it with the weights. From a small experiment in timing on my side, I believe all the time is spent in the model creation. So you should compare the timing of creating the model and loading the weights inside to have something that's apple to apple.<|||||>I removed the 2nd part that was showing the same issue from a different angle, as it appears to just confuse and isn't contributing to understanding the issue at hand.
There is just `state_dict = torch.load(resolved_archive_file, map_location="cpu")` call - and nothing else. On its own:
`python -c "import torch; torch.load('/hf/transformers-master/data/distill-mbart-en-ro-12-4/pytorch_model.bin')"`
it takes ~1s, the exact same call inside `modeling_utils` takes 22+ secs.<|||||>OK, somehow I made a mistake and was taking the snapshot of startime before `model = cls(config, *model_args, **model_kwargs)` and not `torch.load()` - my apologies :( and thank you for double checking my invalid report.
```
import time
t0 = time.time()
model = cls(config, *model_args, **model_kwargs)
t1 = time.time()
state_dict = torch.load(resolved_archive_file, map_location="cpu")
t2 = time.time()
print(f"cls init { round(t1-t0, 4)}")
print(f"load { round(t2-t1, 4)}")
import sys
sys.exit(0)
```
```
cls init 21.2055
load 0.5074
```
So it's setting up the model that takes so long, just as you said.
Can this somehow be sped up? I was integrating deepspeed and re-running the same command repeatedly and 23 extra secs of waiting to just discover that something is off was very painful for debugging. All the failures happened at much later stages. I worked around it it by switching to a tiny model, but even that takes some secs.
Can we think of a way to make an image and load it rather than rebuilding the model from scratch? So we torch.load the weights, but also cache the model image itself and load it too, rather then create it anew. It seems to be so wasteful and slow if I'm not debugging the model creation but say tuning up something in the trainer and I want the other parts to load blazingly fast and get me to the point of interest quickly. What would be the best way to approach such need?
<|||||>So doing profiling on model instantiation code it can be seen that `_init_weights` is where some 75% of that slowdown happens
```
ncalls tottime percall cumtime percall filename:lineno(function)
354 18.942 0.054 18.942 0.054 {method 'normal_' of 'torch._C._TensorBase' objects}
225 2.286 0.010 2.286 0.010 {method 'uniform_' of 'torch._C._TensorBase' objects}
```
So we are completely wasting time doing init weights, since we are immediately replacing them (with the exception of `SinusoidalPositionalEmbedding`, which does not get loaded from the pretrained model).
If you prefer the visual version:

Chances are that model init needs to be made context aware and not init weights which will be immediately replaced. Thoughts?
That would make `transformers` so much faster to start! (e.g. think the model pages website which takes forever to load a model).
The profiling was done with:
```
# prep
pip install graphviz gprof2dot
cat <<EOT > prog
from transformers import AutoModelForSeq2SeqLM
AutoModelForSeq2SeqLM.from_pretrained("sshleifer/distill-mbart-en-ro-12-4")
EOT
# text profile
USE_TF=0 PYTHONPATH=src python -m cProfile -s tottime prog > profile.txt
head -10 profile.txt
# visual profile
USE_TF=0 PYTHONPATH=src python -m cProfile -o profile.pstats prog
gprof2dot -f pstats profile.pstats | dot -Tsvg -o callgraph.svg
display callgraph.svg
```
<|||||>If we see a significant gain in loading time, maybe it's worth to explore a way to only apply `init_weights` on missing layers. Not sure how easy it would be to implement it though...
Maybe a `init_weights` function arg in `__init__` might make sense:
```python
model = cls(config, init_weights=False, *model_args, **model_kwargs) # don't call init_weights, but initialize all weights to zero because it's much faster
# load weights into model and get missing layers
# init missing layers
```<|||||>Yeah Patrick's suggestion is probably the best, though I'm not sure it can easily be achieved in the current API. Note that this is only one slowdown at the beginning of training, so I don't think this should be high priority.<|||||>I totally get it that it's not high priority, since most people don't care for a slow start when they run it non-stop for hours - it only affects people who need a quick start - which is the case when debugging something or as I suggested the demo function on the model pages which takes a really long time to load.
In the case of BART, its deterministic segments do the init internally, so it's enough to just monkeypatch as a proof of concept:
```
# modeling_utils.py::from_pretrained
init_weights_orig = PreTrainedModel.init_weights
def init_weights_pretrained(self):
    # self.apply(self._init_weights)
    if self.config.pruned_heads: self.prune_heads(self.config.pruned_heads)
    self.tie_weights()
PreTrainedModel.init_weights = init_weights_pretrained
model = cls(config, *model_args, **model_kwargs)
PreTrainedModel.init_weights = init_weights_orig
```
and this command:
```
PYTHONPATH=../../src USE_TF=0 time python -c 'from transformers import AutoModelForSeq2SeqLM; AutoModelForSeq2SeqLM.from_pretrained("sshleifer/distill-mbart-en-ro-12-4")'
```
goes from 25sec to 8secs. The instantiation goes from 22 secs to 5 secs.
There are a few `uniform_` calls left which account for 2.3 extra secs; if those are shaved off we should be down to 2-3 secs (from 22!).
I quickly checked that the core functions normally - same scores - well, I did just one finetune_trainer run.
One way is to solve this as @patrickvonplaten suggested, and I'm also thinking of changing the design a bit. So that each model has a normal `init_weights` and `init_weights_pretrained` - then it's very clear to the developer what goes where and then simply invoke one or the other depending on the context. And then it's just a matter of choosing how to signal the context.
<|||||>I don't see how you could have an `init_weights_pretrained`: it depends on the checkpoint you pass: if you pass the checkpoint of a `BertModel` to `BertForMaskedLM`, you just have one bias to initialize (if weights are tied). But if you pass a checkpoint of a `BertForMaskedLM` checkpoint then you have nothing to initialize. And the same holds for every variant (which would have different specific weights to initialize in case of a pretrained model) so I don't really see how you can do this API-wise.
The only way I see through it is to allow the `init_weights` to get the list of model parameters to randomly initialize, but since we use the `apply` method afterward (and rely on it to get modules inside each model specific `_init_weights` method) I don't see how to use it properly. It would probably require some clever recursive method.
Again, lots of headaches and possibilities for errors for an end result that doesn't strike me as high priority.
> it only affects people who need a quick start - which is the case when debugging something or as I suggested the demo function on the model pages which takes a really long time to load.
It doesn't take 25 seconds on a tiny model, only a big one. So I'd suggest debugging on a tiny model :-)<|||||>Thank you both for entertaining possible approaches and suggesting that you are not quite seeing a smooth solution. I just don't know enough about all of it, so I'm surely missing on cases I haven't thought of, but somehow in my mind it looks simple. The devil is in the details.
> It doesn't take 25 seconds on a tiny model, only a big one. So I'd suggest debugging on a tiny model :-)
Unfortunately the tiny model approach doesn't work with debugging OOM in deepspeed, as its configuration correlates to the model size. I guess it's not special to deepspeed at all. So the tiny model trick works for checking mechanics (i.e. that the code compiles), but isn't helpful for OOM debug.<|||||>@patrickvonplaten, @sgugger, @LysandreJik - could we please revisit this - working on making t5-11b train was painful - it was taking really really really long time to init the model, just to drop it and replace with pre-trained weights. Transformers is mainly about pre-trained models, so perhaps this can be made somehow configurable?
We know when a pretrained model is loaded, so why not propagate that information and let the model know it's being loaded in pre-trained mode, so that it could skip any weight inits that are going to be replaced anyway?
And while we are at it, I don't suppose there is a way to involve more than one CPU core in loading the model? I guess that would be a question for pytorch.
Thank you!<|||||>I'm happy to add such a featurue. It should be feasible to only initialize those layers that are not in the saved `.pt` file.<|||||>Indeed, this would be a welcome feature, big models aren't going away.<|||||>This issue has been automatically marked as stale and been closed because it has not had recent activity. Thank you for your contributions.
If you think this still needs to be addressed please comment on this thread.<|||||>@patrickvonplaten, I should probably work on it - since it doesn't seem like you will have time any time soon.<|||||>It's on my To-Do List, but still don't think, I'll be able to take a look within the next 2,3 weeks - sorry :-/ If you find some time for this, it would be great<|||||>Ihave finetuned a longformer encoder decoder model, and trying to convert it into an api but model takes too long to load that api throws a not responding error.

Kindly if anyone can guide me on how can I reduce the time for the model to load.
Thank You in advance.<|||||>Hello @AyeshaSarwar,
could you please use the forum: https://discuss.huggingface.co/ instead for such questions? We don't support Flask compatibility in `transformers`. Please keep in mind that the issues are mainly used for issues related to just `transformers`.
Thanks<|||||>Im on the same boat as @stas00 . I understand that the code need to maintain a wider compatibility across the oceans of models, but people needs a working workaround before an elegant solution born into reality. I believe as huggingface slowly graduating from pure research field, more and more people are being hurt by the tremendous model initialization time.
Hoping for a change<|||||>@DeXtmL, this thread is 2 years old - the particular problem I raised in this Issue has been solved a long time ago. The model is no longer being init'ed twice.
If you feel something is still slow please start a new Issue.
thank you. |
transformers | 9,204 | closed | Load saved Pytorch model into Tensorflow or convert from Pytorch model to TF | Hi,
Thanks for this awesome framework!
I have trained and saved an XLMRoberta model in PyTorch and I'm wondering if there is any way I can load the model into the TFRobertaForSequenceClassification class, or if there are ways to convert the checkpoint to TensorFlow checkpoints so that it can be loaded by TF2.
I came across this file https://github.com/huggingface/transformers/blob/master/src/transformers/convert_pytorch_checkpoint_to_tf2.py
and I'm wondering if there are any docs or instructions to use the script.
Thanks! | 12-19-2020 01:00:51 | 12-19-2020 01:00:51 | Hey @Jess0-0,
You can load TF checkpoints into PT and vice-versa via:
`XLMRoberta.from_pretrained(...., from_tf=True)`
or
`TFXLMRoberta.from_pretrained(...., from_pt=True)`.
Since `XLMRobertaForSequenceClassification` is just an alias of `RobertaForSequenceClassification` there should be no problem in doing
```python
TFRobertaForSequenceClassification.from_pretrained("<your/path/to/saved/xlm/roberta/pytorch/dir>", from_pt=True)
``` |
transformers | 9,203 | closed | [finetune trainer] better logging and help | As a follow up to this [thread](https://discuss.huggingface.co/t/summarization-is-finetune-trainer-py-accepting-length-arguments-correctly/2879/) this PR:
* documents that `--val_max_target_length` is also used during `generate`
* disambiguates `use_task_specific_params` logger so that it's clear that it dumps just the initial params and that those could be overridden by user's cl args
@sgugger | 12-19-2020 00:42:43 | 12-19-2020 00:42:43 | |
transformers | 9,202 | closed | [wanted] explicit docs for inherited methods | # 🚀 Feature request
The HF approach is to unroll most of the code for the ease of understanding, but somehow this is not the case with docs.
If possible, could we explicitly add documentation for inherited methods in the specific model documentation page?
e.g. why https://huggingface.co/transformers/model_doc/t5.html doesn't have `T5ForConditionalGeneration.generate` documented - sure one eventually figures out it's https://huggingface.co/transformers/main_classes/model.html#transformers.generation_utils.GenerationMixin.generate but why make user's life so difficult when the docs are autogenerated anyway.
In the worst case there could be an entry for `T5ForConditionalGeneration.generate` with the link to https://huggingface.co/transformers/main_classes/model.html#transformers.generation_utils.GenerationMixin.generate, but that is not ideal since one has to leave the main doc of the model, which makes it much harder to jump around its different parts.
IMHO, it'd greatly improve user's experience to have all the functionality documentation supported by a model in one page of that model.
And xrefs are super-useful too, e.g. [t5 pre-amble](https://huggingface.co/transformers/model_doc/t5.html#overview) mentions `T5ForConditionalGeneration.generate `but it's not linked to anywhere.
Thank you!
@sgugger, @LysandreJik | 12-18-2020 23:40:29 | 12-18-2020 23:40:29 | I don't agree on an always-document-everything basis, as the documentation page of each model is already quite long (which may be the reason not a lot of people seem to be reading them...) In the case of generate, since there is no link to the part where the generate method is documented, we could add the documentation of the generate method there, but in the case of tokenizers for instance, I prefer our approach where we document just the main methods in the subclasses and point to the superclass for less important ones.
Which again also goes in the direction of documenting generate in the models that implement it, so why not go with it. I don't think it should be harder than adding `generate` to the members field, though sphinx might not like `GenerationMixin`. If anyone wants to give it a try, I'll happily review a PR.<|||||>`generate` was just an example, so let's not solve just a sub-case if possible.
As mentioned in OP if you feel that including full docs is too much - a short entry that links to the super class or mixin's method is a satisfactory solution, but the user shouldn't hunt for where that entry might be elsewhere.
Also for methods such as `T5ForConditionalGeneration.generate` mentioned in free prose it'd be awesome to have it linked to the right doc entry. I understand it should happen automatically with sphinx if there is an actual entry to link to.
> the documentation page of each model is already quite long (which may be the reason not a lot of people seem to be reading them...)
I don't know what you mean by "not a lot of people seem to be reading them" - do you imply that users ask a lot of questions that are already answered by the existing documentation or do you have some other way to measure the "not a lot" part?
<|||||>> I don't know what you mean by "not a lot of people seem to be reading them" - do you imply that users ask a lot of questions that are already answered by the existing documentation
Yes, I was implying exactly that. For the general rule, I'd keep it to: main parts of the API of a class should be documented in which class (within reasons) and more minor parts should be documented once in the superclass/mixin with a link from all the subclasses (like what is done for `Tokenizer`)
<|||||>I agree the process is long an feels like a lot of unnecessary steps.<|||||>>> I don't know what you mean by "not a lot of people seem to be reading them" - do you imply that users ask a lot of questions that are already answered by the existing documentation
> Yes, I was implying exactly that.
Do we have a way to reach out and ask why users don't peruse the docs before asking questions?
Is this:
* an issue of the quality/readability/navigationability of the docs
* users are just lazy (as a virtue)
* users don't know that there are docs to search through
* the search engine doesn't have the smarts to show most relevant info and shows too many irrelevant hits? many docs search engines are pretty crappy in my experience. It's usually the best to use google with:
`"my search query" site:https://huggingface.co/transformers/`
to find the best information. (swap in whatever other docs site you need, I wasn't singling out transformers, it was just an example)
> For the general rule, I'd keep it to: main parts of the API of a class should be documented in which class (within reasons) and more minor parts should be documented once in the superclass/mixin with a link from all the subclasses (like what is done for Tokenizer)
That works 100% for me.
What needs to be done to make this happen? Definitely no rush.<|||||>I don't know whether this helps, but when I managed a huge open source project many years back I trained our users to per-use docs by almost never replying with the repeat information they requested but with a direct link to where it was in the documentation. Over time it was very clear to the community that the information is available in the docs and less and less questions were asked and more and more one line answers with the link to the right info were posted by various users of that community.
When one answers in clear text it signals users that the information is in your brain and not outside of it.
That is just how my experience had been, YMMV.<|||||>This issue has been automatically marked as stale and been closed because it has not had recent activity. Thank you for your contributions.
If you think this still needs to be addressed please comment on this thread.<|||||>@sgugger, how do we resolve this - please let me know if I can be of help.<|||||>I think we should agree to disagree on this point. I didn't see anything in the last survey that showed a strong desire from the user to have the documentation of each model page be even longer, quite the opposite, so I stand by what I said with the generic methods like `generate` be documented in the base classes unless they have some model-specific behavior that needs to override that documentation. It's the same for the tokenizer encode and call methods.<|||||>I think there are two different possibilities here:
1. duplicating content - you disagree with - OK
2. having a placeholder with a link to the main location of this entry
For example, when documentation refers to `T5ForConditionalGeneration.generate` it could:
1. link directly to the general `generate` doc entry
2. link to an entry `generate` in the t5 doc which will link to the main location of this entry
Why make the user work extra hard searching for something, when it can be automated and get the user what they need in 1 or 2 quick clicks.
I'm not asking for a hypothetical nice-to-have feature, I often find myself frustrated when I can't quickly link to a method when I address someone's Issue and have to search for it. So it's a very selfish request. And I'm willing to work for it, since it'll save my time and minimize frustration where it doesn't have to happen.<|||||>> For example, when documentation refers to T5ForConditionalGeneration.generate it could:
> 1. link directly to the general generate doc entry
I am not sure I know how to do that in sphinx, but if it's possible, I'm all for it!
> 2. link to an entry generate in the t5 doc which will link to the main location of this entry
This one will require writing a custom docstring for the generate method of T5. Probably more possible and can be automated with a decorator to use on the classes with a generate method I think.<|||||>Great. I will research it then and report back if I find a way.
Thank you for your feedback, @sgugger |
transformers | 9,201 | closed | when to use sortish sampler | Hi,
I would appreciate adding to the documentation on when to use the sortish sampler, which platforms this works on, and how much it impacts speed. This is related to the seq2seq folder of huggingface, for Seq2seqDataset.
thanks
| 12-18-2020 23:20:37 | 12-18-2020 23:20:37 | Is there any update on this?
Seems like it allows dynamic batching. <|||||>This issue has been automatically marked as stale and been closed because it has not had recent activity. Thank you for your contributions.
If you think this still needs to be addressed please comment on this thread. |
transformers | 9,200 | closed | Beam search fails when using model parallelism | ## Environment info
- `transformers` version: 4.1.1
- Platform: Linux-4.4.0-194-generic-x86_64-with-Ubuntu-18.04-bionic
- Python version: 3.6.9
- PyTorch version (GPU?): 1.7.1 (True)
- Tensorflow version (GPU?): not installed (NA)
- Using GPU in script?: Yes, two GTX 1080, on a single node
- Using distributed or parallel set-up in script?: Using model parallelism through `model.parallelize()`
### Who can help
@LysandreJik
@alexorona
## Information
Model I am using (Bert, XLNet ...): GPT2
The problem arises when using:
* [ ] the official example scripts:
* [x] my own modified scripts:
The tasks I am working on is:
* [ ] an official GLUE/SQUaD task:
* [ ] my own task or dataset:
## To reproduce
The recent (and awesome!) model `parallelize()` doesn't seem to work with beam search decoding at the moment. The behavior can be reproduced on the official `huggingface/transformers-pytorch-gpu:4.1.1` docker image by running the following (on a machine with multiple GPUs):
```python
import transformers
tokenizer = transformers.GPT2Tokenizer.from_pretrained("gpt2")
model = transformers.GPT2LMHeadModel.from_pretrained("gpt2")
model.parallelize()
input_ids = tokenizer.encode("This is a test", return_tensors="pt").to("cuda:0")
model.generate(input_ids, num_beams=2)
```
This raises the following error:
```
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "/usr/local/lib/python3.6/dist-packages/torch/autograd/grad_mode.py", line 26, in decorate_context
return func(*args, **kwargs)
File "/usr/local/lib/python3.6/dist-packages/transformers/generation_utils.py", line 612, in generate
**model_kwargs,
File "/usr/local/lib/python3.6/dist-packages/transformers/generation_utils.py", line 1088, in beam_search
model_kwargs["past"] = self._reorder_cache(model_kwargs["past"], beam_idx)
File "/usr/local/lib/python3.6/dist-packages/transformers/generation_utils.py", line 229, in _reorder_cache
return tuple(layer_past.index_select(1, beam_idx) for layer_past in past)
File "/usr/local/lib/python3.6/dist-packages/transformers/generation_utils.py", line 229, in <genexpr>
return tuple(layer_past.index_select(1, beam_idx) for layer_past in past)
RuntimeError: Input, output and indices must be on the current device
```
## Expected behavior
The expected behavior is to not raise an error, but instead correctly return the beam search decoding.
| 12-18-2020 23:08:13 | 12-18-2020 23:08:13 | As the trace suggests, the error seem to come from the `_reorder_cache` method in `generation_utils.py`. Since the model is parallelized among multiple devices, it fails since the device of `beam_idx` and `layer_past` don't match for all layers.
I just tried to modify line 229 in `generation_utils.py` to:
```
return tuple(layer_past.index_select(1, beam_idx.to(layer_past.device)) for layer_past in past)
```
which seems to work.
I'm happy to file a PR with this change if you approve. Please let me know if there is anything I should be aware of, or pay extra attention to.<|||||>FWIW, this fix doesn't currently work for T5, as the fix to `_reorder_cache` is not reflected in the `modeling_t5.py` file. Following the above, changing [this line](https://github.com/huggingface/transformers/blob/fa84540e98a6af309c3007f64def5011db775a70/src/transformers/models/t5/modeling_t5.py#L1679) to `layer_past_state.index_select(0, beam_idx.to(layer_past_state.device)),` appears to fix it.
@patrickvonplaten <|||||>@OyvindTafjord - would you mind opening a new PR for it? :-) |
transformers | 9,199 | closed | [t5 doc] typos | This PR fixes a few runaway backticks.
Sylvain, why do we not have documentation for `T5ForConditionalGeneration.generate`? This doc is trying to link to it, but there is no such entry in https://huggingface.co/transformers/model_doc/t5.html
@sgugger
| 12-18-2020 22:46:08 | 12-18-2020 22:46:08 | |
transformers | 9,198 | closed | [run_glue] add speed metrics | This PR starts to sync with recent changes in trainer+finetune_trainer.py
* train: (sync with finetune_trainer):
- prints and saves train speed metrics - needed for benchmarking
- saves the state,
* eval: sorts metrics logging info
@sgugger | 12-18-2020 21:50:34 | 12-18-2020 21:50:34 | |
transformers | 9,197 | closed | [RAG] Add Ray implementation for distributed retrieval | # What does this PR do?
This PR adds a new distributed retriever implementation for RAG built on Ray, as an alternative to the current retriever implementation that uses torch.distributed. With Ray it's possible to load the index on multiple processes instead of just the rank 0 training worker, allowing fine tuning to scale out better to multiple GPUs, and also allowing the index to potentially be fit in GPU memory. This also removes a core dependency on Pytorch, allowing a Tensorflow implementation of `finetune.py`.
This PR also makes changes to support finetune.py with Pytorch Lightning >v1.0.
A benchmark of Pytorch distributed retrieval vs. Ray distributed retrieval

## Implementation Details
In the current Pytorch retrieval implementation, the index is loaded once on just the rank 0 training worker. Training worker 0 gathers the inputs from all other workers, performs the index lookup, and scatters the results back to the other workers.

With the Ray implementation, the index is loaded on *separate* processes, which are referred to as Ray actors. Each training worker randomly selects a retrieval actor to query for documents and Ray handles all the communication between the processes. Because the index can be loaded in *multiple* processes, training can scale up since no synchronization needs to happen for the index lookup.

Note that Pytorch Lightning is still handling distributed *training*, but Ray manages distributed *retrieval*. Because PTL calls the entire training script under the hood multiple times, we have to use Ray's named actors feature (https://docs.ray.io/en/master/actors.html?highlight=named%20actors#named-actors), allowing the retrieval actors to be referenced by all training processes. The use of named actors is necessitated by how PTL handles distributed training, and a simpler approach could probably be used for a Tensorflow implementation.
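For readers unfamiliar with the pattern, here is a minimal, illustrative sketch of Ray named actors - the class and actor names below are assumptions for illustration, not the PR's actual code:
```python
import random
import ray

@ray.remote
class RetrievalActor:
    # Each actor process holds its own copy of a (toy) document index.
    def __init__(self, documents):
        self.documents = documents

    def retrieve(self, query):
        # Stand-in for the real nearest-neighbor index lookup.
        return [doc for doc in self.documents if query in doc]

ray.init()

# Named actors can be looked up by name from any process in the same Ray cluster,
# which is what lets every training process reach the shared pool of retrievers.
actors = [
    RetrievalActor.options(name=f"retrieval_worker_{i}").remote(["rag paper", "ray docs"])
    for i in range(2)
]

# Inside a training worker: grab a handle by name and query a random retriever.
handle = ray.get_actor(f"retrieval_worker_{random.randrange(2)}")
print(ray.get(handle.retrieve.remote("ray")))
```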
## Testing Strategy
Unit tests were added to `test_distributed_retriever.py`. Note that the local Ray cluster for the tests had to be started with `local_mode=True` because the test file modifies `sys.path` and these changes are not propagated to remote processes. See https://stackoverflow.com/questions/54338013/parallel-import-a-python-file-from-sibling-folder for more info.
<!--
Congratulations! You've made it this far! You're not quite done yet though.
Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution.
Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change.
Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost.
-->
<!-- Remove if not applicable -->
Fixes # (issue)
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to the it if that's the case.
- [x] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/master/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/master/docs#writing-source-documentation).
- [x] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors which may be interested in your PR.
<!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @
If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**.
Please tag fewer than 3 people.
albert, bert, XLM: @LysandreJik
GPT2: @LysandreJik, @patrickvonplaten
tokenizers: @mfuntowicz
Trainer: @sgugger
Benchmarks: @patrickvonplaten
Model Cards: @julien-c
Translation: @sshleifer
Summarization: @sshleifer
examples/distillation: @VictorSanh
nlp datasets: [different repo](https://github.com/huggingface/nlp)
rust tokenizers: [different repo](https://github.com/huggingface/tokenizers)
Text Generation: @patrickvonplaten, @TevenLeScao
Blenderbot, Bart, Marian, Pegasus: @sshleifer
T5: @patrickvonplaten
Rag: @patrickvonplaten, @lhoestq
EncoderDecoder: @patrickvonplaten
Longformer, Reformer: @patrickvonplaten
TransfoXL, XLNet: @TevenLeScao, @patrickvonplaten
examples/seq2seq: @sshleifer
examples/bert-loses-patience: @JetRunner
tensorflow: @jplu
examples/token-classification: @stefan-it
documentation: @sgugger
-->
| 12-18-2020 19:33:10 | 12-18-2020 19:33:10 | cc @sgugger @patrickvonplaten @LysandreJik @lhoestq <|||||>Nice, good to merge then!<|||||>Awesome, thank you so much for the reviews @lhoestq @patrickvonplaten -- happy holidays!<|||||>Thanks guys!<|||||>@amogkam @patrickvonplaten I need some help to implement an end-to-end retrieval training feature for the rag with Ray.
How can I run document encoding and indexing with an updated doc-encoder (context encoder network that kept frozen in the original RAG) using a Ray actor separated from the main training process?
How can I access the document index inside Ray actors during the training incase I want to update the index, say in every 5000 steps.
<|||||>@shamanez could you open a new issue to track this?<|||||>@richardliaw
I have already opened one a few weeks ago. Please refer to this [issue](https://github.com/huggingface/transformers/issues/9646)
I added a new issue explaining the exact problem in [this](https://github.com/huggingface/transformers/issues/10135) |
transformers | 9,196 | closed | Add timing inside Trainer | # What does this PR do?
Add timing reports for training/evaluation and test inside the Trainer. Also, change the default repr of `TrainingArguments` to avoid printing the deprecated arguments.
There is a breaking change in this PR: the output of `Trainer.train` gains a new field `metrics`, so the length of the namedtuple changes. I don't think it's too bad since all scripts and examples I've seen never store the result of this `train` method. After discussion with @LysandreJik we proposed to merge this breaking change and revert it before the next release if users complain. | 12-18-2020 18:50:56 | 12-18-2020 18:50:56 | may I ask for one more bit, while you're at it - sorting the metrics before printing them out?
I added this already into the final json file writing, but it'd make it easier to read the info logs.
Now we have:
```
2020-12-18 11:51:55 | INFO | __main__ | val_loss = 368.4116
2020-12-18 11:51:55 | INFO | __main__ | val_bleu = 26.3465
2020-12-18 11:51:55 | INFO | __main__ | val_gen_len = 31.2
2020-12-18 11:51:55 | INFO | __main__ | val_runtime = 22.1214
2020-12-18 11:51:55 | INFO | __main__ | val_samples_per_second = 9.041
2020-12-18 11:51:55 | INFO | __main__ | epoch = 1.0
2020-12-18 11:51:55 | INFO | __main__ | val_n_objs = 200
```
as compared to sorted:
```
2020-12-18 11:51:55 | INFO | __main__ | epoch = 1.0
2020-12-18 11:51:55 | INFO | __main__ | val_bleu = 26.3465
2020-12-18 11:51:55 | INFO | __main__ | val_gen_len = 31.2
2020-12-18 11:51:55 | INFO | __main__ | val_loss = 368.4116
2020-12-18 11:51:55 | INFO | __main__ | val_n_objs = 200
2020-12-18 11:51:55 | INFO | __main__ | val_runtime = 22.1214
2020-12-18 11:51:55 | INFO | __main__ | val_samples_per_second = 9.041
```
or I could make a PR later if it's too unrelated... I probably should do that and not waste your time.
Thanks. |
transformers | 9,195 | closed | Error "if input.dim() == 2 and bias is not None" | Pytorch version: pytorch-1.7.1-py3.8_cuda11.0.221_cudnn8.0.5_0
Transformer version: 4.0.0
Code:
```
import torch
import torch.nn as nn
from torch.utils.data import TensorDataset, DataLoader, RandomSampler, SequentialSampler
from transformers import AutoModel, BertTokenizerFast
class BERT_Arch(nn.Module):
    def __init__(self, bert):
        super(BERT_Arch, self).__init__()
        self.bert = bert
        # dropout layer
        self.dropout = nn.Dropout(0.1)
        # relu activation function
        self.relu = nn.ReLU()
        # dense layer 1
        self.fc1 = nn.Linear(768, 512)
        # dense layer 2 (Output layer)
        self.fc2 = nn.Linear(512, 2)
        # softmax activation function
        self.softmax = nn.LogSoftmax(dim=1)

    # define the forward pass
    def forward(self, sent_id, mask):
        # pass the inputs to the model
        _, cls_hs = self.bert(sent_id, attention_mask=mask)
        x = self.fc1(cls_hs)
        x = self.relu(x)
        x = self.dropout(x)
        # output layer
        x = self.fc2(x)
        # apply softmax activation
        x = self.softmax(x)
        return x

# import BERT-base pretrained model
bert = AutoModel.from_pretrained('bert-base-uncased')
# pass the pre-trained BERT to our define architecture
model = BERT_Arch(bert)
# push the model to GPU
model = model.to(device)
# dataLoader for train set
train_dataloader = DataLoader(train_data, sampler=train_sampler, batch_size=batch_size)

for step, batch in enumerate(train_dataloader):
    batch = [r.to(device) for r in batch]
    sent_id, mask, labels = batch
    preds = model(sent_id, mask)
```
Error:
> ---------------------------------------------------------------------------
AttributeError Traceback (most recent call last)
<ipython-input-46-3656917b982b> in <module>
2 batch = [r.to(device) for r in batch]
3 sent_id, mask, labels = batch
----> 4 preds = model(sent_id, mask)
~/anaconda3/lib/python3.8/site-packages/torch/nn/modules/module.py in _call_impl(self, *input, **kwargs)
725 result = self._slow_forward(*input, **kwargs)
726 else:
--> 727 result = self.forward(*input, **kwargs)
728 for hook in itertools.chain(
729 _global_forward_hooks.values(),
<ipython-input-43-05830d6f294e> in forward(self, sent_id, mask)
21 #pass the inputs to the model
22 _, cls_hs = self.bert(sent_id, attention_mask=mask)
---> 23 x = self.fc1(cls_hs)
24 x = self.relu(x)
25 x = self.dropout(x)
~/anaconda3/lib/python3.8/site-packages/torch/nn/modules/module.py in _call_impl(self, *input, **kwargs)
725 result = self._slow_forward(*input, **kwargs)
726 else:
--> 727 result = self.forward(*input, **kwargs)
728 for hook in itertools.chain(
729 _global_forward_hooks.values(),
~/anaconda3/lib/python3.8/site-packages/torch/nn/modules/linear.py in forward(self, input)
91
92 def forward(self, input: Tensor) -> Tensor:
---> 93 return F.linear(input, self.weight, self.bias)
94
95 def extra_repr(self) -> str:
~/anaconda3/lib/python3.8/site-packages/torch/nn/functional.py in linear(input, weight, bias)
1686 if any([type(t) is not Tensor for t in tens_ops]) and has_torch_function(tens_ops):
1687 return handle_torch_function(linear, tens_ops, input, weight, bias=bias)
-> 1688 if input.dim() == 2 and bias is not None:
1689 # fused op is marginally faster
1690 ret = torch.addmm(bias, input, weight.t())
AttributeError: 'str' object has no attribute 'dim'
| 12-18-2020 17:53:38 | 12-18-2020 17:53:38 | Please try
`_, cls_hs = self.bert(sent_id, attention_mask=mask)`
to
`_, cls_hs = self.bert(sent_id, attention_mask=mask)[:2]`
or
`_, cls_hs = self.bert(sent_id, attention_mask=mask, return_dict=False)`<|||||>It fixes the issue. Thanks!<|||||>> Please try
> `_, cls_hs = self.bert(sent_id, attention_mask=mask)`
> to
> `_, cls_hs = self.bert(sent_id, attention_mask=mask)[:2]`
> or
> `_, cls_hs = self.bert(sent_id, attention_mask=mask, return_dict=False)`
You the man! I also got stuck in this issue for an hour, and your solution just fixes it perfectly!
Thanks man! |
transformers | 9,194 | closed | Loading MPNet from disc: ValueError: An instance of tokenizer class MPNetTokenizer cannot be converted in a Fast tokenizer instance. | ## Environment info
- `transformers` version: 4.1.1 (pip version)
- Platform: Ubuntu 20.04
- Python version: 3.7
- PyTorch version (GPU?): Pytorch 1.7 GPU
## Information
Hi,
thanks for adding MPNet. I got quite promising results when using it for generating sentence embeddings.
However, there is an issue when saving and loading the MPNet model (when using version 4.1.1 of transformers, installed via pip):
```python
from transformers import AutoTokenizer, AutoModel
local_dir = 'mpnet-model/'
model = AutoModel.from_pretrained('microsoft/mpnet-base')
tokenizer = AutoTokenizer.from_pretrained('microsoft/mpnet-base')
model.save_pretrained(local_dir)
tokenizer.save_pretrained(local_dir)
#Load tokenizer and model from dir
model = AutoModel.from_pretrained(local_dir)
# The following command will throw an exception
tokenizer = AutoTokenizer.from_pretrained(local_dir)
```
This leads to the following error:
```
File "/home/reimers/miniconda3/envs/sbert/lib/python3.7/site-packages/transformers/convert_slow_tokenizer.py", line 636, in convert_slow_tokenizer
f"An instance of tokenizer class {tokenizer_class_name} cannot be converted in a Fast tokenizer instance. "
ValueError: An instance of tokenizer class MPNetTokenizer cannot be converted in a Fast tokenizer instance. No converter was found. Currently available slow->fast convertors: ['AlbertTokenizer', 'BartTokenizer', 'BarthezTokenizer', 'BertTokenizer', 'CamembertTokenizer', 'DistilBertTokenizer', 'DPRReaderTokenizer', 'DPRQuestionEncoderTokenizer', 'DPRContextEncoderTokenizer', 'ElectraTokenizer', 'FunnelTokenizer', 'GPT2Tokenizer', 'HerbertTokenizer', 'LayoutLMTokenizer', 'LongformerTokenizer', 'LxmertTokenizer', 'MBartTokenizer', 'MobileBertTokenizer', 'OpenAIGPTTokenizer', 'PegasusTokenizer', 'ReformerTokenizer', 'RetriBertTokenizer', 'RobertaTokenizer', 'SqueezeBertTokenizer', 'T5Tokenizer', 'XLMRobertaTokenizer', 'XLNetTokenizer']
```
| 12-18-2020 17:12:29 | 12-18-2020 17:12:29 | Looking into it!
An easy fix for now would be the following:
```python
from transformers import AutoTokenizer, AutoModel
local_dir = 'mpnet-model/'
model = AutoModel.from_pretrained('microsoft/mpnet-base')
tokenizer = AutoTokenizer.from_pretrained('microsoft/mpnet-base')
model.save_pretrained(local_dir)
tokenizer.save_pretrained(local_dir)
#Load tokenizer and model from dir
model = AutoModel.from_pretrained(local_dir)
# The following command will throw an exception
tokenizer = AutoTokenizer.from_pretrained(local_dir, use_fast=False)
```
but that's more of a hack than really solving the underlying problem as it just loads the slow tokenizer. Will check what's going on!
<|||||>The PR linked to the issue adds the required converter so that the above code:
```python
from transformers import AutoTokenizer, AutoModel
local_dir = 'mpnet-model/'
model = AutoModel.from_pretrained('microsoft/mpnet-base')
tokenizer = AutoTokenizer.from_pretrained('microsoft/mpnet-base')
model.save_pretrained(local_dir)
tokenizer.save_pretrained(local_dir)
#Load tokenizer and model from dir
model = AutoModel.from_pretrained(local_dir)
# The following command will throw an exception
tokenizer = AutoTokenizer.from_pretrained(local_dir)
```
should work after merging. It's a bit weird that one can use a FastTokenizer if the model id is `microsoft/mpnet-base` but not if the model is serialized and loaded again...we should maybe think of a better way to prevent such issues in the future. Maybe just not allow one to add a "FastTokenizer" class without adding the corresponding converter? @sgugger @LysandreJik <|||||>Great, thanks for the quick fix. |
transformers | 9,193 | closed | Full rework of the TF input/output embeddings and bias resizing | # What does this PR do?
This PR 100% reworks the entire process of input/output and bias resizing. Now the exceptions are better handled including the names that now are always similar. The corresponding tests have also been entirely reworked and now have a better coverage of this feature.
This PR adds a small breaking change. Now the `get_input_embeddings` methods returns the weights and not anymore the embedding layer. | 12-18-2020 16:28:25 | 12-18-2020 16:28:25 | I haven't reviewed in detail yet, but just looking at the API with the number of things to change for ALBERT (and in terms of line of code) is a hard pass for me. Overriding the resize method as was done before was way easier, this adds too much complexity.<|||||>I understand that it is a big update. Nevertheless, the way it was done before didn't worked and was quite buggy (the tests basically was testing almost nothing) and to make the resizing properly working, these changes are necessary.<|||||>In all cases I'm open to any suggestion that will reduce the number of changes :)<|||||>@sgugger @LysandreJik I tried a new approach for the resizing that reduce a lot the changes in each model implementation, it is even much shorter than what we currently have in master. I have done my test only on ALBERT for now, can you recheck that file and let me know what you think about it.<|||||>Ok I will clarify this a bit more:
1. The fist most important issue in the current implementation is that the resizing is not graph compilation+ execution compliant because of the usage of `numpy` and `tensor.numpy()` calls and then not usable in such cases.
2. The naming was depending of where the build was coming from, that's why we needed the `get_prefix_bias_name` for the bias and the manual build of the embeddings names. Which was a temporary fix, because it is very error prone, because the naming depends of several other things that are not taken into account into this manual build.
3. Resizing was not working for some models, such as BART which doesn't work properly and raises an error. (Proof that the tests was not testing everything)
4. The current resizing has two issues when we instantiate a model from scratch: it either raises an attribute error because the model is not fully built (weights not instantiated), or it ends up with wrong naming; and if we save the model with this wrong naming we cannot save/load it properly, because the names don't correspond to the current architecture.
5. The weight names are inconsistent across models: sometimes `embeddings.word_embeddings`, sometimes `shared.weight`, sometimes `lm_head.bias`, sometimes `lm_loss.bias`, sometimes `mlm.bias`, sometimes `mlm.predictions.bias`, and many other variants.
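For illustration, point 1 boils down to doing the weight copy with TensorFlow ops only, so it survives graph compilation. A minimal, hypothetical sketch (not this PR's actual code; the function name is made up):

```python
import tensorflow as tf

def resize_embedding_weight(old_weight: tf.Variable, new_num_tokens: int, init_std: float = 0.02) -> tf.Tensor:
    # Copy the overlapping rows and randomly initialize any new rows, using only TF ops (no .numpy()).
    old_num_tokens, hidden_size = old_weight.shape
    num_to_copy = min(old_num_tokens, new_num_tokens)
    new_rows = tf.random.normal((max(new_num_tokens - old_num_tokens, 0), hidden_size), stddev=init_std)
    return tf.concat([old_weight[:num_to_copy], new_rows], axis=0)
```

The caller would then wrap the returned tensor in a new variable under the right name scope, which is exactly the naming problem points 2 and 5 describe.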
As stated in #8657, that was just a temporary fix that took the quickest route while waiting for the real rework.
This PR aims to solve all these issues and bring something more generic and less error prone.<|||||>I personally never understood that #8657 was a quick fix that was needing another PR afterwards. We cannot operate by adding new methods in one release then breaking them or deleting them in the next so the works that was done in #8657 needs to be built upon not destroyed (and please, say in bold next time you are just making a quick fix as I would never have approved #8657 to be merged had I known...)
So before we review this, the following need to be addressed:
- `get_input_embeddings` needs to keep the same return type
- `get_output_embeddings` needs to keep the same return type
- `get_output_layer_with_bias` can't disappear
- `get_prefix_bias_name` can't disappear
This is annoying but this is why we usually don't merge a half-baked fix introducing new APIs, we can't break that after.<|||||>We can keep this for the next major release.
What you ask is doable but will make the codebase more complicated. I will rework this.<|||||>I have just done the following restore:
- `get_input_embeddings` still returns a layer
- `get_output_embeddings` still returns a layer
- `get_output_layer_with_bias` is back
- `get_prefix_bias_name` is back
The old and new approaches were much more compatible than I thought, so it was easier to restore what @sgugger asked, and now there should be zero breaking changes. Really sorry for the misunderstanding, I will be clearer next time.<|||||>@sgugger I should have addressed all your comments :)
> which makes me think there is a better way to code the default in modeling_tf_utils.
Share your thoughts 😉<|||||>Thanks @LysandreJik! For the tying test look in the `_get_resized_lm_head_decoder()` method. Unless you mean adding a test in `test_modeling_tf_common` ?<|||||>I mean I'm not seeing a test that checks `get_input_embeddings() == get_output_embeddings()` when weights are tied, but I may be missing something here.
I know these two generally point to the same tensors, but no always, do they?<|||||>> I mean I'm not seeing a test that checks get_input_embeddings() == get_output_embeddings() when weights are tied, but I may be missing something here.
Yes, there is a test for this, line 884 in `modeling_tf_utils`.
> I know these two generally point to the same tensors, but no always, do they?
Yes they always point to the same tensor when they equals, 100% sure.<|||||>I should have addressed all the comments.<|||||>Ah yeah, we probably need a rebase here since TF-serving just got merged :-/<|||||>Arf good point!<|||||>@patrickvonplaten the test `TFLEDModelTest::test_pt_tf_model_equivalence` seems very flaky, it looks like that it randomly pass/fail. <|||||>Good to merge for me now :)<|||||>> @patrickvonplaten the test `TFLEDModelTest::test_pt_tf_model_equivalence` seems very flaky, it looks like that it randomly pass/fail.
Just fixed it: https://github.com/huggingface/transformers/pull/9459<|||||>@sgugger any objection to merge this PR? |
transformers | 9,192 | closed | example code for fine-tuning CLM does not work for GPT | ## Environment info
- `transformers` version: 3.5.0
- Platform: macOS-10.16-x86_64-i386-64bit
- Python version: 3.8.6
- PyTorch version (GPU?): 1.7.0 (False)
- Tensorflow version (GPU?): not installed (NA)
- Using GPU in script?: NO
### Who can help
@patrickvonplaten @TevenLeScao
## Information
Model I am using: open-ai GPT.
The problem arises when using:
* [x] the official example scripts: (give details below)
```
python run_clm.py --model_name_or_path openai-gpt --dataset_name wikitext --dataset_config_name wikitext-2-raw-v1 --do_train --do_eval --output_dir /tmp/test-clm
```
It's the example script on: [(https://github.com/huggingface/transformers/tree/master/examples/language-modeling], which is used to fine-tune a casual language model.
## To reproduce
Steps to reproduce the behavior:
1. go to transformers/examples/language-modeling
2. run the following command
`python run_clm.py --model_name_or_path openai-gpt --dataset_name wikitext --dataset_config_name wikitext-2-raw-v1 --do_train --do_eval --output_dir /tmp/test-clm`
3. The following error occurs:
```
RuntimeError: The size of tensor a (1024) must match the size of tensor b (512) at non-singleton dimension 1
```
In my case, the error does not occur when using `--model_name_or_path gpt2`.
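For context, `openai-gpt` only has 512 position embeddings (`config.n_positions == 512`), while the block size used in this run ends up at 1024, which only fits GPT-2. A hedged sketch of the kind of guard that avoids the mismatch (illustrative, not the script's exact code):

```python
from transformers import AutoConfig, AutoTokenizer

config = AutoConfig.from_pretrained("openai-gpt")
tokenizer = AutoTokenizer.from_pretrained("openai-gpt")

# Never let the block size exceed what the model's position embeddings support.
block_size = min(1024, tokenizer.model_max_length, config.n_positions)
print(block_size)  # 512 for openai-gpt, hence the --block_size=512 workaround suggested below
```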
## Expected behavior
The progress bar should be filled and the language model should be finetuned.
| 12-18-2020 16:18:01 | 12-18-2020 16:18:01 | Could you try putting `--block_size=512` in your command to see if it changes something?<|||||>@LysandreJik Thank you it solved the issue! |
transformers | 9,191 | closed | Segfault on python 3.9 exit | Ok, that's a weird one
## Environment info
- `transformers` version: 4.1.1
- Platform: Linux-5.8.0-34-generic-x86_64-with-glibc2.32
- Python version: 3.9.0+
- PyTorch version (GPU?): 1.7.1 (False)
- Tensorflow version (GPU?): not installed (NA)
- Using GPU in script?: no
- Using distributed or parallel set-up in script?: no
Steps to reproduce the behavior:
1. Run the following script
```python
import torch
import transformers
loss = torch.tensor([1.0], requires_grad=True)
loss.backward()
```
The script runs correctly but exits with
```text
[1] 46823 segmentation fault (core dumped) python testcase.py
```
Which doesn't happen if `import transformers` is commented out.
Only happens when on Python 3.9, it works as expected in 3.8.
## Full env
```text
certifi==2020.12.5
chardet==4.0.0
click==7.1.2
filelock==3.0.12
idna==2.10
joblib==1.0.0
numpy==1.19.4
packaging==20.8
pyparsing==2.4.7
regex==2020.11.13
requests==2.25.1
sacremoses==0.0.43
six==1.15.0
tokenizers==0.9.4
torch==1.7.1
tqdm==4.54.1
transformers==4.1.1
typing-extensions==3.7.4.3
urllib3==1.26.2
```
| 12-18-2020 15:18:11 | 12-18-2020 15:18:11 | I encountered a similar issue, in a context where `transformers` was not imported. I've [reported](https://github.com/pytorch/pytorch/issues/50858) the issue to the PyTorch project.<|||||>I too have similar problem when running unittests for a Python package on travis, using Python 3.9. Specifically, all unit tests run fine, but the final outcome is segfault. See: https://travis-ci.org/github/LoryPack/abcpy/builds/755321073
<|||||>This is probably fixed by https://github.com/pytorch/pytorch/pull/50998 |
transformers | 9,190 | closed | Addition of MuRIL - BERT based model for 17 Indian Languages to the library | Hi,
This PR is regarding the addition of MuRIL, a BERT-based model trained specifically for 17 Indian Languages to the hugging face library.
MuRIL is released on tfhub. Link to the repo: [https://tfhub.dev/google/MuRIL/1](https://tfhub.dev/google/MuRIL/1)
I am interested to work on this contribution to the library. Please let me know if I can work on it. | 12-18-2020 15:03:18 | 12-18-2020 15:03:18 | Hello! Yes, feel free to. However, you seem to have based yourself off of a `tf` branch that is very old?<|||||>@LysandreJik Yeah sorry. Should I raise a new issue so that it can be changed accordingly?<|||||>If you want to contribute this model, I invite you to base yourself off of the `master` branch and create a new branch from there.
Once you your branch, you can leverage the [template scripts](https://github.com/huggingface/transformers/tree/master/templates/adding_a_new_model) which should help you by creating a new model and adding it everywhere it should be added; you'll only have to update the created files.<|||||>Reading the [contributing guide](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) will also be very helpful.<|||||>great effort and contribution @ravi03071991 <|||||>@LysandreJik Sure. Will follow the guidelines. Thank you. |
transformers | 9,189 | closed | GPT-model attention heads pruning example | # What does this PR do?
This script is adapted from the [BERT attention heads pruning code](https://github.com/huggingface/transformers/blob/master/examples/research_projects/bertology/run_bertology.py) AKA Bertology to make it possible to prune GPT-model heads as well.
It basically works the same way as run_bertology.py, but can deal with GPT-models.
| 12-18-2020 15:00:56 | 12-18-2020 15:00:56 | One last detail: could you run `make style` on your branch? There seems to be some bad formatting.<|||||>> One last detail: could you run `make style` on your branch? There seems to be some bad formatting.
I did, but let me recheck it |
transformers | 9,188 | closed | MRPC Reproducibility with transformers-4.1.0 | I always get lower precision following the MRPC example, what's the reason?
```
python run_glue.py \
--model_name_or_path bert-base-cased \
--task_name $TASK_NAME \
--do_train \
--do_eval \
--max_seq_length 128 \
--per_device_train_batch_size 32 \
--learning_rate 2e-5 \
--num_train_epochs 3.0 \
--output_dir /tmp/$TASK_NAME/
```
and get
```
12/18/2020 17:16:38 - INFO - __main__ - ***** Eval results mrpc *****
12/18/2020 17:16:38 - INFO - __main__ - eval_loss = 0.5318707227706909
12/18/2020 17:16:38 - INFO - __main__ - eval_accuracy = 0.7622549019607843
12/18/2020 17:16:38 - INFO - __main__ - eval_f1 = 0.8417618270799347
12/18/2020 17:16:38 - INFO - __main__ - eval_combined_score = 0.8020083645203595
12/18/2020 17:16:38 - INFO - __main__ - epoch = 3.0
12/18/2020 16:45:29 - INFO - __main__ - ***** Eval results mrpc *****
12/18/2020 16:45:29 - INFO - __main__ - eval_loss = 0.47723284363746643
12/18/2020 16:45:29 - INFO - __main__ - eval_accuracy = 0.8063725490196079
12/18/2020 16:45:29 - INFO - __main__ - eval_f1 = 0.868988391376451
12/18/2020 16:45:29 - INFO - __main__ - eval_combined_score = 0.8376804701980294
12/18/2020 16:45:29 - INFO - __main__ - epoch = 3.0
12/18/2020 16:34:37 - INFO - __main__ - ***** Eval results mrpc *****
12/18/2020 16:34:37 - INFO - __main__ - eval_loss = 0.571368932723999
12/18/2020 16:34:37 - INFO - __main__ - eval_accuracy = 0.6838235294117647
12/18/2020 16:34:37 - INFO - __main__ - eval_f1 = 0.8122270742358079
12/18/2020 16:34:37 - INFO - __main__ - eval_combined_score = 0.7480253018237863
12/18/2020 16:34:37 - INFO - __main__ - epoch = 3.0
```
GPU: GTX 1080
transformers: 4.1.0
Torch: 1.6.0
python: 3.8
Server: Ubuntu 18.04
| 12-18-2020 12:45:20 | 12-18-2020 12:45:20 | Hello, thanks for opening an issue! We try to keep the github issues for bugs/feature requests.
Could you ask your question on the [forum](https://discusss.huggingface.co) instead?
Thanks! |
transformers | 9,187 | closed | Problem with pretraining GPT-2 on TPU with Pytorch/XLA | ### **Environment info**
transformers version: 4.0.1
Platform: Ubuntu 18.04.4 LTS
Python version: 3.6.9
PyTorch version: 1.7.1
Torch-XLA version: 1.7
Tensorflow version: 2.3.1
### **Information**
Model I am intended to pretrain (Bert, XLNet ...): GPT2
The problem arises when using:
- my own modified scripts: (give details below)
The tasks I am working on is:
- pretraining GPT-2 on Google TPU
### **To reproduce**
Steps to reproduce the behavior:
1. According to instructions [here](https://github.com/pytorch/xla/blob/master/README.md#-consume-prebuilt-compute-vm-images), creating instance of compute engine on Google Cloud with following spec:
- OS: Deep Learning on Linux
- Version: Debian GNU/Linux 9 Stretch + Pytorch/XLA
2. Running the code of:
`export XRT_TPU_CONFIG="tpu_worker;0;$TPU_IP_ADDRESS:8470"`
3. Running following script.
```python
#!/usr/bin/env bash
pipenv run python3 xla_spawn.py --num_cores 8 \
run_clm.py \
--num_train_epochs 5 \
--output_dir saved_model/ \
--overwrite_output_dir \
--logging_dir logs \
--logging_steps 50 \
--save_total_limit 2 \
--save_steps 2000 \
--model_type gpt2 \
--config_name tokenized_data/ \
--tokenizer_name tokenized_data/ \
--block_size 1024 \
--train_file dataset/dataset.txt \
--per_device_train_batch_size=64 \
--do_train
```
Here is the exception occured:
```
Exception in device=TPU:3: tensorflow/compiler/xla/xla_client/mesh_service.cc:316 : Check failed: impl_->channel->WaitForConnected( std::chrono::system_clock::now() + std::chrono::seconds(connect_wait_seconds))
*** Begin stack trace ***
tensorflow::CurrentStackTrace()
xla::service::MeshClient::MeshClient(std::string const&)
xla::service::MeshClient::Get()
xla::ComputationClient::Create()
xla::ComputationClient::Get()
_PyEval_EvalFrameDefault
_PyEval_EvalFrameDefault
PyObject_GetAttr
_PyEval_EvalFrameDefault
_PyEval_EvalFrameDefault
_PyEval_EvalFrameDefault
_PyEval_EvalFrameDefault
_PyEval_EvalFrameDefault
PyObject_Call
_PyEval_EvalFrameDefault
PyObject_Call
_PyEval_EvalFrameDefault
_PyEval_EvalFrameDefault
_PyEval_EvalFrameDefault
_PyEval_EvalFrameDefault
_PyEval_EvalFrameDefault
PyRun_StringFlags
PyRun_SimpleStringFlags
Py_Main
main
__libc_start_main
_start
*** End stack trace ***
Failed to connect to client mesh master: workstation:41295
Traceback (most recent call last):
File "/home/ws/.local/share/virtualenvs/gpt-2_pretrain_huggingface-aTVuIzXL/lib/python3.6/site-packages/torch_xla/distributed/xla_multiprocessing.py", line 330, in _mp_start_fn
_start_fn(index, pf_cfg, fn, args)
File "/home/ws/.local/share/virtualenvs/gpt-2_pretrain_huggingface-aTVuIzXL/lib/python3.6/site-packages/torch_xla/distributed/xla_multiprocessing.py", line 323, in _start_fn
_setup_replication()
File "/home/ws/.local/share/virtualenvs/gpt-2_pretrain_huggingface-aTVuIzXL/lib/python3.6/site-packages/torch_xla/distributed/xla_multiprocessing.py", line 315, in _setup_replication
device = xm.xla_device()
File "/home/ws/.local/share/virtualenvs/gpt-2_pretrain_huggingface-aTVuIzXL/lib/python3.6/site-packages/torch_xla/core/xla_model.py", line 231, in xla_device
devkind=devkind if devkind is not None else None)
File "/home/ws/.local/share/virtualenvs/gpt-2_pretrain_huggingface-aTVuIzXL/lib/python3.6/site-packages/torch_xla/core/xla_model.py", line 136, in get_xla_supported_devices
xla_devices = _DEVICES.value
File "/home/ws/.local/share/virtualenvs/gpt-2_pretrain_huggingface-aTVuIzXL/lib/python3.6/site-packages/torch_xla/utils/utils.py", line 32, in value
self._value = self._gen_fn()
File "/home/ws/.local/share/virtualenvs/gpt-2_pretrain_huggingface-aTVuIzXL/lib/python3.6/site-packages/torch_xla/core/xla_model.py", line 18, in <lambda>
_DEVICES = xu.LazyProperty(lambda: torch_xla._XLAC._xla_get_devices())
RuntimeError: tensorflow/compiler/xla/xla_client/mesh_service.cc:316 : Check failed: impl_->channel->WaitForConnected( std::chrono::system_clock::now() + std::chrono::seconds(connect_wait_seconds))
*** Begin stack trace ***
tensorflow::CurrentStackTrace()
xla::service::MeshClient::MeshClient(std::string const&)
xla::service::MeshClient::Get()
xla::ComputationClient::Create()
xla::ComputationClient::Get()
_PyEval_EvalFrameDefault
_PyEval_EvalFrameDefault
PyObject_GetAttr
_PyEval_EvalFrameDefault
_PyEval_EvalFrameDefault
_PyEval_EvalFrameDefault
_PyEval_EvalFrameDefault
_PyEval_EvalFrameDefault
PyObject_Call
_PyEval_EvalFrameDefault
PyObject_Call
_PyEval_EvalFrameDefault
_PyEval_EvalFrameDefault
_PyEval_EvalFrameDefault
_PyEval_EvalFrameDefault
_PyEval_EvalFrameDefault
PyRun_StringFlags
PyRun_SimpleStringFlags
Py_Main
main
__libc_start_main
_start
*** End stack trace ***
Failed to connect to client mesh master: workstation:41295
Exception in device=TPU:2: tensorflow/compiler/xla/xla_client/mesh_service.cc:316 : Check failed: impl_->channel->WaitForConnected( std::chrono::system_clock::now() + std::chrono::seconds(connect_wait_seconds))
``` | 12-18-2020 11:21:04 | 12-18-2020 11:21:04 | I realized that I didn't follow [instructions](https://github.com/pytorch/xla/blob/master/README.md) properly.<|||||>@redrussianarmy What`s the problem? I have the same issue but can not find a solution. |
transformers | 9,186 | closed | fixed not JSON serializable error in run_qa.py with fp16 | # What does this PR do?
Fixed an issue where running the run_qa.py script in a Squad-like dataset with fp16 enabled, would lead to a JSON serialization error:
```
TypeError: Object of type 'float16' is not JSON serializable
```
The bug is caused by not converting `np.float16` to `float` on line 209 and 397 in the utils_qa.py file.
<!--
Congratulations! You've made it this far! You're not quite done yet though.
Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution.
Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change.
Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost.
-->
<!-- Remove if not applicable -->
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section? --> yes
- [x] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case. --> no
- [x] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/master/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/master/docs#writing-source-documentation). --> No need for it
- [x] Did you write any new necessary tests? No
## Who can review?
@sgugger
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors which may be interested in your PR.
<!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @
If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**.
Please tag fewer than 3 people.
albert, bert, XLM: @LysandreJik
GPT2: @LysandreJik, @patrickvonplaten
tokenizers: @mfuntowicz
Trainer: @sgugger
Benchmarks: @patrickvonplaten
Model Cards: @julien-c
examples/distillation: @VictorSanh
nlp datasets: [different repo](https://github.com/huggingface/nlp)
rust tokenizers: [different repo](https://github.com/huggingface/tokenizers)
Text Generation: @patrickvonplaten, @TevenLeScao
Blenderbot, Bart, Marian, Pegasus: @patrickvonplaten
T5: @patrickvonplaten
Rag: @patrickvonplaten, @lhoestq
EncoderDecoder: @patrickvonplaten
Longformer, Reformer: @patrickvonplaten
TransfoXL, XLNet: @TevenLeScao, @patrickvonplaten
examples/seq2seq: @patil-suraj
examples/bert-loses-patience: @JetRunner
tensorflow: @jplu
examples/token-classification: @stefan-it
documentation: @sgugger
FSMT: @stas00
-->
| 12-18-2020 09:40:50 | 12-18-2020 09:40:50 | |
transformers | 9,185 | closed | can we use ckpt model file generated after finetuning the pre-trained models on custom dataset | Hi Team,
I have fine tuned pegasus huggingface wikihow model with my own custom dataset and end up with below files:
model.ckpt-1000.data-00000-of-00001
model.ckpt-1000.index
model.ckpt-1000.meta
events.out.tfevents.1608192436.ip-xxx-xx-x-xx
events.out.tfevents.1608192510.ip-xxx-xx-x-xx.v2
When I am running below code, it is taking pytorch_model.bin file automatically for generating summary. Where as I want to use ckpt model file for generating summary which I got after training.
**Code I am using is :**
model_path ='local-pegasus-wikihow'
torch_device = 'cuda' if torch.cuda.is_available() else 'cpu'
tokenizer = PegasusTokenizer.from_pretrained(model_path)
model = PegasusForConditionalGeneration.from_pretrained(model_path).to(torch_device)
batch = tokenizer.prepare_seq2seq_batch(src_text, truncation=True, padding='longest', return_tensors="pt").to(torch_device)
translated = model.generate(**batch)
tgt_text = tokenizer.batch_decode(translated, skip_special_tokens=True)
print(tgt_text[0])
Do I need to change code here so that it will take ckpt model file for generating summary instead of searching for pre-trained pytorch_model.bin or .h5 file? | 12-18-2020 09:06:55 | 12-18-2020 09:06:55 | hi @SagarPalyal not sure what you mean here. by ckpt file do you mean a saved checkpoint ?<|||||>Yes you are right. This is saved checkpoint file only.<|||||>One thing to note: `pytorch_model.bin` is the name of the weights file, and every torch HF model will have that file.
To load the checkpoint simply pass the path for the checkpoint instead of `model_path`<|||||>Hey, @SagarPalyal did you solve the problem? I met the same problem. 'from_pretrained' function needs pytorch model file, not tensorflow model file. I don't know how to convert a costumed pegasus tensorflow model to pytorch model. Anyone knows??? <|||||>I am not able to figure out how to convert .ckpt file to pytorch model file but in your case if you have tensorflow model file then you can use parameter from_tf=True<|||||>This issue has been automatically marked as stale and been closed because it has not had recent activity. Thank you for your contributions.
If you think this still needs to be addressed please comment on this thread.<|||||>After fine-tuning Stable diffusion model in Huggingface, I got checkpoint files like pytorch_model.bin and scheduler.bin.
To get model.ckpt file from this directory, I needed to use CheckPoint Merger in AUTOMATIC1111.
Still finding if there is other checkpoint merger app, but no result. :( |
transformers | 9,184 | closed | [RagSequenceForGeneration] generate "without" input_ids | Hi guys,
In `RagSequenceForGeneration` method `generate()` function, the doc said that both `input_ids` and `context_input_ids` are optional (one of them must be specified) .
However, in the code https://github.com/ratthachat/transformers/blob/ragseq_context_id/src/transformers/models/rag/modeling_rag.py#L907
It specifically need `input_ids` in all cases.
Not sure which option is the best
(1) simply said `input_ids` is always needed , OR
(2) add code to calculate `nll` if only `context_input_ids` is provided , but in this case `doc_scores` and `context_attention_mask` have to be provided as well (similar to RagModel requirement ) : https://github.com/ratthachat/transformers/blob/ragseq_context_id/src/transformers/models/rag/modeling_rag.py#L588
I think option (2) should be reasonable since `RagTokenForGeneration` method `generate()` also requires the same.
Proposed fix in https://github.com/huggingface/transformers/pull/9220
| 12-18-2020 08:51:55 | 12-18-2020 08:51:55 | |
transformers | 9,183 | closed | Add caching mechanism to BERT, RoBERTa | # What does this PR do?
- This PR adds past key/values caching mechanism to `BertLMHeadModel`, `BertGenerationDecoder`, and `RobertaForCausalLM` to speed up the generation of `EncoderDecoder` models
- delete the `CausalLMOutputWithPastAndCrossAttentions` class and add `past_key_values` to `CausalLMOutputWithCrossAttentions` and `BaseModelOutputWithPoolingAndCrossAttentions` . All `ModelOutputs` that have a `cross-attention` should also have a `past_key_values`
- by default caching is enabled for `BertLMHeadModel`, `BertGenerationDecoder`, and `RobertaForCausalLM` during inference (not just generation, also for the forward pass) and now they also output `past_key_values` by default during inference, which is a small breaking change.
specifically when `config.output_attentions=True` in a `EncoderDecoderModel` model then the 2nd index of the output will be `past_key_values` instead of `attentions`
```python3
model = EncoderDecoderModel.from_pretrained(...)
outputs = model(input_ids)
attentions = outputs[1] # 2nd index will be past_key_values, instead of attentions
```
- this will only affect the output during generation for `EncoderDecoder` models. Caching will be disabled when the models are used as standalone encoders, so the default output, in that case, is unchanged.
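For context, the point of the cache is incremental decoding: each step reuses the keys/values computed for the previous tokens instead of recomputing them. A rough sketch of the usage pattern this PR enables (randomly initialized model, arbitrary token ids):

```python
import torch
from transformers import BertConfig, BertLMHeadModel

model = BertLMHeadModel(BertConfig(is_decoder=True))
model.eval()

first = torch.tensor([[101]])                # first decoder token
out = model(input_ids=first, use_cache=True)
past = out.past_key_values                   # cached keys/values for the step above

nxt = torch.tensor([[2023]])                 # feed only the new token plus the cache
out = model(input_ids=nxt, past_key_values=past, use_cache=True)
```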
Fixes #9052
| 12-18-2020 08:38:51 | 12-18-2020 08:38:51 | `python utils/check_copies.py --fix_and_overwrite` should be run to make the `check_code_quality` test pass<|||||>> `python utils/check_copies.py --fix_and_overwrite` should be run to make the `check_code_quality` test pass
Yeah, did that, for some reason, it's not working for `RobertaEmbeddings`, the code is the same as that of `BertEmbeddings`<|||||>> > `python utils/check_copies.py --fix_and_overwrite` should be run to make the `check_code_quality` test pass
>
> Yeah, did that, for some reason, it's not working for `RobertaEmbeddings`, the code is the same as that of `BertEmbeddings`
If Roberta has to be different feel free to remove the copy statement<|||||>I just merged a PR that made `cache` related tests a bit more aggressive: https://github.com/huggingface/transformers/pull/9256. It would be awesome if you could run your PR on the `EncoderDecoderModel` slow tests to make sure the cache doesn't change the results.<|||||>Merging ! |
transformers | 9,182 | closed | Fix link to old NER fine-tuning script | # What does this PR do?
<!--
Congratulations! You've made it this far! You're not quite done yet though.
Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution.
Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change.
Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost.
-->
<!-- Remove if not applicable -->
Fixes # (issue)
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/master/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/master/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors which may be interested in your PR.
<!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @
If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**.
Please tag fewer than 3 people.
albert, bert, XLM: @LysandreJik
GPT2: @LysandreJik, @patrickvonplaten
tokenizers: @mfuntowicz
Trainer: @sgugger
Benchmarks: @patrickvonplaten
Model Cards: @julien-c
examples/distillation: @VictorSanh
nlp datasets: [different repo](https://github.com/huggingface/nlp)
rust tokenizers: [different repo](https://github.com/huggingface/tokenizers)
Text Generation: @patrickvonplaten, @TevenLeScao
Blenderbot, Bart, Marian, Pegasus: @patrickvonplaten
T5: @patrickvonplaten
Rag: @patrickvonplaten, @lhoestq
EncoderDecoder: @patrickvonplaten
Longformer, Reformer: @patrickvonplaten
TransfoXL, XLNet: @TevenLeScao, @patrickvonplaten
examples/seq2seq: @patil-suraj
examples/bert-loses-patience: @JetRunner
tensorflow: @jplu
examples/token-classification: @stefan-it
documentation: @sgugger
FSMT: @stas00
-->
| 12-18-2020 00:11:46 | 12-18-2020 00:11:46 | |
transformers | 9,181 | closed | Fix link to old SQUAD fine-tuning script | # What does this PR do?
<!--
Congratulations! You've made it this far! You're not quite done yet though.
Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution.
Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change.
Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost.
-->
<!-- Remove if not applicable -->
Fixes # (issue)
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/master/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/master/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors which may be interested in your PR.
<!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @
If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**.
Please tag fewer than 3 people.
albert, bert, XLM: @LysandreJik
GPT2: @LysandreJik, @patrickvonplaten
tokenizers: @mfuntowicz
Trainer: @sgugger
Benchmarks: @patrickvonplaten
Model Cards: @julien-c
examples/distillation: @VictorSanh
nlp datasets: [different repo](https://github.com/huggingface/nlp)
rust tokenizers: [different repo](https://github.com/huggingface/tokenizers)
Text Generation: @patrickvonplaten, @TevenLeScao
Blenderbot, Bart, Marian, Pegasus: @patrickvonplaten
T5: @patrickvonplaten
Rag: @patrickvonplaten, @lhoestq
EncoderDecoder: @patrickvonplaten
Longformer, Reformer: @patrickvonplaten
TransfoXL, XLNet: @TevenLeScao, @patrickvonplaten
examples/seq2seq: @patil-suraj
examples/bert-loses-patience: @JetRunner
tensorflow: @jplu
examples/token-classification: @stefan-it
documentation: @sgugger
FSMT: @stas00
-->
| 12-17-2020 23:58:17 | 12-17-2020 23:58:17 | |
transformers | 9,180 | closed | [trainer] apex fixes and tests | This PR:
* [x] fixes a bug in with `fp16_backend` apex (`is_apex_available` + `amp` weren't getting imported w/ pt>=1.6)
* [x] adds test
* [x] adds a logger info on which fp16 backend will be used
@sgugger | 12-17-2020 23:55:41 | 12-17-2020 23:55:41 | |
transformers | 9,179 | closed | [trainer] speed issues: --fp16 doesn't improve speed, DP runs really slow | Splitting from https://github.com/huggingface/transformers/issues/9156#issuecomment-747636108 where while running benchmarks for fairscale's sharded ddp support I noticed that there was almost no difference between having or not having `--fp16` to the training runtime.
Not sure whether this impacts trainer in general or just seq2seq finetune_trainer.py, but there is almost no speed improvements with adding `--fp16`.
This is with pytorch-nightly.
Not sure whether this has to do with the recent autocast cache blowup fix https://github.com/pytorch/pytorch/issues/48049 (our corresponding issue https://github.com/huggingface/transformers/issues/8403) - part of the fix was to remove some of the cash that perhaps was essential and thus leads to this issue.
I will test with apex and compare.
Also as can be seen from: https://github.com/huggingface/transformers/issues/9179#issuecomment-747783879
DP is running really slow!
@sgugger
| 12-17-2020 22:45:01 | 12-17-2020 22:45:01 | Could you run the other examples script to check? On my end they result in a roughly x2 speedup but I'm not on pytorch-nightly.<|||||>Any recommendations and specific command lines that you use?
I have been using the finetune and other scripts in seq2seq as a go-to scripts for testing, so I'm not quite experienced with the others.
Thank you!<|||||>So on a finetune_trainer setup `--fp16` is slower than w/o it - on either amp or apex, so that elimination the caching concern. (w/ pytorch nightly)
Caveat: The following tests aren't ideal since I have 1 fast and 1 slow card, but they should be consistent since the overall speed is always at the slowest card (with the exception of single gpu tests), so it's like having 2 slow cards.
### DDP
```
# baseline w/ --fp16
export BS=4; rm -r output_dir; CUDA_VISIBLE_DEVICES=0,1 PYTHONPATH=../../src USE_TF=0 python -m torch.distributed.launch --nproc_per_node=2 --master_port=9910 ./finetune_trainer.py --model_name_or_path sshleifer/distill-mbart-en-ro-12-4 --output_dir output_dir --adam_eps 1e-06 --data_dir wmt_en_ro --do_train --freeze_embeds --label_smoothing 0.1 --learning_rate 3e-5 --logging_first_step --logging_steps 1000 --max_source_length 128 --max_target_length 128 --num_train_epochs 1 --overwrite_output_dir --per_device_train_batch_size $BS --sortish_sampler --src_lang en_XX --task translation --tgt_lang ro_RO --val_max_target_length 128 --warmup_steps 500 --n_train 500
2020-12-17 16:04:18 | INFO | __main__ | train_runtime = 27.9693
# --fp16 --fp16_backend apex
2020-12-17 16:01:14 | INFO | __main__ | train_runtime = 30.0469
# --fp16 --fp16_backend amp
2020-12-17 16:06:41 | INFO | __main__ | train_runtime = 29.5368
```
### DP
DP setup is about the same correlation - but runs twice as slow! (that looks wrong too!)
```
# baseline (no --fp16)
export BS=4; rm -r output_dir; CUDA_VISIBLE_DEVICES=0,1 PYTHONPATH=../../src USE_TF=0 python ./finetune_trainer.py --model_name_or_path sshleifer/distill-mbart-en-ro-12-4 --output_dir output_dir --adam_eps 1e-06 --data_dir wmt_en_ro --do_train --freeze_embeds --label_smoothing 0.1 --learning_rate 3e-5 --logging_first_step --logging_steps 1000 --max_source_length 128 --max_target_length 128 --num_train_epochs 1 --overwrite_output_dir --per_device_train_batch_size $BS --sortish_sampler --src_lang en_XX --task translation --tgt_lang ro_RO --val_max_target_length 128 --warmup_steps 500 --n_train 500
2020-12-17 16:16:09 | INFO | __main__ | train_runtime = 56.7522
# --fp16 --fp16_backend apex
2020-12-17 16:14:26 | INFO | __main__ | train_runtime = 59.4309
# --fp16 --fp16_backend amp
2020-12-17 16:12:18 | INFO | __main__ | train_runtime = 58.4406
```
### Single GPU (gtx-1070 slowest)
no improvement either with fp16
```
# baseline (no --fp16)
export BS=4; rm -r output_dir; CUDA_VISIBLE_DEVICES=1 PYTHONPATH=../../src USE_TF=0 python ./finetune_trainer.py --model_name_or_path sshleifer/distill-mbart-en-ro-12-4 --output_dir output_dir --adam_eps 1e-06 --data_dir wmt_en_ro --do_train --freeze_embeds --label_smoothing 0.1 --learning_rate 3e-5 --logging_first_step --logging_steps 1000 --max_source_length 128 --max_target_length 128 --num_train_epochs 1 --overwrite_output_dir --per_device_train_batch_size $BS --sortish_sampler --src_lang en_XX --task translation --tgt_lang ro_RO --val_max_target_length 128 --warmup_steps 500 --n_train 500
2020-12-17 16:26:10 | INFO | __main__ | train_runtime = 24.6995
# --fp16 --fp16_backend apex
2020-12-17 16:27:26 | INFO | __main__ | train_runtime = 28.6601
# --fp16 --fp16_backend amp
2020-12-17 16:28:37 | INFO | __main__ | train_runtime = 27.9687
```
### Single GPU (rtx-3090 fastest)
no improvement either with fp16
```
# baseline (no --fp16)
export BS=4; rm -r output_dir; CUDA_VISIBLE_DEVICES=0 PYTHONPATH=../../src USE_TF=0 python ./finetune_trainer.py --model_name_or_path sshleifer/distill-mbart-en-ro-12-4 --output_dir output_dir --adam_eps 1e-06 --data_dir wmt_en_ro --do_train --freeze_embeds --label_smoothing 0.1 --learning_rate 3e-5 --logging_first_step --logging_steps 1000 --max_source_length 128 --max_target_length 128 --num_train_epochs 1 --overwrite_output_dir --per_device_train_batch_size $BS --sortish_sampler --src_lang en_XX --task translation --tgt_lang ro_RO --val_max_target_length 128 --warmup_steps 500 --n_train 500
2020-12-17 16:22:33 | INFO | __main__ | train_runtime = 11.2534
# --fp16 --fp16_backend apex
2020-12-17 16:19:41 | INFO | __main__ | train_runtime = 14.4828
# --fp16 --fp16_backend amp
2020-12-17 16:21:15 | INFO | __main__ | train_runtime = 11.7265
```
rtx-3090 is so much faster than gtx-1070.
Also wrt single gpu test: slow card 100%, fast one 75% - so the trainer is not fast enough feeding data in for the latter.
<|||||>> Any recommendations and specific command lines that you use?
Normally the first command indicated in the README of each example folder should work well :-)<|||||>> Normally the first command indicated in the README of each example folder should work well :-)
The one I tried doesn't report speed. `text-classification/run_glue.py`. I'm not sure what method you were using to detect speed improvements.<|||||>Thanks for adding the speed metrics into the core trainer, with https://github.com/huggingface/transformers/pull/9198 I run benchmarks for run_glue in text-classification, the results are terrible speed-wise just like with the finetune_trainer - terrible as in all those things that are supposed to speed things up, slow things down instead:
```
# baseline
rm -r output_dir; PYTHONPATH=../../src USE_TF=0 python -m torch.distributed.launch --nproc_per_node=2 run_glue.py --model_name_or_path bert-base-cased --task_name MRPC --do_train --do_eval --max_seq_length 128 --per_device_train_batch_size 32 --learning_rate 2e-5 --num_train_epochs 3.0 --output_dir output_dir
12/18/2020 13:29:01 - INFO - __main__ - train_runtime = 97.4012
# --fp16
12/18/2020 13:46:09 - INFO - __main__ - train_runtime = 109.1225
# --sharded_ddp
12/18/2020 13:53:18 - INFO - __main__ - train_runtime = 103.1887
# --fp16 --sharded_ddp
12/18/2020 13:50:57 - INFO - __main__ - train_runtime = 113.9132
```
this is all w/ pt-nightly, since I have to use it to run rtx-3090 card
<|||||>In that case, it looks linked to your setup/env somehow. Maybe support for the RTX-3090 is not fully fledged? On my setup and env FP16 speeds up by a factor of x2 roughly for this script. Will test the sharded DDP variants on Monday.<|||||>if you could do the above 4 runs (https://github.com/huggingface/transformers/issues/9179#issuecomment-748338332) for comparison that would be great! I need to know if something is off with my setup since I'm trying to eval deepspeed.
cuda-11.2 is out so hoping to get a normal support for rtx-3090 from pytorch really soon now.<|||||>Hi
the command @stas00 mentioend above does not work for me, I am using last version of huggingface, could you tell me which version you used to test? please see my bug here https://github.com/huggingface/transformers/issues/9215 <|||||>@rabeehk: https://github.com/huggingface/transformers/issues/9156#issuecomment-748501582<|||||>This issue has been automatically marked as stale and been closed because it has not had recent activity. Thank you for your contributions.
If you think this still needs to be addressed please comment on this thread. |
transformers | 9,178 | closed | [ci] install fairscale on self-runner CIs | The tests for the new sharded ddp fairscale integration are in place, https://github.com/huggingface/transformers/pull/9177
but CIs don't have `fairscale` installed so they won't run on CI.
This is an issue to track this need so we will eventually run these tests.
Blocking event: needing a more reliable/quick way of installing fairscale - need to track the following issue for when this happens:
https://github.com/facebookresearch/fairscale/issues/264
Then we can add `pip install fairscale` to the self-runner CIs with multigpus. | 12-17-2020 22:28:45 | 12-17-2020 22:28:45 | Probably should change CI to do:
```
pip install fairscale --no-build-isolation
```
which is much faster than the new pip system, as it doesn't need to fetch dependent packages
<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored. |
transformers | 9,177 | closed | add tests for the new sharded ddp fairscale integration | This PR adds tests for the just added sharded ddp fairscale integration https://github.com/huggingface/transformers/pull/9139
Obviously these won't run on CIs w/o having fairscale installed... hope we will sort this out down the road. the problem is building fairscale - no binary wheel - I will ask them if they could make a jit version.
@sgugger | 12-17-2020 22:11:04 | 12-17-2020 22:11:04 | I made a request for a quicker/simpler binary: https://github.com/facebookresearch/fairscale/issues/264
And added an issue to track this so we won't forget: https://github.com/huggingface/transformers/issues/9178
|
transformers | 9,176 | closed | [setup] correct transformers version format | setuptools has a pretty fixed expectation of version numbers.
```
x.y.z
x.y.z.dev0
x.y.z.rc1
```
This PR fixes the dev version number and adds a comment with correct formats for the future editors
This fix removes this warning on `make fixup|style|etc` or any other time `setup.py` is being run.
```
setuptools/dist.py:452: UserWarning: Normalizing '4.2.0dev0' to '4.2.0.dev0'
warnings.warn(tmpl.format(**locals()))
```
and the alternative:
```
/setuptools/dist.py:452: UserWarning: Normalizing '4.0.0-rc-1' to '4.0.0rc1'
```
Fixes: #8749
@LysandreJik, @sgugger
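For reference, the normalization setuptools warns about is standard PEP 440 behaviour; the `packaging` library shows the canonical forms (a quick illustration, not part of this PR):

```python
from packaging.version import Version

print(Version("4.2.0dev0"))   # -> 4.2.0.dev0
print(Version("4.0.0-rc-1"))  # -> 4.0.0rc1
print(Version("4.2.0.dev0"))  # already canonical
```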
| 12-17-2020 22:06:28 | 12-17-2020 22:06:28 | |
transformers | 9,175 | closed | Add new run_swag example | # What does this PR do?
This PR adds a new example for multiple-choice using Trainer and Datasets, and moves the older one to the legacy folder. | 12-17-2020 20:53:46 | 12-17-2020 20:53:46 | |
transformers | 9,174 | closed | [WIP] Adapt Cookie Cutter For EncoderDecoder | # What does this PR do?
<!--
Congratulations! You've made it this far! You're not quite done yet though.
Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution.
Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change.
Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost.
-->
<!-- Remove if not applicable -->
Fixes # (issue)
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/master/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/master/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors which may be interested in your PR.
<!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @
If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**.
Please tag fewer than 3 people.
albert, bert, XLM: @LysandreJik
GPT2: @LysandreJik, @patrickvonplaten
tokenizers: @mfuntowicz
Trainer: @sgugger
Benchmarks: @patrickvonplaten
Model Cards: @julien-c
examples/distillation: @VictorSanh
nlp datasets: [different repo](https://github.com/huggingface/nlp)
rust tokenizers: [different repo](https://github.com/huggingface/tokenizers)
Text Generation: @patrickvonplaten, @TevenLeScao
Blenderbot, Bart, Marian, Pegasus: @patrickvonplaten
T5: @patrickvonplaten
Rag: @patrickvonplaten, @lhoestq
EncoderDecoder: @patrickvonplaten
Longformer, Reformer: @patrickvonplaten
TransfoXL, XLNet: @TevenLeScao, @patrickvonplaten
examples/seq2seq: @patil-suraj
examples/bert-loses-patience: @JetRunner
tensorflow: @jplu
examples/token-classification: @stefan-it
documentation: @sgugger
FSMT: @stas00
-->
| 12-17-2020 19:51:19 | 12-17-2020 19:51:19 | Cleaner version here: https://github.com/huggingface/transformers/pull/9251 |
transformers | 9,173 | closed | GPT2 eval with attention_mask not returning expected result | is this not correct use of attention_mask with padding (trying to batch sentence eval) or is it a bug? Shouldn't the return result be the same in each case because of the mask?
import torch
from transformers import GPT2LMHeadModel
model = GPT2LMHeadModel.from_pretrained('gpt2', return_dict=True)
```
with torch.no_grad():
model.eval()
loss,logits = model(input_ids=torch.tensor([15496,11,616]), attention_mask=torch.tensor([1,1,1]), labels=torch.tensor([15496,11,616]))[:2]
print(loss) # 3.0698 correct answer
loss,logits = model(input_ids=torch.tensor([50256,15496,11,616]), attention_mask=torch.tensor([0,1,1,1]), labels=torch.tensor([50256,15496,11,616]))[:2]
print(loss) # 8.96 - shouldn't it be 3.0698 as above?
``` | 12-17-2020 18:18:05 | 12-17-2020 18:18:05 | Hey @dan-i,
1) You should mask the labels of the first token; it's not enough to set the attention_mask to 0 -> `labels=torch.tensor([-100, ...])`
2) You have to change the `position_ids` to ensure the expected behavior in GPT2, *i.e.* make sure that in the second case the position_ids are [0, 0, 1, 2] instead of [0, 1, 2, 3]<|||||>thank patrick. seems to work if you just do labels to -100 where the pads are without mask and position. at least it returns same result as individual sentence scores. for anybody else that needs gpt2 batch sentence perplexity scores.
```
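# (imports and setup added for completeness; the original snippet assumed these were already defined)
import torch
import torch.nn.functional as F
from transformers import GPT2LMHeadModel, GPT2Tokenizer

tokenizer = GPT2Tokenizer.from_pretrained('gpt2')
model = GPT2LMHeadModel.from_pretrained('gpt2')
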
def calc_loss(sentences,inputs,logits):
# calc loss from logits returned from model
lines_len = torch.sum(inputs['attention_mask'], dim=1) # quick way to get sent len. mask isn't used for anything else
for line_ind in range(len(sentences)):
log_prob = 0.0
for token_ind in range(lines_len[line_ind] - 1):
token_prob = F.softmax(logits[line_ind, token_ind], dim=0)
token_id = inputs['input_ids'][line_ind, token_ind + 1]
log_prob += torch.log(token_prob[token_id])
sentloss=abs(log_prob)
toks=lines_len[line_ind]
loss=sentloss/(toks-1)
print(sentences[line_ind],' loss=',loss.item(),'toks=',toks.item(),'sentloss=',sentloss.item())
sentences=["Hello, my","Hello, my dog","Hello, my dog is a dog"]
tokenizer.pad_token = tokenizer.eos_token # set pad to eos
inputs = tokenizer(sentences, return_tensors="pt", padding=True) # right side padding as usual
# mask of label_id's
label_ids=inputs['input_ids'].clone()
label_ids[label_ids==tokenizer.encode(tokenizer.pad_token)[0]] = -100
# one shot score the batch
with torch.no_grad():
model.eval()
loss, logits = model(input_ids=inputs['input_ids'], labels=label_ids)[:2]
# calc loss per sentence
calc_loss(sentences,inputs,logits)
```
outputs:
Hello, my loss= 3.069786548614502 toks= 3 sentloss= 6.139573097229004
Hello, my dog loss= 4.247798442840576 toks= 4 sentloss= 12.74339485168457
Hello, my dog is a dog loss= 3.671175003051758 toks= 7 sentloss= 22.027050018310547
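To close the loop on the original question, here is a minimal sketch combining the attention mask, shifted `position_ids` and label masking for the left-padded case. Note that GPT-2 shifts labels internally, so the label aligned with the prediction made *from* the pad token also has to be -100 to reproduce the un-padded loss (numbers may differ slightly due to floating point):

```python
import torch
from transformers import GPT2LMHeadModel

model = GPT2LMHeadModel.from_pretrained("gpt2")
model.eval()

input_ids = torch.tensor([[50256, 15496, 11, 616]])  # [eos-as-pad, "Hello", ",", " my"]
attention_mask = torch.tensor([[0, 1, 1, 1]])
position_ids = torch.tensor([[0, 0, 1, 2]])          # restart positions after the pad
labels = torch.tensor([[-100, -100, 11, 616]])       # ignore the pad and the token predicted from it

with torch.no_grad():
    loss = model(
        input_ids=input_ids,
        attention_mask=attention_mask,
        position_ids=position_ids,
        labels=labels,
    ).loss
print(loss)  # ~3.0698, matching the un-padded example above
```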
|
transformers | 9,172 | open | [Flax] Implement FlaxElectraModel, FlaxElectraForMaskedLM, FlaxElectraForPreTraining | # What does this PR do?
1. Implement Flax version of Electra model : `FlaxElectraModel`, `FlaxElectraForMaskedLM`, `FlaxElectraForPreTraining`. Most of the code taken from FlaxBert version with changes in parameters and forward pass.
2. Adjust `convert_to_pytorch` to load weights for Electra
3. Implement `FlaxElectraGeneratorPredictions`, `FlaxElectraDiscriminatorPredictions` for generator and discriminator prediction head.
4. Implement test in `tests/test_modeling_flax_electra.py`
Forward pass works by running
```shell
pytest tests/test_modeling_flax_electra.py
```
Hi @patrickvonplaten , @mfuntowicz , I've seen your work on FlaxBert, so I'm tagging in case you want to review. Please note that I use `flax setup` instead of decorator `@nn.compact` since the former
- allows to test and inspect submodule
- separate submodule declaration from the forward pass. Forward pass method can be very long if using `@nn.compact`
I'm happy to revert this change to make code style consistent.
Let me know if you have any questions or feedbacks.
Thanks.
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/master/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/master/docs#writing-source-documentation).
- [x] Did you write any new necessary tests? Yes, test added `tests/test_modeling_flax_electra.py`
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors which may be interested in your PR.
| 12-17-2020 18:10:20 | 12-17-2020 18:10:20 | Hi @chris-tng -- thanks for trying out Flax in HF Transformers!
A quick comment on `nn.compact` and `setup` (I work on Flax) -- indeed if you want access to submodules such as for transfer learning then using `setup` is the way to go. I skimmed the PR and see that you use `setup` in some places and `nn.compact` in others? I'm curious whether you found `nn.compact` more useful in particular settings.
Indeed `setup` is more similar to the PyTorch style (though you still get shape inference if you use `nn.compact` in modules that don't have submodules). `nn.compact` is nice if you want to use loops or conditionals to define submodules based on hyperparameters, and some people also prefer how it "co-locates" their submodule definitions and usage. But ultimate it's somewhat a matter of preference.
(Please do let us know whatever other thoughts or questions on Flax on our discussion board: https://github.com/google/flax/discussions)
Happy holidays and new year!<|||||>> shape
Hey @avital,
Thanks a lot for your input here! That's very useful. Most of the main contributors to Transformers are on holiday at the moment and this is a rather big design decision to make going forward with Flax, so I think we'll have to wait here until early January until everybody is back (@sgugger, @LysandreJik, @mfuntowicz)
Happy holiday to you as well :-) <|||||>Hi @avital ,
Apology for my delayed response. I appreciate your great work on Flax. Regarding the use of `setup()` and `nn.compact`, personally I find `setup` works better for testing submodules. This is useful for converting and debugging the module (and submodules). For instance, I can create a model/module with many submodules:
```python
class Dummy(nn.Module):
def setup(self):
self.submodule1 = nn.Dense(10)
self.submodule2 = MyLayerNorm()
def __call__(self):
# do something here
```
After loading model weights from a dict, I can access/debug submodule by simply accessing the attribute: `dummy.submodule1`, `dummy.submodule2`. From this, I can debug forward pass, check model weights of invididual submodule.
Shameless plug, I wrote a blog post about porting huggingface pytorch model to flax, [here](https://chris-tng.github.io/blog/transformer/nlp/flax/pytorch/jax/2020/12/16/tips-flax.html). I'm a new Flax user so please correct me if I'm missing anything.
Happy holiday and happy new year to everyone 🎄 🍾 <|||||>Hey @chris-tng,
sorry to had you wait for this long. I'll solve the merge conflicts in your PR and then use your PR to change the `@nn.compact` to `setup` in all other flax models as well so that we have a common standard now. Since most of our users are used to the "PyTorch" style and I only see advantages for our library philosophy:
- We base most design decisions on the PyTorch style
- We prefer slightly less compact readable code over the slightly more "magic" functionalities that might reduce code
- To me the are no real downsides to using `setup`<|||||>Intermediate state is saved here: #9484 will push to this PR on Monday the latest<|||||>Hey @chris-tng,
I noticed that we will probably have to wait a bit to get this merged: https://github.com/google/flax/pull/683 to be able to continue the PR. Will keep you up-to-date :-)<|||||>Hi folks, sorry for the delay with the new-year shuffle and school shutdown.
google/flax#683 required a bit more conversation and updating some other codebases but now it's merged! If you have a moment, please take a look and see if it helps unblock progress. We'll release Flax 0.4.0 soon, but installing from GitHub now is the way to go.<|||||>Hey, sorry for barging in
I was needing a small BERT-like model in Jax and so I've recently updated this in a local branch to work with the Flax refactoring that makes checkpoints directly compatible with PyTorch (plus fixing some other issues that had gone through the cracks)
Should I push directly to this branch or make a new PR from my fork? Also should I wait #11364 and update my code accordingly?<|||||>> Hey, sorry for barging in
> I was needing a small BERT-like model in Jax and so I've recently updated this in a local branch to work with the Flax refactoring that makes checkpoints directly compatible with PyTorch (plus fixing some other issues that had gone through the cracks)
> Should I push directly to this branch or make a new PR from my fork? Also should I wait #11364 and update my code accordingly?
Hey @CoderPat,
It would be great if you could wait until #11364 is merged (should be done in the next 2 days). The PR fixes a couple of bugs :-)<|||||>No problem @patrickvonplaten! Also regarding git logistics, is it better to ask @chris-tng for permission to push directly to his branch?<|||||>> No problem @patrickvonplaten! Also regarding git logistics, is it better to ask @chris-tng for permission to push directly to his branch?
I think it's alright to copy-paste the code that is still useful and open a new branch, if you'd like to add Electra :-). On the branch we should then give credit to @chris-tng , but since the PR is quite old now I think he would be fine if we close this one and open a new one (Please let me know if this is not the case @chris-tng :-)). #11364 should be the last refactor before the "fundamental" Flax design is finished.<|||||>Just to confirm @patrickvonplaten , the flax refactor is merged and the structure should be stable enough that I can work on implementing Electra right?<|||||>Exactly @CoderPat - very much looking forward to your PR :-) |
transformers | 9,171 | closed | Trainer returns logits of only one sequence instead of entire evaluation dataset | ## Environment info
- `transformers` version: 4.0.1 (also reproduced the same issue with 3.5.1)
- Platform: Google Colab
- Python version: 3.6
- PyTorch version (GPU?): 1.7.0+cu101
- Tensorflow version (GPU?): no
- Using GPU in script?: Yes
- Using distributed or parallel set-up in script?: (don't know. Probably not)
### Who can help
@sgugger
(The use-case is basically the same as reported in this issue: https://github.com/huggingface/transformers/issues/9160
But I'm opening a new issue, because this is a separate problem)
## Information
The problem arises when using:
* [x] my own modified scripts: (give details below)
The tasks I am working on is:
* [x] my own task or dataset: (give details below)
## To reproduce
**Description:**
I’m trying to fine-tune a pre-trained NLI model (`ynie/roberta-large-snli_mnli_fever_anli_R1_R2_R3-nli`) on a dataset of around 276.000 hypothesis-premise pairs. I’m following the instructions from the docs [here](https://huggingface.co/transformers/custom_datasets.html) and [here](https://huggingface.co/transformers/training.html).
- When I run the training, it seems like the fine-tuning works (it does the training and saves the checkpoints).
- But during evaluation, the trainer/model seems to only compute the logits for one sequence instead of the entire training sequence. This means that the resulting metrics don't make sense. This also makes me doubt whether the training is actually executed on the entire training dataset, or only on the first sequence (the loss stays between 7.2 and 7.0 for most of the training steps).
- I've tried to change the input and dataset format in many different ways. One possible issue is that I am somehow only passing the same sequence into the trainer. But I've made sure that this is not the case (see also the printed input_ids in the code snippet below).
```
### load model and tokenizer
from transformers import AutoTokenizer, AutoModelForSequenceClassification
import torch
max_length = 256
hg_model_hub_name = "ynie/roberta-large-snli_mnli_fever_anli_R1_R2_R3-nli"
tokenizer = AutoTokenizer.from_pretrained(hg_model_hub_name)
model = AutoModelForSequenceClassification.from_pretrained(hg_model_hub_name) # num_labels=3
model.config
device = "cuda" if torch.cuda.is_available() else "cpu"
print(f"Device: {device}")
# see https://docs.nvidia.com/deeplearning/performance/mixed-precision-training/index.html
model.to(device)
model.train();
# ... some data preprocessing
### use hf datasets object + tokenize
from datasets import Dataset
# create hf dataset objects
dataset_train = Dataset.from_pandas(df_train)
dataset_val = Dataset.from_pandas(df_val)
dataset_test = Dataset.from_pandas(df_test)
## tokenize on dataset object
def tokenize(batch):
return tokenizer(batch['premise'], batch['hypothesis'], max_length=max_length, return_token_type_ids=True, truncation=False, padding=True)
dataset_train = dataset_train.map(tokenize, batched=True, batch_size=len(df_train))
dataset_val = dataset_val.map(tokenize, batched=True, batch_size=len(df_val))
dataset_test = dataset_test.map(tokenize, batched=True, batch_size=len(df_test))
# to tensors
dataset_train.set_format('torch', columns=['input_ids', 'attention_mask', 'label', 'token_type_ids']) # format_kwargs=torch.LongTensor()
dataset_val.set_format('torch', columns=['input_ids', 'attention_mask', 'label', 'token_type_ids'])
dataset_test.set_format('torch', columns=['input_ids', 'attention_mask', 'label', 'token_type_ids'])
print(dataset_val["input_ids"][:2])
# ! to show that the tokenized input sequences are not the same
# output:
tensor([[ 0, 45784, 5, 3302, 9, 5222, 274, 1729, 7, 1719,
5, 1403, 12, 28326, 7, 1157, 1232, 6, 11, 1989,
10801, 1041, 6, 2388, 8243, 11, 1818, 173, 4, 2,
2, 133, 2788, 16, 59, 592, 1134, 4, 286, 1246,
35, 6300, 8, 3111, 6, 786, 12, 12063, 15343, 1134,
6, 6610, 1134, 6, 1692, 1380, 8, 2038, 1134, 6,
223, 25943, 33421, 5688, 1134, 2, 1, 1, 1, 1,
1, 1, 1, 1, 1, 1, 1, 1, 1, 1,
1, 1, 1, 1, 1, 1, 1, 1, 1, 1,
1, 1, 1, 1, 1, 1, 1, 1, 1, 1,
1, 1, 1, 1, 1, 1, 1, 1, 1, 1,
1, 1, 1, 1, 1, 1, 1, 1, 1, 1,
1, 1, 1, 1, 1, 1, 1, 1, 1, 1,
1, 1, 1, 1, 1, 1, 1, 1, 1, 1,
1, 1, 1, 1, 1, 1, 1, 1, 1, 1,
1, 1, 1, 1, 1, 1, 1],
[ 0, 1121, 570, 51, 1381, 7, 989, 106, 19, 1085,
13, 10, 353, 4, 2, 2, 133, 2788, 16, 59,
6642, 8, 1318, 9, 301, 4, 286, 1246, 35, 3039,
2591, 6, 1265, 22830, 6, 2040, 6, 6642, 194, 22830,
6, 6642, 194, 2919, 6, 9057, 6, 1265, 2919, 2,
1, 1, 1, 1, 1, 1, 1, 1, 1, 1,
1, 1, 1, 1, 1, 1, 1, 1, 1, 1,
1, 1, 1, 1, 1, 1, 1, 1, 1, 1,
1, 1, 1, 1, 1, 1, 1, 1, 1, 1,
1, 1, 1, 1, 1, 1, 1, 1, 1, 1,
1, 1, 1, 1, 1, 1, 1, 1, 1, 1,
1, 1, 1, 1, 1, 1, 1, 1, 1, 1,
1, 1, 1, 1, 1, 1, 1, 1, 1, 1,
1, 1, 1, 1, 1, 1, 1, 1, 1, 1,
1, 1, 1, 1, 1, 1, 1, 1, 1, 1,
1, 1, 1, 1, 1, 1, 1]])
# compute metrics with trainer https://colab.research.google.com/drive/1-JIJlao4dI-Ilww_NnTc0rxtp-ymgDgM?usp=sharing#scrollTo=N8J-TLhBuaOf
from sklearn.metrics import accuracy_score, precision_recall_fscore_support
def compute_metrics(pred):
labels = pred.label_ids
preds = pred.predictions.argmax(-1)
print(pred.predictions, pred.predictions.argmax(-1))
precision, recall, f1, _ = precision_recall_fscore_support(labels, preds, average='binary', pos_label=0)
acc = accuracy_score(labels, preds)
return {
'accuracy': acc,
'f1': f1,
'precision': precision,
'recall': recall
}
## training
from transformers import Trainer, TrainingArguments
# https://huggingface.co/transformers/main_classes/trainer.html#transformers.TrainingArguments
training_args = TrainingArguments(
output_dir='./results', # output directory
num_train_epochs=1, # total number of training epochs
per_device_train_batch_size=16, # batch size per device during training
per_device_eval_batch_size=16, # batch size for evaluation
warmup_steps=500, # number of warmup steps for learning rate scheduler
weight_decay=0.01, # strength of weight decay
logging_dir='./logs', # directory for storing logs
logging_steps=100,
fp16=True
)
trainer = Trainer(
model=model, # the instantiated 🤗 Transformers model to be trained
args=training_args, # training arguments, defined above
train_dataset=dataset_train, # training dataset
eval_dataset=dataset_val, # evaluation dataset
compute_metrics=compute_metrics,
tokenizer=tokenizer
)
trainer.train()
### evaluate
# for some reason the logits are the same for each prediction
trainer.evaluate()
[[ 2.055 -7.992 2.053]
[ 2.055 -7.992 2.053]
[ 2.053 -7.984 2.05 ]
...
[ 2.055 -7.992 2.053]
[ 2.055 -7.992 2.053]
[ 2.055 -7.992 2.053]] [0 0 0 ... 0 0 0]
{'eval_accuracy': 0.499,
'eval_f1': 0.6657771847898598,
'eval_loss': 0.6932016611099243,
'eval_precision': 0.499,
'eval_recall': 1.0}
```
## Expected behavior
Get different logits for each sequence/model prediction and therefore meaningful metrics output on the evaluation dataset.
| 12-17-2020 17:09:20 | 12-17-2020 17:09:20 | **Update:** I slightly changed my script based on the [text_classification example notebook](https://github.com/huggingface/notebooks/blob/master/examples/text_classification.ipynb) and now it works. Don't know what made the difference, but in case someone else has a similar problem, I recommend following the steps in this script. |
transformers | 9,170 | closed | Put all models in the constants | # What does this PR do?
It was impossible to use all pretrained checkpoints in the tapas tokenizer file because they were not in the constants of the file. This PR fixes that. | 12-17-2020 16:04:45 | 12-17-2020 16:04:45 | An alternative is to remove all content, but the previous state, where one variable has some keys but the others do not, leads to failures.<|||||>Yes, we really need to remove all of this @LysandreJik @thomwolf <|||||>We do, but right now we still use them in the tests<|||||>It was a bit confusing to me which to add in the constants and which not.. 😅 <|||||>@sgugger one more small fix you can add is in tapas.rst under "Usage: inference", there should be one more space.

(Sorry, if I see more things I make a new PR myself :p)<|||||>Pushing your fix on master directly Niels |
transformers | 9,169 | closed | Added TF TransfoXL Sequence Classification | This PR implements Sequence classification for TF TransfoXL model.
TFTransfoXLForSequenceClassification uses the last token in order to do the classification, as other causal models (e.g. GPT-1, GPT-2) do.
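The last-token pooling itself boils down to something like this (a simplified sketch, not the PR's exact code; it assumes right-padded inputs and a known `pad_token_id`):
```python
import tensorflow as tf

def pool_last_token(hidden_states, input_ids, pad_token_id):
    """Pick the hidden state of the last non-padding token of each sequence."""
    mask = tf.cast(tf.not_equal(input_ids, pad_token_id), tf.int32)    # (batch, seq)
    last_index = tf.reduce_sum(mask, axis=-1) - 1                      # (batch,)
    return tf.gather(hidden_states, last_index, batch_dims=1, axis=1)  # (batch, hidden)
```
The pooled vector is then fed into the classification head.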
Fixes #7623
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [x] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [x] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/master/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/master/docs#writing-source-documentation).
- [x] Did you write any new necessary tests?
@LysandreJik @jplu | 12-17-2020 15:16:21 | 12-17-2020 15:16:21 | |
transformers | 9,168 | closed | Fix gradient clipping for Sharded DDP | # What does this PR do?
As mentioned in the discussion of #9156, `Trainer` does not do gradient clipping correctly when using a sharded optimizer. This PR fixes that, and also allows `Trainer` to not perform any gradient clipping (by passing `None` or `0` to the corresponding argument). | 12-17-2020 14:25:41 | 12-17-2020 14:25:41 | |
transformers | 9,167 | closed | Add disclaimer to TAPAS rst file | 12-17-2020 14:23:17 | 12-17-2020 14:23:17 | ||
transformers | 9,166 | closed | Language modeling logging | ## Environment info
- `transformers` version:4.0.0-rc-1
- Platform:Linux
- Python version:Python 3.7.9
- PyTorch version (GPU?):1.4.0
- Tensorflow version (GPU?):1.14.0
- Using GPU in script?:Yes
- Using distributed or parallel set-up in script?:Yes
### Who can help
@sgugger
## Information
Model I am using (Bert, XLNet ...): BERT
The problem arises when using:
* [ ] the official example scripts: (give details below)
* [ X] my own modified scripts: (give details below)
The tasks I am working on is:
* [ ] an official GLUE/SQUaD task: (give the name)
* [X ] my own task or dataset: (give details below)
## To reproduce
## Expected behavior
I am new to the logging package and I would like to know if we can redirect all the content appearing on the screen to a text file when I am running the language modeling script. I am not sure if this is the correct forum to ask, but can you please help me with this? | 12-17-2020 13:57:18 | 12-17-2020 13:57:18 | I think @LysandreJik will know more what to do for the logging. Also, we usually ask questions like this on the [forum](https://discuss.huggingface.co/) and keep issues for bugs/feature requests.<|||||>If you want to output everything to a file, you can redirect the standard output/standard error to a file. We use the official `logging` package for logs, and they have a full chapter on redirecting to a file: https://docs.python.org/3/howto/logging.html#logging-to-a-file<|||||>It is also possible to redirect the output from the script (without modifying anything). I use the following code to have the output both on my screen (console) and as a copy in my log files (with stderr carrying the Python logging output):
```bash
python run_your_script.py \
> >(tee -a stdout.log) \
2> >(tee -a stderr.log >&2)
```
(Found this gem on StackOverflow somewhere.)<|||||>@Querela and @LysandreJik, thanks a lot for the help. closing the issues. |
transformers | 9,165 | closed | Roberta python Tokenizer encodes differently across transformers==2.11 and transformers==4.0.1 | ## Environment info
- `transformers` version: 2.11 vs 4.0.1
- Platform: Any
- Python version: 3.8.3
- PyTorch version (GPU?): Any
- Tensorflow version (GPU?):
- Using GPU in script?:
- Using distributed or parallel set-up in script?:
### Who can help
@mfuntowicz
## Information
In short, the word "suicide" gets encoded with different tokens across two different transformers versions (see the reproduction below). This leads to the same fine-tuned model behaving differently depending on the transformers version, which is alarming.
Model I am using (Bert, XLNet ...): Roberta-base
The problem arises when using:
* [ ] the official example scripts: (give details below)
* [ ] my own modified scripts: (give details below)
The tasks I am working on is:
* [ ] an official GLUE/SQUaD task: (give the name)
* [ ] my own task or dataset: (give details below)
## To reproduce
Steps to reproduce the behavior:
Run the following code with transformers==2.11 and 4.0.1
```python
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained('roberta-base', use_fast=False)
tokenizer.encode(' suicide ')
tokenizer.encode('suicide')
tokenizer.encode(' suicide')
```
## Expected behavior
A tokenizer for the same pretrained model, should tokenize words identically.
| 12-17-2020 13:47:49 | 12-17-2020 13:47:49 | Hi, note that 'suicide' and ' suicide' are indeed different tokens for RoBERTa's byte-level BPE tokenizer.
The tokenizer keeps leading whitespace intact, so we expect different tokens for 'suicide' and ' suicide'.
I assume that in the old transformers 2.x a strip was applied in the tokenizer before the text was passed to the BPE model, which is incorrect. <|||||>Thank you for letting me know. Regardless, the above creates some pretty big inconsistency issues: for example, a model trained with 2.x shows a major discrepancy in prediction accuracy when used in an environment with transformers >= 3.x.
This should be at least flagged somewhere and there should be some "backward compatibility" option here.<|||||>You could call `text.strip()` before passing it to the tokenizer. <|||||>Sorry, just to clarify. This doesn't have to do on how I can personally solve my issue with a hack. Of course I can do that or simply use the same transformers version when training/predicting.
This has to do on how a production system might get affected by this, in my view, major discrepancy. Tokenization behaviour shouldn't change "silently" across versions.
Think of the following case: a product trained to do sentence similarity. The model is trained to understand similarity of sentences, thus tokens. Now if you happily change tokenization across versions, you end up with a model that won't work properly when you upgrade transformers library. That should not happen, or at least get flagged somewhere.<|||||>The old tokenization was buggy and incorrect (based on the info here). I don't see a point to add an option to enable back buggy / incorrect behavior of a component. That would be quite bad to have flags that re-enable all kind of buggy behavior you had in old versions.
If you need stable results in production then there is no other way than to stay at one version or otherwise have sufficient tests to ensure the functionality of your system when you upgrade your framework.
It is the same with every framework: between major versions there can be significant changes that impact your software.
(Just my personal opinion, I don't work for huggingface. I don't know if the bug in the old tokenizer was known and fixed (then it is likely mentioned in the release notes) or if the fix was just a by product by some other commit.)
It appears to be fixed here
https://github.com/huggingface/transformers/pull/5287
And was mentioned in the subsequent release <|||||>Thank you this slipped my attention, it was nevertheless a pretty significant issue. Apparently it's connected to that: https://github.com/huggingface/transformers/issues/5256
I guess this should be expected with a framework developing so quickly!<|||||>@ypapanik I am a bit surprised that it makes such a big difference for your use case. You wouldn't expect this, as the only difference is a potential whitespace at the beginning of the text.
My roberta models changed performance in downstream tasks only slightly between the old tokenizer and the newer tokenizer (performance improved like 0.1 percentage points actually). |
transformers | 9,164 | closed | Add clipping to relative positional embedding | # What does this PR do?
This PR adds clipping to the distance embedding as described in the paper [Improve Transformer Models with Better Relative Position Embeddings](https://arxiv.org/abs/2009.13658).
Without this addition, a model with relative positional embeddings returns an error if the input length is above the `max_position_embeddings` parameter.
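A minimal sketch of the clipping idea (illustrative only, not the exact diff): the relative distance is clamped so that, after the usual shift, it always indexes into a table of size `2 * max_position_embeddings - 1`.
```python
import torch

def clipped_relative_positions(seq_length, max_position_embeddings):
    position_ids = torch.arange(seq_length).view(-1, 1)
    distance = position_ids - position_ids.view(1, -1)        # (seq, seq), in [-(L-1), L-1]
    distance = torch.clamp(distance,
                           -max_position_embeddings + 1,
                           max_position_embeddings - 1)
    # shift into [0, 2 * max_position_embeddings - 2] so it can index the embedding table
    return distance + max_position_embeddings - 1
```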
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/master/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/master/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests? | 12-17-2020 13:30:18 | 12-17-2020 13:30:18 | @patrickvonplaten @LysandreJik @julien-c @zhiheng-huang<|||||>Hey @hadaev8,
Thanks for your PR. It would be awesome if you could provide a code snippet to this PR that shows a case where the current implementation of BERT's relative positional embeddings would fail. Hope it's fine to tag the original author here: @zhiheng-huang for some discussion. <|||||>@patrickvonplaten
Colab notebook
https://colab.research.google.com/drive/1bAwwNMbh27JW0H6uWG9rZgyjz_b0eB1M?usp=sharing
<|||||>@patrickvonplaten
Any update?<|||||>Hey @hadaev8,
I'm having a hard time deciding here without the opinion of the official author @zhiheng-huang . @zhiheng-huang it would be great if you could leave a comment here :-)<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored.<|||||>@patrickvonplaten
Well, in the current state a model with relative positional embeddings just would not work as expected, e.g. it cannot process longer inputs.<|||||>@hadaev8,
Alright, let's try to fix it together! It would be awesome if you could post a reproducible code snippet here so that we can see when an error arises<|||||>@patrickvonplaten
Here it doesn't work:
https://colab.research.google.com/drive/1bAwwNMbh27JW0H6uWG9rZgyjz_b0eB1M?usp=sharing
Here it works:
https://colab.research.google.com/drive/1OIHjR2kVndDzwro7n_SgkpJG4YwjgGQY?usp=sharing<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored. |
transformers | 9,163 | closed | Fix mixed precision in TF models | # What does this PR do?
This PR aims to fix the mixed precision issues when `tf.keras.mixed_precision.experimental.set_policy()` is set to something other than `tf.float32`. Along the same lines, this PR aims to fix some TFLite quantization issues.
Before continuing this PR further, PR #9418 has to be merged.
Fixes #7052
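For context, a minimal setup that exercises this code path looks roughly like this (a sketch, not code from this PR):
```python
import tensorflow as tf
from transformers import AutoTokenizer, TFAutoModel

tf.keras.mixed_precision.experimental.set_policy("mixed_float16")

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
model = TFAutoModel.from_pretrained("bert-base-uncased")

inputs = tokenizer("Hello world", return_tensors="tf")
outputs = model(inputs)   # layers compute in float16 while keeping float32 variables
```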
| 12-17-2020 12:53:05 | 12-17-2020 12:53:05 | |
transformers | 9,162 | closed | IndexError: index out of range in self while using longformers when i try to pass token_type_ids |
- `transformers` version: 3.0.0
- Platform: windows
- Python version: 3.6.10 :: Anaconda, Inc.
- PyTorch version (GPU?): 1.7.0+cu101
- Tensorflow version (GPU?):
- Using GPU in script?:
- Using distributed or parallel set-up in script?:
### Who can help
## To reproduce
```python
import torch
from transformers import LongformerModel, LongformerTokenizer
model = LongformerModel.from_pretrained('allenai/longformer-base-4096')
tokenizer = LongformerTokenizer.from_pretrained('roberta-base')
SAMPLE_TEXT = ' '.join(['Hello world! '] * 100) # long input document
input_ids = torch.tensor(tokenizer.encode(SAMPLE_TEXT)).unsqueeze(0) # batch of size 1
attention_mask = torch.ones(input_ids.shape, dtype=torch.long, device=input_ids.device)
global_attention_mask = torch.zeros(input_ids.shape, dtype=torch.long, device=input_ids.device)
segment_ids = torch.ones(input_ids.shape, dtype=torch.long, device=input_ids.device)
outputs = model(input_ids=input_ids, attention_mask=attention_mask, global_attention_mask=global_attention_mask, token_type_ids=segment_ids)
```
## Error info
```
IndexError Traceback (most recent call last)
<ipython-input-357-c7bf5dc7cbc9> in <module>
----> 1 outputs = model(input_ids=input_ids, attention_mask=attention_mask, global_attention_mask=global_attention_mask,token_type_ids=segment_ids)
~\.conda\envs\env\lib\site-packages\torch\nn\modules\module.py in _call_impl(self, *input, **kwargs)
725 result = self._slow_forward(*input, **kwargs)
726 else:
--> 727 result = self.forward(*input, **kwargs)
728 for hook in itertools.chain(
729 _global_forward_hooks.values(),
~\.conda\envs\env\lib\site-packages\transformers\modeling_longformer.py in forward(self, input_ids, attention_mask, global_attention_mask, token_type_ids, position_ids, inputs_embeds, output_attentions, output_hidden_states)
675 encoder_attention_mask=None,
676 output_attentions=output_attentions,
--> 677 output_hidden_states=output_hidden_states,
678 )
679
~\.conda\envs\env\lib\site-packages\transformers\modeling_bert.py in forward(self, input_ids, attention_mask, token_type_ids, position_ids, head_mask, inputs_embeds, encoder_hidden_states, encoder_attention_mask, output_attentions, output_hidden_states)
751
752 embedding_output = self.embeddings(
--> 753 input_ids=input_ids, position_ids=position_ids, token_type_ids=token_type_ids, inputs_embeds=inputs_embeds
754 )
755 encoder_outputs = self.encoder(
~\.conda\envs\env\lib\site-packages\torch\nn\modules\module.py in _call_impl(self, *input, **kwargs)
725 result = self._slow_forward(*input, **kwargs)
726 else:
--> 727 result = self.forward(*input, **kwargs)
728 for hook in itertools.chain(
729 _global_forward_hooks.values(),
~\.conda\envs\env\lib\site-packages\transformers\modeling_roberta.py in forward(self, input_ids, token_type_ids, position_ids, inputs_embeds)
66
67 return super().forward(
---> 68 input_ids, token_type_ids=token_type_ids, position_ids=position_ids, inputs_embeds=inputs_embeds
69 )
70
~\.conda\envs\env\lib\site-packages\transformers\modeling_bert.py in forward(self, input_ids, token_type_ids, position_ids, inputs_embeds)
178 inputs_embeds = self.word_embeddings(input_ids)
179 position_embeddings = self.position_embeddings(position_ids)
--> 180 token_type_embeddings = self.token_type_embeddings(token_type_ids)
181
182 embeddings = inputs_embeds + position_embeddings + token_type_embeddings
~\.conda\envs\env\lib\site-packages\torch\nn\modules\module.py in _call_impl(self, *input, **kwargs)
725 result = self._slow_forward(*input, **kwargs)
726 else:
--> 727 result = self.forward(*input, **kwargs)
728 for hook in itertools.chain(
729 _global_forward_hooks.values(),
~\.conda\envs\env\lib\site-packages\torch\nn\modules\sparse.py in forward(self, input)
124 return F.embedding(
125 input, self.weight, self.padding_idx, self.max_norm,
--> 126 self.norm_type, self.scale_grad_by_freq, self.sparse)
127
128 def extra_repr(self) -> str:
~\.conda\envs\env\lib\site-packages\torch\nn\functional.py in embedding(input, weight, padding_idx, max_norm, norm_type, scale_grad_by_freq, sparse)
1850 # remove once script supports set_grad_enabled
1851 _no_grad_embedding_renorm_(weight, input, max_norm, norm_type)
-> 1852 return torch.embedding(weight, input, padding_idx, scale_grad_by_freq, sparse)
1853
1854
IndexError: index out of range in self
```
## Expected behavior
If I do not pass segment_ids (token_type_ids), the model runs; it also runs if I pass segment_ids of all zeros. But when I pass segment_ids of all 1s (or a mix of 0s and 1s), I get the "index out of range in self" error.
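For reference, the same call without `token_type_ids` (which, as noted above, runs fine):
```python
outputs = model(
    input_ids=input_ids,
    attention_mask=attention_mask,
    global_attention_mask=global_attention_mask,
)
```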
| 12-17-2020 08:34:31 | 12-17-2020 08:34:31 | Can we assume that "token_type_ids" will not be supported in Longformer?<|||||>Yes, actually yesterday a PR (#9152) was merged to update the docs stating that Longformer does not support token type ids. <|||||>@NielsRogge Thank you<|||||>This issue has been automatically marked as stale and been closed because it has not had recent activity. Thank you for your contributions.
If you think this still needs to be addressed please comment on this thread. |
transformers | 9,161 | closed | Metric calculation across batches in seq2seq examples | ### Who can help
@sshleifer, @patil-suraj
## Information
Currently, the seq2seq finetune script calculates the final metrics (bleu or rouge) per batch, and then simply averages these numbers across batches. Is that correct? For example, I just took a brief look at sacrebleu (which the script uses), and it seems that `corpus_bleu(preds, targets)` isn't necessarily equal to `(corpus_bleu(preds[:n], targets[:n]) + corpus_bleu(preds[n:], targets[n:])) / 2`. | 12-17-2020 05:59:23 | 12-17-2020 05:59:23 | Definitely not correct. Additionally, the final batch might be much smaller (and still weighted evenly). To get "perfect" bleu scores use `run_distributed_eval.py`
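A quick way to see that corpus-level BLEU is not a simple mean of per-batch scores (a sketch with made-up strings; assumes `sacrebleu` is installed):
```python
import sacrebleu

preds   = ["the cat sat on the mat", "a dog barks loudly", "hello there", "it rains a lot"]
targets = ["the cat sat on the mat", "the dog barks loudly", "hello there general", "it rains a lot here"]

full    = sacrebleu.corpus_bleu(preds, [targets]).score
batched = (sacrebleu.corpus_bleu(preds[:2], [targets[:2]]).score
           + sacrebleu.corpus_bleu(preds[2:], [targets[2:]]).score) / 2

# corpus_bleu aggregates n-gram counts and lengths before applying the geometric
# mean and brevity penalty, so `full` and `batched` generally differ
print(full, batched)
```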
<|||||>Right, that's what I thought as well. If we think saving all the intermediate output and computing the bleu score once at the end of each validation epoch is too memory-intensive, one solution would be to accumulate intermediate BLEU-internal metrics, and compute the BLEU score at the end. This way the asymptotic memory complexity wouldn't depend on the validation set size. C.f. the AllenNLP implementation. At the very least, I think we should give a warning or something so that people realize this is not a precise number and wouldn't use it, say, in a paper.<|||||>Hey @ZhaofengWu
This is now fixed in the new `run_seq2seq.py` script. |
transformers | 9,160 | closed | Trainer bug? Loss and logits are “nan” when fine-tuning NLI model (both RoBERTa/BART) | ## Environment info
- `transformers` version: 4.0.1 (also reproduced the same issue with 3.5.1)
- Platform: Google Colab
- Python version: 3.6
- PyTorch version (GPU?): 1.7.0+cu101
- Tensorflow version (GPU?): no
- Using GPU in script?: Yes
- Using distributed or parallel set-up in script?: (don't know. Probably not)
### Who can help
@sgugger
## Information
The problem arises when using:
* [ x] my own modified scripts: (give details below)
The tasks I am working on is:
* [ x] my own task or dataset: (give details below)
**Description:**
I’m trying to fine-tune a pre-trained NLI model (`ynie/roberta-large-snli_mnli_fever_anli_R1_R2_R3-nli`) on a dataset of around 276.000 hypothesis-premise pairs. I’m following the instructions from the docs [here](https://huggingface.co/transformers/custom_datasets.html) and [here](https://huggingface.co/transformers/training.html). When I run the training, it seems like the fine-tuning works (it does the training and saves the checkpoints), but `trainer.train()` and `trainer.evaluate()` return "nan" as loss value.
**What I've tried:**
- I tried using both `ynie/roberta-large-snli_mnli_fever_anli_R1_R2_R3-nli` and `facebook/bart-large-mnli` to make sure that it's not linked to specific model, but I get the issue for both models
- I tried following the advice in this [related github issue](https://github.com/huggingface/transformers/issues/1727), but adding `num_labels=3` to the config file does not solve the issue. (I think my issue is different because the models are already fine-tuned on NLI in my case)
- I tried many different ways of changing my input data because I suspected that there could be an issue with my input data, but I also couldn't solve it that way.
- **The probable source of the issue:** I inspected the prediction output from the model during training and the weird thing is that the prediction value always seems to be "0" (entailment) in 100% of cases (see printed output at the bottom of the code below). This cannot be right.
Even weirder: When I first run the model to predict a test sequence before running the trainer, I get normal logits as output. When I run the exact same code block again at the end, after having run the trainer, I get `tensor([[nan, nan, nan]])` as output (see code below).
- I suspect that the source for the 'only 0 prediction output' is that the logits the model returns during training are possibly always `torch.tensor([[np.nan, np.nan, np.nan]])`. `torch.tensor([[np.nan, np.nan, np.nan]]).argmax(-1)` returns torch.tensor(0) without triggering an error. The big mystery for me is why the logits would become "nan", because the model does not do that when I use the same input data only outside of the trainer, but something changes once I've run the trainer.
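A two-line check of the argmax behaviour described above:
```python
import numpy as np
import torch

# as noted above, argmax over an all-NaN row silently yields index 0 ("entailment")
print(torch.tensor([[np.nan, np.nan, np.nan]]).argmax(-1))
```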
=> I would be very thankful for any help on this! (I've been trying to solve this since two days now)
Thanks a lot in advance.
## To reproduce
### Here is my code:
```python
### load model & tokenize
from transformers import AutoTokenizer, AutoModelForSequenceClassification
import torch
max_length = 256
hg_model_hub_name = "ynie/roberta-large-snli_mnli_fever_anli_R1_R2_R3-nli"
# also tried: hg_model_hub_name = "facebook/bart-large-mnli"
tokenizer = AutoTokenizer.from_pretrained(hg_model_hub_name)
model = AutoModelForSequenceClassification.from_pretrained(hg_model_hub_name)
model.config
device = "cuda" if torch.cuda.is_available() else "cpu"
print(f"Device: {device}")
if device == "cuda":
model = model.half()
model.to(device)
model.train();
```
**Running a test inference with the model at this point works fine:**
```
test_enc = tokenizer(nli_train[0]["premise"], nli_train[0]["hypothesis"], return_tensors="pt", max_length=max_length,
return_token_type_ids=True, truncation=True, padding=True)
model.eval();
test_output_loss = model(test_enc["input_ids"].to(device), attention_mask=test_enc["attention_mask"].to(device), token_type_ids=test_enc["token_type_ids"].to(device), labels=torch.tensor(2).to(device))
print(test_output_loss)
#output: SequenceClassifierOutput(loss=tensor(2.2168, device='cuda:0', dtype=torch.float16, grad_fn=<NllLossBackward>), logits=tensor([[ 0.4075, 0.8511, -0.7549]], device='cuda:0', dtype=torch.float16,
grad_fn=<AddmmBackward>), hidden_states=None, attentions=None)
```
**Then I continue with preprocessing and training:**
```
#... some data preprocessing
encodings_train = tokenizer(premise_train, hypothesis_train, return_tensors="pt", max_length=max_length,
return_token_type_ids=True, truncation=False, padding=True)
encodings_val = tokenizer(premise_val, hypothesis_val, return_tensors="pt", max_length=max_length,
return_token_type_ids=True, truncation=False, padding=True)
encodings_test = tokenizer(premise_test, hypothesis_test, return_tensors="pt", max_length=max_length,
return_token_type_ids=True, truncation=False, padding=True)
### create pytorch dataset object
class XDataset(torch.utils.data.Dataset):
def __init__(self, encodings, labels):
self.encodings = encodings
self.labels = labels
def __getitem__(self, idx):
item = {key: torch.as_tensor(val[idx]) for key, val in self.encodings.items()}
#item = {key: torch.as_tensor(val[idx]).to(device) for key, val in self.encodings.items()}
item['labels'] = torch.as_tensor(self.labels[idx])
#item['labels'] = self.labels[idx]
return item
def __len__(self):
return len(self.labels)
dataset_train = XDataset(encodings_train, label_train)
dataset_val = XDataset(encodings_val, label_val)
dataset_test = XDataset(encodings_test, label_test)
# compute metrics with trainer
from sklearn.metrics import accuracy_score, precision_recall_fscore_support
def compute_metrics(pred):
labels = pred.label_ids
print(labels)
preds = pred.predictions.argmax(-1)
print(preds)
precision, recall, f1, _ = precision_recall_fscore_support(labels, preds, average='binary', pos_label=0)
acc = accuracy_score(labels, preds)
return {
'accuracy': acc,
'f1': f1,
'precision': precision,
'recall': recall
}
## training
from transformers import Trainer, TrainingArguments
# https://huggingface.co/transformers/main_classes/trainer.html#transformers.TrainingArguments
training_args = TrainingArguments(
output_dir='./results', # output directory
num_train_epochs=1, # total number of training epochs
per_device_train_batch_size=8, # batch size per device during training
per_device_eval_batch_size=8, # batch size for evaluation
warmup_steps=500, # number of warmup steps for learning rate scheduler
weight_decay=0.01, # strength of weight decay
logging_dir='./logs', # directory for storing logs
logging_steps=100,
)
trainer = Trainer(
model=model, # the instantiated 🤗 Transformers model to be trained
args=training_args, # training arguments, defined above
train_dataset=dataset_train, # training dataset
eval_dataset=dataset_val # evaluation dataset
)
trainer.train()
# output: TrainOutput(global_step=181, training_loss=nan)
trainer.evaluate()
# output:
[2 2 2 0 0 2 2 2 0 2 0 0 2 2 2 2 0 2 0 2 2 2 2 0 2 0 2 0 0 2 0 0 2 0 0 0 2
0 2 0 0 0 0 0 2 0 0 2 2 2 0 2 2 2 2 2 0 0 0 0 2 0 0 0 2 2 0 0 0 2 0 0 0 2
2 0 2 0 0 2 2 2 0 2 2 0 0 0 0 0 0 0 2 0 0 0 0 2 0 2 2 0 2 0 0 2 2 2 2 2 2
2 0 0 0 0 2 0 0 2 0 0 0 0 2 2 2 0 0 0 0 0 2 0 0 2 0 2 0 2 0 2 0 0 2 2 0 0
2 2 2 2 2 2 0 0 2 2 2 2 0 2 0 0 2 2 2 0 0 2 0 2 0 2 0 0 0 0 0 0 2 0 0 2 2
0 2 2 2 0 2 2 0 2 2 2 2 2 2 0 0 2 0 0 2 2 0 0 0 2 0 2 2 2 0 0 0 0 0 0 0 0
2 0 2 2 2 0 2 0 0 2 0 2 2 0 0 0 0 2 2 2 0 0 0 2 2 2 2 0 2 0 2 2 2]
[0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0]
{'epoch': 1.0,
'eval_accuracy': 0.5137254901960784,
'eval_f1': 0.6787564766839378,
'eval_loss': nan,
'eval_precision': 0.5137254901960784,
 'eval_recall': 1.0}
```
**Test running the model again after training, returns `tensor([[nan, nan, nan]]` for some reason:**
```
test_enc = tokenizer(nli_train[0]["premise"], nli_train[0]["hypothesis"], return_tensors="pt", max_length=max_length,
return_token_type_ids=True, truncation=True, padding=True)
model.eval();
test_output_loss = model(test_enc["input_ids"].to(device), attention_mask=test_enc["attention_mask"].to(device), token_type_ids=test_enc["token_type_ids"].to(device), labels=torch.tensor(2).to(device))
print(test_output_loss)
#output: SequenceClassifierOutput(loss=tensor(nan, device='cuda:0', dtype=torch.float16, grad_fn=<NllLossBackward>), logits=tensor([[nan, nan, nan]], device='cuda:0', dtype=torch.float16,
grad_fn=<AddmmBackward>), hidden_states=None, attentions=None)
```
## Expected behavior
The model should not return "nan" logits and should return a proper loss value.
| 12-16-2020 23:54:37 | 12-16-2020 23:54:37 | **Update:**
I reran the training in native PyTorch with the following code and I did not get the same issue. This means that there is some issue with the trainer?
```
import torch
class XDataset(torch.utils.data.Dataset):
def __init__(self, encodings, labels):
self.encodings = encodings
self.labels = labels
def __getitem__(self, idx):
#item = {key: torch.as_tensor(val[idx]) for key, val in self.encodings.items()}
item = {key: torch.as_tensor(val[idx]) for key, val in self.encodings.items()}
#item = {key: torch.as_tensor(val[idx]).to(device) for key, val in self.encodings.items()}
item['labels'] = torch.as_tensor(self.labels[idx])
#item['labels'] = torch.LongTensor(self.labels[idx])
#item['labels'] = self.labels[idx]
return item
def __len__(self):
return len(self.labels)
dataset_train = XDataset(encodings_train, label_train)
dataset_val = XDataset(encodings_val, label_val)
dataset_test = XDataset(encodings_test, label_test)
from torch.utils.data import DataLoader
from transformers import AdamW
device = torch.device('cuda') if torch.cuda.is_available() else torch.device('cpu')
model.to(device)
model.train()
train_loader = DataLoader(dataset_train, batch_size=16, shuffle=True)
optim = AdamW(model.parameters(), lr=5e-5)
for epoch in range(1):
for batch in train_loader:
optim.zero_grad()
input_ids = batch['input_ids'].to(device)
attention_mask = batch['attention_mask'].to(device)
token_type_ids = batch['token_type_ids'].to(device)
labels = batch['labels'].to(device)
print(labels)
outputs = model(input_ids, attention_mask=attention_mask, token_type_ids=token_type_ids, labels=labels)
loss = outputs[0] # outputs.loss
print(loss)
loss.backward()
optim.step()
# Output: it prints the labels and the loss correctly!
#tensor([2, 0, 2, 2, 2, 2, 0, 0, 0, 2, 2, 2, 0, 0, 0, 2], device='cuda:0')
#tensor(0.6895, device='cuda:0', dtype=torch.float16, grad_fn=<NllLossBackward>) ....
```
When I rerun the model for a test inference after this native pytorch training step, it also returns logits and loss as expected (no "nan"). <|||||>In the first snippet of code you convert your whole model to FP16 with `model.half()` (this is not in your second snippet of code). This is not how mixed-precision training works and you should pass the flag `fp16=True` to your `TrainingArguments`.<|||||>thanks, I don't know much about mixed-precision training (the only reason why I added model.half() is because I understood that it reduces memory usage). Now, when I add `fp16=True`, i get the error:
`ValueError: Attempting to unscale FP16 gradients.` when running `trainer.train()`
```
training_args = TrainingArguments(
output_dir='./results', # output directory
num_train_epochs=1, # total number of training epochs
per_device_train_batch_size=8, # batch size per device during training
per_device_eval_batch_size=8, # batch size for evaluation
warmup_steps=500, # number of warmup steps for learning rate scheduler
weight_decay=0.01, # strength of weight decay
logging_dir='./logs', # directory for storing logs
logging_steps=30,
fp16=True
)
```<|||||>Cool, but when I remove the model.half(), it does return the loss, that's great!<|||||>Yes you have to remove that line, that's what I was saying :-)<|||||>Great, so I understand that I can use mixed precision training by simply passing the flag `fp16=True` without manual modifications to the model. Is there actually any good reason not to pass "fp16=True"? The articles on mixed precision training I've found seem to be very positive about it.
In any case, thanks for solving my issue! :) <|||||>There is no reason not to use it, no. Sometimes for debugging purposes, or because one of the more exotic models doesn't support FP16, but in general it's a good way to speed up training and save GPU memory.
Closing the issues since it's solved!<|||||>> there may be one of the exotic models that don't support FP16
That was my case with `ltgoslo/norbert` producing the nan loss with FP16. Setting `fp16` to `False` solved the issue, thanks! |
transformers | 9,159 | closed | Unified transformer interface | # 🚀 Feature request
As we're called `transformers`, it would be nice if there's a `Transformer` class that is not associated with any pretrained model and that we can directly instantiate, like
```python
from transformers import Transformer, Seq2SeqTransformer
transformer = Transformer(n_layers=8, dim=512)
seq2seq_transformer = Seq2SeqTransformer(n_enc_layers=6, n_dec_layers=8, dim=512)
```
They could follow a modular design that makes swapping components very easy, e.g.
```python
class SelfAttention:
...
class Transformer:
def __init__(self, self_attention_cls=SelfAttention, n_layers=8, ...):
...
```
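For example, swapping in a custom attention implementation could then look like this (purely illustrative — none of these classes exist in the library today):
```python
class MySparseAttention(SelfAttention):
    ...  # override only the attention computation

model = Transformer(self_attention_cls=MySparseAttention, n_layers=8, dim=512)
```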
## Motivation
1. Makes it easier for researchers to modify the transformer architecture and train it from scratch.
2. Allows great refactoring of the current `modeling_xxx.py` files. Often these files are highly identical minus a few minor differences. For example, the embeddings, self-attention layers, etc. are often identical across multiple models, but the current design does not allow much code re-use. With this change, I imagine a lot of the current `modeling_xxx.py` files can be reduced significantly.
3. A side-effect of this refactoring would be increased readability. Without much repetitive code, it would be much clearer to see the differences between architectures, which would be helpful for many users. | 12-16-2020 23:07:25 | 12-16-2020 23:07:25 | Hi @ZhaofengWu, thank you for proposing this feature.
You're noting a decision that we've made very early on in the design of the `transformers` library, which is to have standalone model files, **without any abstraction**, including the `Transformer` abstraction you're noting here. We believe it's much easier to read and understand a model file if there is no abstraction. Understanding a model file is done by reading the model file. There are no other files to go to to understand it, simply a single model file. This is our objective, as is described in the [philosophy](https://huggingface.co/transformers/philosophy.html).
You mention having a `Transformer` class which is not associated with any specific architecture, but how would we do that? Would it be a Transformer that has absolute positional embeddings, or relative? Would it accept token type IDs? Would it have a pooler layer? How would it perform its attention? All of these questions have answers that mean we're defining a specific `Transformer` architecture, which is not something we want to do.
Finally, I believe we already have what you're looking for; if what you want to do is to have a transformer that you can tweak, so as to implement your own architecture, I recommend you take a look at the [model templates](https://github.com/huggingface/transformers/tree/master/templates/adding_a_new_model), which add a "bland" model which you can customize to your needs. It also adds the tests (which pass), according to the naming you defined, and places the import where they should be, alongside the auto classes.<|||||>Thank you for your response! I was thinking about the `Transformer` abstraction in pytorch, fairseq, etc. Re. not associating with any specific architecture, one solution could be to implement all options in `Transformer` and allow clients to control which one to use with flags. Of course, there's no such thing as "all options" as researchers develop new architectural improvements, and we would need to be constantly adding to this class, which would not be ideal. Nevertheless, for some common boolean stuff (e.g. token type IDs), perhaps it's not the worst idea to implement it there and have a flag to control if it is included. For most other stuff, the modularity I mentioned could help. If we can swap components of `Transformer` easily, we can only write small pieces of code while reusing the rest of the architecture. We can keep some "default" choices in there which other models can override (e.g. transformer-xl has a relative positional embedding class (this implementation can also be shared across models) that overrides the absolute one in `Transformer`), but if we are committed to making the class absolutely not associated with _any_ architecture, it can be made an abstract class.
Re. model templates, they are nice, but in my understanding, it's similar to copying the entire architecture from some other model file, e.g. BERT, and modify it. This would still result in a lot of boilerplate code, even if the researcher/developer doesn't have to spend time writing it. The software engineer in me often screams when I have to copy massive amounts of architectural code which causes huge redundancy. And when I am reading the implementation of a new model, I also often don't like this boilerplate as it makes the real differences hard to notice. Of course, reading the paper is one thing, but there are often small engineering details not mentioned in the paper. This redundancy was the main motivation for this issue. Sometimes I have to do python hacks (thankfully we're not doing this in C) to accomplish code re-use.
But, of course, I definitely respect your philosophy. I'm sure there must be challenges that I'm not seeing, and this change would definitely be huge. If there are no ideal ways to do this at the moment, please feel free to close this issue.<|||||>This issue has been automatically marked as stale and been closed because it has not had recent activity. Thank you for your contributions.
If you think this still needs to be addressed please comment on this thread. |
transformers | 9,158 | closed | Getting a 404 error when loading 'model=facebook/bart-large-mnli' from pipeline('zero-shot-classification') | Getting a 404 when trying to load the model.
model='joeddav/bart-large-mnli-yahoo-answers' also not working.
```
404 Client Error: Not Found for url: https://huggingface.co/facebook/bart-large-mnli/resolve/main/tf_model.h5
---------------------------------------------------------------------------
HTTPError Traceback (most recent call last)
~/.local/lib/python3.8/site-packages/transformers/modeling_tf_utils.py in from_pretrained(cls, pretrained_model_name_or_path, *model_args, **kwargs)
695 # Load from URL or cache if already cached
--> 696 resolved_archive_file = cached_path(
697 archive_file,
~/.local/lib/python3.8/site-packages/transformers/file_utils.py in cached_path(url_or_filename, cache_dir, force_download, proxies, resume_download, user_agent, extract_compressed_file, force_extract, local_files_only)
999 # URL, so get it from the cache (downloading if necessary)
-> 1000 output_path = get_from_cache(
1001 url_or_filename,
~/.local/lib/python3.8/site-packages/transformers/file_utils.py in get_from_cache(url, cache_dir, force_download, proxies, etag_timeout, resume_download, user_agent, local_files_only)
1127 r = requests.head(url, headers=headers, allow_redirects=False, proxies=proxies, timeout=etag_timeout)
-> 1128 r.raise_for_status()
1129 etag = r.headers.get("X-Linked-Etag") or r.headers.get("ETag")
/usr/lib/python3/dist-packages/requests/models.py in raise_for_status(self)
939 if http_error_msg:
--> 940 raise HTTPError(http_error_msg, response=self)
941
HTTPError: 404 Client Error: Not Found for url: https://huggingface.co/facebook/bart-large-mnli/resolve/main/tf_model.h5
During handling of the above exception, another exception occurred:
OSError Traceback (most recent call last)
<ipython-input-48-a42b4974064c> in <module>
----> 1 classifier = pipeline('zero-shot-classification',
2 model='facebook/bart-large-mnli',
3 # model='joeddav/bart-large-mnli-yahoo-answers',
4 # model = 'bert-base-uncased'
5 # model ='phiyodr/bart-large-finetuned-squad2',
~/.local/lib/python3.8/site-packages/transformers/pipelines.py in pipeline(task, model, config, tokenizer, framework, revision, use_fast, **kwargs)
2934 model = get_default_model(targeted_task, framework, task_options)
2935
-> 2936 framework = framework or get_framework(model)
2937
2938 task_class, model_class = targeted_task["impl"], targeted_task[framework]
~/.local/lib/python3.8/site-packages/transformers/pipelines.py in get_framework(model, revision)
106 model = AutoModel.from_pretrained(model, revision=revision)
107 elif is_tf_available() and not is_torch_available():
--> 108 model = TFAutoModel.from_pretrained(model, revision=revision)
109 else:
110 try:
~/.local/lib/python3.8/site-packages/transformers/models/auto/modeling_tf_auto.py in from_pretrained(cls, pretrained_model_name_or_path, *model_args, **kwargs)
558
559 if type(config) in TF_MODEL_MAPPING.keys():
--> 560 return TF_MODEL_MAPPING[type(config)].from_pretrained(
561 pretrained_model_name_or_path, *model_args, config=config, **kwargs
562 )
~/.local/lib/python3.8/site-packages/transformers/modeling_tf_utils.py in from_pretrained(cls, pretrained_model_name_or_path, *model_args, **kwargs)
709 f"- or '{pretrained_model_name_or_path}' is the correct path to a directory containing a file named one of {TF2_WEIGHTS_NAME}, {WEIGHTS_NAME}.\n\n"
710 )
--> 711 raise EnvironmentError(msg)
712 if resolved_archive_file == archive_file:
713 logger.info("loading weights file {}".format(archive_file))
OSError: Can't load weights for 'facebook/bart-large-mnli'. Make sure that:
- 'facebook/bart-large-mnli' is a correct model identifier listed on 'https://huggingface.co/models'
    - or 'facebook/bart-large-mnli' is the correct path to a directory containing a file named one of tf_model.h5, pytorch_model.bin.
```
| 12-16-2020 20:25:38 | 12-16-2020 20:25:38 | After a full PC restart, it works fine. |
transformers | 9,157 | closed | T5 checkpoint contains weights missing on current model. | ## Environment info
Colab (16 December 2020), transformers 4.0.1
### Who can help
T5: @patrickvonplaten
## Information
When loading the pre-trained T5 weights directly with `.from_pretrained` on the newest version, the following warning is returned:
> Some weights of the model checkpoint at t5-small were not used when initializing T5ForConditionalGeneration: ['decoder.block.0.layer.1.EncDecAttention.relative_attention_bias.weight']
> - This IS expected if you are initializing T5ForConditionalGeneration from the checkpoint of a model trained on another task or with another architecture (e.g. initializing a BertForSequenceClassification model from a BertForPreTraining model).
> - This IS NOT expected if you are initializing T5ForConditionalGeneration from the checkpoint of a model that you expect to be exactly identical (initializing a BertForSequenceClassification model from a BertForSequenceClassification model).
This also caused old checkpoints of mine to not load due to the missing weight.
## To reproduce
Reproduced in Colab:
https://colab.research.google.com/drive/158OiSKHz80b0PQYaWB6c-AI9xKCzE1ns?usp=sharing
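The same warning can be reproduced locally with just a few lines (assuming transformers 4.0.1):
```python
from transformers import T5ForConditionalGeneration

# Loading the stock checkpoint is enough to trigger the warning about the unused
# 'decoder.block.0.layer.1.EncDecAttention.relative_attention_bias.weight'.
model = T5ForConditionalGeneration.from_pretrained("t5-small")
```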
## Expected behavior
Loading T5 should not return warnings if I am loading the pre-trained weights from the library.
| 12-16-2020 19:35:39 | 12-16-2020 19:35:39 | Yes we should fix the warning. It's no problem that those weights are missing i.e.:
#8933<|||||>Thank you! |
transformers | 9,156 | closed | Sharded DDP training fails with seq2seq models | ## Information
Model I am using (Bert, XLNet ...): T5/BART/mBART/Marian
The problem arises when using:
* [x] the official example scripts: (give details below)
* [ ] my own modified scripts: (give details below)
The tasks I am working on is:
* [x] an official GLUE/SQUaD task: seq2seq
* [ ] my own task or dataset: (give details below)
## To reproduce
Steps to reproduce the behavior:
Run
```
python -m torch.distributed.launch --nproc_per_node=2 examples/seq2seq/finetune_trainer.py \
--model_name_or_path sshleifer/tiny-mbart --output_dir output_dir --adam_eps 1e-06 --data_dir \
~/Downloads/wmt_en_ro --do_train --freeze_embeds --label_smoothing 0.1 --learning_rate 3e-5 \
--logging_first_step --logging_steps 1000 --max_source_length 128 --max_target_length 128 \
--num_train_epochs 1 --overwrite_output_dir --per_device_train_batch_size 4 --sortish_sampler \
--src_lang en_XX --task translation --tgt_lang ro_RO --val_max_target_length 128 --warmup_steps 500 \
--n_train 500 --sharded_ddp
```
will fail with
```
Traceback (most recent call last):
File "examples/seq2seq/finetune_trainer.py", line 379, in <module>
main()
File "examples/seq2seq/finetune_trainer.py", line 316, in main
model_path=model_args.model_name_or_path if os.path.isdir(model_args.model_name_or_path) else None
File "/home/sgugger/git/transformers/src/transformers/trainer.py", line 821, in train
self.optimizer.step()
File "/home/sgugger/.pyenv/versions/base/lib/python3.7/site-packages/torch/optim/lr_scheduler.py", line 67, in wrapper
return wrapped(*args, **kwargs)
File "/home/sgugger/git/fairscale/fairscale/optim/oss.py", line 210, in step
self._broadcast_params()
File "/home/sgugger/git/fairscale/fairscale/optim/oss.py", line 522, in _broadcast_params
if self.should_bucket_param[param]:
KeyError: Parameter containing:
tensor([[-0.0296, 0.0038],
[ 0.0000, 0.0000],
[ 0.0298, 0.0385],
...,
[-0.0161, -0.0024],
[ 0.0022, -0.0576],
[ 0.0053, 0.0256]], device='cuda:1')
0%|
```
Using FP16 also fails.
## Expected behavior
The script should run to completion.
| 12-16-2020 18:50:31 | 12-16-2020 18:50:31 | This is just a brief log of the 2 distinct errors mentioned in OP:
w/ `--fp16` the failure is:
```
File "./finetune_trainer.py", line 379, in <module>
main()
File "./finetune_trainer.py", line 315, in main
trainer.train(
File "/mnt/nvme1/code/huggingface/transformers-master/src/transformers/trainer.py", line 818, in train
self.scaler.step(self.optimizer)
File "/home/stas/anaconda3/envs/main-38/lib/python3.8/site-packages/torch/cuda/amp/grad_scaler.py", line 330, in step
assert len(optimizer_state["found_inf_per_device"]) > 0, "No inf checks were recorded for this optimizer."
AssertionError: No inf checks were recorded for this optimizer.
```
w/o `--fp16` the failure is:
```
File "./finetune_trainer.py", line 379, in <module>
main()
File "./finetune_trainer.py", line 315, in main
trainer.train(
File "/mnt/nvme1/code/huggingface/transformers-master/src/transformers/trainer.py", line 821, in train
self.optimizer.step()
File "/home/stas/anaconda3/envs/main-38/lib/python3.8/site-packages/torch/optim/lr_scheduler.py", line 65, in wrapper
return wrapped(*args, **kwargs)
File "/home/stas/anaconda3/envs/main-38/lib/python3.8/site-packages/torch/optim/optimizer.py", line 89, in wrapper
return func(*args, **kwargs)
File "/home/stas/anaconda3/envs/main-38/lib/python3.8/site-packages/fairscale/optim/oss.py", line 210, in step
self._broadcast_params()
File "/home/stas/anaconda3/envs/main-38/lib/python3.8/site-packages/fairscale/optim/oss.py", line 522, in _broadcast_params
if self.should_bucket_param[param]:
KeyError: Parameter containing:
tensor([[ ...]], device='cuda:1')
```
It's the very first parameter `model.shared.weight` in the case of mbart for example.
To test with t5 (same errors), run:
```
export BS=4; rm -r output_dir; CUDA_VISIBLE_DEVICES=0,1 PYTHONPATH=../../src USE_TF=0 python -m torch.distributed.launch --nproc_per_node=2 ./finetune_trainer.py --model_name_or_path patrickvonplaten/t5-tiny-random --output_dir output_dir --adam_eps 1e-06 --data_dir wmt_en_ro --do_train --freeze_embeds --label_smoothing 0.1 --learning_rate 3e-5 --logging_first_step --logging_steps 1000 --max_source_length 128 --max_target_length 128 --num_train_epochs 1 --overwrite_output_dir --per_device_train_batch_size $BS --sortish_sampler --task translation_en_XX_to_ro_RO --val_max_target_length 128 --warmup_steps 500 --n_train 500 --sharded_ddp
```
<|||||>The first problem (fp16) is easily fixed, it means that the doc is not good enough. Torch's grad scaler is not shard aware (the ranks do not have all the gradients with this technique), but you can use [this](https://github.com/facebookresearch/fairscale/blob/master/fairscale/optim/grad_scaler.py#L24) and that should work.
edit for future readers: that proved wrong; ShardedGradScaler was already in use
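For reference, a minimal sketch of the intended usage of the shard-aware scaler (illustrative only; `model`, `optimizer` and `batch` are assumed to be set up elsewhere, with `optimizer` being the sharded OSS optimizer):
```python
import torch
from fairscale.optim.grad_scaler import ShardedGradScaler

def amp_training_step(model, optimizer, batch, scaler: ShardedGradScaler):
    # Same shape as the regular torch.cuda.amp recipe, but the scaler syncs its
    # inf/nan bookkeeping across ranks since each rank only holds a gradient shard.
    with torch.cuda.amp.autocast():
        loss = model(**batch).loss
    scaler.scale(loss).backward()
    scaler.step(optimizer)
    scaler.update()
    optimizer.zero_grad()
    return loss.detach()
```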
The second issue is new to me, would you mind sharing a bit more of the reproduction steps ?<|||||>We initialize the should_bucket_param dictionary when the OSS optimizer is created. The assumption is that parameters should be frozen at this point. Any chance parameters are modified after the optimizer was created?<|||||>Thank you so much @blefaudeux and @msbaines for your follow up.
To reproduce:
```
# setup
git clone https://github.com/huggingface/transformers
cd transformers
cd examples/seq2seq
wget https://cdn-datasets.huggingface.co/translation/wmt_en_ro.tar.gz
tar -xzvf wmt_en_ro.tar.gz
```
to reproduce the 2nd failure w/o `--fp16`:
```
export BS=4; rm -r output_dir; CUDA_VISIBLE_DEVICES=0,1 PYTHONPATH=../../src USE_TF=0 python -m torch.distributed.launch --nproc_per_node=2 ./finetune_trainer.py --model_name_or_path sshleifer/tiny-mbart --output_dir output_dir --adam_eps 1e-06 --data_dir wmt_en_ro --do_train --freeze_embeds --label_smoothing 0.1 --learning_rate 3e-5 --logging_first_step --logging_steps 1000 --max_source_length 128 --max_target_length 128 --num_train_epochs 1 --overwrite_output_dir --per_device_train_batch_size $BS --sortish_sampler --src_lang en_XX --task translation --tgt_lang ro_RO --val_max_target_length 128 --warmup_steps 500 --n_train 500 --sharded_ddp
```
and then the first one is to just add `--fp16`
This is a tiny model that is good enough for testing the mechanics, so no good results to be expected. It's also very quick to download and load. To see real results swap `sshleifer/tiny-mbart` for `sshleifer/distill-mbart-en-ro-12-4`.<|||||>> The first problem (fp16) is easily fixed, it means that the doc is not good enough. Torch's grad scaler is not shard aware (the ranks do not have all the gradients with this technique), but you can use [this](https://github.com/facebookresearch/fairscale/blob/master/fairscale/optim/grad_scaler.py#L24) and that should work.
We are using it already:
https://github.com/huggingface/transformers/blob/dc9f24544291b25b44c9e87239a0ef4355396a4c/src/transformers/trainer.py#L315
if I print the object just before it fails in `self.scaler.step(self.optimizer)`, I get:
<fairscale.optim.grad_scaler.ShardedGradScaler object at 0x7ff27034bac0>
https://github.com/huggingface/transformers/blob/dc9f24544291b25b44c9e87239a0ef4355396a4c/src/transformers/trainer.py#L818
FWIW, I experience the exact same issue with deepspeed if I leave trainer's `--fp16` code - if I remove it and get deepspeed to handle that the failure goes away. So the common denominator is our code.<|||||>> Thank you so much @blefaudeux and @msbaines for your follow up.
>
> To reproduce:
>
> ```
> # setup
> git clone https://github.com/huggingface/transformers
> cd transformers
> cd examples/seq2seq
> wget https://cdn-datasets.huggingface.co/translation/wmt_en_ro.tar.gz
> tar -xzvf wmt_en_ro.tar.gz
> ```
>
> to reproduce the 2nd failure w/o `--fp16`:
>
> ```
> export BS=4; rm -r output_dir; CUDA_VISIBLE_DEVICES=0,1 PYTHONPATH=../../src USE_TF=0 python -m torch.distributed.launch --nproc_per_node=2 ./finetune_trainer.py --model_name_or_path sshleifer/tiny-mbart --output_dir output_dir --adam_eps 1e-06 --data_dir wmt_en_ro --do_train --freeze_embeds --label_smoothing 0.1 --learning_rate 3e-5 --logging_first_step --logging_steps 1000 --max_source_length 128 --max_target_length 128 --num_train_epochs 1 --overwrite_output_dir --per_device_train_batch_size $BS --sortish_sampler --src_lang en_XX --task translation --tgt_lang ro_RO --val_max_target_length 128 --warmup_steps 500 --n_train 500 --sharded_ddp
> ```
>
> and then the first one is to just add `--fp16`
>
> This is a tiny model that is good enough for testing the mechanics, so no good results to be expected. It's also very quick to download and load. To see real results swap `sshleifer/tiny-mbart` for `sshleifer/distill-mbart-en-ro-12-4`.
perfect, having a look right now, thanks for the repro help !<|||||>> We initialize the should_bucket_param dictionary when the OSS optimizer is created. The assumption is that parameters should be frozen at this point. Any chance parameters are modified after the optimizer was created?
good point, it's something that I was planning to address someday actually but not sure how urgent that was
> Thank you so much @blefaudeux and @msbaines for your follow up.
>
> To reproduce:
>
> ```
> # setup
> git clone https://github.com/huggingface/transformers
> cd transformers
> cd examples/seq2seq
> wget https://cdn-datasets.huggingface.co/translation/wmt_en_ro.tar.gz
> tar -xzvf wmt_en_ro.tar.gz
> ```
>
> to reproduce the 2nd failure w/o `--fp16`:
>
> ```
> export BS=4; rm -r output_dir; CUDA_VISIBLE_DEVICES=0,1 PYTHONPATH=../../src USE_TF=0 python -m torch.distributed.launch --nproc_per_node=2 ./finetune_trainer.py --model_name_or_path sshleifer/tiny-mbart --output_dir output_dir --adam_eps 1e-06 --data_dir wmt_en_ro --do_train --freeze_embeds --label_smoothing 0.1 --learning_rate 3e-5 --logging_first_step --logging_steps 1000 --max_source_length 128 --max_target_length 128 --num_train_epochs 1 --overwrite_output_dir --per_device_train_batch_size $BS --sortish_sampler --src_lang en_XX --task translation --tgt_lang ro_RO --val_max_target_length 128 --warmup_steps 500 --n_train 500 --sharded_ddp
> ```
>
> and then the first one is to just add `--fp16`
>
> This is a tiny model that is good enough for testing the mechanics, so no good results to be expected. It's also very quick to download and load. To see real results swap `sshleifer/tiny-mbart` for `sshleifer/distill-mbart-en-ro-12-4`.
re: fp16, could it be that the FW has not been run within an AMP context (a) and the scaler is not invoked in the backward (b)? I cannot find it [in the trainer](https://github.com/huggingface/transformers/blob/master/src/transformers/trainer.py#L799). The scaler needs to be invoked when computing the grads, it will check for infs there, which could explain why you're seeing this assert. Note that the huggingface codebase is new to me, so it could be that this is wrapped somewhere else and I'm completely missing it
https://github.com/huggingface/transformers/blob/dc9f24544291b25b44c9e87239a0ef4355396a4c/src/transformers/trainer.py#L1126-L1146
I didn't write it, but from a quick read it appears that it's a yes to all of your suggestions.
`self.use_amp` = native amp, `use_apex` is apex - so we are talking native amp here - that is the branches with `use_amp = True`
I'll step through with debugger to see that it is actually so.
<|||||>> I think the questions you're asking about are all in this `training_step` code:
>
> https://github.com/huggingface/transformers/blob/dc9f24544291b25b44c9e87239a0ef4355396a4c/src/transformers/trainer.py#L1126-L1146
>
> I didn't write it, but from a quick read it appears that it's a yes to all of your suggestions.
>
> `self.use_amp` = native amp, `use_apex` is apex - so we are talking native amp here - that is the branches with `use_amp = True`
>
> I'll step through with debugger to see that it is actually so.
looks good indeed ! basically I just know that the "found_inf_per_device" are populated in the "unscale_" step, so this key being absent points to this step being skipped somehow.<|||||>@stas00 https://github.com/facebookresearch/fairscale/pull/256 Fixes your repro on a single node, it's a side effect though (bucketing is effectively disabled on a single node), multinode + huggingface is still probably broken. It does look like the model somehow changes after construction, still finding my way around your codebase.
<|||||>re: --fp16: If the machine is clean it does break, my assumption is that the dist.reduce() in the ShardedGradScaler fails somehow in between the processes. Once it dies, one process stays up actually (visible with nvidia-smi), and all the subsequent runs will work fine. So if that helps it looks like it could be an issue with the dist init, or passing the settings around that.<|||||>> looks good indeed ! basically I just know that the "found_inf_per_device" are populated in the "unscale_" step, so this key being absent points to this step being skipped somehow.
So, OK, I have it setup in the debugger
It runs `unscale_` just fine, but doesn't find any `inf`:
```
inv_scale = self._scale.double().reciprocal().float()
found_inf = torch.full((1,), 0.0, dtype=torch.float32, device=self._scale.device)
optimizer_state["found_inf_per_device"] = self._unscale_grads_(optimizer, inv_scale, found_inf, False)
```
- `inv_scale = tensor([1.5259e-05], device='cuda:0')`
- `found_inf = tensor([0.], device='cuda:0')`
- `optimizer_state["found_inf_per_device"] = {}`
rerunning - inside `self._unscale_grads_` we get:
```
per_device_and_dtype_grads = {defaultdict: 1} defaultdict(<function GradScaler._unscale_grads_.<locals>.<lambda> at 0x7f895c2174c0>, {device(type='cuda', index=1): defaultdict(<class 'list'>, {torch.float32: [tensor([[nan, nan],\n [nan, nan]], device='cuda:1'), tensor([[nan, nan],\n [nan,
cuda:1 = {defaultdict: 1} defaultdict(<class 'list'>, {torch.float32: [tensor([[nan, nan],\n [nan, nan]], device='cuda:1'), tensor([[nan, nan],\n [nan, nan]], device='cuda:1'), tensor([[nan, nan],\n [nan, nan]], device='cuda:1'), tensor([[nan, nan],\n [nan,
default_factory = {type} <class 'list'>
torch.float32 = {list: 92} [tensor([[nan, nan],\n [nan, nan]], device='cuda:1'), tensor([[nan, nan],\n [nan, nan]], device='cuda:1'), tensor([[nan, nan],\n [nan, nan]], device='cuda:1'), tensor([[nan, nan],\n [nan, nan]], device='cuda:1'), tensor([nan, nan],
00 = {Tensor: 2} tensor([[nan, nan],\n [nan, nan]], device='cuda:1')
01 = {Tensor: 2} tensor([[nan, nan],\n [nan, nan]], device='cuda:1')
02 = {Tensor: 2} tensor([[nan, nan],\n [nan, nan]], device='cuda:1')
03 = {Tensor: 2} tensor([[nan, nan],\n [nan, nan]], device='cuda:1')
[...]
89 = {Tensor: 2} tensor([nan, nan], device='cuda:1')
90 = {Tensor: 2} tensor([nan, nan], device='cuda:1')
91 = {Tensor: 2} tensor([-12339.3516, -13527.4590], device='cuda:1')
```
and then this gets called:
```
for device, per_dtype_grads in per_device_and_dtype_grads.items():
    for grads in per_dtype_grads.values():
        torch._amp_foreach_non_finite_check_and_unscale_(grads,
                                                         per_device_found_inf.get(device),
                                                         per_device_inv_scale.get(device))
```
and it doesn't find anything.
I think the debugger also has a race condition and sometimes we get the first process (cuda:0) and other times the 2nd one (cuda:1).
ok I figured out how to switch threads in pycharm debugger, so I can now go back and forth between the 2 processes.
I haven't figured out how to do `.to()` calls in pycharm debugger yet - when those happen as a step it just hangs, so I have to carefully skip over those.
Now rerunning it again and hitting `cuda:0` `per_device_and_dtype_grads` is empty.
And yes, I have noticed that with `--fp16` only one process fails with this error - the other keeps on running.
So basically we are having this happen on one process but not the other. Somewhere must be a bug not running the same code for both processes.
(this has just changed in pytorch master - now all subprocesses will die nicely w/o leaving zombies - by yours truly ;)<|||||>thanks for the backtrace ! so, for one the fact that the grads are not all the same is expected with this method, the grads are sharded across the ranks (ie: partitioned), depending on which parameters each rank will optimize. The ShardedGradScaler should be aware of that, and syncs in between the ranks to make sure that they all get the same knowledge, looks like this fails somehow then. Having a quick look right now
(well done for the zombie process destruction ! now somehow if the zombie process is still here the next run "works")<|||||>So it appears that here:
```
self.scaler.scale(loss).backward()
```
doesn't set `param.grad`s in `cuda:0` but sets it in `cuda:1`
To quickly see that add:
```
for group in self.optimizer.param_groups:
    for param in group["params"]:
        print(f"{self.args.local_rank}: {param.grad}")
```
after:
https://github.com/huggingface/transformers/blob/dc9f24544291b25b44c9e87239a0ef4355396a4c/src/transformers/trainer.py#L1139
So on `cuda:0` it goes into `backward` and almost immediately returns. some flag must be off.
I checked that `self.scaler.scale(loss).requires_grad == True` on both, so it's not that.
And `unscale_` fails to find any data, because all grads are None for `self.optimizer` on `cuda:0` and that's where it crashes.
(Debugging parallel processes proved to be far from easy - not sure why - half the time the pycharm debugger either gets stuck or suddenly can't see the other process - so it's very slow comparing what the difference is between the two sides. Ideally I need to be able to run both processes side by side - each in its own debugger - then it'd be much easier to find the divergence.)
edit: just checked that, not the case, rank/device match at construction time and during the first step<|||||>Ah ok, got to the bottom of the first bug. There are params which don't require grad, I was skipping them, my bad. Fixing that properly so that multi-node also works for you.
edit: now this is properly done (proverbial second fix, which covers multinode)<|||||>ok, now on the fp16 issue, there's at least one thing which cannot work well in that case: the gradient clipping.
https://github.com/huggingface/transformers/blob/dc9f24544291b25b44c9e87239a0ef4355396a4c/src/transformers/trainer.py#L809
To explain a little: when ShardedDDP is used, `torch.nn.utils.clip_grad_norm_` is blind to the sharding (same issue as with the scaler), in that it will only consider the norm of the gradients present on this rank; it's very much not aware of the distributed nature of the problem. I think that we should improve the API so that we don't need to change all these little pieces, this could be done with torch RPC for instance. We have a solution right now, in that we provide a shard-aware gradient clipping in https://github.com/facebookresearch/fairscale/blob/master/fairscale/optim/oss.py#L219
whose interface is similar.
I'm not sure that it's the only issue with fp16, but that's one of them for sure.
<|||||>Ok, second and hopefully last issue with fp16 caught, works on my machine with the following "patch" around the clipping (to select fairscale's clipping when that makes sense), and an incoming fix in Fairscale related to a broken partitioning in a pathological case.
Replace https://github.com/huggingface/transformers/blob/dc9f24544291b25b44c9e87239a0ef4355396a4c/src/transformers/trainer.py#L809
with
```
if self.use_amp:
# AMP: gradients need unscaling
self.scaler.unscale_(self.optimizer)
if hasattr(self.optimizer, "clip_grad_norm"):
# Sharded optimizer, specific gradient clipping
self.optimizer.clip_grad_norm(self.args.max_grad_norm)
else:
# Vanilla -monolithic- clipping, handling Apex or full precision
torch.nn.utils.clip_grad_norm_(
amp.master_params(self.optimizer) if self.use_apex else model.parameters(),
self.args.max_grad_norm,
)
```
<|||||>With all the linked PRs it works for me with --fp16 and ShardedDDP, and the speed bumps up nicely, AMP basically doubles the throughput.<|||||>Wow, thanks a lot for all this debugging @blefaudeux ! I'll draft a quick patch for the `Trainer` gradient clipping and tag you on the PR.<|||||>Patch is in the PR linked above!<|||||>Amazing! Thank you so much, @blefaudeux!
I merged all the suggested code and everything works. Yay!
**Except it's 30% slower w/ sharded_ddp**
> @blefaudeux wrote:
> With all the linked PRs it works for me with --fp16 and ShardedDDP, and the speed bumps up nicely, AMP basically doubles the throughput.
So I'm not seeing what you're seeing, @blefaudeux
What I did.
1. hf:
rebased to include https://github.com/huggingface/transformers/pull/9168
2. fairscale:
```
git checkout -b hf
# https://github.com/facebookresearch/fairscale/pull/262
git cherry-pick e305f10e81db95d14c9edd3f9e1e18b0fb2847fd
# (had to resolve a simple conflict)
# https://github.com/facebookresearch/fairscale/pull/259
git cherry-pick ad28820
git cherry-pick 705c188
git cherry-pick d6aa285
git cherry-pick a315ee3
# only this works for me at the moment for pytorch-nightly
python setup.py bdist_wheel
pip uninstall -y fairscale; pip install dist/fairscale-0.1.1-cp38-cp38-linux_x86_64.whl
```
# speed benchmarks
note I switched to the real model `sshleifer/distill-mbart-en-ro-12-4` (`sshleifer/tiny-mbart` was perfect for testing mechanics)
```
# base-line (no --sharded_ddp / no --fp16)
export BS=4; rm -r output_dir; CUDA_VISIBLE_DEVICES=0,1 PYTHONPATH=../../src USE_TF=0 python -m torch.distributed.launch --nproc_per_node=2 --master_port=9910 ./finetune_trainer.py --model_name_or_path sshleifer/distill-mbart-en-ro-12-4 --output_dir output_dir --adam_eps 1e-06 --data_dir wmt_en_ro --do_train --freeze_embeds --label_smoothing 0.1 --learning_rate 3e-5 --logging_first_step --logging_steps 1000 --max_source_length 128 --max_target_length 128 --num_train_epochs 1 --overwrite_output_dir --per_device_train_batch_size $BS --sortish_sampler --src_lang en_XX --task translation --tgt_lang ro_RO --val_max_target_length 128 --warmup_steps 500 --n_train 500
2020-12-17 10:44:55 | INFO | __main__ | train_runtime = 28.0357
# --sharded_ddp
2020-12-17 10:41:02 | INFO | __main__ | train_runtime = 40.4482
### fp16
# --fp16
2020-12-17 10:43:30 | INFO | __main__ | train_runtime = 29.4658
# --sharded_ddp --fp16
2020-12-17 10:38:53 | INFO | __main__ | train_runtime = 39.4722
```
<|||||>Also, clearly it can be seen from the benchmark that we don't gain from --fp16 at the baseline (before adding fairscale) - so something is not right there either.<|||||>In the good news, I can squeeze 3x batch size with ` --sharded_ddp` as compared to the baseline before my card OOMs. Amazing!
So with BS=12 (baseline I could do only 4)
```
# --sharded_ddp
2020-12-17 11:10:17 | INFO | __main__ | train_runtime = 17.2038
# --sharded_ddp --fp16
2020-12-17 11:07:32 | INFO | __main__ | train_runtime = 15.2403
```
So the total training time is about ~50% as compared to w/o `--sharded_ddp` but must increase BS 3 times.
<|||||>quality is about the same - about 0.1% worse with ` --sharded_ddp` and same BS on a quick test.
```
# baseline
export BS=4; rm -r output_dir; CUDA_VISIBLE_DEVICES=0,1 PYTHONPATH=../../src USE_TF=0 python -m torch.distributed.launch --nproc_per_node=2 --master_port=9910 ./finetune_trainer.py --model_name_or_path sshleifer/distill-mbart-en-ro-12-4 --output_dir output_dir --adam_eps 1e-06 --data_dir wmt_en_ro --do_train --freeze_embeds --label_smoothing 0.1 --learning_rate 3e-5 --logging_first_step --logging_steps 1000 --max_source_length 128 --max_target_length 128 --num_train_epochs 1 --overwrite_output_dir --per_device_train_batch_size $BS --sortish_sampler --src_lang en_XX --task translation --tgt_lang ro_RO --val_max_target_length 128 --warmup_steps 500 --n_train 500 --do_eval --eval_steps 25000 --evaluation_strategy=steps --n_val 200 --per_device_eval_batch_size $BS --do_predict --predict_with_generate --n_test 200 --fp16
2020-12-17 11:20:28 | INFO | __main__ | train_runtime = 29.6081
2020-12-17 11:20:50 | INFO | __main__ | val_bleu = 26.3473
2020-12-17 11:21:20 | INFO | __main__ | test_bleu = 25.7341
# run 2
2020-12-17 11:52:40 | INFO | __main__ | val_bleu = 26.3473
2020-12-17 11:53:10 | INFO | __main__ | test_bleu = 25.7341
# --sharded_ddp
export BS=4; rm -r output_dir; CUDA_VISIBLE_DEVICES=0,1 PYTHONPATH=../../src USE_TF=0 python -m torch.distributed.launch --nproc_per_node=2 --master_port=9910 ./finetune_trainer.py --model_name_or_path sshleifer/distill-mbart-en-ro-12-4 --output_dir output_dir --adam_eps 1e-06 --data_dir wmt_en_ro --do_train --freeze_embeds --label_smoothing 0.1 --learning_rate 3e-5 --logging_first_step --logging_steps 1000 --max_source_length 128 --max_target_length 128 --num_train_epochs 1 --overwrite_output_dir --per_device_train_batch_size $BS --sortish_sampler --src_lang en_XX --task translation --tgt_lang ro_RO --val_max_target_length 128 --warmup_steps 500 --n_train 500 --do_eval --eval_steps 25000 --evaluation_strategy=steps --n_val 200 --per_device_eval_batch_size $BS --do_predict --predict_with_generate --n_test 200 --fp16 --sharded_ddp
2020-12-17 11:25:23 | INFO | __main__ | train_runtime = 39.4316
2020-12-17 11:25:46 | INFO | __main__ | val_bleu = 26.2563
2020-12-17 11:26:16 | INFO | __main__ | test_bleu = 25.5779
# run 2
2020-12-17 11:28:32 | INFO | __main__ | val_bleu = 26.1359
2020-12-17 11:29:02 | INFO | __main__ | test_bleu = 25.6613
# run 3
2020-12-17 11:30:51 | INFO | __main__ | val_bleu = 26.1359
2020-12-17 11:31:21 | INFO | __main__ | test_bleu = 25.6067
# run 4
2020-12-17 11:50:05 | INFO | __main__ | val_bleu = 26.1359
2020-12-17 11:50:35 | INFO | __main__ | test_bleu = 25.5889
```
larger BS is much faster, while the eval is on par
```
# --sharded_ddp
export BS=12; rm -r output_dir; CUDA_VISIBLE_DEVICES=0,1 PYTHONPATH=../../src USE_TF=0 python -m torch.distributed.launch --nproc_per_node=2 --master_port=9910 ./finetune_trainer.py --model_name_or_path sshleifer/distill-mbart-en-ro-12-4 --output_dir output_dir --adam_eps 1e-06 --data_dir wmt_en_ro --do_train --freeze_embeds --label_smoothing 0.1 --learning_rate 3e-5 --logging_first_step --logging_steps 1000 --max_source_length 128 --max_target_length 128 --num_train_epochs 1 --overwrite_output_dir --per_device_train_batch_size $BS --sortish_sampler --src_lang en_XX --task translation --tgt_lang ro_RO --val_max_target_length 128 --warmup_steps 500 --n_train 500 --do_eval --eval_steps 25000 --evaluation_strategy=steps --n_val 200 --per_device_eval_batch_size $BS --do_predict --predict_with_generate --n_test 200 --fp16 --sharded_ddp
2020-12-17 11:22:26 | INFO | __main__ | train_runtime = 15.2409
2020-12-17 11:22:40 | INFO | __main__ | val_bleu = 26.466
2020-12-17 11:23:01 | INFO | __main__ | test_bleu = 25.6237
```
<|||||>Hi,
thanks a lot for fixing this issue, it is great to have this option working. Do you mind including this command for faster training of seq2seq models, and some documentation on what the sharded_ddp option is and how much it helps, in the README of the seq2seq folder? Thanks. <|||||>We will do so soon. Since it was just added we don't have enough solid stats to advertise % improvements specific to transformers, so please give us some time.
Until then you can just add `--sharded_ddp` and see for yourself. One thing I noticed is that I can use a 3 times bigger batch size w/ it.
**edit**: oh but wait, the fairscale master doesn't have the PRs with important fixes merged yet, so it's basically not ready yet unless you want to manually merge the changes. I guess you could use my branch where I did the merges already if it's urgent...
Wrt/ Optimizer state sharding, you can read the documentation in the paper on DeepSpeed/ZeRO https://arxiv.org/abs/1910.02054
If I'm not mistaken the exact entry is 5.1 Pos : Optimizer State Partitioning
But I don't know whether it's the same or not, since this is fairscale implementation.
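To make that a bit more concrete, roughly what the fairscale wiring looks like (a sketch only; see the fairscale repo for the authoritative API):
```python
import torch
from fairscale.nn.data_parallel import ShardedDataParallel
from fairscale.optim.oss import OSS

def wrap_for_sharded_ddp(model: torch.nn.Module, lr: float = 3e-5):
    # OSS shards the optimizer state across ranks (the ZeRO "Pos" idea);
    # ShardedDataParallel then routes each gradient to the rank that owns it.
    optimizer = OSS(params=model.parameters(), optim=torch.optim.AdamW, lr=lr)
    model = ShardedDataParallel(model, optimizer)
    return model, optimizer
```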
@msbaines, what's the best place for us to link to to explain what fairscale's sharded optimizer does? I didn't find any docs on your repo/website. Thank you!<|||||>hey there, trying to cover a couple of questions:
- fp16 : should almost always give a boost, I was seeing close to 2x a couple of days back (on P100s with the above test case), could depend on your hardware ? (and workload of course, if IO is the bottleneck that wouldn't be the case)
- speed impact of shardedDDP at iso-batch size: there's some speed lost indeed, it's probably not fundamental though, could be improved on our end without any change on your end. The two ongoing PRs for hugginface should help a bit for instance. Some time lost is not trivial due to all the code being python and some impact on GIL contention for instance (the reduce involves a lot of hooks being called to redirect the gradients to the right ranks). Longer term, ShardedDDP should just be a "mode of operation" of Pytorch's DDP, which is mostly cpp, so speed will probably improve, consider the current state as a baseline. Note that the state saving support is not very elegant right now, and is very slow, so if this counts against the throughput (if there are a lot of checkpoints) you'll see a difference indeed. We plan to improve on that asap
- what does shardedDDP do ? Pos + Pg from the zero paper + mixed precision if you use AMP of course, in plain english optimizer state sharding + gradient sharding + automatic mixed precision (from PyTorch of course).<|||||>> fp16 : should almost always give a boost, I was seeing close to 2x a couple of days back (on P100s with the above test case), could depend on your hardware ? (and workload of course, if IO is the bottleneck that wouldn't be the case)
Yes, honestly I'm not sure how to approach this. @sgugger says he sees the improvements, but he is on a different hardware and pytorch.
I have a partially supported rtx-3090 and gtx-1070, so obviously the older card slows down the new one, but any speed up should be relative to that slowdown. Perhaps it has to do with the incomplete rtx-3090 support. I'm impatiently waiting for cuda-11.2 pytorch support, which supposedly should provide the full power. I'm also on pytorch-nightly - not sure if it makes a difference.
I may have to delegate benchmarking to others, since I don't see most of the benefits.
I'm in the process of doing the same integration for deepspeed and obviously have the same issue there. So I guess I will propose to merge what I have working and then either someone else will benchmark or wait till pytorch/cuda-11.2 is out. Holiday season doesn't help.
> speed impact of shardedDDP at iso-batch size: there's some speed lost indeed, it's probably not fundamental though, could be improved on our end without any change on your end. The two ongoing PRs for hugginface should help a bit for instance. Some time lost is not trivial due to all the code being python and some impact on GIL contention for instance (the reduce involves a lot of hooks being called to redirect the gradients to the right ranks). Longer term, ShardedDDP should just be a "mode of operation" of Pytorch's DDP, which is mostly cpp, so speed will probably improve, consider the current state as a baseline. Note that the state saving support is not very elegant right now, and is very slow, so if this counts against the throughput (if there are a lot of checkpoints) you'll see a difference indeed. We plan to improve on that asap
I'm a bit confused about this vs the previous quoted comment - are you saying that it's normal that it should be slower when enabling shardedDDP for the same batch size and this whole time the alluded to speedup was in the ability to dramatically increase the batch size and therefore overall speed?
If this is so then my setup only has an issue with fp16, and it's just fine otherwise, since as I mentioned I can do 3x bigger BS and the overall speed is much much faster.
Please clarify my confusion. Thank you!
> what does shardedDDP do ? Pos + Pg from the zero paper + mixed precision if you use AMP of course, in plain english optimizer state sharding + gradient sharding + automatic mixed precision (from PyTorch of course).
Great, thank you! I wasn't sure whether you had some custom variations to the deepspeed paper.
So I suppose we quote these features and send readers to the ZeRO paper as I did 2 comments above for details?
<|||||>@sgugger, we have the new arg `--sharded_ddp` documented [here](https://huggingface.co/transformers/main_classes/trainer.html?highlight=trainer#trainingarguments), but where do you think we should document what shardedDDP does (and then there will be the same question for deepspeed).
I suppose it fits the best into the Training doc, I added a new section for integrations: https://github.com/huggingface/transformers/pull/9208
<|||||>> I'm a bit confused about this vs the previous quoted comment - are you saying that it's normal that it should be slower when enabling shardedDDP for the same batch size and this whole time the alluded to speedup was in the ability to dramatically increase the batch size and therefore overall speed?
trying to word that better:
- fp16 speed: I was trying to say that in both cases I was seeing x2, with the test case provided above. I think that your setup is a bit strange, so to say, I would suspect that's why you're not seeing it yourself
- shardedddp speed (orthogonal to fp16): speed when compared to ddp is in between 105% and 70% (iso batch), from what I've seen personally, I was trying to say that it's not completely set in stone and that improving on it should not require API changes. It's a good comparison point, out of principle it should not really be slower, and it has reasons for being a bit faster (the optimizer step is faster, since it only proceeds a shard and not the full model). The lowered speed mostly comes from GIL contention and slower cpu path, from what I understand, one of the PR currently up improves a bit on that. Then you can of course change the batch size, that's the whole point, glad that in the end the benefit is very clear there :)<|||||>Thank you for clarifying on both points, @blefaudeux
So GIL comes up a lot these days - is there a reason not to switch to multiprocessing and overcome GIL altogether? I suppose passing data around would be much slower between procs and thus threads are still faster in the particular context you use them?
Or since the core of things is written in C++, perhaps POSIX threads could be used as opposed to python? I can imagine that won't be easy to do either w/o rewriting the whole thing in C++ and abandoning python altogether.
<|||||>> Thank you for clarifying on both points, @blefaudeux
>
>
>
> So GIL comes up a lot these days - is there a reason not to switch to multiprocessing and overcome GIL altogether? I suppose passing data around would be much slower between procs and thus threads are still faster in the particular context you use them?
>
>
>
> Or since the core of things is written in C++, perhaps POSIX threads could be used as opposed to python? I can imagine that won't be easy to do either w/o rewriting the whole thing in C++ and abandoning python altogether.
>
>
(Multiprocessing) the issue is that you're deferring actions tied to torch.distributed.Work (ie: when this gradient has been reduced, if this rank does not own the corresponding parameter update then drop the gradient), and these pseudo futures are not easily shared across processes. Might well change over time, maybe that I missed a trick or two, just describing the current status.
(GIL) for the backward pass / reduce again, hooks are fired on a per parameter basis by Torch Distributed autograd engine (c++). Each hook handles python objects, so it needs to get hold of the GIL.
(C++/change language) there's an ongoing RFC on making DDP more composable, there are already quite a few options with the coms hooks for instance, so that the zero use case could be expressed in pure "ddp" blocks (which are cpp). In my opinion that's a reasonable perspective (easier said than done though), ShardedDDP and DDP are intrinsically doing a very similar job, and there are other concepts which would be super interesting to mix (slowmo, pipe, more mixed precision tweaks..) which is easier when all expressed on a common basis.<|||||>Thank you for this detailed answer, @blefaudeux!
So basically in some time things will get even faster, the future is looking bright!
<|||||>Hi
I am also could not get sharded_ddp to work on seq2seq, could you provide the version you used? please see the bug I opened here https://github.com/huggingface/transformers/issues/9215 thanks <|||||>As I clearly unsuccessfully tried to explain @blefaudeux proposed several PRs that made it possible to use `--sharded_ddp` but they are not in `fairscale` master yet and you need to manually merge the PRs until those PRs are merged into master.
Since I already merged these PRs into a local branch, you can use my branch to install fairscale that works with finetune_trainer.py as following:
```
git clone -b hf https://github.com/stas00/fairscale
cd fairscale
python setup.py bdist_wheel
pip uninstall -y fairscale
# adjust the specific whl filename if needed
pip install dist/fairscale-0.1.1-cp38-cp38-linux_x86_64.whl
```
and of course, you will need `transformers` master.
<|||||>> As I clearly unsuccessfully tried to explain @blefaudeux proposed several PRs that made it possible to use `--sharded_ddp` but they are not in `fairscale` master yet and you need to manually merge the PRs until those PRs are merged into master.
>
> Since I already merged these PRs into a local branch, you can use my branch to install fairscale that works with finetune_trainer.py as following:
>
> ```
> git clone -b hf https://github.com/stas00/fairscale
> cd fairscale
> python setup.py bdist_wheel
> pip uninstall -y fairscale
> # adjust the specific whl filename if needed
> pip install dist/fairscale-0.1.1-cp38-cp38-linux_x86_64.whl
> ```
>
> and of course, you will need `transformers` master.
Hi @stas00, I just pushed a new release (see https://github.com/facebookresearch/fairscale/releases/tag/v0.1.2), I hope that works. Thanks again for all the follow ups <|||||>Awesome! Thank you, @blefaudeux!
Could you please let me know when it's on pypi - then I will retest and we will merge the doc PR<|||||>> Awesome! Thank you, @blefaudeux!
>
> Could you please let me know when it's on pypi - then I will retest and we will merge the doc PR
cc @msbaines , I don't have rights on https://pypi.org/project/fairscale/ actually, and I don't think that it's automatically tied to our github releases<|||||>I think this may help you to automate the process:
https://packaging.python.org/guides/publishing-package-distribution-releases-using-github-actions-ci-cd-workflows/
I haven't used it myself, but this seems to be the best doc I found while searching for it. I personally have [an automated release process](https://github.com/stas00/ipyexperiments/blob/8f507c1bbed095140e14318e7727176201ced20c/Makefile#L145) that does the version bumping, the tagging, releasing and uploading to pypi/conda all in one command ;)<|||||>> I think this may help you to automate the process:
> https://packaging.python.org/guides/publishing-package-distribution-releases-using-github-actions-ci-cd-workflows/
>
> I haven't used it myself, but this seems to be the best doc I found while searching for it. I personally have [an automated release process](https://github.com/stas00/ipyexperiments/blob/8f507c1bbed095140e14318e7727176201ced20c/Makefile#L145) that does the version bumping, the tagging, releasing and uploading to pypi/conda all in one command ;)
@stas00 @msbaines done, 0.1.3 published, checking right now with huggingface but so far so good<|||||>> I think this may help you to automate the process:
> https://packaging.python.org/guides/publishing-package-distribution-releases-using-github-actions-ci-cd-workflows/
>
> I haven't used it myself, but this seems to be the best doc I found while searching for it. I personally have [an automated release process](https://github.com/stas00/ipyexperiments/blob/8f507c1bbed095140e14318e7727176201ced20c/Makefile#L145) that does the version bumping, the tagging, releasing and uploading to pypi/conda all in one command ;)
thanks for the pointers by the way @stas00, looking into that, probably github actions in the end<|||||>Also you may want to add `long_description` entry inside `setup.py` so that the resulting https://pypi.org/project/fairscale/ is not empty, e.g. see:
https://github.com/huggingface/transformers/blob/eef66035a28b935b7823c9a7ddc8c569e077ce11/setup.py#L250-L252
which results in a nice description at https://pypi.org/project/transformers/<|||||>> @stas00 @msbaines done, 0.1.3 published, checking right now with huggingface but so far so good
Thank you, @blefaudeux!
Unfortunately I can't build the pypi package w/ pt-nightly, until pt-1.8 gets released - due to rtx-3090 card, so I can only do it from source:
```
rm -r dist build
python setup.py bdist_wheel
pip uninstall -y fairscale
pip install dist/fairscale-*.whl
```
So I trust that your testing was successful
And I re-tested with master - all looks great.
So I'm going to merge the doc https://github.com/huggingface/transformers/pull/9208 and we can let users know that they training has just got a magical super-boost. I think @sgugger should do the honors since he integrated it.
This is very exciting! Thank you for your amazing support, @blefaudeux and your team! |
transformers | 9,155 | closed | evaluate_during_training is not acceptable in newer version of the Transformer | ## Environment info
- `transformers` version: 4.0.0
- Platform: Linux-3.10.0-1127.19.1.el7.x86_64-x86_64-with-centos-7.9.2009-Core
- Python version: 3.6.9
- PyTorch version (GPU?): 1.6.0 (True)
- Tensorflow version (GPU?): 2.3.0 (True)
- Using GPU in script?: <True>
- Using distributed or parallel set-up in script?: <False>
### Who can help
Trainer: @sgugger
## Information
Model I am using (Bert, XLNet ...):
The problem arises when using:
* [ ] the official example scripts: (give details below)
* [* ] my own modified scripts: (give details below)
*
config=RobertaConfig(vocab_size=30_000, max_position_embeddings=512, num_attention_heads=12, num_hidden_layers=12, hidden_dropout_prob=0.1, attention_probs_dropout_prob=0.1, initializer_range=0.2, intermediate_size=3072, type_vocab_size=1)
tokenizer=RobertaTokenizerFast.from_pretrained("XXX", max_len=512)
model=RobertaForMaskedLM(config=config)
dataset=LineByLineTextDataset(tokenizer, file_path="XXX")
data_collator=DataCollatorForLanguageModeling(tokenizer=tokenizer, mlm=True, mlm_probability=0.15)
training_args=TrainingArguments(output_dir="xxxx", overwrite_output_dir=True,
num_train_epochs=2, do_train=True, do_eval=True, **evaluate_during_training=True**,
per_gpu_train_batch_size=128, learning_rate=0.0004,
gradient_accumulation_steps=32,
logging_steps=2048,
warmup_steps=10000,
weight_decay=0.01,
eval_steps=2048,
save_steps=2048,
save_total_limit=2, prediction_loss_only=True)
trainer=Trainer(model=model, args=training_args, data_collator=data_collator, train_dataset=dataset, eval_dataset=dataseteval, **prediction_loss_only=True**)
trainer.train()
The tasks I am working on is:
* [* ] my own task or dataset: (give details below)
It is a dataset of several million lines of text.
## To reproduce
Steps to reproduce the behavior:
1. Having transformers installed
2. Add lines of code for accessing a training and evaluation dataset and run it
The following code was working fine with transformers 3.5 but since I updated the transformers version, it does not accept the following arguments:
evaluate_during_training=True
prediction_loss_only=True
The error is invalid arguments for both of them.
## Expected behavior
I need to have them to be able to check the validation loss during the model training. By removing the following arguments, the model just reports the training loss and not the evaluation loss during the training.
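For anyone landing on this issue, a rough sketch of the v4 equivalents of the two removed arguments (hedged; `evaluation_strategy` supersedes `evaluate_during_training`, and `prediction_loss_only` moved into `TrainingArguments`):
```python
from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir="output",
    evaluation_strategy="steps",  # replaces evaluate_during_training=True
    eval_steps=2048,
    prediction_loss_only=True,    # no longer passed to Trainer(...)
)
```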
| 12-16-2020 16:45:22 | 12-16-2020 16:45:22 | This argument was deprecated in transformers version 3.5 and removed in version 4.0, as indicated in the [release notes](https://github.com/huggingface/transformers/releases/). It needs to be replaced by `evaluation_strategy="steps"` or `evaluation_strategy="epoch"` |
transformers | 9,154 | closed | AutoModelForTableQuestionAnswering | Adds the `AutoModelForTableQuestionAnswering` | 12-16-2020 15:49:26 | 12-16-2020 15:49:26 | |
transformers | 9,153 | closed | BertForSequenceClassification and DistilBertForSequenceClassification use pooler output in different ways | Hi,
The `BertForSequenceClassification` includes a forward pass of the `BertModel`, and it takes the second element (index 1) from its output before moving forward, as shown [here](https://github.com/huggingface/transformers/blob/master/src/transformers/models/bert/modeling_bert.py#L1378).
This is the return of `BertModel`
```
return BaseModelOutputWithPoolingAndCrossAttentions(
    last_hidden_state=sequence_output,
    pooler_output=pooled_output,
    hidden_states=encoder_outputs.hidden_states,
    attentions=encoder_outputs.attentions,
    cross_attentions=encoder_outputs.cross_attentions,
)
```
hence `output[1]` is the `pooler_output`.
However, in `DistilBertForSequenceClassification`, it takes the first element (index 0) of the `DistilBertModel`'s forward pass, i.e. `distilbert_output[0]`, as shown [here](https://github.com/huggingface/transformers/blob/master/src/transformers/models/distilbert/modeling_distilbert.py#L625)
This is the last hidden state for all tokens.
Why is there this discrepancy between the two models? The behaviour of `DistilBertForSequenceClassification` makes more intuitive sense to me.
Why is `BertForSequenceClassification` using only the pooler_output from `BertModel` (i.e. the hidden state of the first token)? Why are all other hidden states of other tokens not needed here, but needed in the distilled version?
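To make the question concrete, this is roughly what the two heads do as I read them (simplified, not the exact library code):
```python
import torch

# BertForSequenceClassification (simplified): classify from the pooled [CLS] vector.
def bert_head(outputs, dropout, classifier):
    pooled_output = outputs[1]            # pooler_output = tanh(W * hidden_state[:, 0])
    return classifier(dropout(pooled_output))

# DistilBertForSequenceClassification (simplified): no pooler, so the head pools itself.
def distilbert_head(outputs, pre_classifier, dropout, classifier):
    hidden_state = outputs[0]             # (batch, seq_len, dim)
    pooled_output = hidden_state[:, 0]    # first token's hidden state
    pooled_output = torch.relu(pre_classifier(pooled_output))
    return classifier(dropout(pooled_output))
```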
Thanks for the help!
| 12-16-2020 15:36:36 | 12-16-2020 15:36:36 | BERT and DistilBERT are different models. DistilBERT isn't simply a BERT model with fewer layers, but a BERT model without the pooling layer as you have seen, and with no token type embeddings.
We try to stay as close to the original implementations as possible, hence why BERT is done this way, and why DistilBERT was done differently. I invite you to read the paper or study the original BERT codebase to see how it was done, it should be very similar (or the same) as it is done here.<|||||>Thanks a lot for clarifying @LysandreJik I'll close the issue then! |
transformers | 9,152 | closed | Add message to documentation that longformer doesn't support token_type_ids | # What does this PR do?
Fixes #9111. This pull request adds a notice to the Longformer model documentation that it does not have `token_type_ids`, similarly to the message on the [RoBERTa documentation](https://huggingface.co/transformers/model_doc/roberta.html).
## Before submitting
- [x] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [x] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [x] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/master/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/master/docs#writing-source-documentation).
- [x] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors which may be interested in your PR.
@sgugger (documentation)
@patrickvonplaten (longformer)
| 12-16-2020 15:16:11 | 12-16-2020 15:16:11 | Thanks for your PR! It looks like you did not run the `make style` command to format properly your changes.<|||||>@sgugger Sorry about that. It should be good now.<|||||>Thanks a lot! |
transformers | 9,151 | closed | Added TF CTRL Sequence Classification | This PR implements Sequence classification for TF CTRL model.
TFCTRLForSequenceClassification uses the last token in order to do the classification, as other causal models (e.g. Transformer-XL, GPT-2) do.
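A rough sketch of the idea (not the PR's actual code): the head pools the hidden state of each sequence's last non-padding token and classifies from that.
```python
import tensorflow as tf

def pool_last_token(hidden_states: tf.Tensor, input_ids: tf.Tensor, pad_token_id: int) -> tf.Tensor:
    # hidden_states: (batch, seq_len, hidden), input_ids: (batch, seq_len)
    seq_lengths = tf.reduce_sum(tf.cast(input_ids != pad_token_id, tf.int32), axis=-1) - 1
    batch_idx = tf.range(tf.shape(input_ids)[0])
    gather_idx = tf.stack([batch_idx, seq_lengths], axis=1)
    return tf.gather_nd(hidden_states, gather_idx)  # (batch, hidden)
```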
Fixes #7623
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [x] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/master/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/master/docs#writing-source-documentation).
- [x] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [x] Did you write any new necessary tests?
@jplu @LysandreJik | 12-16-2020 13:15:04 | 12-16-2020 13:15:04 | |
transformers | 9,150 | closed | Add flags to return scores, hidden states and / or attention weights in GenerationMixin | # What does this PR do?
Add flags and logic to return attention scores, hidden states, and/or logits when using a model in generation mode.
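A quick usage sketch of the new flags (illustrative; flag names as introduced in this PR):
```python
from transformers import GPT2LMHeadModel, GPT2Tokenizer

tokenizer = GPT2Tokenizer.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2")

inputs = tokenizer("Hello, my dog is", return_tensors="pt")
outputs = model.generate(
    **inputs,
    max_length=12,
    return_dict_in_generate=True,
    output_scores=True,
    output_hidden_states=True,
    output_attentions=True,
)
print(outputs.sequences.shape)  # generated token ids (prompt included)
print(len(outputs.scores))      # one logits tensor per generated step
```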
Fixes #9121
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [x] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case. Issue: #9121
- [x] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/master/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/master/docs#writing-source-documentation).
- [x] Did you write any new necessary tests?
## Who can review?
@patrickvonplaten
=> Check out forum post for more detail: https://discuss.huggingface.co/t/announcement-generationoutputs-scores-attentions-and-hidden-states-now-available-as-outputs-to-generate/3094
## UPDATE:
This PR is almost ready for merge. Here are the last things to take care of:
- [x] @patrickvonplaten, @SBrandeis, @LysandreJik, @sgugger - check if naming of return arguments is ok.
- [x] @patrickvonplaten add a correct `_check_outputs()` function for special models (Reformer, XLNet, XLM, TransfoXL, ...)
- [x] @SBrandeis We could think about a simple code snippet to advertise the new feature. Maybe use "Beam Search" for translation and show the different "sequence_scores" (probs) for 3 different translations.
## Future PR:
- [ ] Change the output in all notebooks, examples, tests to `return_dict_in_generate=True`.
- [ ] Even further in the future PR: Think about a way to deprecate the default usage of `return_dict_in_generate=False`.
- [ ] @patrickvonplaten Discuss with @sgugger if model documentation is ok. @sgugger - I think `generate()` might deserve its own "main" doc page maybe in "main classes" under "Models". I would add a small description at the top, then add all the generate functions and the new "Generation Outputs" there. Do you think this makes sense or would you rather keep "generate" as a subsection under "Models"? IMO, "generate" could be more visible in the docs.
- [ ] @patrickvonplaten, @LysandreJik, @sgugger - this PR adds a lot more generate testing to many models. Since `generate` is quite an expensive method, it might be worth checking here that this PR does not yield a significant slowdown of the tests. Simon and I tried to make the generate test as short and light-weight as possible, but it might still be significant.
| 12-16-2020 12:56:03 | 12-16-2020 12:56:03 | @patrickvonplaten ready for a second review
I guess the next steps are to implement GreedySearchOutput for TF and to update the pipelines using generation?<|||||>Think it should now be relatively straight-forward to implement the outputs for the other generate methods. Seems like circle ci is on holiday...let's check back on Monday again<|||||>When will this PR be merged? Now I have a model which needs the score of generated sequences.<|||||>Hi @sgugger and @LysandreJik, thanks for the review! I made the suggested changes to the documentation.<|||||>Thanks for your work on this @SBrandeis, great job!<|||||>Nice work, am I correct that this only works for the PyTorch models?<|||||>When would this be added to a release version?
@SBrandeis How would one go about converting these scores into probabilities? Specifically, the probability of the word when it was generated? Seems like we would have had to softmax during generation and store that value rather than the raw score? Does that mean I am still left to roll something custom to do that, or am I missing something?<|||||>Hey @mshuffett, we will do a release later today, so this will be included in the release.
Regarding how to turn the scores into probabilities, please see this discussion on the forum: https://discuss.huggingface.co/t/generation-probabilities-how-to-compute-probabilities-of-output-scores-for-gpt2/3175
We don't want to add this inside `generate()` because there are all kinds of probs one could calculate and we want to keep it as "barebone" as possible for better maintenance. |
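(For greedy search outputs, a rough sketch of doing that conversion yourself; illustrative only:)
```python
import torch

def chosen_token_probs(outputs):
    # outputs: result of generate(..., return_dict_in_generate=True, output_scores=True)
    gen_tokens = outputs.sequences[:, -len(outputs.scores):]
    per_step = []
    for step, step_scores in enumerate(outputs.scores):
        step_probs = torch.softmax(step_scores, dim=-1)
        per_step.append(step_probs.gather(1, gen_tokens[:, step : step + 1]))
    return torch.cat(per_step, dim=1)  # (batch, num_generated_tokens)
```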
transformers | 9,149 | closed | Log metrics along with hparams in TensorBoardCallback | # 🚀 Feature request
Log metrics along with hparams in TensorBoardCallback.
## Motivation
It seems useless to log hparams with an empty metric dict, because the training arguments are already logged in the text section. Right now, users only see a blank screen when they click on the HPARAMS tab in TensorBoard. So I think it would be better to call `add_hparams` with the evaluation metrics when they are available, and otherwise not call this function at all. A rough sketch of the idea is below.
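For illustration, a rough sketch of the idea as a custom callback (the class below is hypothetical, not the current implementation):
```python
from transformers.integrations import TensorBoardCallback
class HParamsWithMetricsCallback(TensorBoardCallback):
    """Log the hyperparameters together with evaluation metrics so the HPARAMS tab is populated."""
    def on_evaluate(self, args, state, control, metrics=None, **kwargs):
        if self.tb_writer is not None and metrics is not None:
            # add_hparams expects a numeric metric_dict, so keep only numeric metrics
            numeric_metrics = {k: v for k, v in metrics.items() if isinstance(v, (int, float))}
            self.tb_writer.add_hparams(args.to_sanitized_dict(), metric_dict=numeric_metrics)
```
It could be registered in place of the default callback, e.g. via `trainer.remove_callback(TensorBoardCallback)` followed by `trainer.add_callback(HParamsWithMetricsCallback)`.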
| 12-16-2020 12:15:28 | 12-16-2020 12:15:28 | This issue has been automatically marked as stale and been closed because it has not had recent activity. Thank you for your contributions.
If you think this still needs to be addressed please comment on this thread. |
transformers | 9,148 | closed | DistilBertForSequenceClassification | DistilBertForSequenceClassification
fix small shape error in comments | 12-16-2020 10:26:53 | 12-16-2020 10:26:53 | |
transformers | 9,147 | closed | BertTokenizer.from_pretrained fails for local_files_only=True when added_tokens.json is missing | ## Environment info
- `transformers` version: 4.0.1
- Platform: Linux-3.10.0-957.el7.x86_64-x86_64-with-centos-7.6.1810-Core
- Python version: 3.7.6
- PyTorch version (GPU?): 1.7.1 (False)
- Tensorflow version (GPU?): not installed (NA)
- Using GPU in script?: no
- Using distributed or parallel set-up in script?: no
### Who can help
@mfuntowicz
## Information
Model I am using (Bert, XLNet ...): `google/bert_uncased_L-2_H-128_A-2`
The problem arises when using:
* [ ] the official example scripts: (give details below)
* [x] my own modified scripts: (give details below)
The tasks I am working on is:
* [ ] an official GLUE/SQUaD task: (give the name)
* [ ] my own task or dataset: (give details below)
## To reproduce
Run the following:
```
from transformers import BertTokenizer
BertTokenizer.from_pretrained('google/bert_uncased_L-2_H-128_A-2')
BertTokenizer.from_pretrained('google/bert_uncased_L-2_H-128_A-2', local_files_only=True)
```
In the Python interpreter, this produces the following error:
```
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "/gscratch/cse/julianjm/anaconda3/lib/python3.7/site-packages/transformers-4.0.1-py3.8.egg/transformers/tokenization_utils_base.py", line 1747, in from_pretrained
File "/gscratch/cse/julianjm/anaconda3/lib/python3.7/site-packages/transformers-4.0.1-py3.8.egg/transformers/file_utils.py", line 1007, in cached_path
File "/gscratch/cse/julianjm/anaconda3/lib/python3.7/site-packages/transformers-4.0.1-py3.8.egg/transformers/file_utils.py", line 1171, in get_from_cache
ValueError: Cannot find the requested files in the cached path and outgoing traffic has been disabled. To enable model look-ups and downloads online, set 'local_files_only' to False.
```
Looking more closely, I have isolated the issue to the logic [here](https://github.com/huggingface/transformers/blob/master/src/transformers/tokenization_utils_base.py#L1774). In this case, the error is because the cached path for the url `https://huggingface.co/google/bert_uncased_L-2_H-128_A-2/resolve/main/added_tokens.json` cannot be found in the cache when `local_files_only=True`. This is because the URL 404s; i.e., the file does not exist.
When `local_files_only=False`, the GET returns a 404 and the tokenizer init code just ignores the missing file. However, when `local_files_only=True` and the file is not found, it throws a `ValueError` instead which is not caught.
What makes this non-trivial is that without making HTTP requests, there is no way of telling the difference between a file that doesn't exist and a file which exists but hasn't been downloaded. It seems to me that there are several potential ways of fixing the issue.
1. Ensure that all files exist. Don't let people upload incomplete sets of files (and fix the ones which are currently incomplete).
2. Recover from 404s by caching an "empty" file here. But this only works where there is a meaningful notion of "empty" file, like lists of tokens. I think this would not work for json files or serialized models.
3. Put a special kind of file in the cache which says "hey, this file isn't supposed to exist", and handle appropriately everywhere files are loaded. Potentially could throw a special error saying the file isn't supposed to exist; HTTP 404s could then be caught and re-thrown as this special error, so, the case could be handled uniformly.
4. Just log a warning for files that aren't in the cache, and treat them like 404s. Wild west, but at least if the code unexpectedly fails later the user will be able to guess the problem. Easy to implement, but will worsen the UX every time someone tries to use `local_files_only` without downloading the model first.
Option 3 seems the cleanest to me, while option 4 is what I'm shunting into my transformers egg for now so I can keep working.
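To make option 4 concrete, here is a rough sketch of the idea (the helper name and error handling below are hypothetical; this is not the actual `transformers` code):
```python
import logging
import requests
from transformers.file_utils import cached_path
logger = logging.getLogger(__name__)
def resolve_optional_files(file_urls, local_files_only=False):
    """Treat 'missing from the local cache' the same way an online 404 is treated: skip the file."""
    resolved = {}
    for file_id, url in file_urls.items():
        try:
            resolved[file_id] = cached_path(url, local_files_only=local_files_only)
        except (requests.exceptions.HTTPError, EnvironmentError, ValueError):
            logger.warning("Couldn't resolve %s; assuming it doesn't exist for this model.", url)
            resolved[file_id] = None
    return resolved
```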
## Expected behavior
After downloading, I would expect any artifact to be loadable from cache and equivalent to the downloaded one.
| 12-16-2020 08:28:14 | 12-16-2020 08:28:14 | Actually, all of the files 404 here except `vocab.txt`. I have `added_tokens.json`, `special_tokens_map.json`, `tokenizer_config.json`, and `tokenizer.json` all missing for this model.<|||||>> Actually, all of the files 404 here except `vocab.txt`. I have `added_tokens.json`, `special_tokens_map.json`, `tokenizer_config.json`, and `tokenizer.json` all missing for this model.
If these files are missing, even `BertTokenizer.from_pretrained('google/bert_uncased_L-2_H-128_A-2')` should give an error; however, it passes due to the code below. Is there any particular reason this logic was added in the code mentioned below:
https://github.com/huggingface/transformers/blob/master/src/transformers/file_utils.py#L1232<|||||>@hlahkar Are you sure? The code you linked seems to just check for `requests.exceptions.ConnectionError` and `requests.exceptions.Timeout`. I think a 404 will raise a `requests.exceptions.HTTPError`, which bubble up to be thrown by `get_from_cache`, through `cached_path`, and then [here](https://github.com/huggingface/transformers/blob/master/src/transformers/tokenization_utils_base.py#L1774) where it is then caught and ignored.
In fact, my hacky workaround was to replace [this line](https://github.com/huggingface/transformers/blob/master/src/transformers/file_utils.py#L1257) with `raise requests.exceptions.HTTPError("404 Client Error")`, so the same thing happens when `local_files_only=True`; now I can load the tokenizer in that case.<|||||>> @hlahkar Are you sure? The code you linked seems to just check for `requests.exceptions.ConnectionError` and `requests.exceptions.Timeout`. I think a 404 will raise a `requests.exceptions.HTTPError`, which bubble up to be thrown by `get_from_cache`, through `cached_path`, and then [here](https://github.com/huggingface/transformers/blob/master/src/transformers/tokenization_utils_base.py#L1774) where it is then caught and ignored.
>
> In fact, my hacky workaround was to replace [this line](https://github.com/huggingface/transformers/blob/master/src/transformers/file_utils.py#L1257) with `raise requests.exceptions.HTTPError("404 Client Error")`, so the same thing happens when `local_files_only=True`; now I can load the tokenizer in that case.
My concern is: should we not also go into the error flow whenever we get a 404 error? Otherwise it might give the user a false sense that everything is working.<|||||>In my previous comment, I mentioned the wrong line number. My question is: why is the 404 error ignored in the code segment below:
https://github.com/huggingface/transformers/blob/master/src/transformers/tokenization_utils_base.py#L1784<|||||>So, is this problem solved in any way?
It seems it is now impossible to use most Bert-like models without the Internet connection, even though all the model files are cached.
Transformers tries to get the `added_tokens.json` file, can't find it, and fails with "ValueError: Connection error, and we cannot find the requested files in the cached path. Please try again or make sure your Internet connection is on."
This is really bothersome on HPC systems, where compute nodes are often offline by design.<|||||>@akutuzov which version of transformers are you on?
I agree that this is a bug that we should solve, cc @LysandreJik @sgugger <|||||>Taking a look.<|||||>@julien-c I use Transformers 4.1.1<|||||>Aimed to fix that in #9807, feedback appreciated @julianmichael <|||||>The PR looks good as a stopgap — I guess the subsequent check [at L1766](https://github.com/huggingface/transformers/pull/9807/files#diff-85b29486a884f445b1014a26fecfb189141f2e6b09f4ae701ee758a754fddcc1R1766) will catch the case where the tokenizer hasn't been downloaded yet since no files should be present. But is this problem necessarily only for tokenizers? It seems like a general issue which is going to hold for any cached resources that have optional files. It might be cleaner to handle it in the file cache itself. But that's a much bigger issue I guess.<|||||>I believe this is only the case for tokenizers. The two other that could be possibly affected by this are:
- Configuration downloads -> downloads a single file
- Model downloads -> downloads the configuration file and the model state dict, both of which are necessary and need to raise an error if missing.
Let me know if you think I'm missing something and I'll see what we can do. <|||||>Ok, sounds good. No need for unnecessary/premature refactoring then :) |
transformers | 9,146 | closed | Ray tune hyperparameters search error | ## Environment info
- `transformers` version: 4.1.0.dev0
- Platform: Linux-4.4.0-139-generic-x86_64-with-glibc2.10
- Python version: 3.8.5
- PyTorch version (GPU?): 1.7.1 (True)
- Tensorflow version (GPU?): 2.3.1 (True)
- Using GPU in script?: Yes
- Using distributed or parallel set-up in script?: Yes
### Who can help
@sgugger
## Information
Model I am using (Bert, XLNet ...): Roberta-large
The problem arises when using:
* [x] my own modified scripts: (give details below)
The tasks I am working on is:
* [x] an official GLUE/SQUaD task: GLUE SST-2
## To reproduce
Steps to reproduce the behavior:
1. I wanted to do a hyperparameter search so I referred to https://huggingface.co/blog/ray-tune and modified the `examples/text-classification/run_glue.py` replacing the training part with
```
def model_init():
model = AutoModelForSequenceClassification.from_pretrained(
model_args.model_name_or_path,
from_tf=bool(".ckpt" in model_args.model_name_or_path),
config=config,
cache_dir=model_args.cache_dir,
)
return model
trainer = Trainer(
args=training_args,
train_dataset=train_dataset,
eval_dataset=eval_dataset if training_args.do_eval else None,
compute_metrics=compute_metrics,
tokenizer=tokenizer,
# Data collator will default to DataCollatorWithPadding, so we change it if we already did the padding.
data_collator=default_data_collator if data_args.pad_to_max_length else None,
model_init=model_init,
)
```
```
# Training
if training_args.do_train:
from ray import tune
import ray
ray.init()
best_trial = trainer.hyperparameter_search(
hp_space=lambda _ : {"seed": tune.grid_search([31, 42, 53])},
direction="maximize",
backend="ray",
)
logger.info(" Best run %s" % str(best_trial))
```
2. Run `python run_glue.py --model_name_or_path roberta-large --do_train --do_eval --per_gpu_train_batch_size 8 --output_dir hypersearch-0 --task_name sst2 --evaluation_strategy steps --eval_steps 20 --logging_steps 10`
Then the script exited with exception:
```
Traceback (most recent call last):
File "run_glue.py", line 428, in <module>
main()
File "run_glue.py", line 359, in main
best_trial = trainer.hyperparameter_search(
File "/data1/howard/transformers/src/transformers/trainer.py", line 1039, in hyperparameter_search
best_run = run_hp_search(self, n_trials, direction, **kwargs)
File "/data1/howard/transformers/src/transformers/integrations.py", line 241, in run_hp_search_ray
analysis = ray.tune.run(_objective, config=trainer.hp_space(None), num_samples=n_trials, **kwargs)
File "/home/howard/anaconda3/envs/transformers/lib/python3.8/site-packages/ray/tune/tune.py", line 299, in run
experiments[i] = Experiment(
File "/home/howard/anaconda3/envs/transformers/lib/python3.8/site-packages/ray/tune/experiment.py", line 138, in __init__
self._run_identifier = Experiment.register_if_needed(run)
File "/home/howard/anaconda3/envs/transformers/lib/python3.8/site-packages/ray/tune/experiment.py", line 276, in register_if_needed
register_trainable(name, run_object)
File "/home/howard/anaconda3/envs/transformers/lib/python3.8/site-packages/ray/tune/registry.py", line 71, in register_trainable
_global_registry.register(TRAINABLE_CLASS, name, trainable)
File "/home/howard/anaconda3/envs/transformers/lib/python3.8/site-packages/ray/tune/registry.py", line 124, in register
self.flush_values()
File "/home/howard/anaconda3/envs/transformers/lib/python3.8/site-packages/ray/tune/registry.py", line 146, in flush_values
_internal_kv_put(_make_key(category, key), value, overwrite=True)
File "/home/howard/anaconda3/envs/transformers/lib/python3.8/site-packages/ray/experimental/internal_kv.py", line 27, in _internal_kv_put
updated = worker.redis_client.hset(key, "value", value)
File "/home/howard/anaconda3/envs/transformers/lib/python3.8/site-packages/redis/client.py", line 3004, in hset
return self.execute_command('HSET', name, key, value)
File "/home/howard/anaconda3/envs/transformers/lib/python3.8/site-packages/redis/client.py", line 877, in execute_command
conn.send_command(*args)
File "/home/howard/anaconda3/envs/transformers/lib/python3.8/site-packages/redis/connection.py", line 720, in send_command
self.send_packed_command(self.pack_command(*args),
File "/home/howard/anaconda3/envs/transformers/lib/python3.8/site-packages/redis/connection.py", line 712, in send_packed_command
raise ConnectionError("Error %s while writing to socket. %s." %
redis.exceptions.ConnectionError: Error 104 while writing to socket. Connection reset by peer.
```
## Expected behavior
The script should run without errors.
## Related Issues
https://github.com/ray-project/ray/issues/2931
https://ray.readthedocs.io/en/latest/tune-usage.html#handling-large-datasets
| 12-16-2020 05:08:34 | 12-16-2020 05:08:34 | I googled for the error and it may be related to sending a large object to redis. Was it because the datasets are too large?<|||||>Hi! Did you try to open an issue at ray directly? It seems to be linked to their library rather than `transformers`<|||||>> Hi! Did you try to open an issue at ray directly? It seems to be linked to their library rather than `transformers`
I googled and found some related issues: https://github.com/ray-project/ray/issues/2931 and according to the replies the solution is https://ray.readthedocs.io/en/latest/tune-usage.html#handling-large-datasets
But I don't know how to pass that `tune.with_parameters`. Maybe the `Trainer` should take care of this?<|||||>It looks like something way too complex to implement so I'd suggest using optuna and see if you have the same problem, or re-implementing your own loop to use `ray.tune` on this. I don't think it can be supported easily by `Trainer`, and the documentation on the ray side is a bit too sparse on this subject to help us do it ourselves.<|||||>I have the same issue, and Optuna seems to be working fine. I think the biggest difference is that Optuna uses SQLite / in-memory, where Ray wants to send a (very large) object to Redis.<|||||>I don't have a solution for this problem, but just for others that might encounter the same problem, I tried the proposed solution (passing the arguments to `tune.run` via `ray.tune.with_parameters` in `run_hp_search_ray`) but the results were exactly the same. By what I have been able to gather, I would say that the problem arises from models bigger than 512M, not from the datasets.
<|||||>hey folks, this should be working on the latest version of ray -- could you try installing the newest version via `pip install -U ray` and trying again?<|||||>>
>
> hey folks, this should be working on the latest version of ray -- could you try installing the newest version via `pip install -U ray` and trying again?
Hi @richardliaw! After updating ray to the latest version (1.1.0), it still isn't working for me, although the exception stack trace has changed a little (prior to this, I got the same exception as @howardlau1999 in their first comment):
```Traceback (most recent call last):
File "/DATA/nperez/VENV/DNG/lib/python3.7/site-packages/redis/connection.py", line 706, in send_packed_command
sendall(self._sock, item)
File "/DATA/nperez/VENV/DNG/lib/python3.7/site-packages/redis/_compat.py", line 9, in sendall
return sock.sendall(*args, **kwargs)
BrokenPipeError: [Errno 32] Broken pipe
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "/usr/local/lib/python3.7/runpy.py", line 193, in _run_module_as_main
"__main__", mod_spec)
File "/usr/local/lib/python3.7/runpy.py", line 85, in _run_code
exec(code, run_globals)
File "/DATA/nperez/PROJECTS/DNG/src/system/train_span_in_context.py", line 266, in <module>
main()
File "/DATA/nperez/PROJECTS/DNG/src/system/train_span_in_context.py", line 142, in main
local_dir='/DATA/nperez/PROJECTS/DNG/hsearch/ray-search/'
File "/DATA/nperez/VENV/DNG/lib/python3.7/site-packages/transformers/trainer.py", line 979, in hyperparameter_search
best_run = run_hp_search(self, n_trials, direction, **kwargs)
File "/DATA/nperez/VENV/DNG/lib/python3.7/site-packages/transformers/integrations.py", line 187, in run_hp_search_ray
analysis = ray.tune.run(_objective, config=trainer.hp_space(None), num_samples=n_trials, **kwargs)
File "/DATA/nperez/VENV/DNG/lib/python3.7/site-packages/ray/tune/tune.py", line 325, in run
restore=restore)
File "/DATA/nperez/VENV/DNG/lib/python3.7/site-packages/ray/tune/experiment.py", line 149, in __init__
self._run_identifier = Experiment.register_if_needed(run)
File "/DATA/nperez/VENV/DNG/lib/python3.7/site-packages/ray/tune/experiment.py", line 287, in register_if_needed
register_trainable(name, run_object)
File "/DATA/nperez/VENV/DNG/lib/python3.7/site-packages/ray/tune/registry.py", line 71, in register_trainable
_global_registry.register(TRAINABLE_CLASS, name, trainable)
File "/DATA/nperez/VENV/DNG/lib/python3.7/site-packages/ray/tune/registry.py", line 124, in register
self.flush_values()
File "/DATA/nperez/VENV/DNG/lib/python3.7/site-packages/ray/tune/registry.py", line 146, in flush_values
_internal_kv_put(_make_key(category, key), value, overwrite=True)
File "/DATA/nperez/VENV/DNG/lib/python3.7/site-packages/ray/experimental/internal_kv.py", line 27, in _internal_kv_put
updated = worker.redis_client.hset(key, "value", value)
File "/DATA/nperez/VENV/DNG/lib/python3.7/site-packages/redis/client.py", line 3050, in hset
return self.execute_command('HSET', name, *items)
File "/DATA/nperez/VENV/DNG/lib/python3.7/site-packages/redis/client.py", line 900, in execute_command
conn.send_command(*args)
File "/DATA/nperez/VENV/DNG/lib/python3.7/site-packages/redis/connection.py", line 726, in send_command
check_health=kwargs.get('check_health', True))
File "/DATA/nperez/VENV/DNG/lib/python3.7/site-packages/redis/connection.py", line 718, in send_packed_command
(errno, errmsg))
redis.exceptions.ConnectionError: Error 32 while writing to socket. Broken pipe.
```
To be specific, in case it helps, I've _been_ able to make hyperparameter search work for the following pre-trained models—before and after updating ray—:
* dccuchile/bert-base-spanish-wwm-cased
* allenai/scibert_scivocab_cased
* skimai/spanberta-base-cased
* distilbert-base-uncased
But not these:
* bert-base-multilingual-cased
* xlm-roberta-base
<|||||>I couldn't get ray tune working either for roberta-large after upgrading ray to version 1.1.0 @richardliaw<|||||>Got it! I'll take a closer look this week. Thanks!<|||||>Thanks for raising this issue. I could reproduce it (with `roberta-large`) on an AWS p2.xlarge instance. I created a PR that should fix this issue via `tune.with_parameters`: https://github.com/huggingface/transformers/pull/9749
@naiarapm it would be interesting to see what you did differently in your try to use `tune.with_parameters` - do you still have that piece of code available? We designed this utility exactly for handling large datasets and it worked for me in my experiments.
If you have the chance @howardlau1999 it would be great if you could check if this fixes your issue.<|||||>@krfricke Big thanks for your fix! I checked out your branch and the hyperparameters search with `ray` now works for me with `roberta-large`!<|||||>Hi @krfricke!
Sorry for the delay. In response to your question, I simply changed the following line in `transformers.integrations.py` (function `run_hp_search_ray`):
```
analysis = ray.tune.run(_objective, config=trainer.hp_space(None), num_samples=n_trials, **kwargs)
````
to this:
````
analysis = ray.tune.run(
ray.tune.with_parameters(_objective),
config=trainer.hp_space(None), num_samples=n_trials, **kwargs
)
````
I see now in your PR that that alone was not enough though :-) But I did not know what else to change, I just followed the suggested instructions to the best of my ability.
I can confirm as well that the error has been fixed for me. Thanks a lot!! |
transformers | 9,145 | closed | TableQuestionAnsweringPipeline | ## TableQuestionAnsweringPipeline
Introduces the `TableQuestionAnsweringPipeline` which will be used for the `TableQuestionAnswering` widget:
<p align="center">
<img src="https://user-images.githubusercontent.com/30755778/102266591-a9606880-3ee6-11eb-9f16-7173a9a85b58.gif" width="500">
</p>
There are examples of usage within the documentation, but here are some others if you want to give it a spin:
### WTQ and aggregators
```py
import pandas as pd
from transformers import pipeline
tqa_pipeline = pipeline("table-question-answering")
data = {
"Repository": ["Transformers", "Datasets", "Tokenizers"],
"Stars": ["36542", "4512", "3934"],
"Contributors": ["651", "77", "34"],
"Programming language": ["Python", "Python", "Rust, Python and NodeJS"],
}
table = pd.DataFrame.from_dict(data)
queries = [
"What repository has the largest number of stars?",
"Given that the numbers of stars defines if a repository is active, what repository is the most active?",
"What is the number of repositories?",
"What is the average number of stars?",
"What is the total amount of stars?"
]
outputs = tqa_pipeline(table, queries)
print(outputs)
```
This outputs the following (given that the aggregator setup is respected in the model configuration, which won't be the case until the configuration changes as proposed here are accepted):
```
[
{'answer': 'Transformers', 'coordinates': [(0, 0)], 'cells': ['Transformers'], 'aggregator': 'NONE'},
{'answer': 'Transformers', 'coordinates': [(0, 0)], 'cells': ['Transformers'], 'aggregator': 'NONE'},
{'answer': 'COUNT > Transformers, Datasets, Tokenizers', 'coordinates': [(0, 0), (1, 0), (2, 0)], 'cells': ['Transformers', 'Datasets', 'Tokenizers'], 'aggregator': 'COUNT'},
{'answer': 'AVERAGE > 36542, 4512, 3934', 'coordinates': [(0, 1), (1, 1), (2, 1)], 'cells': ['36542', '4512', '3934'], 'aggregator': 'AVERAGE'},
{'answer': 'SUM > 36542, 4512, 3934', 'coordinates': [(0, 1), (1, 1), (2, 1)], 'cells': ['36542', '4512', '3934'], 'aggregator': 'SUM'}
]
```
Please note the aggregators, their presence in the answer when they exist, and their absence when they do not.
### SQA and sequential inference
```py
data = {'Actors': ["Brad Pitt", "Leonardo Di Caprio", "George Clooney"],
'Age': ["56", "45", "59"],
'Number of movies': ["87", "53", "69"],
'Date of birth': ["7 february 1967", "10 june 1996", "28 november 1967"]}
queries = ["How many movies has George Clooney played in?", "How old is he?", "What's his date of birth?"]
table = pd.DataFrame.from_dict(data)
tqa_pipeline = pipeline("table-question-answering", model="nielsr/tapas-base-finetuned-sqa", tokenizer="nielsr/tapas-base-finetuned-sqa")
outputs = tqa_pipeline(table, queries, sequential=True)
print(outputs)
```
This outputs the following:
```
[
{'answer': '69', 'coordinates': [(2, 2)], 'cells': ['69']},
{'answer': '59', 'coordinates': [(2, 1)], 'cells': ['59']},
{'answer': '28 november 1967', 'coordinates': [(2, 3)], 'cells': ['28 november 1967']}
]
```
Please note the relationship between questions ("how old is he", who is "he"?) and the correct answers given by the model. One can try passing `sequential=False` and obtain vastly different results.
Here is the [documentation page](https://138478-155220641-gh.circle-artifacts.com/0/docs/_build/html/main_classes/pipelines.html#transformers.TableQuestionAnsweringPipeline). | 12-16-2020 02:25:45 | 12-16-2020 02:25:45 | @NielsRogge I can't ping you for review but I would love your input on this!<|||||>Thanks for your comments @patrickvonplaten @sgugger, applied your changes. This PR includes the `AutoModelForTableQuestionAnswering` defined [here](https://github.com/huggingface/transformers/pull/9154), adds a check on `pandas` which will raise an error with how to install it if necessary, and will raise an error if the `tf` framework was specified.
Instantiating a pipeline with the following:
```py
tqa_pipeline = pipeline("table-question-answering", framework="tf")
```
yields:
```
ValueError: Pipeline using tf framework, but this framework is not supported by this pipeline.
```
Instantiating the pipeline without having pandas installed:
```py
tqa_pipeline = pipeline("table-question-answering")
```
yields:
```
ImportError: Pandas is required for the TAPAS tokenizer.
``` |
transformers | 9,144 | closed | Saving model errors | I am trying to fine-tune DistilBERT for a multilabel task using a V100 GPU and the latest transformers from pip. When I try to save the model I get this error:
```
Traceback (most recent call last):
File "script/fine_tune_distillbert.py", line 51, in <module>
model.save_pretrained(ROOT_DIR)
File "/usr/local/lib/python3.7/dist-packages/transformers/modeling_tf_utils.py", line 534, in save_pretrained
self.save_weights(output_model_file)
File "/usr/local/lib/python3.7/dist-packages/tensorflow/python/keras/engine/training.py", line 2085, in save_weights
hdf5_format.save_weights_to_hdf5_group(f, self.layers)
File "/usr/local/lib/python3.7/dist-packages/tensorflow/python/keras/saving/hdf5_format.py", line 640, in save_weights_to_hdf5_group
param_dset = g.create_dataset(name, val.shape, dtype=val.dtype)
File "/usr/local/lib/python3.7/dist-packages/h5py/_hl/group.py", line 143, in create_dataset
if '/' in name:
TypeError: a bytes-like object is required, not 'str'
```
The model init:
```
import tensorflow as tf
from transformers import TFDistilBertForSequenceClassification
model = TFDistilBertForSequenceClassification.from_pretrained('distilbert-base-uncased', num_labels=max_lab)
optimizer = tf.keras.optimizers.Adam(learning_rate=0.0001)
model.compile(optimizer=optimizer, loss=model.compute_loss, metrics=['accuracy']) # can also use any keras loss fn
model.fit(train_dataset.shuffle(1000).batch(256), epochs=3, batch_size=256,
validation_data=val_dataset.shuffle(1000).batch(256))
```
The way I am saving the model after successfully training it is this:
```
import os
ROOT_DIR = os.path.abspath(os.curdir)
ROOT_DIR = ROOT_DIR + "/model"
tokenizer.save_pretrained(ROOT_DIR)
model.save_pretrained(ROOT_DIR)
```
The tokenizer is saved perfectly. I have tried to run the same code in a Colab notebook (with much less data) and it saves fine, but when I use a service like `spell.ml` I get this error.
@LysandreJik | 12-16-2020 00:48:01 | 12-16-2020 00:48:01 | Hi, I'm sorry but I don't know spell.ml or how it works; if it works in a colab notebook and saves correctly, it seems the issue comes from spell.ml rather than transformers.<|||||>This issue has been automatically marked as stale and been closed because it has not had recent activity. Thank you for your contributions.
If you think this still needs to be addressed please comment on this thread. |
transformers | 9,143 | closed | Pass kwargs to Pipeline's tokenizer call | # What does this PR do?
When calling a Pipeline, the `kwargs` argument is not passed to the tokenizer (it is actually not used at all).
I think the intended behavior is to pass it (as the base tokenizer's `__call__()` method already supports `kwargs`), and that's what this PR does.
[Related to #8180]
The call order is:
```Python3
SpecificPipeline.__call__(..., **kwargs)
# Which calls
Pipeline.__call__(..., **kwargs)
# Which calls
SpecificPipeline._parse_and_tokenize(..., **kwargs)
# Which in turn calls
self.tokenizer(...) # No kwargs in this call
```
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/master/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/master/docs#writing-source-documentation).
- [x] Did you write any new necessary tests?
## Who can review?
@sgugger @LysandreJik | 12-15-2020 22:46:41 | 12-15-2020 22:46:41 | cc @Narsil who probably knows better.<|||||>Hi @guyrosin , we actually don't want to enable that.
The problem is that kwargs are used both by `_parse_and_tokenize` and by `generate`/`forward`.
See discussion here: https://github.com/huggingface/transformers/pull/9432#discussion_r552550844
I'm guessing you want to override a tokenizer argument at runtime in the pipeline. The best way to do that is to whitelist all arguments of the `tokenizer` (like we did with truncation) and *only* pass `**kwargs` to `generate`. That's the best way to isolate the arguments of both functionalities without creating a mess. **Hopefully** there won't be any arguments with the same name in both function calls.
The `**kwargs` in the function signature is legacy for now: it simply captures previously sent arguments and prevents triggering an error for previously written code.
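To make the whitelisting idea concrete, a rough sketch (illustrative only, not the actual pipeline code):
```python
def _parse_and_tokenize(self, inputs, truncation=True, padding=True, **kwargs):
    # tokenizer options are whitelisted explicitly in the signature;
    # whatever is left in **kwargs stays reserved for forward()/generate()
    return self.tokenizer(inputs, truncation=truncation, padding=padding, return_tensors=self.framework)
```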
<|||||>Ohh, got it. Thanks for the explanation @Narsil! |
transformers | 9,142 | closed | RAGRetriever loads dataset in the default cache dir even if a different one is specified | ## Environment info
- `transformers` version: latest
- Platform: any
- Python version: 3.8
- PyTorch version (GPU?): any
- Tensorflow version (GPU?): any
- Using GPU in script?: yes
- Using distributed or parallel set-up in script?: no
### Who can help
RAG: @patrickvonplaten, @lhoestq
## Information
Model I am using (Bert, XLNet ...):
The problem arises when using:
* [ ] the official example scripts: (give details below)
* [x] my own modified scripts: (give details below)
The tasks I am working on is:
* [ ] an official GLUE/SQUaD task: (give the name)
* [x] my own task or dataset: (give details below)
## To reproduce
Steps to reproduce the behavior:
1. load a `RagRetriever` model with `cache_dir='/mnt/.cache/huggingface/'`
2. notice that the dataset is still downloaded to `'~/.cache'`
```python
from transformers import RagRetriever
rag_retriever = RagRetriever.from_pretrained('facebook/rag-token-base', cache_dir='/mnt/.cache/huggingface')
```
## Expected behavior
The dataset is still downloaded in `'~/.cache'` even though we want it to download to the cache in '/mnt'
This is happening because, in `retrieval_rag.py`, `config.cache_dir` isn't passed through to `load_dataset` (on line 273, for example). A rough sketch of the fix is below.
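For illustration, the gist of the fix would be to forward the configured cache directory when the retriever loads its dataset, roughly as follows (every name other than `cache_dir` is an assumption, not copied from the actual source):
```python
from datasets import load_dataset
dataset = load_dataset(
    dataset_name,
    split=dataset_split,
    cache_dir=config.cache_dir,  # forward the user-provided cache dir instead of relying on the default
)
```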
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored. |
transformers | 9,141 | closed | Support for private models from huggingface.co | Add a `use_auth_token` flag (or string) to all `from_pretrained` entry points, to specify token to use as Bearer authorization for remote files.
- if it's a string, use it
- If it's `True`, the token will be read from `~/.huggingface/token` (an error is thrown if no token is there); a rough sketch of this resolution logic follows the example below
You can test this with:
```python
model = AutoModelForMaskedLM.from_pretrained("pierric/hf-private", use_auth_token=True)
```
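For illustration, the token resolution described above boils down to roughly this (hypothetical helper, not the actual implementation):
```python
from pathlib import Path
def resolve_token(use_auth_token):
    if isinstance(use_auth_token, str):
        return use_auth_token
    if use_auth_token is True:
        token_path = Path.home() / ".huggingface" / "token"
        if not token_path.exists():
            raise EnvironmentError("No token found; please log in first.")
        return token_path.read_text().strip()
    return None
```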
We'll add unit tests down the line but need to think about which environment those tests are going to hit.
⚠️ For now, I decided against adding token by default to all calls if user is logged in. Let's discuss though! | 12-15-2020 22:02:22 | 12-15-2020 22:02:22 | @patrickvonplaten I don't follow here. In this PR we want to have a way to either pass a token directly, or to opt in to use the one that's store in `~/`. Don't see how I can do that with just an optional string?<|||||>> @patrickvonplaten I don't follow here. In this PR we want to have a way to either pass a token directly, or to opt in to use the one that's store in `~/`. Don't see how I can do that with just an optional string?
I might have misunderstood a bit what constraints there are on the functionality. I thought the following logic is possible and makes sense here:
- If user passes a string `use_auth_token`, then use this as the token
- Else look for token in `~/.huggingface`:
- if there is no token and model is private -> throw error
- if there is no token and model is **not** private -> load the model as usual
- if there is a token -> use this one
Not sure if there is something I am completely overlooking here in the logic though, *e.g.* if we cannot know beforehand whether the model is private or not
<|||||>> > @patrickvonplaten I don't follow here. In this PR we want to have a way to either pass a token directly, or to opt in to use the one that's store in `~/`. Don't see how I can do that with just an optional string?
>
> I might have misunderstood a bit what constrains there are on the functionality. I thought, the following logic is possible and makes sense here:
>
> * If user passes a string `use_auth_token`, then use this as the token
> * Else look for token in `~/.huggingface`:
> - if there is no token and model is private -> throw error
> - if there is no token and model is **not** private -> load the model as usual
> - if there is a token -> use this one
>
> Not sure if there is something I am completely overlooking here in the logic though, _e.g._ if we cannot know before hand whether the model is private or not
Ok never mind - as discussed offline this would require more features to add which is out-of-scope for this PR -> so LGTM!<|||||>Also cc'ing @borisdayma as this PR adds a `exist_ok` param to `HfApi.create_repo()` |
transformers | 9,140 | closed | Fix T5 Encoder model parallel tests | 12-15-2020 21:03:52 | 12-15-2020 21:03:52 | ||
transformers | 9,139 | closed | Experimental support for fairscale ShardedDDP | # What does this PR do?
This PR adds support for [FairScale](https://github.com/facebookresearch/fairscale)'s sharded DDP training to save GPU memory when training distributed models. Initial tests indeed show a nice reduction in GPU memory used!
This follows the steps of the [main example](https://github.com/facebookresearch/fairscale/blob/master/benchmarks/oss.py) provided on the FairScale repo, integrating them into our Trainer API. To activate training with sharded DDP, one must pass along the flag `--sharded_ddp` in a distributed launch command.
Benchmarks tried:
- a fine-tuning on MRPC with `bert-base-uncased` -> goes from 5GB per GPU to 4GB per GPU with no loss in accuracy
- a fine-tuning on SQuAD v2 with `xlnet-large-cased` -> goes from 11.5GB per GPU to 8GB per GPU (it didn't run to the end so I didn't check whether the accuracy was the same; the training loss seemed equivalent) | 12-15-2020 20:47:32 | 12-15-2020 20:47:32 | Regarding your notes on GPU memory consumption improvements: from what I have seen, checking GPU allocation often doesn't show the real difference, as pytorch tends to use more than it absolutely needs if there is spare memory - or rather, it can get by with less when memory is tight. So to get the best improvement stats, it's best to push the batch size (BS) until it OOMs; that gives you a more precise difference, which usually leads to more precise improvement numbers than just comparing memory allocation. This is just my experience.
All I'm saying is that probably the improvements are even better than what they seem.<|||||>finetune_trainer crashes with this option:
```
export BS=4; rm -r output_dir; CUDA_VISIBLE_DEVICES=0,1 PYTHONPATH=../../src USE_TF=0 python -m torch.distributed.launch --nproc_per_node=2 ./finetune_trainer.py --model_name_or_path sshleifer/distill-mbart-en-ro-12-4 --output_dir output_dir --adam_eps 1e-06 --data_dir wmt_en_ro --do_train --fp16 --freeze_embeds --label_smoothing 0.1 --learning_rate 3e-5 --logging_first_step --logging_steps 1000 --max_source_length 128 --max_target_length 128 --num_train_epochs 1 --overwrite_output_dir --per_device_train_batch_size $BS --sortish_sampler --src_lang en_XX --task translation --tgt_lang ro_RO --val_max_target_length 128 --warmup_steps 500 --n_train 500 --sharded_ddp
```
```
Traceback (most recent call last):
File "./finetune_trainer.py", line 379, in <module>
main()
File "./finetune_trainer.py", line 315, in main
trainer.train(
File "/mnt/nvme1/code/huggingface/transformers-master/src/transformers/trainer.py", line 677, in train
model = ShardedDDP(model, self.optimizer)
File "/home/stas/anaconda3/envs/main-38/lib/python3.8/site-packages/fairscale/nn/data_parallel/sharded_ddp.py", line 96, in __init__
self._param_iterator = chain(*[optim.should_bucket_param.keys() for optim in self.sharded_optimizers])
TypeError: 'AdamW' object is not iterable
```
could probably extend `test_finetune_trainer.py` to deploy this option if `fairscale` is available? but CIs won't have it - and it's quite slow to build
<|||||>Oh it's just because it overrides the `create_optimizer_and_scheduler` method. Will fix that method.<|||||>OK, next we have this:
```
Traceback (most recent call last):
File "./finetune_trainer.py", line 379, in <module>
main()
File "./finetune_trainer.py", line 315, in main
trainer.train(
File "/mnt/nvme1/code/huggingface/transformers-master/src/transformers/trainer.py", line 818, in train
self.scaler.step(self.optimizer)
File "/home/stas/anaconda3/envs/main-38/lib/python3.8/site-packages/torch/cuda/amp/grad_scaler.py", line 330, in step
assert len(optimizer_state["found_inf_per_device"]) > 0, "No inf checks were recorded for this optimizer."
AssertionError: No inf checks were recorded for this optimizer.
```
Coincidentally, I have just had the same issue with the deepspeed integration when I enable its internal fp16 handling. I didn't get to the root of it yet, but removing the `--fp16` arg, and thus disabling all the fp16 handling the trainer does, removed this error.
note: I'm switching to deepspeed fp16 handling there...
<|||||>Is it FP16 with AMP or with apex? I don't believe fairscale is compatible with apex.<|||||>native amp
See the command line I'm testing with at:
https://github.com/huggingface/transformers/pull/9139#issuecomment-745581491<|||||>If you're joining in and discovered you can't build `fairscale`, please see [this](https://github.com/facebookresearch/fairscale/pull/249) and perhaps [that](https://github.com/facebookresearch/fairscale/issues/250).<|||||>> OK, next we have this:
>
> ```
> Traceback (most recent call last):
> File "./finetune_trainer.py", line 379, in <module>
> main()
> File "./finetune_trainer.py", line 315, in main
> trainer.train(
> File "/mnt/nvme1/code/huggingface/transformers-master/src/transformers/trainer.py", line 818, in train
> self.scaler.step(self.optimizer)
> File "/home/stas/anaconda3/envs/main-38/lib/python3.8/site-packages/torch/cuda/amp/grad_scaler.py", line 330, in step
> assert len(optimizer_state["found_inf_per_device"]) > 0, "No inf checks were recorded for this optimizer."
> AssertionError: No inf checks were recorded for this optimizer.
> ```
>
> Coincidentally I have just had the same issue with deepspeed integration when I enable its internal fp16 handling. Didn't get to the root of it yet, but removing `--fp16` arg and thus disabling all the fp16 handling trainer does removed this error.
>
> note: I'm switching to deepspeed fp16 handling there...
hey there, a bit late, but I'm one of the fairscale/ShardedDDP authors. The issue with the Apex (and vanilla Torch) grad scaler is that it does not know about the gradient sharding, so not all the ranks will have the same behaviour. Torch AMP is supported though, you just have to pass in the ShardedGradScaler as defined here https://github.com/facebookresearch/fairscale/blob/master/fairscale/optim/grad_scaler.py#L24<|||||>Yes, we're passing that scaler :-) The issue was with AMP, not Apex. It looks like there is a problem, with or without FP16, with one of the models.
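For reference, the wiring looks roughly like this (a minimal sketch; treat the exact fairscale API names as assumptions based on the version available at the time, and `model`/`dataloader` as placeholders):
```python
import torch
from fairscale.nn.data_parallel import ShardedDataParallel as ShardedDDP
from fairscale.optim import OSS
from fairscale.optim.grad_scaler import ShardedGradScaler
optimizer = OSS(params=model.parameters(), optim=torch.optim.AdamW, lr=3e-5)  # shards the optimizer state
model = ShardedDDP(model, optimizer)  # reduces each gradient to the rank that owns its shard
scaler = ShardedGradScaler()  # sharding-aware replacement for torch.cuda.amp.GradScaler
for batch in dataloader:
    with torch.cuda.amp.autocast():
        loss = model(**batch).loss
    scaler.scale(loss).backward()
    scaler.step(optimizer)
    scaler.update()
    optimizer.zero_grad()
```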
Ah, reading more, I see there is a lot on the issue I posted, so I will look there. Thanks for coming to help us! |
transformers | 9,138 | closed | adapting trainer.py for multiple optimizers | Hi
I was wondering if there can be an easy way to adapt trainer.py for multiple optimizers, where each optimizer is responsible for updating a part of model. thanks | 12-15-2020 19:32:16 | 12-15-2020 19:32:16 | Hello, thanks for opening an issue! We try to keep the github issues for bugs/feature requests.
Could you ask your question on the [forum](https://discusss.huggingface.co) instead?
Thanks! |
transformers | 9,137 | closed | Add possibility to switch between APEX and AMP in Trainer | # What does this PR do?
When PyTorch >= 1.6 is installed, the Trainer currently always uses native AMP. This PR adds the option to switch between AMP and APEX, which can be useful:
- because of the memory leak in AMP (fixed in 1.7.1 but present in 1.6)
- to benchmark APEX vs. AMP
It also simplifies a little bit the internal of Trainer with those. | 12-15-2020 19:30:54 | 12-15-2020 19:30:54 | This PR also removes `_use_ddp_no_sync` since presumably `transformers` no longer supports pytorch < 1.2<|||||>There was one nit that you agreed with but didn't integrate - but I'm fine if it remains as merged - just a potential for divergence down the road...<|||||>Oh, which one did I miss?<|||||>https://github.com/huggingface/transformers/pull/9137#discussion_r543638157
<|||||>Argh, one sec - I see what happened - that wasn't what I meant - sorry for not being clear. `choices` is crucial here - since you don't validate the user-provided values - this is error-prone.
I tried to suggest not repeating the options in the help comment - please let's have `choices` back, and duplicate them if you prefer the help to have the explicit repetition - thanks.<|||||>Fixed directly on master in [this commit](https://github.com/huggingface/transformers/commit/51adb97cd644a5840d971868d18c1d436fd6ff5d).<|||||>That's perfect. Thank you, @sgugger! |
transformers | 9,136 | closed | Update notebook table and transformers intro notebook | # What does this PR do?
Update the examples table and the notebooks table to include all recent examples. Also fix the intro notebook to the transformers library, in particular, the image that was missing.
Fixes #9083
| 12-15-2020 18:42:46 | 12-15-2020 18:42:46 | Discussion on pinning/testing notebooks needs to be global on all notebooks (not just one) so merging this for now. We can think of a strategy and implement it in a follow-up PR. |
transformers | 9,135 | closed | Fix Bart Shift | # What does this PR do?
The previous PR #9134 was still WIP and was accidentally merged too quickly -> sorry for the many commits.
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/master/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/master/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors which may be interested in your PR.
| 12-15-2020 17:42:20 | 12-15-2020 17:42:20 | |
transformers | 9,134 | closed | [Bart] Correct wrong order in shift token to right in Bart | # What does this PR do?
The previous PR #9131 implemented the replacement of -100 with the pad_token after retrieving the eos_token_idx. However, it should be done before that, to make sure the correct eos_token_id is found.
Thanks a lot @patil-suraj for spotting this.
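For reference, the corrected order looks roughly like this (a simplified sketch of `shift_tokens_right`, not the exact merged code):
```python
import torch
def shift_tokens_right(input_ids: torch.Tensor, pad_token_id: int) -> torch.Tensor:
    # 1) replace -100 (the ignore index used for the loss) by the pad token *first* ...
    input_ids = input_ids.masked_fill(input_ids == -100, pad_token_id)
    # 2) ... so that the index of the last non-pad token (usually </s>) is found correctly
    index_of_eos = (input_ids.ne(pad_token_id).sum(dim=1) - 1).unsqueeze(-1)
    prev_output_tokens = input_ids.clone()
    prev_output_tokens[:, 0] = input_ids.gather(1, index_of_eos).squeeze()
    prev_output_tokens[:, 1:] = input_ids[:, :-1]
    return prev_output_tokens
```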
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/master/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/master/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors which may be interested in your PR.
| 12-15-2020 17:27:03 | 12-15-2020 17:27:03 | |
transformers | 9,133 | closed | [Examples] Add automatic dataset splitting in language-modeling examples | # What does this PR do?
Currently, language-modeling examples support passing a HF-datasets dataset as training data. However, this dataset needs to have a `train` and `validation` split, which is not the case for many language-modeling datasets, which are just unstructured text. The updated scripts automatically partition the `train` split to create a `validation` split if it doesn't exist already, and adds `validation_split_percentage` argument to control the split ratio, set to 5% by default. | 12-15-2020 17:17:20 | 12-15-2020 17:17:20 | Ah, the commit from #9127 seems to have snuck its way in there. Should I remove it?<|||||>If you can do it easily, that would be best!<|||||>> If you can do it easily, that would be best!
I've tried for a bit but I think I just made things worse ! If that's OK I'll leave it there and I'll fix things at merge time. |
transformers | 9,132 | closed | Fix typo in trainer_tf.py | # What does this PR do?
Fixes a typo in trainer_tf.py
Fixes #9053
@sgugger
| 12-15-2020 17:04:05 | 12-15-2020 17:04:05 | |
transformers | 9,131 | closed | [Bart] fix bart loss masking | # What does this PR do?
Fixes #9123
Bart should be able to replace -100 tokens when prepping `decoder_input_ids`
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/master/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/master/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors which may be interested in your PR.
| 12-15-2020 16:52:07 | 12-15-2020 16:52:07 | cc @patil-suraj |
transformers | 9,130 | closed | Trainer: support iterable datasets for evaluation | The trainer seems to support passing iterable datasets as the `train_dataset` (see #5829) but does not support the same for the `eval_dataset`. I have implemented an iterable dataset for training and now I cannot use the same implementation for evaluation. This doesn't make much sense, as evaluation could easily be done using the iterable dataset.
Currently the evaluation fails with the following exception:
```
ValueError: DataLoader with IterableDataset: expected unspecified sampler option, but got sampler=<torch.utils.data.sampler.SequentialSampler object at 0x7f53900c50d0>
```
I suspect the below method should return None in case of an iterable dataset:
https://github.com/huggingface/transformers/blob/ef2d4cd4457a344b633173c14ca7789f18f75b59/src/transformers/trainer.py#L402-L408
Just like how it is handled for the `train_dataset`:
https://github.com/huggingface/transformers/blob/ef2d4cd4457a344b633173c14ca7789f18f75b59/src/transformers/trainer.py#L380-L400
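A rough sketch of the suggested change; the method name and structure here are assumptions for illustration, not the actual `Trainer` internals:
```python
from torch.utils.data import IterableDataset, SequentialSampler

def _get_eval_sampler(self, eval_dataset):
    # Mirror the train-side handling: an IterableDataset cannot be combined with a sampler.
    if isinstance(eval_dataset, IterableDataset):
        return None
    return SequentialSampler(eval_dataset)
```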
 | 12-15-2020 16:39:45 | 12-15-2020 16:39:45 | This is more complex than that, as the `Trainer` needs to know in advance the number of elements in your evaluation dataset, so you need to implement a `__len__` method in your evaluation dataset to have `Trainer` work on it.
An iterable dataset might make sense for training, since you want to yield "infinite" examples and stop at a certain step, but it doesn't really make sense for evaluation where, by definition, you have a finite number of samples.<|||||>I see, but that also means that you'd have to implement two different datasets for the same data, which is a bit annoying. Why does the trainer need to know the length of the eval dataset?<|||||>This is for the distributed evaluation to work: we need to initialize the containers for the logits and predictions to the right size and fill them with the data returned by each node.
Your iterable dataset must be finite, so you can just wrap it like this before sending it to `Trainer`:
```
class FromIterableDataset:
    def __init__(self, iterable_dataset):
        self.dataset = list(iterable_dataset)

    def __getitem__(self, i):
        return self.dataset[i]

    def __len__(self):
        return len(self.dataset)
```<|||||>Yeah I guess that works for most cases. Although it might make sense to catch the case where one implements `__len__` in an iterable dataset, which might be reasonable depending on how the data is stored. Currently you're first told to implement `__len__`, and then the code just fails with the above exception. It would probably be better if there was a more meaningful exception regarding the use of iterable datasets for evaluation.<|||||>Anyways, this seems to be an edge case, thanks for your help!
transformers | 9,129 | closed | Fix TF Transfo XL | # What does this PR do?
This PR fixes an issue in TFTransfoXL: the last layer was added to the complete list of `hidden_states` while already transposed. It is now added at the end, after all the other states have been transposed.
| 12-15-2020 16:37:24 | 12-15-2020 16:37:24 | |
transformers | 9,128 | closed | BartForCausalLM analogs to `ProphetNetForCausalLM` | # What does this PR do?
Implementing `BartForCausalLM`, analogous to `ProphetNetForCausalLM`.
Fixes #9066
| 12-15-2020 16:13:07 | 12-15-2020 16:13:07 | Hy @sadakmed,
let me know if you need help on the issue or if you don't find the time to tackle it. I'll then just make it open to the "public" again :-) <|||||>@patrickvonplaten The loss function is what I'm stuck on, thank you very much for your guidance.
> Let me know if you need help or are stuck :-)
for sure I will ;-) <|||||>Hey @sadakmed,
Do you have an update on the PR? It's been three weeks now and it would be great to merge this soon. Sorry, we're very fast-moving in this lib and other community contributors have started asking for this feature. By next week, I'll probably have to take a look myself or redistribute the issue. <|||||>Hi @patrickvonplaten my apologies,
Could you Please see it now, lemme know if anything is missing. <|||||>Hi @patrickvonplaten, working on the test 'BartStandaloneCausalLM': I dont know if the `self.model_tester` in 'setUp' should be 'BartDecoderTester' (needed to be implemented), or do u recommend something else.
<|||||>Hey @sadakmed,
Thanks a lot for your additions here :-)
Yes, we need a new `BartStandaloneDecoderModelTester` analog to how it's done for ProphetNet in `tests/test_modeling_prophetnet.py`. Do you want to give it a try? Otherwise, I can go into your PR and see how to add the tests :-) <|||||>Hi @patrickvonplaten
> Do you want to give it a try?
of course, I'm working on it, <|||||>Hi @patrickvonplaten, I just pushed the test, could you please check it out!
thaaaanks <|||||>Hey @sadakmed,
I corrected `BartForCausalLM` and also added `MBartForCausalLM`. It would be awesome if you could take care of adding `MarianForCausalLM`, `PegasusForCausalLM`, `BlenderbotForCausalLM`, and `BlenderbotSmallForCausalLM`.
To do so you can simply copy everything that was done for `MBart` in this PR 1-to-1 to the mentioned models above. Let me know if that sounds feasible for you :-)
Thanks a lot for your help so far!<|||||>> adding `MarianForCausalLM`, `PegasusForCausalLM`, `BlenderbotForCausalLM`, and `BlenderbotSmallForCausalLM`.
implementing it for use with EncoderDecoder or just the test?
Yes I would like to do it, with all pleasure.<|||||>> > adding `MarianForCausalLM`, `PegasusForCausalLM`, `BlenderbotForCausalLM`, and `BlenderbotSmallForCausalLM`.
>
> implementing it for use with EncoderDecoder or just the test?
>
> Yes I would like to do it, with all pleasure.
For those models, there is no need to add a test to `EncoderDecoderModel`. We should only copy-paste the code that was added to MBart to those models and also copy-paste the test in `test_modeling_marian.py` e.g.<|||||>@patrickvonplaten Could you please check the test if it well, and about the test of `Decoder only` I didn't get what do you mean!!<|||||>It would be nice to fix the tests and also add tests for `Pegasus`, `Blenderbot`, and `BlenderbotSmall`<|||||>> It would be nice to fix the tests and also add tests for `Pegasus`, `Blenderbot`, and `BlenderbotSmall`
@patrickvonplaten, exactly like the one was for Marian?<|||||>@patrickvonplaten could you check please!<|||||>**UPDATE:**
@LysandreJik @sgugger
This PR enables all Bart-like models to be used in combination with the Encoder-Decoder framework. The model `BartForCausalLM` is added for Bart and then copied to all other models via the copying mechanism. Also, a new model tester is added for all those models.
While working on this I found a small bug for a very edge-case scenario for Bart and corrected it here: https://github.com/huggingface/transformers/pull/9128/files#r569436360 . The newly added tests were failing, which made me aware of the bug.
Also, I had to slightly change the `check_repo.py` file so that it counts both classes from `all_model_classes` with 1 and 2 paratheses.
<|||||>Great job @sadakmed <|||||>> Great job @sadakmed
wouldn't happen without you, thank you very much. see u in the next PR ;) |
transformers | 9,127 | closed | [Flax] Bugfixes in `run_mlm_flax.py` | # What does this PR do?
This PR fixes a few bugs I have observed when using `run_mlm_flax.py`:
- As discussed with @mfuntowicz, `jnp.split` is a lot slower than `np.split` on the first iteration, outright hanging in my tests on simplewiki (~20MB). As this operation doesn't need to be traced, we can use `np.split` instead.
- When using a HF `datasets`, the text column was also passed to the model as input, causing a bug. The PR removes the text column in `dataset.map` to avoid this.
- Finally, using `warmup_steps = 0` (as is default) causes the Flax optimizer to output NaNs. We use 1 as a minimum value for the same warmup-less behaviour. | 12-15-2020 16:05:53 | 12-15-2020 16:05:53 | This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored. |
transformers | 9,126 | closed | seq2seq finetuning scripts break before training (cannot import name ParallelMode) | ```
  File "finetune_trainer.py", line 24, in <module>
    from seq2seq_trainer import Seq2SeqTrainer
  File "/home/---/transformers/examples/seq2seq/seq2seq_trainer.py", line 35, in <module>
    from transformers.training_args import ParallelMode
ImportError: cannot import name 'ParallelMode' from 'transformers.training_args'
```
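A quick way to check whether the installed version is recent enough (minimal sketch):
```python
import transformers

print(transformers.__version__)
# Raises ImportError on releases that predate ParallelMode:
from transformers.training_args import ParallelMode
```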
| 12-15-2020 15:32:57 | 12-15-2020 15:32:57 | Make sure you install a recent version of transformers, `ParallelMode` was added to the master branch some ~2 weeks ago.<|||||>Yes, as @KDercksen points out, you need an up-to-date install from source to be able to run the examples (as mentioned in the main examples folder README). |
transformers | 9,125 | closed | Predict single sentence for Glue Tasks | I have trained a custom binary classifier using `run_glue.py` and have the `pytorch_model.bin` file saved to a directory. Is there a way to predict for a given sentence and extract its label?
I know `trainer.predict(test_dataset)` does it. But I am having some trouble converting the string to the format that it takes.
| 12-15-2020 15:04:05 | 12-15-2020 15:04:05 | Here is how I managed to do it. I have considered a `pandas` dataframe, but you can easily extend it to predict individual sentence too.
```python
import pandas as pd
import numpy as np
from datasets import Dataset, load_dataset
from scipy.special import softmax
from transformers import Trainer
from transformers import BertTokenizer, BertForSequenceClassification

tokenizer = BertTokenizer.from_pretrained(<model_name_or_path>)
model = BertForSequenceClassification.from_pretrained(<model_name_or_path>)

def preprocess_function(examples):
    # Tokenize the texts
    result = tokenizer(examples['sentence'], padding=False, max_length=None, truncation=True, verbose=False)
    return result

def predict(dataframe):
    eval_dataset = Dataset.from_pandas(dataframe)
    eval_dataset = eval_dataset.map(preprocess_function, batched=False, load_from_cache_file=True)
    # Initialize our Trainer
    trainer = Trainer(model=model, tokenizer=tokenizer)
    predictions = trainer.predict(test_dataset=eval_dataset).predictions
    # Adding a softmax layer to get probabilities. If you want class labels instead - np.argmax(predictions, axis=1)
    predictions = np.array([softmax(element) for element in predictions])[:, 1]
    return predictions
```<|||||>@Nickil21
I used the torch model in the trainer. It's much faster than using pandas and creating a dataset.
```python
import torch

def test_2(trainer, sentence1, sentence2):
    id_tolabel = {0: 'negative', 1: 'positive'}
    model = trainer.model.eval()
    tokenized = tokenizer(sentence1, sentence2, return_tensors='pt').to(model.device)
    with torch.no_grad():
        label = torch.argmax(trainer.model.forward(**tokenized).logits, dim=1)[0].cpu().item()
    return id_tolabel[label]

print(test_2(trainer, 'it is not possible', 'this is impossible'))
```
transformers | 9,124 | closed | Improve BERT-like models performance with better self attention | # What does this PR do?
This PR updates the way we implement the self-attention layers in order to be aligned with the original BERT performance. This is a small breaking change: the improvement needs at least TF 2.3. This change has already been discussed with @thomwolf, and he agreed, but it still needs the approval of @LysandreJik @patrickvonplaten and @sgugger
@patrickvonplaten I have removed the comment for `check_copies` in the Longformer model because I don't know this model well enough to apply the proper changes. I will apply this update model by model for the ones I know, but can you take this one?
@jlei2 as I'm on Windows, GPU profiling is unfortunately not yet available in WSL. Can you clone this branch and make sure that everything works as expected with your benchmark? Thanks!!
Fixes # (issue)
#6771
| 12-15-2020 13:07:04 | 12-15-2020 13:07:04 | A Python profiling call gives the following improvements:
```
model = TFBertModel.from_pretrained("bert-base-cased")
# With the improvements
cProfile.run("model(model.dummy_inputs)")
54591 function calls (53774 primitive calls) in 0.064 seconds
# Currently on master
cProfile.run("model(model.dummy_inputs)")
76166 function calls (75204 primitive calls) in 0.095 seconds
```<|||||>Thanks @patrickvonplaten !!
1. Slow tests are passing for these models
2. I confirm that "Old" pre-trained models `tf_model.h5` files that were saved with tf < 2.3 can be loaded into the new layer design
I haven't tested the tf1 models, you mean testing the `load_tf_weights_in_bert` in the `modeling_bert.py` file?<|||||>@jlei2 has confirmed that now everything works as expected in the profiler and benchmark 👍 https://github.com/huggingface/transformers/issues/6771#issuecomment-745786314<|||||>> 2\. "Old" pre-trained models `tf_model.h5` files that were saved with tf < 2.3 can be loaded into the new layer des
Yeah I mean loading a tf `.ckpt` file using the `from_pretrained(...)` method. The `from_pretrained(...)` method automatically uses the correct functions to load `.ckpt`. I think the easiest way would be to download one of the zips of the official google bert: https://github.com/google-research/bert#bert and quickly check that it can be loaded and that the output on this branch and on master is the same.<|||||>> > 2. "Old" pre-trained models `tf_model.h5` files that were saved with tf < 2.3 can be loaded into the new layer des
>
> Yeah I mean loading a tf `.ckpt` file using the `from_pretrained(...)` method. The `from_pretrained(...)` method automatically uses the correct functions to load `.ckpt`. I think the easiest way would be to download one of the zips of the official google bert: https://github.com/google-research/bert#bert and quickly check that it can be loaded and that the output on this branch and on master is the same.
Ok as discussed offline TF1 checkpoints cannot even be loaded into TF2 at the moment (only if one goes through PT), so this PR is good to go for me! |
transformers | 9,123 | closed | BART cannot accept -100 as ignored label | ## Environment info
- `transformers` version: 4.0.1
- Platform: Linux
- Python version: 3.7.9
- PyTorch version (GPU?): 1.6.0
- Using GPU in script?: no
- Using distributed or parallel set-up in script?: no
### Who can help
Bart: @patrickvonplaten
## Information
I'm using ``BartForConditionalGeneration`` to do some natural language generation tasks. By the [doc](https://huggingface.co/transformers/model_doc/bart.html#bartforconditionalgeneration) I should be able to set -100 for some tokens to ignore. However, it would raise an out of index error.
## To reproduce
```python
from transformers import BartForConditionalGeneration, AutoTokenizer
b = BartForConditionalGeneration.from_pretrained("facebook/bart-base")
t = AutoTokenizer.from_pretrained("facebook/bart-base")
s1 = "hello hello hello hello world"
inputs = t(s1, return_tensors="pt")
label = inputs["input_ids"].clone()
label[0, 2:3] = -100
outputs = b(**inputs, labels=label)
```
Then it raises the following error:
> Traceback (most recent call last):
> File "<stdin>", line 1, in <module>
> File "/home/hongru/.conda/envs/commonsense/lib/python3.7/site-packages/torch/nn/modules/module.py", line 722, in _call_impl
> result = self.forward(*input, **kwargs)
> File "/home/hongru/.conda/envs/commonsense/lib/python3.7/site-packages/transformers/models/bart/modeling_bart.py", line 1032, in forward
> return_dict=return_dict,
> File "/home/hongru/.conda/envs/commonsense/lib/python3.7/site-packages/torch/nn/modules/module.py", line 722, in _call_impl
> result = self.forward(*input, **kwargs)
> File "/home/hongru/.conda/envs/commonsense/lib/python3.7/site-packages/transformers/models/bart/modeling_bart.py", line 915, in forward
> return_dict=return_dict,
> File "/home/hongru/.conda/envs/commonsense/lib/python3.7/site-packages/torch/nn/modules/module.py", line 722, in _call_impl
> result = self.forward(*input, **kwargs)
> File "/home/hongru/.conda/envs/commonsense/lib/python3.7/site-packages/transformers/models/bart/modeling_bart.py", line 564, in forward
> x = self.embed_tokens(input_ids) * self.embed_scale
> File "/home/hongru/.conda/envs/commonsense/lib/python3.7/site-packages/torch/nn/modules/module.py", line 722, in _call_impl
> result = self.forward(*input, **kwargs)
> File "/home/hongru/.conda/envs/commonsense/lib/python3.7/site-packages/torch/nn/modules/sparse.py", line 126, in forward
> self.norm_type, self.scale_grad_by_freq, self.sparse)
> File "/home/hongru/.conda/envs/commonsense/lib/python3.7/site-packages/torch/nn/functional.py", line 1814, in embedding
> return torch.embedding(weight, input, padding_idx, scale_grad_by_freq, sparse)
> IndexError: index out of range in self
Without giving -100 in the label, it can return the output correctly.
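Until the masking fix lands (see #9131), a hedged workaround is to build `decoder_input_ids` yourself from a copy of the labels with `-100` replaced by the pad token, keeping `-100` only in `labels` for the loss. This is only a sketch reusing the variable names from the repro above, and it assumes the checkpoint's `decoder_start_token_id` is set (as it is for `facebook/bart-base`):
```python
clean = label.clone()
clean[clean == -100] = t.pad_token_id

# Shift right manually so -100 never reaches the embedding layer.
decoder_input_ids = clean.new_full(clean.shape, t.pad_token_id)
decoder_input_ids[:, 1:] = clean[:, :-1]
decoder_input_ids[:, 0] = b.config.decoder_start_token_id

outputs = b(**inputs, decoder_input_ids=decoder_input_ids, labels=label)
```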
## Expected behavior
Should return the output correctly. | 12-15-2020 12:56:05 | 12-15-2020 12:56:05 | |
transformers | 9,122 | closed | RobertaTokenizer fails to do_lower_case, different behavior between version 2 and 3 | ## Environment info
- `transformers` version: 3.4.0 / 2.8.0
- Platform: linux
- Python version: 3.8
### Who can help
@mfuntowicz
## Information
Tokenizer I am using: RobertaTokenizer
The tokenizer does not lower-case the text even if I explicitly set do_lower_case=True. The behavior is different between version 2.8.0 and 3.4.0.
## To reproduce
Steps to reproduce the behavior:
```python
from transformers import RobertaTokenizer
tokenizer = RobertaTokenizer.from_pretrained("roberta-base", do_lower_case=True)
print(tokenizer.tokenize("Huggingface"))
```
## Expected behavior
Version 3.4.0 prints: ['Hug', 'ging', 'face']
Version 2.8.0 prints: ['h', 'ug', 'ging', 'face']
| 12-15-2020 12:20:22 | 12-15-2020 12:20:22 | Could you please also try with the most recent transformers release and report what happens?<|||||>Version 4.0.1 prints: ['Hug', 'ging', 'face'] <|||||>Explicitly setting the attribute 'do_lower_case' to True solves the problem.
```python
from transformers import RobertaTokenizer
tokenizer = RobertaTokenizer.from_pretrained("roberta-base", do_lower_case=True)
tokenizer.do_lower_case = True
print(tokenizer.tokenize("Huggingface"))
```
<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored.<|||||>If we use the AutoTokenizer library, this still does not work.
```
from transformers import AutoTokenizer
tokenizer = AutoTokenizer.from_pretrained("roberta-base", do_lower_case=True)
tokenizer.do_lower_case = True
print(tokenizer.tokenize("Huggingface"))
``` |
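In the meantime, a hedged workaround sketch is to lower-case the text yourself before tokenizing, since the byte-level BPE vocabulary itself is case-sensitive:
```python
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("roberta-base")
# Expected to match the 2.8.0 behaviour reported above.
print(tokenizer.tokenize("Huggingface".lower()))
```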
transformers | 9,121 | closed | [Generation] Add generation outputs | # 🚀 Feature request
We've had multiple issues asking for the possibility to output the scores/probabilities of each token during generation, see:
https://github.com/huggingface/transformers/issues/7654
https://github.com/huggingface/transformers/issues/3891
https://github.com/huggingface/transformers/issues/8656
Also, we should be able to output the model's attentions and `hidden_states` at each generation step, i.e. make use of those model outputs:
https://github.com/huggingface/transformers/blob/c19d04623eacfbc2c452397a5eda0fde42db3fc5/src/transformers/models/bert/modeling_bert.py#L883 in generation as well.
To do so we should create a new generation output class for each "sub" generation function:
1) `GreedySearchDecoderOnlyOutput(output_ids, logits, attentions, hidden_states)` for decoder-only models, where `output_ids` are the current outputs of generate, `logits` will be the logit vectors at each generation step (so should be of shape `Tuple((logits_1,), ..., (logits_max_length,))`) and `attentions` and `hidden_states` should be of shape `Tuple((attentions_1,), ..., (attentions_max_length,))`. As before, `attentions` and `hidden_states` will be output if a flag `output_attentions` or `output_hidden_states` is set to True, and for the logits we should add a flag `output_scores`. Also we should have a `GreedySearchEncoderDecoderOutput(output_ids, logits, encoder_attentions, decoder_attentions, encoder_hidden_states, decoder_hidden_states)` class with the respective enc and dec outputs.
2) `SampleDecoderOnlyOutput(output_ids, probabilities, attentions, hidden_states)` -> the same outputs only that we replace the logits output with the softmax probabilities (of the same shape); same flags as in 1) and encoder-decoder class as well
3) `BeamSearchDecoderOnlyOutput(output_ids, probs, attentions, hidden_states)` -> `probs` should be this tensor: https://github.com/huggingface/transformers/blob/c19d04623eacfbc2c452397a5eda0fde42db3fc5/src/transformers/generation_utils.py#L1235 at each step same flags as in 1) and encoder-decoder class as well
Each output class should be derived from https://github.com/huggingface/transformers/blob/c19d04623eacfbc2c452397a5eda0fde42db3fc5/src/transformers/file_utils.py#L1306 just as the model output classes are in https://github.com/huggingface/transformers/blob/master/src/transformers/modeling_outputs.py .
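A rough sketch of what one of these classes could look like; the field names follow the proposal above and are illustrative only:
```python
from dataclasses import dataclass
from typing import Optional, Tuple

import torch

from transformers.file_utils import ModelOutput


@dataclass
class GreedySearchDecoderOnlyOutput(ModelOutput):
    output_ids: torch.LongTensor = None
    logits: Optional[Tuple[torch.FloatTensor]] = None  # one entry per generation step
    attentions: Optional[Tuple[Tuple[torch.FloatTensor]]] = None
    hidden_states: Optional[Tuple[Tuple[torch.FloatTensor]]] = None
```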
A PR should start with the "GreedySearchOutput" model classes and add this to `generation_utils.py` => then we should add the three flags to both `generate()` and `greedy_search()`. Then `SampleOutput` and `BeamSearchOutput` should be added. The PR should also include good documentation for each of the outputs, as is the case for the current model outputs.
## Your contribution
I'm happy to help the contributor throughout the PR :-)
| 12-15-2020 11:03:39 | 12-15-2020 11:03:39 | |
transformers | 9,120 | closed | Fix tf2.4 | # What does this PR do?
Fix the tests to make them compliant with the new TF 2.4
| 12-15-2020 10:20:50 | 12-15-2020 10:20:50 | LGTM! |
transformers | 9,119 | closed | Which dataset is used for training GPT, GPT2 from scratch? | Hi,
I checked the model card of GPT and GPT2, but I can't find the dataset which was used for training.
Where can I find the datasets that were used for these models?
https://huggingface.co/openai-gpt
https://huggingface.co/gpt2 | 12-15-2020 09:00:01 | 12-15-2020 09:00:01 | Hello, thanks for opening an issue! We try to keep the github issues for bugs/feature requests.
Could you ask your question on the [forum](https://discusss.huggingface.co) instead?
Thanks! |
transformers | 9,118 | closed | Different inference results of a keras including transformer model on TPU vs CPU? | transformers version: 4.0.0
Platform: Linux-4.9.0-11-amd64-x86_64-with-debian-9.11
Python version: 3.7.9
PyTorch version (GPU?): 1.6.0a0+bf2bbd9 (False)
Tensorflow version (GPU?): 2.3.1 (False)
Using GPU in script?: No
Using distributed or parallel set-up in script?: Distributed
I am building a Keras model which consists of a TFRobertaModel with 2 custom heads on top. One is a QA head which outputs span predictions and the other is a binary classification head. I train the model on TPUs and everything works fine with prediction and inference on the TPU with great model performance. The issue I am having is loading the saved model and/or weights and doing inference on CPU. I am getting completely different results compared to inference on the TPU.
I save the model using model.save and the weights as well with models.save_weights and it doesn't matter which one I load, I get the same results (using tf.keras.models.load_model). I do get a warning that there was an error saving the state of the optimiser which is initialized at random on model loading. I figure this is not an issue with keras since I compile the model with a tf.keras.optimizers.Adam which should be saved with model.save with a keras only model. I have also tried building the model from scratch and only loading the saved weights but I get the same results.
This is a convoluted problem to reproduce but I was wondering if you had any pointers on how to debug this or if this was a known problem. Here is a sample of the model output on TPU vs CPU:
TPU - the first 10 binary answer predictions:
array([[9.9994159e-01, 5.8382138e-05],
[9.9990284e-01, 9.7181561e-05],
[9.9995410e-01, 4.5917721e-05],
[9.9996519e-01, 3.4784229e-05],
[9.9975628e-01, 2.4374224e-04],
[9.9997389e-01, 2.6103005e-05],
[9.9995828e-01, 4.1662061e-05],
[9.9998319e-01, 1.6824890e-05],
[7.0182599e-02, 9.2981732e-01],
[9.9993420e-01, 6.5814391e-05]], dtype=float32)
CPU - the first 10 binary answer predictions:
array([[0.06942184, 0.9305781 ],
[0.06942184, 0.9305781 ],
[0.06942184, 0.9305781 ],
[0.06942184, 0.9305781 ],
[0.06942184, 0.9305781 ],
[0.06942184, 0.9305781 ],
[0.06942184, 0.9305781 ],
[0.06942184, 0.9305781 ],
[0.06942184, 0.9305781 ],
[0.06942184, 0.9305781 ]], dtype=float32)
The model building and compilation:
```
dropout=False
def create_model():
input_shape = (None,)
model = TFRobertaForQuestionAnswering.from_pretrained(model_path, from_pt=True, trainable=True)
input_ids = tf.keras.layers.Input(shape=input_shape, dtype=np.int32, name='input_ids')
attention_mask = tf.keras.layers.Input(shape=input_shape, dtype=np.int32, name='attention_mask')
token_type_ids = tf.keras.layers.Input(shape=input_shape, dtype=np.int32, name='token_type_ids')
outputs = model.roberta(input_ids,
attention_mask=attention_mask,
token_type_ids=token_type_ids,
output_hidden_states=True)
seq_output = outputs[0]
logits = model.qa_outputs(seq_output)
start_logits, end_logits = tf.split(logits, 2, axis=-1)
start_logits = tf.squeeze(start_logits, axis=-1)
end_logits = tf.squeeze(end_logits, axis=-1)
#BINARY_ANSWER
concat_hidden_layers = tf.concat(tuple([outputs.hidden_states[i] for i in [-4, -3, -2, -1]]), axis=-1)
pooled_output = concat_hidden_layers[:, 0, :]
binary_answer_logits = tf.keras.layers.Dense(768,
kernel_initializer=tf.keras.initializers.truncated_normal(stddev=0.02),
activation="tanh",
name="dense_tanh")(pooled_output)
if dropout:
binary_answer_logits = tf.keras.layers.Dropout(0.1)(binary_answer_logits)
binary_answer_probs = tf.keras.layers.Dense(2, activation='softmax', name="binary_answer")(binary_answer_logits)
keras_model = Model(inputs={'input_ids':input_ids,
'attention_mask':attention_mask,
'token_type_ids':token_type_ids},
outputs={'start_logits':start_logits,
'end_logits':end_logits,
'binary_answer_probs':binary_answer_probs})
return keras_model
with strategy.scope():
keras_model = create_model()
optimizer = tf.keras.optimizers.Adam(learning_rate=3e-5, epsilon=1e-08)
keras_model.compile(loss={'start_logits':compute_loss,
'end_logits':compute_loss,
'binary_answer_probs':tf.keras.losses.binary_crossentropy},
optimizer=optimizer)
```
I also get completely different results for span predictions on TPU vs CPU which is not surprising, seeing how different the binary prediction is. Any help or pointers are appreciated.
## Expected behavior
The same inference results on TPU vs CPU.
| 12-15-2020 08:39:37 | 12-15-2020 08:39:37 | Hello, thanks for opening an issue! We try to keep the github issues for bugs/feature requests.
Could you ask your question on the [forum](https://discusss.huggingface.co) instead?
Thanks! |
transformers | 9,117 | closed | Tapas v4 (tres) | Here we are again, opening a new PR based on the former (#8988) which had some Github issues.
cc @LysandreJik | 12-15-2020 08:09:38 | 12-15-2020 08:09:38 | Don't worry about the TF tests, these are because of TF2.4 which are fixed on `master`.<|||||>The conversion script currently includes a line in which I'm importing a local `vocab.txt`, I know this should be removed in the future. |
transformers | 9,116 | closed | Roberta training crashing due to position_id embedding | I've been trying to work out why I keep getting a CUDA assert in a specific mini-batch when training RoBERTa from scratch. I finally tracked it down after switching to CPU.
I don't understand why `padding_idx` is added to `incremental_indices` below? - _edit: I do, in that the embedding needs a padding mask, but I'm not sure it's the correct way to do it._
In my case padding_idx=3. And one of my input_ids rows was truncated. Say I have input_ids = [[4,5,6],[4,3,3]], this results in mask=[[1,1,1],[1,0,0]] and incremental index=[[1,2,3],[1,0,0]]. Adding padding_idx then produces [[4,5,6],[4,3,3]].
The issue is `self.position_embeddings = nn.Embedding(config.max_position_embeddings, config.hidden_size)` so for any sequences which are truncated adding anything to the indices results in an index which is greater than the embedding dim.
Perhaps you can argue that max_position_embeddings is supposed to be larger than the largest possible sequence so this doesn't happen? There is a check in `run_mlm.py` that `data_args.max_seq_length > tokenizer.model_max_length` but it seems that in actual fact to avoid a very hard to track down error you must have tokenizer.model_max_length < data_args.max_seq_length
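To make the failure concrete, here is a small numeric illustration using the toy values from the example above (the embedding is made deliberately tiny so the overflow is visible; these numbers are not from an actual training run):
```python
import torch

padding_idx = 3
max_position_embeddings = 6  # tiny on purpose so the overflow shows up

input_ids = torch.tensor([[4, 5, 6], [4, 3, 3]])
mask = input_ids.ne(padding_idx).int()
incremental_indices = torch.cumsum(mask, dim=1).type_as(mask) * mask
position_ids = incremental_indices.long() + padding_idx
print(position_ids)  # tensor([[4, 5, 6], [4, 3, 3]]); 6 is out of range for 6 embeddings

embedding = torch.nn.Embedding(max_position_embeddings, 8)
# embedding(position_ids) raises "IndexError: index out of range in self"
```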
```
def create_position_ids_from_input_ids(input_ids, padding_idx):
"""
Replace non-padding symbols with their position numbers. Position numbers begin at padding_idx+1. Padding symbols
are ignored. This is modified from fairseq's `utils.make_positions`.
Args:
x: torch.Tensor x:
Returns: torch.Tensor
"""
# The series of casts and type-conversions here are carefully balanced to both work with ONNX export and XLA.
mask = input_ids.ne(padding_idx).int()
incremental_indices = torch.cumsum(mask, dim=1).type_as(mask) * mask
return incremental_indices.long() + padding_idx
``` | 12-15-2020 06:47:16 | 12-15-2020 06:47:16 | One potential fix is to not use the same padding_idx for the position_ids embedding, why not just use 0? The least actual unpadded value of incremental_indices will be one so 0 is a valid pad.
In the `__init__` of `RobertaEmbeddings` (note this code is also duplicated, self.position_embeddings is initialised twice!)
self.position_embeddings = nn.Embedding(
config.max_position_embeddings, config.hidden_size, padding_idx=0 # replaced self.padding_idx with 0
)
And then modify create_position_ids_from_input_ids
```
def create_position_ids_from_input_ids(input_ids, padding_idx):
"""
Replace non-padding symbols with their position numbers. Position numbers begin at padding_idx+1. Padding symbols
are ignored. This is modified from fairseq's `utils.make_positions`.
Args:
x: torch.Tensor x:
Returns: torch.Tensor
"""
# The series of casts and type-conversions here are carefully balanced to both work with ONNX export and XLA.
mask = input_ids.ne(padding_idx).int()
incremental_indices = torch.cumsum(mask, dim=1).type_as(mask) * mask
return incremental_indices.long() # removed + padding_idx
```<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored. |
transformers | 9,115 | closed | [doc] pytorch native amp leak fix landed in 1.7.1 | update README with good news that the leak fix has been applied to pytorch-1.7.1 and not just 1.8.
Reference: https://github.com/pytorch/pytorch/issues/48049#issuecomment-742790722
@LysandreJik | 12-15-2020 04:42:36 | 12-15-2020 04:42:36 | |
transformers | 9,114 | closed | Fix stack overflow | Currently calling `n_sequences` on a `BatchEncoding` results in a stack overflow. | 12-15-2020 04:14:56 | 12-15-2020 04:14:56 | |
transformers | 9,113 | closed | Some Models do not support gradient checkpointing | Thanks for this wonderful library.
I found some models do not support gradient_checkpointing, which I believe is a very important feature. For example,
ElectraModel: ElectraConfig has no gradient_checkpointing option but ElectraModel will use gradient_checkpointing if config.gradient_checkpointing = True
DistilBERT: DistilBertConfig has no gradient_checkpointing option and DistilBertModel does not support gradient_checkpointing.
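For reference, a hedged sketch of how the flag is enabled where it is supported (BERT shown; the attribute name is the one this issue refers to):
```python
from transformers import BertConfig, BertModel

config = BertConfig.from_pretrained("bert-base-uncased", gradient_checkpointing=True)
model = BertModel.from_pretrained("bert-base-uncased", config=config)
```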
I assume all transformer-based models should be able to support gradient_checkpointing. | 12-15-2020 03:28:41 | 12-15-2020 03:28:41 | This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored. |
transformers | 9,112 | closed | Add BORT | Hi,
this PR adds the recently introduced BORT model from @adewynter and Daniel J. Perry from the Alexa team into Transformers.
----
BORT was introduced in the [Optimal Subarchitecture Extraction For BERT](https://arxiv.org/abs/2010.10499).
Details about BORT:
> We extract an optimal subset of architectural parameters for the BERT architecture from Devlin et al. (2018) by applying recent breakthroughs in algorithms for neural architecture search. This optimal subset, which we refer to as "Bort", is demonstrably smaller, having an effective (that is, not counting the embedding layer) size of 5.5% the original BERT-large architecture, and 16% of the net size. Bort is also able to be pretrained in 288 GPU hours, which is 1.2% of the time required to pretrain the highest-performing BERT parametric architectural variant, RoBERTa-large (Liu et al., 2019), and about 33% of that of the world-record, in GPU hours, required to train BERT-large on the same hardware. It is also 7.9x faster on a CPU, as well as being better performing than other compressed variants of the architecture, and some of the non-compressed variants: it obtains performance improvements of between 0.3% and 31%, absolute, with respect to BERT-large, on multiple public natural language understanding (NLU) benchmarks.
This should fix #8135 :hugs:
---
ToDo tasks:
* [x] Upload models (both PyTorch and TensorFlow model) to model hub
* [x] Add conversion script from Gluonnlp to Transformers
* [x] Enable unit tests (they are working and just wait for the model upload) | 12-15-2020 00:24:40 | 12-15-2020 00:24:40 | 🔥 Looking forward to taking a look at the conversion script from GluonNLP/mxnet!<|||||>@patrickvonplaten I added some examples for both `modeling_bort.py` and modeling_tf_bort.py` :hugs:
@julien-c The conversion script is also added - you just need to install `gluonnlp==0.8.3` and `mxnet==1.5.0`.
These versions are defined in the BORT [requirements file](https://github.com/alexa/bort/blob/master/requirements.txt). The conversion script also performs a version check.<|||||>We'll have to think a bit how to advertise this. Let me draft up a "Contribution Proposal" for the fine-tuning algorithm.<|||||>Hey @stefan-it,
I've discussed a bit with @LysandreJik and @sgugger offline and I do agree with @LysandreJik after having thought about it again. I think it's better if we actually don't add any new code (besides the conversion script) that should be added to `src/transformers/models/bert/` and the docs page. I'm very sorry to have you asked to go down this road! I think however it does make more sense to not add any "tokenizer" or "model" code as those are exact copies of the `RobertaTokenizer` and `BertModel`. It's probably most efficient to open a new PR and only add the required files. Super sorry again!<|||||>Are we planning to implement the architectural optimization (FPTAS) or just the pre-trained models?<|||||>> Are we planning to implement the architectural optimization (FPTAS) or just the pre-trained models?
Great question! For now, we'll just add the model weights - see: #9813. A community contribution showing how to do FPTAS in a notebook would be extremely valuable though.<|||||>Closing in favor of #9813 |
transformers | 9,111 | closed | Longformer `token_type_ids` Vocabulary Size is 1 But Documentation States Otherwise | ## Environment info
<!-- You can run the command `transformers-cli env` and copy-and-paste its output below.
Don't forget to fill out the missing fields in that output! -->
- `transformers` version: 3.0.2 and latest (4.0.0)
- Platform: Linux (Ubuntu)
- Python version: 3.6.9 and 3.8.6
- PyTorch version (GPU?): 3.7.0 Tesla P100-PCIE-16GB and Nvidia RTX 3090
- Tensorflow version (GPU?): None
- Using GPU in script?: Yes and No
- Using distributed or parallel set-up in script?: No
### Who can help
Possibly @patrickvonplaten
## Information
Model I am using (Bert, XLNet ...): Longformer
The problem arises when using:
* [ ] the official example scripts: (give details below)
* [x] my own modified scripts: (give details below)
The tasks I am working on is:
* [ ] an official GLUE/SQUaD task: (give the name)
* [x] my own task or dataset: (give details below)
## To reproduce
Steps to reproduce the behavior:
```python
from transformers import AutoTokenizer, AutoModel
tokenizer = AutoTokenizer.from_pretrained('allenai/longformer-base-4096')
model = AutoModel.from_pretrained('allenai/longformer-base-4096')
tokenizer_bert = AutoTokenizer.from_pretrained('bert-base-uncased')
inputs = tokenizer("How old are you?", "I'm 6 years old", return_tensors="pt", return_token_type_ids=True, return_attention_mask=True)
inputs_bert = tokenizer_bert("How old are you?", "I'm 6 years old", return_tensors="pt", return_token_type_ids=True, return_attention_mask=True)
print(inputs)
print(inputs_bert)
```
```python
inputs = {'input_ids': tensor([[ 0, 6179, 793, 32, 47, 116, 2, 2, 100, 437, 231, 107,
793, 2]]), 'token_type_ids': tensor([[0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0]]), 'attention_mask': tensor([[1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1]])}
inputs_bert = {'input_ids': tensor([[ 101, 2129, 2214, 2024, 2017, 1029, 102, 1045, 1005, 1049, 1020, 2086,
2214, 102]]), 'token_type_ids': tensor([[0, 0, 0, 0, 0, 0, 0, 1, 1, 1, 1, 1, 1, 1]]), 'attention_mask': tensor([[1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1]])}
```
```python
import torch

inputs['token_type_ids'] = torch.tensor([[0, 0, 0, 0, 0, 0, 0, 1, 1, 1, 1, 1, 1, 1]])
model.forward(**inputs)
```
Stack Trace:
```
---------------------------------------------------------------------------
IndexError Traceback (most recent call last)
<ipython-input-65-905d6a7d6135> in <module>()
----> 1 model.forward(**inputs)
6 frames
/usr/local/lib/python3.6/dist-packages/transformers/modeling_longformer.py in forward(self, input_ids, attention_mask, global_attention_mask, token_type_ids, position_ids, inputs_embeds, output_attentions, output_hidden_states)
995
996 embedding_output = self.embeddings(
--> 997 input_ids=input_ids, position_ids=position_ids, token_type_ids=token_type_ids, inputs_embeds=inputs_embeds
998 )
999
/usr/local/lib/python3.6/dist-packages/torch/nn/modules/module.py in _call_impl(self, *input, **kwargs)
725 result = self._slow_forward(*input, **kwargs)
726 else:
--> 727 result = self.forward(*input, **kwargs)
728 for hook in itertools.chain(
729 _global_forward_hooks.values(),
/usr/local/lib/python3.6/dist-packages/transformers/modeling_roberta.py in forward(self, input_ids, token_type_ids, position_ids, inputs_embeds)
66
67 return super().forward(
---> 68 input_ids, token_type_ids=token_type_ids, position_ids=position_ids, inputs_embeds=inputs_embeds
69 )
70
/usr/local/lib/python3.6/dist-packages/transformers/modeling_bert.py in forward(self, input_ids, token_type_ids, position_ids, inputs_embeds)
178 inputs_embeds = self.word_embeddings(input_ids)
179 position_embeddings = self.position_embeddings(position_ids)
--> 180 token_type_embeddings = self.token_type_embeddings(token_type_ids)
181
182 embeddings = inputs_embeds + position_embeddings + token_type_embeddings
/usr/local/lib/python3.6/dist-packages/torch/nn/modules/module.py in _call_impl(self, *input, **kwargs)
725 result = self._slow_forward(*input, **kwargs)
726 else:
--> 727 result = self.forward(*input, **kwargs)
728 for hook in itertools.chain(
729 _global_forward_hooks.values(),
/usr/local/lib/python3.6/dist-packages/torch/nn/modules/sparse.py in forward(self, input)
124 return F.embedding(
125 input, self.weight, self.padding_idx, self.max_norm,
--> 126 self.norm_type, self.scale_grad_by_freq, self.sparse)
127
128 def extra_repr(self) -> str:
/usr/local/lib/python3.6/dist-packages/torch/nn/functional.py in embedding(input, weight, padding_idx, max_norm, norm_type, scale_grad_by_freq, sparse)
1850 # remove once script supports set_grad_enabled
1851 _no_grad_embedding_renorm_(weight, input, max_norm, norm_type)
-> 1852 return torch.embedding(weight, input, padding_idx, scale_grad_by_freq, sparse)
1853
1854
IndexError: index out of range in self
```
## Expected behavior
The longformer documentation should be updated and state that the longformer does not support `token_type_ids` like RoBERTa. The `token_type_ids` [vocabulary size is 1](https://huggingface.co/allenai/longformer-base-4096/raw/main/config.json) (compared to [2 for BERT](https://huggingface.co/bert-base-uncased/raw/main/config.json)) for `allenai/longformer-base-4096`, which means `0` is the only valid input for `token_type_ids`. However, [the documentation](https://huggingface.co/transformers/model_doc/longformer.html#transformers.LongformerModel.forward) says `token_type_ids` can be selected in `[0, 1]` for the longformer. The documentation should to specify that the longformer doesn't support `token_type_ids`. For instance, the [RoBERTa documentation](https://huggingface.co/transformers/model_doc/roberta.html) states "RoBERTa doesn’t have `token_type_ids`, you don’t need to indicate which token belongs to which segment. Just separate your segments with the separation token `tokenizer.sep_token` (or `</s>`)." Should a similar message be added for the longformer since it is based on RoBERTa?
| 12-14-2020 22:39:35 | 12-14-2020 22:39:35 | It also might be a good idea to catch this error somewhere before `IndexError: index out of range in self` because that is not descriptive and makes debugging difficult.<|||||>You're right @HHousen - thanks for the note! Do you want to open a PR to fix the docs? That would be awesome :-) Otherwise, I can do it as well <|||||>We are using longformer and we are passing (input_ids, attention_mask , global_attention_mask ,token_type_ids) as input. if we are passing token_type_ids as 0's we are not having any issues but when we try to pass token_type_ids as 1's, or 0's afor segments within the sequence it is throwing following error.
**IndexError: index out of range in self**
**C:/w/b/windows/pytorch/aten/src/ATen/native/cuda/Indexing.cu:658: block: [52,0,0], thread: [32,0,0] Assertion `srcIndex < srcSelectDimSize` failed.**<|||||>i have created separate issue for **index out of range in self ** (https://github.com/huggingface/transformers/issues/9162) while using token_type_ids, from the above comment by @HHousen should i remove token_type_ids as parameter while passing it to model ?<|||||>@yuvarajvc Correct. The Longformer doesn't support `token_type_ids`, so you should not pass them to the model. |