repo | number | state | title | body | created_at | closed_at | comments
---|---|---|---|---|---|---|---|
transformers | 12,325 | closed | How to assign gpu when using run_language_modeling.py | ## Environment info
<!-- You can run the command `transformers-cli env` and copy-and-paste its output below.
Don't forget to fill out the missing fields in that output! -->
- `transformers` version: 4.7.0
- Platform: Linux n188-182-130 4.14.81.bm.23-amd64
- Python version: 3.7.3
- PyTorch version (GPU?):1.7.1
- Tensorflow version (GPU?): 2.4.1
- Using GPU in script?:
- Using distributed or parallel set-up in script?:
### Who can help
- gpt2: @patrickvonplaten, @LysandreJik
## Information
Model I am using (gpt2.):
The tasks I am working on is:
* my own task or dataset: a txt file, each line is regarded as a sample because I set --line_by_line
Dear all,
there are 8 GPUs in my workspace, but GPUs 0 to 3 are already occupied, so I can only use the 4 GPUs from rank 4 to rank 7.
How should I set the parameters when using run_language_modeling.py?
Could someone also explain more clearly what the parameters --local_rank and --tpu_num_cores do if I want to select only some of my GPUs for this training run?
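A minimal sketch of one common way to restrict which GPUs the process sees (this mirrors the `CUDA_VISIBLE_DEVICES` suggestion given in the replies below; the device indices are just an example):
```python
import os

# Expose only GPUs 4-7 to this process; this must be set before torch initializes CUDA.
# It is equivalent to launching with: CUDA_VISIBLE_DEVICES=4,5,6,7 python3 run_language_modeling.py ...
os.environ["CUDA_VISIBLE_DEVICES"] = "4,5,6,7"

import torch  # noqa: E402

print(torch.cuda.device_count())  # should now report 4
```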
Besides, I also found some bugs in Trainer.
For example:
1. During initialization, the Trainer object has no attribute "prediction_loss_only" (line 318).
2. After training, when evaluation starts, the Trainer object has no attribute 'is_world_master'. I set --do_eval and provide an eval set as input.
Here is the script I use in shell:
```
python3 run_language_modeling.py \
--output_dir $output \
--model_type 'gpt2' \
--model_name_or_path 'gpt2' \
--tokenizer_name 'bert-base-uncased' \
--cache_dir $pretrained_config \
--do_train true \
--train_data_file $train_file \
--do_eval true \
--eval_data_file $test_file \
--line_by_line true \
--mlm true \
--learning_rate 1e-4 \
--num_train_epochs 15 \
--per_device_train_batch_size 128 \
```
## Expected behavior
<!-- A clear and concise description of what you would expect to happen. -->
| 06-23-2021 14:37:59 | 06-23-2021 14:37:59 | You should be able to control this with the `CUDA_VISIBLE_DEVICES` environment variable. See this [stackoverflow issue](https://stackoverflow.com/questions/39649102/how-do-i-select-which-gpu-to-run-a-job-on) for example<|||||>> You should be able to control this with the `CUDA_VISIBLE_DEVICES` environment variable. See this [stackoverflow issue](https://stackoverflow.com/questions/39649102/how-do-i-select-which-gpu-to-run-a-job-on) for example
Thank you for your fast reply.
I will try the solution mentioned above.
By the way, when initializing the Trainer in run_language_modeling.py, the errors "Trainer object has no attribute 'prediction_loss_only'" and, after training but before evaluation, "Trainer object has no attribute 'is_world_master'" are still there. I just deleted these two lines; is this a bug, or am I missing some necessary parameters that should be passed in?<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored. |
transformers | 12,324 | closed | fill-mask pipeline: fix handling topk() indices | # What does this PR do?
Fixes #12113, where indices in the targets array were used instead of their corresponding token ids.
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/master/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/master/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
@Narsil @LysandreJik | 06-23-2021 14:23:51 | 06-23-2021 14:23:51 | Hi, thanks for letting us know! We just reverted the PR in the meantime.<|||||>Fixed by #12330 |
transformers | 12,323 | closed | Conda build | 06-23-2021 13:35:21 | 06-23-2021 13:35:21 | Will need to rebase this PR on `master` once https://github.com/huggingface/transformers/pull/12187 is merged |
|
transformers | 12,322 | closed | Generate text with `model.generate` on TPU does not work | ## Environment info
- `transformers` version: 4.7.0
- Platform: Linux-5.4.0-1043-gcp-x86_64-with-glibc2.29 (Ubuntu 20.04.2 LTS)
- Python version: 3.8.5
- PyTorch version (GPU?): 1.8.1+cu102 (False)
- PyTorch XLA version: 1.8.1
- Using GPU in script?: No, using TPU
- Using distributed or parallel set-up in script?: No, using a single TPU core
### Who can help
@patrickvonplaten
## Information
Model I am using (Bert, XLNet ...): `facebook/m2m100_1.2B`, but other text generating models have the same problem.
The problem arises when using:
* [ ] the official example scripts: (give details below)
* [x] my own modified scripts: (give details below)
The tasks I am working on is:
* [ ] an official GLUE/SQUaD task: (give the name)
* [x] my own task or dataset: (give details below)
## To reproduce
On a machine with a TPU run:
```python
import torch_xla.core.xla_model as xm
from transformers import M2M100ForConditionalGeneration, M2M100Tokenizer
model_name = 'facebook/m2m100_1.2B'
source_lang = 'en'
target_lang = 'de'
docs = [
"This is some document to translate.",
"And another document to translate."
]
device = xm.xla_device()
model = M2M100ForConditionalGeneration.from_pretrained(model_name).to(device)
tokenizer = M2M100Tokenizer.from_pretrained(model_name, src_lang=source_lang)
encoded_docs = tokenizer(docs, return_tensors='pt', padding=True).to(device)
generated_tokens = model.generate(**encoded_docs, forced_bos_token_id=tokenizer.get_lang_id(target_lang))
```
The call to `model.generate()` runs without ever terminating. It seems to be stuck somewhere in the beam search.
The same code runs perfectly fine on CPUs and GPUs.
## Expected behavior
I'd expect that the generation of text works in the same way as for CPUs and GPUs.
| 06-23-2021 13:01:37 | 06-23-2021 13:01:37 | This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored.<|||||>The issue still seems to be unresolved. Maybe the generation is not supposed to be supported on TPUs? In that case, a short note in the documentation could be helpful 😄 <|||||>@stekiri - I think we indeed don't support PyTorch-XLA generation yet and yes a comment in the docs would be good! Would you be interested in making such a PR? I've put "enabling TPU generation for PyTorch-XLA" on my TODO list now so hope to tackle this sometime in 3-4 weeks<|||||>I have the same situation on GCP Cloud TPU v2-8 (summarization pipeline with T5ForConditionalGeneration).
I'm eagerly waiting for the support.<|||||>This is still broken<|||||>@patil-suraj it worked for you on TPU no?<|||||>It's possible to use PT `generate` on TPU with `accelerate`, here's a colab which uses GPT2 as an example
https://colab.research.google.com/drive/1OqCLWuEbWLp4fLLWcT-vteEJZHHZ3SZ5?usp=sharing<|||||>> It's possible to use PT `generate` on TPU with `accelerate`, here's a colab which uses GPT2 as an example https://colab.research.google.com/drive/1OqCLWuEbWLp4fLLWcT-vteEJZHHZ3SZ5?usp=sharing
This doesn't seem faster than GPU. I tried to run it on 400 examples and it was slower than GPU. Seems like you are just unloading the model back to CPU, which is actually slower, and we can't really leverage the power of TPU.<|||||>Is there any update on this?<|||||>> Is there any update on this?
I had an exchange with @gante about it and it seems like the code will need major refactoring for this. https://huggingface.co/spaces/joaogante/tf_xla_generate_benchmarks/discussions/1#62eb9350985a691200cf2921<|||||>@mikcnt @divyanshuaggarwal The previous TF generate function was almost a (reduced) copy of the current PT generate function. We had to do a major rework of the TF generate function to make it compatible with XLA, so yeah... PT needs the same treatment if we want to use it with XLA :D
I've shared a twitter thread today about the subject: https://twitter.com/joao_gante/status/1555527603716444160<|||||>@gante thanks a lot for the super exhaustive explanation. Do you think we can expect a refactoring for PT some time soon?
Otherwise, do you know of a temporary workaround to use the generate method on TPU?<|||||>@mikcnt we don't have refactoring PT's generate in our short-term plans -- it is a very labor-intensive refactor whose main benefit is to enable TPU usage (i.e. a niche usage :) ). For context, the TF refactor took me >2 months of dedicated effort (contrarily to PT, the old TF implementation was slow on GPUs, and it was much smaller than PT's generate).
There are no immediate alternatives -- not being fully compatible with XLA implies that it can't get on a TPU effectively. Maybe the model you want to use exists on FLAX/TF, whose generate is compatible with TPUs.
I don't want to clip your wings, so here's my suggestion: our efforts go towards what the majority of the community wants. If you open an issue in `transformers` and attract some attention to PT generation on TPU, the odds of it happening soon increase significantly! |
transformers | 12,321 | closed | [Proposal] Image segmentation pipeline | # What does this PR do?
- Currently very low-level results: simply classification masks + a score and a label for each detected class (see the sketch after this list).
- Could support panoptic segmentation, instance segmentation, bounding boxes, and even part-of-instance segmentation (might require adding the "parent" info in addition, but that's about it).
- Happy to hear some thoughts about the design.
- In the future we could maybe add something like an "aggregation_strategy", which could output a single image with color filters on top of the classes and so on (or leave this reduction outside the pipeline). Getting final images like https://colab.research.google.com/drive/16jcaJoc6bCFAQ96jDe2HwtXj7BMD_-m5#scrollTo=8IRGo8d0qkgR is still a bit involved for users who simply want to "see" outputs.
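To make the proposed output concrete, here is a rough usage sketch; the task name, checkpoint, and exact mask type are assumptions for illustration, and the final API may differ:
```python
from transformers import pipeline

# Hypothetical usage of the proposed pipeline (design under discussion, not a final API).
segmenter = pipeline("image-segmentation", model="facebook/detr-resnet-50-panoptic")
outputs = segmenter("path/to/image.png")

# Per the proposal, each detected class comes back as a dict along these lines:
# [{"mask": <binary mask of shape (height, width)>, "score": 0.97, "label": "cat"}, ...]
for out in outputs:
    print(out["label"], out["score"])
```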
<!--
Congratulations! You've made it this far! You're not quite done yet though.
Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution.
Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change.
Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost.
-->
<!-- Remove if not applicable -->
Fixes # (issue)
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/master/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/master/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
@LysandreJik @NielsRogge
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
<!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @
If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**.
Please tag fewer than 3 people.
Models:
- albert, bert, xlm: @LysandreJik
- blenderbot, bart, marian, pegasus, encoderdecoder, t5: @patrickvonplaten, @patil-suraj
- longformer, reformer, transfoxl, xlnet: @patrickvonplaten
- fsmt: @stas00
- funnel: @sgugger
- gpt2: @patrickvonplaten, @LysandreJik
- rag: @patrickvonplaten, @lhoestq
- tensorflow: @LysandreJik
Library:
- benchmarks: @patrickvonplaten
- deepspeed: @stas00
- ray/raytune: @richardliaw, @amogkam
- text generation: @patrickvonplaten
- tokenizers: @n1t0, @LysandreJik
- trainer: @sgugger
- pipelines: @LysandreJik
Documentation: @sgugger
HF projects:
- datasets: [different repo](https://github.com/huggingface/datasets)
- rust tokenizers: [different repo](https://github.com/huggingface/tokenizers)
Examples:
- maintained examples (not research project or legacy): @sgugger, @patil-suraj
- research_projects/bert-loses-patience: @JetRunner
- research_projects/distillation: @VictorSanh
--> | 06-23-2021 12:16:19 | 06-23-2021 12:16:19 | @NielsRogge can you give this pipeline a look when you get a chance?<|||||>Thanks for this proposal! So the output of the API would be `[{mask, score, label}, ...]`, right? And the shape of `mask` is (height, width)?
I wonder whether this can support any image segmentation model. I'm currently working on implementing [SegFormer](https://arxiv.org/abs/2105.15203) (a new model by NVIDIA), which is a semantic segmentation model. It takes an image as input and produces a segmentation map as output (i.e. it assigns a class to each pixel). This model will only have `outputs.logits`, and they are of shape (batch_size, num_labels, height/4, width/4) - the logits are produced at 1/4th of the original image size. The SegFormer model does not have `outputs.pred_masks`, for example.
This pipeline should support all forms of image segmentation, right? Panoptic, semantic, instance, etc? I think that if we want that, then we should first define what models for each of these forms of segmentation should produce in their `outputs`.<|||||>@NielsRogge
I think pipelines should support as much different models as possible, regardless of the model output. (and clean error when model is not supported)
Proposed implem uses `pred_masks` because that's what currently available for DetR but if some other arch have different outputs, it should be the pipeline's role to be able to use `.logits` instead and still produce the same outputs.
Those are different kind of models, right ? (Like XXXXForSegmentation vs XXXXForPanopticSegmentation)
If yes, then I think it's kind of routine to have switches on those for the pipelines. Less desirable would be to switch on actual model outputs (.logits vs .pred_masks) , and something that we should really strive to avoid is actually looking at the model arch to decide.
Regardless, I think we can recover masks from raw logits, right ? If yes, then I think that proves that current output is good as it would enable pipeline to support SegFormer too.
Would you agree? <|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored.<|||||>Unstale<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored. |
transformers | 12,320 | closed | Add mention of the huggingface_hub methods for offline mode | Adding mention of the `huggingface_hub` for offline mode | 06-23-2021 08:53:29 | 06-23-2021 08:53:29 | You mean the environment variable mentioned 10 lines above?<|||||>Lol, it was not showing in the diff, in my defense ;-)
Thanks for adding this! |
transformers | 12,319 | closed | `fill-mask` pipeline cannot load tokenizer's `config.json` (fixed in 4.8.0) | ## Environment info
- `transformers` version: 4.7.0
- Platform: Linux-5.4.0-74-generic-x86_64-with-glibc2.31
- Python version: 3.9.5
- PyTorch version (GPU?): 1.9.0+cu111 (True)
- Tensorflow version (GPU?): not installed (NA)
- Using GPU in script?: Yes
- Using distributed or parallel set-up in script?: No
### Who can help
@sgugger
@LysandreJik
## Information
Model I am using: RoBERTa
The problem arises when using:
* [ ] the official example scripts: (give details below)
* [X] my own modified scripts: see details below
The tasks I am working on is:
* [ ] an official GLUE/SQUaD task: (give the name)
* [X] my own task or dataset: see details below
## To reproduce
Following the official notebook to train RoBERTa from scratch (tokenizer and model alike). The only addition is saving the RoBERTa tokenizer:
```
tokenizer = RobertaTokenizerFast.from_pretrained("/path/to/BPE/tokenizer", return_special_tokens_mask=True, model_max_length=32) # BPE tokenizer previously trained using the tokenizer library, as per docs, then vocab and merges loaded from transformers' RobertaTokenizerFast
tokenizer.save_pretrained("/path/to/roberta_tk") # resaving the tokenizer, full model now
```
Saving outputs the following:
```
('/path/to/roberta_tk/tokenizer_config.json',
'/path/to/roberta_tk/special_tokens_map.json',
'/path/to/roberta_tk/vocab.json',
'/path/to/roberta_tk/merges.txt',
'/path/to/roberta_tk/added_tokens.json',
'/path/to/roberta_tk/tokenizer.json')
```
Note that there is no `config.json` file, only `tokenizer_config.json`
Then try to load the tokenizer:
```
fill_mask = pipeline(
"fill-mask",
model="/path/to/model",
tokenizer="/path/to/roberta_tk"
)
```
Errors out, complaining that `config.json` is missing. Symlinking `tokenizer_config.json` to `config.json` solves the issue.
## Expected behavior
File name match between tokenizer save output and pipeline input.
| 06-23-2021 07:44:17 | 06-23-2021 07:44:17 | The config it asks for is the model config, not the tokenizer config. The fact the tokenizer can be loaded independently of the model has been fixed recently, so you should try on a source install.<|||||>I will try with a source install, however the error message says that the `config.json` file is missing from the file path specified with the `tokenizer` parameter, not from the file path specified with the `model` argument. My bad that I didn't report the full error message before, here it is:
```
OSError: Can't load config for '/nfs/home/rspreafico/workspace/models/v1/tokenizer/roberta'. Make sure that:
- '/nfs/home/rspreafico/workspace/models/v1/tokenizer/roberta' is a correct model identifier listed on 'https://huggingface.co/models'
- or '/nfs/home/rspreafico/workspace/models/v1/tokenizer/roberta' is the correct path to a directory containing a config.json file
```<|||||>Yes, that was the bug: the tokenizer required to have the model saved in the same directory to be reloaded in a pipeline.<|||||>Gotcha, thank you!<|||||>I cloned the `transformers` repo as of 5 min ago and installed from source, but I am getting the same error message. `transformers-cli env` confirms that I am using the `dev` version of `transformers`:
```
- `transformers` version: 4.8.0.dev0
- Platform: Linux-5.4.0-74-generic-x86_64-with-glibc2.31
- Python version: 3.9.5
- PyTorch version (GPU?): 1.9.0+cu111 (True)
- Tensorflow version (GPU?): not installed (NA)
- Flax version (CPU?/GPU?/TPU?): not installed (NA)
- Jax version: not installed
- JaxLib version: not installed
- Using GPU in script?: <fill in>
- Using distributed or parallel set-up in script?: <fill in>
```<|||||>I'm trying to reproduce but it all works fine on my end. Since I don't have your model and tokenizer, here is the code I execute:
```
from transformers import RobertaTokenizerFast, RobertaForMaskedLM, pipeline
tokenizer = RobertaTokenizerFast.from_pretrained("roberta-base")
tokenizer.save_pretrained("test-tokenizer") # Only the tokenizer files are saved here
model = RobertaForMaskedLM.from_pretrained("roberta-base")
model.save_pretrained("test-model") # Only the model files are saved there
fill_mask = pipeline(
"fill-mask",
model="test-model",
tokenizer="test-tokenizer",
)
fill_mask("My <mask> is Sylvain.")
```<|||||>Ok, found it.
I was merely re-running `fill_mask = pipeline(...)` upon installing the dev version of transformers. This is insufficient to get rid of the error.
Conversely, I needed to re-run the whole notebook, most crucially `tokenizer.save_pretrained(...)`. In `4.8.0.dev0` this adds an additional field to `tokenizer_config.json` which is missing in `4.7.0`, namely `"tokenizer_class": "RobertaTokenizer"`. Without this field (either because the tokenizer was saved with `4.7.0` or because one manually removes it from a file generated with `4.8.0.dev0`), the error message pops up.
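For illustration, a quick way to check whether a saved tokenizer carries that entry (paths are placeholders; this snippet is just a sketch, not from the original thread):
```python
import json

# A tokenizer_config.json written by 4.8.0+ should contain a "tokenizer_class" entry,
# e.g. "RobertaTokenizer"; one saved with 4.7.0 will not.
with open("/path/to/roberta_tk/tokenizer_config.json") as f:
    tokenizer_config = json.load(f)

print(tokenizer_config.get("tokenizer_class"))  # None if the field is missing
```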
Thanks for looking into this!<|||||>Ah yes, good analysis!<|||||>@rspreafico-absci FYI there was an issue with the fill-mask pipeline with the `targets` argument on `master` recently, so if you're running on a source installation I suggest to update it to a more recent version<|||||>Thanks @LysandreJik ! I saw that the official 4.8.0 was released yesterday, so I switched to using the PyPI version now. Can you confirm that 4.8.0 on PyPI is ok to use? Thank you.<|||||>Version v4.8.0 on PyPi is indeed ok to use and should work perfectly well for the fill-mask pipeline. :) <|||||>In my program the `fill-mask` is requiring the **tokenizer_config.json** file. However when I run `tokenizer.save_model` I only get 2 files: vocab.json and merges.txt for my own `ByteLevelBPETokenizer`. How can I generate automatically the tokenizer_config.json file?<|||||>For anyone stumbling here because their tokenizer only saved vocab.config and merges.txt, you need to load your tokenizer and pass it instead of the config.
```python
pipeline(args..., tokenizer=TokenizerClass.from_pretrained('path_to_saved_files'))
``` |
transformers | 12,318 | closed | Downloading the models is getting slower than before |
Downloading models (using the **from_pretrained('')** function) is getting slower than before: the download speed is now stable at 1 MB/s, but it could reach 10 MB/s a month ago.
I wonder if you officially limit the download speed; please let me know if so.
Thanks for your supports. | 06-23-2021 06:52:25 | 06-23-2021 06:52:25 | Nothing changed on our side.
Can you paste the output of e.g. `wget -O /dev/null https://huggingface.co/bert-base-uncased/resolve/main/pytorch_model.bin`?<|||||>
Of course, please check the above picture.<|||||>Yep, that's downloading directly from [AWS Cloudfront](https://aws.amazon.com/cloudfront/), there's not much we can do on our side unfortunately
cc @osanseviero <|||||>Okay, thanks for your attention to this issue.<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored. |
transformers | 12,317 | closed | Tokenizer's normalization preprocessor cause misalignment in return_offsets_mapping for tokenizer classification task | ## Environment info
- `transformers` version: 4.6.0
- Platform: Linux-5.4.104+-x86_64-with-Ubuntu-18.04-bionic
- Python version: 3.7.10
- PyTorch version (GPU?): 1.8.1+cu102 (False)
- Tensorflow version (GPU?): 2.5.0 (False)
- Using GPU in script?: not relevant
- Using distributed or parallel set-up in script?: not relevant
### Who can help
Models:
- albert, bert, xlm: @LysandreJik
Library:
- tokenizers: @LysandreJik
-->
## Information
Model I am using "xlm-roberta"
The problem arises when using:
I am using my own dataset building script in the provided example, but the script should be equivalent to the changes made by this [update](https://github.com/huggingface/datasets/pull/2466)
`get_dataset `is just a simple wrapping for `load_dataset`
and the `tokenizer` is just `XLMRobertaTokenizerFast.from_pretrained("xlm-roberta-large")`
The tasks I am working on is:
Xtreme udpos dataset (or potentially any other multilingual token classification task)
## To reproduce
[This colab notebook](https://colab.research.google.com/drive/151gKyo0YIwnlznrOHst23oYH_a3mAe3Z?usp=sharing) implements a token classification input pipeline extending the logic from [this hugging example](https://huggingface.co/transformers/custom_datasets.html#tok-ner).
The pipeline works fine with most instances in different languages, but unfortunately [the Japanese Kana ligature (a form of abbreviation? I don't know Japanese well)](https://en.wikipedia.org/wiki/Kana_ligature) breaks the alignment of `return_offsets_mapping`:

Without the try catch block, it riase `ValueError: NumPy boolean array indexing assignment cannot assign 88 input values to the 87 output values where the mask is true`, example shown here [(another colab notebook)](https://colab.research.google.com/drive/1ZGj-4LzhnjrDv3PC5nlmi-BVj9vgvFTp?usp=sharing)
```
/content/MLM-disentangle/experiment_datasets/xtreme_ds.py in __getitem__(self, id_absolute)
605 labels[
606 (arr_offset[:, 0] == 0) & (arr_offset[:, 1] != 0) & (ids[:] != 6)
--> 607 ] = self.dataset[lan][id]["pos_tags"]
608 return {
609 "tokens": torch.from_numpy(ids).long(),
ValueError: NumPy boolean array indexing assignment cannot assign 88 input values to the 87 output values where the mask is true
```
It is clear that the normalizer is the step that breaks the alignment: `tokenizer._tokenizer.normalizer.normalize_str('ヿ')` returns 'コト', and both resulting tokens, 'コ' and 'ト', evaluate to True under the `(arr_offset[:, 0] == 0) & (arr_offset[:, 1] != 0) & (ids[:] != 6)` logic, which breaks the alignment of `return_offsets_mapping`.
## Expected behavior
One workaround is to apply `tokenizer._tokenizer.normalizer.normalize_str` before the tokenizer preprocessing pipeline, which is also provided in the [first colab notebook](https://colab.research.google.com/drive/151gKyo0YIwnlznrOHst23oYH_a3mAe3Z?usp=sharing) under the name `udposTestDatasetWorkaround`.
I guess similar logic should be included inside the tokenizer and the offsets_mapping generation process so that users don't need to add it to their own code, but I don't understand the tokenizer code well enough to do this myself.
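For concreteness, a rough sketch of that workaround (assuming a fast tokenizer such as `XLMRobertaTokenizerFast` and word-level inputs; variable names are illustrative, this is not the exact notebook code):
```python
def normalize_words(tokenizer, words):
    # Pre-apply the tokenizer's own normalizer so that characters such as 'ヿ' (-> 'コト')
    # are expanded before tokenization, keeping the offset mapping aligned.
    return [tokenizer._tokenizer.normalizer.normalize_str(w) for w in words]

encoding = tokenizer(
    normalize_words(tokenizer, example_words),  # example_words: list of word strings
    is_split_into_words=True,
    return_offsets_mapping=True,
    truncation=True,
    max_length=512,
)
```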
| 06-23-2021 05:05:38 | 06-23-2021 05:05:38 | This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored. |
transformers | 12,316 | closed | [models] respect dtype of the model when instantiating it | update for future readers - the initially proposed API has changed through the process of review, so below is slightly autodated, e.g. there is no `torch_dtype_auto_detect`, but `torch_dtype=auto`.
----------------
This PR resolves the issue discussed in https://github.com/huggingface/transformers/issues/12062.
The main feature is:
1. model will now be instantiated with the `dtype` passed via `from_pretrained` and `from_config` `torch_dtype` arg
2. alternatively `from_pretrained` now has `torch_dtype_auto_detect` which can do the same automatically
Examples:
```
model = AutoModel.from_config(config, torch_dtype=torch.float16)
model = T5ForConditionalGeneration.from_pretrained(model_path, torch_dtype=torch.float16)
model = T5ForConditionalGeneration.from_pretrained(model_path, torch_dtype_auto_detect=True)
```
**Important: only float dtypes are supported by `torch.set_default_dtype(dtype)`, so if it's not float, say int8, an exception will be generated**
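For readers arriving after the API change noted at the top, a short usage sketch of the merged interface (the checkpoint name is only an example):
```python
import torch
from transformers import AutoModelForSeq2SeqLM

# Instantiate the model directly in half precision...
model = AutoModelForSeq2SeqLM.from_pretrained("t5-small", torch_dtype=torch.float16)

# ...or let from_pretrained derive the dtype automatically; "auto" replaced the
# torch_dtype_auto_detect flag proposed in the original description.
model = AutoModelForSeq2SeqLM.from_pretrained("t5-small", torch_dtype="auto")
```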
Changes:
- `PreTrainedModel` now has a new `_from_config` method where all the context managers for the model instantiation are done
- `auto_factory`'s `from_config` is back to being a thin wrapper - all the deepspeed stuff has now moved to PT's version of `_from_config`
- The PT's version of `_from_config` now also sports the context manager-like functionality for dtype
- TF and Flax now have a thin `_from_config` method
- `from_pretrained`: had to move `torch.load` before model instantiation to enabled auto-discovery
- `from_pretrained`: like `from_config` has a similar context manager for dtype
- extensive tests added
Possible changes:
- I wasn't sure whether to call `config.torch_dtype` or `config.dtype` - I went with the former as it'd be easy to rename after reviews. I don't know whether we want it generic or torch specific - I don't know if tf/flax use similar names, but I guess we could automatically remap those if needed.
- When saving the dtype I saved only "float32" part of "torch.float32" - I could save "torch.float32" instead - either way I have to reconstruct the dtype object from string. same uncertainty as in the item above.
- the dtype context managers are poor man's versions since at the moment they are unique in each place, due to 3 frameworks using the same `from_pretrained` method - if we ever split it then we can use an actual context manager in `from_pretrained`.
Questions:
- should `T5ForConditionalGeneration.from_config(config)` be supported? Currently it's not and `T5ForConditionalGeneration(config)` ignores model instantiation context managers - just saves the config object
- probably should document this feature somewhere in the docs? Any suggestion where? It will work out-of-the-box but the documenting part would be handy for someone who wants to create a model from scratch in a non-default dtype.
Also note the new `log.info` entry:
```
Instantiating T5ForConditionalGeneration model under default dtype torch.float16
```
So one can tell what's happening.
The original version of this PR which tried to do the right thing automatically was dropped due to:
Possible issues:
- fp16 saved models now will be loaded as such and not fp32 as before - so some usages under CPU may fail, e.g. with:
```
RuntimeError: "LayerNormKernelImpl" not implemented for 'Half'
```
e.g. many of our tiny models are fp16.
Another one on CUDA:
```
RuntimeError: Found param model.shared.weight with type torch.cuda.HalfTensor, expected torch.cuda.FloatTensor.
```
Fixes: https://github.com/huggingface/transformers/issues/12062
@sgugger, @LysandreJik, @patrickvonplaten | 06-22-2021 23:45:01 | 06-22-2021 23:45:01 | @sgugger, where should we document the functionality added by this PR? (besides the API).<|||||>ok, this should be good to go.<|||||>Yes, it looks great!<|||||>Great work @stas00, this is really useful! |
transformers | 12,315 | closed | Model is saved every eval_steps steps if eval_steps < save_steps. Is this expected behavior? | ## Environment info
<!-- You can run the command `transformers-cli env` and copy-and-paste its output below.
Don't forget to fill out the missing fields in that output! -->
- `transformers` version: 4.6.1
- Platform: macOS-10.16-x86_64-i386-64bit
- Python version: 3.8.5
- PyTorch version (GPU?): 1.7.1 (False)
- Tensorflow version (GPU?): not installed (NA)
- Using GPU in script?: No
- Using distributed or parallel set-up in script?: No
### Who can help
@sgugger
<!-- Your issue will be replied to more quickly if you can figure out the right person to tag with @
If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**.
Please tag fewer than 3 people.
Models:
- albert, bert, xlm: @LysandreJik
- blenderbot, bart, marian, pegasus, encoderdecoder, t5: @patrickvonplaten, @patil-suraj
- longformer, reformer, transfoxl, xlnet: @patrickvonplaten
- fsmt: @stas00
- funnel: @sgugger
- gpt2: @patrickvonplaten, @LysandreJik
- rag: @patrickvonplaten, @lhoestq
- tensorflow: @Rocketknight1
Library:
- benchmarks: @patrickvonplaten
- deepspeed: @stas00
- ray/raytune: @richardliaw, @amogkam
- text generation: @patrickvonplaten
- tokenizers: @LysandreJik
- trainer: @sgugger
- pipelines: @LysandreJik
Documentation: @sgugger
Model hub:
- for issues with a model report at https://discuss.huggingface.co/ and tag the model's creator.
HF projects:
- datasets: [different repo](https://github.com/huggingface/datasets)
- rust tokenizers: [different repo](https://github.com/huggingface/tokenizers)
Examples:
- maintained examples (not research project or legacy): @sgugger, @patil-suraj
- research_projects/bert-loses-patience: @JetRunner
- research_projects/distillation: @VictorSanh
-->
## Information
Model I am using (Bert, XLNet ...): Bert, but I don't think that is relevant
The problem arises when using:
* [ ] the official example scripts: (give details below)
* [x] my own modified scripts: (give details below)
The tasks I am working on is:
* [ ] an official GLUE/SQUaD task: (give the name)
* [x] my own task or dataset: (give details below)
## To reproduce
Steps to reproduce the behavior:
1. Make a `TrainingArgs` object with `eval_steps < save_steps` and `eval_strategy` and `save_strategy` both set to `"steps"`
2. Pass those to a `Trainer`
3. Model checkpoints every `eval_steps` steps, not every `save_steps` steps
<!-- If you have code snippets, error messages, stack traces please provide them here as well.
Important! Use code tags to correctly format your code. See https://help.github.com/en/github/writing-on-github/creating-and-highlighting-code-blocks#syntax-highlighting
Do not use screenshots, as they are hard to read and (more importantly) don't allow others to copy-and-paste your code.-->
Here is my `TrainingArguments` code:
```python
args = TrainingArguments(
output_dir=outpath,
save_total_limit=10,
load_best_model_at_end=True,
save_strategy="steps" if cli_args.save_steps is not None else "epoch",
save_steps=cli_args.save_steps,
evaluation_strategy="steps" if cli_args.eval_steps is not None else "epoch",
eval_steps=cli_args.eval_steps,
metric_for_best_model="loss",
learning_rate=cli_args.learning_rate,
per_device_train_batch_size=cli_args.batch_size,
per_device_eval_batch_size=cli_args.batch_size,
num_train_epochs=cli_args.num_train_epochs,
weight_decay=cli_args.weight_decay,
fp16=cli_args.fp16,
deepspeed=deepspeed,
local_rank=cli_args.local_rank,
)
```
with the values I am using filled in, this is:
```python
args = TrainingArguments(
output_dir="ten_m/model",
save_total_limit=10,
load_best_model_at_end=True,
save_strategy="steps",
save_steps=6, # for testing
evaluation_strategy="steps",
eval_steps=2, # for testing
metric_for_best_model="loss",
learning_rate=2e-5,
per_device_train_batch_size=16,
per_device_eval_batch_size=16,
num_train_epochs=3,
weight_decay=0.01,
fp16=False,
deepspeed=None,
local_rank=-1,
)
```
## Expected behavior
<!-- A clear and concise description of what you would expect to happen. -->
Well, maybe this is expected? But if so, I feel like it should be documented more obviously.
I wrote a callback to upload the saved checkpoint to GCS, but the eval step is very quick, so I was going to do those much more frequently. However, if evaluating means I have to upload to GCS, then I will evaluate less often. However, I verified that even if I don't use the GCS save callback, with the above settings, a checkpoint is saved every 2 steps, not every 6.
If this is expected behavior, then is the correct way to change it to write a Callback that `on_evaluate` sets the argument of type `transformers.TrainerControl` to have property `should_save` to `False`?
Thank you | 06-22-2021 21:47:15 | 06-22-2021 21:47:15 | You can't have different evaluation and save intervals when using `load_best_model_at_end=True` (save need to be synchronized with evaluation otherwise we can't keep track of the best model). Remove that option and you will have the evaluation and save disconnected as requested.<|||||>Thank you, that makes sense.
Also, now that I know it's related I immediately noticed

Might be worth mentioning under `save_strategy` as well? But maybe it was just me.<|||||>Sure! Do you want to make a PR with that change?<|||||>sure!<|||||>haha it's been a while!

<|||||>Oh indeed! :sweat_smile: |
transformers | 12,314 | closed | Add all XxxPreTrainedModel to the main init | # What does this PR do?
This PR adds all XxxPreTrainedModel classes to the main init, making them public, and more generally, adds a CI quality check to make sure every object in the modeling files that is a subclass of `PreTrainedModel`, `TFPreTrainedModel` or `FlaxPreTrainedModel` is public (except the Encoder, Decoder, Wrapper and the ones explicitly listed as privates).
Fixes #12193 | 06-22-2021 21:27:41 | 06-22-2021 21:27:41 | |
transformers | 12,313 | closed | FlaxBartPretrainedModel -> FlaxBartPreTrainedModel | # What does this PR do?
All is said in the description. I think it's fine to fix for now without backward compatibility as this is a private class and we did not officially release the Flax models yet. | 06-22-2021 20:16:17 | 06-22-2021 20:16:17 | |
transformers | 12,312 | closed | Add possibility to maintain full copies of files | # What does this PR do?
#12252 introduced a file that is a full copy of another file. This PR adds to the existing `check_copies` util script the ability to make sure those copies stay in sync, with a check in `make quality` and an update with `make fix-copies`. | 06-22-2021 19:01:31 | 06-22-2021 19:01:31 | |
transformers | 12,311 | closed | [Flax/JAX] Add how to propose projects markdown | # What does this PR do?
This PR adds a HOW-TO to propose projects in JAX/Flax. | 06-22-2021 17:54:13 | 06-22-2021 17:54:13 | |
transformers | 12,310 | closed | New `--log_level` feature introduces failures using 'passive' mode | ## Environment info
<!-- You can run the command `transformers-cli env` and copy-and-paste its output below.
Don't forget to fill out the missing fields in that output! -->
- `transformers` version: `nightly`
- Platform: PyTorch
- Python version: 3.6
- PyTorch version (GPU?): TPU
- Tensorflow version (GPU?): n/a
- Using GPU in script?: no
- Using distributed or parallel set-up in script?: yes
### Who can help
@stas00 @sgugger
<!-- Your issue will be replied to more quickly if you can figure out the right person to tag with @
If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**.
Please tag fewer than 3 people.
Models:
- albert, bert, xlm: @LysandreJik
- blenderbot, bart, marian, pegasus, encoderdecoder, t5: @patrickvonplaten, @patil-suraj
- longformer, reformer, transfoxl, xlnet: @patrickvonplaten
- fsmt: @stas00
- funnel: @sgugger
- gpt2: @patrickvonplaten, @LysandreJik
- rag: @patrickvonplaten, @lhoestq
- tensorflow: @Rocketknight1
Library:
- benchmarks: @patrickvonplaten
- deepspeed: @stas00
- ray/raytune: @richardliaw, @amogkam
- text generation: @patrickvonplaten
- tokenizers: @LysandreJik
- trainer: @sgugger
- pipelines: @LysandreJik
Documentation: @sgugger
Model hub:
- for issues with a model report at https://discuss.huggingface.co/ and tag the model's creator.
HF projects:
- datasets: [different repo](https://github.com/huggingface/datasets)
- rust tokenizers: [different repo](https://github.com/huggingface/tokenizers)
Examples:
- maintained examples (not research project or legacy): @sgugger, @patil-suraj
- research_projects/bert-loses-patience: @JetRunner
- research_projects/distillation: @VictorSanh
-->
## Information
Model I am using (Bert, XLNet ...): XLNet
The problem arises when using:
* [x] the official example scripts: (give details below)
* [ ] my own modified scripts: (give details below)
The tasks I am working on is:
* [x] an official GLUE/SQUaD task: (give the name)
* [ ] my own task or dataset: (give details below)
This was captured by Cloud TPU tests (XLNet/MNLI/GLUE), but I think this behavior is model/dataset agnostic. Essentially, it seems that:
1. The `training_args`'s `__post_init__` method _should_ [convert the `log_level` to `-1`](https://github.com/huggingface/transformers/blob/dad414d5f9c20627ee6c16f62e8a2056916bf35b/src/transformers/training_args.py#L606) if it's set to 'passive' (which it is by default).
2. However in the end-to-end `run_glue.py` example, using [`parse_args_into_dataclasses()`](https://github.com/huggingface/transformers/blob/dad414d5f9c20627ee6c16f62e8a2056916bf35b/examples/pytorch/text-classification/run_glue.py#L199) seems to not call `__post_init__`, as our tests are failing with:
```
Traceback (most recent call last):
File "/root/anaconda3/envs/pytorch/lib/python3.6/site-packages/torch_xla/distributed/xla_multiprocessing.py", line 329, in _mp_start_fn
_start_fn(index, pf_cfg, fn, args)
File "/root/anaconda3/envs/pytorch/lib/python3.6/site-packages/torch_xla/distributed/xla_multiprocessing.py", line 323, in _start_fn
fn(gindex, *args)
File "/transformers/examples/pytorch/text-classification/run_glue.py", line 554, in _mp_fn
main()
File "/transformers/examples/pytorch/text-classification/run_glue.py", line 468, in main
data_collator=data_collator,
File "/root/anaconda3/envs/pytorch/lib/python3.6/site-packages/transformers/trainer.py", line 295, in __init__
logging.set_verbosity(log_level)
File "/root/anaconda3/envs/pytorch/lib/python3.6/site-packages/transformers/utils/logging.py", line 161, in set_verbosity
_get_library_root_logger().setLevel(verbosity)
File "/root/anaconda3/envs/pytorch/lib/python3.6/logging/__init__.py", line 1284, in setLevel
self.level = _checkLevel(level)
File "/root/anaconda3/envs/pytorch/lib/python3.6/logging/__init__.py", line 195, in _checkLevel
raise ValueError("Unknown level: %r" % level)
ValueError: Unknown level: 'passive'
```
## To reproduce
Steps to reproduce the behavior:
1. The command we're using is:
```
git clone https://github.com/huggingface/transformers.git
cd transformers && pip install .
git log -1
pip install datasets
python examples/pytorch/xla_spawn.py \
--num_cores 8 \
examples/pytorch/text-classification/run_glue.py \
--logging_dir=./tensorboard-metrics \
--task_name MNLI \
--cache_dir ./cache_dir \
--do_train \
--do_eval \
--num_train_epochs 3 \
--max_seq_length 128 \
--learning_rate 3e-5 \
--output_dir MNLI \
--overwrite_output_dir \
--logging_steps 30 \
--save_steps 3000 \
--overwrite_cache \
--tpu_metrics_debug \
--model_name_or_path xlnet-large-cased \
--per_device_train_batch_size 32 \
--per_device_eval_batch_size 16
```
<!-- If you have code snippets, error messages, stack traces please provide them here as well.
Important! Use code tags to correctly format your code. See https://help.github.com/en/github/writing-on-github/creating-and-highlighting-code-blocks#syntax-highlighting
Do not use screenshots, as they are hard to read and (more importantly) don't allow others to copy-and-paste your code.-->
## Expected behavior
<!-- A clear and concise description of what you would expect to happen. -->
| 06-22-2021 17:43:16 | 06-22-2021 17:43:16 | Thank you for the report, it will be fixed shortly via https://github.com/huggingface/transformers/pull/12309
I'm just working on a test - need another 10min or so<|||||>Thank you for fixing this so quickly! |
transformers | 12,309 | closed | [trainer] 2 bug fixes and a rename | Fixes a bug in https://github.com/huggingface/transformers/pull/12257 and https://github.com/huggingface/transformers/pull/12276 and renames the function in the latter and adds a docstring.
Also added an extended DDP test to test `log_level_replica`.
Fixes: https://github.com/huggingface/transformers/issues/12310
@sgugger
| 06-22-2021 16:27:19 | 06-22-2021 16:27:19 | |
transformers | 12,308 | closed | odd whitespace handling with imported sentencepiece models | ## Environment info
- `transformers` version: 4.7.0
- Platform: Linux-4.15.0-143-generic-x86_64-with-Ubuntu-18.04-bionic
- Python version: 3.6.9
- PyTorch version (GPU?): 1.9.0+cu102 (False)
- Tensorflow version (GPU?): not installed (NA)
- Using GPU in script?: no
- Using distributed or parallel set-up in script?: no
### Who can help
@LysandreJik Although my example uses ReformerTokenizer, I think this problem is present in several of the model architectures using sentencepiece tokenizers.
## Information
Model I am using (Bert, XLNet ...): Reformer
The problem arises when using:
* [x] the official example scripts: (give details below)
* [ ] my own modified scripts: (give details below)
The tasks I am working on is:
* [ ] an official GLUE/SQUaD task: (give the name)
* [x] my own task or dataset: (give details below)
## To reproduce
Steps to reproduce the behavior:
```
#!/usr/bin/env python3
import sentencepiece as spm
import transformers as tr
src = (
'Lorem Ipsum dolor sit amet, consectetur adipiscing elit, sed do',
'eiusmod tempor incididunt ut labore et dolore magna aliqua. Ut',
'enim ad minim veniam, quis nostrud exercitation ullamco laboris',
'nisi ut aliquip ex ea commodo consequat. Duis aute irure dolor in',
'reprehenderit in voluptate velit esse cillum dolore eu fugiat',
'nulla pariatur. Excepteur sint occaecat cupidatat non proident,',
'sunt in culpa qui officia deserunt mollit anim id est laborum.',
)
spm.SentencePieceTrainer.train(
sentence_iterator=iter(src),
model_prefix='test',
vocab_size=96,
treat_whitespace_as_suffix=True,
user_defined_symbols=['<pad>', '<mask>'],
minloglevel=1,
)
def show(label, toks):
print('%14s %2d: %s' % (label, len(toks), toks))
text = 'Lo<mask>m Ipsum'
tok = spm.SentencePieceProcessor(model_file='test.model')
show('sentencepiece', tok.encode(text, out_type=str))
tok = tr.models.reformer.ReformerTokenizerFast('test.model',
mask_token='<mask>',
pad_token='<pad>')
show('transformers', tok.tokenize(text))
tok.save_pretrained('test')
tr.models.reformer.ReformerConfig().save_pretrained('test')
tok = tr.AutoTokenizer.from_pretrained('test')
show('AutoTokenizer', tok.tokenize(text))
```
is giving
```
sentencepiece 9: ['L', 'o', '<mask>', 'm▁', 'I', 'p', 's', 'um', '▁']
transformers 10: ['▁', 'L', 'o', '<mask>', 'm', '▁', 'I', 'p', 's', 'um']
AutoTokenizer 11: ['▁', 'L', 'o', '<mask>', '▁', 'm', '▁', 'I', 'p', 's', 'um']
```
## Expected behavior
I believe the tokenization of input text should be more consistent. I think these variations are cropping up between my attempts to pretrain a language model and then later finetune the saved model, resulting in model accuracy problems.
The use of `treat_whitespace_as_suffix=True` in `sentencepiece` makes this problem worse, but using a sentencepiece model without this flag still shows the `AutoTokenizer.from_pretrained()` created tokenizer inserting whitespace that was not present in the source text. I haven't been able to track down where this is coming from or how to avoid it. | 06-22-2021 16:07:04 | 06-22-2021 16:07:04 | `spm.SentencePieceTrainer` and `ReformerTokenizerFast` are not the same tokenizers, so it's not unusual that each of them outputs different results.
However, I'm not sure how the two tokenizers are different. It's because of the lack of my knowledge.
Regarding the difference between `ReformerTokenizerFast` and `AutoTokenizer`, I discovered something.
One of the easiest ways to make the two tokenizers output the same results is to remove `mask_token='<mask>'` and `test` directory where previous config files exist (if there is `test` folder).
Another way is to remove `special_tokens_map.json` and `tokenizer_config.json` (after `save_pretrained`) that are unnecessary files when using fast tokenizer.
I don't know what is the cause of this problem, but I guess there are conflicts between configurations of fast tokenizer and tokenizer.
<|||||>This may have two underlying causes, one perhaps a serialization issue between `.save_pretrained()` and `AutoTokenizer.from_pretrained()`, and a separate issue related to different behavior between `PreTrainedTokenizer` and `PreTrainedTokenizerFast`.
Here is perhaps a clearer example of the variations:
```python
#!/usr/bin/env python3
import sentencepiece as spm
import transformers as tr
src = (
'Lorem Ipsum dolor sit amet, consectetur adipiscing elit, sed do',
'eiusmod tempor incididunt ut labore et dolore magna aliqua. Ut',
'enim ad minim veniam, quis nostrud exercitation ullamco laboris',
'nisi ut aliquip ex ea commodo consequat. Duis aute irure dolor in',
'reprehenderit in voluptate velit esse cillum dolore eu fugiat',
'nulla pariatur. Excepteur sint occaecat cupidatat non proident,',
'sunt in culpa qui officia deserunt mollit anim id est laborum.',
)
spm.SentencePieceTrainer.train(
sentence_iterator=iter(src),
model_prefix='test',
vocab_size=96,
treat_whitespace_as_suffix=True,
user_defined_symbols=['<pad>', '<mask>'],
minloglevel=1,
)
def show(label, toks):
print('%8s %2d: %s' % (label, len(toks), toks))
text = 'Lo<mask>m Ipsum'
cfg = tr.T5Config()
tok = tr.T5Tokenizer('test.model', mask_token='<mask>', pad_token='<pad>')
show('tr.slow', tok.tokenize(text))
cfg.save_pretrained('test_slow')
tok.save_pretrained('test_slow')
tok = tr.AutoTokenizer.from_pretrained('test_slow', use_fast=False)
show('at.slow', tok.tokenize(text))
tok = tr.T5TokenizerFast('test.model', mask_token='<mask>', pad_token='<pad>')
show('tr.fast', tok.tokenize(text))
cfg.save_pretrained('test_fast')
tok.save_pretrained('test_fast')
tok = tr.AutoTokenizer.from_pretrained('test_fast')
show('at.fast', tok.tokenize(text))
```
giving
```
tr.slow 9: ['L', 'o', '<mask>', 'm▁', 'I', 'p', 's', 'um', '▁']
at.slow 10: ['L', 'o', '▁', '<mask>', 'm▁', 'I', 'p', 's', 'um', '▁']
tr.fast 10: ['▁', 'L', 'o', '<mask>', 'm', '▁', 'I', 'p', 's', 'um']
at.fast 11: ['▁', 'L', 'o', '<mask>', '▁', 'm', '▁', 'I', 'p', 's', 'um']
```
The first one is consistent with `sentencepiece` directly, which is not surprising because these tokenizers use `spm.SentencePieceProcessor()` to encode.
@europeanplaice It looks like you're making headway on the serialization part, which is great. I can file a separate ticket for the differences between the `PreTrainedTokenizer` and `PreTrainedTokenizerFast` subclasses if that part turns out unrelated.<|||||>@tlby
Thank you for giving another example.
It helped me a lot.
For now, I'm not sure whether the two underlying causes are unrelated.
The following is about the serialization issue.
I've found that `tr.T5Tokenizer` or `ReformerTokenizer` don't expect `mask_token`. So this attribute is taken as an element of `kwargs`. (fast tokenizer may also be the same.)
Refer to
https://huggingface.co/transformers/model_doc/t5.html#t5tokenizer
https://huggingface.co/transformers/model_doc/reformer.html#reformertokenizer
I guess that when first initializing the tokenizer, it ignores `mask_token`, but when you recreate it by `from_pretrained`, it treats `mask_token` as a special token, then it tokenizes text differently.
In the first initialization, the tokenizer uses the default setting.
So it doesn't consider `mask_token` even if a user passes it as an argument. It gives the text to sentencepiece without any preprocessing, and `sentencepiece` recognizes `<mask>` as a mask token.
However, the tokenizer also writes `<mask>` as a special token in the config JSON.
When recreating the tokenizer by `from_pretrained`, it processes the tokens defined in the config JSON.
Before passing the text to sentencepiece, it splits the text to ['Lo', '`<mask>`', 'm Ipsum'] and sentencepiece tokenizes the elements except for `<mask>` then combines.
When I removed `mask_token='<mask>'` as below
```python
tok = tr.T5Tokenizer('test.model', pad_token='<pad>')
tok = tr.T5TokenizerFast('test.model', pad_token='<pad>')
```
then results were
```
tr.slow 9: ['L', 'o', '<mask>', 'm▁', 'I', 'p', 's', 'um', '▁']
at.slow 9: ['L', 'o', '<mask>', 'm▁', 'I', 'p', 's', 'um', '▁']
tr.fast 10: ['▁', 'L', 'o', '<mask>', 'm', '▁', 'I', 'p', 's', 'um']
at.fast 10: ['▁', 'L', 'o', '<mask>', 'm', '▁', 'I', 'p', 's', 'um']
```
It would explain the difference between `tr.slow` and `at.slow`.
It also would explain the difference between `tr.fast` and `at.fast`.
<|||||>When I do not specify a mask_token to the `PreTrainedTokenizer`, trying to use it in `examples/pytorch/language-modeling/run_mlm.py` gives errors.
```
[ERROR|tokenization_utils_base.py:1017] 2021-06-26 22:40:50,070 >> Using mask_token, but it is not set yet.
Traceback (most recent call last):
File "run_mlm.py", line 504, in <module>
main()
File "run_mlm.py", line 438, in main
pad_to_multiple_of=8 if pad_to_multiple_of_8 else None,
File "<string>", line 6, in __init__
File "/usr/local/lib/python3.6/dist-packages/transformers/data/data_collator.py", line 335, in __post_init__
"This tokenizer does not have a mask token which is necessary for masked language modeling. "
ValueError: This tokenizer does not have a mask token which is necessary for masked language modeling. You should pass `mlm=False` to train on causal language modeling instead.
```
I don't have a standalone example of this, but if you need one I can work one out.<|||||>Pinging @n1t0 for advice<|||||>@LysandreJik @n1t0
Thank you for checking this conversation.<|||||>My situation is somewhat more sensitive to whitespace than most western languages because I am hoping to do language modeling with HTML source code. In a sample such as `<a href="page.html">the page</a>` the insertion of whitespace after an attribute name `<a href ="page.html">the page</a>` takes us outside the realm of samples in the training set, which is why this problem is significant.
Rather than trying to recycle one of the existing `sentencepiece` based tokenizers, I worked out [my own](/tlby/rnd-html/blob/e65f246cece32b55f1a49e76f5bcad8dfc077839/mytok.py) PreTrainedTokenizer subclass I hope will be compatible with various transformer architectures. So far it is working nicely for BERT in `run_mlm.py`. The `AutoTokenizer.from_pretrained()` instance is behaving consistently and since I don't implement a PreTrainedTokenizerFast I don't have problems with it getting upgraded and changing tokenization behavior.
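For readers curious what such a subclass involves, a bare-bones outline follows (this is not the linked implementation; it only sketches the usual methods a slow `PreTrainedTokenizer` subclass provides, with trivial whitespace splitting as a stand-in):
```python
from transformers import PreTrainedTokenizer

class MyHtmlTokenizer(PreTrainedTokenizer):
    """Illustrative skeleton only; real splitting/vocab logic would go here."""

    def __init__(self, vocab, **kwargs):
        self.vocab = dict(vocab)                              # token -> id
        self.ids_to_tokens = {i: t for t, i in self.vocab.items()}
        super().__init__(**kwargs)

    @property
    def vocab_size(self):
        return len(self.vocab)

    def get_vocab(self):
        return dict(self.vocab)

    def _tokenize(self, text):
        return text.split()                                   # placeholder tokenization

    def _convert_token_to_id(self, token):
        return self.vocab.get(token, self.vocab.get(self.unk_token, 0))

    def _convert_id_to_token(self, index):
        return self.ids_to_tokens.get(index, self.unk_token)

    def convert_tokens_to_string(self, tokens):
        return " ".join(tokens)

    def save_vocabulary(self, save_directory, filename_prefix=None):
        ...                                                   # write vocab file(s), return their paths
```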
If you can confirm this approach isn't an obvious anti-pattern, then that might be enough to consider my issue resolved.<|||||>Tokenizers are generally engineered for specific use-cases and don't always adapt to different domains. The tokenizers here didn't fit your use-cases so you went and implemented one yourself so that it best fits your particular problem - I believe this is the best way to handle it and ensure it behaves consistently across your tasks.
I took a look at your `PreTrainedTokenizer` and I think you have everything covered! If ever you have some time available, it would be very helpful for us to know if you ran into issues implementing that subclass, or general ideas of how we could make it easier for you to implement custom tokenizers such as this one. For example, adding an option to register new tokenizers (such as the proposition in https://github.com/huggingface/transformers/issues/10256) to the `AutoTokenizer` mapping would probably have come in handy.
If you think of anything else, please feel free to open an issue with proposals, even if very high-level, so that we may improve the API. Thank you!<|||||>> If ever you have some time available, it would be very helpful for us to know if you ran into issues implementing that subclass
As with most mildly complicated inheritance APIs I had a couple of false starts trying to write something clean and minimalist first, then something heavy that got way out of hand. Once I tracked down a working example that seemed closest to what I was trying to do, retooling and cleaning progressed rapidly.
Probably the biggest technical hurdle was assuming I wanted a "Fast" tokenizer and trying too hard to trick the `tokenizers` library into doing what I needed, which was almost, but not quite possible.<|||||>That's interesting feedback, thank you. We'll keep this in mind when working on improving the tokenizer API.<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored. |
transformers | 12,307 | closed | Tokenizing in the dataset and padding manually using tokenizer.pad in the collator | ## Environment info
- `transformers` version: 4.2.2
- Platform: Linux-5.4.0-74-generic-x86_64-with-debian-bullseye-sid
- Python version: 3.7.10
- PyTorch version (GPU?): 1.8.1 (True)
- Tensorflow version (GPU?): not installed (NA)
- Using GPU in script?: Yes
- Using distributed or parallel set-up in script?: Yes
### Who can help
@LysandreJik
## Information
Model I am using (Bert, XLNet ...): Bert
The problem arises when using:
* [ ] the official example scripts: (give details below)
* [x] my own modified scripts: (give details below)
The tasks I am working on is:
* [ ] an official GLUE/SQUaD task: (give the name)
* [x] my own task or dataset: (give details below)
## To reproduce
I am trying to avoid tokenizing in the collator in order to improve the speed of data loading, which is why I wanted to tokenize everything in advance and then simply pad in the collator. I can't provide the entire code; however, here are my `Dataset` and my `Collator`, which will hopefully be enough.
```python
from typing import Any, Dict, List

import pandas as pd
import torch
from torch.utils.data import Dataset
from tqdm import tqdm
from transformers import BertTokenizer

class DatasetTokenized(Dataset):
def __init__(self, data: pd.DataFrame, text_column: str,
label_columns: List[str], tokenizer_name: str):
super(DatasetTokenized, self).__init__()
self.data = data
self.text_column = text_column
self.label_columns = label_columns
self.tokenizer = BertTokenizer.from_pretrained(tokenizer_name)
self.tokenized_data = self.tokenize_data(data)
def __len__(self) -> int:
return len(self.tokenized_data)
def __getitem__(self, index: int) -> Dict:
return self.tokenized_data[index]
def tokenize_data(self, data: pd.DataFrame):
tokenized_data = []
print('Tokenizing data:')
for _, row in tqdm(data.iterrows(), total=len(data)):
text = row[self.text_column]
labels = row[self.label_columns]
encoding = self.tokenizer(text,
add_special_tokens=True,
max_length=512,
padding=False,
truncation=True,
return_attention_mask=True,
return_tensors='pt')
tokenized_data.append({
'text': text,
'encoding': encoding,
'labels': torch.FloatTensor(labels)
})
return tokenized_data
class BertCollatorTokenized:
def __init__(self, tokenizer_name: str):
super(BertCollatorTokenized, self).__init__()
self.tokenizer = BertTokenizer.from_pretrained(tokenizer_name)
def __call__(self, batch: List[Any]):
text, encodings, labels = zip(
*[[sample['text'], sample['encoding'], sample['labels']]
for sample in batch])
encodings = list(encodings)
encodings = self.tokenizer.pad(encodings,
max_length=512,
padding='longest',
return_tensors='pt')
return {
'text': text,
'input_ids': encodings['input_ids'],
'attention_mask': encodings['attention_mask'],
'labels': torch.FloatTensor(labels)
}
```
Error:
>ValueError: Unable to create tensor, you should probably activate truncation and/or padding with 'padding=True' 'truncation=True' to have batched tensors with the same length.
Full error message:
```
File "train_text_classificator.py", line 78, in main
trainer.fit(lightning_system, data_module)
File "/home/jav/anaconda3/envs/experimental_framework/lib/python3.7/site-packages/pytorch_lightning/trainer/trainer.py", line 458, in fit
self._run(model)
File "/home/jav/anaconda3/envs/experimental_framework/lib/python3.7/site-packages/pytorch_lightning/trainer/trainer.py", line 756, in _run
self.dispatch()
File "/home/jav/anaconda3/envs/experimental_framework/lib/python3.7/site-packages/pytorch_lightning/trainer/trainer.py", line 797, in dispatch
self.accelerator.start_training(self)
File "/home/jav/anaconda3/envs/experimental_framework/lib/python3.7/site-packages/pytorch_lightning/accelerators/accelerator.py", line 96, in start_training
self.training_type_plugin.start_training(trainer)
File "/home/jav/anaconda3/envs/experimental_framework/lib/python3.7/site-packages/pytorch_lightning/plugins/training_type/training_type_plugin.py", line 144, in start_training
self._results = trainer.run_stage()
File "/home/jav/anaconda3/envs/experimental_framework/lib/python3.7/site-packages/pytorch_lightning/trainer/trainer.py", line 807, in run_stage
return self.run_train()
File "/home/jav/anaconda3/envs/experimental_framework/lib/python3.7/site-packages/pytorch_lightning/trainer/trainer.py", line 842, in run_train
self.run_sanity_check(self.lightning_module)
File "/home/jav/anaconda3/envs/experimental_framework/lib/python3.7/site-packages/pytorch_lightning/trainer/trainer.py", line 1107, in run_sanity_check
self.run_evaluation()
File "/home/jav/anaconda3/envs/experimental_framework/lib/python3.7/site-packages/pytorch_lightning/trainer/trainer.py", line 949, in run_evaluation
for batch_idx, batch in enumerate(dataloader):
File "/home/jav/anaconda3/envs/experimental_framework/lib/python3.7/site-packages/torch/utils/data/dataloader.py", line 517, in __next__
data = self._next_data()
File "/home/jav/anaconda3/envs/experimental_framework/lib/python3.7/site-packages/torch/utils/data/dataloader.py", line 1199, in _next_data
return self._process_data(data)
File "/home/jav/anaconda3/envs/experimental_framework/lib/python3.7/site-packages/torch/utils/data/dataloader.py", line 1225, in _process_data
data.reraise()
File "/home/jav/anaconda3/envs/experimental_framework/lib/python3.7/site-packages/torch/_utils.py", line 429, in reraise
raise self.exc_type(msg)
ValueError: Caught ValueError in DataLoader worker process 0.
Original Traceback (most recent call last):
File "/home/jav/anaconda3/envs/experimental_framework/lib/python3.7/site-packages/transformers-4.2.2-py3.8.egg/transformers/tokenization_utils_base.py", line 771, in convert_to_tensors
tensor = as_tensor(value)
ValueError: expected sequence of length 4 at dim 2 (got 13)
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "/home/jav/anaconda3/envs/experimental_framework/lib/python3.7/site-packages/torch/utils/data/_utils/worker.py", line 202, in _worker_loop
data = fetcher.fetch(index)
File "/home/jav/anaconda3/envs/experimental_framework/lib/python3.7/site-packages/torch/utils/data/_utils/fetch.py", line 47, in fetch
return self.collate_fn(data)
File "/home/jav/experimental-framework/data_utils/collators/transformers_collatоrs.py", line 97, in __call__
return_tensors='pt')
File "/home/jav/anaconda3/envs/experimental_framework/lib/python3.7/site-packages/transformers-4.2.2-py3.8.egg/transformers/tokenization_utils_base.py", line 2706, in pad
return BatchEncoding(batch_outputs, tensor_type=return_tensors)
File "/home/jav/anaconda3/envs/experimental_framework/lib/python3.7/site-packages/transformers-4.2.2-py3.8.egg/transformers/tokenization_utils_base.py", line 276, in __init__
self.convert_to_tensors(tensor_type=tensor_type, prepend_batch_axis=prepend_batch_axis)
File "/home/jav/anaconda3/envs/experimental_framework/lib/python3.7/site-packages/transformers-4.2.2-py3.8.egg/transformers/tokenization_utils_base.py", line 788, in convert_to_tensors
"Unable to create tensor, you should probably activate truncation and/or padding "
ValueError: Unable to create tensor, you should probably activate truncation and/or padding with 'padding=True' 'truncation=True' to have batched tensors with the same length.
```
## Expected behavior
I would expect `self.tokenizer.pad(encodings, ... )` in the collator to work without issues when given a list of `BatchEncoding` elements.
| 06-22-2021 15:44:24 | 06-22-2021 15:44:24 | Some additional info that might help. Encodings looks like: `encodings = [batch_encoding_1, ... , batch_encoding_2]`. Each batch encoding looks like:
```python
{'input_ids': tensor([[ 101, 1006, 1039, 1007, 2065, 1996, 13666, 11896, 2000, 14037,
2007, 2019, 14987, 2104, 2023, 11075, 3429, 1010, 1999, 2804,
2000, 2151, 2060, 2128, 7583, 3111, 1997, 1996, 4054, 1010,
1996, 4054, 2089, 4685, 2008, 14987, 2006, 1996, 13666, 1005,
1055, 6852, 1998, 2151, 3465, 22667, 2011, 1996, 4054, 2097,
2022, 1037, 7016, 2349, 2013, 1996, 13666, 2000, 1996, 4054,
1012, 102]]), 'token_type_ids': tensor([[0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0,
0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0,
0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0]]), 'attention_mask': tensor([[1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1,
1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1,
1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1]])}
```
And this is the line that eventually raises an exception:
https://github.com/huggingface/transformers/blob/1498eb9888d55d76385b45e074f26703cc5049f3/src/transformers/tokenization_utils_base.py#L699
<|||||>I managed to make a small reproducible example:
```python
from transformers import BertTokenizer
from torch import tensor
tokenizer = BertTokenizer.from_pretrained('bert-base-uncased')
encodings = [{'input_ids': tensor([[ 101, 1006, 1039, 1007, 2065, 1996, 13666, 11896, 2000, 14037,
2007, 2019, 14987, 2104, 2023, 11075, 3429, 1010, 1999, 2804,
2000, 2151, 2060, 2128, 7583, 3111, 1997, 1996, 4054, 1010,
1996, 4054, 2089, 4685, 2008, 14987, 2006, 1996, 13666, 1005,
1055, 6852, 1998, 2151, 3465, 22667, 2011, 1996, 4054, 2097,
2022, 1037, 7016, 2349, 2013, 1996, 13666, 2000, 1996, 4054,
1012, 102]]), 'token_type_ids': tensor([[0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0,
0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0,
0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0]]), 'attention_mask': tensor([[1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1,
1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1]])}, {'input_ids': tensor([[ 101, 1006, 1037, 1007, 2202, 2046, 4070, 2035, 1997, 1996, 7882, 6214,
1997, 1996, 3563, 3105, 1010, 2164, 1996, 3872, 2030, 3635, 1997, 1996,
7170, 2000, 2022, 2333, 1010, 3292, 2000, 2022, 7837, 2005, 4651, 1010,
2334, 4026, 3785, 1010, 2051, 1997, 2154, 1010, 3517, 3403, 2335, 1010,
2569, 2609, 3785, 1998, 2060, 2569, 6214, 1025, 1998, 102]]), 'token_type_ids': tensor([[0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0,
0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0,
0, 0, 0, 0, 0, 0, 0, 0, 0, 0]]), 'attention_mask': tensor([[1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1,
1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1,
1, 1, 1, 1, 1, 1, 1, 1, 1, 1]])}]
batched_encodings = tokenizer.pad(encodings, padding='longest', return_tensors='pt')
```<|||||>@LysandreJik any update on this?<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored.<|||||>@LysandreJik @patrickvonplaten @sgugger
I apologize for tagging Patrick and Sylvain, but as Lysandre seems to be busy, do you perhaps know someone who can help with this?<|||||>The `tokenizer.pad` method only applies padding for list of examples, so each of the elements in your `encoding` should be one-dimensional. If you remove the extra pair of [] in all your tensors in your minimal example, it will work.
Also please use the [forums](https://discuss.huggingface.co/) for questions around the library as we keep the issues for bugs and feature requests only.<|||||>Thanks a lot @sgugger , I posted it her as it looked like a bug to me based on the documentation. Additionally, those extra set of parenthesis come from the tokenizer not me. When running:
```python
encoding = self.tokenizer(text,
add_special_tokens=True,
max_length=512,
padding=False,
truncation=True,
return_attention_mask=True,
return_tensors='pt')
```
You get those extra parenthesis. I am assuming they come because in the background of the `__call__` method, `batch_encode` is called instead of `encode`. Am I doing something wrong in the way I am using the tokenizer? My main goal is to simply tokenize the entire dataset beforehand, and only pad during training.<|||||>You should not use `return_tensors='pt'` for just one text, that option is designed to create batches you directly pass to your model. So if you use it with one text, you get a batch of one encoding. Either add [0] to select the only element of that batch in your dataset, or create the tensors in the collate function.<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored. |
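For readers landing here later, a minimal self-contained sketch of the usage suggested above (the model name and texts are just examples):
```python
from transformers import BertTokenizer

tokenizer = BertTokenizer.from_pretrained("bert-base-uncased")

# in the Dataset: tokenize WITHOUT return_tensors, so each item stays un-batched
texts = ["first example text", "a second, slightly longer example text"]
encodings = [tokenizer(t, truncation=True, max_length=512, padding=False) for t in texts]

# in the collator: pad the list of un-batched encodings into one batch of tensors
batch = tokenizer.pad(encodings, padding="longest", return_tensors="pt")
print(batch["input_ids"].shape)  # (2, length_of_longest_sequence)
```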
transformers | 12,306 | closed | Dimensional weight error | ## Environment info
- `transformers` version: 4.7.0
- Platform: Windows
- Python version: 3.7.5
- PyTorch version (GPU?): 1.8.1+cu111
- Tensorflow version (GPU?): 2.4.0
- Using GPU in script?: yes
- Using distributed or parallel set-up in script?: no
### Who can help
Models:
wav2vec2: @patrickvonplaten
## To reproduce
Steps to reproduce the behavior:
1. Same steps as in the fine-tuning wav2vec2 blog post: https://huggingface.co/blog/fine-tune-wav2vec2-english
```
---------------------------------------------------------------------------
RuntimeError Traceback (most recent call last)
<ipython-input-51-9db903a616c9> in <module>
----> 1 trainer.train()
2 trainer.save_model('content/wav2vec2-nepali-openslr-54_10000')
3 trainer.tokenizer.save_pretrained('content/wav2vec2-nepali-openslr-54_10000')
4 processor.save_pretrained('content/wav2vec2-nepali-openslr-54_10000')
d:\work\asr\transformer_test\env\lib\site-packages\transformers\trainer.py in train(self, resume_from_checkpoint, trial, **kwargs)
1261 tr_loss += self.training_step(model, inputs)
1262 else:
-> 1263 tr_loss += self.training_step(model, inputs)
1264 self.current_flos += float(self.floating_point_ops(inputs))
1265
d:\work\asr\transformer_test\env\lib\site-packages\transformers\trainer.py in training_step(self, model, inputs)
1744 if self.use_amp:
1745 with autocast():
-> 1746 loss = self.compute_loss(model, inputs)
1747 else:
1748 loss = self.compute_loss(model, inputs)
d:\work\asr\transformer_test\env\lib\site-packages\transformers\trainer.py in compute_loss(self, model, inputs, return_outputs)
1778 else:
1779 labels = None
-> 1780 outputs = model(**inputs)
1781 # Save past state if it exists
1782 # TODO: this needs to be fixed and made cleaner later.
d:\work\asr\transformer_test\env\lib\site-packages\torch\nn\modules\module.py in _call_impl(self, *input, **kwargs)
887 result = self._slow_forward(*input, **kwargs)
888 else:
--> 889 result = self.forward(*input, **kwargs)
890 for hook in itertools.chain(
891 _global_forward_hooks.values(),
d:\work\asr\transformer_test\env\lib\site-packages\transformers\models\wav2vec2\modeling_wav2vec2.py in forward(self, input_values, attention_mask, output_attentions, output_hidden_states, return_dict, labels)
1470 output_attentions=output_attentions,
1471 output_hidden_states=output_hidden_states,
-> 1472 return_dict=return_dict,
1473 )
1474
d:\work\asr\transformer_test\env\lib\site-packages\torch\nn\modules\module.py in _call_impl(self, *input, **kwargs)
887 result = self._slow_forward(*input, **kwargs)
888 else:
--> 889 result = self.forward(*input, **kwargs)
890 for hook in itertools.chain(
891 _global_forward_hooks.values(),
d:\work\asr\transformer_test\env\lib\site-packages\transformers\models\wav2vec2\modeling_wav2vec2.py in forward(self, input_values, attention_mask, mask_time_indices, output_attentions, output_hidden_states, return_dict)
1042 return_dict = return_dict if return_dict is not None else self.config.use_return_dict
1043
-> 1044 extract_features = self.feature_extractor(input_values)
1045 extract_features = extract_features.transpose(1, 2)
1046
d:\work\asr\transformer_test\env\lib\site-packages\torch\nn\modules\module.py in _call_impl(self, *input, **kwargs)
887 result = self._slow_forward(*input, **kwargs)
888 else:
--> 889 result = self.forward(*input, **kwargs)
890 for hook in itertools.chain(
891 _global_forward_hooks.values(),
d:\work\asr\transformer_test\env\lib\site-packages\transformers\models\wav2vec2\modeling_wav2vec2.py in forward(self, input_values)
329 hidden_states = input_values[:, None]
330 for conv_layer in self.conv_layers:
--> 331 hidden_states = conv_layer(hidden_states)
332
333 return hidden_states
d:\work\asr\transformer_test\env\lib\site-packages\torch\nn\modules\module.py in _call_impl(self, *input, **kwargs)
887 result = self._slow_forward(*input, **kwargs)
888 else:
--> 889 result = self.forward(*input, **kwargs)
890 for hook in itertools.chain(
891 _global_forward_hooks.values(),
d:\work\asr\transformer_test\env\lib\site-packages\transformers\models\wav2vec2\modeling_wav2vec2.py in forward(self, hidden_states)
222
223 def forward(self, hidden_states):
--> 224 hidden_states = self.conv(hidden_states)
225
226 hidden_states = hidden_states.transpose(-2, -1)
d:\work\asr\transformer_test\env\lib\site-packages\torch\nn\modules\module.py in _call_impl(self, *input, **kwargs)
887 result = self._slow_forward(*input, **kwargs)
888 else:
--> 889 result = self.forward(*input, **kwargs)
890 for hook in itertools.chain(
891 _global_forward_hooks.values(),
d:\work\asr\transformer_test\env\lib\site-packages\torch\nn\modules\conv.py in forward(self, input)
261
262 def forward(self, input: Tensor) -> Tensor:
--> 263 return self._conv_forward(input, self.weight, self.bias)
264
265
d:\work\asr\transformer_test\env\lib\site-packages\torch\nn\modules\conv.py in _conv_forward(self, input, weight, bias)
258 _single(0), self.dilation, self.groups)
259 return F.conv1d(input, weight, bias, self.stride,
--> 260 self.padding, self.dilation, self.groups)
261
262 def forward(self, input: Tensor) -> Tensor:
RuntimeError: Expected 3-dimensional input for 3-dimensional weight [512, 1, 10], but got 4-dimensional input of size [1, 1, 1, 43200] instead
``` | 06-22-2021 14:16:57 | 06-22-2021 14:16:57 | I am currently getting the same error - how did you solve it? Thanks in advance! |
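A speculative note for anyone hitting the same error (an assumption, not a confirmed diagnosis from this thread): the convolutional feature extractor expects `input_values` of shape `(batch, time)`, so a 4-D input usually means the raw audio array carried an extra dimension. A quick check/fix sketch, where `batch["speech"]` and `processor` are the hypothetical names from the blog-post setup referenced above:
```python
import numpy as np

speech_array = np.asarray(batch["speech"])   # hypothetical field holding the raw audio
if speech_array.ndim > 1:
    speech_array = np.squeeze(speech_array)  # drop singleton channel/batch dimensions

inputs = processor(speech_array, sampling_rate=16_000, return_tensors="pt")
print(inputs.input_values.shape)             # expected: (1, time)
```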
transformers | 12,305 | closed | [Flax] Main doc for event orga | # What does this PR do?
This PR adds the main document for the Flax/JAX community week organization.
@patil-suraj @suzana-ilic @osanseviero @thomwolf @avital @marcvanzee | 06-22-2021 13:59:31 | 06-22-2021 13:59:31 | |
transformers | 12,304 | closed | Add CodeCarbon Integration | # What does this PR do?
This PR adds `codecarbon` for carbon footprint tracking. This is also useful for BigScience.
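For orientation, standalone `codecarbon` usage typically looks roughly like the sketch below; the `EmissionsTracker` start/stop API shown here is an assumption based on the library's documented interface, not part of this PR's diff:
```python
from codecarbon import EmissionsTracker  # assumed API, check the codecarbon docs

tracker = EmissionsTracker()
tracker.start()
# ... run training here ...
emissions = tracker.stop()  # estimated emissions for the tracked block
print(emissions)
```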
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/master/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/master/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
| 06-22-2021 12:39:56 | 06-22-2021 12:39:56 | cc @sashavor |
transformers | 12,303 | closed | Fix and improve documentation for LEDForConditionalGeneration | # What does this PR do?
As reported in #12268, the example for text generation with LED does not work because it relies on an implementation detail of the BART model for which it was originally conceived. Further, the summarization example uses a checkpoint that was not finetuned on a summarization task, leading to the model just repeating the entire input.
This PR replaces both examples with versions that are fully functional and illustrate the respective task.
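A hedged sketch of the kind of working summarization example the PR describes; the checkpoint name (a LED model fine-tuned on a summarization task) and the generation settings are assumptions, not necessarily what the final docs use:
```python
import torch
from transformers import LEDForConditionalGeneration, LEDTokenizer

checkpoint = "allenai/led-large-16384-arxiv"  # assumed summarization-finetuned LED checkpoint
tokenizer = LEDTokenizer.from_pretrained(checkpoint)
model = LEDForConditionalGeneration.from_pretrained(checkpoint)

article = "..."  # placeholder for a long input document
inputs = tokenizer(article, return_tensors="pt")

# LED generally wants global attention on at least the first token
global_attention_mask = torch.zeros_like(inputs.input_ids)
global_attention_mask[:, 0] = 1

summary_ids = model.generate(inputs.input_ids,
                             global_attention_mask=global_attention_mask,
                             num_beams=4,
                             max_length=256)
print(tokenizer.batch_decode(summary_ids, skip_special_tokens=True))
```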
Fixes #12268
## Before submitting
- [x] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/master/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/master/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
@sgugger @patrickvonplaten
| 06-22-2021 12:26:27 | 06-22-2021 12:26:27 | Thanks again! |
transformers | 12,302 | closed | electra SequenceClassification layer change | # What does this PR do?
Fixes # (issue)
Removed the extra projection layer from `ElectraClassificationHead` to make the fine-tuning model architecture consistent with the original implementation.
As ELECTRA pre-training doesn't have a sentence contrastive task, there is no pooling layer for ELECTRA.
Code in the original codebase -
1. No additional projection, just returning the CLS token representation.
https://github.com/google-research/electra/blob/8a46635f32083ada044d7e9ad09604742600ee7b/model/modeling.py#L266
2. pooling_output -> dropout -> classification_dense
https://github.com/google-research/electra/blob/8a46635f32083ada044d7e9ad09604742600ee7b/finetune/classification/classification_tasks.py#L218
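An illustrative sketch of the simplified head implied by the description above (dropout over the CLS representation followed by a single dense classifier); this is not the actual diff of the PR:
```python
import torch.nn as nn

class SimplifiedElectraClassificationHead(nn.Module):
    """Illustrative only: CLS representation -> dropout -> dense classifier,
    mirroring the original ELECTRA fine-tuning code linked above."""

    def __init__(self, config):
        super().__init__()
        self.dropout = nn.Dropout(config.hidden_dropout_prob)
        self.out_proj = nn.Linear(config.hidden_size, config.num_labels)

    def forward(self, hidden_states):
        x = hidden_states[:, 0, :]  # take the [CLS] token representation, no extra projection
        x = self.dropout(x)
        return self.out_proj(x)
```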
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/master/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/master/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
@LysandreJik
| 06-22-2021 11:27:53 | 06-22-2021 11:27:53 | This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored. |
transformers | 12,301 | closed | Are there any examples showing how to use the `metric_for_best_model` of class `TrainingArguments`? | <img width="567" alt="截屏2021-06-22 下午5 49 14" src="https://user-images.githubusercontent.com/37548571/122903624-2a128c80-d382-11eb-9400-4bb884d6c0c6.png">
In the [documentation](https://huggingface.co/transformers/master/main_classes/trainer.html#trainingarguments), it says one can pass a `str` to `metric_for_best_model`. I am kind of confused about how this works. For example, can I just set `metric_for_best_model="accuracy"`, and it will compute accuracy itself? And if I have my own metric, how can I customize it? Thank you😄
| 06-22-2021 09:50:42 | 06-22-2021 09:50:42 | Hello, thanks for opening an issue! We try to keep the github issues for bugs/feature requests.
Could you ask your question on the [forum](https://discuss.huggingface.co) instead?
Pinging @sgugger
Thanks!<|||||>The metrics need to be computed and have a name that is in what your `compute_metrics` function returns.<|||||>> The metrics need to be computed and have a name that is in what your `compute_metrics` function returns.
This works for me, thank you~
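To make the accepted answer concrete, a minimal sketch (the metric name and values are illustrative):
```python
import numpy as np
from transformers import TrainingArguments

def compute_metrics(eval_pred):
    logits, labels = eval_pred
    preds = np.argmax(logits, axis=-1)
    # the returned keys are what metric_for_best_model can refer to
    return {"accuracy": (preds == labels).mean()}

args = TrainingArguments(
    output_dir="out",
    evaluation_strategy="epoch",
    save_strategy="epoch",
    load_best_model_at_end=True,
    metric_for_best_model="accuracy",  # matches the key returned above
)
# pass compute_metrics=compute_metrics (and these args) to Trainer
```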
transformers | 12,300 | closed | [Flax] ViT training example | # What does this PR do?
This PR adds an image classification script for fine-tuning Flax ViT.
For faster processing and loading , torch dataloader is used, since this can become a bottleneck on TPU. | 06-22-2021 09:23:25 | 06-22-2021 09:23:25 | Can you run `make style` ? |
transformers | 12,299 | closed | T ** 2 in distillation process | Hello,
I am confused about one part in the https://github.com/huggingface/transformers/blob/master/examples/research_projects/distillation/distiller.py script by VictorSanh et. al.

Why does the script weight the KL divergence between the student and the teacher distribution with an additional `T ** 2`, namely the `* (self.temperature) ** 2` part in line 417?
The Hinton paper says something about weighting with the squared temperature value, but in this blog post: https://medium.com/huggingface/distilbert-8cf3380435b5 by VictorSanh (the same author), the KL value (found around the middle of the page) does not seem to be weighted.
What am I missing here? Thank you.
@VictorSanh | 06-22-2021 08:04:39 | 06-22-2021 08:04:39 | This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored. |
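For context, the reasoning given in Hinton et al. (2015) for the squared factor, summarized here rather than quoted from this thread, is a gradient-scaling argument: with student logits $z_i$, teacher logits $v_i$ and softened probabilities $q_i$, $p_i$,
```latex
\frac{\partial C_{\text{soft}}}{\partial z_i} \;=\; \frac{1}{T}\,\bigl(q_i - p_i\bigr)
\;\approx\; \frac{1}{N\,T^{2}}\,\bigl(z_i - v_i\bigr) \qquad \text{(high-temperature limit, $N$ classes)}
```
so the soft-target term is multiplied by $T^2$ to keep its gradient magnitude comparable to the hard-label loss when $T$ changes; a simplified write-up can omit the factor without changing the argument.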
transformers | 12,298 | closed | add FlaxAutoModelForImageClassification in main init | # What does this PR do?
Adds `FlaxAutoModelForImageClassification` and `FLAX_MODEL_FOR_IMAGE_CLASSIFICATION_MAPPING` in main init. | 06-22-2021 06:35:59 | 06-22-2021 06:35:59 | |
transformers | 12,297 | closed | Error: while executing run_qa.py from examples/pytorch/question-answering/ directory | I am using the Google Colab environment to perform QA task. To evaluate finetuned model I am using run_qa.py file but in the **STEP-2** below, while using run_qa.py, I am getting mentioned error.
**STEP1:**
In the first step, the bert_base_cased model trained on the MLM task is finetuned on QA task using SQuAD.
```python
from transformers import (AutoModelForQuestionAnswering, BertTokenizer, Trainer,
                          TrainingArguments, default_data_collator)

model = AutoModelForQuestionAnswering.from_pretrained(path_of_my_bert_base_cased_MLM_checkpoint)

args = TrainingArguments(
    "test-squad",
    evaluation_strategy="epoch",
    learning_rate=2e-5,
    per_device_train_batch_size=16,
    per_device_eval_batch_size=16,
    num_train_epochs=1,
    weight_decay=0.01,
)

# In the code below I am trying to freeze embedding learning for experimental purposes
for param in model.parameters():
    param.requires_grad = True
for param in model.get_input_embeddings().parameters():
    param.requires_grad = False

data_collator = default_data_collator
tokenizer = BertTokenizer(<location-of-my-custom-vocab>,
                          do_lower_case=False,
                          model_max_length=128)

trainer = Trainer(
    model,
    args,
    train_dataset=tokenized_datasets["train"],
    eval_dataset=tokenized_datasets["validation"],
    data_collator=data_collator,
    tokenizer=tokenizer,
)

trainer.train()
```
**STEP 2:**
Now, while using the above QA fine-tuned model with the run_qa.py file and the following parameters, I am getting a _panicked at 'assertion failed'_ error:
```bash
python run_qa.py \
  --model_name_or_path <path-to-my-above-model> \
  --dataset_name xquad \
  --dataset_config_name xquad.hi \
  --do_eval \
  --per_device_train_batch_size 12 \
  --learning_rate 3e-5 \
  --num_train_epochs 2 \
  --max_seq_length 384 \
  --doc_stride 128 \
  --output_dir /content
```
Error Message:
```
06/22/2021 04:49:39 - WARNING - __main__ - The max_seq_length passed (384) is larger than the maximum length for the model (128). Using max_seq_length=128.
Running tokenizer on validation dataset: 0% 0/2 [00:00<?, ?ba/s]thread '<unnamed>' panicked at 'assertion failed: stride < max_len', /__w/tokenizers/tokenizers/tokenizers/src/tokenizer/encoding.rs:322:9
note: run with `RUST_BACKTRACE=1` environment variable to display a backtrace
```
## Environment info
- `transformers` version: 4.8.0.dev0
- Platform: Linux-5.4.104+-x86_64-with-Ubuntu-18.04-bionic
- Python version: 3.7.10
- PyTorch version (GPU?): 1.9.0+cu102 (True)
- Tensorflow version (GPU?): 2.5.0 (True)
- Using GPU in script?: yes
- Using distributed or parallel set-up in script?: parallel set-up
### Who can help
@LysandreJik,@sgugger, @patil-suraj
| 06-22-2021 06:00:35 | 06-22-2021 06:00:35 | You are using a model with `max_length = 128` but then pass along `max_seq_length 384`, which is overridden to become 128 since that is the maximum the tokenizer thinks the model can handle because of:
```
tokenizer = BertTokenizer(do_lower_case=False, model_max_length=128)
```
and then trying to use a stride of 128. Either pick a greater max length or lower the stride to be < 128.<|||||>@sgugger Thanks a lot. I have reduced stride and it is working now. |
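For anyone hitting the same `assertion failed: stride < max_len` panic, a small sketch of the two fixes suggested above (the values and the vocab path are illustrative placeholders):
```python
from transformers import BertTokenizer

path_to_custom_vocab = "vocab.txt"  # placeholder, as in the original post

# Option 1: let the tokenizer advertise the model's real limit when exporting the
# fine-tuned checkpoint, so run_qa.py is not silently capped at 128
tokenizer = BertTokenizer(path_to_custom_vocab,
                          do_lower_case=False,
                          model_max_length=512)

# Option 2: keep model_max_length=128 and instead lower the stride, e.g. pass
# `--doc_stride 64` (any value below 128) to run_qa.py
```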
transformers | 12,296 | closed | BART infilling example? | Hi,
I'm trying this official example from the documentation (v4.7.0) for BART mask infilling :
"https://huggingface.co/transformers/model_doc/bart.html#bartforconditionalgeneration"
```
from transformers import BartForConditionalGeneration, BartTokenizer
model = BartForConditionalGeneration.from_pretrained("facebook/bart-large", force_bos_token_to_be_generated=True)
tok = BartTokenizer.from_pretrained("facebook/bart-large")
example_english_phrase = "UN Chief Says There Is No <mask> in Syria"
batch = tok(example_english_phrase, return_tensors='pt')
generated_ids = model.generate(batch['input_ids'])
assert tok.batch_decode(generated_ids, skip_special_tokens=True) == ['UN Chief Says There Is No Plan to Stop Chemical Weapons in Syria']
```
First, I get this `TypeError: __init__() got an unexpected keyword argument 'force_bos_token_to_be_generated`
After removing that argument, the output is:` ['UNALSO SEE']`, which is unexpected.
I also tried several other examples but get behavior which does not seem like infilling:
```
example_english_phrase = "They are having a <mask> in a park."
batch = tok(example_english_phrase, return_tensors='pt')
generated_ids = model.generate(batch['input_ids'])
tok.batch_decode(generated_ids, skip_special_tokens=True)
```
Output: `"'They are in a park.They are having a party.'"`
**Do I have the wrong model or documentation?** Grateful for any pointers to help resolve this.
@patrickvonplaten, @patil-suraj
- `transformers` version: 4.7.0
- Python version: 3.7
- PyTorch version (GPU?): 1.7.1+cu110
| 06-22-2021 05:39:19 | 06-22-2021 05:39:19 | (Resolved, but documentation needs to be updated)
```
from transformers import BartConfig
config = BartConfig(force_bos_token_to_be_generated=True)
model = BartForConditionalGeneration.from_pretrained("facebook/bart-large", config=config)
...
```
will let the model do infilling. <|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored. |
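Putting the resolution together into one runnable sketch, assembled from the snippets in this thread (behavior of the flag may differ across transformers versions):
```python
from transformers import BartConfig, BartForConditionalGeneration, BartTokenizer

config = BartConfig(force_bos_token_to_be_generated=True)
model = BartForConditionalGeneration.from_pretrained("facebook/bart-large", config=config)
tok = BartTokenizer.from_pretrained("facebook/bart-large")

batch = tok("UN Chief Says There Is No <mask> in Syria", return_tensors="pt")
generated_ids = model.generate(batch["input_ids"])
print(tok.batch_decode(generated_ids, skip_special_tokens=True))
```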
transformers | 12,295 | closed | [examples] replicate the new `--log_level` feature to all trainer-based pytorch examples | https://github.com/huggingface/transformers/pull/12276 introduced a new `--log_level` feature, which now allows users to set their desired log level via CLI or TrainingArguments.
`run_translation.py` was used as a "model" for other examples.
Now we need to replicate this to all other Trainer-based examples under examples/pytorch/, the 3 changes are
1. importing datasets
2. using `training_args.get_node_log_level()` and setting log_level in 3 modules
3. replacing `datasets` object name with `raw_datasets`, since otherwise we have a conflict with `datasets` the module.
and the relevant diff is [here](https://github.com/huggingface/transformers/pull/12276/files?file-filters%5B%5D=.py#diff-09777f56cee1060a535a72ce99a6c96cdb7f330c8cc3f9dcca442b3f7768237a)
and of course since we don't quite have extensive tests for examples, you can just test with a staple cmd from the corresponding README.md with `--log_level=error` and check that almost all logs are gone.
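For reference, the pattern to replicate looks roughly like this (a sketch based on `run_translation.py`, using the helper name after the rename mentioned in the comments below):
```python
import logging
import datasets
import transformers
from transformers import TrainingArguments

logger = logging.getLogger(__name__)

# `--log_level error` on the command line ends up here
training_args = TrainingArguments(output_dir="out", log_level="error")

log_level = training_args.get_process_log_level()
logger.setLevel(log_level)
datasets.utils.logging.set_verbosity(log_level)
transformers.utils.logging.set_verbosity(log_level)
```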
This is open to all.
And thank you. | 06-22-2021 02:51:20 | 06-22-2021 02:51:20 | Can I take this?<|||||>Yes, of course! Thank you, @bhadreshpsavani <|||||>@bhadreshpsavani, once https://github.com/huggingface/transformers/pull/12309 gets merged, please rebase your branch and rename
```
- get_node_log_level()
+ get_process_log_level()
```
Thank you!<|||||>ok, it's merged now.<|||||>Thanks!<|||||>How can I contribute?<|||||>Please refer to https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md |
transformers | 12,294 | closed | [tests] multiple improvements | This PR:
1. splits the 2 groups of `TrainerIntegrationTest` tests into separate subclasses, since at the moment many run 2x training in `setUp` and then don't use the results - should make things a bit faster, and mainly this removes the weird unexpected logs when debugging tests.
2. introduces, uses and documents `require_torch_up_to_2_gpus`, as we have a bunch of tests that can only run on up to 2 GPUs and currently they weren't `skipped`, but hacked to report `passed` anyway! (A sketch of such a marker is included below.)
3. fixes `test_resume_training_with_randomness` to use `assertAlmostEqual` so that we get debug data when it fails - and also fixes the comment to match the code, i.e. that this test only works with 0 or 1 GPUs - and uses the marker.
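A sketch of what such a skip marker can look like (illustrative only, not the exact `testing_utils` implementation):
```python
import unittest
import torch

def require_torch_up_to_2_gpus(test_case):
    """Skip (rather than silently pass) when more than 2 GPUs are available."""
    if torch.cuda.device_count() > 2:
        return unittest.skip("test requires 0, 1 or 2 GPUs")(test_case)
    return test_case
```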
@sgugger | 06-22-2021 00:09:18 | 06-22-2021 00:09:18 | |
transformers | 12,293 | closed | [tests] reset report_to to none, avoid deprecation warning | This PR fixes the warnings:
```
The default value for the training argument `--report_to` will change in v5 (from all installed integrations to none).
In v5, you will need to use `--report_to all` to get the same behavior as now. You should start updating your code
and make this info disappear :-).
```
by setting `report_to=[]`, which also makes the tests a tad faster by not doing any reporting, unless the test explicitly asks for it.
@sgugger | 06-21-2021 23:22:24 | 06-21-2021 23:22:24 | |
transformers | 12,292 | closed | Fix for the issue of device-id getting hardcoded for position-ids during Tracing for Flaubert | # What does this PR do?
This PR is part of a series of PRs that follows PR #11252 and applies similar changes to Flaubert.
Fixes # (issue)
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [x] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case. Issues #5664 and #976.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/master/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/master/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests? Does not apply.
## Who can review?
@LysandreJik | 06-21-2021 21:09:33 | 06-21-2021 21:09:33 | |
transformers | 12,291 | closed | Trainer: adjust wandb installation example | Hi,
this is a very pedantic fix for the `wandb` installation example.
In the original version, `wandb login` will be executed, even when the previous command - `pip install wandb` - failed.
This can be solved by using the "and" operator.
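Concretely, assuming the standard shell `&&` operator is what is meant, the example becomes `pip install wandb && wandb login`, so the login step only runs when the installation succeeded.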
@sgugger | 06-21-2021 20:52:48 | 06-21-2021 20:52:48 | |
transformers | 12,290 | closed | Fix for the issue of device-id getting hardcoded for position-ids during Tracing for Distillbert | # What does this PR do?
This PR is part of a series of PRs that follows PR #11252 and applies similar changes to Distillbert.
Fixes # (issue)
Registering a buffer for position_ids in the constructor and then resizing it in the forward method based on input-shape.
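An illustrative sketch of that pattern (not the exact diff):
```python
import torch
import torch.nn as nn

class EmbeddingsSketch(nn.Module):
    def __init__(self, vocab_size: int, dim: int, max_position_embeddings: int):
        super().__init__()
        self.word_embeddings = nn.Embedding(vocab_size, dim)
        self.position_embeddings = nn.Embedding(max_position_embeddings, dim)
        # buffer lives on the module, so tracing does not bake in a fixed device id
        self.register_buffer("position_ids", torch.arange(max_position_embeddings).expand((1, -1)))

    def forward(self, input_ids):
        seq_length = input_ids.size(1)
        position_ids = self.position_ids[:, :seq_length]  # resized to the input shape
        return self.word_embeddings(input_ids) + self.position_embeddings(position_ids)
```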
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [X] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [X] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case. issues #5664 and #976
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/master/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/master/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests? does not apply.
## Who can review?
@LysandreJik . | 06-21-2021 19:21:35 | 06-21-2021 19:21:35 | |
transformers | 12,289 | closed | Fix TFWav2Vec2 SpecAugment | # What does this PR do?
This PR fixes the SpecAugment implementation for TFWav2Vec2. I had a lot of trouble with this during the original PR, so I'm not 100% sure this is correct. I would love feedback because at this point there must be a knowledge gap.
Fixes # (issue)
https://github.com/huggingface/transformers/issues/12264
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [x] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/master/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/master/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
@patrickvonplaten
| 06-21-2021 18:18:39 | 06-21-2021 18:18:39 | Based on the original paper the time mask should be applied on the time axis like

However, wav2vec2 is masking hidden states. In principle, it should work the same way and if correctly applied you should be able to see the zeroed span in the time dimension of the hidden states.<|||||>Hey @will-rice,
Thanks for fixing the bug. As I understand it, when applying spec_augment along the time axis we are actually not setting the values to zero but to a trained mask embedding vector (this is mostly because that's how it's used for pretraining actually).
IMO, to correctly implement this one has to make use of `tf.where`. A good way would be to expand the masked indices and `self.masked_spec_embed` to be of the same size as `hidden_states` (as explained above) and use `tf.where`.
For spec_augment along the feature axis I would also suggest to use `tf.where` -> expand the feature indices (this time along the time axis (seq length)) and then one can simply do:
```
hidden_states = tf.where(expanded_mask, hidden_states, 0)
```<|||||>Let me know if this is not understandable ;-) <|||||>> Hey @will-rice,
>
> Thanks for fixing the bug. As I understand it, when applying spec_augment along the time axis we are actually not setting the values to zero but to a trained mask embedding vector (this is mostly because that's how it's used for pretraining actually).
> IMO, to correctly implement this one has to make use of `tf.where`. A good way would be to expand the masked indices and `self.masked_spec_embed` to be of the same size as `hidden_states` (as explained above) and use `tf.where`.
>
> For spec_augment along the feature axis I would also suggest to use `tf.where` -> expand the feature indices (this time along the time axis (seq length)) and then one can simply do:
>
> ```
> hidden_states = tf.where(expanded_mask, hidden_states, 0)
> ```
"fixing" :stuck_out_tongue:. I believe I understand the way to do it now but will post questions here if I get stuck. Thank you for walking through this! |
transformers | 12,288 | closed | Add out of vocabulary error to ASR models | # What does this PR do?
This PR adds a check to Wav2Vec2 and Hubert models to ensure the labels do not contain values greater than the configured vocab size.
Fixes # (issue)
https://github.com/huggingface/transformers/issues/12270
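The guard is essentially of this form (a sketch; the exact message and placement in the modeling code may differ):
```python
import torch

def check_labels_in_vocab(labels: torch.Tensor, vocab_size: int) -> None:
    """Raise instead of silently training on out-of-vocabulary label ids."""
    if labels.max() >= vocab_size:
        raise ValueError(f"Label values must be <= vocab_size: {vocab_size}")
```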
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [x] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/master/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/master/docs#writing-source-documentation).
- [x] Did you write any new necessary tests?
## Who can review?
@patrickvonplaten
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
| 06-21-2021 18:12:21 | 06-21-2021 18:12:21 | |
transformers | 12,287 | closed | Fix for the issue of device-id getting hardcoded for token_type_ids during Tracing for ConvBert | # What does this PR do?
This PR is part of a series of PRs that follows PR #11252 and applies similar changes to ConvBert.
Fixes # (issue)
Registering a buffer for token_type_ids in the constructor and then resizing it in the forward method based on input-shape.
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [X] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ X] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case. issues #5664 and #976
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/master/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/master/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests? Not required.
## Who can review?
@LysandreJik
| 06-21-2021 18:07:44 | 06-21-2021 18:07:44 | |
transformers | 12,286 | closed | Memory leak when using DistilBert for inference to extract [CLS] hidden state | ## Environment info
- `transformers` version: 4.7.0
- Platform: Linux-5.4.104+-x86_64-with-Ubuntu-18.04-bionic
- Python version: 3.7.10
- PyTorch version (GPU?): 1.9.0+cu102 (True)
- Tensorflow version (GPU?): 2.5.0 (True)
- Using GPU in script?: Yes
- Using distributed or parallel set-up in script?: No
### Who can help
@patrickvonplaten @drjosephliu
## Information
Model I am using (Bert, XLNet ...): DistilBert
The problem arises when using:
* [ ] the official example scripts: (give details below)
* [X] my own modified scripts: (give details below)
The tasks I am working on is:
* [ ] an official GLUE/SQUaD task: (give the name)
* [X] my own task or dataset: (give details below)
I am attempting to extract all of the pooled outputs for each row in my dataset and return them as an array. My dataset consists of 14000 rows and the size of a single pooled output is (1,768). Therefore, I would expect my RAM usage to be ~(14000 * 768 * 4) bytes --> 43 MBs.
However, I notice that my RAM usage seems to increase exponentially as more and more iterations are executed. This occurs when using both the CPU and the GPU. When running on CPU, the Google Colab environment shows a huge jump in RAM usage about 75% of the way through my dataset.
Here is a screenshot of the RAM usage that illustrates this problem:

## To reproduce
Steps to reproduce the behavior:
1. Encode a dataset (sufficiently large; mine has 14k samples of 512 tokens)
2. Run it through my function (provided below) to extract the pooled output of each sample
```python
import torch
import torch as th  # both aliases are used below

def getPooledOutputs(model, encoded_dataset, batch_size = 32):
    model.eval()

    pooled_outputs = []
    print("total number of iters ", len(encoded_dataset['input_ids'])//batch_size + 1)
    for i in range(len(encoded_dataset['input_ids'])//batch_size + 1):
        print(i)
        up_to = i*batch_size + batch_size
        if len(encoded_dataset['input_ids']) < up_to:
            up_to = len(encoded_dataset['input_ids'])
        input_ids = th.LongTensor(encoded_dataset['input_ids'][i*batch_size:up_to]).cuda()
        attention_mask = th.LongTensor(encoded_dataset['attention_mask'][i*batch_size:up_to]).cuda()

        with torch.no_grad():
            embeddings = model.forward(input_ids=input_ids, attention_mask=attention_mask, output_hidden_states=True)['hidden_states'][-1][:,0]  # pooled [CLS] output
        pooled_outputs.extend(embeddings)
        th.cuda.empty_cache()

    return pooled_outputs
```
This is the error message (GPU):
> RuntimeError: CUDA out of memory. Tried to allocate 384.00 MiB (GPU 0; 15.78 GiB total capacity; 13.75 GiB already allocated; 260.75 MiB free; 14.21 GiB reserved in total by PyTorch)
On CPU my runtime just crashes.
<!-- If you have code snippets, error messages, stack traces please provide them here as well.
Important! Use code tags to correctly format your code. See https://help.github.com/en/github/writing-on-github/creating-and-highlighting-code-blocks#syntax-highlighting
Do not use screenshots, as they are hard to read and (more importantly) don't allow others to copy-and-paste your code.-->
## Expected behavior
<!-- A clear and concise description of what you would expect to happen. -->
To have my function return an array of the [CLS] pooled output for every row in my dataset and to have my GPU ram usage roughly constant during the entirety of the function call. | 06-21-2021 17:10:32 | 06-21-2021 17:10:32 | Hi! You're keeping the results of your model forward call in memory when extending your `pooled_output` list so your memory is bound to take a hit as you iterate through your dataset<|||||>@LysandreJik Sorry I should have clarified that my dataset consists of 14000 rows and the size out of the output I am trying to extract for each one of them is (1,768). This thus corresponds to (14000 * 768 * 4) Bytes --> 43 megabytes. Unless there are undesired artifacts from the forward calls that are being stored?<|||||>Seems like I have fixed my problem by making `pooled_outputs` a pytorch tensor and not a list.
So that my function now looks like this:
```
def getPooledOutputs(model, encoded_dataset, batch_size = 32):
    model.eval()
    # pooled_outputs = []
    pooled_outputs = torch.empty([0,768]).cuda()
    print("total number of iters ", len(encoded_dataset['input_ids'])//batch_size + 1)
    for i in range(len(encoded_dataset['input_ids'])//batch_size + 1):
        print(i)
        up_to = i*batch_size + batch_size
        if len(encoded_dataset['input_ids']) < up_to:
            up_to = len(encoded_dataset['input_ids'])
        input_ids = th.LongTensor(encoded_dataset['input_ids'][i*batch_size:up_to]).cuda()
        attention_mask = th.LongTensor(encoded_dataset['attention_mask'][i*batch_size:up_to]).cuda()
        with torch.no_grad():
            embeddings = model.forward(input_ids=input_ids, attention_mask=attention_mask, output_hidden_states=True)['hidden_states'][-1][:,0] # Pooled output
            pooled_outputs = th.cat([pooled_outputs, embeddings], 0)
        th.cuda.empty_cache()
    return pooled_outputs
```
Still do not know why having a list of tensors is a problem but I suppose that this does not concern Huggingface |
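Editor's note on the likely cause: `hidden_states[-1][:, 0]` is a view into the full last-hidden-state tensor (batch x seq_len x 768), and `extend()` stores per-row views of that view, so each stored item can keep the much larger parent activation tensor alive on the GPU; `th.cat` instead copies just the pooled vectors into a fresh tensor. A minimal alternative sketch that also keeps memory flat (same variable names as the function above; this would replace the inner loop body) is to copy each batch off the GPU before storing it:

```python
with torch.no_grad():
    hidden = model(input_ids=input_ids, attention_mask=attention_mask,
                   output_hidden_states=True)['hidden_states'][-1]
    pooled = hidden[:, 0].cpu()  # copying to CPU breaks the view into the large tensor
pooled_outputs.append(pooled)
# after the loop: all_pooled = torch.cat(pooled_outputs, dim=0)  # (num_rows, 768) on CPU
```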
transformers | 12,285 | closed | [WIP][Flax] CLIP training example | # What does this PR do?
Fixes # (issue)
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/master/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/master/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
| 06-21-2021 14:53:09 | 06-21-2021 14:53:09 | closing this PR, messed up the commit history :(
Opened a new PR here #12491 |
transformers | 12,284 | closed | [FlaxClip] fix test from/save pretrained test | # What does this PR do?
Fixes `test_from_pretrained_save_pretrained` test for `FlaxClip` | 06-21-2021 13:13:39 | 06-21-2021 13:13:39 | |
transformers | 12,283 | closed | [TFWav2Vec2] Fix docs | # What does this PR do?
Fixes a TFWav2Vec2 docs error
## Before submitting
- [x] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/master/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/master/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
| 06-21-2021 10:08:10 | 06-21-2021 10:08:10 | Thanks for fixing the error @chenht2010 - could you run `make style` once to fix the check code quality test? The PyTorch test error seems unrelated.<|||||>`make style` should be run from the root folder |
transformers | 12,282 | closed | TFWav2Vec2ForCTC: Error when using padded batch and attention mask | ## Environment info
- `transformers` version: 4.8.0.dev0
- Platform: Linux
- Python version: 3.8
- PyTorch version (GPU?): 1.8.1
- Tensorflow version (GPU?): 2.4.1
- Using GPU in script?: Yes
- Using distributed or parallel set-up in script?: Yes
### Who can help
@patrickvonplaten
@will-rice
Models:
- Wav2Vec2
## Information
Model I am using: TFWav2Vec2ForCTC
The problem arises when using:
* Official example script of TFWav2Vec2ForCTC modified to use padded batch
## To reproduce
Steps to reproduce the behavior:
1. Install relevant libraries.
2. Run code snippet below
```python
import tensorflow as tf
from transformers import Wav2Vec2Processor, TFWav2Vec2ForCTC
from datasets import load_dataset
import soundfile as sf
processor = Wav2Vec2Processor.from_pretrained("facebook/wav2vec2-base-960h")
model = TFWav2Vec2ForCTC.from_pretrained("facebook/wav2vec2-base-960h")
def map_to_array(batch):
    speech, _ = sf.read(batch["file"])
    batch["speech"] = speech
    return batch
ds = load_dataset("patrickvonplaten/librispeech_asr_dummy", "clean", split="validation")
ds = ds.map(map_to_array)
# Pad the speech file with zeros and create corresponding attention mask
speech_len = len(ds["speech"][0])
padded_speech = ds["speech"][0] + [0.0]*1000
attention_mask = tf.sequence_mask((speech_len,), maxlen=len(padded_speech), dtype=tf.float32)
input_values = processor(padded_speech, return_tensors="tf").input_values # Batch size 1
logits = model(input_values).logits
predicted_ids = tf.argmax(logits, axis=-1)
transcription = processor.decode(predicted_ids[0])
# compute loss
target_transcription = "A MAN SAID TO THE UNIVERSE SIR I EXIST"
# wrap processor as target processor to encode labels
with processor.as_target_processor():
    labels = processor(transcription, return_tensors="tf").input_ids
loss = model(input_values, attention_mask=attention_mask, labels=labels).loss
```
## Outputs
```
---------------------------------------------------------------------------
InvalidArgumentError Traceback (most recent call last)
<ipython-input-1-263e70f23fd3> in <module>
33
34
---> 35 loss = model(input_values, attention_mask=attention_mask, labels=labels).loss
/opt/audatic/venv/lib/python3.8/site-packages/tensorflow/python/keras/engine/base_layer.py in __call__(self, *args, **kwargs)
1010 with autocast_variable.enable_auto_cast_variables(
1011 self._compute_dtype_object):
-> 1012 outputs = call_fn(inputs, *args, **kwargs)
1013
1014 if self._activity_regularizer:
~/code/transformers/src/transformers/models/wav2vec2/modeling_tf_wav2vec2.py in call(self, input_values, attention_mask, token_type_ids, position_ids, head_mask, inputs_embeds, output_attentions, labels, output_hidden_states, return_dict, training, **kwargs)
1553 )
1554
-> 1555 outputs = self.wav2vec2(
1556 input_values=inputs["input_values"],
1557 attention_mask=inputs["attention_mask"],
/opt/audatic/venv/lib/python3.8/site-packages/tensorflow/python/keras/engine/base_layer.py in __call__(self, *args, **kwargs)
1010 with autocast_variable.enable_auto_cast_variables(
1011 self._compute_dtype_object):
-> 1012 outputs = call_fn(inputs, *args, **kwargs)
1013
1014 if self._activity_regularizer:
~/code/transformers/src/transformers/models/wav2vec2/modeling_tf_wav2vec2.py in call(self, input_values, attention_mask, token_type_ids, position_ids, head_mask, inputs_embeds, output_attentions, output_hidden_states, return_dict, training, **kwargs)
1225 hidden_states = self._mask_hidden_states(hidden_states)
1226
-> 1227 encoder_outputs = self.encoder(
1228 hidden_states,
1229 attention_mask=attention_mask,
/opt/audatic/venv/lib/python3.8/site-packages/tensorflow/python/keras/engine/base_layer.py in __call__(self, *args, **kwargs)
1010 with autocast_variable.enable_auto_cast_variables(
1011 self._compute_dtype_object):
-> 1012 outputs = call_fn(inputs, *args, **kwargs)
1013
1014 if self._activity_regularizer:
~/code/transformers/src/transformers/models/wav2vec2/modeling_tf_wav2vec2.py in call(self, hidden_states, attention_mask, output_attentions, output_hidden_states, return_dict, training)
993
994 if attention_mask is not None:
--> 995 hidden_states = hidden_states * tf.expand_dims(attention_mask, -1)
996 attention_mask = _expand_mask(attention_mask)
997 else:
/opt/audatic/venv/lib/python3.8/site-packages/tensorflow/python/ops/math_ops.py in binary_op_wrapper(x, y)
1162 with ops.name_scope(None, op_name, [x, y]) as name:
1163 try:
-> 1164 return func(x, y, name=name)
1165 except (TypeError, ValueError) as e:
1166 # Even if dispatching the op failed, the RHS may be a tensor aware
/opt/audatic/venv/lib/python3.8/site-packages/tensorflow/python/ops/math_ops.py in _mul_dispatch(x, y, name)
1494 return sparse_tensor.SparseTensor(y.indices, new_vals, y.dense_shape)
1495 else:
-> 1496 return multiply(x, y, name=name)
1497
1498
/opt/audatic/venv/lib/python3.8/site-packages/tensorflow/python/util/dispatch.py in wrapper(*args, **kwargs)
199 """Call target, and fall back on dispatchers if there is a TypeError."""
200 try:
--> 201 return target(*args, **kwargs)
202 except (TypeError, ValueError):
203 # Note: convert_to_eager_tensor currently raises a ValueError, not a
/opt/audatic/venv/lib/python3.8/site-packages/tensorflow/python/ops/math_ops.py in multiply(x, y, name)
516 """
517
--> 518 return gen_math_ops.mul(x, y, name)
519
520
/opt/audatic/venv/lib/python3.8/site-packages/tensorflow/python/ops/gen_math_ops.py in mul(x, y, name)
6066 return _result
6067 except _core._NotOkStatusException as e:
-> 6068 _ops.raise_from_not_ok_status(e, name)
6069 except _core._FallbackException:
6070 pass
/opt/audatic/venv/lib/python3.8/site-packages/tensorflow/python/framework/ops.py in raise_from_not_ok_status(e, name)
6860 message = e.message + (" name: " + name if name is not None else "")
6861 # pylint: disable=protected-access
-> 6862 six.raise_from(core._status_to_exception(e.code, message), None)
6863 # pylint: enable=protected-access
6864
/opt/audatic/venv/lib/python3.8/site-packages/six.py in raise_from(value, from_value)
InvalidArgumentError: Incompatible shapes: [1,547,768] vs. [1,544,1] [Op:Mul]
```
## Expected behavior
Code should run without errors and produce a loss equivalent to not using padded batch. Without the padded batch, the loss is:
```python
print(loss)
>>> <tf.Tensor: shape=(), dtype=float32, numpy=39.32432>
```
When using the padded batch and not specifying an attention_mask, the loss is:
```python
print(loss)
>>> <tf.Tensor: shape=(), dtype=float32, numpy=39.96655>
```
## Fix:
The bugfix should be quite easy. In line 1217 of transformers.models.wav2vec2.modeling_tf_wav2vec2.py, instead of:
```python
attention_mask = tf.sequence_mask(output_lengths, dtype=hidden_states.dtype)
```
It should be:
```python
max_output_length = self._get_feat_extract_output_lengths(inputs["input_values"].shape[-1])
attention_mask = tf.sequence_mask(output_lengths, maxlen=max_output_length, dtype=hidden_states.dtype)
```
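To make the shape mismatch concrete, here is a small illustration. The frame counts are taken from the error message above: the unpadded audio maps to 544 feature frames, while the 1000 extra zero samples add roughly three more frames at the model's ~320x downsampling, giving 547-frame hidden states. Without `maxlen`, `tf.sequence_mask` sizes the mask to the longest entry rather than to the padded hidden-state length:

```python
import tensorflow as tf

unpadded_frames = tf.constant([544])  # frames for the real (unpadded) speech
padded_frames = 547                   # frames for the zero-padded input

mask = tf.sequence_mask(unpadded_frames, dtype=tf.float32)
print(mask.shape)  # (1, 544) -> does not line up with the (1, 547, 768) hidden states

mask = tf.sequence_mask(unpadded_frames, maxlen=padded_frames, dtype=tf.float32)
print(mask.shape)  # (1, 547) -> matches the hidden states
```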
| 06-21-2021 09:41:08 | 06-21-2021 09:41:08 | This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored.<|||||>Think we can close this one |
transformers | 12,281 | closed | [WIP] Cogview | # What does this PR do?
Adds the [CogView](https://github.com/THUDM/CogView) model for text-to-image generation.
Fixes # (issue)
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/master/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/master/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
| 06-21-2021 07:26:40 | 06-21-2021 07:26:40 | This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored.<|||||>Hey @patil-suraj, do you think it still makes sense to work on this and get it merged? |
transformers | 12,280 | closed | Rename detr targets to labels | # What does this PR do?
It fixes #12248. As the models accept "labels" as an argument, it's better to also use the term "labels" in the feature extractor instead of "target".
Note that if this PR gets merged, I'll need to update my demo notebooks (rename "target" to "labels").
I also improved the documentation a little more, and removed some unused variables from `DetrConfig`.
cc @LysandreJik | 06-21-2021 07:21:31 | 06-21-2021 07:21:31 | @LysandreJik if you want, I can also update the failing integration test for `DetrForSegmentation` in this PR (and perhaps rename the PR). And how can I create an alias for `DetrForSegmentation` (to be renamed to `DetrForImageSegmentation`)?<|||||>Let's merge this PR as-is and update the integration test in another PR.
For the alias you can simply do `DetrForImageSegmentation = DetrForSegmentation` |
transformers | 12,279 | closed | Better CI feedback | The "View on GitHub" link now redirects to the correct run. | 06-21-2021 06:51:56 | 06-21-2021 06:51:56 | |
transformers | 12,278 | closed | ViTFeatureExtractor.save_pretrained() generate "preprocessor_config.json" but not "config.json" | config.json is needed to use ViTForImageClassification.from_pretrained()
I made a pull-request | 06-21-2021 03:35:04 | 06-21-2021 03:35:04 | `ViTFeatureExtractor` is the feature extractor, not the model itself. The model itself requires the `config.json` file that specifies the architecture of the model, while the feature extractor requires its `preprocessor_config.json` file. These two are different files.<|||||>The problem is ViTForImageClassification.from_pretrained() not take "preprocessor_config.json" and you need to rename it as "config.json".
Thanks<|||||>`ViTForImageClassification` is not `ViTFeatureExtractor`, the first one is a model while the second one is a feature extractor. See some usage examples [here](https://huggingface.co/transformers/model_doc/vit.html#transformers.ViTForImageClassification)<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored. |
transformers | 12,277 | closed | Update feature_extraction_utils.py | config.json is needed to use custom ViTForImageClassification.from_pretrained()
but `FEATURE_EXTRACTOR_NAME = "preprocessor_config.json"`, so this PR changes it to `output_feature_extractor_file = os.path.join(save_directory, "config.json")` instead of `preprocessor_config.json`.
# What does this PR do?
Fixes # (issue)
## Before submitting
- [x] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [x] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/master/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/master/docs#writing-source-documentation).
- [x] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
| 06-21-2021 03:14:03 | 06-21-2021 03:14:03 | @patrickvonplaten could you take a look at this?<|||||>Hey @daquarti,
This looks wrong to me -> the feature extractors don't save their parameters into `config.json`, but into `preprocessor_config.json`. Could you elaborate a bit more on the PR?<|||||>Thanks for help @patrickvonplaten , maybe my solution was not good because ViTFeatureExtractor.from_pretrained() needs "preprocessor_config.json"
The problem is the following:
When I use ViTFeatureExtractor.from_pretrained(), "preprocessor_config.json" works fine.
but then, when I use ViTForImageClassification.from_pretrained(), "config.json" is needed. If I rename "preprocessor_config.json" to "config.json", ViTForImageClassification.from_pretrained() works
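For illustration, a minimal sketch of the intended split between the two files (the checkpoint name is just an example):

```python
from transformers import ViTFeatureExtractor, ViTForImageClassification

# The model reads config.json (plus the weights); the feature extractor reads
# preprocessor_config.json. Both can live in the same directory/repo.
model = ViTForImageClassification.from_pretrained("google/vit-base-patch16-224")
feature_extractor = ViTFeatureExtractor.from_pretrained("google/vit-base-patch16-224")

save_dir = "./vit-checkpoint"
model.save_pretrained(save_dir)              # writes config.json + model weights
feature_extractor.save_pretrained(save_dir)  # writes preprocessor_config.json
```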
<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored.<|||||>Note that in order to load the model `ViTForImageClassification` one needs a `config.json`. In order to load the feature extractor one needs `preprocessor_config.json`.<|||||>Those are actually two different configs for two different classes<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored. |
transformers | 12,276 | closed | [trainer + examples] set log level from CLI | As examples keep adding more and more debug dumps as info (3 new dumps in `run_translation.py`) and repetitive logger warnings keep on growing - w/o being able to control the level of noise, this PR gives the noise control back to the user.
One of the main pros of this change is that now we can actually use `logger.debug` and put the less important info there, and only activate it when reporting issues or debugging something. Much better than having only `info` or `warning` toggle to logging.
This PR:
1. Introduces `--log_level`, which accepts the normal 5 levels plus "passive", which doesn't do anything special and lets the driver application do whatever it wants. If it's not `passive`, it sets the log level to that arg's value as early as possible.
2. Changes Trainer's `log_metrics` to be a non-logger print, since this is the whole point of the training/eval and thus IMHO should always be printed as its result. I can see where someone would say: but what if I don't want even this printed? This is fair, in which case I propose to add an explicit new arg `--print_results`.
3. As a single template to work on changes `run_translation.py` to also use this new CLI arg to do the log settings in its own and all sub-modules it uses, e.g. `datasets` here. e.g. previously `datasets` verbosity was on its own.
Questions/Notes to reviewers:
1. If this is accepted I propose to deprecate `training_args.should_log`, since now it's no longer just `info` or `warn` but provides more refined control over log levels; and, if it is passed, to auto-set `--log_level=info`. I left the original logic there for now. The examples can still default to `warn` as they are now.
2. It's however possible that there should be `--train_log_level` and `--log_level` with the latter overriding the former if set, but most likely these should be in sync with everything.
3. Obviously if this is accepted once this example is polished - we will replicate the same for other examples in another PR.
4. I couldn't find how to get rid of logger warnings at import time, but it's alright, as now there are only a few left.
5. Specific to the Deepspeed integration, I need to find a way to do the same there as it's *very* noisy (in a follow-up PR most likely, if I find a way).
I am very open to other ways of implementing it, naming it, etc. I'm not really attached to the how, but when I develop/debug code I want to see only the messages that I need to focus on, and not hundreds of lines of noise that are always the same and make it difficult to see the important things.
With this PR I get:
```
export BS=16; rm -r output_dir; PYTHONPATH=src USE_TF=0 CUDA_VISIBLE_DEVICES=0 python examples/pytorch/translation/run_translation.py --model_name_or_path t5-small --output_dir output_dir --adam_eps 1e-06 --do_train --label_smoothing 0.1 --learning_rate 3e-5 --logging_first_step --logging_steps 500 --max_source_length 128 --max_target_length 128 --num_train_epochs 1 --overwrite_output_dir --per_device_train_batch_size $BS --predict_with_generate --sortish_sampler --source_lang en --target_lang ro --dataset_name wmt16 --dataset_config "ro-en" --source_prefix "translate English to Romanian: " --val_max_target_length 128 --warmup_steps 50 --max_train_samples 50 --max_eval_samples 50 \
--log_level=critical
2021-06-20 19:17:40.705704: I tensorflow/stream_executor/platform/default/dso_loader.cc:49] Successfully opened dynamic library libcudart.so.11.0
{'loss': 3.0621, 'learning_rate': 6.000000000000001e-07, 'epoch': 0.25}
{'train_runtime': 1.3178, 'train_samples_per_second': 37.943, 'train_steps_per_second': 3.035, 'train_loss': 2.9988757967948914, 'epoch': 1.0}
100%|██████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 4/4 [00:00<00:00, 7.43it/s]
***** train metrics *****
epoch = 1.0
train_loss = 2.9989
train_runtime = 0:00:01.31
train_samples = 50
train_samples_per_second = 37.943
train_steps_per_second = 3.035
```
Thank you.
@sgugger, @LysandreJik, @patrickvonplaten
| 06-21-2021 02:36:43 | 06-21-2021 02:36:43 | > but for the `should_print` (or whatever) argument, I would wait until a user actually requests it. It seems weird to me to run a training script and not even want to have the metrics outputted.
Totally agree!
wrt the rest of the logic, it looks like I didn't look close enough and `should_log` is a toggle on sort of whether this is a master or a slave node, but I don't know the sagemaker logic to tell that particular branch. So it has its function.
After experimenting with different approaches, it looks like to give users a full control they really should be able to set separately the log_level on the master and slave nodes. So I propose to further add another arg:
So training args:
```
def get_node_log_level(self):
    default_log_level_master_node = logging.INFO
    default_log_level_slave_node = logging.WARNING
    log_level_master_node = self.log_level_master_node if self.log_level_master_node != -1 else default_log_level_master_node
    log_level_slave_node = self.log_level_slave_node if self.log_level_slave_node != -1 else default_log_level_slave_node
    return log_level_master_node if self.should_log else log_level_slave_node
```
and then the application side becomes much simpler:
```
log_level = training_args.get_node_log_level()
logger.setLevel(log_level)
datasets.utils.logging.set_verbosity(log_level)
transformers.utils.logging.set_verbosity(log_level)
```
no need for any logic there.
W/o the cli arg for the slave, the user then can't really set a high log level since the slave nodes will still be set to WARNING.
(After experiments with 64nodes any extra line gets multiplied many times. I wish I could control the exceptions too.)
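For illustration only, a hedged sketch of how two such arguments could resolve to a concrete level per process. The argument names and the `-1`/"passive" convention mirror the snippets above and the naming the thread later settles on; this is not the final API:

```python
import logging

LOG_LEVELS = {
    "debug": logging.DEBUG, "info": logging.INFO, "warning": logging.WARNING,
    "error": logging.ERROR, "critical": logging.CRITICAL, "passive": -1,
}

def resolve_log_level(log_level: str, log_level_replica: str, is_main_process: bool) -> int:
    # defaults discussed in this thread: INFO on the main process, WARNING elsewhere
    default_main, default_replica = logging.INFO, logging.WARNING
    chosen = LOG_LEVELS[log_level] if is_main_process else LOG_LEVELS[log_level_replica]
    default = default_main if is_main_process else default_replica
    return chosen if chosen != -1 else default
```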
I'm just not sure whether to call the 2 new args:
1.
```
--log_level
--log_level_slave
```
or 2.
```
--log_level_master
--log_level_slave
```
or 3.
```
--log_level_master_node
--log_level_slave_node
```
I'd say the 2nd one is more explicit and probably is a better choice in the long run. 3rd - may be even more clear.
Please let me know your thoughts on this modification to the new CLI args and the naming.
thank you!<|||||>Ah yes, `should_log` is a property that determines if a process is the local main or not (and also takes into account the `log_on_each_node` argument). I would keep `--log_level` for the log level of the main process (or main processes if `log_on_each_node` is true) since most users are not using distributed training, so it makes sense keeping it like that.
For the other argument, let's not use the slave terminology as many people have voiced it makes them uncomfortable. We can use main/replica and call the second argument `--log_level_replica` for instance. I think the default `"passive"` should then be warning if no `log_level` is passed and `log_level+1` if one is passed, does that make sense?
<|||||>> Ah yes, `should_log` is a property that determines if a process is the local main or not (and also takes into account the `log_on_each_node` argument). I would keep `--log_level` for the log level of the main process (or main processes if `log_on_each_node` is true) since most users are not using distributed training, so it makes sense keeping it like that.
+1
> For the other argument, let's not use the slave terminology as many people have voiced it makes them uncomfortable.
But we are using MASTER for torch.distributed, so those who are uncomfortable with master/slave are already uncomfortable.
> We can use main/replica and call the second argument `--log_level_replica` for instance.
This sounds very foreign, but logically it makes sense.
Do you know if someone has found a new standard pair for master/slave that is getting embraced by the industry?
> I think the default `"passive"` should then be warning if no `log_level` is passed and `log_level+1` if one is passed, does that make sense?
The assumption of `log_level+1` doesn't quite work here. What if the user wants to get `log.DEBUG` for the main process and still keep the slave nodes relatively quiet. +1 would force them all to `log.INFO` which would be too much. (and it'd be +10 here ;)
Therefore I propose to just stick to (please ignore the naming at this moment):
```
default_log_level_master_node = logging.INFO
default_log_level_slave_node = logging.WARNING
```
and only override each with the corresponding:
```
--log_level_master_node
--log_level_slave_node
```
<|||||>> But we are using MASTER for torch.distributed, so those who are uncomfortable with master/slave are already uncomfortable.
I may have missed something, but there should not be any master reference in the Transformers code base, so I'm not sure what you mean. We can't control how PyTorch names its arguments if you're referring to `--master_port` and `--master_addr`.
> Do you know if someone has found a new standard pair for master/slave that is getting embraced by the industry?
`main` and `replica` seem to be used, but I don't know if there are the new standard. I don't think there is one, but I may have missed something. I'm not attached to those names if you have better ideas.
> The assumption of log_level+1 doesn't quite work here.
That makes sense, let's go for defaults to INFO and WARNING respectively.<|||||>> > But we are using MASTER for torch.distributed, so those who are uncomfortable with master/slave are already uncomfortable.
>
> I may have missed something, but there should not be any master reference in the Transformers code base, so I'm not sure what you mean. We can't control how PyTorch names its arguments if you're referring to `--master_port` and `--master_addr`.
Indeed, that what I was referring to.
> > Do you know if someone has found a new standard pair for master/slave that is getting embraced by the industry?
>
> `main` and `replica` seem to be used, but I don't know if there are the new standard. I don't think there is one, but I may have missed something. I'm not attached to those names if you have better ideas.
I asked about this on the torch slack and was told that they dealt with **blacklists**:
https://github.com/pytorch/pytorch/search?q=blacklist&type=commits
replacing those with **blocklists**. But master/slave hasn't been dealt with yet. But I also don't see any `slave` in the pytorch source code - only in 3rd party code, so perhaps there is no need to.
And the best finding was the sharing of this resource:
https://en.wikipedia.org/wiki/Master/slave_(technology)#Replacements
So in our context, given that we are dealing with master and slave being identical, I think these 3 would be the most matching:
1. Primary/Replica
2. Master/Replica
3. Source/Replica
and personally I 2nd resonates the most, same as master weights for example.
Unless I'm missing something in the whole controversy, master on its own, with the slave connotation is OK, right? Otherwise we would have to kill the concept of mastering something - that would be super 1984. <|||||>It's not controversial as long it's use in the context of mastering something, which is not really what the master process is all about, so I would use main as GitHub now does, or primary if you prefer this terminology.
Like I said though, the argument for controlling the primary process for logging should just be `log_level` IMO as distributed training is not used by all users, and `log_level`, `log_level_replicas` is clear enough.<|||||>Sounds good, Sylvain. I was just using the opportunity to get clarity around this modern development to do the right thing in the future code.<|||||>No problem at all!<|||||>ok, so as discussed - added `--log_level_replica` (let me know if you prefer `--log_level_replicas`)
Though now as I'm thinking about it - I don't think the name is unambiguous since we have replicas that are on the same node and those replicas get very different treatment.
I also added a test and extended docs.
I moved `log_on_each_node` into the log_* group in training args.
There is a bit of an overlap / possible conflict in setting `transformers` log-level in the user code and also in trainer's init. The early setting in the app is important to catch those early logs and we hope that the app used the same log-level - otherwise trainer's init will reset user's log-level for `transformers` - but I think this is expected if `--log_level` is passed. Just don't pass it.
That's why I added the extended docs so that the whole domain of logging is discussed in one place.
Comments and suggestions for further improvements are welcome. Thank you!<|||||>If you don't mind having another look with my last changes at your convenience, @sgugger - thank you.<|||||>Looking good!<|||||>Noticing one potentially undesirable effect of this change, since the example now syncs `datasets`'s verbosity level, their `info` is a way too loud, and a lot of it is too much unimportant info. They seem to use warning as info and info as debug.
Filed an issue there: https://github.com/huggingface/datasets/issues/2543 - I'd change many warnings to infos and the previous infos to debug there. |
transformers | 12,275 | closed | Transformers-CLI not saving pytorch model after conversion | @patrickvonplaten, @LysandreJik
Hi, I'm using transformers CLI as per https://docs.aitextgen.io/gpt-2-simple/
But getting this error, where transformers is not saving the pytorch model. Not sure why. I tried in both Ubuntu and Windows. Two different system, but getting same error. So not sure what is the issue
Save PyTorch model to pytorch/pytorch_model.bin
Traceback (most recent call last):
File "c:\programdata\anaconda3\lib\runpy.py", line 194, in _run_module_as_main
return _run_code(code, main_globals, None,
File "c:\programdata\anaconda3\lib\runpy.py", line 87, in _run_code
exec(code, run_globals)
File "C:\Users\XXX\AppData\Roaming\Python\Python38\Scripts\transformers-cli.exe\__main__.py", line 7, in <module>
File "C:\Users\XXX\AppData\Roaming\Python\Python38\site-packages\transformers\commands\transformers_cli.py", line 51, in main
service.run()
File "C:\Users\XXX\AppData\Roaming\Python\Python38\site-packages\transformers\commands\convert.py", line 152, in run
convert_gpt2_checkpoint_to_pytorch(self._tf_checkpoint, self._config, self._pytorch_dump_output)
File "C:\Users\XXX\AppData\Roaming\Python\Python38\site-packages\transformers\models\gpt2\convert_gpt2_original_tf_checkpoint_to_pytorch.py", line 45, in convert_gpt2_checkpoint_to_pytorch
torch.save(model.state_dict(), pytorch_weights_dump_path)
File "c:\programdata\anaconda3\lib\site-packages\torch\serialization.py", line 376, in save
with _open_file_like(f, 'wb') as opened_file:
File "c:\programdata\anaconda3\lib\site-packages\torch\serialization.py", line 230, in _open_file_like
return _open_file(name_or_buffer, mode)
File "c:\programdata\anaconda3\lib\site-packages\torch\serialization.py", line 211, in __init__
super(_open_file, self).__init__(open(name, mode))
FileNotFoundError: [Errno 2] No such file or directory: 'pytorch/pytorch_model.bin'
## Environment info
- `transformers` version: 4.7.0
- Platform: Windows
- Python version: 3.8.3
- PyTorch version (GPU?): 1.9.0+cpu
- Tensorflow version (GPU?): 2.5.0
- Using GPU in script?: No
- Using distributed or parallel set-up in script?: no
## Information
Model I am using: GPT2
The problem arises when using:
* [X] the official example scripts: (give details below)
* [ ] my own modified scripts: (give details below)
The tasks I am working on is:
* [ ] an official GLUE/SQUaD task: (give the name)
* [x] my own task or dataset: (give details below)
## To reproduce
Steps to reproduce the behavior:
1. Run the command transformers-cli convert --model_type gpt2 --tf_checkpoint checkpoint/run1 --pytorch_dump_output pytorch --config checkpoint/run1/hparams.json
## Expected behavior
The script should save pytorch_model.bin but it seems it is not saving. Hence it is unable to load it.
| 06-20-2021 23:51:46 | 06-20-2021 23:51:46 | This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored.<|||||>I did this
https://github.com/minimaxir/aitextgen/issues/141
On Mon, Nov 22, 2021 at 4:48 PM ryanhampton ***@***.***>
wrote:
> This is also not working for me - any fix?
>
> —
> You are receiving this because you authored the thread.
> Reply to this email directly, view it on GitHub
> <https://github.com/huggingface/transformers/issues/12275#issuecomment-975984018>,
> or unsubscribe
> <https://github.com/notifications/unsubscribe-auth/AFZRJKCPEX7W4YLIZHPKYETUNLCCDANCNFSM47ARIHXQ>
> .
> Triage notifications on the go with GitHub Mobile for iOS
> <https://apps.apple.com/app/apple-store/id1477376905?ct=notification-email&mt=8&pt=524675>
> or Android
> <https://play.google.com/store/apps/details?id=com.github.android&referrer=utm_campaign%3Dnotification-email%26utm_medium%3Demail%26utm_source%3Dgithub>.
>
>
|
transformers | 12,274 | open | [performance] module init w/ `from_pretrained` skip storage allocation | # 🚀 Feature request
pt-1.9.0 added `torch.nn.utils.skip_init()` which (1) skips the module init (2) doesn't allocate any memory
https://pytorch.org/tutorials/prototype/skip_param_init.html
note: `torch.nn.utils.skip_init()` itself will be in 1.9.1, but the rest of the code should be in 1.9.0 (update: as 1.9.1 isn't planned, probably `s/1.9.1/1.10/`)
We already implemented part 1 (skipping the custom init) in https://github.com/huggingface/transformers/pull/11471.
We could further speed up the start up time and reduce CPU memory usage, by not allocating any storage for module init since `load_state_dict` will already have allocated `state_dict` from the pretrained weights (and some sub-modules that don't have pre-trained weights - will have to go through normal init). See https://pytorch.org/tutorials/prototype/skip_param_init.html#implementation-details
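As a small illustration of the PyTorch feature referenced above (a sketch only; `nn.Linear` stands in for a real sub-module, and it requires a PyTorch release that ships `torch.nn.utils.skip_init`):

```python
import torch
from torch import nn

# Construct the module without running its init (no meaningful parameter values),
# then fill the weights from a (pretrained) state_dict instead.
layer = torch.nn.utils.skip_init(nn.Linear, 10, 5)
state_dict = {"weight": torch.zeros(5, 10), "bias": torch.zeros(5)}
layer.load_state_dict(state_dict)
```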
another note: currently deepspeed needs to have the module storage pre-allocated for its `zero.Init` gather/scatter, but if the initial model's weights aren't allocated, then we can probably get rid of `zero.Init` altogether https://github.com/huggingface/transformers/issues/12273 | 06-20-2021 15:52:49 | 06-20-2021 15:52:49 | |
transformers | 12,273 | open | [Deepspeed] [performance] inefficient load with `from_pretrained` w/ zero3 | # 🚀 Feature request
Currently under Deepspeed stage3 with `from_pretrained` we:
a. loop over each sub-module in zero.Init
1. init the sub-module
2. shard and scatter the shards
b. then to load pre-trained weights we loop over each sub-module:
1. gather the shards
2. `load_state_dict` for that one layer
3. shard and scatter the shards
c. any sub-module params that weren't in the pretrained state_dict
1. run the postponed `module_init` as it was done in https://github.com/huggingface/transformers/pull/11471
2. shard and scatter the shards XXX: I actually don't think `deepspeed.zero.GatheredParameters` was handled here. so these params don't get ZeRO'ed - need to fix that https://github.com/huggingface/transformers/issues/12272
Because we unnecessarily do scatter/gather/scatter, this takes much longer than just:
a. init the modules w/o allocating any storage as it has been implemented in pt-1.9.0/1.9.1 https://pytorch.org/tutorials/prototype/skip_param_init.html#implementation-details
b. for each sub-module with pretrained weights
1. load_state_dict
2. shard and scatter the shards
c. any sub-module params that weren't in the pretrained state_dict
1. materialize and module_init
2. shard and scatter the shards
Solving this will most likely require support from Deepspeed, https://github.com/microsoft/DeepSpeed/issues/1142 or perhaps we can just try to remove `zero.Init` if the weights aren't materialized during model creation. So the very first sharding will get postponed to the `load_state_dict` stage (and `module_init` for the sub-modules that don't have pre-trained weights). | 06-20-2021 15:49:30 | 06-20-2021 15:49:30 | |
transformers | 12,272 | open | [Deepspeed zero3] lazy weights init | I'm pretty sure we need to follow up to the lazy weights init feature https://github.com/huggingface/transformers/pull/11471
and add under zero3 `deepspeed.zero.GatheredParameters` here (or inside `_init_weights`):
https://github.com/huggingface/transformers/pull/11471/files#diff-6b72b98c4c2dcfc6cc606843917733f5d858374fbc22a735ff483bbc0c1e63eaR1275-R1276
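A hedged sketch of what such a guard could look like (DeepSpeed API names as documented; `model`/`submodule` stand in for the objects in the transformers init loop, and the exact integration point inside/around `_init_weights` is the open question here):

```python
import torch.distributed as dist
import deepspeed

def zero3_safe_init(model, submodule):
    """Sketch: run weight init on a sub-module whose params are ZeRO-3 sharded."""
    params = list(submodule.parameters(recurse=False))
    # gather the shards, init on rank 0, re-shard/broadcast on exit
    with deepspeed.zero.GatheredParameters(params, modifier_rank=0):
        if dist.get_rank() == 0:
            model._init_weights(submodule)
```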
plus need a test. | 06-20-2021 15:45:38 | 06-20-2021 15:45:38 | |
transformers | 12,271 | closed | [Flax] Add wav2vec2 | # What does this PR do?
<!--
Congratulations! You've made it this far! You're not quite done yet though.
Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution.
Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change.
Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost.
-->
<!-- Remove if not applicable -->
Adds Wav2Vec2 in Flax
- [x] FlaxCTCWav2Vec2
- [x] FlaxForPreTraining
- [x] FlaxWav2Vec2 random mask code
- [x] Clean-up
- [x] Write pretraining script | 06-20-2021 11:13:41 | 06-20-2021 11:13:41 | Anything I can help with ?
Looking forward to use Flax Wav2vec for the community event :)<|||||>Hey @ThomAub - that's very nice of you! I will need to spend a couple more hours on this and then add pretraining as well. One thing that would be very helpful would be to check how to do the GumbelSoftmax in Flax. *I.e.* how can we translate this: https://github.com/huggingface/transformers/blob/f2c4ce7e339f4a2f8aaacb392496bc1a5743881f/src/transformers/models/wav2vec2/modeling_wav2vec2.py#L751 PyTorch module to Flax? This might be a bit difficult and require some googling to see if others have already implement gumbel softmax in jax/Flax or not. If you could take a look at this, it would be very useful! <|||||>I guess I wasn't fast enough ! Great work <|||||>@patil-suraj @sgugger - actually this is still WIP. Sorry for tagging you too early |
transformers | 12,270 | closed | Add error message to Wav2Vec2 & Hubert if labels > vocab_size | # 🚀 Feature request
Add better error message to `HubertForCTC`, `Wav2Vec2ForCTC` if labels are bigger than vocab size.
## Motivation
Following this issue: https://github.com/huggingface/transformers/issues/12264 it is clear that an error message should be thrown if any of the labels are > `self.config.vocab_size`, or else silent errors can sneak into the training script.
So we should modify: `Wav2Vec2ForCTC`, `TFWav2Vec2ForCTC`, and `HubertForCTC` to add a nice error message in this case.
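A minimal sketch of the kind of guard being requested (the placement and wording are illustrative excerpts, not the final implementation; `self` and `labels` refer to the model and its forward arguments):

```python
# inside the forward pass, right before computing the CTC loss
if labels is not None and labels.max() >= self.config.vocab_size:
    raise ValueError(
        f"Label values must be smaller than the vocabulary size ({self.config.vocab_size}), "
        f"but got a label of {labels.max()}."
    )
```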
## Your contribution
This is a first good issue and should be rather easy to accomplish. I'm happy to give more guidance if needed.
| 06-20-2021 10:49:02 | 06-20-2021 10:49:02 | I will create a PR to fix this.<|||||>@vasudevgupta7 @patrickvonplaten Is this fixed? If not, I will like to work on it. <|||||>Hey, this issue has been fixed in this PR: https://github.com/huggingface/transformers/pull/12288<|||||>Thanks for informing. I had seen it, but since the issue is still open, I thought something might be left. <|||||>Closing as fixed :) |
transformers | 12,269 | open | Add TFSpeech2Text | # 🚀 Feature request
Add TensorFlow implementation of Speech2Text model.
## Your contribution
I'll try to do this.
**Reviewers:** @patil-suraj | 06-20-2021 10:48:58 | 06-20-2021 10:48:58 | Is this still in progress? I didn't see a WIP PR, but I'd like to use the TensorFlow version if possible.<|||||>Hello @will-rice, I have this in progress. I'll try to finish some missing components tomorrow and then open the PR for a review :)<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored. |
transformers | 12,268 | closed | [Documentation] Example for LEDForConditionalGeneration does not work | The [documentation for LEDForConditionalGeneration](https://huggingface.co/transformers/model_doc/led.html#transformers.LEDForConditionalGeneration) appears to be incorrect. The same example is also used for [BartForConditionalGeneration](https://huggingface.co/transformers/model_doc/bart.html#bartforconditionalgeneration), where it works as intended. I believe that the example was just copied and not adapted, but perhaps I'm missing something?
```python
from transformers import LEDTokenizer, LEDForConditionalGeneration
tokenizer = LEDTokenizer.from_pretrained('allenai/led-base-16384')
TXT = "My friends are <mask> but they eat too many carbs."
model = LEDForConditionalGeneration.from_pretrained('allenai/led-base-16384')
input_ids = tokenizer([TXT], return_tensors='pt')['input_ids']
logits = model(input_ids).logits
```
Here, the last step fails with `ValueError: You have to specify either decoder_input_ids or decoder_inputs_embeds`, which as far as I can tell is not a bug, but expected, as no `decoder_input_ids`/`embeds` (or `labels`) are provided. (BART [silently generates the `decoder_input_ids` from the `input_ids`](https://github.com/huggingface/transformers/blob/master/src/transformers/models/bart/modeling_bart.py#L1151), which LED does not.)
I believe the example should look like this:
```python
input_ids = tokenizer([TXT], return_tensors='pt')['input_ids']
prediction = model.generate(input_ids)[0]
print(tokenizer.decode(prediction, skip_special_tokens=True))
# My friends are good at eating healthy but they eat too many carbs.
```
This is also a nice demonstration that LED generates more than just one token for the masked parts of the sequence.
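For completeness, if one specifically wants logits from a single forward pass (as in the original snippet), passing `labels` (or explicit `decoder_input_ids`) should avoid the `ValueError`. A hedged variant, reusing the tensors defined above:

```python
outputs = model(input_ids, labels=input_ids)  # labels are shifted internally to build decoder inputs
logits = outputs.logits
```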
Tagging @patrickvonplaten who contributed the model and the example. | 06-20-2021 09:43:02 | 06-20-2021 09:43:02 | |
transformers | 12,267 | closed | [WIP] Enable GPT2Model to handle 3d attention_mask | This PR solves the problem discussed at #12261.
@patrickvonplaten, @LysandreJik | 06-20-2021 07:58:21 | 06-20-2021 07:58:21 | Hello, thank you for offering a fix! It seems this proposal would be breaking for the GPT-2 double heads model; the following test fails: `test_gpt2_double_lm_head_model` with the following error:
```
_________________ GPT2ModelTest.test_gpt2_double_lm_head_model _________________
[gw1] linux -- Python 3.7.10 /usr/local/bin/python
self = <tests.test_modeling_gpt2.GPT2ModelTest testMethod=test_gpt2_double_lm_head_model>
def test_gpt2_double_lm_head_model(self):
config_and_inputs = self.model_tester.prepare_config_and_inputs()
> self.model_tester.create_and_check_double_lm_head_model(*config_and_inputs)
tests/test_modeling_gpt2.py:457:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
tests/test_modeling_gpt2.py:350: in create_and_check_double_lm_head_model
result = model(**inputs)
../.local/lib/python3.7/site-packages/torch/nn/modules/module.py:1051: in _call_impl
return forward_call(*input, **kwargs)
src/transformers/models/gpt2/modeling_gpt2.py:1160: in forward
return_dict=return_dict,
../.local/lib/python3.7/site-packages/torch/nn/modules/module.py:1051: in _call_impl
return forward_call(*input, **kwargs)
src/transformers/models/gpt2/modeling_gpt2.py:802: in forward
output_attentions=output_attentions,
../.local/lib/python3.7/site-packages/torch/nn/modules/module.py:1051: in _call_impl
return forward_call(*input, **kwargs)
src/transformers/models/gpt2/modeling_gpt2.py:323: in forward
output_attentions=output_attentions,
../.local/lib/python3.7/site-packages/torch/nn/modules/module.py:1051: in _call_impl
return forward_call(*input, **kwargs)
src/transformers/models/gpt2/modeling_gpt2.py:258: in forward
attn_output, attn_weights = self._attn(query, key, value, attention_mask, head_mask)
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
self = GPT2Attention(
(c_attn): Conv1D()
(c_proj): Conv1D()
(attn_dropout): Dropout(p=0.1, inplace=False)
(resid_dropout): Dropout(p=0.1, inplace=False)
)
query = tensor([[[[ 0.0202, -0.0295, -0.0911, ..., -0.1715, 0.0008, 0.0310],
[-0.0724, -0.0315, -0.0237, ..., 0... -0.0802],
[-0.1085, 0.0763, -0.0241, ..., -0.1154, 0.1063, -0.1542]]]],
grad_fn=<PermuteBackward>)
key = tensor([[[[ 2.2768e-02, -1.5131e-02, -3.1551e-02, ..., 1.2214e-01,
3.4581e-03, 1.2902e-01],
[...6e-01, 1.0495e-01, -8.1176e-02, ..., 1.5278e-01,
-1.6426e-01, 4.7595e-02]]]], grad_fn=<PermuteBackward>)
value = tensor([[[[ 0.0396, -0.0556, -0.0115, ..., 0.1020, 0.0598, 0.1249],
[ 0.1337, -0.0851, 0.0792, ..., 0... 0.0443],
[-0.1049, -0.0717, 0.1128, ..., 0.2006, 0.0411, -0.0256]]]],
grad_fn=<PermuteBackward>)
attention_mask = tensor([[[[-10000., -0., -10000., -10000., -0., -0., -0.],
[-10000., -0., -10000., -1000...00., -0., -0., -10000., -0.],
[ -0., -0., -10000., -0., -0., -10000., -0.]]]])
head_mask = None
def _attn(self, query, key, value, attention_mask=None, head_mask=None):
attn_weights = torch.matmul(query, key.transpose(-1, -2))
if self.scale_attn_weights:
attn_weights = attn_weights / (float(value.size(-1)) ** 0.5)
if not self.is_cross_attention:
# if only "normal" attention layer implements causal mask
query_length, key_length = query.size(-2), key.size(-2)
causal_mask = self.bias[:, :, key_length - query_length : key_length, :key_length].bool()
attn_weights = torch.where(causal_mask, attn_weights, self.masked_bias.to(attn_weights.dtype))
if attention_mask is not None:
# Apply the attention mask
> attn_weights = attn_weights + attention_mask
E RuntimeError: The size of tensor a (7) must match the size of tensor b (4) at non-singleton dimension 2
src/transformers/models/gpt2/modeling_gpt2.py:191: RuntimeError
```
Could you give it a look?<|||||>@LysandreJik Thank you for the comment! I inspected code carefully and found that it was because `input_ids` from `create_and_check_double_lm_head_model` has extra dimension for `num_choices` and is merged to `batch` dimension before pushed to model. I fixed this error by adding the condition that `batch_size` of `input_ids` is equal to `batch_size` of `attention_mask` or attention_mask's dimension of `num_choices` would be merged to `batch` dimension as `input_ids`. Also, I added the same code for openai gpt. Please check again!<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored. |
transformers | 12,266 | closed | Causal Mask in BertGeneration | I cannot find the implementation of CausalMask in BertGeneration. Can you help me locate it? | 06-20-2021 06:44:39 | 06-20-2021 06:44:39 | This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored.<|||||>It's here:
https://github.com/huggingface/transformers/blob/cabcc75171650f9131a4cf31c62e1f102589014e/src/transformers/modeling_utils.py#L243
:-)
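For intuition, here is a rough sketch of the causal pattern that helper combines with the padding mask (illustrative only, not the exact library code):

```python
import torch

seq_len = 5
attention_mask = torch.ones(1, seq_len)                      # padding mask from the tokenizer
causal = torch.tril(torch.ones(seq_len, seq_len))            # lower-triangular causal pattern
extended = causal[None, :, :] * attention_mask[:, None, :]   # (batch, seq_len, seq_len)
print(extended[0])
```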
Note that `is_decoder` has to be set to True <|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored. |
transformers | 12,265 | closed | Mbart continue training with same training task on a specific language | Hello.
I am not sure if this is possible using the transformers library, but if it isn't, it would be nice to have.
Mbart was initially trained using span corruption and other training tasks on a corpus containing many languages. Since I am going to use it specifically for the Arabic language, I wish to fine-tune it solely on Arabic text with the same training tasks it had, and only then, as a third step, fine-tune it for a specific text generation task.
I believe this is possible using fairseq, but having it here in the transformers library would be better. Is that possible / useful in your opinion? | 06-20-2021 01:52:28 | 06-20-2021 01:52:28 | Hello, thanks for opening an issue! We try to keep the github issues for bugs/feature requests.
Could you ask your question on the [forum](https://discuss.huggingface.co) instead?
Thanks!<|||||>@LysandreJik Sorry, I will close the issue now.
However this might qualify as a useful feature request, don't you think? |
transformers | 12,264 | closed | TFWav2Vec2ForCTC & Wav2Vec2ForCTC gives different loss values | ## Environment info
- `transformers` version: 4.8.0.dev0
- Platform: macOS-10.16-x86_64-i386-64bit
- Python version: 3.8.10
- PyTorch version (GPU?): 1.8.1 (False)
- Tensorflow version (GPU?): 2.5.0 (False)
- Using GPU in script?: No
- Using distributed or parallel set-up in script?: No
### Who can help
@will-rice @patrickvonplaten
## Information
Model I am using: `TFWav2Vec2ForCTC` & `Wav2Vec2ForCTC`
## To reproduce
Steps to reproduce the behavior:
```python
import tensorflow as tf
import torch
from transformers import Wav2Vec2ForCTC, TFWav2Vec2ForCTC
model = Wav2Vec2ForCTC.from_pretrained("facebook/wav2vec2-base-960h")
tf_model = TFWav2Vec2ForCTC.from_pretrained("facebook/wav2vec2-base-960h")
tf_labels = tf.constant([[3, 54, 65, 76, 21], [32, 42, 434, 76, 231]])
labels = torch.from_numpy(tf_labels.numpy())
tf_speech = tf.random.uniform(shape=(2, 40000))
speech = torch.from_numpy(tf_speech.numpy()).float()
with torch.no_grad():
out = model(speech, labels=labels)
tf_out = tf_model(tf_speech, labels=tf_labels)
print(out["loss"], tf_out["loss"])
# -> 71.64 -> 16.92
```
## Expected behavior
Loss values from tensorflow & PyTorch model should be similar (Note: logits are perfectly same as expected). | 06-19-2021 20:01:39 | 06-19-2021 20:01:39 | try it with labels that are less than the vocab size. I was able to change it to this and they are pretty close.
```
import tensorflow as tf
import torch
from transformers import Wav2Vec2ForCTC, TFWav2Vec2ForCTC
model = Wav2Vec2ForCTC.from_pretrained("facebook/wav2vec2-base-960h")
tf_model = TFWav2Vec2ForCTC.from_pretrained("facebook/wav2vec2-base-960h")
tf_labels = tf.constant([[12, 10, 1, 2, 3], [4, 5, 6, 7, 8]])
labels = torch.from_numpy(tf_labels.numpy())
tf_speech = tf.random.uniform(shape=(2, 40000))
masks = tf.ones_like(tf_speech)
speech = torch.from_numpy(tf_speech.numpy()).float()
with torch.no_grad():
out = model(speech, labels=labels)
tf_out = tf_model(tf_speech, labels=tf_labels)
print(out["loss"].numpy(), tf_out["loss"].numpy())
```
88.34665 88.34598<|||||>Thanks for the quick reply. It will help. <|||||>It should actually throw an error if labels are > vocab_size! Will open an issue for this<|||||>@will-rice @patrickvonplaten
PyTorch & TensorFlow losses are becoming different if padding indices are set to -100. Checkout this small [Colab notebook](https://colab.research.google.com/drive/190NDNtAKg4y2a-jjMby-XNZ2EYScOm6m?usp=sharing).
This is happening because these [2 lines](https://github.com/huggingface/transformers/blob/2e5dbdf2db4599a6694d0974575a70f9bc3c978e/src/transformers/models/wav2vec2/modeling_tf_wav2vec2.py#L1584) [1584 & 1585] should not be there in TensorFlow implementation. If we just remove them, PyTorch & TensorFlow loss will become same.
So:
```python
# we should remove these lines
flattened_labels = tf.boolean_mask(labels, labels_mask)
flattened_labels = tf.reshape(flattened_labels, [labels.shape[0], -1])
# rather replace it with
flattened_labels = labels
```
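As a quick sanity check, a sketch based on the repro above: with the two lines replaced as suggested, both frameworks are expected to report the same loss even when the labels are padded with `-100`.

```python
import numpy as np
import tensorflow as tf
import torch
from transformers import Wav2Vec2ForCTC, TFWav2Vec2ForCTC

pt_model = Wav2Vec2ForCTC.from_pretrained("facebook/wav2vec2-base-960h")
tf_model = TFWav2Vec2ForCTC.from_pretrained("facebook/wav2vec2-base-960h")

labels = np.array([[12, 10, 1, 2, 3], [4, 5, 6, -100, -100]])   # -100 marks padded label positions
speech = np.random.uniform(size=(2, 40000)).astype("float32")

with torch.no_grad():
    pt_loss = pt_model(torch.from_numpy(speech), labels=torch.from_numpy(labels)).loss
tf_loss = tf_model(tf.constant(speech), labels=tf.constant(labels))["loss"]
print(pt_loss.numpy(), tf_loss.numpy())   # expected to match once the fix above is applied
```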
<|||||>There is one other bug also in TensorFlow implementation. `training` argument should be passed in this [line](https://github.com/huggingface/transformers/blob/2e5dbdf2db4599a6694d0974575a70f9bc3c978e/src/transformers/models/wav2vec2/modeling_tf_wav2vec2.py#L1225) because right now spec augmentation is not getting applied even when `config.apply_spec_augment = True`.
Now, if we pass the training arg in the above linked line, spec augmentation doesn't work & rather throws an error. This needs to be fixed as well, I think.<|||||>Good catches! I'm working on a fix for these. Were you still wanting to open a PR for the label error or would you like me to just roll that one into these?<|||||>I am fine if you are going to fix the label error messages.<|||||>Closing this issue as it is fixed in #12289
transformers | 12,263 | closed | Add VisualBERT demo notebook | In continuation with #10534, this PR adds demo for VisualBERT model.
I am planning to base it on the `LXMERT` examples, hence the copy-paste of files for now. | 06-19-2021 19:24:22 | 06-19-2021 19:24:22 | I have updated the demo. Turns out I didn't need to change a lot from the LXMERT demo. I have used the same files, just replaced the tokenizer, the model and the labels that are being used. Only `demo.ipynb` is different.
Requesting @LysandreJik @patil-suraj to review.<|||||>Thanks for approving and merging @patil-suraj @LysandreJik ^_^ |
transformers | 12,262 | open | [WIP] SMITH | # What does this PR do?
This PR adds SMITH encoder by Google Research and potentially closes #9526.
Potential reviewers:
@LysandreJik @patil-suraj | 06-19-2021 19:12:17 | 06-19-2021 19:12:17 | Hi @gchhablani
What is the state if this PR ?
Have you tried loading [the official SMITH checkpoint](https://github.com/google-research/google-research/tree/master/smith#pre-trained-model-checkpoint) ?
Amine<|||||>Hi @amineabdaoui
I had tried it a while back and it had worked. My focus switched to other things. I'll get back onto this PR this week. |
transformers | 12,261 | closed | GPT2Model cannot handle 3D attention_mask | When pretraining a GPT-2 model, it sometimes needs to receive a 3D attention_mask. Let's take an example with the "gpt2" model. The model would be trained on an instance consisting of two different documents: "I am a boy. <|endoftext|> you are rich.".
The tokenizer converts the sentence into input_ids: [40, 716, 257, 2933, 13, 220, 50256, 345, 389, 5527, 13] and attention_mask: [1,1,1,1,1,1,1,1,1,1,1].
Since "I am a boy." and "you are rich." come from different documents, when predicting "you are rich." we do not want GPT-2 to attend to "I am a boy.", which leads to the need for a 3D attention_mask. Other models like BERT can handle this with the function "get_extended_attention_mask" from modeling_utils.
However, the GPT-2 model currently only considers a 2D attention_mask:
https://github.com/huggingface/transformers/blob/2e5dbdf2db4599a6694d0974575a70f9bc3c978e/src/transformers/models/gpt2/modeling_gpt2.py#L697
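For illustration, one way such a per-sample 3D mask could be built from packed documents (a sketch; how the `<|endoftext|>` boundary token itself is grouped is a modeling choice, and GPT2Model would still need to accept a 3D mask for this to be usable):

```python
import torch

eot_id = 50256
input_ids = torch.tensor([[40, 716, 257, 2933, 13, 220, 50256, 345, 389, 5527, 13]])
doc_id = (input_ids == eot_id).cumsum(dim=-1)                      # document index per position
same_doc = doc_id.unsqueeze(-1) == doc_id.unsqueeze(-2)            # block-diagonal pattern
causal = torch.tril(torch.ones(input_ids.size(1), input_ids.size(1), dtype=torch.bool))
mask_3d = same_doc & causal                                        # shape (batch, seq_len, seq_len)
```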
| 06-19-2021 07:57:40 | 06-19-2021 07:57:40 | This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored. |
transformers | 12,260 | closed | 353 duplicate tokens in GPT-2? | ## Environment info
- `transformers` version: 4.6.1
- Platform: Darwin-20.4.0-x86_64-i386-64bit
- Python version: 3.7.5
- PyTorch version (GPU?): 1.7.1 (False)
- Tensorflow version (GPU?): 2.4.1 (False)
- Using GPU in script?: no
- Using distributed or parallel set-up in script?: no
### Who can help
@LysandreJik
@patrickvonplaten
## Information
Model I am using (Bert, XLNet ...): GPT-2
The problem arises when using:
* [ ] the official example scripts: (give details below)
* [x] my own modified scripts: (give details below)
The tasks I am working on is:
* [ ] an official GLUE/SQUaD task: (give the name)
* [x] my own task or dataset: (give details below)
## To reproduce
I have noticed that there are quite a few duplicate tokens in the tokenizer. Out of the vocab size of 50257 there are 353 duplicate tokens by my crude calculation. Am I doing anything wrong here?
```python
VOCAB_SIZE = 50257
tokenizer = GPT2Tokenizer.from_pretrained('gpt2')
all_tokens = set()
duplicates = []
for i in range(VOCAB_SIZE):
x = tokenizer.decode(i).encode('utf8')
if x not in all_tokens:
all_tokens.add(x)
else:
print(f'{i}\t\t {x}')
duplicates.append(x)
```
When I print `len(duplicates)` I get 353. The output from this loop is
```
95 b'\xef\xbf\xbd'
96 b'\xef\xbf\xbd'
97 b'\xef\xbf\xbd'
98 b'\xef\xbf\xbd'
99 b'\xef\xbf\xbd'
100 b'\xef\xbf\xbd'
101 b'\xef\xbf\xbd'
102 b'\xef\xbf\xbd'
103 b'\xef\xbf\xbd'
104 b'\xef\xbf\xbd'
105 b'\xef\xbf\xbd'
106 b'\xef\xbf\xbd'
107 b'\xef\xbf\xbd'
108 b'\xef\xbf\xbd'
109 b'\xef\xbf\xbd'
110 b'\xef\xbf\xbd'
111 b'\xef\xbf\xbd'
112 b'\xef\xbf\xbd'
113 b'\xef\xbf\xbd'
114 b'\xef\xbf\xbd'
115 b'\xef\xbf\xbd'
116 b'\xef\xbf\xbd'
117 b'\xef\xbf\xbd'
118 b'\xef\xbf\xbd'
119 b'\xef\xbf\xbd'
120 b'\xef\xbf\xbd'
121 b'\xef\xbf\xbd'
122 b'\xef\xbf\xbd'
123 b'\xef\xbf\xbd'
124 b'\xef\xbf\xbd'
125 b'\xef\xbf\xbd'
126 b'\xef\xbf\xbd'
127 b'\xef\xbf\xbd'
128 b'\xef\xbf\xbd'
129 b'\xef\xbf\xbd'
130 b'\xef\xbf\xbd'
131 b'\xef\xbf\xbd'
132 b'\xef\xbf\xbd'
133 b'\xef\xbf\xbd'
134 b'\xef\xbf\xbd'
135 b'\xef\xbf\xbd'
136 b'\xef\xbf\xbd'
137 b'\xef\xbf\xbd'
138 b'\xef\xbf\xbd'
139 b'\xef\xbf\xbd'
140 b'\xef\xbf\xbd'
141 b'\xef\xbf\xbd'
142 b'\xef\xbf\xbd'
143 b'\xef\xbf\xbd'
144 b'\xef\xbf\xbd'
145 b'\xef\xbf\xbd'
146 b'\xef\xbf\xbd'
147 b'\xef\xbf\xbd'
148 b'\xef\xbf\xbd'
149 b'\xef\xbf\xbd'
150 b'\xef\xbf\xbd'
151 b'\xef\xbf\xbd'
152 b'\xef\xbf\xbd'
153 b'\xef\xbf\xbd'
154 b'\xef\xbf\xbd'
155 b'\xef\xbf\xbd'
156 b'\xef\xbf\xbd'
157 b'\xef\xbf\xbd'
158 b'\xef\xbf\xbd'
159 b'\xef\xbf\xbd'
160 b'\xef\xbf\xbd'
161 b'\xef\xbf\xbd'
162 b'\xef\xbf\xbd'
163 b'\xef\xbf\xbd'
164 b'\xef\xbf\xbd'
165 b'\xef\xbf\xbd'
166 b'\xef\xbf\xbd'
167 b'\xef\xbf\xbd'
168 b'\xef\xbf\xbd'
169 b'\xef\xbf\xbd'
170 b'\xef\xbf\xbd'
171 b'\xef\xbf\xbd'
172 b'\xef\xbf\xbd'
173 b'\xef\xbf\xbd'
174 b'\xef\xbf\xbd'
175 b'\xef\xbf\xbd'
176 b'\xef\xbf\xbd'
177 b'\xef\xbf\xbd'
178 b'\xef\xbf\xbd'
179 b'\xef\xbf\xbd'
180 b'\xef\xbf\xbd'
181 b'\xef\xbf\xbd'
182 b'\xef\xbf\xbd'
183 b'\xef\xbf\xbd'
184 b'\xef\xbf\xbd'
185 b'\xef\xbf\xbd'
186 b'\xef\xbf\xbd'
187 b'\xef\xbf\xbd'
222 b'\xef\xbf\xbd'
223 b'\xef\xbf\xbd'
224 b'\xef\xbf\xbd'
225 b'\xef\xbf\xbd'
226 b'\xef\xbf\xbd'
227 b'\xef\xbf\xbd'
228 b'\xef\xbf\xbd'
229 b'\xef\xbf\xbd'
230 b'\xef\xbf\xbd'
231 b'\xef\xbf\xbd'
232 b'\xef\xbf\xbd'
233 b'\xef\xbf\xbd'
234 b'\xef\xbf\xbd'
235 b'\xef\xbf\xbd'
236 b'\xef\xbf\xbd'
237 b'\xef\xbf\xbd'
238 b'\xef\xbf\xbd'
239 b'\xef\xbf\xbd'
240 b'\xef\xbf\xbd'
241 b'\xef\xbf\xbd'
242 b'\xef\xbf\xbd'
243 b'\xef\xbf\xbd'
244 b'\xef\xbf\xbd'
245 b'\xef\xbf\xbd'
246 b'\xef\xbf\xbd'
247 b'\xef\xbf\xbd'
248 b'\xef\xbf\xbd'
249 b'\xef\xbf\xbd'
250 b'\xef\xbf\xbd'
251 b'\xef\xbf\xbd'
252 b'\xef\xbf\xbd'
253 b'\xef\xbf\xbd'
254 b'\xef\xbf\xbd'
255 b'\xef\xbf\xbd'
447 b'\xef\xbf\xbd'
764 b'.'
837 b','
1209 b'\xef\xbf\xbd'
1587 b' \xef\xbf\xbd'
1792 b'\xef\xbf\xbd'
2343 b' \xef\xbf\xbd'
2515 b'\xef\xbf\xbd'
2644 b'...'
4210 b'\xef\xbf\xbd'
5008 b'\xef\xbf\xbd'
5099 b'\xef\xbf\xbd'
5145 b'!'
5525 b' \xef\xbf\xbd'
5633 b'?'
6184 b' \xef\xbf\xbd'
6353 b'\xef\xbf\xbd\xef\xbf\xbd'
6408 b'\xef\xbf\xbd\xef\xbf\xbd'
6552 b'\xef\xbf\xbd'
7134 b'\xef\xbf\xbd\xef\xbf\xbd'
7377 b' \xef\xbf\xbd'
8008 b'\xef\xbf\xbd\xef\xbf\xbd'
8582 b'\xef\xbf\xbd'
8955 b'\xef\xbf\xbd\xef\xbf\xbd'
10253 b'\xef\xbf\xbd\xef\xbf\xbd'
10263 b' \xef\xbf\xbd'
10310 b'\xef\xbf\xbd'
10545 b' \xef\xbf\xbd'
11019 b' \xef\xbf\xbd'
11485 b'..'
11737 b'\xef\xbf\xbd'
11805 b'\xef\xbf\xbd\xef\xbf\xbd'
11976 b'\xef\xbf\xbd'
12466 b' \xef\xbf\xbd'
12520 b' \xef\xbf\xbd'
12859 b'\xef\xbf\xbd'
13305 b' \xef\xbf\xbd'
13328 b' \xef\xbf\xbd'
13783 b'\xef\xbf\xbd'
13945 b'\xef\xbf\xbd\xef\xbf\xbd'
14360 b' \xef\xbf\xbd'
14519 b' \xef\xbf\xbd'
14524 b' \xef\xbf\xbd'
15139 b' \xef\xbf\xbd'
15926 b'\xef\xbf\xbd'
16268 b' \xef\xbf\xbd'
17312 b'\xef\xbf\xbd'
17358 b'\xef\xbf\xbd'
17433 b' \xef\xbf\xbd'
17550 b' \xef\xbf\xbd'
17683 b'\xe3\x81\xae\xef\xbf\xbd'
17739 b'\xef\xbf\xbd'
17804 b' \xef\xbf\xbd'
17992 b'\xef\xbf\xbd'
18004 b'\xef\xbf\xbd'
18074 b' \xef\xbf\xbd'
18433 b'\xef\xbf\xbd\xef\xbf\xbd'
18796 b'\xef\xbf\xbd'
18872 b' \xef\xbf\xbd'
18923 b' \xef\xbf\xbd'
19021 b'\xef\xbf\xbd'
19153 b'??'
19424 b'....'
19469 b'\xef\xbf\xbd'
19526 b'\xef\xbf\xbd'
19567 b'\xef\xbf\xbd'
20004 b'........'
20015 b'\xef\xbf\xbd'
20046 b'\xef\xbf\xbd'
20543 b' \xef\xbf\xbd'
20724 b' \xef\xbf\xbd'
20998 b'\xef\xbf\xbd'
21253 b'\xef\xbf\xbd\xef\xbf\xbd'
22135 b'."'
22522 b'\xef\xbf\xbd'
22755 b'\xef\xbf\xbd'
22880 b'\xef\xbf\xbd'
22887 b'\xef\xbf\xbd'
23294 b' \xef\xbf\xbd'
23329 b'\xef\xbf\xbd\xef\xbf\xbd'
23596 b'\xef\xbf\xbd\xef\xbf\xbd'
23626 b'\xef\xbf\xbd'
23821 b' \xef\xbf\xbd'
23877 b'\xef\xbf\xbd'
24231 b'\xef\xbf\xbd'
24457 b'./'
24583 b'\xef\xbf\xbd'
24861 b'\xef\xbf\xbd'
24966 b' \xef\xbf\xbd'
25001 b'\xef\xbf\xbd'
25081 b'\xef\xbf\xbd\xef\xbf\xbd'
25370 b' \xef\xbf\xbd'
26193 b'\xef\xbf\xbd'
26292 b'\xef\xbf\xbd'
26344 b'\xef\xbf\xbd'
26486 b'\xef\xbf\xbd'
26534 b'\xef\xbf\xbd\xef\xbf\xbd'
27032 b'\xe3\x81\xae\xef\xbf\xbd'
27332 b' \xef\xbf\xbd'
27670 b'\xef\xbf\xbd'
27764 b'\xef\xbf\xbd'
27950 b'\xef\xbf\xbd'
28053 b' \xef\xbf\xbd'
28156 b'\xef\xbf\xbd'
28225 b' \xef\xbf\xbd'
28839 b'\xef\xbf\xbd'
28938 b'\xef\xbf\xbd'
29705 b'\xef\xbf\xbd'
29773 b'\xef\xbf\xbd'
29785 b'\xef\xbf\xbd'
29826 b'\xef\xbf\xbd'
30266 b'\xef\xbf\xbd'
30298 b'\xef\xbf\xbd'
30325 b' \xef\xbf\xbd'
30585 b'\xef\xbf\xbd'
31204 b'\xef\xbf\xbd\xef\xbf\xbd'
31479 b'\xef\xbf\xbd'
31619 b' \xef\xbf\xbd'
31965 b'\xef\xbf\xbd'
32003 b'\xef\xbf\xbd'
32368 b'\xef\xbf\xbd'
32391 b'\xef\xbf\xbd'
32432 b'\xef\xbf\xbd'
32518 b'\xef\xbf\xbd'
32573 b'\xef\xbf\xbd'
32849 b'\xef\xbf\xbd'
33176 b'\xef\xbf\xbd'
33232 b'\xef\xbf\xbd'
33426 b'\xe3\x81\xae\xef\xbf\xbd'
33566 b'\xef\xbf\xbd'
33699 b'\xef\xbf\xbd'
33768 b'\xef\xbf\xbd'
34402 b'\xef\xbf\xbd'
34460 b'\xef\xbf\xbd'
34504 b' \xe8\xa3\x8f\xef\xbf\xbd'
34650 b'\xef\xbf\xbd'
34719 b' \xef\xbf\xbd'
34754 b' \xef\xbf\xbd'
34913 b'???'
34932 b'\xef\xbf\xbd'
35050 b'\xef\xbf\xbd\xef\xbf\xbd'
35069 b'\xef\xbf\xbd\xef\xbf\xbd'
35266 b'\xef\xbf\xbd'
35705 b'\xef\xbf\xbd'
35707 b'\xef\xbf\xbd'
35713 b'..."'
35975 b'\xef\xbf\xbd'
36181 b'\xef\xbf\xbd'
36365 b'\xef\xbf\xbd'
36469 b' \xef\xbf\xbd'
36596 b'\xef\xbf\xbd\xef\xbf\xbd'
36685 b'\xef\xbf\xbd'
37239 b'\xef\xbf\xbd'
37345 b'\xef\xbf\xbd'
37605 b'\xef\xbf\xbd'
37772 b'\xef\xbf\xbd'
37863 b'\xef\xbf\xbd'
37867 b'!!'
38184 b'\xef\xbf\xbd'
38461 b'\xef\xbf\xbd'
39333 b'\xef\xbf\xbd\xef\xbf\xbd'
39355 b'\xef\xbf\xbd'
39611 b'\xef\xbf\xbd'
39820 b'\xe9\xbe\x8d\xef\xbf\xbd'
40367 b'\xef\xbf\xbd'
41340 b'\xef\xbf\xbd'
41349 b'?)'
41365 b'\xef\xbf\xbd\xef\xbf\xbd'
41585 b'\xef\xbf\xbd'
41678 b'\xef\xbf\xbd\xef\xbf\xbd'
41753 b'\xef\xbf\xbd'
41840 b'\xef\xbf\xbd'
42062 b'\xef\xbf\xbd\xef\xbf\xbd'
42164 b' \xef\xbf\xbd'
42314 b' \xef\xbf\xbd'
42527 b' \xef\xbf\xbd'
42911 b',"'
43074 b' \xef\xbf\xbd'
43102 b'\xef\xbf\xbd'
43297 b'\xef\xbf\xbd'
43380 b'\xef\xbf\xbd'
43518 b'\xef\xbf\xbd\xef\xbf\xbd'
43636 b'\xef\xbf\xbd'
43718 b'\xef\xbf\xbd'
43769 b'\xef\xbf\xbd\xef\xbf\xbd'
43889 b'\xef\xbf\xbd'
43897 b'\xef\xbf\xbd'
44165 b'\xef\xbf\xbd'
44293 b'\xef\xbf\xbd'
44713 b'................'
45250 b'\xef\xbf\xbd'
45379 b'\xef\xbf\xbd'
45433 b'\xef\xbf\xbd\xef\xbf\xbd'
45495 b'\xef\xbf\xbd'
45539 b'\xef\xbf\xbd\xef\xbf\xbd'
45617 b'\xef\xbf\xbd'
45739 b'\xef\xbf\xbd'
45784 b'\xef\xbf\xbd'
45865 b'\xef\xbf\xbd\xef\xbf\xbd'
45911 b'\xef\xbf\xbd'
46237 b'\xef\xbf\xbd'
46256 b'\xef\xbf\xbd'
46328 b'.)'
46349 b'\xef\xbf\xbd'
46479 b'\xef\xbf\xbd'
46695 b'\xef\xbf\xbd'
46763 b'\xef\xbf\xbd'
46788 b'\xef\xbf\xbd\xef\xbf\xbd'
47078 b'\xef\xbf\xbd'
47082 b'......'
47249 b'\xef\xbf\xbd'
47540 b'._'
47728 b'\xef\xbf\xbd'
47797 b'\xef\xbf\xbd'
47947 b'\xef\xbf\xbd'
47991 b'\xef\xbf\xbd'
48071 b'\xef\xbf\xbd'
48585 b'\xef\xbf\xbd\xef\xbf\xbd\xef\xbf\xbd'
48953 b'\xef\xbf\xbd\xef\xbf\xbd'
48958 b'\xef\xbf\xbd'
49035 b'\xef\xbf\xbd'
49149 b'\xe3\x81\xae\xef\xbf\xbd'
49426 b'\xef\xbf\xbd'
49694 b'\xef\xbf\xbd'
50159 b'\xef\xbf\xbd\xef\xbf\xbd'
50169 b' \xef\xbf\xbd'
```
## Expected behavior
I would expect that there are no duplicate tokens. I might be decoding tokens incorrectly, which would explain why there are so many replacement-character `b'\xef\xbf\xbd'` tokens, but there are even duplicates among normal UTF-8 characters such as `?`, which occurs at tokens 30 and 5633:
```python
[i for i in range(VOCAB_SIZE) if tokenizer.decode(i) == '?']
```
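For what it's worth, the decoded strings collide but the underlying vocab entries appear to be distinct: GPT-2 uses a byte-level vocabulary, and many single-byte tokens (partial UTF-8 sequences) all decode to the same U+FFFD replacement character. A quick way to inspect the raw entries instead of their decoded form:

```python
from transformers import GPT2Tokenizer

tokenizer = GPT2Tokenizer.from_pretrained("gpt2")
vocab = tokenizer.get_vocab()                                 # raw token string -> id
print(len(vocab))                                             # number of distinct vocab entries
print(tokenizer.convert_ids_to_tokens([30, 5633]))            # raw byte-level forms behind two "duplicates"
```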
| 06-18-2021 22:43:31 | 06-18-2021 22:43:31 | Never mind, realized the tokens are stored in https://huggingface.co/gpt2/resolve/main/vocab.json |
transformers | 12,259 | closed | `ValueError: Expected input batch_size to match target batch_size` occurs when training GPT2 with `Seq2SeqTrainer` | ## Environment info
- `transformers` version: 4.6.1
- Platform: Linux-4.15.0-144-generic-x86_64-with-glibc2.10
- Python version: 3.8.5
- PyTorch version (GPU?): 1.8.1+cu102 (True)
- Tensorflow version (GPU?): 2.5.0 (True)
- Using GPU in script?: Yes
- Using distributed or parallel set-up in script?: Yes
### Who can help
Models:
- gpt2: @patrickvonplaten, @LysandreJik
Library:
- trainer: @sgugger
## Information
Model I am using: `distilgpt2`
## To reproduce
I was following the tutorial [https://github.com/huggingface/notebooks/blob/master/examples/summarization.ipynb](https://github.com/huggingface/notebooks/blob/master/examples/summarization.ipynb) to fine-tune `distilgpt2` on a Seq2Seq task. Here's how I run my training process.
My map function for preprocessing the datasets,
```
def tokenize(sample_batch, tokenizer):
src_text = []
batch_size = len(sample_batch["src_abstract"])
for i in range(batch_size):
src_text.append(" ".join(
[sample_batch["src_abstract"][i], sample_batch["text_before_explicit_citation"][i], sample_batch["text_after_explicit_citation"][i]]))
tgt_text = sample_batch["tgt_abstract"]
inputs = tokenizer(
src_text,
tgt_text,
add_special_tokens=True,
truncation="longest_first",
# padding="max_length",
max_length=750
)
labels = tokenizer(
sample_batch["explicit_citation"],
truncation="longest_first",
# padding="max_length",
max_length=128,
)
inputs["labels"] = labels["input_ids"]
return inputs
```
My training code,
```
model_name = "distilgpt2"
model = GPT2LMHeadModel.from_pretrained(model_name).to('cuda')
data_collator = DataCollatorForSeq2Seq(tokenizer, model=model, padding="max_length")
training_args = Seq2SeqTrainingArguments(
"./checkpoints",
learning_rate=2e-5,
weight_decay=0.01,
per_device_train_batch_size=2,
per_device_eval_batch_size=2,
save_strategy='steps',
evaluation_strategy='steps',
logging_strategy='steps',
save_total_limit=1,
logging_steps=500,
fp16=True,
predict_with_generate=True
)
trainer = Seq2SeqTrainer(
model=model,
args=training_args,
data_collator=data_collator,
train_dataset=train_dataset,
eval_dataset=dev_dataset,
compute_metrics=compute_metrics
)
trainer.train()
```
And the error log occurs,
```
ValueError: Caught ValueError in replica 0 on device 0.
Original Traceback (most recent call last):
File "/home/guoao/anaconda3/envs/wga/lib/python3.8/site-packages/torch/nn/parallel/parallel_apply.py", line 61, in _worker
output = module(*input, **kwargs)
File "/home/guoao/anaconda3/envs/wga/lib/python3.8/site-packages/torch/nn/modules/module.py", line 889, in _call_impl
result = self.forward(*input, **kwargs)
File "/home/guoao/anaconda3/envs/wga/lib/python3.8/site-packages/transformers/models/gpt2/modeling_gpt2.py", line 972, in forward
loss = loss_fct(shift_logits.view(-1, shift_logits.size(-1)), shift_labels.view(-1))
File "/home/guoao/anaconda3/envs/wga/lib/python3.8/site-packages/torch/nn/modules/module.py", line 889, in _call_impl
result = self.forward(*input, **kwargs)
File "/home/guoao/anaconda3/envs/wga/lib/python3.8/site-packages/torch/nn/modules/loss.py", line 1047, in forward
return F.cross_entropy(input, target, weight=self.weight,
File "/home/guoao/anaconda3/envs/wga/lib/python3.8/site-packages/torch/nn/functional.py", line 2693, in cross_entropy
return nll_loss(log_softmax(input, 1), target, weight, None, ignore_index, None, reduction)
File "/home/guoao/anaconda3/envs/wga/lib/python3.8/site-packages/torch/nn/functional.py", line 2384, in nll_loss
raise ValueError(
ValueError: Expected input batch_size (2046) to match target batch_size (138).
```
It seems like the `input_ids` are padded to the model's `max_length`, but the `labels` are not (I also have a question on why the `batch_size` looks like `2046` instead of `batch_size * max_length = 2048`). I found similar errors in the forum [https://discuss.huggingface.co/t/how-to-use-seq2seqtrainer-seq2seqdatacollator-in-v4-2-1/3243](https://discuss.huggingface.co/t/how-to-use-seq2seqtrainer-seq2seqdatacollator-in-v4-2-1/3243), which says,
> The PR has been merged, so you should be able to use a similar workflow. Note that the processing that used to be done in Seq2SeqDataCollator is now done on the dataset directly.
But I'm not sure how it solves the problem. I'd really appreciate any kinds of help!
 | 06-18-2021 19:08:57 | 06-18-2021 19:08:57 | The problem is that you are using a decoder model (DistilGPT2) but have inputs and targets of different lengths, which is impossible for those models. You should use an encoder-decoder (also called seq2seq) model for this kind of task; see the complete list [here](https://huggingface.co/transformers/model_summary.html#seq-to-seq-models).<|||||>@sgugger Ah that's exactly my mistake. Thank you very much for the answer.
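For reference, a minimal sketch of switching the original script to an encoder-decoder checkpoint (the choice of `t5-small` here is just an example):

```python
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM, DataCollatorForSeq2Seq

model_name = "t5-small"  # any encoder-decoder checkpoint works here
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForSeq2SeqLM.from_pretrained(model_name)
data_collator = DataCollatorForSeq2Seq(tokenizer, model=model)
# the rest of the Seq2SeqTrainer setup from the original script stays the same
```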
transformers | 12,258 | closed | [docs] performance | This PR imports the doc I started working on some months back https://github.com/huggingface/transformers/issues/9824 after syncing the code to master's API.
Surely a lot more work can and will be done, this is just a starting baseline.
Fixes: https://github.com/huggingface/transformers/issues/9824
@sgugger | 06-18-2021 18:49:21 | 06-18-2021 18:49:21 | |
transformers | 12,257 | closed | [DeepSpeed] don't ignore --adafactor | This PR adds a small improvement that checks that if `--adafactor` is passed and the DS config has an optimizer section, then we assert. Before that it was silently ignored, which was misleading to the user.
Fixes: https://github.com/huggingface/transformers/issues/11749
@sgugger | 06-18-2021 18:01:35 | 06-18-2021 18:01:35 | |
transformers | 12,256 | closed | [Flax] Fix flax test save pretrained | # What does this PR do?
Fixes Flax save/load test
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/master/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/master/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
 | 06-18-2021 17:48:50 | 06-18-2021 17:48:50 | @patil-suraj - it looks like the Flax lib has a problem with the save/load test. Could you take a look?<|||||>@patrickvonplaten
#12284 should fix this. |
transformers | 12,255 | closed | [Flax] [WIP] allow loading head model with base model weights | # What does this PR do?
Allows loading flax head model with base model weights.
Right now it's not possible to load a flax head model using weights from the base model as weights `dict` of the base model does not contain the `base_model_prefix` key. To reproduce
```python
from transformers import BertConfig, FlaxBertModel, FlaxBertForSequenceClassification
config = BertConfig(hidden_size=64, intermediate_size=128, max_position_embeddings=128, num_attention_heads=8, num_hidden_layers=8)
base_model = FlaxBertModel(config)
base_model.save_pretrained("base")
head_model = FlaxBertForSequenceClassification.from_pretrained("base")
``` | 06-18-2021 17:31:49 | 06-18-2021 17:31:49 | Thanks for fixing this! Could we maybe also add some tests that make sure that loading/saving works correctly for all models? |
transformers | 12,254 | closed | [Documentation Example] Task Summary - Start/End of Span in QA Example | ## Environment info
This bug pertains to the [extractive QA example given in the documentation](https://huggingface.co/transformers/task_summary.html#extractive-question-answering) and will be reproducible across operating systems and library versions. I provide a fix for the PyTorch code below; the same can be applied to the TensorFlow example.
### Who can help
Probably @sgugger and/or @patrickvonplaten.
## Information
The way that the start and end of the extracted span are calculated means that it is possible that `end <= start`, which will lead to an empty answer. Instead, a joint probability distribution over start/end positions should be used with the probability of selecting (start, end) s.t. `end <= start` = 0.
## To reproduce
For reference, the original code:
```
for question in questions:
... inputs = tokenizer(question, text, add_special_tokens=True, return_tensors="pt")
... input_ids = inputs["input_ids"].tolist()[0]
...
... outputs = model(**inputs)
... answer_start_scores = outputs.start_logits
... answer_end_scores = outputs.end_logits
...
... answer_start = torch.argmax(
... answer_start_scores
... ) # Get the most likely beginning of answer with the argmax of the score
... answer_end = torch.argmax(answer_end_scores) + 1 # Get the most likely end of answer with the argmax of the score
...
... answer = tokenizer.convert_tokens_to_string(tokenizer.convert_ids_to_tokens(input_ids[answer_start:answer_end]))
...
... print(f"Question: {question}")
... print(f"Answer: {answer}")
```
I suggest modifying the code something akin to the below. I also modified the answer decoding to exclude special tokens.
```
for question in questions:
... inputs = tokenizer(question, text, add_special_tokens=True, return_tensors="pt")
... input_ids = inputs["input_ids"].tolist()[0]
...
... outputs = model(**inputs)
... # Get probabilities for start and end of the answer span.
... start_probs = torch.softmax(outputs.start_logits.squeeze(), 0)
... end_probs = torch.softmax(outputs.end_logits.squeeze(), 0)
... # Calculate joint probabilities.
... answer_span = torch.outer(start_probs, end_probs)
... # Mask out any pair (i, j) where j <= i.
... mask = torch.ones(answer_span.shape).tril()
... mask[mask == 1] = float("-inf")
... # Select span based on max joint probability.
... answer_start, answer_end = torch.where(answer_span == (answer_span + mask).max())
... answer_start, answer_end = answer_start[0], answer_end[0] + 1
... # Decode IDs within the span, ignoring special tokens.
... answer = tokenizer.decode(input_ids[answer_start:answer_end], skip_special_tokens=True)
...
... print(f"Question: {question}")
... print(f"Answer: {answer}")
```
Running this code gives the same answers for the example input (c.f. documentation), but avoids the issue of extracting spans with `end <= start` for other inputs. One such input would for instance be:
```
text = 'stairs down eventually lead to the invocation level. Performing the invocation ritual at the vibrating square opens the stairs to the Sanctum.\n With the Amulet of Yendor in hand, the adventurer may ascend from level 1 into the Plane of Earth; thence s/he may proceed through magic portals to the planes of Air, Fire, and Water, and thence to the Astral Plane. Offering the Amulet of Yendor on the correct high altar wins the game.\n Along the way, one will encounter these branches and special levels'
questions = ["how do i descend the dungeon"]
```
for which the output of the original, unmodified code will be (because it selects a span end that is before the selected start):
```
Question: how do i descend the dungeon
Answer:
```
and for the fixed code proposed above:
```
Question: how do i descend the dungeon
Answer: stairs down eventually lead to the invocation level
```
I'm happy to port the code to TensorFlow and submit a pull request with the updated code. | 06-18-2021 16:06:24 | 06-18-2021 16:06:24 | The example is just a quick demo on Question Answering and there are many, many things it doesn't do. I'd rather keep it simple since this page covers a lot of things already, and since we have the two [question answering](https://github.com/huggingface/transformers/tree/master/examples/pytorch/question-answering) scripts that are more complete.<|||||>I understand that it's a simple example but the point raised is not about things it doesn't do, but rather that it does this specific thing wrong. The code snippet assumes conditional independence which then leads to it potentially producing empty outputs. The example is placed very prominently within the documentation so I'd argue that it should be corrected. The question answering scripts seem to [make a similar assumption](https://github.com/huggingface/transformers/blob/master/examples/pytorch/question-answering/utils_qa.py#L128) but then correct for it by ignoring invalid predictions.<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored. |
transformers | 12,253 | closed | First hidden state of the last layer of Bert (french version : FlauBert) only prints vectors of 0 or -0 after using it !! | ## Environment info
- `transformers` version: 4.4.2
- Platform: linux
- Python version: 3.7.6
- PyTorch version (GPU?): 1.5.0a0
- Tensorflow version (GPU?):
- Using GPU in script?: no (for on small sample ~25 sentences) but yes ( for full sample ~70K sentences/paragraphs)
- Using distributed or parallel set-up in script?: no
### Who can help
@LysandreJik
@sgugger
# Library
python 3.7.6
numpy == 1.18.4
pandas == 1.0.3
transformers == 4.6.1
torch :1.5.0a0
scipy == 1.4.1
sklearn== 0.22.1
## Information
Model I am using (FlauBERT ...):
The problem arises when using the FlauBERT model to get the first hidden state. I do not know if it is an error, but the hidden states produced by the model downloaded from the transformers library give the same value (0 or -0) for all my data after applying the model. Maybe that is what it is supposed to produce, but the behaviour is the same for all the sentences of my dataset (~70K).
* [ ] the official example scripts: (give details below)
import torch
from transformers import FlaubertModel, FlaubertTokenizer
flaubert, log = FlaubertModel.from_pretrained(modelname, output_loading_info=True)
flaubert_tokenizer = FlaubertTokenizer.from_pretrained(modelname, do_lowercase=False)
sentence = "Le chat mange une pomme."
token_ids = torch.tensor([flaubert_tokenizer.encode(sentence)])
last_layer = flaubert(token_ids)[0]
print(last_layer.shape)
cls_embedding = last_layer[:, 0, :]
* [ ] my own modified scripts: (give details below)
def spliterate(buf, chunk):
for start in range(0, buf.size, chunk):
yield buf[start:start + chunk]
path_to_lge = "flaubert/flaubert_small_cased"
def get_flaubert_layer(texte, path_to_lge):
lge_size = path_to_lge[33:-1]
print("Embeddings bert model used.................... : ", lge_size, "\n")
flaubert = FlaubertModel.from_pretrained(path_to_lge)
flaubert_tokenizer = FlaubertTokenizer.from_pretrained(path_to_lge)
print(texte)
tokenized = texte.apply(
(lambda x: flaubert_tokenizer.encode(x, add_special_tokens=True, max_length=512, truncation=True)))
print("Exit after applying tokenizer :" , "\n", tokenized, "\n")
max_len = 0
for i in tokenized.values:
if len(i) > max_len:
max_len = len(i)
padded = np.array([i + [0] * (max_len - len(i)) for i in tokenized.values])
print("Exit after padding: ", "\n", padded, "\n")
last_layer_ = []
for tmp in spliterate(padded, 4000):
#print(tmp)
if len(tmp) != 0:
#print(len(tmp))
token_ids = torch.tensor(tmp)
print("Exit after torch transformation:" , "\n", token_ids, "\n")
attention_mask = np.where(tmp != 0, 1, 0)
attention_mask = torch.tensor(attention_mask)
print("Exit after torch transformation for attention_mask:", "\n", attention_mask, "\n")
with torch.no_grad():
layer = flaubert(token_ids, attention_mask=attention_mask)
layer = layer[0][:, 0, :].numpy()
print(" After applying model flaubert to get features :", "\n", layer, "\n")
last_layer_.append(layer)
else:
None
last_layer_np = np.array(last_layer_)
last_layer_np_array = np.concatenate(last_layer_np)
# print("Total of sentences :" , len(last_layer_np_array))
return last_layer_np_array, lge_size
#For getting hidden state (last layer)
#Xtest is a dataframe column and look like this :
0 si j’ai un problème, comment je remonte l’info...
1 des agents de maintenance ? Oui, oui. Enfin… I...
2 Il faudrait des tiroirs qui sortent / rentrent...
3 ROI, 5 à 10 ans. Si l’énergie explose, ça devi...
4 Je ne vois pas cela en conception de cuisine, ...
5 Les proverbes. C'est drôle parce qu'on a déjà ...
6 J'ai l'impression que ça peut-être utilisable ...
7 Ça reste… ça serait un réfrigérateur comme on ...
8 C’est en plastique souple et on arrive vraimen...
9 Parce que déjà, là, on évite de mettre un cong...
10 En rénovation, il n'est pas évident de rajoute...
Xtest_emb, s = get_hidden_state(Xtest, path_to_model_lge)
# What data(token_ids) looks after transforming it to tensor before giving to flaubert model
[0, 93, 106, ..., 0, 0, 0],
[ 0, 23, 2080, ..., 0, 0, 0],
[ 0, 59, 2961, ..., 0, 0, 0],
...,
[ 0, 55, 369, ..., 0, 0, 0],
[ 0, 6077, 53, ..., 0, 0, 0],
[ 0, 46, 41, ..., 0, 0, 0]])
#What layer[0] look like after using flaubert model on the data :
tensor([[[-0.0000, -0.0000, -0.0000, ..., -0.0000, -0.0000, -0.0000],
[-0.0932, 0.7302, 1.3671, ..., -2.5153, -2.0251, 1.2235],
[-0.4520, 2.3935, 1.0048, ..., -3.4742, -0.4194, 0.4428],
...,
[-0.0000, -0.0000, -0.0000, ..., -0.0000, -0.0000, -0.0000],
[-0.0000, -0.0000, -0.0000, ..., -0.0000, -0.0000, -0.0000],
[-0.0000, -0.0000, -0.0000, ..., -0.0000, -0.0000, -0.0000]],
[[-0.0000, -0.0000, 0.0000, ..., 0.0000, -0.0000, -0.0000],
[ 2.0035, 0.6900, -0.5092, ..., 0.0862, -1.6157, 0.6070],
[-0.3516, 2.5931, -1.6113, ..., -0.6265, 0.6559, -0.9409],
...,
[-0.0000, -0.0000, 0.0000, ..., 0.0000, -0.0000, -0.0000],
[-0.0000, -0.0000, 0.0000, ..., 0.0000, -0.0000, -0.0000],
[-0.0000, -0.0000, 0.0000, ..., 0.0000, -0.0000, -0.0000]],
[[-0.0000, -0.0000, 0.0000, ..., -0.0000, 0.0000, -0.0000],
[-0.0040, -1.7643, -1.5588, ..., -1.8786, -0.4597, 0.3843],
[-3.4181, 0.1528, 0.6369, ..., -2.2618, -1.0742, -0.6097],
...,
[-0.0000, -0.0000, 0.0000, ..., -0.0000, 0.0000, -0.0000],
[-0.0000, -0.0000, 0.0000, ..., -0.0000, 0.0000, -0.0000],
[-0.0000, -0.0000, 0.0000, ..., -0.0000, 0.0000, -0.0000]],
..,
[[-0.0000, 0.0000, -0.0000, ..., -0.0000, -0.0000, 0.0000],
[-1.3317, -0.3899, 0.2560, ..., -1.7550, -1.9626, 0.3821],
[-1.3053, -0.2642, 0.1691, ..., -1.8541, -2.1521, 0.6066],
...,
[-0.0000, 0.0000, -0.0000, ..., -0.0000, -0.0000, 0.0000],
[-0.0000, 0.0000, -0.0000, ..., -0.0000, -0.0000, 0.0000],
[-0.0000, 0.0000, -0.0000, ..., -0.0000, -0.0000, 0.0000]],
[[-0.0000, -0.0000, -0.0000, ..., 0.0000, -0.0000, 0.0000],
[-0.8575, -2.6781, 1.0530, ..., 0.7656, -2.3176, 0.6474],
[ 0.5465, 0.1727, -0.8362, ..., -0.1918, -1.5318, 1.0457],
...,
[-0.0000, -0.0000, -0.0000, ..., 0.0000, -0.0000, 0.0000],
[-0.0000, -0.0000, -0.0000, ..., 0.0000, -0.0000, 0.0000],
[-0.0000, -0.0000, -0.0000, ..., 0.0000, -0.0000, 0.0000]],
[[ 0.0000, -0.0000, 0.0000, ..., -0.0000, -0.0000, -0.0000],
[-0.1900, -1.6420, -0.7254, ..., -1.5700, -1.1521, -0.0588],
[-0.7427, -2.5433, 0.6748, ..., -3.1792, -1.8242, 0.4684],
...,
[ 0.0000, -0.0000, 0.0000, ..., -0.0000, -0.0000, -0.0000],
[ 0.0000, -0.0000, 0.0000, ..., -0.0000, -0.0000, -0.0000],
[ 0.0000, -0.0000, 0.0000, ..., -0.0000, -0.0000, -0.0000]]])
What the result look like :
print(layer = layer[0][:, 0, :].numpy())
[[-0. -0. -0. ... -0. -0. -0.]
[-0. -0. 0. ... 0. -0. -0.]
[-0. -0. 0. ... -0. 0. -0.]
...
[-0. 0. -0. ... -0. -0. 0.]
[-0. -0. -0. ... 0. -0. 0.]
[ 0. -0. 0. ... -0. -0. -0.]]
The tasks I am working on is:
* [ ] an official GLUE/SQUaD task: (give the name)
* [x] my own task or dataset: (give details below)
Extracting features (get the hidden layer and pass it to a multi-layer perceptron from scikit-learn for a classification task)
## To reproduce
Steps to reproduce the behavior:
1. Get the different libraries mentionned above in a virtual environnement
2. get the small example of Xtest given above in a dataframe column and pass it to the
function "get_hidden_state" and print the results
## Expected behavior
I did not expect the first hidden state to be all zero values, so maybe I am using FlauBERT the wrong way, or I downloaded the wrong version.
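One plausible explanation to check (an assumption on my part, not confirmed in this thread): FlauBERT/XLM-style tokenizers use token id 0 for the `<s>` start token, so padding by hand with 0 and building the mask with `tmp != 0` also masks out the first position, and these models zero out hidden states at masked positions. A safer sketch is to let the tokenizer build the padding and attention mask itself:

```python
import torch
from transformers import FlaubertModel, FlaubertTokenizer

path_to_lge = "flaubert/flaubert_small_cased"
tokenizer = FlaubertTokenizer.from_pretrained(path_to_lge)
model = FlaubertModel.from_pretrained(path_to_lge)

texts = Xtest.tolist()  # the text column from the script above
enc = tokenizer(texts, padding=True, truncation=True, max_length=512, return_tensors="pt")
with torch.no_grad():
    output = model(**enc)
cls_embeddings = output[0][:, 0, :]  # first-token states, no longer zeroed by the mask
```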
| 06-18-2021 15:36:46 | 06-18-2021 15:36:46 | This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored. |
transformers | 12,252 | closed | Tensorflow QA example | The new TF QA example! There were a couple of issues with the metrics that might need more investigation, but I confirmed they happened in the PyTorch version too. Possibly that was caused by evaluating on an untrained model, though.
Also, don't stress about reviewing this until after the weekend! | 06-18-2021 14:22:44 | 06-18-2021 14:22:44 | |
transformers | 12,251 | closed | [Flax] Add jax flax to env command | This PR adds jax/flax libs to the env command.
@lewtun - can you maybe try out running `transformers-cli env` after this command to see why you cannot import `FlaxBigBirdForMaskedLM` | 06-18-2021 14:15:21 | 06-18-2021 14:15:21 | If most of the tests pass, then can I use the changes made by the PR in the time being to check whether I can atleast start my training? |
transformers | 12,250 | closed | RAG with T5 in a multitask setting | I was trying to run RAG with T5 for multiple tasks, i.e. fact checking and QA. As I understand it, only with T5 can you do that, by adding a unique prefix for the particular task before the input. I was wondering what the best way to handle that would be? Do I preprocess train.source and val.source so that they already have these prefixes, or is there an easier way as well?
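For what it's worth, a minimal sketch of that prefix preprocessing (assuming line-aligned `train.source`/`val.source` files; the prefix strings are just examples):

```python
def add_task_prefix(in_path: str, out_path: str, prefix: str) -> None:
    """Prepend a task tag to each line of a line-aligned source file."""
    with open(in_path) as f_in, open(out_path, "w") as f_out:
        for line in f_in:
            f_out.write(f"{prefix} {line}")

add_task_prefix("train.source", "train_prefixed.source", "question:")  # QA lines
# repeat with e.g. "verify:" for the fact-checking lines, then concatenate the files
```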
| 06-18-2021 14:14:34 | 06-18-2021 14:14:34 | Hello, thanks for opening an issue! We try to keep the github issues for bugs/feature requests.
Could you ask your question on the [forum](https://discuss.huggingface.co) instead?
Thanks!<|||||>@sb1992
No, it doesn't matter whether you use T5 or BART. You can actually use a special token in front of every line in the target files.<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored. |
transformers | 12,249 | closed | Got unexpected result when using BertTokenizer in Chinese | ## Environment info
- `transformers` version: 4.7.0
- Platform: Linux-5.4.0-74-generic-x86_64-with-debian-buster-sid
- Python version: 3.7.10
- PyTorch version (GPU?): 1.8.1 (True)
- Tensorflow version (GPU?): not installed (NA)
- Using GPU in script?: False
- Using distributed or parallel set-up in script?: False
- `tokenizers` version: 0.10.3
### Who can help
@LysandreJik
@sgugger
## Information
Model I am using (Bert, XLNet ...): bert-base-chinese
The problem arises when using:
* [ ] the official example scripts: (give details below)
* [x] my own modified scripts: (give details below)
The tasks I am working on is:
* [ ] an official GLUE/SQUaD task: (give the name)
* [x] my own task or dataset: (give details below)
## To reproduce
Steps to reproduce the behavior:
When I use BertTokenizer, I got unexpected results in some "strange" chars.
```python
text = "这是一个���中文句子"
tokenizer = BertTokenizer.from_pretrained("bert-base-chinese")
print(tokenizer.tokenize(text))
```
Output: ```['这', '是', '一', '个', '中', '文', '句', '子']```
And I got the same result using BertTokenizerFast too.
## Expected behavior
I think it may cause an error or affect performance when doing sequence labeling tasks, like NER or Word Segment.
And I noticed that I will get a more reasonable result when I downgrade the transformers to version 3.3.0 (with tokenizers==0.8.1rc2):
```['这', '是', '一', '个', '[UNK]', '中', '文', '[UNK]', '句', '子']```
Why does this happen? Is there any way to get the correct result in the new version of transformers?
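For reference, a minimal way to detect or strip such replacement characters in the input before tokenizing (a sketch):

```python
text = "这是一个���中文句子"
print([c for c in text if c == "\ufffd"])   # shows how many U+FFFD replacement characters are present
cleaned = text.replace("\ufffd", "")        # drop them before tokenization
print(cleaned)
```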
| 06-18-2021 12:50:29 | 06-18-2021 12:50:29 | Pinging @JetRunner <|||||>This behavior seems to be related to `BertTokenizerFast`. I'm not very sure whether I know how to fix it but I can give it a look at `tokenizers`.<|||||>I wouldn't say the current tokenization is "wrong", but I would prefer `BertTokenizerFast` to be consistent with Google's tokenizer. I'll look into this and let's see what's the behavior of Google BERT.<|||||>
Here's what you get with the original Google BERT tokenizer. Kinda confusing..<|||||>@hxxxxh Given the chaotic output of different implementations, rather than try to "fix" one to be consistent with another, I would recommend you to clean the text before using it as an input.<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored. |
transformers | 12,248 | closed | finding a bug in training the code of /src/transformers/models/detr/modeling_detr.py | finding a bug in training the code of https://github.com/huggingface/transformers/blob/f74655cd9b2e316af9d862968bc59c15d6849cad/src/transformers/models/detr/modeling_detr.py
https://github.com/huggingface/transformers/blob/f74655cd9b2e316af9d862968bc59c15d6849cad/src/transformers/models/detr/feature_extraction_detr.py#L616
should change encoded_inputs["targets"] to encoded_inputs["labels"]
because in https://github.com/huggingface/transformers/blob/f74655cd9b2e316af9d862968bc59c15d6849cad/src/transformers/models/detr/modeling_detr.py#L1350 we use variable of name labels in the function.
| 06-18-2021 08:36:55 | 06-18-2021 08:36:55 | Oh yeah, that makes sense, thanks for spotting it. I didn't have an issue with it as I never directly provided labels created by `DetrFeatureExtractor` to the model, but in case you do, you indeed will encounter an error.
Will update it! |
transformers | 12,247 | closed | [FlaxBart] few small fixes | # What does this PR do?
Typos and few small fixes | 06-18-2021 08:31:45 | 06-18-2021 08:31:45 | |
transformers | 12,246 | closed | Different Weights between google-bert (uncased_L-12_H-768_A-12) and Huggingface-bert (bert-base-uncased) | Environment: torch==1.8.1+cu111 transformers==4.3.3
```python
# google-bert (uncased_L-12_H-768_A-12)
model = BertForPreTraining.from_pretrained('uncased_L-12_H-768_A-12/', from_tf=True)
print(model.bert.embeddings.word_embeddings.weight)
```
```python
# google-bert (uncased_L-12_H-768_A-12) output
Parameter containing:
tensor([[-0.0314, -0.0045, 0.0182, ..., -0.0309, 0.0204, -0.0345],
[-0.0295, -0.0486, 0.0746, ..., -0.0363, 0.0262, -0.0108],
[-0.0328, -0.0582, -0.0149, ..., -0.0932, 0.0444, 0.0221],
...,
[-0.0337, -0.0518, -0.0280, ..., -0.0174, 0.0078, -0.0010],
[-0.0022, -0.0297, -0.0167, ..., -0.0472, -0.0006, 0.0128],
[-0.0631, -0.0144, -0.0232, ..., 0.0072, -0.0704, -0.0479]],
requires_grad=True)
```
```python
# Huggingface-bert (bert-base-uncased)
model = BertModel.from_pretrained('bert-base-uncased')
print(model.embeddings.word_embeddings.weight)
```
```python
# Huggingface-bert (bert-base-uncased) output
Parameter containing:
tensor([[-0.0102, -0.0615, -0.0265, ..., -0.0199, -0.0372, -0.0098],
[-0.0117, -0.0600, -0.0323, ..., -0.0168, -0.0401, -0.0107],
[-0.0198, -0.0627, -0.0326, ..., -0.0165, -0.0420, -0.0032],
...,
[-0.0218, -0.0556, -0.0135, ..., -0.0043, -0.0151, -0.0249],
[-0.0462, -0.0565, -0.0019, ..., 0.0157, -0.0139, -0.0095],
[ 0.0015, -0.0821, -0.0160, ..., -0.0081, -0.0475, 0.0753]],
requires_grad=True)
```
**Why is the output of `embeddings.word_embeddings.weight` different?**
| 06-18-2021 08:02:46 | 06-18-2021 08:02:46 | When checking the [model card]() of `google/bert_uncased_L-12_H-768_A-12`, it states the following:
> Note that the BERT-Base model in this release is included for completeness only; it was re-trained under the same regime as the original model.
Hence, `google/bert_uncased_L-12_H-768_A-12` is a retrained version of `bert-base-uncased`. So of course, the parameter values will differ.<|||||>@NielsRogge Is the `google/bert_uncased_L-12_H-768_A-12` equal to `https://storage.googleapis.com/bert_models/2020_02_20/uncased_L-12_H-768_A-12.zip` |
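A quick way to confirm they are genuinely different weights (a sketch; both checkpoints are available on the hub):

```python
import torch
from transformers import BertModel

a = BertModel.from_pretrained("bert-base-uncased")
b = BertModel.from_pretrained("google/bert_uncased_L-12_H-768_A-12")
same = torch.allclose(a.embeddings.word_embeddings.weight, b.embeddings.word_embeddings.weight)
print(same)  # False: the two checkpoints come from different training runs
```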
transformers | 12,245 | closed | TFBertForMaskedLM won't reload from saved checkpoint, shape mismatch issue | ## Environment info
- `transformers` version: 4.5.1-4.7
- Platform: Debian GNU/Linux 10 (buster)
- Python version: 3.9.2
- PyTorch version (GPU?): N/A
- Tensorflow version (GPU?): 2.5.0
- Using GPU in script?: No
- Using distributed or parallel set-up in script?: No
### Who can help
@Rocketknight1, @LysandreJik, @sgugger
## Information
Model I am using: TFBertForMaskedLM
The problem arises when using:
* [ ] the official example scripts: (give details below)
* [X] my own modified scripts: (give details below)
The tasks I am working on is:
* [ ] an official GLUE/SQUaD task: (give the name)
* [ X] my own task or dataset: (give details below)
I believe this issue also affects the official TFTrainer implementation
as the checkpoint restore snippet was adapted from it.
## To reproduce
Steps to reproduce the behavior:
1. Generate Masked Batch
2. initialize TF Model and assign CheckpointManager
3. Save model checkpoint
4. initialize new TF Model and assign CheckpointManager
5. restore from checkpoint
```
import numpy as np
from transformers import AutoTokenizer, TFAutoModelForMaskedLM, AutoConfig, TFAutoModelForCausalLM
import tensorflow as tf
random_sentences = ["You'll see the rainbow bridge after it rains cats and dogs.",
"They looked up at the sky and saw a million stars.",
"The bullet pierced the window shattering it before missing Danny's head by mere millimeters.",
"He was willing to find the depths of the rabbit hole in order to be with her."]
tok = AutoTokenizer.from_pretrained('bert-base-uncased')
input_ids = tok.batch_encode_plus(random_sentences,return_tensors='np',padding=True)['input_ids']
#Create masked tokens as labels
labels = np.ones_like(input_ids)*-100
mask = (np.random.uniform(size=input_ids.shape)<=0.2) & (input_ids != 0)
labels[mask]=tok.mask_token_id
batch= {'input_ids':tf.convert_to_tensor(input_ids),
'labels':tf.convert_to_tensor(labels)}
"""## Run model and save checkpoint"""
model = TFAutoModelForMaskedLM.from_pretrained('bert-base-uncased')
checkpoint = tf.train.Checkpoint(model=model)
model.ckpt_manager = tf.train.CheckpointManager(checkpoint, './', max_to_keep=1)
out = model(**batch)
print(out.loss.numpy())
model.ckpt_manager.save()
"""## Re-Initialize from config alone an load existing checkpoint"""
cfg = AutoConfig.from_pretrained('bert-base-uncased')
model2 = TFAutoModelForMaskedLM.from_config(cfg)
checkpoint2 = tf.train.Checkpoint(model=model2)
model2.ckpt_manager = tf.train.CheckpointManager(checkpoint2, './', max_to_keep=1)
latest_ckpt = tf.train.latest_checkpoint('./')
status = checkpoint2.restore(latest_ckpt)
status.assert_existing_objects_matched()
out = model2(**batch)
print(out.loss.numpy())
```
## Expected behavior
Expect to fully restore from checkpoint
## Current Behavior, error output
```
---------------------------------------------------------------------------
ValueError Traceback (most recent call last)
<ipython-input-12-5ec2de12ee44> in <module>()
----> 1 out = model2(**batch)
2 out.loss
19 frames
/usr/local/lib/python3.7/dist-packages/tensorflow/python/framework/ops.py in set_shape(self, shape)
1238 raise ValueError(
1239 "Tensor's shape %s is not compatible with supplied shape %s" %
-> 1240 (self.shape, shape))
1241
1242 # Methods not supported / implemented for Eager Tensors.
ValueError: Tensor's shape (512, 768) is not compatible with supplied shape [2, 768]
```
## Link to colab
https://colab.research.google.com/drive/12pwo4WSueOT523hh1INw5J_SLpkK0IgB?usp=sharing
| 06-18-2021 06:50:51 | 06-18-2021 06:50:51 | This is an odd one - can you check if the problem still occurs when you use `model.save_pretrained()`? Just pass a path to that method to save, then load the model using `TFAutoModelForMaskedLM.from_pretrained()` with the same path.<|||||>Yeah `model.save_pretrained()` followed by `TFAutoModelForMaskedLM.from_pretrained()` works fine <|||||>@Rocketknight1 I think that this is due to the fact that the variable for the token_type_embeddings clashes with the position_embeddings for having the same name: if I assign a different name to position_embeddings here: https://github.com/huggingface/transformers/blob/2e5dbdf2db4599a6694d0974575a70f9bc3c978e/src/transformers/models/bert/modeling_tf_bert.py#L164
Say go from "embeddings" to "pos_embeddings", the issue disappears, it's weird because I would expect the name_scope to take precedence but apparently not. I imagine that if I were to proceed with that name change, that might cause issues during model conversion to the BertModel pytorch implementation.<|||||>Hey, thank you for that very helpful bit of diagnostic info! That links this with #11202, another issue we have caused by the same underlying problem. This is helpful because I'll probably need to make some breaking changes to fix that issue, and the fact that it's causing multiple downstream problems will increase the urgency there.<|||||>Cool! Glad that was helpful, thanks for looking into the issue.<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
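For reference, a minimal sketch of the workaround that was reported to work above (round-tripping through `save_pretrained` instead of `tf.train.Checkpoint`), reusing the `model` and `batch` objects from the reproduction script:

```python
from transformers import TFAutoModelForMaskedLM

# `model` and `batch` come from the reproduction script above
model.save_pretrained("./mlm_checkpoint")
model2 = TFAutoModelForMaskedLM.from_pretrained("./mlm_checkpoint")
out = model2(**batch)
print(out.loss.numpy())
```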
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored. |
transformers | 12,244 | closed | RoFormerTokenizerFast has a wrong result when setting "return_offsets_mapping=True" | I use the roformer_chinese_char_base model, so word-level segmentation is not involved. RoFormerTokenizerFast has this bug, but BertTokenizerFast doesn't. Here is the code:
```python
In [1]: from transformers import RoFormerTokenizerFast, BertTokenizerFast
In [2]: path = '/data/pretrained_models/roformer_chinese_char_base'
In [3]: tokenizer_roformer = RoFormerTokenizerFast.from_pretrained(path, add_special_tokens=False, do_lower_case=True)
In [4]: tokenizer_bert = BertTokenizerFast.from_pretrained(path, add_special_tokens=False, do_lower_case=True)
In [5]: text = '收到真的很喜欢,真的是大爱,平常上个网打个游戏,查个东西都非常的好,网速也很快,真是淘到宝了'
In [6]: tokenizer_bert(text, return_offsets_mapping=True, add_special_tokens=False)
Out[6]: {'input_ids': [2684, 691, 4395, 4334, 2100, 1134, 3223, 7675, 4395, 4334, 2798, 1465, 3898, 7675, 1975, 1960, 198, 223, 5079, 2388, 223, 3602, 2349, 7675, 2982, 223, 214, 6017, 6657, 7160, 1960, 4334, 1506, 7675, 5079, 6552, 260, 2100, 2148, 7675, 4395, 2798, 3554, 691, 1698, 270], 'token_type_ids': [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0], 'attention_mask': [1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1], 'offset_mapping': [(0, 1), (1, 2), (2, 3), (3, 4), (4, 5), (5, 6), (6, 7), (7, 8), (8, 9), (9, 10), (10, 11), (11, 12), (12, 13), (13, 14), (14, 15), (15, 16), (16, 17), (17, 18), (18, 19), (19, 20), (20, 21), (21, 22), (22, 23), (23, 24), (24, 25), (25, 26), (26, 27), (27, 28), (28, 29), (29, 30), (30, 31), (31, 32), (32, 33), (33, 34), (34, 35), (35, 36), (36, 37), (37, 38), (38, 39), (39, 40), (40, 41), (41, 42), (42, 43), (43, 44), (44, 45), (45, 46)]}
In [7]: tokenizer_roformer(text, return_offsets_mapping=True, add_special_tokens=False)
Out[7]: {'input_ids': [2684, 691, 4395, 4334, 2100, 1134, 3223, 7675, 4395, 4334, 2798, 1465, 3898, 7675, 1975, 1960, 198, 223, 5079, 2388, 223, 3602, 2349, 7675, 2982, 223, 214, 6017, 6657, 7160, 1960, 4334, 1506, 7675, 5079, 6552, 260, 2100, 2148, 7675, 4395, 2798, 3554, 691, 1698, 270], 'token_type_ids': [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0], 'attention_mask': [1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1], 'offset_mapping': [(0, 1), (0, 1), (0, 1), (0, 1), (0, 1), (0, 1), (0, 1), (0, 1), (0, 1), (0, 1), (0, 1), (0, 1), (0, 1), (0, 1), (0, 1), (0, 1), (0, 1), (0, 1), (0, 1), (0, 1), (0, 1), (0, 1), (0, 1), (0, 1), (0, 1), (0, 1), (0, 1), (0, 1), (0, 1), (0, 1), (0, 1), (0, 1), (0, 1), (0, 1), (0, 1), (0, 1), (0, 1), (0, 1), (0, 1), (0, 1), (0, 1), (0, 1), (0, 1), (0, 1), (0, 1), (0, 1)]}
```
As you can see, "offset_mapping" is wrong at Out[7].
@JunnYu | 06-18-2021 06:01:08 | 06-18-2021 06:01:08 | @JaheimLee
**uncomment** lines L44-L53
##### this code slices normalized_string; it is too slow (6s), but test_alignement_methods passes
https://github.com/huggingface/transformers/blob/e43e11260ff3c0a1b3cb0f4f39782d71a51c0191/src/transformers/models/roformer/tokenization_utils.py#L43-L53
If we use this code, `offset_mapping` is correct, but it takes a lot of processing time.
------------------------------------------------------------------------------------------------------------
and **comment out** lines L56-L63
##### with this code, test_alignement_methods does not pass, but it is fast (300ms)
https://github.com/huggingface/transformers/blob/e43e11260ff3c0a1b3cb0f4f39782d71a51c0191/src/transformers/models/roformer/tokenization_utils.py#L55-L63
If we use this code, `offset_mapping` is wrong, but it takes very little processing time.
------------------------------------------------------------------------------------------------------------
If you use a `char level` model, I recommend using BertTokenizer (it is very fast).
And if you use a `word level` model like `roformer_chinese_base`, I recommend using RoFormerTokenizer. (If you don't care about `speed` and want a correct `offset_mapping`, you should **uncomment** L44-L53 and **comment out** L56-L63 in transformers/src/transformers/models/roformer/tokenization_utils.py. A sketch of the char-level path follows below.)
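A minimal sketch of the char-level recommendation (assuming the checkpoint ships a BERT-style `vocab.txt`, as `roformer_chinese_char_base` does); it mirrors the correct Out[6] result shown above:
```python
from transformers import BertTokenizerFast

path = '/data/pretrained_models/roformer_chinese_char_base'
tok = BertTokenizerFast.from_pretrained(path, do_lower_case=True)
enc = tok('收到真的很喜欢', return_offsets_mapping=True, add_special_tokens=False)
print(enc['offset_mapping'])  # expected: [(0, 1), (1, 2), (2, 3), (3, 4), (4, 5), (5, 6)]
```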
|
transformers | 12,243 | closed | GPT-J | **This is a work-in-progress focused on reconciling styles and may break without warning. If you want to use GPT-J with the HF interface, you can do that by installing transformers from [here](https://github.com/finetuneanon/transformers/tree/gpt-j). The purpose of this PR is to make progress on converting that repo to the style HF prefers.**
# What does this PR do?
This is my attempt to reconcile #12106 with the HF style guidelines as described by @sgugger. The original PR was created by @finetuneanon and @kurumuz.
This implementation has not been thoroughly tested yet, but I wanted to get something out as a starting point for continuing the conversation before too much momentum is lost. I need to reread HF documentation a bit more to figure out the things that are wrong, or hopefully one of you lovely people can help me out.
For comparison, a frozen version of the code in the original PR can be found [here](https://github.com/finetuneanon/transformers/tree/c0dcc7fad45e9ac07cdff525cbe7fb0ff76a1304).
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [X] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [X] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case. [Link](https://github.com/huggingface/transformers/pull/12106)
- [X] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/master/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/master/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
@patrickvonplaten @patil-suraj @sgugger | 06-18-2021 05:16:15 | 06-18-2021 05:16:15 | The main thing I'm uncertain about is how to handle unimplemented functionality. GPT-J uses the same tokenizer as GPT-2, so I removed the tokenizer definition. Is that correct, or no? Relatredly, there were many types of modeling that GPT-J was not designed for, and @finetuneanon's PR just deleted the boilerplate for them. Is this correct?<|||||>> Also, sorry to ask this again, but could we not modify generation in this PR, since it seems it's not related to GPT-J.
Damn. It looks like I messed something up.... this was supposed to not include @finetuneanon's commits. I might close this and create a replacement PR with the correct commit history.<|||||>Mmm, I was wondering how this has been going. I would love to try a stable version of this!<|||||>Hey @sualehasif
A stable version will be available in a week, stay tuned! <|||||>> Damn. It looks like I messed something up.... this was supposed to not include @finetuneanon's commits. I might close this and create a replacement PR with the correct commit history.
@StellaAthena any idea when would you be adding a new PR? We are also running some experiments so maybe we could help.<|||||>@mittalpatel
I'm taking over the PR. But feel free to post your findings :) <|||||>In #12106 @finetuneanon reports the results of some evaluations of the ported model on EleutherAI’s evaluation harness. The numbers were a little lower than what we had found using the original implementation, but both he and I felt this was likely due to FP16. I can now confirm that the ported model achieves the same performance as the original model when evaluated in FP32. The absolute difference in performance on lambada, HellaSwag, PiQA, and Winogrande are all less than 0.5% when done in FP32<|||||>Cool, that's good to know.<|||||>@patil-suraj can you mark this as a draft, as it is not ready to merge in its current state?<|||||>> Hey @sualehasif
>
> A stable version will be available in a week, stay tuned!
Hi, @patil-suraj thanks so much for working on this. Is there any progress on integration to huggingface transformers?<|||||>Just chiming in here: All of the .py files with dashes will not be importable :) So I'd suggest changing `gpt-j` to `gptj` or `gpt_j` in the .py file path names.<|||||>Any updates on this and any help required?<|||||>@patil-suraj What is the status of this?
I would really like to use this model, and I don't feel like messing around with forks to get this to work.<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored.<|||||>I would still love to see this happen.<|||||>> I would still love to see this happen.
This is going to happen any day now, see #13022 |
transformers | 12,242 | closed | Can't load tokenizer for 'imxly/t5-pegasus'. | When I try to use 'imxly/t5-pegasus' from huggingface.co I get the error shown in the screenshot below. What is the problem?
<img width="1589" alt="截屏2021-06-18 上午11 10 34" src="https://user-images.githubusercontent.com/12478408/122500741-e76d4f00-d025-11eb-839c-96f259c89af0.png">
| 06-18-2021 03:11:24 | 06-18-2021 03:11:24 | The tokenizer specified for that model is `T5Tokenizer`, which is a sentencepiece-based tokenizer. However, the tokenizer file is `vocab.txt`, which is a BERT-like (WordPiece) file.
The T5 tokenizer expects a `spiece.model` file generated by the SentencePiece library, or a `tokenizer.json` file generated by the Tokenizers library.<|||||>> The tokenizer specified for that model is `T5Tokenizer`, which is a sentencepiece-based tokenizer. However, the tokenizer file is `vocab.txt`, which is a BERT-like (WordPiece) file.
>
> The T5 tokenizer expects a `spiece.model` file generated by the SentencePiece library, or a `tokenizer.json` file generated by the Tokenizers library.
so is it the fault of the uploaded model ?<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored.<|||||>Can't load tokenizer for './models/t5-pegasus'
The same problem also occurs after downloading the model and using it locally. |
transformers | 12,241 | open | Modify BERT encoder layers? | Hello, I would like to modify the encoder layers of the BERT model to insert FC and ReLU layers.
This would let me reproduce the approach of [Squeeze-and-Excitation Networks](https://arxiv.org/abs/1709.01507).
How can I use an nn.Module class to handle the encoder outputs?
Example:
```python
import torch.nn as nn
from transformers import BertModel

class CustomBERTModel(nn.Module):
    def __init__(self):
        super(CustomBERTModel, self).__init__()
        self.bert = BertModel.from_pretrained("bert-base-uncased")
        # add your additional layers here, for example a dropout layer followed by a linear classification head
        self.dropout = nn.Dropout(0.3)
        self.out = nn.Linear(768, 2)

    def forward(self, ids, mask, token_type_ids):
        # return_dict=False so the call returns a (sequence_output, pooled_output) tuple we can unpack
        sequence_output, pooled_output = self.bert(
            ids,
            attention_mask=mask,
            token_type_ids=token_type_ids,
            return_dict=False
        )
        # we apply dropout to the sequence output, tensor has shape (batch_size, sequence_length, 768)
        sequence_output = self.dropout(sequence_output)
        # next, we apply the linear layer. The linear layer (which applies a linear transformation)
        # takes as input the hidden states of all tokens (so seq_len times a vector of size 768, each corresponding to
        # a single token in the input sequence) and outputs 2 numbers (scores, or logits) for every token
        # so the logits are of shape (batch_size, sequence_length, 2)
        logits = self.out(sequence_output)
        return logits
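# --- usage sketch (added for illustration; hypothetical, not from the original question) ---
# tokenizer = BertTokenizer.from_pretrained("bert-base-uncased")
# enc = tokenizer("The house is wonderful.", return_tensors="pt")
# model = CustomBERTModel()
# logits = model(enc["input_ids"], enc["attention_mask"], enc["token_type_ids"])
# print(logits.shape)  # torch.Size([1, sequence_length, 2])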
``` | 06-18-2021 02:40:01 | 06-18-2021 02:40:01 | Hello, thanks for opening an issue! We try to keep the github issues for bugs/feature requests.
Could you ask your question on the [forum](https://discuss.huggingface.co) instead?
Thanks! |
transformers | 12,240 | closed | Depreciate pythonic Mish and support PyTorch 1.9 version of Mish | This PR removes the old pure pythonic version of [Mish](https://arxiv.org/abs/1908.08681) and now enables support for the [PyTorch 1.9 Mish version](https://pytorch.org/docs/stable/generated/torch.nn.Mish.html#torch.nn.Mish). It also removes isolated references of the function where it is not used.
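For reference, the removed activation and the native one compute the same function; a quick equivalence sketch (for illustration only, not code taken from the PR, assuming the usual formulation mish(x) = x * tanh(softplus(x))):
```python
import torch
import torch.nn.functional as F

x = torch.randn(8)

# pure-Python Mish, as commonly written
mish_python = x * torch.tanh(F.softplus(x))

# native module available since PyTorch 1.9
mish_native = torch.nn.Mish()(x)

print(torch.allclose(mish_python, mish_native, atol=1e-6))  # True
```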
| 06-18-2021 00:39:54 | 06-18-2021 00:39:54 | Done |
transformers | 12,239 | closed | [t5 doc] make the example work out of the box | This PR expands the training example to include the correct model type for the example to work, e.g. with `T5Model` this example will break.
Fixes: https://github.com/huggingface/transformers/issues/12238
@patrickvonplaten
| 06-17-2021 23:22:45 | 06-17-2021 23:22:45 | |
transformers | 12,238 | closed | [doc] t5 incomplete example | https://huggingface.co/transformers/model_doc/t5.html#training has:
```
input_ids = tokenizer('translate English to German: The house is wonderful.', return_tensors='pt').input_ids
labels = tokenizer('Das Haus ist wunderbar.', return_tensors='pt').input_ids
# the forward function automatically creates the correct decoder_input_ids
loss = model(input_ids=input_ids, labels=labels).loss
```
which is broken unless the right model type is used. And when none is specified, readers typically reach for `AutoModel`, which gives: `TypeError: forward() got an unexpected keyword argument 'labels'`
So probably need to use an explicit `T5ForConditionalGeneration` and then it works.
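For reference, a version of the snippet that runs as intended (a sketch that mirrors the doc example above, with the missing instantiations added; the `t5-small` checkpoint is just an example):
```python
from transformers import T5Tokenizer, T5ForConditionalGeneration

tokenizer = T5Tokenizer.from_pretrained('t5-small')
model = T5ForConditionalGeneration.from_pretrained('t5-small')

input_ids = tokenizer('translate English to German: The house is wonderful.', return_tensors='pt').input_ids
labels = tokenizer('Das Haus ist wunderbar.', return_tensors='pt').input_ids
# the forward function automatically creates the correct decoder_input_ids
loss = model(input_ids=input_ids, labels=labels).loss
```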
@patrickvonplaten, @patil-suraj
| 06-17-2021 23:12:50 | 06-17-2021 23:12:50 | A possible approach proposed here: https://github.com/huggingface/transformers/pull/12239 |
transformers | 12,237 | closed | BART fine-tuning doesn't work and produces a fixed output for each input | I'm getting stuck on fine-tuning BART model on reddit-tifu dataset. When I use a pre-trained model of BART, for example, `bart-large-xsum` without finetuning, it works fine and produces sort of sensible output for each input, but as I start finetuning it with BART, it starts to predict irrelevant text for each given input; as if it has been overfit to training data. Although, overfitting doesn't seem rational to me as the reddit has over 30k training samples. I'm wondering if there's any problem with my bash script or in the fine-tuning scripts? Since I've been using the instructions on https://github.com/huggingface/transformers/tree/master/examples/pytorch/summarization to this end. Following is my bash script for fine-tuning `bart-large-xsum` model.
```
DS_BASE_DIR=/home/code-base/user_space/packages/summarization_datasets/reddit_tifu/
python -m torch.distributed.launch --nproc_per_node=4 examples/pytorch/summarization/run_summarization.py \
--model_name_or_path facebook/bart-large-xsum \
--do_train \
--do_eval \
--train_file $DS_BASE_DIR/train.json \
--validation_file $DS_BASE_DIR/val.json \
--output_dir /home/code-base/user_space/saved_models/bart/reddit-1024-tuned \
--per_device_train_batch_size=2 \
--per_device_eval_batch_size=2 \
--overwrite_output_dir \
--predict_with_generate \
--num_train_epochs 15 \
--text_column text \
--summary_column summary \
--learning_rate 3e-5 \
--weight_decay 0.01 \
--adam_beta2 0.98 \
--warmup_steps 5000
```
I have used these hyperparams to match with the performance of https://arxiv.org/pdf/2008.03156v1.pdf
Outputs, after 1 training epoch:
input:
> so this happened when i was in like third grade but it continued to bother me throughout high school. i had actually forgotten about this till i read one of the other posts on here. the original fuck up happened when as i said we were playing football in the backyard. his backyard was surrounded by a metal fence. we had decided to use the top of a hill right before the fence should be plenty of leeway before the fence right? wrong. i was running in for a touchdown had just gotten past my friend for the touchdown when he jumped and tangled up my legs. i ended up sliding down the hill and fell tooth first into his fence. somehow even though 2/3rds of my tooth was in the fence i managed to avoid all nerves and felt no pain. i came up laughing so hard i was crying which i think made it worse because my friend goes dude your tooth is missing. which of course made me laugh even harder. his mom hears the commotion and comes out sees my missing tooth and me crying and starts freaking out. she partially blamed herself because she's the one that sent us out because before we were just inside playing video games. my dad comes to pick me up she apologizes profusely and i still didn't think it was a big deal. this was on a saturday so we eventually get the dentist to come in on sunday, that place was awesome, to fix the tooth. since i'm so young they only put a temporary cap on. now i also played hockey, soccer and later lacrosse. of course the temporary cap didn't last all that long and came off. this happened several times and there were hockey games i'd start with the cap on lose it halfway through and would confuse everyone. i always had fun with this but it was getting old, and expensive, so eventually the dentist put on a permanent cap. haven't had a problem since. if you guys want i'll see if i can find the young picture of me without the tooth. edit: found it
fine-tuned bart prediction:
> tried to impress a girl, ended up getting kicked out of the house by her dad and her mom for being late to a party.
input:
> hi reddit. typical disclaimer, this didn't actually happen today. it happened a couple months ago, but it's still impacting me today. my kids are typical kids, they don't pick up their stuff and they get scolded for it. i was getting pretty sick of seeing their pok\u00e9mon cards lying all over the place because to me it looked like all of the money that came out of my pocket getting slowly turned into trash. my wife on the other hand went crazy because of the mess. one night it all came to a head. after weeks of ineffectually threatening to take the stupid cards away if they left them all over the floor, and my wife demanding that they clean the room before bedtime, she lost it when going in to tuck them in. i got tired of hearing it, so i went in, saw all of the expensive pok\u00e9mon cards strewn about and lost it too. i immediately started grabbing up all the cards and piling them into boxes then left the room with both arms full. i went stomping angrily through the living room to put them away in the front bedroom that i use for storage. that's when the f u happened. earlier that evening, my older child had noticed my younger child smearing chapstick all over a section of wood laminate flooring...
fine-tuned bart prediction:
> tried to impress a girl, ended up getting kicked out of the house by her dad and her mom. i'm a dumbass.
## Environment info
<!-- You can run the command `transformers-cli env` and copy-and-paste its output below.
Don't forget to fill out the missing fields in that output! -->
- `transformers` version: 4.7.0 dev0
- Platform: Linux Ubuntu 18.04
- Python version: 3.8
- PyTorch version (GPU?): 1.6.0
- Tensorflow version (GPU?): --
- Using GPU in script?: Yes
- Using distributed or parallel set-up in script?: Yes
@patrickvonplaten, @patil-suraj, @sgugger
## Information
Model I am using (Bert, XLNet ...): BART
The problem arises when using:
* [ x] the official example scripts: (give details below)
* [ ] my own modified scripts: (give details below)
The tasks I am working on is:
* [ ] an official GLUE/SQUaD task: (give the name)
* [ x] my own task or dataset: (give details below)
reddit_tifu dataset
## To reproduce
Steps to reproduce the behavior:
1. Running the above script which is taken from the official example
2. After a few training steps, the model learns to predict a specific fixed output for each given input text.
## Expected behavior
After fine-tuning for a few steps/epochs, I expect the model learn to generate at least different outputs for varying input texts.
@patrickvonplaten @patil-suraj | 06-17-2021 23:06:48 | 06-17-2021 23:06:48 | Hi,
following up on this @patrickvonplaten and @patil-suraj <|||||>Hey @sajastu,
It's pretty difficult for us to debug the script - from a first look, the hyperparameter settings look good to me.
An effective batch size of 8 (4 * 2) seems rather small to me, but you can see from your loss curves whether 8 is enough I guess.
Also note that the x-sum dataset has some rather special distribution which is not really the same as reddit data IMO. X-sum is extreme summarization and has very dense sentences as summaries. Not sure if this works well with reddit.<|||||>He @patrickvonplaten,
Thanks for your response!
The problem that I'm facing is that: when I'm running the generation phase of `facebook/bart-large-xsum` (i.e., without fine-tuning), I'm getting comparably high scores (22.43/ 7.21 / 17.65); however and interestingly, when I finetune it for a few training steps (let's say 10 training steps), and then run the fine-tuned model on the same test set, the scores get much much lower (15.32 / 2.35 / 9.78). This in fact doesn't make sense to me. Theoretically, I expect the scores to stay near to the main model, if not surpassing it, especially when it has been trained for a very few steps...
Do you have any thoughts on this? is this behaviour expectable?
Also, do you think that the model is overfitting, or get stuck in a local minimum that it's producing the same one output regardless of the input that it gets?<|||||>I struggle at the same point - the output of the generate-method in a fine-tuned BART seems to be independent of the input.
Interestingly, this holds only for the generate method. If I call the fine-tuned model directly, as with
`tokenizer.batch_decode(torch.argmax(model(input_ids = input_ids)[0], axis=-1))`
the output is perfectly related to the input, hence, it differs from input to input. Therefore, I assume there is a bug in the BART.generate()-method, or to be more precise with my assumption, in the specific `modeling_tf_bart.prepare_inputs_for_generation()`. I tried to verify my assumption ( I guess fine-tuning freezes somehow the past-/-cache-value which disconnects the output from the input), but I don't find the point which triggers this special generate-method-behavouir.<|||||>Hi @phhei,
I think the code is **probably** correct. Or if any flaw, it must exist in the tokenization module, since I'm not getting this "fixed" output on other datasets that I've been using to fine-tune BART. For my special case here, I changed the dataset (i.e., reddit_tifu), ran the same code, and finally able to get it working.
@patrickvonplaten might be of some help here.<|||||>Hi @sajastu,
thanks for your reply. However, if the tokenization-module would cause this behavior, then ` tokenizer.batch_decode(torch.argmax(model(input_ids = input_ids)[0], axis=-1))` (in which input_ids is generated by the tokenizer.encode-method - the same variable I use for the BART.generate(input_ids)-method) would also output always the same. I already investigated the raw tensor output of both approaches, and there is the same: the generate(input_ids)-method always produces the same tensor, `torch.argmax(model(input_ids = input_ids)[0], axis=-1)` depended on the input_ids.
I'm asking myself why changing the dataset (without anything else) would solve this issue. In my case, I have a non-huggingface-dataset, preprocessed by tokenizer-calls, so a bug in a huggingface-dataset is therefore not the point, too.<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored. |
transformers | 12,236 | closed | [Flax] Add FlaxMBart | # What does this PR do?
This PR adds flax implementation of MBart.
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/master/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/master/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
@patrickvonplaten @patil-suraj | 06-17-2021 21:55:07 | 06-17-2021 21:55:07 | @patil-suraj Thank you a lot for your suggestions. I fixed the order of attention and normalization layers and some other minor bugs. Also added some additional copy statements.
I also changed the `shift_tokens_right` method as this one looks to be different for the MBart models as they don't have a single `decoder_start_token_id` in contrast to other Bart-like models. => This difference of having no `decoder_start_token_id`, however, currently leads to some issues within the `generate` method. (I'll try to have a look what can be done here)
<|||||>@stancld I pushed a couple of commits to add the `layer_norm` in encoder and decoder. Now, all slow tests are passing.
@patrickvonplaten could you please take a final look? |
transformers | 12,235 | closed | can predict_with_generate (do_eval) work with sharded_ddp fairscale in 4.6.1+? | In 4.5.0, sharded_ddp won't work with predict_with_generate in seq2seq or clm model training during the eval step. Wonder whether it can work in the latest version. | 06-17-2021 20:24:35 | 06-17-2021 20:24:35 | Can I ask whether we have any progress on the model prediction (text generation) during the training with fairscale? Thanks/ cc @stas00 <|||||>most likely you'd want to ask @sgugger as I he did the fairscale integration.
you can ask me about the deepspeed integration if you try that instead.<|||||>`sharded_ddp` does not work for evaluation, as is mentioned in the documentation. I have mentioned that on the fairscale repository but did not get any update for the authors (same for the blocking aprt of Zero offload via fairscale) so I suggest you use DeepSpeed instead, where we have a much better support from the team at Microsoft.<|||||>and if fairscale solves the problem on their side and the work resumes in this direction, the key to making generate work might have to include the enabling `synced_gpus` here for fairscale (for zero3-like fairscale features that is):
https://github.com/huggingface/transformers/blob/fb65f65ea6175036f0cc8318145853e9c833f914/src/transformers/trainer_seq2seq.py#L164<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored. |
transformers | 12,234 | closed | Reconstructing Tokens from Bert Embedding? | Sorry if this was posted before but I couldn't it after a few searches.
My goal is to take a sentence, run it through BERT, perturb the contextualized embeddings from the output of BERT, and reconstruct the sentence text.
I'm currently using 'bert-base-uncased' as my tokenizer and model and a perturbed output torch tensor with each token embedding size 768. How do I reconstruct the sentence text?
Thanks! | 06-17-2021 19:08:33 | 06-17-2021 19:08:33 | Hello, thanks for opening an issue! We try to keep the github issues for bugs/feature requests.
Could you ask your question on the [forum](https://discuss.huggingface.co) instead?
Thanks! |
transformers | 12,233 | closed | Add FlaxBigBird QuestionAnswering script | # What does this PR do?
This PR will add flax-bigbird QA script on `natural-questions` dataset.
## Who can review?
Anyone in the community is free to review the PR once the tests have passed.
@patrickvonplaten | 06-17-2021 18:45:13 | 06-17-2021 18:45:13 | @patrickvonplaten, this PR is ready for review & merge (tested all the code after porting here).
Failing test is unrelated to this PR.<|||||>Awesome merging for the sprint - we'll fix bugs on the go as it's under `research_projects` :-) |
transformers | 12,232 | closed | RobertaForMaskedLM.from_pretrained throwing some weights not initialized error when loading same model type | ## Environment info
<!-- You can run the command `transformers-cli env` and copy-and-paste its output below.
Don't forget to fill out the missing fields in that output! -->
- `transformers` version: 4.7.0
- Platform: Linux-5.4.104+-x86_64-with-Ubuntu-18.04-bionic
- Python version: 3.7.10
- PyTorch version (GPU?): 1.8.1+cu101 (False)
- Tensorflow version (GPU?): 2.5.0 (False)
- Using GPU in script?: No
- Using distributed or parallel set-up in script?: No
## Information
I am using a pretrained RobertaForMaskedLM . When I try to load the model I get the following error:
Some weights of RobertaForMaskedLM were not initialized from the model checkpoint at mldmm/GlassBERTa and are newly initialized: ['lm_head.layer_norm.bias', 'lm_head.bias', 'lm_head.dense.bias', 'lm_head.dense.weight', 'lm_head.layer_norm.weight', 'lm_head.decoder.weight']
You should probably TRAIN this model on a down-stream task to be able to use it for predictions and inference.
The config of the model is as follows:
```
{
"architectures": [
"RobertaForMaskedLM"
],
"attention_probs_dropout_prob": 0.1,
"bos_token_id": 0,
"eos_token_id": 2,
"gradient_checkpointing": false,
"hidden_act": "gelu",
"hidden_dim": 96,
"hidden_dropout_prob": 0.1,
"hidden_size": 768,
"initializer_range": 0.02,
"intermediate_size": 3072,
"layer_norm_eps": 1e-12,
"max_position_embeddings": 512,
"model_type": "roberta",
"n_heads": 3,
"n_layers": 3,
"num_attention_heads": 12,
"num_hidden_layers": 12,
"pad_token_id": 1,
"position_embedding_type": "absolute",
"transformers_version": "4.6.1",
"type_vocab_size": 2,
"use_cache": true,
"vocab_size": 541
}
```
## To reproduce
Steps to reproduce the behavior:
```
from transformers import AutoModelForMaskedLM,AutoTokenizer
tok = AutoTokenizer.from_pretrained('mldmm/GlassBERTa')
mod = AutoModelForMaskedLM.from_pretrained('mldmm/GlassBERTa')
```
or
```
from transformers import RobertaForMaskedLM,AutoTokenizer
tok = AutoTokenizer.from_pretrained('mldmm/GlassBERTa')
mod = RobertaForMaskedLM.from_pretrained('mldmm/GlassBERTa')
```
Same goes for many other pre-trained models hosted ('beomi/kcbert-base','hfl/chinese-bert-wwm-ext')
## Expected behavior
Since I'm loading for the same architecture I am expecting a clean import without any errors, might lead to random output due to those newly initialized layers and can mess the output in fill-mask pipeline. | 06-17-2021 18:20:23 | 06-17-2021 18:20:23 | |
transformers | 12,231 | closed | Batch inference runtime slows down for inputs with different length sentences | ## Environment info
<!-- You can run the command `transformers-cli env` and copy-and-paste its output below.
Don't forget to fill out the missing fields in that output! -->
- `transformers` version: 4.6.1
- Platform: Ubuntu 18.04.5 LTS
- Python version: 3.6.9
- PyTorch version (GPU?): 1.8.1
- Tensorflow version (GPU?):
- Using GPU in script?: yes
- Using distributed or parallel set-up in script?: no
### Who can help
@LysandreJik @patrickvonplaten (sorry if wrong tags, this is for Luke model, but it is not listed)
## Information
Model I am using (Bert, XLNet ...): LukeForEntityPairClassification
The problem arises when using:
* [ ] the official example scripts: (give details below)
* [x] my own modified scripts: (give details below)
The tasks I am working on is:
* [ ] an official GLUE/SQUaD task: (give the name)
* [x] my own task or dataset: (give details below)
## To reproduce
Steps to reproduce the behavior:
1. generate batched inputs for the LukeTokenizer with **identical** sentences in each batch (i.e. no padding required)
2. tokenize each batch by passing the batch to the tokenizer
3. run inference on each batch on GPU and notice that runtime is the same for each batch
4. generate batched inputs for the LukeTokenizer with sentences of **different length** in each batch (i.e. padding is required)
5. tokenize each batch by passing the batch to the tokenizer with `padding=True`
6. run inference on each batch on GPU and notice that runtime increases substantially for subsequent batches after first batch
<!-- If you have code snippets, error messages, stack traces please provide them here as well.
Important! Use code tags to correctly format your code. See https://help.github.com/en/github/writing-on-github/creating-and-highlighting-code-blocks#syntax-highlighting
Do not use screenshots, as they are hard to read and (more importantly) don't allow others to copy-and-paste your code.-->
```python
import torch
from transformers import LukeForEntityPairClassification, LukeTokenizer
import time
text1 = "Beyoncé lives in Los Angeles."
entity_spans1 = [(0, 7), (17, 28)]
text2 = "Kevin Love has urged the Cleveland Cavaliers to fight to regain their form following LeBron James' move to the Los Angeles Lakers."
entity_spans2 = [(85, 97), (111, 129)]
# experiment 1 - sentence length is identical across the full batch
text = [[text1] * 10, [text2] * 10]
entity_spans = [[entity_spans1] * 10, [entity_spans2] * 10]
model = LukeForEntityPairClassification.from_pretrained("studio-ousia/luke-large-finetuned-tacred")
tokenizer = LukeTokenizer.from_pretrained("studio-ousia/luke-large-finetuned-tacred")
tokenized_inputs = []
for text_batch, span_batch in zip(text, entity_spans):
    inputs = tokenizer(text_batch, entity_spans=span_batch, return_tensors="pt", padding=True, truncation=True)
    tokenized_inputs.append(inputs)

device = torch.device('cuda')
model.to(device)
model.eval()

for i, batch in enumerate(tokenized_inputs):
    with torch.no_grad():
        start = time.time()
        batch.to(device)
        outputs = model(**batch)
        print(f"runtime batch {i}: ", time.time() - start)
# experiment 2 - sentence length alternates in length across the batch
text = [[text1, text2] * 10] * 2
entity_spans = [[entity_spans1, entity_spans2] * 10] * 2
model = LukeForEntityPairClassification.from_pretrained("studio-ousia/luke-large-finetuned-tacred")
tokenizer = LukeTokenizer.from_pretrained("studio-ousia/luke-large-finetuned-tacred")
tokenized_inputs = []
for text_batch, span_batch in zip(text, entity_spans):
    inputs = tokenizer(text_batch, entity_spans=span_batch, return_tensors="pt", padding=True, truncation=True)
    tokenized_inputs.append(inputs)

device = torch.device('cuda')
model.to(device)
model.eval()

for i, batch in enumerate(tokenized_inputs):
    with torch.no_grad():
        start = time.time()
        batch.to(device)
        outputs = model(**batch)
        print(f"runtime batch {i}: ", time.time() - start)

# results - Tesla T4
# runtime batch 0: 0.028860092163085938
# runtime batch 1: 0.03273129463195801
# runtime batch 0: 0.028328895568847656
# runtime batch 1: 0.09934639930725098
```
## Expected behavior
I expect the runtime to be the same for an identical batch of inputs
<!-- A clear and concise description of what you would expect to happen. -->
| 06-17-2021 17:25:08 | 06-17-2021 17:25:08 | Pinging @NielsRogge as he might have an idea of what's going on with LUKE<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored.<|||||>This has not been resolved as far as I know. Please do not close this issue<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored.<|||||>There seems to be something odd happening indeed. Will investigate this.
Also cc'ing @ikuyamada.<|||||>This may be related to the asynchronous execution on GPU. When adding `torch.cuda.synchronize(device)` after calling `model(**batch)`, the runtime was approximately consistent across batches on my local machine.
```python
for i, batch in enumerate(tokenized_inputs):
    with torch.no_grad():
        start = time.time()
        batch.to(device)
        outputs = model(**batch)
        torch.cuda.synchronize(device)
        print(f"runtime batch {i}: ", time.time() - start)
```<|||||>thanks for the suggestion @ikuyamada ! I'll give it a shot<|||||>I tested with `torch.cuda.synchronize(device)` this isn't really the solution I want. I agree that it did seem to resolve the runtime issue, but it increased all runtimes to the longest runtime (now in the second example batch 0 and batch 1 both have execution time of ~0.1 s). This is the opposite of what I would hope to accomplish. The runtime of executing the first batch without `synchronize` is `0.03 s` so I am still not understanding why calling the exact same data a second time would result in a longer runtime. If this is because of async execution, can you please explain to me why?<|||||>@alexdauenhauer If I understand torch correctly, torch returns the result without completing the computation. When you use the result (e.g., `print(outputs)`), the *synchronization* happens and the result is computed. Therefore, the following code should give similar results to the code above.
```python
for i, batch in enumerate(tokenized_inputs):
    with torch.no_grad():
        start = time.time()
        batch.to(device)
        outputs = model(**batch)
        print(outputs[0][0])
        print(f"runtime batch {i}: ", time.time() - start)
```<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored. |
transformers | 12,230 | closed | Flax summarization script | # What does this PR do?
Adds Flax summarization example.
Beam search `generate` works like a charm on TPU, really fast! This makes it very easy and fast to use the `generate` in the eval loop. | 06-17-2021 16:45:10 | 06-17-2021 16:45:10 | |
transformers | 12,229 | closed | Add link to the course | # What does this PR do?
This PR adds a link to the Hugging Face course on the first page of the documentation. | 06-17-2021 15:12:05 | 06-17-2021 15:12:05 | Thank you 🤗 & @sgugger for doing this.
Just so if anyone is looking for the direct link to videos: https://huggingface.co/course/ |
transformers | 12,228 | closed | [Flax] FlaxAutoModelForSeq2SeqLM | # What does this PR do?
This PR adds `FlaxAutoModelForSeq2SeqLM`. | 06-17-2021 14:56:36 | 06-17-2021 14:56:36 | |
transformers | 12,227 | closed | [Blenderbot] Fix docs | # What does this PR do?
Fixes the docs.
## Before submitting
- [x] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/master/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/master/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
transformers | 12,226 | closed | update desc for map in all examples | # What does this PR do?
Closes #11797. I've added the remaining the `desc` for `summarization`, `token-classification,` `translation`, `language-modeling ` and updated `requirements.txt` as well. I wasn't sure what to add in `desc` at some places so I've added `# not sure if it's right` comment there. Please let me know if that description looks good or should I replace it with something else?
## Who can review?
@stas00 @sgugger | 06-17-2021 11:57:19 | 06-17-2021 11:57:19 |
Subsets and Splits
No community queries yet
The top public SQL queries from the community will appear here once available.