repo | number | state | title | body | created_at | closed_at | comments
---|---|---|---|---|---|---|---|
transformers | 10,414 | closed | Add Ray Tune hyperparameter search integration test | # What does this PR do?
Currently, only Optuna HP search is tested in integration tests. This PR duplicates and adjusts the test for the Ray backend.
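For reference, the kind of call the new test exercises looks roughly like this (a minimal sketch only; `model_init`, `train_dataset` and `eval_dataset` are assumed to be set up as in the existing Optuna test, and the search space below is purely illustrative):
```python
# Sketch: run the Trainer's hyperparameter search with the Ray Tune backend.
from ray import tune
from transformers import Trainer, TrainingArguments

trainer = Trainer(
    model_init=model_init,          # assumed helper that re-instantiates the model for each trial
    args=TrainingArguments(output_dir="ray_hp_search", num_train_epochs=1),
    train_dataset=train_dataset,    # assumed small train/eval sets, as in the Optuna test
    eval_dataset=eval_dataset,
)

best_run = trainer.hyperparameter_search(
    backend="ray",                  # the existing test uses backend="optuna"
    n_trials=2,
    hp_space=lambda _: {"learning_rate": tune.loguniform(1e-5, 1e-3)},
)
```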
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [x] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/master/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/master/docs#writing-source-documentation).
- [x] Did you write any new necessary tests?
## Who can review?
@amogkam
@sgugger
| 02-26-2021 11:38:22 | 02-26-2021 11:38:22 | Oh, that makes sense. Does this happen in the repo? Is this something I can help with, or do you have to configure it?<|||||>I'm currently working on these scheduled tests, I'll enable these while I do so. Thanks! |
transformers | 10,413 | closed | Update run_mlm.py | # What does this PR do?
<!--
Congratulations! You've made it this far! You're not quite done yet though.
Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution.
Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change.
Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost.
-->
<!-- Remove if not applicable -->
Fixes # (issue)
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/master/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/master/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors which may be interested in your PR.
<!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @
If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**.
Please tag fewer than 3 people.
Models:
- albert, bert, xlm: @LysandreJik
- blenderbot, bart, marian, pegasus, encoderdecoder, t5: @patrickvonplaten, @patil-suraj
- longformer, reformer, transfoxl, xlnet: @patrickvonplaten
- fsmt: @stas00
- funnel: @sgugger
- gpt2: @patrickvonplaten, @LysandreJik
- rag: @patrickvonplaten, @lhoestq
- tensorflow: @jplu
Library:
- benchmarks: @patrickvonplaten
- deepspeed: @stas00
- ray/raytune: @richardliaw, @amogkam
- text generation: @patrickvonplaten
- tokenizers: @n1t0, @LysandreJik
- trainer: @sgugger
- pipelines: @LysandreJik
Documentation: @sgugger
HF projects:
- nlp datasets: [different repo](https://github.com/huggingface/nlp)
- rust tokenizers: [different repo](https://github.com/huggingface/tokenizers)
Examples:
- maintained examples (not research project or legacy): @sgugger, @patil-suraj
- research_projects/bert-loses-patience: @JetRunner
- research_projects/distillation: @VictorSanh
-->
| 02-26-2021 10:56:49 | 02-26-2021 10:56:49 | This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored. |
transformers | 10,412 | closed | Trainer: Make `best_model_checkpoint` path in `trainer_state.json` relative to `args.output_dir` | # 🚀 Feature request
An enhancement of `best_model_checkpoint` for more robustness.
<!-- A clear and concise description of the feature proposal.
Please provide a link to the paper and code in case they exist. -->
Currently `Trainer.state.best_model_checkpoint` holds absolute path to the best checkpoint when `Trainer.args.load_best_model_at_end=True` is passed.
It would be useful if the `Trainer.state.best_model_checkpoint` value were stored relative to `Trainer.args.output_dir`.
## Motivation
**Absolute paths hinder portability** of the trained models.
For example, if a user wants to continue a _previous training_ run using the `resume_from_checkpoint` argument of `Trainer.train`, not having the `output_dir` exactly the same as in the _previous training_ (e.g. because any directory in the path was renamed) can break the `load_best_model_at_end` functionality, since the previously stored absolute paths are no longer valid.
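A minimal sketch of the proposed behaviour (the directory names below are made up for illustration):
```python
import os

# Illustrative only: `output_dir` and `checkpoint_dir` stand in for
# `Trainer.args.output_dir` and the checkpoint folder the Trainer just saved.
output_dir = "runs/exp1"
checkpoint_dir = "runs/exp1/checkpoint-500"

# Proposed: store the best checkpoint relative to output_dir ...
best_model_checkpoint = os.path.relpath(checkpoint_dir, output_dir)   # -> "checkpoint-500"

# ... and resolve it against the *current* output_dir when loading, so renaming
# or moving the training folder no longer breaks `load_best_model_at_end`.
resolved = os.path.join(output_dir, best_model_checkpoint)
```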
<!-- Please outline the motivation for the proposal. Is your feature request
related to a problem? e.g., I'm always frustrated when [...]. If this is related
to another GitHub issue, please link here too. -->
## Your contribution
I can raise a PR if this is a useful change to have!
<!-- Is there any way that you could help, e.g. by submitting a PR?
Make sure to read the CONTRIBUTING.MD readme:
https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md -->
| 02-26-2021 10:36:41 | 02-26-2021 10:36:41 | I think this would be a very welcome change indeed, so please work on that if you have time and if it's something you'd like to do!<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored. |
transformers | 10,411 | closed | Problem using add_special_tokens | ## Environment info
<!-- You can run the command `transformers-cli env` and copy-and-paste its output below.
Don't forget to fill out the missing fields in that output! -->
- `transformers` version:3.4.0
- Platform:windows
- Python version:3.7.0
- PyTorch version (GPU?):1.7.0
- Tensorflow version (GPU?):2.4.1
- Using GPU in script?:
- Using distributed or parallel set-up in script?:
### Who can help
<!-- Your issue will be replied to more quickly if you can figure out the right person to tag with @
If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**.
Please tag fewer than 3 people.
Models:
- albert, bert, xlm: @LysandreJik
- blenderbot, bart, marian, pegasus, encoderdecoder, t5: @patrickvonplaten, @patil-suraj
- longformer, reformer, transfoxl, xlnet: @patrickvonplaten
- fsmt: @stas00
- funnel: @sgugger
- gpt2: @patrickvonplaten, @LysandreJik
- rag: @patrickvonplaten, @lhoestq
- tensorflow: @jplu
Library:
- benchmarks: @patrickvonplaten
- deepspeed: @stas00
- ray/raytune: @richardliaw, @amogkam
- text generation: @patrickvonplaten
- tokenizers: @n1t0, @LysandreJik
- trainer: @sgugger
- pipelines: @LysandreJik
Documentation: @sgugger
HF projects:
- nlp datasets: [different repo](https://github.com/huggingface/nlp)
- rust tokenizers: [different repo](https://github.com/huggingface/tokenizers)
Examples:
- maintained examples (not research project or legacy): @sgugger, @patil-suraj
- research_projects/bert-loses-patience: @JetRunner
- research_projects/distillation: @VictorSanh
-->
@n1t0, @LysandreJik
## Information
Model I am using (Bert, XLNet ...):
The problem arises when using:
* [ ] the official example scripts: (give details below)
* [ ] my own modified scripts: (give details below)
The tasks I am working on is:
* [ ] an official GLUE/SQUaD task: (give the name)
* [ ] my own task or dataset: (give details below)
## To reproduce
Hi, I want to add some special tokens to the BERT tokenizer; these tokens are already part of the vocabulary.

So I use `add_special_tokens`:
```
from transformers import BertTokenizer
tokenizer = BertTokenizer.from_pretrained('t5')
tokenizer.add_special_tokens({'extra_id_0':'[extra_id_0]'},)
```
But something went wrong
```
Traceback (most recent call last):
File "E:/github/Update_model/update_t5.py", line 10, in <module>
tokenizer.add_special_tokens({'extra_id_0':'[extra_id_0]'},)
File "D:\Anaconda\envs\hc\lib\site-packages\transformers\tokenization_utils_base.py", line 948, in add_special_tokens
assert key in self.SPECIAL_TOKENS_ATTRIBUTES, f"Key {key} is not a special token"
AssertionError: Key extra_id_0 is not a special token
```
And I have another question.
I want to show the special tokens in the Hosted Inference API when generating text.
But I don't know how to do this.
Thanks!
<!-- If you have code snippets, error messages, stack traces please provide them here as well.
Important! Use code tags to correctly format your code. See https://help.github.com/en/github/writing-on-github/creating-and-highlighting-code-blocks#syntax-highlighting
Do not use screenshots, as they are hard to read and (more importantly) don't allow others to copy-and-paste your code.-->
## Expected behavior
<!-- A clear and concise description of what you would expect to happen. -->
| 02-26-2021 10:28:51 | 02-26-2021 10:28:51 | This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored. |
transformers | 10,410 | closed | [WIP] RAG end-to-end retriever training (with ray workers) | # What does this PR do?
As mentioned in this [issue](https://github.com/huggingface/transformers/issues/9646), this PR adds the ability to fine-tune the Retriever in the original RAG implementation.
This PR first updates the ctx_encoder and then initializes the index using RAY workers.
@lhoestq
@amogkam | 02-26-2021 09:41:41 | 02-26-2021 09:41:41 | @lhoestq @patrickvonplaten
I have already started doing the above changes you have mentioned. Apart from that, I changed the following elements of the codebase.
1. I used a dedicated RAY worker to compute embeddings for the dataset with an updated ctx encoder in this version. However, the process of add_faiss_index gets very slow when running inside a ray worker (I think it is due to the need for multiprocessing threads). I tried to increase the number of CPU cores, but it is still very slow. The computing of embeddings is an embarrassingly parallel task, where we can share the dataset between GPUs and compute them very fast. Nevertheless, it is hard to work with RAY when it comes to multiple GPUs. So I utilize the DDP process to compute embeddings using N number of dedicated GPUs that only do the embeddings calculation task.
2. Then I did a minor thing. Pytorch lightning has removed the DDP accelerators in their latest installation. Nevertheless, we can easily use the **on_sanity_check_start** callback to initialize the index when using RAY. I feel it is a lot cleaner.
__________________________________________________________________________________________________
As per my experiments, at the moment end-to-end training process is stable. I would love to double-check the following parts with your help.
1. Re-loading of the updated index for the workers.
2. Re-initialization of the retrieval index.
Apart from that, I see **add_faiss_index** can take hours when the dataset consists of more than a million passages. My custom dataset has 8 million passages. Is this normal, or should we be able to improve it? If we can improve its speed, this whole process becomes very engineering friendly.
Please let me know your ideas. I will quickly do the updated PR.
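For context, the two steps under discussion map roughly onto the `datasets` API as below (a sketch only; `passages`, `ctx_tokenizer` and `ctx_encoder` are assumed to be the passage dataset, the DPR context tokenizer and the updated context encoder):
```python
import torch

def embed(batch):
    inputs = ctx_tokenizer(batch["title"], batch["text"], truncation=True,
                           padding="longest", return_tensors="pt")
    with torch.no_grad():
        return {"embeddings": ctx_encoder(**inputs).pooler_output.cpu().numpy()}

# Recompute passage embeddings with the updated ctx_encoder ...
passages = passages.map(embed, batched=True, batch_size=16)

# ... then rebuild the FAISS index the retriever queries; this is the step that
# becomes slow once the dataset grows to millions of passages.
passages.add_faiss_index(column="embeddings")
```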
<|||||>For the records, we are discussing the indexing speed difference here: https://github.com/huggingface/datasets/issues/2046<|||||>@lhoestq
I and @elliott-wen updated the codebase. Now the embedding update happens with a parallel process and we use the stale gradients to update the entire model (pretty-much similar to REALM). <|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored.<|||||>@shamanez do you need any help to get the test working?<|||||>Hi, @patrickvonplaten I did change the code and got a stable end-to-end trainable RAG. I also did the changes you have mentioned. But I had to update the code with new pytorch lightning, especially they do not use plugging now. So is that ok to upload the code with a new end-to-end fine-tune script and lightning base?
@patrickvonplaten @lhoestq
I also added all the details to a blog and I am happy to share it with you two. It includes all the changes I did in the RAG. (I also included your names since you guys helped me a lot :) )<|||||>Hi, that's good news !
> is that ok to upload the code with a new end-to-end fine-tune script and lightning base?
I think it could be a good idea to make the code compatible with the latest pytorch-lightning yes :)
Especially since many things we used in lightning_base don't work anymore, and that we can now hope pytorch-lightning to not do such radical changes again.
pinging @patrickvonplaten to confirm it's ok<|||||>@lhoestq Thanks.
BTW I read this new paper named [Retrieval Augmentation Reduces Hallucination in Conversation](https://arxiv.org/abs/2104.07567), which kind of highlights the importance of RAG-like models in language modeling. So I do believe end-to-end fine-tuning can allow users to experiment with different components of the RAG architecture.
Since you guys helped me a lot in this process, is it okay to include your names in the blog? Here's a link to the unpublished draft blog post.
https://medium.com/@shamanesiriwardhana/end-to-end-rag-fine-tuning-with-huggingface-pytorch-lightning-and-ray-4b4385322552
Please let me know your thoughts.
<|||||>Good job with this blog post draft ! Sure it's fine to mention us, thanks<|||||>Thanks.
On Sat, May 8, 2021, 04:42 Quentin Lhoest ***@***.***> wrote:
> Good job with this blog post draft ! Sure it's fine to mention us, thanks
>
> —
> You are receiving this because you were mentioned.
> Reply to this email directly, view it on GitHub
> <https://github.com/huggingface/transformers/pull/10410#issuecomment-834610009>,
> or unsubscribe
> <https://github.com/notifications/unsubscribe-auth/AEA4FGVY24QYGUQ6SR5T473TMQJ67ANCNFSM4YID3MKQ>
> .
>
<|||||>> Hi, that's good news !
>
> > is that ok to upload the code with a new end-to-end fine-tune script and lightning base?
>
> I think it could be a good idea to make the code compatible with the latest pytorch-lightning yes :)
> Especially since many things we used in lightning_base don't work anymore, and that we can now hope pytorch-lightning to not do such radical changes again.
> pinging @patrickvonplaten to confirm it's ok
So is it ok if we create a folder in research_projects named RAG-end-to-end-Retriever-training?
<|||||>closing this with a new pull request.
https://github.com/huggingface/transformers/pull/11655<|||||>@lhoestq @patrickvonplaten could you please let me know if there is anything to change in the recent pull request. <|||||>Hey @shamanez,
Sorry for being so inactive here! I reviewed your newly opened PR :-)<|||||>Hey, it is totally fine. Thanks million times :). |
transformers | 10,409 | closed | [ci, flax] non-existing models are unlikely to pass tests | 02-26-2021 08:46:08 | 02-26-2021 08:46:08 | ||
transformers | 10,408 | closed | Question about the `decoder_input_ids` in `LEDForConditionalGeneration` forward method | https://github.com/huggingface/transformers/blob/17b6e0d474b797cdddf5225b0f51bf0e928091b9/src/transformers/models/led/modeling_led.py#L2337
Hi,
I have a question about the `LEDForConditionalGeneration` forward args.
The `decoder_input_ids` has a comment that `decoder_input_ids (torch.LongTensor of shape (batch_size, target_sequence_length), optional) – Provide for translation and summarization training. By default, the model will create this tensor by shifting the input_ids to the right, following the paper.`.
From the forward method in `LEDForConditionalGeneration`, I can see that when not assigning the `decoder_input_ids` in the forward method of the `LEDForConditionalGeneration` object, the `decoder_input_ids` will be generated by [shifting the `labels` value one token to the right in the forward method](https://github.com/huggingface/transformers/blob/17b6e0d474b797cdddf5225b0f51bf0e928091b9/src/transformers/models/led/modeling_led.py#L2337).
So my question is: if I want to explicitly pass the `decoder_input_ids` to the forward method, do I need to explicitly shift them by one token, as the [code](https://github.com/huggingface/transformers/blob/17b6e0d474b797cdddf5225b0f51bf0e928091b9/src/transformers/models/led/modeling_led.py#L2337) shows, before the forward pass?
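(For reference, a minimal sketch of the `labels`-only case described above, where the shift happens inside the model; the checkpoint and texts are just placeholders:)
```python
from transformers import LEDForConditionalGeneration, LEDTokenizer

tok = LEDTokenizer.from_pretrained("allenai/led-base-16384")
model = LEDForConditionalGeneration.from_pretrained("allenai/led-base-16384")

batch = tok("a long input document ...", return_tensors="pt")
labels = tok("a short target summary", return_tensors="pt").input_ids

# No decoder_input_ids passed: the model derives them internally by shifting `labels` to the right.
outputs = model(**batch, labels=labels)
```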
| 02-26-2021 07:18:00 | 02-26-2021 07:18:00 | Hey @yww211,
actually there was a bug in the docstring, that I found thanks to you :-) The attached PR corrects this mistake.
To answer your question, you should not shift the `decoder_input_ids` explicitly by one when passing them & in fact you have to pass the `decoder_input_ids` if you want to use the forward method. |
transformers | 10,407 | closed | offline mode for firewalled envs | This PR implements the proposal from https://github.com/huggingface/transformers/issues/10379 to enable transformers to cache everything it needs and then run in the offline mode - e.g. in a firewalled environment.
This PR:
* [x] adds `is_offline_mode()` helper function that returns `True` when env var `TRANSFORMERS_OFFLINE` is set to `1/YES/ON`
* [x] automatically sets `local_files_only=True` in all 3 `from_pretrained()` methods
* [x] handles `ntlk` download dynamically in `run_seq2seq.py`
* [x] adds offline test (thanks to @lhoestq for the idea for mocking no network in the test)
* [x] adds doc
This is to match the recently added `HF_DATASETS_OFFLINE=1` in `datasets` (https://github.com/huggingface/datasets/pull/1976). Tested that both work well together.
So now we can run with the network:
```
python examples/seq2seq/run_seq2seq.py --model_name_or_path t5-small --dataset_name wmt16 --dataset_config ro-en ...
```
and then with the same filesystem w/o the network or w/ a firewalled network:
```
HF_DATASETS_OFFLINE=1 TRANSFORMERS_OFFLINE=1 \
python examples/seq2seq/run_seq2seq.py --model_name_or_path t5-small --dataset_name wmt16 --dataset_config ro-en ...
```
and the latter succeeds since step 1 had all the data pre-fetched and cached.
@sgugger, @LysandreJik | 02-26-2021 02:46:33 | 02-26-2021 02:46:33 | > The constant should be documented in the install page, in the section about caching models I think (https://huggingface.co/transformers/installation.html#caching-models).
Great idea, @sgugger. Please kindly check the doc I added is good when you get a chance.
And also the kind of test I had to add is unorthodox too, so please see if it works for you. The original version couldn't have worked.
Thank you! |
transformers | 10,406 | closed | Ray Tune Integration Bug Fixes | # What does this PR do?
Fixes resource allocation and checkpointing bugs with the Ray Tune `hyperparameter_search` integration.
@sgugger @richardliaw @krfricke | 02-26-2021 01:12:56 | 02-26-2021 01:12:56 | @amogkam There is an `s` missing in line 202 in `src/transformers/integrations.py` in `{kwargs['keep_checkpoint_num']}` which should be `{kwargs['keep_checkpoints_num']}`, which is causing the logger to crash instead of just a warning. Thanks for the fixes btw!
|
transformers | 10,405 | closed | Problem running T5 (configuration) with text classification | ## Environment info
<!-- You can run the command `transformers-cli env` and copy-and-paste its output below.
Don't forget to fill out the missing fields in that output! -->
- `transformers` version: 4.3.2
- Platform: Linux-4.18.0-193.el8.x86_64-x86_64-with-glibc2.10
- Python version: 3.8.3
- PyTorch version (GPU?): 1.5.1+cu101 (True)
- Tensorflow version (GPU?): not installed (NA)
- Using GPU in script?: yes
- Using distributed or parallel set-up in script?: single gpu
### Who can help
Perhaps @patrickvonplaten, @patil-suraj could help?
## Information
Model I am using (Bert, XLNet ...): T5
The problem arises when using:
* [ ] the official example scripts: (give details below)
* [x] my own modified scripts: (give details below)
The tasks I am working on is:
* [ ] an official GLUE/SQUaD task: (give the name)
* [x] my own task or dataset: (give details below)
## To reproduce
I'm trying to run the T5 base model. It seems that I use the correct model path (i.e., t5-base) and it finds and downloads the model, but crashes when it tries to instantiate it. The problem seems to be around the configuration class not being found. This is what I get:
```
File "../../../models/tr-4.3.2/run_puppets.py", line 279, in main
model = AutoModelForSequenceClassification.from_pretrained(
File "/dccstor/redrug_ier/envs/last-tr/lib/python3.8/site-packages/transformers/models/auto/modeling_auto.py", line 1362, in from_pretrained
raise ValueError(
ValueError: Unrecognized configuration class <class 'transformers.models.t5.configuration_t5.T5Config'> for this kind of AutoModel: AutoModelForSequenceClassification.
Model type should be one of ConvBertConfig, LEDConfig, DistilBertConfig, AlbertConfig, CamembertConfig, XLMRobertaConfig, MBartConfig, BartConfig, LongformerConfig, RobertaConfig, SqueezeBertConfig, LayoutLMConfig, BertConfig, XLNetConfig, MobileBertConfig, FlaubertConfig, XLMConfig, ElectraConfig, FunnelConfig, DebertaConfig, GPT2Config, OpenAIGPTConfig, ReformerConfig, CTRLConfig, TransfoXLConfig, MPNetConfig, TapasConfig.
```
I dug a bit and I may have a hunch why this happens. The config file is there: https://github.com/huggingface/transformers/blob/master/src/transformers/models/t5/configuration_t5.py#L32
but it's not recorded here: https://github.com/huggingface/transformers/blob/master/src/transformers/models/auto/modeling_auto.py#L514
So the check here fails: https://github.com/huggingface/transformers/blob/master/src/transformers/models/auto/modeling_auto.py#L1389
And the ValueError is raised.
I hope this is it. It looks like an easy fix :) Thanks!
PS: I'm running the same scripts/files with other models without problems. This seems to be something specific to T5.
| 02-25-2021 22:14:47 | 02-25-2021 22:14:47 | Hey,
even though T5 can be used very well for text classification, it remains a text-to-text-only model. So you can only load the model via
```python
from transformers import AutoModelForConditionalGeneration
model = AutoModelForConditionalGeneration.from_pretrained("t5-small")
```<|||||>Got it, thanks!<|||||>@patrickvonplaten Hi does `from transformers import AutoModelForConditionalGeneration` still work? Returns me an error when i try to use it<|||||>Should work yes :-)<|||||>@patrickvonplaten I just upgraded transformers to the latest version (4.16) and when i run this:
```python
from transformers import AutoModelForConditionalGeneration
```
I get this error:
```
---------------------------------------------------------------------------
ImportError Traceback (most recent call last)
/tmp/ipykernel_20/1334627133.py in <module>
----> 1 from transformers import AutoModelForConditionalGeneration
ImportError: cannot import name 'AutoModelForConditionalGeneration' from 'transformers' (/opt/conda/lib/python3.7/site-packages/transformers/__init__.py)
```
If this is supposed to work I can open an issue (let me know who I should tag). See [kaggle notebook example](https://www.kaggle.com/xhlulu/transformers-automodelforconditionalgeneration) |
transformers | 10,404 | closed | Model Hub: Search by model size | # 🚀 Feature request
It would be great if the model cards for models would include the model size (i.e., the number of parameters) and then the model hub will allow searching for models by size.
## Motivation
Depending on the task/problem/context, smaller or larger models are more beneficial. It's hard to keep up with all the models out there. For example, if I'm interested in distilled/compressed/smaller BERTs, I may be able to remember DistilBERT, MobileBERT but maybe forget about SqueezeBERT, TinyBERT, etc. A search by size would make all these smaller models visible.
| 02-25-2021 21:18:15 | 02-25-2021 21:18:15 | Definitely a good idea<|||||>And since I started talking about model cards... :) I think it would be cool if you guys actually imposed some format. I think the original paper/idea had a format. Now "model cards" stands for "whatever the researcher had time to fill in that day" :) A few fields of interest: model size, training data, NLP tasks, language(s), paper, _maybe_ something about model of inspiration (e.g., TinyBERT is a modification of BERT by...). <|||||>I agree, there should at least be a template in my opinion. I hate to find models on the hub which don't provide any information. Moreover, all model cards look different, there's not really a structure.<|||||>There is a template we link to in the second question of https://huggingface.co/docs (=> https://github.com/huggingface/model_card), though we should make it more built-in/central at some point.<|||||>It would also be nice if the template also included details on tokenisation, what algorithm was used (BPE, Unigram, Word Piece) and the parameters (vocab size etc).<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored. |
transformers | 10,403 | closed | Sagemaker Model Parallel tensoboard writing fix | # What does this PR do?
<!--
Congratulations! You've made it this far! You're not quite done yet though.
Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution.
Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change.
Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost.
-->
Fixes # 10402
https://github.com/huggingface/transformers/issues/10402
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [x] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/master/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/master/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
@LysandreJik @sgugger
| 02-25-2021 21:05:39 | 02-25-2021 21:05:39 | |
transformers | 10,402 | closed | SageMaker Model Parallel: cluttered tensorboard plots | ## Environment info
- `transformers` version: master
- Platform: SageMaker
- Python version: 3.6
- PyTorch version (GPU?): 1.6
- Using GPU in script?: Y
- Using distributed or parallel set-up in script?: Y
### Who can help
Models: All
Library: SageMaker Model parallel
## Information
The problem arises when using:
* [x] the official example scripts: (give details below)
* [ ] my own modified scripts: (give details below)
The tasks I am working on is:
* [x] an official GLUE/SQUaD task: MLM
## Expected behavior
Currently, the SM trainer class inherits the `is_world_process_zero` logic from the main trainer class. In the main trainer class, these values are derived using `self.args.local_rank` or `dist.get_rank()`, which are not unique for SMMP. This causes multiple processes to write tensorboard summaries, which is why there are loops in the tensorboard graphs. `is_world_process_zero` can be implemented in SageMakerTrainer as below, which makes sure that only a single process writes tensorboard summaries.
```python
def is_world_process_zero(self) -> bool:
    """
    Whether or not this process is the global main process (when training in a distributed fashion on several
    machines, this is only going to be :obj:`True` for one process).
    """
    if self.is_model_parallel_enabled:
        return smp.rank() == 0 and smp.local_rank() == 0 and smp.mp_rank() == 0 and smp.dp_rank() == 0
    else:
        return super().is_world_process_zero()
```
| 02-25-2021 20:02:32 | 02-25-2021 20:02:32 | Hello! Thank you for opening an issue and for offering a code sample!
Could you open a PR with your code changes?
Thank you!<|||||>@LysandreJik Please find the PR here: https://github.com/huggingface/transformers/pull/10403/files <|||||>Cool, thanks for fixing! Just merged the PR. |
transformers | 10,401 | closed | Fix run_glue evaluation when model has a label correspondence | # What does this PR do?
The `run_glue` script uses the id-to-label correspondence stored in a given model, but when using
```
AutoModelForSequenceClassification.from_pretrained(xxx, num_labels=x)
```
that correspondence is reset. This PR fixes that, along with a few other bugs in the script. To confirm that MNLI evaluation does take the correspondence in a model config into account, running
```bash
python examples/text-classification/run_glue.py --model_name_or_path roberta-large-mnli --task_name mnli --max_seq_length 128 --output_dir ~/tmp/test-mnli --do_eval
```
gives 90.6%/90.1% accuracy (matched/mismatched) after this PR, vs 4.28%/4.86% accuracy on current master. | 02-25-2021 19:02:46 | 02-25-2021 19:02:46 |
transformers | 10,400 | closed | [Deepspeed] getting multiple prints of: Avoid using `tokenizers` before the fork if possible | on master when running with DeepSpeed I started getting multiple dumps of:
```
huggingface/tokenizers: The current process just got forked, after parallelism has already been used. Disabling parallelism to avoid deadlocks...
To disable this warning, you can either:
- Avoid using `tokenizers` before the fork if possible
- Explicitly set the environment variable TOKENIZERS_PARALLELISM=(true | false)
```
This script:
```
export BS=16; rm -r output_dir; PYTHONPATH=src USE_TF=0 CUDA_VISIBLE_DEVICES=0,1 deepspeed --num_gpus=2 examples/seq2seq/run_seq2seq.py --model_name_or_path t5-small --output_dir output_dir --adam_eps 1e-06 --do_eval --do_train --do_predict --evaluation_strategy=steps --label_smoothing 0.1 --learning_rate 3e-5 --logging_first_step --logging_steps 1000 --max_source_length 128 --max_target_length 128 --num_train_epochs 1 --overwrite_output_dir --per_device_eval_batch_size $BS --per_device_train_batch_size $BS --predict_with_generate --eval_steps 25000 --sortish_sampler --task translation_en_to_ro --val_max_target_length 128 --warmup_steps 500 --max_train_samples 100 --max_val_samples 100 --max_test_samples 100 --dataset_name wmt16 --dataset_config ro-en --source_prefix "translate English to Romanian: " --deepspeed examples/tests/deepspeed/ds_config.json
```
prints it 15 times.
There aren't actually 15 forks; it probably gets triggered by threads. The problem doesn't happen with DDP or DP.
Thank you.
@LysandreJik, @n1t0
| 02-25-2021 18:40:59 | 02-25-2021 18:40:59 | @chrissyjsartt, you probably accidentally subscribed/set to "Watching" the transformers repository which will now send you every comment on every Issue or PR.
So urgently go to https://github.com/watching and "Unwatch" this or any other repositories you may have set to Watch. Then you will stop getting these notifications.<|||||>@LysandreJik replied elsewhere to set `TOKENIZERS_PARALLELISM=false` and to read https://github.com/huggingface/tokenizers/issues/187#issuecomment-635692450 for the explanation of why this is needed.
But this could make things slow, so trying `=true` first is a better idea - if it doesn't hang then all is good.
Also Anthony shared:
> If the `tokenizer` wasn't used to encode before forking the process, it shouldn't happen. So just a new `encode_batch` somewhere before the fork happens can be enough to trigger this. |
transformers | 10,399 | closed | Make Barthez tokenizer tests a bit faster | # What does this PR do?
Currently, CI is pretty slow because of this:
```
93.44s call tests/test_tokenization_barthez.py::BarthezTokenizationTest::test_add_special_tokens
77.20s call tests/test_tokenization_barthez.py::BarthezTokenizationTest::test_pretokenized_inputs
77.00s call tests/test_tokenization_barthez.py::BarthezTokenizationTest::test_maximum_encoding_length_single_input
76.66s call tests/test_tokenization_barthez.py::BarthezTokenizationTest::test_maximum_encoding_length_pair_input
75.77s call tests/test_tokenization_barthez.py::BarthezTokenizationTest::test_internal_consistency
```
This is caused by the BarthezTokenizer conversion from slow to fast being pretty slow, so this PR saves the fast tokenizer to make those tests faster. To be even more efficient, a new sentencepiece model with a mask token and a pad token should be added and used here. | 02-25-2021 16:14:01 | 02-25-2021 16:14:01 |
transformers | 10,398 | closed | Does the synonym replacement tasks need Transformer? | Or traditional language processing toolkit (like WordNet) is enough? | 02-25-2021 15:39:49 | 02-25-2021 15:39:49 | Hello, thanks for opening an issue! We try to keep the github issues for bugs/feature requests.
Could you ask your question on the [forum](https://discusss.huggingface.co) instead?
Thanks! |
transformers | 10,397 | closed | Ignore unexpected weights from PT conversion | Some weights resulting from the conversion from a PyTorch model to a TensorFlow model are throwing an unnecessary warning.
To see for yourself, the following code throws a warning before the PR:
```py
from transformers import BertForPreTraining, BertConfig, TFBertForPreTraining
pt = BertForPreTraining(BertConfig())
pt.save_pretrained("here")
tf = TFBertForPreTraining.from_pretrained("here", from_pt=True)
```
Fix https://github.com/huggingface/transformers/issues/10348 | 02-25-2021 14:56:02 | 02-25-2021 14:56:02 | |
transformers | 10,396 | closed | how to freeze specific layers of TFbert model and just train a classifier? | Could someone help me to freeze say first 3 layers of transformers.TFDistilBertModel.from_pretrained('distilbert-base-multilingual-cased'). I've tried recommendations for TF model from [#400](https://github.com/huggingface/transformers/issues/400) but it seems to freeze all layers at once.
Thanks in advance! | 02-25-2021 14:31:02 | 02-25-2021 14:31:02 | I googled it, this should be working:
https://colab.research.google.com/drive/1EAVhQGdVvXbCu8gGq0lZ9dOnN4jJtvAj?usp=sharing<|||||>@NielsRogge thanks for the help! <|||||>> I googled it, this should be working:
>
> https://colab.research.google.com/drive/1EAVhQGdVvXbCu8gGq0lZ9dOnN4jJtvAj?usp=sharing
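For reference, a minimal sketch of the idea in Keras terms (the attribute path mirrors the usual DistilBERT layout and should be double-checked against the loaded model):
```python
from transformers import TFDistilBertModel

model = TFDistilBertModel.from_pretrained("distilbert-base-multilingual-cased")

# Freeze the embeddings and the first three transformer blocks; everything else
# (remaining blocks plus whatever classifier head you add on top) stays trainable.
model.distilbert.embeddings.trainable = False
for block in model.distilbert.transformer.layer[:3]:
    block.trainable = False
```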
Thank you very much, this is very useful to me |
transformers | 10,395 | closed | RobertaTokenizerFast does not add special tokens | I'm not sure whether this should be a part of tokenizers or transformers, because it uses both. Classes that don't work are from `transformers` so I'm posting it here.
## Environment info
- `transformers` version: 4.3.3
- Platform: Colab
- PyTorch version (GPU?): n/a
- Tensorflow version (GPU?): n/a
- Using GPU in script?: no
- Using distributed or parallel set-up in script?: no
### Who can help
- tokenizers: @n1t0, @LysandreJik
## Information
### Reproduction code
https://colab.research.google.com/drive/1iYLBLzXRkQpdPyVlIdi_qNCzfbD1uwGs?usp=sharing
When loading a tokenizer trained using `tokenizers` from `transformers`, e.g.
```python
tfast = RobertaTokenizerFast.from_pretrained("./workdir/tokenizer", model_max_length=10)
```
it does not add special tokens
```python
tfast("asd", add_special_tokens=True)
```
```
{'input_ids': [400, 72], 'attention_mask': [1, 1]}
```
"Slow" version behaves correctly:
```python
tslow = RobertaTokenizer.from_pretrained("./workdir/tokenizer", model_max_length=10)
tslow("asd", add_special_tokens=True)
```
```
{'input_ids': [0, 400, 72, 2], 'attention_mask': [1, 1, 1, 1]}
```
## Expected behavior
Both tokenizers produce the same output.
| 02-25-2021 13:28:36 | 02-25-2021 13:28:36 | Hi, thanks for opening an issue. Indeed, I can reproduce. The conversion to IDs happens in the `encode_batch` method in `tokenizers` directly, which doesn't return the special tokens even though it correctly receives `add_special_tokens=True`. The tokenizer object in transformers also seems to have the correct `special_tokens_map`.
@n1t0, is this an issue from the `tokenizers` side? If we're not correctly passing something to the Rust tokenizer when instantiating it from files, happy to look into it.<|||||>Discussed the issue with @n1t0 and the issue comes from the fact that the special tokens must be added to the tokenizer via a [post-processor](https://huggingface.co/docs/tokenizers/python/latest/api/reference.html?highlight=post#module-tokenizers.processors).
If it isn't done, then tokenizers cannot have their special tokens. The slow tokenizers having them anyway is linked to their initialization and not to the tokenizer you generated using the `tokenizers` library.
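For example, with the byte-level BPE tokenizer trained in the colab, a RoBERTa-style post-processor can be attached before saving (a sketch, assuming `<s>`/`</s>` were included when training and that `tokenizer` is the underlying `tokenizers.Tokenizer` object; for the `ByteLevelBPETokenizer` wrapper it is assumed to sit at `tokenizer._tokenizer`):
```python
from tokenizers.processors import RobertaProcessing

# Attach the post-processor so "<s> ... </s>" get added around every encoding.
tokenizer.post_processor = RobertaProcessing(
    sep=("</s>", tokenizer.token_to_id("</s>")),
    cls=("<s>", tokenizer.token_to_id("<s>")),
)
tokenizer.save("./workdir/tokenizer/tokenizer.json")   # then reload with RobertaTokenizerFast
```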
Here is a gist made from your colab showing how the post-processor should be used: https://gist.github.com/LysandreJik/04c7cfe3d2656ae1c4c388ce9cdd3ea4<|||||>Thanks @LysandreJik for the reply!
So it's more like a misconfiguration of the training pipeline on my side, not a bug per se?<|||||>Yes, I believe that is so. Tokenizers created with `tokenizers` need to have their post-processors/pre-tokenizers and other components defined to work correctly, otherwise it yields unexpected results as we have just seen!<|||||>Closing, but still seems odd that the behaviour for exact same files is different between those tokenizers... |
transformers | 10,394 | closed | DeepSpeedEngine object has no attribute 'no_sync' | ## Environment info
- `transformers` version: 4.3.0
- Platform: Linux-4.14.209-160.339.amzn2.x86_64-x86_64-with-Ubuntu-18.04-bionic
- Python version: 3.6.8
- PyTorch version (GPU?): 1.7.1 (True)
- Tensorflow version (GPU?): not installed (NA)
- Using GPU in script?: yes
- Using distributed or parallel set-up in script?: distributed
### Who can help
Library:
- deepspeed: @stas00
## Information
DeepSpeed with a single node and multiple GPUs is breaking:
```
Traceback (most recent call last):
File "training/run_training.py", line 273, in <module>
raise e
File "training/run_training.py", line 270, in <module>
remaining_args=remaining_args)
File "training/run_training.py", line 186, in main
trainer.train(model_path=model_path)
File "/usr/local/lib/python3.6/dist-packages/transformers/trainer.py", line 937, in train
with model.no_sync():
File "/usr/local/lib/python3.6/dist-packages/torch/nn/modules/module.py", line 779, in __getattr__
type(self).__name__, name))
torch.nn.modules.module.ModuleAttributeError: 'DeepSpeedEngine' object has no attribute 'no_sync'
```
```
deepspeed \
--num_gpus ${GPUS_ALLOWED} \
training/run_training.py \
--deepspeed ds_config.json \
--output_dir ${OUTPUT_BASE_PATH} \
--model_name_or_path ${MODEL_NAME_OR_PATH} \
--per_device_train_batch_size ${TRAIN_BATCH_SIZE} \
--per_device_eval_batch_size ${TEST_BATCH_SIZE} \
--gradient_accumulation_steps ${GRAD_STEP} \
--evaluation_strategy steps \
--eval_steps ${EVAL_STEP} \
--num_train_epochs ${EPOCH} \
--save_steps ${SAVE_STEP} \
--logging_steps ${LOG_STEP} \
--dataloader_num_workers ${DATALOADER_NUM_WORKERS} \
--load_best_model_at_end true \
--do_train true \
--do_eval true \
--fp16 true \
--dataloader_drop_last true \
--overwrite_output_dir true \
--use_lazy true \
--logging_first_step true || { echo 'training failed' ; exit 0; }
echo 'training successful'
```
```
ds_config.json
{
"zero_optimization": {
"stage": 2,
"allgather_partitions": true,
"reduce_scatter": true,
"overlap_comm": true,
"contiguous_gradients": true,
"cpu_offload": true,
"allgather_bucket_size": 2e8,
"reduce_bucket_size": 2e8,
}
}
```
| 02-25-2021 13:19:38 | 02-25-2021 13:19:38 | Thank you for your report. This issue has already been fixed in `transformers` master. |
transformers | 10,393 | closed | NER Pipeline not working |
I just followed the token classification notebook and created a pipeline from the model I trained there. Here you can see the full notebook: https://colab.research.google.com/drive/1OzfFTgZwjxdIikbQ8lVJbA2SRF3IeJ5F?usp=sharing
The only thing that I changed (apart from creating the pipeline and calling it) is the number of training steps, so the training is faster.
In the last cell you can see the error:
```
ValueError: Unable to create tensor, you should probably activate truncation and/or padding with 'padding=True' 'truncation=True' to have batched tensors with the same length.
```
### Who can help
- pipelines: @LysandreJik
| 02-25-2021 11:16:09 | 02-25-2021 11:16:09 | Hello! The fix for this was merged a few days ago: https://github.com/huggingface/transformers/pull/10184
I recommend you install from source while no new version is available for Transformers (a new version should be out in ~2 weeks). Sorry for the inconvenience.<|||||>Ok thank you for the fast response. I always open the issues before checking master branch :sweat: <|||||>No worries, better to have too much issues reported than not enough! |
transformers | 10,392 | closed | Remove unused variable in example for Q&A | This PR removed unsed `text_tokens = tokenizer.convert_ids_to_tokens(input_ids)` from pytorch and tensorflow examples of Question Answering: https://huggingface.co/transformers/usage.html#extractive-question-answering | 02-25-2021 10:50:25 | 02-25-2021 10:50:25 | Thanks a lot for the fix!<|||||>What is this all about why am I attached can someone help me
On Thu, Feb 25, 2021, 8:19 AM Lysandre Debut <[email protected]>
wrote:
> Merged #10392 <https://github.com/huggingface/transformers/pull/10392>
> into master.
>
> —
> You are receiving this because you are subscribed to this thread.
> Reply to this email directly, view it on GitHub
> <https://github.com/huggingface/transformers/pull/10392#event-4376807276>,
> or unsubscribe
> <https://github.com/notifications/unsubscribe-auth/AS5YU5XRPNEC22Y7RAP4W6TTAZL7LANCNFSM4YGIRPLA>
> .
>
|
transformers | 10,391 | closed | some bugs about mbart50 for spanish | ```
from transformers import MBartForConditionalGeneration, MBart50TokenizerFast
model = MBartForConditionalGeneration.from_pretrained("facebook/mbart-large-50-many-to-many-mmt")
tokenizer = MBart50TokenizerFast.from_pretrained("facebook/mbart-large-50-many-to-many-mmt")
article_en = "Hello World"
encoded_en = tokenizer(article_en, return_tensors="pt", padding='longest', truncation=True, max_length=1024)
generated_tokens = model.generate(**encoded_en, forced_bos_token_id=tokenizer.lang_code_to_id["es_XX"])
#tokenizer.batch_decode(generated_tokens, skip_special_tokens=True)
text_es = tokenizer.batch_decode(generated_tokens, skip_special_tokens=True, clean_up_tokenization_spaces=True)
```
text_es
['El Presidente (habla en inglés): Doy las gracias al representante de la República Islámica del Irán por su declaración.']
Looks like this model definitely has some bugs for Spanish since this is an easy translation task. The correct answer is "Hola Mundo" | 02-25-2021 10:02:29 | 02-25-2021 10:02:29 | This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored. |
transformers | 10,390 | closed | Tokenizer not working | ## Environment info
- `transformers` version: 4.3.2
- Platform: Ubuntu 16.04.6 LTS
- Python version: 3.8.8
### Who can help
- tokenizers: @n1t0, @LysandreJik
## Information
To reproduce:
```
conda create --name=env1 python=3.8 jupyter transformers tokenizers -y -c conda-forge
conda activate env1
```
```
~$ python
Python 3.8.8 | packaged by conda-forge | (default, Feb 20 2021, 16:22:27)
[GCC 9.3.0] on linux
Type "help", "copyright", "credits" or "license" for more information.
>>> from transformers import AutoTokenizer
None of PyTorch, TensorFlow >= 2.0, or Flax have been found. Models won't be available and only tokenizers, configuration and file/data utilities can be used.
>>> tokenizer = AutoTokenizer.from_pretrained("bert-base-cased", do_lower_case=False, strip_accents=False)
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "/home/guillem.garcia/.conda/envs/cosas/lib/python3.8/site-packages/transformers/models/auto/tokenization_auto.py", line 395, in from_pretrained
return tokenizer_class_fast.from_pretrained(pretrained_model_name_or_path, *inputs, **kwargs)
File "/home/guillem.garcia/.conda/envs/cosas/lib/python3.8/site-packages/transformers/tokenization_utils_base.py", line 1788, in from_pretrained
return cls._from_pretrained(
File "/home/guillem.garcia/.conda/envs/cosas/lib/python3.8/site-packages/transformers/tokenization_utils_base.py", line 1860, in _from_pretrained
tokenizer = cls(*init_inputs, **init_kwargs)
File "/home/guillem.garcia/.conda/envs/cosas/lib/python3.8/site-packages/transformers/models/bert/tokenization_bert_fast.py", line 199, in __init__
self.backend_tokenizer.normalizer = pre_tok_class(**pre_tok_state)
TypeError: PyBertNormalizer.__new__() got an unexpected keyword argument: do_lower_case
```
The weirdest thing is that running that exact command in a Jupyter notebook does not raise any error. Also, `AutoTokenizer.from_pretrained("bert-base-cased", do_lower_case=False)` works, so it seems to be something related to strip_accents.
| 02-25-2021 09:32:00 | 02-25-2021 09:32:00 | Hi! We do not maintain the conda-forge versions of transformers and tokenizers. We maintain the versions that are on the `huggingface` channel.
I just tried with the `huggingface` channel and I get no such errors:
```
conda create --name=env1 python=3.8 jupyter transformers tokenizers -y -c huggingface && conda activate env1
```
See the result:
```
~ (🌟) 🤗 python (env1) 10:00:33 ~
Python 3.8.5 (default, Sep 4 2020, 07:30:14)
[GCC 7.3.0] :: Anaconda, Inc. on linux
Type "help", "copyright", "credits" or "license" for more information.
>>> from transformers import AutoTokenizer
None of PyTorch, TensorFlow >= 2.0, or Flax have been found. Models won't be available and only tokenizers, configuration and file/data utilities can be used.
>>> tokenizer = AutoTokenizer.from_pretrained("bert-base-cased", do_lower_case=False, strip_accents=False)
Downloading: 100%|█████████████████████████████████████████████████| 213k/213k [00:00<00:00, 2.71MB/s]
Downloading: 100%|█████████████████████████████████████████████████| 436k/436k [00:00<00:00, 4.67MB/s]
Ignored unknown kwargs option do_lower_case
>>>
```<|||||>I got a similar problem with [another BERT model](https://huggingface.co/ltgoslo/norbert)
Everything is OK if the `tokenizer_config.json` contains only this:
```
{
"do_lower_case": false
}
```
But as soon as another line is added:
```
{
"do_lower_case": false,
"do_basic_tokenize": false
}
```
the `AutoTokenizer.from_pretrained("ltgoslo/norbert")` ends in
`unexpected keyword argument: do_lower_case`,
which is weird, since the argument obviously is valid, if given alone.
I see the same problem even on the HuggingFace Model Hub itself:
`Can't load tokenizer using from_pretrained, please update its configuration: PyBertNormalizer.__new__() got an unexpected keyword argument: do_lower_case`
What is wrong with the `AutoTokenizer` + `do_basic_tokenize` combination? Locally, everything is fine if I use
`tokenizer = BertTokenizer.from_pretrained("ltgoslo/norbert")`
or
`tokenizer = AutoTokenizer.from_pretrained("ltgoslo/norbert", use_fast=False)`<|||||>Hi @akutuzov, the `do_basic_tokenize` is a python tokenizer only attribute, what behavior do you want from it?
You get the error because the `AutoTokenizer` tries to load a fast tokenizer by default.<|||||>Thanks @LysandreJik , this is my impression as well. But two questions then:
1. If the problem is with the `do_basic_tokenize`, why the warning says `unexpected keyword argument: do_lower_case`?
2. Is it possible to tell the `AutoTokenizer` **not** to load the fast tokenizer by default for a particular model? Anything I can put in the `config.json` or `tokenizer_config.json`? Since it seems that fast tokenizers sometimes lack the functionality which is there in the python tokenizers, it would be great to have some way to enforce using the python ones.
In our case, we need `do_basic_tokenize=False`, since we would like to avoid punctuation splitting.<|||||>I have just encountered the same problem. It appears only with Version 4.3.x. For now you could switch back to 4.2.x like me. In version 4.2.2 there is just this output:
`Ignored unknown kwargs option do_lower_case`
Everythink works as expected in 4.2.2!<|||||>@NebelAI I am using 4.2.2. It does not work as expected: the `do_basic_tokenize` parameter is silently ignored by the `AutoTokenizer`, which instead produces a strange warning about `do_lower_case`.
I see it as problematic behavior.
<|||||>@akutuzov You are right. My problem goes in the same direction but is not identical to yours, sorry.
I was capable of using `AutoTokenizer.from_pretrained(file, use_fast=True)` with `tokenizer.json` as file input for quite some time. After upgrading to 4.3.3 I was facing this weird exception you mentioned at the beginning. So my attempt only works if you are using tokenizer.json which has been created by tokenizers lib.
Bu still ... this error needs to be fixed.<|||||>Re-opening this as the issue isn't solved.<|||||>related to https://github.com/huggingface/transformers/issues/10121<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored.<|||||>I believe this issue has been fixed by https://github.com/huggingface/transformers/pull/10686<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored. |
transformers | 10,389 | closed | GA: only run model templates once - from fork | 02-25-2021 00:33:34 | 02-25-2021 00:33:34 | ||
transformers | 10,388 | closed | GA: only run model templates once | 02-25-2021 00:29:26 | 02-25-2021 00:29:26 | ||
transformers | 10,387 | closed | loss.backward() TypeError seed issue for pretrained reformer model | ## Environment info
<!-- You can run the command `transformers-cli env` and copy-and-paste its output below.
Don't forget to fill out the missing fields in that output! -->
- `transformers` version: 4.3.2
- Platform: Google Colab
- Python version: 3.7.10
- PyTorch version (GPU?): 1.7.0+cu101
- Using GPU in script?: no, but error also occurs on GPU
- Using distributed or parallel set-up in script?: no
### Who can help
@patrickvonplaten
## Information
Model I am using: ReformerForSequenceClassification
The problem arises when using: Pretrained model
The tasks I am working on is:
* [x] my own task or dataset:
## To reproduce
Steps to reproduce the behavior:
```
import torch
from transformers import ReformerForSequenceClassification, ReformerTokenizerFast
test = ReformerForSequenceClassification.from_pretrained('google/reformer-crime-and-punishment')
tokenizer = ReformerTokenizerFast.from_pretrained('google/reformer-crime-and-punishment')
input = tokenizer("this is a test", return_tensors='pt')
out = test(**input, labels = torch.zeros((1,1), dtype=torch.long))
out.loss.backward()
```
Error message:
```
TypeError Traceback (most recent call last)
<ipython-input-155-db0c2d6dca2a> in <module>()
6 out = test(**input, labels = torch.zeros((1,1), dtype=torch.long))
7
----> 8 out.loss.backward()
5 frames
/usr/local/lib/python3.7/dist-packages/torch/tensor.py in backward(self, gradient, retain_graph, create_graph)
219 retain_graph=retain_graph,
220 create_graph=create_graph)
--> 221 torch.autograd.backward(self, gradient, retain_graph, create_graph)
222
223 def register_hook(self, hook):
/usr/local/lib/python3.7/dist-packages/torch/autograd/__init__.py in backward(tensors, grad_tensors, retain_graph, create_graph, grad_variables)
130 Variable._execution_engine.run_backward(
131 tensors, grad_tensors_, retain_graph, create_graph,
--> 132 allow_unreachable=True) # allow_unreachable flag
133
134
/usr/local/lib/python3.7/dist-packages/torch/autograd/function.py in apply(self, *args)
87 def apply(self, *args):
88 # _forward_cls is defined by derived class
---> 89 return self._forward_cls.backward(self, *args) # type: ignore
90
91
/usr/local/lib/python3.7/dist-packages/transformers/models/reformer/modeling_reformer.py in backward(***failed resolving arguments***)
1673 head_mask=head_mask[len(layers) - idx - 1],
1674 attention_mask=attention_mask,
-> 1675 buckets=buckets,
1676 )
1677
/usr/local/lib/python3.7/dist-packages/transformers/models/reformer/modeling_reformer.py in backward_pass(self, next_attn_output, hidden_states, grad_attn_output, grad_hidden_states, attention_mask, head_mask, buckets)
1527
1528 # set seed to have correct dropout
-> 1529 torch.manual_seed(self.feed_forward_seed)
1530 # g(Y_1)
1531 res_hidden_states = self.feed_forward(next_attn_output)
/usr/local/lib/python3.7/dist-packages/torch/random.py in manual_seed(seed)
30 `0xffff_ffff_ffff_ffff + seed`.
31 """
---> 32 seed = int(seed)
33 import torch.cuda
34
TypeError: int() argument must be a string, a bytes-like object or a number, not 'NoneType'
```
## Expected behavior
No error.
## Workaround
Calling the seed init functions fixes the issue:
```
from transformers import ReformerForSequenceClassification, ReformerTokenizerFast
test = ReformerForSequenceClassification.from_pretrained('google/reformer-crime-and-punishment')
tokenizer = ReformerTokenizerFast.from_pretrained('google/reformer-crime-and-punishment')
for l in [m for m in test.modules()][0].reformer.encoder.layers:
l._init_feed_forward_seed()
l._init_attention_seed()
input = tokenizer("this is a test", return_tensors='pt')
out = test(**input, labels = torch.zeros((1,1), dtype=torch.long))
out.loss.backward()
```
Also note that this doesn't occur when I use a custom config.
| 02-24-2021 22:57:03 | 02-24-2021 22:57:03 | duplicate of https://github.com/huggingface/transformers/issues/10370 more or less. Just need to do the same fixes as in the answer of #10370 |
transformers | 10,386 | closed | MNLI evaluation on pretrained models | ## Environment info
- `transformers` version: 4.4.dev / 4.3.3 / 4.3.2
- Platform: Ubuntu 18.04/ Windows 10
- Python version: 3.6.2
- PyTorch version (GPU?): 1.7.1 (True)
- Tensorflow version (GPU?): N/A
- Using GPU in script?: Yes
- Using distributed or parallel set-up in script?: No
### Who can help
@patil-suraj, @sgugger, @LysandreJik
## Information
Model I am using (Bert, XLNet ...): huggingface/distilbert-base-uncased-finetuned-mnli - microsoft/deberta-v2-xxlarge-mnli - roberta-large-mnli - squeezebert/squeezebert-mnli - BERT-Base-MNLI....
The problem arises when using:
* [x] the official example scripts: (give details below)
* [ ] my own modified scripts: (give details below)
The tasks I am working on is:
* [x] an official GLUE/SQUaD task: (give the name)
* [ ] my own task or dataset: (give details below)
I use run_glue.py on fine-tuned models to reproduce the evaluation result (only `--do_eval`). But the accuracy is about 7%. Other tasks like MRPC or STS-B are ok when I use their fine-tuned models.
## To reproduce
Steps to reproduce the behavior:
1. Run `python run_glue.py --model_name_or_path huggingface/distilbert-base-uncased-finetuned-mnli --task_name mnli --do_eval --max_seq_length 128 --output_dir temp/distill` or any other MNLI fine-tuned model. I even tried a model that I fine-tuned myself using V2.10.0 and that again results in 6%-7% accuracy.
```
python run_glue.py --model_name_or_path huggingface/distilbert-base-uncased-finetuned-mnli --task_name mnli --do_eval --max_seq_length 128 --output_dir temp/distill
02/24/2021 11:38:34 - WARNING - main - Process rank: -1, device: cuda:0, n_gpu: 1distributed training: False, 16-bits training: False
02/24/2021 11:38:34 - INFO - main - Training/evaluation parameters TrainingArguments(output_dir=temp/distill, overwrite_output_dir=False, do_train=False, do_eval=True, do_predict=False, evaluation_strategy=EvaluationStrategy.NO, prediction_loss_only=False, per_device_train_batch_size=8, per_device_eval_batch_size=8, gradient_accumulation_steps=1, eval_accumulation_steps=None, learning_rate=5e-05, weight_decay=0.0, adam_beta1=0.9, adam_beta2=0.999, adam_epsilon=1e-08, max_grad_norm=1.0, num_train_epochs=3.0, max_steps=-1, lr_scheduler_type=SchedulerType.LINEAR, warmup_steps=0, logging_dir=runs\Feb24_11-38-34_Ali_Workstation, logging_first_step=False, logging_steps=500, save_steps=500, save_total_limit=None, no_cuda=False, seed=42, fp16=False, fp16_opt_level=O1, fp16_backend=auto, local_rank=-1, tpu_num_cores=None, tpu_metrics_debug=False, debug=False, dataloader_drop_last=False, eval_steps=500, dataloader_num_workers=0, past_index=-1, run_name=temp/distill, disable_tqdm=False, remove_unused_columns=True, label_names=None, load_best_model_at_end=False, metric_for_best_model=None, greater_is_better=None, ignore_data_skip=False, sharded_ddp=False, deepspeed=None, label_smoothing_factor=0.0, adafactor=False, group_by_length=False, report_to=[], ddp_find_unused_parameters=None, dataloader_pin_memory=True, n_gpu=1)
02/24/2021 11:38:36 - WARNING - datasets.builder - Reusing dataset glue (C:\Users\Ali.cache\huggingface\datasets\glue\mnli\1.0.0\7c99657241149a24692c402a5c3f34d4c9f1df5ac2e4c3759fadea38f6cb29c4)
[INFO|configuration_utils.py:449] 2021-02-24 11:38:36,777 >> loading configuration file h***://huggingface.co/huggingface/distilbert-base-uncased-finetuned-mnli/resolve/main/config.json from cache at C:\Users\Ali/.cache\huggingface\transformers\240bd330b0e7919215436efe944c4073bfcc0bac4b7ed0a3378ab3d1793beb1a.acfb235b208288614b764ad50394132d4751a48a6c81fc382dc669e4d8a80a55
[INFO|configuration_utils.py:485] 2021-02-24 11:38:36,779 >> Model config DistilBertConfig {
  "activation": "gelu",
  "architectures": [
    "DistilBertForMaskedLM"
  ],
  "attention_dropout": 0.1,
  "bos_token_id": 0,
  "dim": 768,
  "dropout": 0.1,
  "eos_token_ids": 0,
  "finetuning_task": "mnli",
  "hidden_dim": 3072,
  "id2label": {
    "0": "LABEL_0",
    "1": "LABEL_1",
    "2": "LABEL_2"
  },
  "initializer_range": 0.02,
  "label2id": {
    "LABEL_0": 0,
    "LABEL_1": 1,
    "LABEL_2": 2
  },
  "max_position_embeddings": 512,
  "model_type": "distilbert",
  "n_heads": 12,
  "n_layers": 6,
  "output_past": true,
  "pad_token_id": 0,
  "qa_dropout": 0.1,
  "seq_classif_dropout": 0.2,
  "sinusoidal_pos_embds": false,
  "tie_weights_": true,
  "transformers_version": "4.3.2",
  "vocab_size": 30522
}[INFO|configuration_utils.py:449] 2021-02-24 11:38:36,923 >> loading configuration file hs://huggingface.co/huggingface/distilbert-base-uncased-finetuned-mnli/resolve/main/config.json from cache at C:\Users\Ali/.cache\huggingface\transformers\240bd330b0e7919215436efe944c4073bfcc0bac4b7ed0a3378ab3d1793beb1a.acfb235b208288614b764ad50394132d4751a48a6c81fc382dc669e4d8a80a55
[INFO|configuration_utils.py:485] 2021-02-24 11:38:36,924 >> Model config DistilBertConfig {
  "activation": "gelu",
  "architectures": [
    "DistilBertForMaskedLM"
  ],
  "attention_dropout": 0.1,
  "bos_token_id": 0,
  "dim": 768,
  "dropout": 0.1,
  "eos_token_ids": 0,
  "finetuning_task": "mnli",
  "hidden_dim": 3072,
  "id2label": {
    "0": "contradiction",
    "1": "neutral",
    "2": "entailment"
  },
  "initializer_range": 0.02,
  "label2id": {
    "contradiction": "0",
    "entailment": "2",
    "neutral": "1"
  },
  "max_position_embeddings": 512,
  "model_type": "distilbert",
  "n_heads": 12,
  "n_layers": 6,
  "output_past": true,
  "pad_token_id": 0,
  "qa_dropout": 0.1,
  "seq_classif_dropout": 0.2,
  "sinusoidal_pos_embds": false,
  "tie_weights_": true,
  "transformers_version": "4.3.2",
  "vocab_size": 30522
}
[INFO|tokenization_utils_base.py:1688] 2021-02-24 11:38:36,928 >> Model name ‘huggingface/distilbert-base-uncased-finetuned-mnli’ not found in model shortcut name list (distilbert-base-uncased, distilbert-base-uncased-distilled-squad, distilbert-base-cased, distilbert-base-cased-distilled-squad, distilbert-base-german-cased, distilbert-base-multilingual-cased). Assuming ‘huggingface/distilbert-base-uncased-finetuned-mnli’ is a path, a model identifier, or url to a directory containing tokenizer files.
[INFO|tokenization_utils_base.py:1786] 2021-02-24 11:38:37,946 >> loading file hps://huggingface.co/huggingface/distilbert-base-uncased-finetuned-mnli/resolve/main/vocab.txt from cache at C:\Users\Ali/.cache\huggingface\transformers\3aa49bfb368cde995cea246a5c5ca4d75f769e74b3e6d450776805f998c78366.d789d64ebfe299b0e416afc4a169632f903f693095b4629a7ea271d5a0cf2c99
[INFO|tokenization_utils_base.py:1786] 2021-02-24 11:38:37,947 >> loading file hps://huggingface.co/huggingface/distilbert-base-uncased-finetuned-mnli/resolve/main/tokenizer.json from cache at None
[INFO|tokenization_utils_base.py:1786] 2021-02-24 11:38:37,950 >> loading file htps://huggingface.co/huggingface/distilbert-base-uncased-finetuned-mnli/resolve/main/added_tokens.json from cache at C:\Users\Ali/.cache\huggingface\transformers\603dca04f5c89cbdcdb8021ec21c4376c7334fa6393347c80a54c942a93e50cb.5cc6e825eb228a7a5cfd27cb4d7151e97a79fb962b31aaf1813aa102e746584b
[INFO|tokenization_utils_base.py:1786] 2021-02-24 11:38:37,951 >> loading file ht*ps://huggingface.co/huggingface/distilbert-base-uncased-finetuned-mnli/resolve/main/special_tokens_map.json from cache at C:\Users\Ali/.cache\huggingface\transformers\dea17c39d149e23cb97e2a2829c6170489551d2454352fd18488f17bf90c54db.dd8bd9bfd3664b530ea4e645105f557769387b3da9f79bdb55ed556bdd80611d
[INFO|tokenization_utils_base.py:1786] 2021-02-24 11:38:37,952 >> loading file hps://huggingface.co/huggingface/distilbert-base-uncased-finetuned-mnli/resolve/main/tokenizer_config.json from cache at C:\Users\Ali/.cache\huggingface\transformers\ce6fb0f339483f5ca331e9631b13bc5e9c842e64e9a40aa60defb3898b99dbed.11d9edb6b1301b5af13d33c1585ff45ff84dd55cc6915c2872f856d1ee2dc409
[INFO|modeling_utils.py:1027] 2021-02-24 11:38:38,148 >> loading weights file hps://huggingface.co/huggingface/distilbert-base-uncased-finetuned-mnli/resolve/main/pytorch_model.bin from cache at C:\Users\Ali/.cache\huggingface\transformers\16516ebd442e5f41cd8caf2de88c478fe8a3a0948e20eaf1fdae0bf2d4998be6.73881288e7255a28dacc8ad53661dde9248c11f6e2d10f3b6db193dddee2a2bc
[INFO|modeling_utils.py:1143] 2021-02-24 11:38:39,218 >> All model checkpoint weights were used when initializing DistilBertForSequenceClassification.
[INFO|modeling_utils.py:1152] 2021-02-24 11:38:39,221 >> All the weights of DistilBertForSequenceClassification were initialized from the model checkpoint at huggingface/distilbert-base-uncased-finetuned-mnli.
If your task is similar to the task the model of the checkpoint was trained on, you can already use DistilBertForSequenceClassification for predictions without further training.
02/24/2021 11:38:39 - WARNING - datasets.arrow_dataset - Loading cached processed dataset at C:\Users\Ali.cache\huggingface\datasets\glue\mnli\1.0.0\7c99657241149a24692c402a5c3f34d4c9f1df5ac2e4c3759fadea38f6cb29c4\cache-0a88ac8e6b3bd378.arrow
02/24/2021 11:38:39 - WARNING - datasets.arrow_dataset - Loading cached processed dataset at C:\Users\Ali.cache\huggingface\datasets\glue\mnli\1.0.0\7c99657241149a24692c402a5c3f34d4c9f1df5ac2e4c3759fadea38f6cb29c4\cache-e1993e6695981db0.arrow
02/24/2021 11:38:39 - WARNING - datasets.arrow_dataset - Loading cached processed dataset at C:\Users\Ali.cache\huggingface\datasets\glue\mnli\1.0.0\7c99657241149a24692c402a5c3f34d4c9f1df5ac2e4c3759fadea38f6cb29c4\cache-133d62ae090971a5.arrow
02/24/2021 11:38:39 - WARNING - datasets.arrow_dataset - Loading cached processed dataset at C:\Users\Ali.cache\huggingface\datasets\glue\mnli\1.0.0\7c99657241149a24692c402a5c3f34d4c9f1df5ac2e4c3759fadea38f6cb29c4\cache-497afbfcce3a8a9d.arrow
02/24/2021 11:38:39 - WARNING - datasets.arrow_dataset - Loading cached processed dataset at C:\Users\Ali.cache\huggingface\datasets\glue\mnli\1.0.0\7c99657241149a24692c402a5c3f34d4c9f1df5ac2e4c3759fadea38f6cb29c4\cache-7146b31017748988.arrow
02/24/2021 11:38:39 - INFO - main - Sample 335243 of the training set: {‘attention_mask’: [1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0], ‘hypothesis’: “Parents are busy and it’s sometimes hard to get them out.”, ‘idx’: 335243, ‘input_ids’: [101, 2017, 2113, 2043, 2037, 3008, 2272, 1998, 2009, 1005, 1055, 2524, 2000, 2131, 2068, 2041, 1998, 1037, 2843, 1997, 3008, 2031, 3182, 2000, 2175, 1998, 1998, 2477, 2066, 2008, 1998, 2009, 1005, 1055, 2397, 2012, 2305, 2061, 102, 3008, 2024, 5697, 1998, 2009, 1005, 1055, 2823, 2524, 2000, 2131, 2068, 2041, 1012, 102, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0], ‘label’: 0, ‘premise’: “you know when their parents come and it’s hard to get them out and a lot of parents have places to go and and things like that and it’s late at night so”}.
02/24/2021 11:38:39 - INFO - main - Sample 58369 of the training set: {‘attention_mask’: [1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0], ‘hypothesis’: 'Where and what is art? ', ‘idx’: 58369, ‘input_ids’: [101, 2073, 2003, 2396, 1029, 102, 2073, 1998, 2054, 2003, 2396, 1029, 102, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0], ‘label’: 1, ‘premise’: ‘Where is art?’}.
02/24/2021 11:38:39 - INFO - main - Sample 13112 of the training set: {‘attention_mask’: [1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0], ‘hypothesis’: ‘The list says alcohol and injury are negatives facing staff.’, ‘idx’: 13112, ‘input_ids’: [101, 6544, 1998, 4544, 1010, 2004, 2092, 2004, 4766, 19388, 1010, 2024, 2006, 1996, 2862, 1012, 102, 1996, 2862, 2758, 6544, 1998, 4544, 2024, 4997, 2015, 5307, 3095, 1012, 102, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0], ‘label’: 1, ‘premise’: ‘Alcohol and injury, as well as brief interventions, are on the list.’}.
[INFO|trainer.py:432] 2021-02-24 11:38:41,361 >> The following columns in the training set don’t have a corresponding argument in DistilBertForSequenceClassification.forward and have been ignored: premise, hypothesis, idx.
[INFO|trainer.py:432] 2021-02-24 11:38:41,362 >> The following columns in the evaluation set don’t have a corresponding argument in DistilBertForSequenceClassification.forward and have been ignored: premise, hypothesis, idx.
02/24/2021 11:38:41 - INFO - main - *** Evaluate ***
[INFO|trainer.py:432] 2021-02-24 11:38:41,366 >> The following columns in the evaluation set don’t have a corresponding argument in DistilBertForSequenceClassification.forward and have been ignored: premise, hypothesis, idx.
[INFO|trainer.py:1600] 2021-02-24 11:38:41,371 >> ***** Running Evaluation *****
[INFO|trainer.py:1601] 2021-02-24 11:38:41,371 >> Num examples = 9815
[INFO|trainer.py:1602] 2021-02-24 11:38:41,372 >> Batch size = 8
100%|█████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 1227/1227 [00:10<00:00, 122.19it/s]
02/24/2021 11:38:52 - INFO - main - ***** Eval results mnli *****
02/24/2021 11:38:52 - INFO - main - eval_accuracy = 0.07865511971472236
02/24/2021 11:38:52 - INFO - main - eval_loss = 4.536623954772949
02/24/2021 11:38:52 - INFO - main - eval_runtime = 10.733
02/24/2021 11:38:52 - INFO - main - eval_samples_per_second = 914.471
[INFO|trainer.py:432] 2021-02-24 11:38:52,120 >> The following columns in the evaluation set don’t have a corresponding argument in DistilBertForSequenceClassification.forward and have been ignored: premise, hypothesis, idx.
[INFO|trainer.py:1600] 2021-02-24 11:38:52,124 >> ***** Running Evaluation *****
[INFO|trainer.py:1601] 2021-02-24 11:38:52,124 >> Num examples = 9832
[INFO|trainer.py:1602] 2021-02-24 11:38:52,125 >> Batch size = 8
100%|█████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 1229/1229 [00:10<00:00, 121.59it/s]
02/24/2021 11:39:02 - INFO - main - ***** Eval results mnli-mm *****
02/24/2021 11:39:02 - INFO - main - eval_accuracy = 0.08482506102522376
02/24/2021 11:39:02 - INFO - main - eval_loss = 4.487601280212402
02/24/2021 11:39:02 - INFO - main - eval_runtime = 10.127
02/24/2021 11:39:02 - INFO - main - eval_samples_per_second = 970.87
```
## Expected behavior
It seems all the weights are loaded in the correct place, but the accuracy is below 10% which should be above 80%.
```
[INFO|modeling_utils.py:1143] 2021-02-24 11:38:39,218 >> All model checkpoint weights were used when initializing DistilBertForSequenceClassification.
[INFO|modeling_utils.py:1152] 2021-02-24 11:38:39,221 >> All the weights of DistilBertForSequenceClassification were initialized from the model checkpoint at huggingface/distilbert-base-uncased-finetuned-mnli.
If your task is similar to the task the model of the checkpoint was trained on, you can already use DistilBertForSequenceClassification for predictions without further training.
```
| 02-24-2021 22:12:23 | 02-24-2021 22:12:23 | Hello! This may be because of labels being switched around for the MNLI task. See this thread https://github.com/huggingface/transformers/pull/10203 for more context.<|||||>Hello,
Many thanks for your response. Yes, that seems to be the source of my issue, and now I can get the accuracy.
Thanks!
<|||||>I think there is also a specific problem in `huggingface/distilbert-base-uncased-finetuned-mnli`: its labels seem wrongly coded. Using them specifically and evaluating gives me 34% accuracy.<|||||>Yes. But other models seem to work with the modification that I made https://github.com/huggingface/transformers/pull/10203#discussion_r582971857<|||||>@sgugger Can someone fix this, or remove the model from the model hub? This is a serious gotcha and cost me a couple weeks of confusion!<|||||>The model has been fixed a year ago, in [this commit](https://huggingface.co/huggingface/distilbert-base-uncased-finetuned-mnli/commit/0fadb1fe60cd119b3af82e2bf9cb98a59336d7bc)<|||||>Thank you for clarifying @sgugger! I think we had an old copy |
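For anyone else evaluating MNLI checkpoints by hand, a minimal sketch of the label-order check referred to above (the printed mappings are examples, not guaranteed for every checkpoint):
```python
from datasets import load_dataset
from transformers import AutoConfig

config = AutoConfig.from_pretrained("roberta-large-mnli")
print(config.id2label)  # e.g. {0: 'CONTRADICTION', 1: 'NEUTRAL', 2: 'ENTAILMENT'}

mnli = load_dataset("glue", "mnli")
names = mnli["validation_matched"].features["label"].names
print(names)  # ['entailment', 'neutral', 'contradiction'] in the datasets GLUE script

# if the two orderings differ, remap the model's predicted ids to dataset ids
# before computing accuracy
label2id = {name: i for i, name in enumerate(names)}
remap = {i: label2id[name.lower()] for i, name in config.id2label.items()}
```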
transformers | 10,385 | closed | DDP performing slightly worse in terms of loss and metrics than DP | Hi,
I am running Transformers 4.3.2 and am testing DistributedDataParallel (DDP) vs DataParallel (DP). The Hugging Face Trainer in both instances has been left untouched, and slight modifications were made to run_clm.py to fit my specific use case.
This is using GPT-2 model trained from scratch.
I am consistently seeing much faster, but slightly worse results for DistributedDataParallel and was wondering if there are any possible reasons this could occur. Convergence still occurs but the evaluation loss is often slightly worse and alternative metrics we use are also slightly worse as well. The worse results are consistent through many runs and when keeping hyper-parameters the same (learning rate, # gpus, batch size, etc.)
The DistributedDataParallel code is being launch using "-m torch.distributed.launch" as has been recommended.
I load my data into a Datasets object from the huggingface/Datasets library.
Things I have checked:
1. Padding, padding is being treated the same in both models
2. gradient averaging, some information online suggested the gradients may be summed when using DataParallel vs averaged when using Distributed but I found this was not the case looking at the code.
Apologies if this too abstract a question, but I felt I would raise it as I have not seen any discussion of possible regression when switching to DDP.
Thanks! | 02-24-2021 21:19:33 | 02-24-2021 21:19:33 | Hi! This thread https://github.com/huggingface/transformers/issues/10223 might be useful, it sheds light over the issue you mention.<|||||>Hi thank you for linking, it is actually worse in terms of loss, not speed. DDP is significantly faster than DP, which I believe it is supposed to be.<|||||>Oh, sorry about that, I read a bit too fast. Pinging @sgugger who might know what's up.<|||||>I have not experienced this, so no idea of what might be causing the issue. It's also a bit vague and with no reproducer, so very hard to investigate further.<|||||>Thank you and I know it is very vague and will work on something that is reproducible.
Do you have any thoughts as to why this could occur? From my knowledge I would think these would be identical as the gradients from all GPUs are being averaged in both and there is no batch-norm which would be implemented differently.
I'd appreciate any thoughts and will work to find something more reproducible.
Thank you for the help.<|||||>This might be coming from the distributed sampler setting the random seed at each epoch, so trying to set it the same way with a run in DP might help (a bad seed could explain a small difference). It might also interfere with the random masking somehow, but that's far-fetched.
I also have no idea of how much difference you observed in the loss, one thing to try for debugging would also to double check the evaluation loss is the same in DP/DDP for a given model. There might a bug in the way one of them is computed.<|||||>The DistributedSampler looks it sets the seed in a way that would not effect the seed, its first lines of the __iter__ method are:
```
def __iter__(self) -> Iterator[T_co]:
if self.shuffle:
# deterministically shuffle based on epoch and seed
g = torch.Generator()
g.manual_seed(self.seed + self.epoch)
indices = torch.randperm(len(self.dataset), generator=g).tolist() # type: ignore
```
In that case the seed is set in both cases to the default huggingface seed of 42.
For our task we don't have random masking, as we are just doing CausalLM.
The difference in eval loss for one run is roughly a best score of .17 for DP vs .175 for DDP, so slight but consistent across many runs. There are additional task-specific metrics on which it performs significantly worse, which is the main reason I am trying to fix the issue. The eval_loss in both cases uses the default GPT2WithLMHead, with no change to the way it calculates loss.
Correct if you use the most recent version of PyTorch, but it was not always the case.<|||||>Ah, great, thanks for the clarification. Yes, using pytorch 1.7.1 so that shouldn't be an issue<|||||>If the following suggestion might help, one way I approach such problems is logging the loss on every step with its count and noticing when numbers start to diverge - and checking if there is a significant event happening around that time.
For example with DeepSpeed's fp16 default dynamic loss scale enabled odd things were starting to happen around step 20 - until I learned that the scheduler was getting skipped until the loss scale value was small enough and only then it'd kick in - which typically happened around step 20. I'm certain this is not relevant to your situation, but I'm just giving an example. <|||||>Thank you for the suggestion! I will try this<|||||>Hi, after back-tracking on this we found that the padding issue hadn't been fully investigated and that did turn out to be the issue. I was able to use pytorch's all_reduce to communicate the max sequence length across gpus and pad to that amount. I'll paste the code here for the prepare_inputs method in case it helps anyone else bridge the gap between DDP and DP, its written for a bs per gpu of 1, would need a tweak to the torch.cat for larger batch sizes.
```
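# Editorial note: this snippet is assumed to live inside a transformers.Trainer subclass
# (so self.args, self.tokenizer and self.state exist) and to have `import torch` and
# `import torch.distributed as dist` at module level.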
def _prepare_inputs(self, inputs: Dict[str, Union[torch.Tensor, Any]]) -> Dict[str, Union[torch.Tensor, Any]]:
world_size = dist.get_world_size()
self.gpu_group = dist.new_group(list(range(world_size)))
self.world_size = world_size
max_len = torch.tensor(max([len(i) for i in inputs['input_ids']]))
max_len = max_len.to(self.args.device)
dist.all_reduce(max_len, op=dist.ReduceOp.MAX, group=self.gpu_group)
max_len = max_len.cpu()
for k, v in inputs.items():
if isinstance(v, torch.Tensor):
if k in ['input_ids', 'labels']:
v = torch.cat((v, torch.ones(1, max(0, max_len - len(v[0])))*self.tokenizer.pad_token_id), axis=1).long()
#print(f'{k}-{self.args.local_rank}-{self.state.global_step}: {len(v[0])}')
elif k in ['attention_mask']:
v = torch.cat((v, torch.ones(1, max(0, max_len - len(v[0])))), axis=1).long()
inputs[k] = v.to(self.args.device)
if self.args.past_index >= 0 and self._past is not None:
inputs["mems"] = self._past
return inputs
```
I'd suggest that as a first thing to try for people debugging this. We found that padding materially improved our results, and not just by reducing the loss. One can also use all_reduce in place of the DDP model wrapper, as shown here: https://pytorch.org/tutorials/intermediate/dist_tuto.html. It may be helpful for someone else investigating DDP.
Thanks for the suggestions and apologies for misleading on the original post.<|||||>Interesting, thanks for the info and the code! |
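A related, simpler option (a sketch, under the assumption that spending some compute on padding tokens is acceptable): pad every example to a fixed length at tokenization time, so DP and DDP see identical shapes without any extra communication:
```python
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("gpt2")
tokenizer.pad_token = tokenizer.eos_token  # GPT-2 ships without a pad token

batch = tokenizer(
    ["a short example", "a somewhat longer example in the same batch"],
    padding="max_length",
    max_length=512,
    truncation=True,
    return_tensors="pt",
)
```
The trade-off is wasted compute on padding positions compared to the dynamic, all_reduce-based approach above.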
transformers | 10,384 | closed | Fine-tune pretrained Wav2Vec2 on a small custom dataset | I am wondering how to **fine-tune** a pre-trained model on a small speech/audio dataset. I have 10 hours of audio with their transcript.
I would like to fine-tune a model and then use it as described here:
https://huggingface.co/facebook/wav2vec2-large-960h
Thanks! | 02-24-2021 20:47:39 | 02-24-2021 20:47:39 | Patrick is working on it, see #10145 <|||||>I am also searching for fine-tuning of this model.<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored. |
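Until the official fine-tuning support tracked in #10145 is available in a release, here is a rough sketch of what CTC fine-tuning looks like (assumes a transformers version where Wav2Vec2ForCTC accepts labels; `speech` is a 16 kHz float array, and for real fine-tuning you would usually start from the pretrained-only facebook/wav2vec2-base checkpoint rather than the 960h fine-tuned one):
```python
from transformers import Wav2Vec2ForCTC, Wav2Vec2Processor

processor = Wav2Vec2Processor.from_pretrained("facebook/wav2vec2-base-960h")
model = Wav2Vec2ForCTC.from_pretrained("facebook/wav2vec2-base-960h")

inputs = processor(speech, sampling_rate=16_000, return_tensors="pt")
# the CTC tokenizer of this checkpoint expects upper-case transcripts
labels = processor.tokenizer("THE TRANSCRIPT OF THIS CLIP", return_tensors="pt").input_ids

loss = model(inputs.input_values, labels=labels).loss
loss.backward()
```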
transformers | 10,383 | closed | Run GA on every push even on forks | This PR updates the Github Actions YAML file so that PRs opened from forks run the GA tests on every commit.
Fixes https://github.com/huggingface/transformers/issues/10065 | 02-24-2021 20:46:49 | 02-24-2021 20:46:49 | |
transformers | 10,382 | closed | Run GA on forks (Attempt #2) | 02-24-2021 20:40:29 | 02-24-2021 20:40:29 | ||
transformers | 10,381 | closed | Option to output "test predictions" text file with each checkpoint in run_seq2seq.py | Further to this discussion:
https://discuss.huggingface.co/t/how-to-output-test-generations-txt-with-run-seq2seq-py/3825
The prior incarnation of this script would output test generations at each checkpoint, which was very useful for understanding the progress of model training.
The current script...
https://github.com/huggingface/transformers/blob/master/examples/seq2seq/run_seq2seq.py
Seems to only output this text file once, at the end of the last epoch.
If there was a way to enable the previous behavior, I am guessing that would be widely useful.
thanks | 02-24-2021 18:32:26 | 02-24-2021 18:32:26 | May be of interest to @patil-suraj @stas00 @sgugger <|||||>Yes, as I replied in the forums, this functionality was dropped - not sure why it was done, as I wasn't part part of the planning discussion.
I think it was not intentional, the devs were probably unaware it was used and given that the example tests were dropped too it's not surprising it was missed. I propose the dropped examples tests are restored (which will require porting to the new script) which will expose some of the functionality that was removed with it.
Practically, let's identify what else might have been removed and create separate issues besides this one and may be ask the community to help restore/backport the previously working things to the new script(s)?
e.g. one such important thing is the tests that were moved to legacy, so this script is no longer being tested.
p.s. this should be of help restoring/porting the example tests https://github.com/huggingface/transformers/issues/10036<|||||>@bhadreshpsavani, please let us know if you're inspired to take care of this in:
https://github.com/huggingface/transformers/issues/10337#issuecomment-785938863
Thank you.<|||||>Sure @stas00,
I can take care of this with a separate PR or if possible in the same PR,
Thanks<|||||>Correction, as I was refactoring `run_seq2seq.py` I can see now that the code wasn't removed - it's exactly the same. Someone decided to rename the resulting file instead. So the feature hasn't been removed, just renamed.
I'm not attached to either,
1. the original was saving it as "test_generations.txt"
2. the new one as "test_preds_seq2seq.txt"
I think the original name is the most intuitive one.
@sgugger, do you have an opinion here?<|||||>@bhadreshpsavani, so please hold a moment while we are re-modelling `run_seq2seq.py` and then I will update you when the model example is ready to be synced. Thank you!<|||||>PR to restore the original functionality: https://github.com/huggingface/transformers/pull/10428<|||||>OK, the original name has been restored as it used to be, @kingpalethe
As I mentioned in https://github.com/huggingface/transformers/pull/10428 if you'd like to request a new feature to do this on each check point please don't hesitate to make such request.<|||||>@stas00 thanks -- apologies, you are correct. I had hallucinated this behavior. I made a new issue: https://github.com/huggingface/transformers/issues/10439<|||||>All is good.
and now I see that my PR made that script inconsistent with other scripts, but perhaps all scripts should use the same filename for `test_generations.txt`. I can't quite see the point of it having a different name in each script. |
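For anyone who needs per-run generations today, without waiting on the follow-up issue mentioned above, a small hedged workaround is to call Seq2SeqTrainer.predict yourself and write out the decoded outputs (assumes the trainer was built with predict_with_generate=True):
```python
predictions = trainer.predict(test_dataset, max_length=128, num_beams=4)
decoded = tokenizer.batch_decode(predictions.predictions, skip_special_tokens=True)
with open("test_generations.txt", "w") as writer:
    writer.write("\n".join(line.strip() for line in decoded))
```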
transformers | 10,380 | closed | Trainer.train() gets stuck when executed on K8 pods | ## Environment info
- `transformers` version: 4.3.2
- Platform: Linux-4.15.0-76-generic-x86_64-with-debian-buster-sid
- Python version: 3.7.10
- PyTorch version (GPU?): 1.7.1 (True)
- Tensorflow version (GPU?): not installed (NA)
- Using GPU in script?: <fill in>
- Using distributed or parallel set-up in script?: <fill in>
### Who can help
@LysandreJik @sgugger
## Information
Model I am using BertForSequenceClassification
The problem arises when using:
* [ ] my own modified scripts: (give details below)
When I try to start training the model in a K8s pod (in a Kubeflow environment) with an Ubuntu 18.04 image, there is no output or error shown by the function even after 30 minutes of runtime. GPU usage doesn't change either.
```
+-----------------------------------------------------------------------------+
| NVIDIA-SMI 460.32.03 Driver Version: 460.32.03 CUDA Version: 11.2 |
|-------------------------------+----------------------+----------------------+
| GPU Name Persistence-M| Bus-Id Disp.A | Volatile Uncorr. ECC |
| Fan Temp Perf Pwr:Usage/Cap| Memory-Usage | GPU-Util Compute M. |
| | | MIG M. |
|===============================+======================+======================|
| 0 Tesla V100-SXM2... On | 00000000:07:00.0 Off | 0 |
| N/A 28C P0 56W / 300W | 1514MiB / 32510MiB | 0% Default |
| | | N/A |
+-------------------------------+----------------------+----------------------+
| 1 Tesla V100-SXM2... On | 00000000:0A:00.0 Off | 0 |
| N/A 27C P0 56W / 300W | 1134MiB / 32510MiB | 0% Default |
| | | N/A |
+-------------------------------+----------------------+----------------------+
+-----------------------------------------------------------------------------+
| Processes: |
| GPU GI CI PID Type Process name GPU Memory |
| ID ID Usage |
|=============================================================================|
+-----------------------------------------------------------------------------+
```
## Expected behavior
Model to train and output the results | 02-24-2021 17:51:36 | 02-24-2021 17:51:36 | Hi there. Unless you tell us what script you are using and how you are launching it, there is nothing we can do to help.<|||||>@sgugger i'm trying to execute this example script
https://github.com/huggingface/notebooks/blob/master/examples/text_classification.ipynb
This notebook works fine in a Docker container but not in a K8s pod.
The script is launched in a Jupyter server that is hosted on Kubeflow.<|||||>@sgugger Any information on how I can solve the issue?<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored. |
transformers | 10,379 | closed | [firewalled env] OFFLINE mode | This is done - we now have:
* `HF_DATASETS_OFFLINE=1`
* `TRANSFORMERS_OFFLINE=1`
Documented: [here](https://huggingface.co/transformers/master/installation.html#offline-mode)
The transformers-specific issue is here:
-------------------
Similar to `datasets` https://github.com/huggingface/datasets/issues/1939 `transformers` needs to have an OFFLINE mode where it can work w/o ever making a network call to the outside world.
This issue comes from a need to be able to run `transformers` in a firewalled env, which currently makes the software hang until it times out, as it's unable to complete the network calls.
We assume `DATASETS_OFFLINE=1` will already deal with datasets and metrics as I proposed at https://github.com/huggingface/datasets/issues/1939, so this issue is specific to `transformers` only.
I propose the following approach to solving this problem, using the example of `run_seq2seq.py` as a sample program. There are 2 possible ways to going about it.
## 1. Manual
manually download model files, that is transfer to the firewalled instance and run:
```
TRANSFORMERS_OFFLINE=1 run_seq2seq.py --model_name_or_path ./t5-small-local ...
```
`transformers` must not make any network calls and if there is a logic to do that and something is missing it should assert that this or that action requires network and therefore it can't proceed.
## 2. Automatic
In some clouds one can prepare a data storage ahead of time with a normal networked environment but which doesn't have gpus and then one switches to the gpu instance which is firewalled, but it can access all the cached data. This is the ideal situation, since in this scenario we don't have to do anything manually, but simply run the same application twice:
1. on the non-firewalled instance:
```
run_seq2seq.py --model_name_or_path t5-small ...
```
which should download and cached everything.
2. and then immediately after on the firewalled instance, which shares the same filesystem:
```
TRANSFORMERS_OFFLINE=1 run_seq2seq.py --model_name_or_path t5-small ...
```
and the model should be cached by the invocation number 1 and any network calls be skipped and if the logic is missing data it should assert and not try to fetch any data from online.
## Specifics
1. We already have `local_files_only=True` for all 3 `.from_pretrained()` calls, which makes this already possible, but this requires editing software between invocation 1 and 2 in the Automatic scenario, which is very error-prone. Thus I propose that `TRANSFORMERS_OFFLINE=1` turn these flags True from the outside of the system (see the sketch after this list).
2. There are other issues to check, for example in some `examples` scripts we have:
```
with FileLock(".lock") as lock:
nltk.download("punkt", quiet=True)
```
which also issues a network call and under `TRANSFORMERS_OFFLINE=1` it should be skipped and replaced with a check that the corresponding nltk data is already available.
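A quick illustration of both points (illustrative only, not the final implementation):
```python
from transformers import AutoConfig, AutoModelForSeq2SeqLM, AutoTokenizer

local_only = True  # what TRANSFORMERS_OFFLINE=1 would flip on from the outside
config = AutoConfig.from_pretrained("t5-small", local_files_only=local_only)
tokenizer = AutoTokenizer.from_pretrained("t5-small", local_files_only=local_only)
model = AutoModelForSeq2SeqLM.from_pretrained("t5-small", local_files_only=local_only)

# and for the nltk example: only download when the data isn't already cached
import nltk
try:
    nltk.data.find("tokenizers/punkt")
except LookupError:
    nltk.download("punkt", quiet=True)
```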
Thanks.
@julien-c, @sgugger, @LysandreJik | 02-24-2021 17:24:07 | 02-24-2021 17:24:07 | This is done. |
transformers | 10,378 | closed | AttributeError: 'QAModel' object has no attribute 'automatic_optimization' | ## Environment info
- `transformers` version: 4.3.2
- Platform: google colab
- `pytorch-lightning` version (GPU?): 1.2.0
#### Models:
MT5ForConditionalGeneration ('google/mt5-base')
Library:
- tokenizers: @n1t0, @LysandreJik
- trainer: @sgugger
- pipelines: @LysandreJik
Model I am using (MT5ForConditionalGeneration ('google/mt5-base')):
The problem arises when using:
pl.LightningDataModule with T5ForConditionalGeneration
* [ ] my own modified scripts: (give details below)
```
class QAModel(pl.LightningDataModule):
def __init__(self):
super().__init__()
self.model = MT5ForConditionalGeneration.from_pretrained(MODEL_NAME, return_dict=True)
def forward(self, input_ids, attention_mask, labels=None):
output = self.model(
input_ids=input_ids,
attention_mask=attention_mask,
labels=labels
)
return output.loss, output.logits
def training_step(self, batch, batch_idx):
input_ids = batch['input_ids']
attention_mask = batch['attention_mask']
labels = batch['labels']
loss, outputs = self(input_ids, attention_mask, labels)
self.log('train_loss', loss, prog_bar=True, logger=True)
return loss
def validation_step(self, batch, batch_idx):
input_ids = batch['input_ids']
attention_mask = batch['attention_mask']
labels = batch['labels']
loss, outputs = self(input_ids, attention_mask, labels)
self.log('val_loss', loss, prog_bar=True, logger=True)
return loss
def test_step(self, batch, batch_idx):
input_ids = batch['input_ids']
attention_mask = batch['attention_mask']
labels = batch['labels']
loss, outputs = self(input_ids, attention_mask, labels)
self.log('train_loss', loss, prog_bar=True, logger=True)
return loss
def configure_optimizers(self):
print('done')
return AdamW(self.parameters(), lr=0.0001)
model = QAModel()
trainer.fit(model, data_module)
```
That is my colab code
https://colab.research.google.com/drive/1wRYnuQhkO8UvE2CtsJ09dGHy4R_nVTPd?usp=sharing
Thanks a lot!
**** | 02-24-2021 15:21:06 | 02-24-2021 15:21:06 | Hello! This issue seems to be with PyTorch Lightning rather than with Transformers.<|||||>> Hello! This issue seems to be with PyTorch Lightning rather than with Transformers.
Well, I've tried to pass MT5ForConditionalGeneration directly to the fit() function, but got the following error:
ModuleAttributeError: 'MT5ForConditionalGeneration' object has no attribute 'automatic_optimization'<|||||>As you'll see in your error:
```
/usr/local/lib/python3.7/dist-packages/pytorch_lightning/trainer/connectors/model_connector.py in copy_trainer_model_properties(self, model)
```
this originates from a PyTorch-Lightning error. Transformers has no `automatic_optimization` parameter, our models are plain PyTorch models.
Thank you.<|||||>thank you a lot |
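For later readers: judging from the snippet above, the likely root cause is that QAModel subclasses `pl.LightningDataModule` (a data container) instead of `pl.LightningModule`, which is what `trainer.fit` expects as the model and which defines `automatic_optimization`. A minimal sketch of the change (hedged, since only the posted snippet is visible here):
```python
import pytorch_lightning as pl
from transformers import MT5ForConditionalGeneration

class QAModel(pl.LightningModule):  # LightningModule, not LightningDataModule
    def __init__(self):
        super().__init__()
        self.model = MT5ForConditionalGeneration.from_pretrained("google/mt5-base", return_dict=True)
    # forward / training_step / configure_optimizers as in the snippet above
```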
transformers | 10,377 | closed | Training LongformerForQuestionAnswering on TriviaQA | Hello,
Could you please give a short explanation on how to retrain `LongformerForQuestionAnswering` in order to receive these weights: `allenai/longformer-large-4096-finetuned-triviaqa` (https://huggingface.co/allenai/longformer-large-4096-finetuned-triviaqa/tree/main).
Thank you,
Sapir | 02-24-2021 14:56:47 | 02-24-2021 14:56:47 | Pinging @ibeltagy and @patrickvonplaten who would know better.<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored. |
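No full training recipe ended up in this thread (to my knowledge, the allenai/longformer GitHub repository is where the TriviaQA fine-tuning script lives), but here is a small inference-side sketch of the released checkpoint for anyone who only needs to use it:
```python
import torch
from transformers import LongformerForQuestionAnswering, LongformerTokenizerFast

name = "allenai/longformer-large-4096-finetuned-triviaqa"
tokenizer = LongformerTokenizerFast.from_pretrained(name)
model = LongformerForQuestionAnswering.from_pretrained(name)

question = "Who wrote Crime and Punishment?"
text = "Crime and Punishment is a novel by Fyodor Dostoevsky."
inputs = tokenizer(question, text, return_tensors="pt")
outputs = model(**inputs)

start = torch.argmax(outputs.start_logits)
end = torch.argmax(outputs.end_logits) + 1
print(tokenizer.decode(inputs["input_ids"][0][start:end]))
```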
transformers | 10,376 | closed | UnicodeDecodeError: 'utf-8' codec can't decode byte 0x80 in position 64: invalid start byte | ## Description
I have fine-tuned a transformers model ('bert-base') and saved the checkpoint. Now I want to use it elsewhere with the checkpoints I saved. But if I still use bert-base in my new env it will not retrieve my checkpoints, so I want to know the proper way to make my model (a PyTorch Lightning model) load the checkpoint I saved.
## Relevant info on my model class
```py
class NER_Model(pl.LightningModule):
def __init__(self, hyperparams, model_parameters, dataset_infos, extra_infos):
super(NER_Model, self).__init__()
self.model_name = "/checkpoins_folder/epoch=2-step=167-v1.ckpt"
self.model = AutoModelForTokenClassification.from_pretrained(
self.model_name,
num_labels=7,
output_attentions = False,
output_hidden_states = False
)
def predict(self, X: str):
self.step = "Deployment"
self.test_pred_tags = []
batch = self.tokenizer.encode_plus(X, return_tensors="pt")
batch["attention_masks"] = torch.ones_like(batch["input_ids"])
batch = dict((key, input.to( self.device)) for key, input in batch.items())
return self.test_step(batch, None)
## Call
model = NER_Model.load_from_checkpoint(
checkpoint_path = "/checkpoins_folder/epoch=2-step=167-v1.ckpt",
map_location={"cuda":"cpu"},
hyperparams=hyperparams,
model_parameters=model_parameters,
dataset_infos=dataset_infos,
extra_infos=extra_infos,
)
```
##Error:
```out
---------------------------------------------------------------------------
UnicodeDecodeError Traceback (most recent call last)
<ipython-input-26-c4ae8a2442c3> in <module>()
6 model_parameters=model_parameters,
7 dataset_infos=dataset_infos,
----> 8 extra_infos=extra_infos,
9 )
7 frames
/usr/local/lib/python3.7/dist-packages/pytorch_lightning/core/saving.py in load_from_checkpoint(cls, checkpoint_path, map_location, hparams_file, strict, **kwargs)
154 checkpoint[cls.CHECKPOINT_HYPER_PARAMS_KEY].update(kwargs)
155
--> 156 model = cls._load_model_state(checkpoint, strict=strict, **kwargs)
157 return model
158
/usr/local/lib/python3.7/dist-packages/pytorch_lightning/core/saving.py in _load_model_state(cls, checkpoint, strict, **cls_kwargs_new)
196 _cls_kwargs = {k: v for k, v in _cls_kwargs.items() if k in cls_init_args_name}
197
--> 198 model = cls(**_cls_kwargs)
199
200 # give model a chance to load something
<ipython-input-10-ed914cb098e8> in __init__(self, hyperparams, model_parameters, dataset_infos, extra_infos)
61 num_labels=len(self.tags_infos_dict["tag2idx"]),
62 output_attentions = self.output_attentions,
---> 63 output_hidden_states = self.output_hidden_states
64 )
65
/usr/local/lib/python3.7/dist-packages/transformers/models/auto/modeling_auto.py in from_pretrained(cls, pretrained_model_name_or_path, *model_args, **kwargs)
1648 if not isinstance(config, PretrainedConfig):
1649 config, kwargs = AutoConfig.from_pretrained(
-> 1650 pretrained_model_name_or_path, return_unused_kwargs=True, **kwargs
1651 )
1652
/usr/local/lib/python3.7/dist-packages/transformers/models/auto/configuration_auto.py in from_pretrained(cls, pretrained_model_name_or_path, **kwargs)
366 {'foo': False}
367 """
--> 368 config_dict, _ = PretrainedConfig.get_config_dict(pretrained_model_name_or_path, **kwargs)
369
370 if "model_type" in config_dict:
/usr/local/lib/python3.7/dist-packages/transformers/configuration_utils.py in get_config_dict(cls, pretrained_model_name_or_path, **kwargs)
425 )
426 # Load config dict
--> 427 config_dict = cls._dict_from_json_file(resolved_config_file)
428
429 except EnvironmentError as err:
/usr/local/lib/python3.7/dist-packages/transformers/configuration_utils.py in _dict_from_json_file(cls, json_file)
508 def _dict_from_json_file(cls, json_file: Union[str, os.PathLike]):
509 with open(json_file, "r", encoding="utf-8") as reader:
--> 510 text = reader.read()
511 return json.loads(text)
512
/usr/lib/python3.7/codecs.py in decode(self, input, final)
320 # decode input (taking the buffer into account)
321 data = self.buffer + input
--> 322 (result, consumed) = self._buffer_decode(data, self.errors, final)
323 # keep undecoded input until the next call
324 self.buffer = data[consumed:]
UnicodeDecodeError: 'utf-8' codec can't decode byte 0x80 in position 64: invalid start byte
```
| 02-24-2021 14:33:10 | 02-24-2021 14:33:10 | You're using `AutoModelForTokenClassification` but you're specifying a `ckpt` file. It needs a folder with a `config.json`, and a state dict. Here it's expecting a PyTorch state dict named `pytorch_model.bin` since you're using the PyTorch version.
If you've retrieved your checkpoint from the original BERT repository, I recommend you take a look at the following documentation [Converting Tensorflow Checkpoints](https://huggingface.co/transformers/converting_tensorflow_models.html).
Also, you're using a PyTorch model's `from_pretrained` model, so I point you to the documentation of that method [here](https://huggingface.co/transformers/main_classes/model.html#transformers.PreTrainedModel.from_pretrained).<|||||>@LysandreJik first of all thanks for replying.
Is it the proper way to retrieve a model from a checkpoint generated by me? When I pass `self.model=bert-base` on my new env it doesnt retrieve my checkpoints, so when I call `model.precit('any text ')` it gives me wrong results [maybe because it gets original weights from bert-base instead of my finne tunned weights].
So following your suggestion, if changing `self.model` for my finne tunned model is the correct way, how do I generate `config.json` and `pytorch_model.bin` from my **trained model**<|||||>Well you should use the `save_pretrained` method on your model. I haven't used PyTorch Lightning but if your module is named `model` and that the transformers is the `model` attribute of that module, it would be something like:
```py
model.model.save_pretrained("directory")
```<|||||>I did it and I worked. But it stills giving wrong results when I call the predict. On my model I have inserted the path to the created dir instead of bert-base.
self.model = AutoModelForTokenClassification.from_pretrained(
"/content/drive/MyDrive/CityZen/Explorations/BERT/weights/binaries",
num_labels=len(self.tags_infos_dict["tag2idx"]),
output_attentions = self.output_attentions,
output_hidden_states = self.output_hidden_states
) |
transformers | 10,375 | closed | DPR decode_best_spans include spans from title | ## Environment info
- `transformers` version: 4.1.1
### Who can help
@lhoestq, @LysandreJik
## Information
I believe there is a bug on the following line
```python
passage_offset = sequence_ids.index(self.sep_token_id, 2) + 1 # second sep id
```
It is on file `src/transformers/models/dpr/tokenization_dpr.py`. Next some context to this line:
```python
class CustomDPRReaderTokenizerMixin:
...
def decode_best_spans(...) -> List[DPRSpanPrediction]:
...
for doc_id in sorted_docs:
...
# assuming question & title information is at the beginning of the sequence
passage_offset = sequence_ids.index(self.sep_token_id, 2) + 1 # second sep id
...
return nbest_spans_predictions[:num_spans]
```
The comments make me think that `passage_offset` refers to the start of the passage, after question and title. I feel that the intent behind `sequence_ids.index(self.sep_token_id, 2)` was to select the second position where `self.sep_token_id` appears, but this doesn't happen, as this selects the first occurrence of sep_token_id starting from token number 2.
I believe an easy fix would be:
```python
title_offset = sequence_ids.index(self.sep_token_id) + 1 # first sep id
passage_offset = sequence_ids.index(self.sep_token_id, title_offset) + 1 # second sep id
```
| 02-24-2021 13:43:08 | 02-24-2021 13:43:08 | Yes that's totally true. Good catch !
Could you open a PR to fix that please ?
Maybe this could have affected the performance of the DPR Reader a bit, but probably not significantly though since the logits of the tokens in the title have very low values. The model was trained to return answers from the passage, not from the title.<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored. |
transformers | 10,374 | closed | Fix None in add_token_positions - issue #10210 | Related to the issue #10210
I fix the error in that way, can you confirm is right?
@joeddav @sgugger
Kind regards,
Andrea | 02-24-2021 13:36:43 | 02-24-2021 13:36:43 | Hello @joeddav,
Yes the proposed change works!
I have updated the commit.
Thank you for the attention,
Andrea |
transformers | 10,373 | closed | [Documentation issue] Sequence to sequence models | Most models have their example docstrings appended using the `add_code_sample_docstrings` method. This method checks the name of the architecture, and according to its suffix, adds the corresponding sequence.
However, this isn't perfect with the difference between sequence-to-sequence and enc/dec only models, which can have the same suffix: `BertModel` and `MarianModel` have the same suffixes, but should not have the same docstrings, the latter needing `decoder_input_ids` in order to work.
I propose to return a different code sample according to the architecture type (seq-to-seq vs enc/dec). The signature of the method that updates the code sample is the following:
https://github.com/huggingface/transformers/blob/2d458b2c7d6fb1dd5b2361938d1b5bd4c2106479/src/transformers/file_utils.py#L884-L886
There are in my opinion two ways to go about it:
- We can infer the type from the `output_type`. We've mentioned some while ago that having inheritance with model outputs and being able to identify categories of outputs from their classes would make sense for a potential pipeline-v2 implementation, this could be implemented here to detect if a model is seq-2-seq or not.
- We could add another argument to the signature to mention whether it's seq-2-seq or not. However, this isn't very future-proof and can result in increased complexity if we ever need an additional separation across architectures.
@patrickvonplaten @sgugger @patil-suraj looking forward to your feedback. I tend to prefer the first option even if it requires a bit more work.
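A rough illustration of option 1 (the helper name is hypothetical, only meant to show the idea):
```python
from transformers.modeling_outputs import (
    Seq2SeqLMOutput,
    Seq2SeqModelOutput,
    Seq2SeqSequenceClassifierOutput,
)

SEQ2SEQ_OUTPUT_TYPES = (Seq2SeqModelOutput, Seq2SeqLMOutput, Seq2SeqSequenceClassifierOutput)

def _is_seq2seq(output_type) -> bool:
    # pick the encoder-decoder code sample (which passes decoder_input_ids)
    # whenever the documented output class belongs to the seq2seq family
    return issubclass(output_type, SEQ2SEQ_OUTPUT_TYPES)
```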
Related issue: https://github.com/huggingface/transformers/issues/10368 | 02-24-2021 13:33:41 | 02-24-2021 13:33:41 | Option 1) seems the way to go for me as well here!<|||||>Said the same in private but realized I didn't put it here. So option 1 it is!<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored.<|||||>Don't close this one robot!<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored. |
transformers | 10,372 | closed | deprecated reference `tokenizer.max_len` in glue.py (PR #10220) | There is a deprecated reference to `tokenizer.max_len` that should be replaced with `tokenizer.model_max_length` - similar to [issue 8739](https://github.com/huggingface/transformers/issues/8739) and [PR 8604](https://github.com/huggingface/transformers/pull/8604).
See error example [in Colab here](https://colab.research.google.com/gist/poedator/f8776349e5c625ce287fc6fcd312fa1e/tokenizer-max_len-error-in-transformers_glue.ipynb). it causes `AttributeError: 'BertTokenizer' object has no attribute 'max_len'`
The error happens when `glue_convert_examples_to_features()` is called without `max_length` parameter specified. In that case [line 119](https://github.com/huggingface/transformers/blob/master/src/transformers/data/processors/glue.py#L119) with wrong reference gets called.
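The change is essentially a one-line substitution; a hedged sketch of what the guard around line 119 becomes:
```python
# in glue_convert_examples_to_features (src/transformers/data/processors/glue.py)
if max_length is None:
    max_length = tokenizer.model_max_length  # was: tokenizer.max_len (removed attribute)
```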
I submitted a [simple PR #10220](https://github.com/huggingface/transformers/pull/10220). It should be able to fix this issue. | 02-24-2021 10:49:09 | 02-24-2021 10:49:09 | Thanks for opening a PR which fixes the issue! I just merged it.
|
transformers | 10,371 | open | Load pretrained model except the head layer for a specific downstream task | # 🚀 Feature request
It would be nice to have a flag for the `from_pretrained` method that indicates whether to load the last layer or not. This feature is needed for transfer learning.
## Motivation
I have trained a model on a specific dataset for a downstream task. Now, I need to train another model on a similar dataset with different labels. I know that the previous model has learned features from the previous dataset, so the new model doesn't need to start from scratch. When I try to load the first model with the `from_pretrained` method, it returns a size mismatch error due to the last layer, which has a different shape for a different number of labels. If there were a flag to load or not load the last layer, I could initialize the last layer randomly and continue my training with transfer learning.
| 02-24-2021 09:50:17 | 02-24-2021 09:50:17 | for now, how can I load pretrained models that have different prediction heads? thanks!<|||||>For now you can discard the head by passing through a base model:
```py
from transformers import AutoModelForSequenceClassification, AutoModel
pretrained_with_head = AutoModelForSequenceClassification.from_pretrained("distilbert-base-uncased-distilled-squad")
pretrained_with_head.save_pretrained(directory)
# Model saved in directory has the head
pretrained_no_head = AutoModel.from_pretrained(directory)
pretrained_no_head.save_pretrained(directory)
# Model saved in directory no longer has the head
pretrained_with_head = AutoModelForSequenceClassification.from_pretrained(directory)
# Loaded model has the full transformer, but the head is randomly initialized
```<|||||>Hi @LysandreJik,
Is this issue being addressed elsewhere?
If not, would like to work on it. <|||||>@vimarshc this issue has not been addressed elsewhere. Feel free to draft a proposal in an issue/PR so that we can take a look and discuss! Thank you!<|||||>Hi @LysandreJik is this still available for contribution? If yes, I would love to work on it. It would be helpful if you could add a reference draft proposal. Thanks!<|||||>This has been somewhat addressed by https://github.com/huggingface/transformers/pull/12664 |
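For readers finding this now: the PR referenced above added an `ignore_mismatched_sizes` argument to `from_pretrained` (available in recent releases), which covers the use case described here; a short sketch:
```python
from transformers import AutoModelForSequenceClassification

# the mismatched classification head is re-initialized instead of raising a size-mismatch error
model = AutoModelForSequenceClassification.from_pretrained(
    "path/to/previously-finetuned-model",
    num_labels=7,
    ignore_mismatched_sizes=True,
)
```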
transformers | 10,370 | closed | ReformerForQuestionAnswering : int() argument must be a string, a bytes-like object or a number, not 'NoneType' | ## Environment info
- `transformers` version:
- Platform:
- Python version: 3.7.10
- PyTorch version (GPU?): 1.7
- Using GPU in script?: Yes
- Using distributed or parallel set-up in script?: No
### Who can help
@patrickvonplaten
## Information
Model I am using (Bert, XLNet ...): Reformer
The problem arises when using:
* [ ] my own modified scripts: performing a backward() after passing the query and text to the `ReformerForQuestionAnswering` model.
The tasks I am working on is:
* [ ] an official GLUE/SQUaD task: a subset of SQuAD
## To reproduce
Steps to reproduce the behavior:
Performing backward on the loss throws an error.
Minimal code to reproduce the error.
```
from transformers import ReformerTokenizer, ReformerForQuestionAnswering
import torch
tokenizer = ReformerTokenizer.from_pretrained('google/reformer-crime-and-punishment')
model = ReformerForQuestionAnswering.from_pretrained('google/reformer-crime-and-punishment')
question, text = "Who was Jim Henson?", "Jim Henson was a nice puppet"
inputs = tokenizer(question, text, return_tensors='pt')
start_positions = torch.tensor([1])
end_positions = torch.tensor([3])
outputs = model(**inputs, start_positions=start_positions, end_positions=end_positions)
loss = outputs.loss
loss.backward()
```
Error Traceback
```
create_graph)
219 retain_graph=retain_graph,
220 create_graph=create_graph)
--> 221 torch.autograd.backward(self, gradient, retain_graph, create_graph)
222
223 def register_hook(self, hook):
/usr/local/lib/python3.7/dist-packages/torch/autograd/__init__.py in backward(tensors, grad_tensors, retain_graph, create_graph, grad_variables)
130 Variable._execution_engine.run_backward(
131 tensors, grad_tensors_, retain_graph, create_graph,
--> 132 allow_unreachable=True) # allow_unreachable flag
133
134
/usr/local/lib/python3.7/dist-packages/torch/autograd/function.py in apply(self, *args)
87 def apply(self, *args):
88 # _forward_cls is defined by derived class
---> 89 return self._forward_cls.backward(self, *args) # type: ignore
90
91
/usr/local/lib/python3.7/dist-packages/transformers/models/reformer/modeling_reformer.py in backward(***failed resolving arguments***)
1673 head_mask=head_mask[len(layers) - idx - 1],
1674 attention_mask=attention_mask,
-> 1675 buckets=buckets,
1676 )
1677
/usr/local/lib/python3.7/dist-packages/transformers/models/reformer/modeling_reformer.py in backward_pass(self, next_attn_output, hidden_states, grad_attn_output, grad_hidden_states, attention_mask, head_mask, buckets)
1527
1528 # set seed to have correct dropout
-> 1529 torch.manual_seed(self.feed_forward_seed)
1530 # g(Y_1)
1531 res_hidden_states = self.feed_forward(next_attn_output)
/usr/local/lib/python3.7/dist-packages/torch/random.py in manual_seed(seed)
30 `0xffff_ffff_ffff_ffff + seed`.
31 """
---> 32 seed = int(seed)
33 import torch.cuda
34
TypeError: int() argument must be a string, a bytes-like object or a number, not 'NoneType'
```
From debugging, I believe that the error was caused because the `self.feed_forward_seed` in `ReformerLayer` class is `None`.
I have tried the same code with Longformer and it was working perfectly.
## Expected behavior
`loss.backward()` running properly. | 02-24-2021 08:29:58 | 02-24-2021 08:29:58 | Hey @harikc456,
The problem is that the model is not put into training mode. If you run the following code:
```python
from transformers import ReformerTokenizer, ReformerForQuestionAnswering
from transformers.models.reformer.modeling_reformer import PositionEmbeddings
import torch
tokenizer = ReformerTokenizer.from_pretrained('google/reformer-crime-and-punishment')
model = ReformerForQuestionAnswering.from_pretrained('google/reformer-crime-and-punishment')
# change to position embeddings to prevent error
model.reformer.embeddings.position_embeddings = PositionEmbeddings(model.config)
question, text = "Who was Jim Henson?", "Jim Henson was a nice puppet"
inputs = tokenizer(question, text, return_tensors='pt')
start_positions = torch.tensor([1])
end_positions = torch.tensor([3])
outputs = model(**inputs, start_positions=start_positions, end_positions=end_positions)
loss = outputs.loss
loss.backward()
```
you can see that the code runs without error.<|||||>@patrickvonplaten
Hello, I've just come across the same issue.
I tried the code below,
``` python
from transformers import ReformerTokenizer, ReformerForQuestionAnswering
from transformers.models.reformer.modeling_reformer import PositionEmbeddings
import torch
tokenizer = ReformerTokenizer.from_pretrained('google/reformer-crime-and-punishment')
model = ReformerForQuestionAnswering.from_pretrained('google/reformer-crime-and-punishment')
# change to position embeddings to prevent error
model.reformer.embeddings.position_embeddings = PositionEmbeddings(model.config)
question, text = "Who was Jim Henson?", "Jim Henson was a nice puppet"
inputs = tokenizer(question, text, return_tensors='pt')
start_positions = torch.tensor([1])
end_positions = torch.tensor([3])
outputs = model(**inputs, start_positions=start_positions, end_positions=end_positions)
loss = outputs.loss
loss.backward()
```
and got the following error message.
```
Some weights of the model checkpoint at google/reformer-crime-and-punishment were not used when initializing ReformerForQuestionAnswering: ['lm_head.bias', 'lm_head.decoder.weight', 'lm_head.decoder.bias']
- This IS expected if you are initializing ReformerForQuestionAnswering from the checkpoint of a model trained on another task or with another architecture (e.g. initializing a BertForSequenceClassification model from a BertForPreTraining model).
- This IS NOT expected if you are initializing ReformerForQuestionAnswering from the checkpoint of a model that you expect to be exactly identical (initializing a BertForSequenceClassification model from a BertForSequenceClassification model).
Some weights of ReformerForQuestionAnswering were not initialized from the model checkpoint at google/reformer-crime-and-punishment and are newly initialized: ['reformer.encoder.layers.0.attention.self_attention.mask_value_float16', 'reformer.encoder.layers.0.attention.self_attention.mask_value_float32', 'reformer.encoder.layers.1.attention.self_attention.self_mask_value_float16', 'reformer.encoder.layers.1.attention.self_attention.self_mask_value_float32', 'reformer.encoder.layers.1.attention.self_attention.mask_value_float16', 'reformer.encoder.layers.1.attention.self_attention.mask_value_float32', 'reformer.encoder.layers.2.attention.self_attention.mask_value_float16', 'reformer.encoder.layers.2.attention.self_attention.mask_value_float32', 'reformer.encoder.layers.3.attention.self_attention.self_mask_value_float16', 'reformer.encoder.layers.3.attention.self_attention.self_mask_value_float32', 'reformer.encoder.layers.3.attention.self_attention.mask_value_float16', 'reformer.encoder.layers.3.attention.self_attention.mask_value_float32', 'reformer.encoder.layers.4.attention.self_attention.mask_value_float16', 'reformer.encoder.layers.4.attention.self_attention.mask_value_float32', 'reformer.encoder.layers.5.attention.self_attention.self_mask_value_float16', 'reformer.encoder.layers.5.attention.self_attention.self_mask_value_float32', 'reformer.encoder.layers.5.attention.self_attention.mask_value_float16', 'reformer.encoder.layers.5.attention.self_attention.mask_value_float32', 'qa_outputs.weight', 'qa_outputs.bias']
You should probably TRAIN this model on a down-stream task to be able to use it for predictions and inference.
/path/to/.pyenv/versions/anaconda3-2020.07/lib/python3.8/site-packages/torch/nn/modules/container.py:435: UserWarning: Setting attributes on ParameterList is not supported.
warnings.warn("Setting attributes on ParameterList is not supported.")
---------------------------------------------------------------------------
TypeError Traceback (most recent call last)
<ipython-input-1-60eb084822c0> in <module>
16 outputs = model(**inputs, start_positions=start_positions, end_positions=end_positions)
17 loss = outputs.loss
---> 18 loss.backward()
~/.pyenv/versions/anaconda3-2020.07/lib/python3.8/site-packages/torch/tensor.py in backward(self, gradient, retain_graph, create_graph)
219 retain_graph=retain_graph,
220 create_graph=create_graph)
--> 221 torch.autograd.backward(self, gradient, retain_graph, create_graph)
222
223 def register_hook(self, hook):
~/.pyenv/versions/anaconda3-2020.07/lib/python3.8/site-packages/torch/autograd/__init__.py in backward(tensors, grad_tensors, retain_graph, create_graph, grad_variables)
128 retain_graph = create_graph
129
--> 130 Variable._execution_engine.run_backward(
131 tensors, grad_tensors_, retain_graph, create_graph,
132 allow_unreachable=True) # allow_unreachable flag
~/.pyenv/versions/anaconda3-2020.07/lib/python3.8/site-packages/torch/autograd/function.py in apply(self, *args)
87 def apply(self, *args):
88 # _forward_cls is defined by derived class
---> 89 return self._forward_cls.backward(self, *args) # type: ignore
90
91
~/.pyenv/versions/anaconda3-2020.07/lib/python3.8/site-packages/transformers/models/reformer/modeling_reformer.py in backward(***failed resolving arguments***)
1666
1667 # backprop
-> 1668 output = layer.backward_pass(
1669 next_attn_output=output.attn_output,
1670 hidden_states=output.hidden_states,
~/.pyenv/versions/anaconda3-2020.07/lib/python3.8/site-packages/transformers/models/reformer/modeling_reformer.py in backward_pass(self, next_attn_output, hidden_states, grad_attn_output, grad_hidden_states, attention_mask, head_mask, buckets)
1527
1528 # set seed to have correct dropout
-> 1529 torch.manual_seed(self.feed_forward_seed)
1530 # g(Y_1)
1531 res_hidden_states = self.feed_forward(next_attn_output)
~/.pyenv/versions/anaconda3-2020.07/lib/python3.8/site-packages/torch/random.py in manual_seed(seed)
30 `0xffff_ffff_ffff_ffff + seed`.
31 """
---> 32 seed = int(seed)
33 import torch.cuda
34
TypeError: int() argument must be a string, a bytes-like object or a number, not 'NoneType'
```
I first tried to use:
```python
tokenizer = AutoTokenizer.from_pretrained("google/reformer-crime-and-punishment")
model = AutoModelForSequenceClassification.from_pretrained(
"google/reformer-crime-and-punishment", return_dict=True
)
```
It failed, then I found this issue and added:
```
# change to position embeddings to prevent error
model.reformer.embeddings.position_embeddings = PositionEmbeddings(model.config)
```
However, the same error occurs.
- `transformers` version: 4.1.1
- Platform: Linux-4.15.0-135-generic-x86_64-with-glibc2.10
- Python version: 3.8.3
- PyTorch version (GPU?): 1.7.1 (True)
- Tensorflow version (GPU?): 2.4.1 (False)
- Using GPU in script?: <fill in>
- Using distributed or parallel set-up in script?: <fill in>
Maybe the problem is that the version of Transformers I am using for this is old?
Thank you in advance.<|||||>It seems that the same issue occurs when I updated the transformers to the latest stable version via pip.
- `transformers` version: 4.4.1
- Platform: Linux-4.15.0-135-generic-x86_64-with-glibc2.10
- Python version: 3.8.3
- PyTorch version (GPU?): 1.7.1 (True)
- Tensorflow version (GPU?): 2.4.1 (False)
- Using GPU in script?: <fill in>
- Using distributed or parallel set-up in script?: <fill in>
Is the problem depending on the version of some other library?<|||||>Excuse me for my frequent posting.
Instead of overwriting `position_embeddings`,
inserting `model.train()` seems to work (but with another issue).
```python
from transformers import ReformerTokenizer, ReformerForQuestionAnswering
from transformers.models.reformer.modeling_reformer import PositionEmbeddings
import torch
tokenizer = ReformerTokenizer.from_pretrained('google/reformer-crime-and-punishment')
model = ReformerForQuestionAnswering.from_pretrained('google/reformer-crime-and-punishment')
# # change to position embeddings to prevent error
# model.reformer.embeddings.position_embeddings = PositionEmbeddings(model.config)
model.train()
question, text = "Who was Jim Henson?", "Jim Henson was a nice puppet"
inputs = tokenizer(question, text, return_tensors='pt')
start_positions = torch.tensor([1])
end_positions = torch.tensor([3])
outputs = model(**inputs, start_positions=start_positions, end_positions=end_positions)
loss = outputs.loss
loss.backward()
```
The different error message is shown, but it seems can be treated by just doing padding.
```
~/.pyenv/versions/anaconda3-2020.07/lib/python3.8/site-packages/transformers/models/reformer/modeling_reformer.py in forward(self, position_ids)
154
155 if self.training is True:
--> 156 assert (
157 reduce(mul, self.axial_pos_shape) == sequence_length
158 ), "If training, make sure that config.axial_pos_shape factors: {} multiply to sequence length. Got prod({}) != sequence_length: {}. You might want to consider padding your sequence length to {} or changing config.axial_pos_shape.".format(
AssertionError: If training, make sure that config.axial_pos_shape factors: (512, 1024) multiply to sequence length. Got prod((512, 1024)) != sequence_length: 28. You might want to consider padding your sequence length to 524288 or changing config.axial_pos_shape.
```
I'm now trying padding the input, and it seems working.
```
tokenizer.pad_token = tokenizer.eos_token
inputs = tokenizer(question, text, padding='max_length', truncation=True, max_length=524288, return_tensors='pt')
```
I apologize if this is not an appropriate solution.<|||||>We could maybe add a better error message that fires when Reformer is not in training mode, but one runs `.backward()`. @forest1988 if you want feel free to open a PR :-)<|||||>@patrickvonplaten
Thanks, I'll open a PR!
I'm a little busy right now, but I'll make time to work on it soon.<|||||>Hi @patrickvonplaten,
Sorry to be late. I've just opened PR #11117 regarding this issue. All checks have passed.
Could you please have a look at it when you have time? |
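A small, hypothetical sketch of the kind of guard discussed above (the actual check added in PR #11117 may differ):
```python
import torch

def check_reformer_backward_ready(layer):
    # `layer` is assumed to be a ReformerLayer; feed_forward_seed is only set when
    # the forward pass ran in training mode, so a None value usually means that
    # model.train() was never called before loss.backward().
    if getattr(layer, "feed_forward_seed", None) is None:
        raise ValueError(
            "feed_forward_seed is None; put the model into training mode with "
            "model.train() before calling loss.backward() on a Reformer model."
        )
    torch.manual_seed(layer.feed_forward_seed)
```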
transformers | 10,369 | closed | Why should `attn_weights` be reshaped twice in BartAttention ? | Can anybody help to understand that? https://github.com/huggingface/transformers/blob/3437d12134893dd7b45737e422e105e511341297/src/transformers/models/bart/modeling_bart.py#L238-L244
| 02-24-2021 08:19:06 | 02-24-2021 08:19:06 | @patrickvonplaten excuse me, could you share any idea please<|||||>See this PR for more information: https://github.com/huggingface/transformers/pull/8747<|||||>> See this PR for more information: #8747
Almost understand, although it's still weird. As #8747 explained,
```
This ensures that the returned hidden state tensors lie upstream in the graph from the model outputs (allowing their gradients to be computed)
```
Thanks! |
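A tiny, self-contained torch snippet (not from the thread) that illustrates the explanation quoted above: because the tensor used downstream is re-derived from the reshaped copy, the reshaped copy lies on the path to the loss, so gradients with respect to it can be computed.
```python
import torch

x = torch.randn(2, 3, requires_grad=True)
attn = x * 2                       # stand-in for attn_weights
attn_reshaped = attn.view(6)       # the tensor that gets returned to the caller
attn = attn_reshaped.view(2, 3)    # re-derive the downstream tensor from the returned one

loss = attn.sum()
# attn_reshaped is now upstream of `loss`, so this succeeds:
(grad,) = torch.autograd.grad(loss, attn_reshaped, retain_graph=True)
print(grad.shape)  # torch.Size([6])

# If the second view were skipped and the original `attn` were used downstream,
# torch.autograd.grad(loss, attn_reshaped) would fail with an error saying the
# tensor was not used in the graph.
```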
transformers | 10,368 | closed | TFMarianModel from_pretrained can't load weights | ## Environment info
- `transformers` version: 4.3.2
- Platform: Windows-7-6.1.7601-SP1
- Python version: 3.6.6
- PyTorch version (GPU?): 1.5.1+cpu (False)
- Tensorflow version (GPU?): 2.3.0 (False)
- Using GPU in script?: No
- Using distributed or parallel set-up in script?: No
### Who can help
@patrickvonplaten
## Information
Model I am using (Bert, XLNet ...): TFMarianMT
The problem arises when using:
* [+] the official example scripts: (give details below)
## To reproduce
```python
from transformers import MarianTokenizer, TFMarianModel
model = TFMarianModel.from_pretrained('Helsinki-NLP/opus-mt-en-de')
```
Steps to reproduce the behavior:
1. Run the above code
2. Get an error:
> Exception has occurred: OSError (note: full exception trace is shown but execution is paused at: _run_module_as_main)
> Can't load weights for 'Helsinki-NLP/opus-mt-en-de'. Make sure that:
>
> - 'Helsinki-NLP/opus-mt-en-de' is a correct model identifier listed on 'https://huggingface.co/models'
>
> - or 'Helsinki-NLP/opus-mt-en-de' is the correct path to a directory containing a file named one of tf_model.h5, pytorch_model.bin.
>
> File "C:\Program Files (x86)\Microsoft Visual Studio\Shared\Python36_64\Lib\site-packages\transformers\modeling_tf_utils.py", line 1219, in from_pretrained
> raise EnvironmentError(msg)
> File "C:\Users\FA.PROJECTOR-MSK\Google Диск\Colab Notebooks\PoetryTransformer\Unsupervised\translation\paraphrases_translation.py", line 14, in <module>
> model = TFMarianModel.from_pretrained('Helsinki-NLP/opus-mt-en-de')
> File "C:\Users\FA.PROJECTOR-MSK\Google Диск\Colab Notebooks\PoetryTransformer\Unsupervised\translation\run_locally.py", line 1, in <module>
> from paraphrases_translation import run
> File "C:\Program Files (x86)\Microsoft Visual Studio\Shared\Python36_64\Lib\runpy.py", line 85, in _run_code
> exec(code, run_globals)
> File "C:\Program Files (x86)\Microsoft Visual Studio\Shared\Python36_64\Lib\runpy.py", line 96, in _run_module_code
> mod_name, mod_spec, pkg_name, script_name)
> File "C:\Program Files (x86)\Microsoft Visual Studio\Shared\Python36_64\Lib\runpy.py", line 263, in run_path
> pkg_name=pkg_name, script_name=fname)
> File "C:\Program Files (x86)\Microsoft Visual Studio\Shared\Python36_64\Lib\runpy.py", line 85, in _run_code
> exec(code, run_globals)
> File "C:\Program Files (x86)\Microsoft Visual Studio\Shared\Python36_64\Lib\runpy.py", line 193, in _run_module_as_main (Current frame)
> "__main__", mod_spec)
>
## Expected behavior
No error
| 02-24-2021 08:03:29 | 02-24-2021 08:03:29 | Thanks for the issue!
I uploaded the TF weights https://huggingface.co/Helsinki-NLP/opus-mt-en-de/commit/1a8c2263da11e68e50938f97e10cd57820bd504c. Should be fixed now - could you try again?<|||||>Thanks, this method working now. Please upload a model 'Helsinki-NLP/opus-mt-ru-en'. And I think other models are not working as well.
But now I can not call the model:
```python
from transformers import MarianTokenizer, TFMarianModel
import tensorflow as tf
tokenizer = MarianTokenizer.from_pretrained('Helsinki-NLP/opus-mt-en-de')
model = TFMarianModel.from_pretrained('Helsinki-NLP/opus-mt-en-de')
inputs = tokenizer("Hello, my dog is cute", return_tensors="tf")
outputs = model(inputs)
```
I am getting an error:
> Exception has occurred: ValueError (note: full exception trace is shown but execution is paused at: _run_module_as_main)
> You have to specify either decoder_input_ids or decoder_inputs_embeds
> File "C:\Program Files (x86)\Microsoft Visual Studio\Shared\Python36_64\Lib\site-packages\transformers\models\marian\modeling_tf_marian.py", line 924, in call
> raise ValueError("You have to specify either decoder_input_ids or decoder_inputs_embeds")
> File "C:\Program Files (x86)\Microsoft Visual Studio\Shared\Python36_64\Lib\site-packages\tensorflow\python\keras\engine\base_layer.py", line 985, in __call__
> outputs = call_fn(inputs, *args, **kwargs)
> File "C:\Program Files (x86)\Microsoft Visual Studio\Shared\Python36_64\Lib\site-packages\transformers\models\marian\modeling_tf_marian.py", line 1137, in call
> training=inputs["training"],
> File "C:\Program Files (x86)\Microsoft Visual Studio\Shared\Python36_64\Lib\site-packages\tensorflow\python\keras\engine\base_layer.py", line 985, in __call__
> outputs = call_fn(inputs, *args, **kwargs)
> File "C:\Program Files (x86)\Microsoft Visual Studio\Shared\Python36_64\Lib\site-packages\transformers\models\marian\modeling_tf_marian.py", line 1232, in call
> training=inputs["training"],
> File "C:\Program Files (x86)\Microsoft Visual Studio\Shared\Python36_64\Lib\site-packages\tensorflow\python\keras\engine\base_layer.py", line 985, in __call__
> outputs = call_fn(inputs, *args, **kwargs)
> File "C:\Users\FA.PROJECTOR-MSK\Google Диск\Colab Notebooks\PoetryTransformer\Unsupervised\translation\paraphrases_translation.py", line 16, in <module>
> outputs = model(inputs)
> File "C:\Users\FA.PROJECTOR-MSK\Google Диск\Colab Notebooks\PoetryTransformer\Unsupervised\translation\run_locally.py", line 1, in <module>
> from paraphrases_translation import run
> File "C:\Program Files (x86)\Microsoft Visual Studio\Shared\Python36_64\Lib\runpy.py", line 85, in _run_code
> exec(code, run_globals)
> File "C:\Program Files (x86)\Microsoft Visual Studio\Shared\Python36_64\Lib\runpy.py", line 96, in _run_module_code
> mod_name, mod_spec, pkg_name, script_name)
> File "C:\Program Files (x86)\Microsoft Visual Studio\Shared\Python36_64\Lib\runpy.py", line 263, in run_path
> pkg_name=pkg_name, script_name=fname)
> File "C:\Program Files (x86)\Microsoft Visual Studio\Shared\Python36_64\Lib\runpy.py", line 85, in _run_code
> exec(code, run_globals)
> File "C:\Program Files (x86)\Microsoft Visual Studio\Shared\Python36_64\Lib\runpy.py", line 193, in _run_module_as_main (Current frame)
> "__main__", mod_spec)<|||||>There are quite a lot of other models to upload, so this will take some time.
I'll start writing a script to automate this process...
Until then you can make use of this easy fix:
```python
from transformers import MarianTokenizer, TFMarianModel
import tensorflow as tf
tokenizer = MarianTokenizer.from_pretrained('Helsinki-NLP/opus-mt-ru-de')
model = TFMarianModel.from_pretrained('Helsinki-NLP/opus-mt-ru-en', from_pt=True)
inputs = tokenizer("Hello, my dog is cute", return_tensors="tf")
outputs = model(inputs)
```<|||||>Thanks, the model is loading. But I am still not able to call it. Should I report this bug separately ?<|||||>Hi! I think there's an error in the `TFMarianModel` docstring, it should be similar to the `MarianModel` docstring. You can't call encoder-decoder models with only input IDs (with a few exceptions like BART), you also need to provide decoder input IDs.
In your case this should work:
```py
from transformers import MarianTokenizer, TFMarianModel
import tensorflow as tf
tokenizer = MarianTokenizer.from_pretrained('Helsinki-NLP/opus-mt-en-de')
model = TFMarianModel.from_pretrained('Helsinki-NLP/opus-mt-en-de')
input_ids = tokenizer("Studies have been shown that owning a dog is good for you", return_tensors="tf").input_ids # Batch size 1
decoder_input_ids = tokenizer("<pad> Studien haben gezeigt dass es hilfreich ist einen Hund zu besitzen", return_tensors="tf", add_special_tokens=False).input_ids # Batch size 1
outputs = model(input_ids=input_ids, decoder_input_ids=decoder_input_ids)
last_hidden_states = outputs.last_hidden_state
```<|||||>Thanks, it works<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored. |
transformers | 10,367 | closed | device-side assert triggered Error while doing inference on Distilbert and Bert | ## Environment info
- `transformers` version: 3.4.0
- Platform: Colab
- Python version: 3.8
- PyTorch version (GPU?):
- Tensorflow version (GPU?):
- Using GPU in script?:
- Using distributed or parallel set-up in script?:
### Who can help
## Information
Model I am using Distilbert
The problem arises when using:
* [ ] the official example scripts: (give details below)
* [x] my own modified scripts: (give details below)
[colab](https://github.com/bhadreshpsavani/UnderstandingNLP/blob/master/DistilbertPerformance.ipynb)
The tasks I am working on is:
* [x] an official GLUE/SQUaD task: SQUaD v2
* [ ] my own task or dataset:
## To reproduce
Steps to reproduce the behavior:
1. Get Model and Tokenizer
2. Get SQUAD2 datasets
3. Perform inference on the validation dataset with a GPU
4. Get results on the SQuAD v2 metrics
Run the [colab](https://github.com/bhadreshpsavani/UnderstandingNLP/blob/master/DistilbertPerformance.ipynb) to reproduce.
## Expected behavior
It should give results in the below format on the SQuAD v2 metrics, without error:
```python
{'exact': 79.4660153288975, 'f1': 82.91266052065696, 'total': 11873, 'HasAns_exact': 77.64844804318489, 'HasAns_f1': 84.55162253066118, 'HasAns_total': 5928, 'NoAns_exact': 81.27838519764508, 'NoAns_f1': 81.27838519764508, 'NoAns_total': 5945, 'best_exact': 79.4660153288975, 'best_exact_thresh': 1.0, 'best_f1': 82.91266052065693, 'best_f1_thresh': 1.0}
```
Note: This code is working fine for longformer model. I found this issue in Distilbert and Bert Model while doing inference on GPU
Tagging SMEs: @LysandreJik
| 02-24-2021 07:06:39 | 02-24-2021 07:06:39 | If you have a CUDA device-side error, it is advised to run your code on CPU, because then you will receive a more informative error message.<|||||>Hi @NielsRogge,
On CPU it's working fine; the error only occurs when using the GPU.
<|||||>Hi @bhadreshpsavani, this can't work on CPU. You're sending a sequence that is too long to the model so it cannot handle it.
Please replace
```py
inputs = tokenizer(example['question'], example['context'], return_tensors="pt")
```
by
```py
inputs = tokenizer(example['question'], example['context'], return_tensors="pt", truncation=True)
```
This truncates the sequences that are too long.
Your colab should work then.<|||||>Thanks @LysandreJik,
It worked!<|||||>Glad we could help, closing! |
transformers | 10,366 | closed | can't allocate memory error with wav2vec2 | I am trying out the wav2vec2 model for ASR from the huggingface library. Here, I am passing a 7 min(~15 MB file) long wav file having a conversation(english) to the wav2vec2 model. I am getting "can't allocate memory" error. I found that the model uses all 64 GB of the available RAM. Can anyone help with this.
- `transformers` version: 4.3.2
- Platform: Linux-3.10.0-1127.el7.x86_64-x86_64-with-glibc2.17
- Python version: 3.8.3
- PyTorch version (GPU?): 1.7.1 (False)
- Tensorflow version (GPU?): not installed (NA)
- Using GPU in script?: (NA)
- Using distributed or parallel set-up in script?: (NA)
Code
```
import os
import librosa
import soundfile as sf
from pydub import AudioSegment
def convert_audio_segment(fp, upload_dir_path):
"""Convert audio file"""
USER_UPLOAD_DIR = upload_dir_path
formats_to_convert = ['.m4a']
dirpath = os.path.abspath(USER_UPLOAD_DIR)
if fp.endswith(tuple(formats_to_convert)):
(path, file_extension) = os.path.splitext(fp)
file_extension_final = file_extension.replace('.', '')
file_handle = ''
try:
track = AudioSegment.from_file(fp,
file_extension_final)
print("track", track)
wav_path = fp.replace(file_extension_final, 'wav')
file_handle = track.export(wav_path, format='wav')
except Exception:
print("ERROR CONVERTING " + str(fp))
return file_handle
else:
print("No file format conversion required " + str(fp))
return fp
def load_wav2vec_100h_model():
tokenizer = Wav2Vec2Tokenizer.from_pretrained("facebook/wav2vec2-base-100h")
model = Wav2Vec2ForCTC.from_pretrained("facebook/wav2vec2-base-100h")
return tokenizer, model
def correct_sentence(input_text):
sentences = nltk.sent_tokenize(input_text)
return (' '.join([s.replace(s[0],s[0].capitalize(),1) for s in sentences]))
def asr_transcript(tokenizer, model, input_file):
speech, fs = sf.read(input_file)
if len(speech.shape) > 1:
speech = speech[:,0] + speech[:,1]
if fs != 16000:
speech = librosa.resample(speech, fs, 16000)
input_values = tokenizer(speech, return_tensors="pt").input_values
logits = model(input_values).logits
predicted_ids = torch.argmax(logits, dim=-1)
transcription = tokenizer.decode(predicted_ids[0])
return correct_sentence(transcription.lower())
if __name__ == "__main__":
tokenizer_100h, model_100h = load_wav2vec_100h_model()
wav_input = 'Recording_biweu.wav'
fp = wav_input
processed_file = convert_audio_segment(str(fp), str(data_dir))
text = asr_transcript(tokenizer_100h,model_100h,processed_file)
print(text)
```
I am adding more details about my wav file here
```
General
Complete name : Recording_biweu.wav
Format : Wave
File size : 13.8 MiB
Duration : 7 min 30 s
Overall bit rate mode : Constant
Overall bit rate : 256 kb/s
Track name : Recording_biweu
Recorded date : 2021
Writing application : Lavf57.83.100
Audio
Format : PCM
Format settings : Little / Signed
Codec ID : 1
Duration : 7 min 30 s
Bit rate mode : Constant
Bit rate : 256 kb/s
Channel(s) : 1 channel
Sampling rate : 16.0 kHz
Bit depth : 16 bits
Stream size : 13.8 MiB (100%)
```
Error
```
Some weights of the model checkpoint at facebook/wav2vec2-base-100h were not used when initializing Wav2Vec2ForCTC: ['wav2vec2.mask_time_emb_vector']
- This IS expected if you are initializing Wav2Vec2ForCTC from the checkpoint of a model trained on another task or with another architecture (e.g. initializing a BertForSequenceClassification model from a BertForPreTraining model).
- This IS NOT expected if you are initializing Wav2Vec2ForCTC from the checkpoint of a model that you expect to be exactly identical (initializing a BertForSequenceClassification model from a BertForSequenceClassification model).
Traceback (most recent call last):
File "asr_wav2vec2.py", line 130, in <module>
text = asr_transcript(tokenizer_100h,model_100h,processed_file)
File "asr_wav2vec2.py", line 96, in asr_transcript
logits = model(input_values).logits
File "/home/joel/pyvenv/lib/python3.8/site-packages/torch/nn/modules/module.py", line 727, in _call_impl
result = self.forward(*input, **kwargs)
File "/home/joel/pyvenv/lib/python3.8/site-packages/transformers/models/wav2vec2/modeling_wav2vec2.py", line 795, in forward
outputs = self.wav2vec2(
File "/home/joel/pyvenv/lib/python3.8/site-packages/torch/nn/modules/module.py", line 727, in _call_impl
result = self.forward(*input, **kwargs)
File "/home/joel/pyvenv/lib/python3.8/site-packages/transformers/models/wav2vec2/modeling_wav2vec2.py", line 646, in forward
encoder_outputs = self.encoder(
File "/home/joel/pyvenv/lib/python3.8/site-packages/torch/nn/modules/module.py", line 727, in _call_impl
result = self.forward(*input, **kwargs)
File "/home/joel/pyvenv/lib/python3.8/site-packages/transformers/models/wav2vec2/modeling_wav2vec2.py", line 457, in forward
hidden_states, attn_weights = layer(hidden_states, output_attentions=output_attentions)
File "/home/joel/pyvenv/lib/python3.8/site-packages/torch/nn/modules/module.py", line 727, in _call_impl
result = self.forward(*input, **kwargs)
File "/home/joel/pyvenv/lib/python3.8/site-packages/transformers/models/wav2vec2/modeling_wav2vec2.py", line 392, in forward
hidden_states, attn_weights, _ = self.attention(hidden_states, output_attentions=output_attentions)
File "/home/joel/pyvenv/lib/python3.8/site-packages/torch/nn/modules/module.py", line 727, in _call_impl
result = self.forward(*input, **kwargs)
File "/home/joel/pyvenv/lib/python3.8/site-packages/transformers/models/wav2vec2/modeling_wav2vec2.py", line 286, in forward
attn_weights = torch.bmm(query_states, key_states.transpose(1, 2))
RuntimeError: [enforce fail at CPUAllocator.cpp:65] . DefaultCPUAllocator: can't allocate memory: you tried to allocate 24373495488 bytes. Error code 12 (Cannot allocate memory)
```
| 02-24-2021 07:04:57 | 02-24-2021 07:04:57 | Hi! You seem to be passing all of the file at once to the model. This can be extremely expensive from a memory point of view, as the number of samples (and therefore your batch size) can be very big.
I would advocate for you to do custom batching here, by only passing some of the values in your `input_values` at a time, rather than everything at once.
I can't tell exactly because I don't have your files handy, but I would guess this is the issue and how to resolve it. If it doesn't help, do you mind opening a colab where I can reproduce the issue?<|||||>Thanks @LysandreJik for looking into it. I couldn`t figure out how to apply custom batching for audio data. Is there a batch_size param that can be used?
link to the audio file [here](https://easyupload.io/fwhf6v)
link to the [colab notebook](https://drive.google.com/file/d/1V_u5XKOLQXXg-94KQiBcrShy_eaYHWFj/view?usp=sharing)<|||||>I've requested access for your notebook!<|||||>Sorry for the delay, I thought it required it is accessible outside. I have given you access. <|||||>Okay, so the issue isn't in the number of samples as I thought previously: there seems to be a single audio stream in your recording.
However, the issue here is that it's a 7 minutes and 30 seconds long recording, which really is very very long. I talked about it with @patrickvonplaten, and he mentions that Wav2Vec2 was trained on ~40 seconds of recording maximum. What one could do here is split the recording in 30 seconds chunks. You're using `librosa` and you can do that easily with `librosa.stream`.
Here for example your method to retrieve the transcript is the following:
```py
def asr_transcript(tokenizer, model, input_file):
speech, fs = sf.read(input_file)
if len(speech.shape) > 1:
speech = speech[:,0] + speech[:,1]
if fs != 16000:
speech = librosa.resample(speech, fs, 16000)
input_values = tokenizer(speech, return_tensors="pt").input_values
logits = model(input_values).logits
predicted_ids = torch.argmax(logits, dim=-1)
transcription = tokenizer.decode(predicted_ids[0])
return correct_sentence(transcription.lower())
```
I've updated it to the following (please note that it's the first time I've used `librosa` myself so the parameters I put for the stream values may be wrong):
```py
def asr_transcript(tokenizer, model, input_file):
transcript = ""
# Ensure that the sample rate is 16k
print(librosa.get_samplerate(input_file))
# Stream over 30 seconds chunks rather than load the full file
stream = librosa.stream(
input_file,
block_length=30,
frame_length=16000,
hop_length=16000
)
for speech in stream:
if len(speech.shape) > 1:
speech = speech[:, 0] + speech[:, 1]
input_values = tokenizer(speech, return_tensors="pt").input_values
logits = model(input_values).logits
predicted_ids = torch.argmax(logits, dim=-1)
transcription = tokenizer.decode(predicted_ids[0])
transcript += correct_sentence(transcription.lower())
return transcript
```
With this I seem to obtain sensible results! This could probably be improved in the following ways:
- Ensure that the parameters passed to `librosa.stream` are correct. Changing these seems to have a very big impact on the transcript.
- Patrick mentions that an advanced solution would be to use a Voice Activity detector to see where there is no speech and chunk there, for example finding a sequence of 100 values very close to zero, and cutting there (a rough sketch of this idea follows at the end of this thread). Little performance would be lost then.<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored. |
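As referenced above, a rough, hypothetical sketch of naive silence-based chunking (this is a simple amplitude threshold, not a real VAD, and the threshold / window values are arbitrary assumptions):
```python
import numpy as np

def chunk_on_silence(speech, sample_rate=16000, max_chunk_s=30,
                     threshold=1e-4, min_silence=100, search_s=5):
    """Cut a 1-D waveform into pieces of at most `max_chunk_s` seconds, preferring
    cut points where `min_silence` consecutive samples are below `threshold`.
    Only the last `search_s` seconds of each window are searched for a quiet spot."""
    max_len = max_chunk_s * sample_rate
    quiet = np.abs(speech) < threshold
    chunks, start = [], 0
    while start < len(speech):
        end = min(start + max_len, len(speech))
        if end < len(speech):
            # walk backwards over the tail of the window looking for a fully quiet stretch
            lower = max(start + min_silence, end - search_s * sample_rate)
            for cut in range(end, lower, -1):
                if quiet[cut - min_silence:cut].all():
                    end = cut
                    break
        chunks.append(speech[start:end])
        start = end
    return chunks
```
Each chunk can then be passed to the tokenizer and model exactly as in the streaming snippet above.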
transformers | 10,365 | open | Knowledge Retrieval missing from BlenderBot Implementation | # 🚀 Feature request
The original Blenderbot [paper](https://arxiv.org/pdf/2004.13637.pdf) considered three transformer-based models (Retrieval, Generator and RetNRef); however, from what I can see, only the generator model is implemented in this repository: [transformers/src/transformers/models/blenderbot/modeling_blenderbot.py](https://github.com/huggingface/transformers/blame/master/src/transformers/models/blenderbot/modeling_blenderbot.py).
## Motivation
As part of my academic work I am generating topic bound conversations and wish to compare Blenderbot as well as make modifications to its knowledge retrieval component. It would be useful if someone could point me towards this (if it is already implemented) or inform me if this feature is planned to be added in the future.

## Contribution
Prior to making a contribution I want to confirm that feature is not implemented and there is no intention of implementing this in the near future.
Thanks,
Alex
@patrickvonplaten @patil-suraj | 02-24-2021 05:44:08 | 02-24-2021 05:44:08 | |
transformers | 10,364 | closed | Loading mBART Large 50 MMT (many-to-many) is slow | ## Environment info
I'm installing the library directly from `master` and running it in a kaggle notebook.
- `transformers` version: 4.4.0.dev0
- Platform: Linux-5.4.89+-x86_64-with-debian-buster-sid
- Python version: 3.7.9
- PyTorch version (GPU?): 1.7.0 (False)
- Tensorflow version (GPU?): 2.4.1 (False)
- Using GPU in script?: No
- Using distributed or parallel set-up in script?: No
### Who can help
## Information
Model I am using (Bert, XLNet ...): mBART-Large 50 MMT (many-to-many)
The problem arises when using:
* [ ] the official example scripts: (give details below)
* [x] my own modified scripts: (give details below)
After caching the weights of the model, loading it with `from_pretrained` is significantly slower than loading with `torch.load`.
The tasks I am working on is:
* [ ] an official GLUE/SQUaD task: (give the name)
* [x] my own task or dataset: (give details below)
Machine Translation
## To reproduce
Here's the [kaggle notebook](https://www.kaggle.com/xhlulu/reproducing-speed-issues-with-mbart-large-50) reproducing the issue. Here's a [colab notebook](https://colab.research.google.com/drive/1fKuLG_U6uw4x8LqcIQFEFjQYjnc1nBzQ?usp=sharing) showing essentially the same thing.
Steps to reproduce the behavior:
1. Load the model with `model = MBartForConditionalGeneration.from_pretrained("facebook/mbart-large-50-many-to-many-mmt")`
2. Save the model with `model.save_pretrained('./my-model')`
3. Save the model with `torch.save(model, 'model.pt')`
4. Reload and time with `MBartForConditionalGeneration.from_pretrained('./my-model')`
5. Load with `torch.load('model.pt')`
The step above can be reproduced inside a kaggle notebook:
```python
model = MBartForConditionalGeneration.from_pretrained("facebook/mbart-large-50-many-to-many-mmt")
model.save_pretrained('./my-model/')
torch.save(model, 'model.pt')
%time model = MBartForConditionalGeneration.from_pretrained("./my-model/")
%time torch_model = torch.load('model.pt')
```
We will notice that loading with `from_pretrained` (step 4) is significantly slower than `torch.load` (step 5); the former takes over 1 minute and the latter just a few seconds (or around 20s if it hasn't been previously loaded in memory; see [notebook](https://www.kaggle.com/xhlulu/use-saved-torch-model)).
## Expected behavior
The model should take less than 1 minute to load if it has already been cached (see step 1)
| 02-24-2021 02:54:51 | 02-24-2021 02:54:51 | Related: https://github.com/huggingface/transformers/issues/9205<|||||>Thanks. I'll rerun the benchmarks once patrick makes the changes.<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored.<|||||>Has there been an updated to https://github.com/huggingface/transformers/issues/9205's timeline?<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored. |
transformers | 10,363 | closed | [trainer] move secondary methods into a separate file | We are trying to keep `trainer.py` to a manageable size. Recently it has been getting new helper methods which should remain methods but aren't really important for understanding how the Trainer works, so we propose to move them into the utils file and then import them using this nifty idea presented at https://stackoverflow.com/a/47562412/9201239: instead of subclassing and mixing in, we import the desired methods into the class.
See if you like it.
And if yes please let me know if there are any other candidates to move.
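For illustration, a minimal, hypothetical sketch of that pattern (the names below are made up):
```python
# Helper written as a plain function (e.g. in a utils file) but taking `self`,
# so it can later be called like a method.
def log_metrics(self, split, metrics):
    print(f"***** {split} metrics *****")
    for key, value in sorted(metrics.items()):
        print(f"  {key} = {value}")


class Trainer:  # stand-in for the real Trainer
    pass


# "Import" the helper into the class instead of subclassing or mixing in.
Trainer.log_metrics = log_metrics

Trainer().log_metrics("eval", {"loss": 0.5})
```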
@sgugger | 02-24-2021 01:54:28 | 02-24-2021 01:54:28 | |
transformers | 10,362 | closed | [Trainer/Deepspeed] handle get_last_lr() before first step() | With deepspeed's fp16 and dynamic loss scale enabled, the optimizer/scheduler steps may not run for the first few dozen steps while the loss is overflowing, so `get_last_lr()` will fail if called during that warm-up stage. This PR tries to catch that special warm-up situation and handle it by returning a fake LR=0, which is a good default because, since there is no stepping, it's effectively 0.
I'm just not sure if I should warn - it ends up emitting like 20-30 of those: if the user picks `--logging_steps=`, e.g:
```
2021-02-23 12:53:06,798] [INFO] [stage2.py:1357:step] [deepscale] OVERFLOW! Rank 0 Skipping step. Attempted loss scale: 4294967296, reducing to 4294967296
[WARNING|trainer.py:1142] 2021-02-23 12:53:06,799 >> tried to get lr value before scheduler/optimizer started stepping, returning lr=0
{'loss': 11.0, 'learning_rate': 0, 'epoch': 0.0}
[2021-02-23 12:53:06,990] [INFO] [stage2.py:1357:step] [deepscale] OVERFLOW! Rank 0 Skipping step. Attempted loss scale: 4294967296, reducing to 2147483648.0
[WARNING|trainer.py:1142] 2021-02-23 12:53:06,992 >> tried to get lr value before scheduler/optimizer started stepping, returning lr=0
{'loss': 10.9922, 'learning_rate': 0, 'epoch': 0.0}
```
* [x] added a test too.
I first thought it should be handled by DeepSpeed (https://github.com/microsoft/DeepSpeed/issues/782), but then realized that since pytorch optimizers won't be aware of this, we have to handle it in the trainer, since we are the ones calling `get_last_lr()` sort of prematurely (yet we don't have a way to know that it's premature, as we can't even call `lr_scheduler.step()` ourselves; it's handled opaquely by DeepSpeed).
@sgugger
Fixes: #https://github.com/huggingface/transformers/issues/10330#issuecomment-784457460 | 02-23-2021 21:01:32 | 02-23-2021 21:01:32 | Yes, that's a good idea. I'm just not clear on whether you suggest to make a function just for the deepspeed segment of the branch or wrap up the whole getting lr function?
Plus, the code needs the `trainer` object, so I'm not sure how to put it in utils.
I propose to put it as a separate Trainer method instead.
```
logs["learning_rate"] = self._get_learning_rate()
```
Please check if the proposed change in the next commit looks good to you - made it into a method and put it at the end of the file so it's out of the way.
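A hedged sketch of what such a method could look like (not necessarily the exact code that landed):
```python
import logging

logger = logging.getLogger(__name__)

def _get_learning_rate(self):
    # Sketch: with DeepSpeed's fp16 dynamic loss scaling the scheduler may not have
    # stepped yet while the loss is still overflowing, and get_last_lr() can fail.
    # The exact exception type raised before the first step is an assumption here.
    try:
        last_lr = self.lr_scheduler.get_last_lr()[0]
    except AssertionError:
        logger.warning(
            "tried to get lr value before scheduler/optimizer started stepping, returning lr=0"
        )
        last_lr = 0
    return last_lr
```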
<|||||>> Yes, that's a good idea. I'm just not clear on whether you suggest to make a function just for the deepspeed segment of the branch or wrap up the whole getting lr function?
The whole thing, as you did.
> Plus, the code needs the trainer object, so I'm not sure how to put it in utils.
It doesn't need the whole Trainer, just the `lr_scheduler` and the `args` (to detect if deepspeed is activated).<|||||>Can someone explain to me why I am suscribed to this
On Tue, Feb 23, 2021, 7:43 PM Stas Bekman <[email protected]> wrote:
> Merged #10362 <https://github.com/huggingface/transformers/pull/10362>
> into master.
>
> —
> You are receiving this because you are subscribed to this thread.
> Reply to this email directly, view it on GitHub
> <https://github.com/huggingface/transformers/pull/10362#event-4368294070>,
> or unsubscribe
> <https://github.com/notifications/unsubscribe-auth/AS5YU5U6O456ILN65VM72ELTARKSPANCNFSM4YDGTCNA>
> .
>
<|||||>Hi @chrissyjsartt
We won't know, since only you can do it. Perhaps you hit [Subscribe] by mistake?
But if you pay close attention to the email you received it tells you how to unsubscribe at the end of it:
> You are receiving this because you are subscribed to this thread.
> Reply to this email directly, view it on GitHub
> <https://github.com/huggingface/transformers/pull/10362#event-4368294070>,
> or unsubscribe
> <https://github.com/notifications/unsubscribe-auth/AS5YU5U6O456ILN65VM72ELTARKSPANCNFSM4YDGTCNA> |
transformers | 10,361 | closed | denoising objective for pretraining | Hi
A denoising objective is used in the T5 and BART models; could you please add it for pretraining language models?
For now, I would appreciate any advice on how I can implement it. Is there a piece of code in huggingface I could start from?
thanks | 02-23-2021 21:00:01 | 02-23-2021 21:00:01 | @patil-suraj @patrickvonplaten please help. thanks <|||||>Hey @dorooddorood606,
could you please make use of the forum: https://discuss.huggingface.co/ for such questions. Thanks!<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored. |
transformers | 10,360 | closed | Rag Use Your Knowledge dataset | ## Environment info
- `transformers` version:
- Platform: Colab
- Python version: 3.7.10
- PyTorch version (CPU): 1.7.0
### Who can help
Models:
rag: @patrickvonplaten, @lhoestq
Library:
- tokenizers: @n1t0, @LysandreJik
## Information
The model I am using (Rag):
The problem arises when using:
* [ ] my own modified scripts: (give details below)
The tasks I am working on is:
* [ ] my own task or dataset: (give details below)
## To reproduce
Steps to reproduce the behavior:
1. Create a csv file with title and text
2. Load the dataset
3. Map the split_document function
4. Map the embed function (error)
```
def embed(
documents: dict,
ctx_encoder: DPRContextEncoder,
ctx_tokenizer: DPRContextEncoderTokenizerFast
):
"""Compute the DPR embeddings of document passages"""
input_ids = ctx_tokenizer(
documents["title"], documents["text"], truncation=True,
padding="longest", return_tensors='pt'
)
embeddings = ctx_encoder(
input_ids["input_ids"],
return_dict=True).pooler_output
return {'embeddings': embeddings.detach().cpu().numpy()}
# And compute the embeddings
ctx_encoder = DPRContextEncoder.from_pretrained(
'facebook/dpr-ctx_encoder-multiset-base'
)
ctx_tokenizer = DPRContextEncoderTokenizerFast.from_pretrained(
'facebook/dpr-ctx_encoder-multiset-base'
)
new_fts = Features(
{
'text': Value('string'),
'title': Value('string'),
'embeddings': Sequence(Value('float32'))
}
) # optional, save as float32 instead of float64 to save space
dataset = dataset.map(
partial(embed, ctx_encoder = ctx_encoder, ctx_tokenizer=ctx_tokenizer),
features = new_fts
)
```
### Error
```
ArrowInvalid Traceback (most recent call last)
/usr/local/lib/python3.7/dist-packages/datasets/arrow_dataset.py in _map_single(self, function, with_indices, input_columns, batched, batch_size, drop_last_batch, remove_columns, keep_in_memory, load_from_cache_file, cache_file_name, writer_batch_size, features, disable_nullable, fn_kwargs, new_fingerprint, rank, offset, update_data)
1412 if update_data:
-> 1413 writer.finalize() # close_stream=bool(buf_writer is None)) # We only close if we are writing in a file
1414 except (Exception, KeyboardInterrupt):
22 frames
ArrowInvalid: Could not convert [-0.007409881334751844, 0.0715881809592247, -0.130095437169075, 0.08213236927986145, -0.06481412053108215, 0.219411239027977, 0.2758248746395111, -0.24343284964561462, -0.17551296949386597, -0.16576780378818512, -0.19957710802555084, 0.513848602771759, -0.2469034492969513, -0.27209365367889404, -0.019221562892198563, 0.3769649565219879, 0.47224175930023193, -0.5267099142074585, -0.3105331361293793, -0.3371395170688629, -0.2927161753177643, -0.7542601227760315, -0.17370374500751495, -0.024053143337368965, 0.14522959291934967, 0.2945793867111206, 0.03297216817736626, -0.0938640609383583, -0.34509730339050293, 0.3848630487918854, -0.1607687622308731, 0.08243361860513687, 0.036992475390434265, -0.5837609767913818, -0.057669747620821, 0.33589160442352295, -0.6164276003837585, 0.22745771706104279, 0.2599221467971802, 0.021962007507681847, 0.38935932517051697, 0.0007948490092530847, -0.71791011095047, 0.008848031982779503, -0.2997898459434509, -0.17859186232089996, -1.5019792318344116, 0.151197612285614, -0.5586768984794617, -0.008638408035039902, -0.49596720933914185, 0.4330417513847351, 0.16217979788780212, 0.27230459451675415, -0.20549386739730835, 0.24903732538223267, -0.18732021749019623, -0.6536538004875183, 0.09260211139917374, -0.49740439653396606, -0.007311557419598103, 0.3489222824573517, -0.14408843219280243, 0.3663439154624939, -0.09016768634319305, 0.7361327409744263, -0.013332066126167774, 0.241610586643219, -0.779755353927...
During handling of the above exception, another exception occurred:
ArrowInvalid Traceback (most recent call last)
/usr/local/lib/python3.7/dist-packages/pyarrow/error.pxi in pyarrow.lib.check_status()
ArrowInvalid: Could not convert [-0.007409881334751844, 0.0715881809592247, -0.130095437169075, 0.08213236927986145, -0.06481412053108215, 0.219411239027977, 0.2758248746395111, -0.24343284964561462, -0.17551296949386597, -0.16576780378818512, -0.19957710802555084, 0.513848602771759, -0.2469034492969513, -0.27209365367889404, -0.019221562892198563, 0.3769649565219879, 0.47224175930023193, -0.5267099142074585, -0.3105331361293793, -0.3371395170688629, -0.2927161753177643, -0.7542601227760315, -0.17370374500751495, -0.024053143337368965, 0.14522959291934967, 0.2945793867111206, 0.03297216817736626, -0.0938640609383583, -0.34509730339050293, 0.3848630487918854, -0.1607687622308731, 0.08243361860513687, 0.036992475390434265, -0.5837609767913818, -0.057669747620821, 0.33589160442352295, -0.6164276003837585, 0.22745771706104279, 0.2599221467971802, 0.021962007507681847, 0.38935932517051697, 0.0007948490092530847, -0.71791011095047, 0.008848031982779503, -0.2997898459434509, -0.17859186232089996, -1.5019792318344116, 0.151197612285614, -0.5586768984794617, -0.008638408035039902, -0.49596720933914185, 0.4330417513847351, 0.16217979788780212, 0.27230459451675415, -0.20549386739730835, 0.24903732538223267, -0.18732021749019623, -0.6536538004875183, 0.09260211139917374, -0.49740439653396606, -0.007311557419598103, 0.3489222824573517, -0.14408843219280243, 0.3663439154624939, -0.09016768634319305, 0.7361327409744263, -0.013332066126167774, 0.241610586643219, -0.779755353927...
```
## Expected behavior
Return the dataset embeddings so I can index them and run inference.
 | 02-23-2021 20:22:32 | 02-23-2021 20:22:32 | Hi! It looks like your `embed` function expects a batch of documents as input.
Can you try to set `batched=True` in your call to `dataset.map` ?<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored. |
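A short sketch of the suggested change (the batch size is an arbitrary assumption):
```python
from functools import partial

# With batched=True, `documents["title"]` and `documents["text"]` are lists of strings,
# which is what the embed() function above already assumes.
dataset = dataset.map(
    partial(embed, ctx_encoder=ctx_encoder, ctx_tokenizer=ctx_tokenizer),
    batched=True,
    batch_size=16,  # any moderate value should do
    features=new_fts,
)
```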
transformers | 10,359 | closed | Security Bug found - looking for contact for responsible disclosure | Hi,
I found a security bug on software related to you.
Can you please tell me how to contact you for a responsible disclosure?
Thanks. | 02-23-2021 20:15:34 | 02-23-2021 20:15:34 | Hi – can you send an email to `tech at huggingface.co`? Thanks.<|||||>I did send it.<|||||>closing as will handle over email. Thanks! |
transformers | 10,358 | closed | BART Summarization : Torchscript Export / Inference Triton Server | # 📚 Migration
@sshleifer maybe you can help ?? (thanks for all your work bud!)
## Information
**Objective** : Performance gain, clocking 1-1.5 sec per transaction at the moment, target : under 100 ms. It seems exporting model via TorchScript & running on Triton Server may be plausible solution.
I am exporting BART Large CNN (for generating Summaries) using TorchScript. I have fine-tuned the model with localized data, but I am unclear on how to use **model.generate(input)**, which seems to wrap **model(input)**, whereas **model(input)** is what gets triggered by default at inference time from the exported model. For simplicity & ability to reproduce, I am pasting the issue/code details as if the model is vanilla pre-trained (and not fine-tuned).
Model: **facebook/bart-large-cnn**
Language: **English**
The problem arises when using: **torch.jit.trace(<model>, <dummy_input>)**
## Details
1. model.pt gets generated without any issues
2. However, the generated trace produces output like a plain model(input) call; what I am hoping to get is the output of model.**generate**(input).
3. I am not sure how to handle this, either at export time or later during inference. Can you please help?
Code Block Below:
**Step 1**: Generate model.pt file
```
import torch
from transformers import (
AutoModelForSeq2SeqLM,
AutoTokenizer,
AutoConfig
)
dummy_input = torch.tensor([512 * [1]])
BART_CNN_PATH = 'facebook/bart-large-cnn'
BART_CNN_MODEL = AutoModelForSeq2SeqLM.from_pretrained(BART_CNN_PATH)
BART_CNN_MODEL.eval()
traced_model = torch.jit.trace(BART_CNN_MODEL, dummy_input)
traced_model.save("exportedModelsForTritan/bart_large_cnn_fl/1/model.pt")
```
**Step2**: Inference **(+ Error Details)**
```
BART_CNN_TOKENIZER = AutoTokenizer.from_pretrained(BART_CNN_PATH)
input_tokenized = BART_CNN_TOKENIZER.encode(input_text, return_tensors="pt", max_length=512, truncation=True, padding='max_length')
## Test Inference, If I do not use .generate() code works fine ...
## but then it would attempt mask-filling instead of summaries?...
## With model.generate(input), it returns [1xn] where n is the length of summary
## whereas with model(input), it's generating a tuple of length 3, perhaps logits and possibly hidden state weights..
## which I do not know is of significance for summaries ...
model_output = traced_model.generate(input_tokenized)
```
**Error**: ModuleAttributeError: 'RecursiveScriptModule' object has no attribute 'generate'
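(For context: `torch.jit.trace` only records the `forward` pass, so the traced module has no `generate` method; `generate` is a Python-side search loop built on top of `forward`. A rough greedy-search sketch of what such a loop does, written in plain eager PyTorch rather than the traced module's API, just to illustrate what would need to be re-implemented:)
```python
import torch

def greedy_generate(model, input_ids, max_length=100):
    # rough illustration of generate() with greedy search only (no beams, no length penalty)
    decoder_input_ids = torch.full(
        (input_ids.shape[0], 1), model.config.decoder_start_token_id, dtype=torch.long
    )
    for _ in range(max_length):
        outputs = model(input_ids=input_ids, decoder_input_ids=decoder_input_ids)
        next_token = outputs.logits[:, -1, :].argmax(dim=-1, keepdim=True)
        decoder_input_ids = torch.cat([decoder_input_ids, next_token], dim=-1)
        if (next_token == model.config.eos_token_id).all():
            break
    return decoder_input_ids
```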
**Working Code Prior Export as Reference: <Notice the use of model.generate()>**
```
BART_CNN_PATH = 'facebook/bart-large-cnn'
BART_CNN_MODEL = AutoModelForSeq2SeqLM.from_pretrained(BART_CNN_PATH)
BART_CNN_TOKENIZER = AutoTokenizer.from_pretrained(BART_CNN_PATH)
def bart_cnn_summarize_automl(input_text, num_beams=4, num_words=50):
input_text = str(input_text)
input_tokenized = BART_CNN_TOKENIZER.encode(input_text, return_tensors="pt", max_length=512)
summary_ids = BART_CNN_MODEL.generate(input_tokenized,
max_length=100,
min_length=40,
length_penalty=2.0,
num_beams=num_beams,
early_stopping=True)
output = [BART_CNN_TOKENIZER.decode(id, skip_special_tokens=True, clean_up_tokenization_spaces=False) for id in summary_ids]
return str(output[0])
```
## Environment info
- Python version:Python 3.6.9
- PyTorch version (GPU?): GPU (T4), 1.7.1
- Docker Image: huggingface/transformers-pytorch-gpu:4.2.1
| 02-23-2021 18:47:54 | 02-23-2021 18:47:54 | This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored.<|||||>It seems I have to mimic GenerationMixin.generate() -- advisable? any detailed documentation on 'beam_search' method ?<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored.<|||||>@anshoomehra Were you able to run BART on Triton?<|||||>@anshoomehra @moise-g
Are you able to run BART with Triton? If yes, can you please share the details? |
transformers | 10,357 | closed | tokenization_marian.py: use current_spm for decoding | # What does this PR do?
Fixes #10294
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [X] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [X] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/master/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/master/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors which may be interested in your PR.
| 02-23-2021 17:59:42 | 02-23-2021 17:59:42 | Hi @patil-suraj!
Thanks for your review. As you suggested, I started updating the code and docs where `decode` or `batch_decode` is used.
Doing so, I noticed RAG model also has the same issue: in `decode` and `batch_decode`, `generator` is used instead of `current_tokenizer`. Do you want me to also update that model and its docs accordingly in this PR? <|||||>Hey, I submitted my changes and also fixed the RAG tokenizer. Please let me know if I missed something or you want me to change any of the fixes. I can rebase and force-push again. <|||||>> do that inside the context manager, so we should also update all the ex
@patil-suraj @sgugger, wouldn't it be nicer to just do `as_target_tokenizer` in the `batch_decode` and `decode` function itself? Because decoding usually corresponds to the "target tokenizer" -> I think this would be nicer for the user<|||||>We could do that, but the reported issue is about not being able to decode the source tokens as the `decode` always uses `spm_target`. So if we do `as_target_tokenizer` inside `decode` then source tokens will be decoded using `spm_target`, which will cause the same issue.<|||||>True, yeah I was a bit off there!
Ok, I understand the fix now. It's a very problematic fix however, because it's a big backwards-compatibility-breaking change. In 99% of the cases people use `batch_decode` for the target outputs, and we don't really want people to update their code just so that the source tokens work correctly I think... If I understand correctly, this PR would change the default behavior of `batch_decode(...)`, which is a no-go sadly...
Could we maybe somehow let the `current_spm` default to `target_spm` when using `batch_decode`, `decode` so that we don't have any breaking changes & then add maybe a new context manager `as_source_tokenizer`? or just add a optional arg to `decode` for Marian?
cc @LysandreJik @sgugger <|||||>It's not very complicated to add the `as_source_tokenizer` context manager. Another solution is to add a flag `use_source_tokenizer` (defaults to False) to `decode` and `batch_decode`.
In any case, backward-compatibility is paramount so it needs to be fully enforced.<|||||>`use_source_tokenizer` seems a better option to me, since the tokenizer already behaves like a source tokenizer by default so adding `as_source_tokenizer` seems a bit confusing IMO.
@Mehrad0711 , here's how we could now implement this
1. `current_spm` should always default to `source_spm` except inside the `as_target_tokenizer`.
2. As Sylvain suggested, add the `use_source_tokenizer` argument to `decode` and `batch_decode`, if it's `True` use `source_spm` in `convert_tokens_to_string`.
3. `convert_tokens_to_string` should never use `current_spm` as it defaults to `source_spm` and this would break backward-compatibility.
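A rough toy sketch of how 1-3 could fit together (the class and method bodies below are illustrative assumptions, not the actual `MarianTokenizer` code; `spm_source`/`spm_target` stand for the two SentencePiece processors):
```python
from contextlib import contextmanager

class ToyMarianTokenizer:
    def __init__(self, spm_source, spm_target):
        self.spm_source = spm_source
        self.spm_target = spm_target
        self.current_spm = spm_source            # 1. encoding defaults to the source spm
        self._decode_use_source_tokenizer = False

    @contextmanager
    def as_target_tokenizer(self):
        # only label encoding happens with the target spm
        self.current_spm = self.spm_target
        yield
        self.current_spm = self.spm_source

    def decode(self, tokens, use_source_tokenizer=False):
        # 2. expose the flag on decode()/batch_decode() and stash it internally
        self._decode_use_source_tokenizer = use_source_tokenizer
        return self.convert_tokens_to_string(tokens)

    def convert_tokens_to_string(self, tokens):
        # 3. never rely on current_spm here; pick the spm from the decode flag instead
        spm = self.spm_source if self._decode_use_source_tokenizer else self.spm_target
        return spm.decode_pieces(tokens)
```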
And the user should now pass `use_source_tokenizer` to decode source tokens<|||||>Thanks. I can proceed with the suggested implementation.
However, passing `use_source_tokenizer` to `convert_tokens_to_string` requires updating `PreTrainedTokenizer`'s `_decode` method. Since `use_source_tokenizer` is passed to `convert_tokens_to_string` for all tokenizers, afaiu, the tokenizer classes for all models that implement their own `convert_tokens_to_string` should be updated individually to accept `use_source_tokenizer` (even though it's not used for some such as encoder-only models).
I think a potential workaround can be adding instance checks within `_decode` to see if the tokenizer accepts that argument, or perhaps use function overloading.
Please let me know how you want me to proceed.<|||||>I think we can work around this by setting an internal attribute with the value of `use_source_tokenizer` passed by the user. This way we can recover it in `convert_tokens_to_string` without having to overload any other methods. What do you think?<|||||>Thanks for the suggestion. It makes sense. If backward-compatibility wasn't a big issue, I think it would be better to use `current_spm` (set to None) in all tokenizer methods and switch it to source or target spm using two (`as_source_tokenizer` and `as_target_tokenizer`) context managers as needed. This way encoding and decoding methods become source/ target agnositc.
However, since it's an issue, I think your first suggestion which is using `use_source_tokenizer` is still better than setting an internal value from the user perspective because now they have to use a context_manager during encoding but then set an attribute during decoding which can persist during next decoding if not unset. What do you think?<|||||>> than setting an internal value from the user perspective
I never meant the user would have to set it. I meant for us to set it in `decode`/`decode_batch` with the `use_source_tokenizer` argument received.<|||||>Gotcha. Yeah, that should work.<|||||>Hi, the PR is ready for review. please let me know if the changes look good. Thanks.<|||||>Ok I think these changes address the comments. If there are still improvements to the docstring/ code you want to make, please feel free to push directly to this branch. Thanks.<|||||>The docstrings don't accumulate when you subclass and overload, they get reset. So we have to copy the whole things *and* add the extra argument. Will push on your branch the change.<|||||>Thanks. I think the PR is ready for the final review.
cc: @sgugger @patrickvonplaten @patil-suraj
<|||||>Thanks a lot for your work on this PR!<|||||>Thanks a lot for your feedback and a great PR experience! |
transformers | 10,356 | closed | Fine-tuning bart-base on XSum and got 34.0 as ROUGE1 (40.61 with higher lr) | Hi, I'm wondering whether there are any benchmarks for fine-tuning bart-base on XSum. I found [this one](https://huggingface.co/VictorSanh/bart-base-finetuned-xsum/tree/main), which also shows an R1 of 35-ish. Is it supposed to be this low? | 02-23-2021 17:07:21 | 02-23-2021 17:07:21 | Just an update, I increased the lr and got ROUGE-F(1/2/l): 40.61/17.48/32.62<|||||>Maybe @patrickvonplaten or @patil-suraj have an idea<|||||>Hi @XinnuoXu
That model was trained a while ago, and there were some bugs in BART related to `decoder_start_token_id` at that time, see https://discuss.huggingface.co/t/bart-lm-odd-beam-search-output/618/13
which could be the reason for this.<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored. |
transformers | 10,355 | closed | ProphetNet Positional Embeddings Index Issue | I am having an issue finetuning a ProphetNet model for question generation. The current error I am running into is an indexing issue when getting the `position_embeddings, position_ids`. In the below line of code, we call the `ProphetNetPositionalEmbeddings` module to get the embeddings
https://github.com/huggingface/transformers/blob/461e8cacf94d1f76367cc9ba2cfd5b9bd3641c81/src/transformers/models/prophetnet/modeling_prophetnet.py#L1221
As you can see, the call isn't passing in anything for the `attention mask` (which I am not sure I fully understand, so I appreciate clarification as to why that is happening) or `position_ids`, which means both will be None by default. Then, in the forward method for `ProphetNetPositionalEmbeddings`, we see the following logic
https://github.com/huggingface/transformers/blob/461e8cacf94d1f76367cc9ba2cfd5b9bd3641c81/src/transformers/models/prophetnet/modeling_prophetnet.py#L587-L593
Since `attention_mask` is None as noted above, it is set to a tensor of all ones, which makes sense. However, the `position_ids` are then calculated, for each sample in the batch, as a vector from 1 to `max_length`. This is the cause of the indexing issue I am facing. Should this vector not run from 0 to `max_length - 1`? Is there something I am doing wrong in my setup that would be causing this, or some piece of logic I am missing?
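To make that concrete, here is a tiny standalone illustration of the computation I believe is happening (the real ProphetNet code differs slightly; the shapes here are just for illustration):
```python
import torch

max_length = 6
attention_mask = torch.ones(1, max_length, dtype=torch.long)

# cumsum over an all-ones mask yields 1..max_length instead of 0..max_length-1
position_ids = torch.cumsum(attention_mask, dim=1) * attention_mask
print(position_ids)  # tensor([[1, 2, 3, 4, 5, 6]])

position_embeddings = torch.nn.Embedding(max_length, 8)  # only indices 0..5 are valid
position_embeddings(position_ids)  # raises IndexError: index out of range in self
```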
I am happy to share more code if needed to give more context, but I believe this issue is isolated to just ProphetNet code. Any thoughts? | 02-23-2021 16:55:40 | 02-23-2021 16:55:40 | This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored.<|||||>I believe a "fix" was made to this issue in #10501, but it is still not quite correct I think. The solution was to simply clamp the `position_ids` to be bounded between 0 and `max_length - 1`, but the ids still won't be properly set this way. The current solution will result in a tensor that looks like `[1, 2, ..., max_length - 1, max_length - 1]`. Everything is receiving the wrong position embedding except the last item. I think we also need to subtract `1` from the `position_ids` before clamping it. This should fix the offset issue and then the clamp will prevent any leading 0s in the `attention_mask` from becoming -1. Is there something else I am missing for why we wouldn't want this?<|||||>Hey @ManavR123,
ProphetNet is actually a bit weird since the position_ids start at 1 and not at 0. This is because ProphetNet was for its most part derived from Bart which actually had 513 position id weights even though only 512 were allowed (the first position id was skipped -> it's the padding_id_token). ProphetNet however has exactly 512 weights, so it actually allows only 511 tokens. Now to nevertheless allow ProphetNet to handle 512 tokens we just clamp the last id which shouldn't make a huge difference in performance. This means all position ids are correct **except** the last one, which is a bug that is accepted since it provides the possibility to run 512 tokens.<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored. |
transformers | 10,354 | closed | Add support for ZeRO-2/3 and ZeRO-offload in fairscale | # What does this PR do?
This PR adds support for the new `FullyShardedDataParallel` introduced in fairscale. See [this PR](https://github.com/facebookresearch/fairscale/pull/413) for more details.
The PR slightly changes the behavior of the `--sharded_ddp` flag/training argument to support a list of options. You can still use the TrainingArguments class with `sharded_ddp=True`, but if launching a script, `--sharded_ddp` has to be replaced with `--sharded_ddp simple`. The `--sharded_ddp` flag was marked as an experimental API, so I think this breaking change is fine if properly documented.
Other values supported are: `zero_dp_2`, `zero_dp_2 offload`, `zero_dp_3` and `zero_dp_3 offload`. To fully take advantage of `zero_dp_3`/`zero_dp_3 offload`, the model passed to the `Trainer` will need to have its internal layers wrapped inside `FullyShardedDataParallel`, but this is out of scope for this particular PR.
For all those new modes, the model simply needs to be wrapped inside `FullyShardedDataParallel`, but the optimizer needs to be created after the model wrapping (to get the parameter shards).
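A minimal sketch of that ordering constraint outside of the `Trainer` (the import path and model are placeholders, and a distributed process group is assumed to be initialized already):
```python
import torch
from fairscale.nn.data_parallel import FullyShardedDataParallel as FSDP

model = torch.nn.Linear(512, 512)                            # stands in for any nn.Module
model = FSDP(model)                                          # wrap first so the parameters get sharded
optimizer = torch.optim.AdamW(model.parameters(), lr=5e-5)   # then build the optimizer on the shards
```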
Note that:
- `predict_with_generate` does not work with this integration
- `cpu_offload` does not work for now due to the bug mentioned in [this issue](https://github.com/facebookresearch/fairscale/issues/421). Once the issue is fixed, the option should work with the existing code.
One thing to think further is that this integration breaks the usual convention that `self.model` is the original model (`FullyShardedDataParallel` consumes the model to use less memory). | 02-23-2021 16:35:26 | 02-23-2021 16:35:26 | > Other values supported are: `zero2`, `zero2_offload`, `zero3` and `zero3_offload`. To fully take advantage of the `zero3`/`zero3_offload` the model passed to the `Trainer` will need to have its internal layers wrapped inside the `FullyShardedDataParallel`, but this out of scope for this particular PR.
Do you feel it's better to hardcode these combinations and not have a more flexible approach of:
```
--sharded_ddp "zero2;offload;future_option"
```
or
```
--sharded_ddp "zero2 offload future_option"
```
which would enable adding new features in the future, without needing to create all possible combinations of options which would double every time a new option will be added.
This is the cmd API I'm tentatively using for the pipelines `--pipeline "chunks=5 device_map=0:0-5,1:5-10 ...."`
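For illustration, parsing such a whitespace-separated value could be as simple as (option names below are placeholders):
```python
KNOWN_OPTIONS = {"simple", "zero_dp_2", "zero_dp_3", "offload"}

def parse_sharded_ddp(value: str) -> set:
    options = set(value.split())
    unknown = options - KNOWN_OPTIONS
    if unknown:
        raise ValueError(f"Unknown --sharded_ddp options: {sorted(unknown)}")
    return options

parse_sharded_ddp("zero_dp_3 offload")  # -> {'zero_dp_3', 'offload'}
```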
> One thing to think further is that this integration breaks the usual convention that `self.model` is the original model (`FullyShardedDataParallel` consumes the model to use less memory).
Yes, we will need to rethink this - the trainer is getting more and more complex.<|||||>> Do you feel it's better to hardcode these combinations and not have a more flexible approach of:
>
> --sharded_ddp "zero2;offload;future_option"
Happy to explore that design as it seems more flexible and less prone to future breaking changes. Will adapt the PR accordingly once we get the wrapper to work.<|||||>Probably whitespace separation is more readable: `--sharded_ddp "zero2 offload future_option"`
Also we need to make sure that we distinguish between `FullyShardedDataParallel` and `ShardedDataParallel` since as the [commentary was made](https://github.com/facebookresearch/fairscale/pull/413#issuecomment-784168151), they aren't quite the same. Perhaps `not_full` for `ShardedDataParallel`? both should be corresponding to stage2 but they don't work in the same way.
DeepSpeed has a `stage` param which goes from 0 to 3, where stage=0 doesn't enable ZeRO and each other number matches the corresponding ZeRO stage.
For the user's sake perhaps we could make things as similar as possible so it'd be more intuitive for them to switch between fairscale (and eventually pytorch) and deepspeed.
Also note that DeepSpeed exposes other params like the size of buckets, which actually are very important and need to be user-configurable. I won't be surprised that FSDP will also have those configurable down the road - i.e. more params.<|||||>Reworked the API to take your suggestion of list of options into account @stas00. I don't think we have to worry about uniformizing with deepspeed or cleaning more at this stage as:
- this API will evolve in the future (ShardedDataParallel might very well disappear if FullyShardedDataParallel is better, and this might change again on the road to be merged in PyTorch)
- we don't know yet all the options we will have between deepspeed/fairscale/PyTorch
- this is an experimental API and while we won't break it just for fun, we can make slight changes down the road.<|||||>Moving out the cl arg naming discussion from https://github.com/huggingface/transformers/pull/10354#pullrequestreview-596676591 to the open
So if it's not DDP but DP, then we should probably change the cl arg to `_dp` as I suggested above so that it's consistently either DP or DDP all the way through.
Or perhaps we should just call it `--sharded`? the dp part is already inside the value anyway as in: `--sharded zero_dp_3` |
transformers | 10,353 | closed | [bert-base-german-cased] use model repo, not external bucket | References: #10306 | 02-23-2021 15:51:47 | 02-23-2021 15:51:47 | |
transformers | 10,351 | closed | Can every line in the input CSV file contain more than one sentence when pretraining BERT for MLM Loss? | Hello HF Team,
I am familiar with how to pretrain BERT, and I have a DataLoader that reads an input CSV file line by line; every time it reads a line, it tokenizes it and sends the tokens back to the training code. My question is whether it is OK for this input CSV file to contain more than one sentence on every line when pretraining BERT for masked language modelling.
Or is it important for each line to contain only one meaningful sentence? I am wondering whether self-attention will still work and the model will train properly even if every single line in the input CSV file (a single training sample) is actually more than one sentence, each separated with a '.' delimiter of course.
Thanks | 02-23-2021 14:21:14 | 02-23-2021 14:21:14 | Hello, thanks for opening an issue! We try to keep the github issues for bugs/feature requests.
Could you ask your question on the [forum](https://discuss.huggingface.co) instead? You'll get more answers over there, as questions like these are the point of the forum :)
Thanks! |
transformers | 10,350 | closed | Got "RuntimeError: CUDA error: device-side assert triggered" with Seq2SeqTrainer | ## Environment info
- `transformers` version: 4.3.2
- Platform: Linux-4.19.112+-x86_64-with-Ubuntu-18.04-bionic
- Python version: 3.6.9
- PyTorch version (GPU?): 1.7.0+cu101 (True)
- Tensorflow version (GPU?): 2.4.1 (True)
- Using GPU in script?: YES
- Using distributed or parallel set-up in script?: NO
### Who can help
@patrickvonplaten @sgugger @stas00
Models:
- encoderdecoder: @patrickvonplaten, @patil-suraj
Library:
- trainer: @sgugger
## Information
Model I am using (PhoBERT):
The problem arises when using:
* [ ] the official example scripts: (give details below)
* [x] my own modified scripts: (give details below)
The tasks I am working on is:
* [ ] an official GLUE/SQUaD task: (give the name)
* [x] my own task or dataset: (give details below)
## To reproduce
Steps to reproduce the behavior:
1. Create an EncoderDecoderModel with phobert-base as encoder and phobert-base as decoder
2. Prepare train_data and val_data
3. Create Seq2SeqTrainer with that model and data
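A minimal sketch of step 1 (the PhoBERT checkpoint name is an assumption based on the report):
```python
from transformers import EncoderDecoderModel

sum_model = EncoderDecoderModel.from_encoder_decoder_pretrained(
    "vinai/phobert-base", "vinai/phobert-base"
)
```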
## Code
```python
trainer = Seq2SeqTrainer(
model=sum_model,
args=training_args,
compute_metrics=compute_metrics,
train_dataset=train_data,
eval_dataset=val_data,
)
```
## Error
```
---------------------------------------------------------------------------
RuntimeError Traceback (most recent call last)
<ipython-input-49-67f458786328> in <module>()
25 compute_metrics=compute_metrics,
26 train_dataset=train_data,
---> 27 eval_dataset=val_data,
28 )
6 frames
/usr/local/lib/python3.6/dist-packages/transformers/trainer.py in __init__(self, model, args, data_collator, train_dataset, eval_dataset, tokenizer, model_init, compute_metrics, callbacks, optimizers)
269 # 2. fp16-enabled DeepSpeed loads the model in half the size and it doesn't need .to() anyway
270 if not (self.is_model_parallel or args.deepspeed):
--> 271 model = model.to(args.device)
272
273 # Force n_gpu to 1 to avoid DataParallel as MP will manage the GPUs
/usr/local/lib/python3.6/dist-packages/torch/nn/modules/module.py in to(self, *args, **kwargs)
610 return t.to(device, dtype if t.is_floating_point() else None, non_blocking)
611
--> 612 return self._apply(convert)
613
614 def register_backward_hook(
/usr/local/lib/python3.6/dist-packages/torch/nn/modules/module.py in _apply(self, fn)
357 def _apply(self, fn):
358 for module in self.children():
--> 359 module._apply(fn)
360
361 def compute_should_use_set_data(tensor, tensor_applied):
/usr/local/lib/python3.6/dist-packages/torch/nn/modules/module.py in _apply(self, fn)
357 def _apply(self, fn):
358 for module in self.children():
--> 359 module._apply(fn)
360
361 def compute_should_use_set_data(tensor, tensor_applied):
/usr/local/lib/python3.6/dist-packages/torch/nn/modules/module.py in _apply(self, fn)
357 def _apply(self, fn):
358 for module in self.children():
--> 359 module._apply(fn)
360
361 def compute_should_use_set_data(tensor, tensor_applied):
/usr/local/lib/python3.6/dist-packages/torch/nn/modules/module.py in _apply(self, fn)
379 # `with torch.no_grad():`
380 with torch.no_grad():
--> 381 param_applied = fn(param)
382 should_use_set_data = compute_should_use_set_data(param, param_applied)
383 if should_use_set_data:
/usr/local/lib/python3.6/dist-packages/torch/nn/modules/module.py in convert(t)
608 if convert_to_format is not None and t.dim() == 4:
609 return t.to(device, dtype if t.is_floating_point() else None, non_blocking, memory_format=convert_to_format)
--> 610 return t.to(device, dtype if t.is_floating_point() else None, non_blocking)
611
612 return self._apply(convert)
RuntimeError: CUDA error: device-side assert triggered
```
## Actual behavior
When I reduce max_encoder_length to 80, it works fine.
But when I increase max_encoder_length to 100 or more, the error occurs.
## Expected behavior
The code should run properly with max_encoder_length = 512
| 02-23-2021 13:41:44 | 02-23-2021 13:41:44 | Hi there. Could you please post the code you are using? The steps you are defining are too vague for us to efficiently reproduce the issue and help.<|||||>Hi @sgugger
It was my fault: I updated max_encoder_length and re-ran only the updated cells.
The issue does not happen when I restart the kernel on Google Colab.
I think you can close the issue.
Thank you.<|||||>Glad you could resolve your issue! |
transformers | 10,349 | closed | Padding of bbox input in LayoutLM | I've been working with LayoutLM and had some issues with different lengths of samples in a batch.
```
ValueError: Unable to create tensor, you should probably activate truncation and/or padding with 'padding=True' 'truncation=True' to have batched tensors with the same length.
```
It turns out that `transformers.tokenization_utils_base.PreTrainedTokenizerBase._pad` does not pad the `bbox` items to maximum length in the batch and eventually trying to join differently sized lists into a tensor crashes.
One way to solve this is to pad all required items when generating samples like e.g. the official implementation does for the [FUNSD data set](https://github.com/microsoft/unilm/blob/master/layoutlm/layoutlm/data/funsd.py#L317-L331). I also implemented it this way for my use case and it seems to work well.
But this is basically repeating the pad functionality and I was wondering if the `_pad` method should allow for additional required input like the `bbox`es are for LayoutLM. I'm happy to work on a PR for that but also wanted to check if there's anything more to consider. | 02-23-2021 12:49:05 | 02-23-2021 12:49:05 | Hi! This is a fair request, indeed. The `bbox` values should definitely be padded/truncated by the tokenizer.
I think here we would welcome a PR adding this functionality for the LayoutLM tokenizer, and then think of a way to upstream it to be handled by the tokenizer directly, for LayoutLM but also for any other model that requires special inputs.
Would you be open to contributing a PR which adds this functionality to LayoutLM?<|||||>LayoutLM would really benefit from its own tokenizer indeed. Currently you have to use `BertTokenizer`, but this let's you only tokenize text, not really prepare data for the model.
A nice API (in my opinion) would look something like:
`LayoutLMTokenizer(image: PIL.Image, words: List[str], bounding_boxes: List[List[int]], labels: List[str])`
The tokenizer then automatically takes care of normalizing the bounding boxes (users can still choose which OCR engine to use to get words and bounding boxes), transform the words and labels into token-level `input_ids`, `bbox`, padding (as you mention), etc.
The functionality implemented in the function you refer to ([`convert_examples_to_features`](https://github.com/microsoft/unilm/blob/23a7ea35b55279a171a118ac767e863aa92e692c/layoutlm/layoutlm/data/funsd.py#L206)) could be added by overwriting the `prepare_for_model` method, and the padding functionality by overwriting `_pad`.
<|||||>I'm definitely up for working on this.
Thanks a lot for the suggestions @NielsRogge , I see you already did some great work in improving the layoutLM implementation :+1: .
What I do not fully understand is what we would need the `image` for at this stage. Can you clarify?
I'd also like to understand the better the normalization of bounding boxes you mention. If I understand correctly, the bounding boxes generated by the OCR engine may be split further according to whether the tokenizer splits the text inside a box (the official layoutLM code seems to [repeat the same bounding box](https://github.com/microsoft/unilm/blob/master/layoutlm/layoutlm/data/funsd.py#L252) in those cases).
Afaik, most OCR engines do some kind of tokenization already so the additional splitting may not be optimal for all use cases (it is not for mine because of some downstream tasks). There should either be a way to revert that splitting or disable it. What do you think?<|||||>> Thanks a lot for the suggestions @NielsRogge , I see you already did some great work in improving the layoutLM implementation
Thanks!
> What I do not fully understand is what we would need the `image` for at this stage. Can you clarify?
The image can be used to normalize the bounding boxes for the tokens, based on the width and height of the image. If we decide to let LayoutLMTokenizer to handle the normalization, then it should receive the image.
> I'd also like to understand the better the normalization of bounding boxes you mention. If I understand correctly, the bounding boxes generated by the OCR engine may be split further according to whether the tokenizer splits the text inside a box (the official layoutLM code seems to [repeat the same bounding box](https://github.com/microsoft/unilm/blob/master/layoutlm/layoutlm/data/funsd.py#L252) in those cases).
An OCR engine (like Google's Tesseract) recognizes words and corresponding bounding boxes in an image. However, LayoutLM (like BERT) uses wordpieces, so if a word like San Francisco is tokenized into ['San', 'Fran', '##Cisco'], then we need to repeat the bounding box for every subword token indeed.
> Afaik, most OCR engines do some kind of tokenization already so the additional splitting may not be optimal for all use cases (it is not for mine because of some downstream tasks). There should either be a way to revert that splitting or disable it. What do you think?
Do they? I used Google's Tesseract and it just recognizes words.
<|||||>> The image can be used to normalize the bounding boxes for the tokens, based on the width and height of the image. If we decide to let LayoutLMTokenizer to handle the normalization, then it should receive the image.
So you mean normalization in terms of bringing the coordinates to the same scale? That totally makes sense. Is that really something the tokenizer should do? Or should we expect the user to supply boxes with correctly scaled values?
If we want to do the scaling here, do we really need the full image for that and also restrict it to e.g. PIL/Pillow images? Some may use for example opencv where image objects are numpy arrays and don't have `height` and `width` attributes. The height and width values could also be provided as (optional) paramters for the tokenizer.
> An OCR engine (like Google's Tesseract) recognizes words and corresponding bounding boxes in an image. However, LayoutLM (like BERT) uses wordpieces, so if a word like San Francisco is tokenized into ['San', 'Fran', '##Cisco'], then we need to repeat the bounding box for every subword token indeed.
Actually, I referred to this recognition of words by OCR as tokenization as well - after all the text on a document/image could also be delivered as just one large string. Maybe that wasn't the right choice of words, sorry for the confusion.
I totally get that wordpiece "takes it a step further" and that this makes sense. What I wanted to clarify is how that should be dealt with. It might be confusing to a user to get a larger amount of bounding boxes after tokenization. I guess this is in line with the other tokenizers but it should at least be documented very clearly.<|||||>> So you mean normalization in terms of bringing the coordinates to the same scale? That totally makes sense. Is that really something the tokenizer should do? Or should we expect the user to supply boxes with correctly scaled values?
That's a design decision. We could choose to let the tokenizer handle normalization or not. And yeah maybe PIL images is too strict.
> I guess this is in line with the other tokenizers but it should at least be documented very clearly.
If we add bounding boxes for every token, we should add it to the documentation indeed!<|||||>I created a PR https://github.com/huggingface/transformers/pull/10719 in which I added the functionality to repeat bounding boxes for text that is split, also solving the padding problem that lead me here.
However, I ended up basically repeating a lot of code (also for the tests) and I'm not sure this is the nicest way to tackle the problem. Maybe it could make more sense to add an optional `additional_input` parameter for the base tokenizers to avoid all this repetition. There may be more models that need additional inputs than the `input_ids`.
I also removed the fast version of the tokenizer for now as I first would like to clarify with the maintainers if this is the right approach.
(I also added optional coordinate normalization)<|||||>@LysandreJik could you maybe give this a look and suggest how to proceed with this?<|||||>ping @LysandreJik . Would be great to get some feedback on my PR :slightly_smiling_face: <|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored. |
transformers | 10,348 | closed | BertForMaskedLM cannot be initialized from BERT checkpoints | When I try to load a BERT model from a TF checkpoint (via `transformers-cli convert`) into a `BertForMaskedLM`, I get the following warning:
```
Some weights of BertForMaskedLM were not initialized from the model checkpoint at `SZTAKI-HLT/hubert-base-cc` and are newly initialized:
['cls.predictions.bias', 'cls.predictions.transform.LayerNorm.bias', 'cls.predictions.transform.dense.weight',
'cls.predictions.transform.dense.bias', 'cls.predictions.decoder.weight', 'cls.predictions.transform.LayerNorm.weight']
You should probably TRAIN this model on a down-stream task to be able to use it for predictions and inference.
```
The model also performs poorly (as in: completely randomly) in masked LM. If I load a "named" model, such as `bert-base-cased`, I do not get the warning and masked LM works OK.
This is all to be expected if the tensors mentioned in the warning are indeed not part of the converted model. The question then is threefold:
1. Why aren't they? MaskedLM is one of the training tasks for a BERT model, and users rightly expect that it works (I have already received two reports for my model to that effect); i.e. that they can initialize a `BertForMaskedLM` model from a BERT checkpoint / HF model without any problems.
2. How can I convert the model so that it includes said tensors? To my knowledge, there are no options in `transformers-cli convert` that would enable me to do so.
3. The [documentation](https://huggingface.co/transformers/converting_tensorflow_models.html) should warn people of this (and better yet, describe how to convert all tensors).
## Environment info
- `transformers` version: 4.3.2
- Platform: Linux-5.4.0-60-generic-x86_64-with-debian-buster-sid
- Python version: 3.7.9
- PyTorch version (GPU?): 1.6.0 (True)
- Tensorflow version (GPU?): 2.3.0 (False)
- Using GPU in script?: N/A
- Using distributed or parallel set-up in script?: N/A
### Who can help
@LysandreJik
@sgugger
## Information
Model I am using (Bert, XLNet ...): SZTAKI-HLT/hubert-base-cc (BERT)
The problem arises when using:
* [X] the official example scripts: `transformer-cli convert`
* [ ] my own modified scripts:
The tasks I am working on is:
* [X] an official GLUE/SQUaD task: masked LM
* [ ] my own task or dataset:
## To reproduce
Steps to reproduce the behavior:
1. `BertForMaskedLM.from_pretrained('SZTAKI-HLT/hubert-base-cc')`
2. Observe the warning messages
3. Try to use it for masked LM
## Expected behavior
Conversion: the ability to convert tensors for the training tasks
Usage: no warning messages and same MLM / NSP performance as with the official TF BERT code | 02-23-2021 12:36:09 | 02-23-2021 12:36:09 | Note that the way to load TF weights in a PyTorhc model while using the hub is just to do:
```
model = BertForMaskedLM.from_pretrained("SZTAKI-HLT/hubert-base-cc", from_tf=True)
```
It does look like this model has some weights missing (I still get a warning but with a few less weights than you) but if that's blocking you, there is nothing we can do about it: you should contact the author of the model on the hub (the same weights are missing in the TF version).<|||||>@sgugger **I** am the author of the model. :) And yes, the weights are missing, hence this issue; as described above, I used `transformers-cli convert` to convert the original BERT TF (1.5) checkpoint to Pytorch, and the script apparently did not convert all the weights.
I might be wrong about this, but I was under the impression that the `from_tf=True` is to be used when the model has both PT and TF versions uploaded to the hub, not for importing an original BERT checkpoint.<|||||>Ah sorry I misunderstood your problem, sorry! I though you were trying to use the `transformers-cli` on the PyTorch model file of the hub. Not sure what the problem is with the conversion script. Maybe @LysandreJik will have an idea?<|||||>Hello! Thank your for opening an issue. As you both have said, there seems to be an error with the conversion.
Do you mind letting me know how you obtained your checkpoint? For example, is it one of the checkpoints available on google-research/bert, or is a custom one?
All the checkpoints available on google-research/bert should convert without any issue.<|||||>OK, I have experimented a bit and it seems that actually conversion works -- it is possible that I created a `BertModel` and saved my model as that instead of `BertForPretraining`. In any case, I have updated my model(s) and the issue is moot.
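In other words, something along these lines (paths are placeholders):
```python
from transformers import BertForPreTraining

model = BertForPreTraining.from_pretrained("path/to/converted-checkpoint")
model.save_pretrained("path/to/output")  # keeps the cls.predictions (MLM) head weights
```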
Before closing it, however, one final question. I created the Pytorch model first. When I converted that to TF2 via
```
TFBertForPreTraining.from_pretrained('SZTAKI-HLT/hubert-base-cc', from_pt=True)
```
, I got the following warning:
```
Some weights of the PyTorch model were not used when initializing the TF 2.0 model TFBertForPreTraining: ['cls.predictions.decoder.bias', 'bert.embeddings.position_ids']
```
The model, when used for masked LM, behaves identically to the Pytorch model, so I am wondering why they store different tensors (given that these tensors come from the original TF checkpoint) and if the model will be alright without them.<|||||>Glad you could convert it!!
These warnings aren't important, the bias is included in another weight and the position IDs are a buffer that does not need to be created in TensorFlow.
I'm addressing these warnings in #10397. |
transformers | 10,347 | closed | [Benchmark] | # 🖥 Benchmarking `transformers`
## Benchmark
Which part of `transformers` did you benchmark?
## Set-up
What did you run your benchmarks on? Please include details, such as: CPU, GPU? If using multiple GPUs, which parallelization did you use?
## Results
Put your results here! | 02-23-2021 12:09:58 | 02-23-2021 12:09:58 | |
transformers | 10,346 | closed | Custom tokenizer with run_mlm script | ## Environment info
<!-- You can run the command `transformers-cli env` and copy-and-paste its output below.
Don't forget to fill out the missing fields in that output! -->
- `transformers` version: 4.3.2
- Platform: Linux-5.9.16-1-MANJARO-x86_64-with-glibc2.10
- Python version: 3.8.5
- PyTorch version (GPU?): 1.7.1 (True)
- Tensorflow version (GPU?): not installed (NA)
- Using GPU in script?: Yes
- Using distributed or parallel set-up in script?: NO
### Who can help
@LysandreJik, @n1t0
## Information
Model I am using (Bert):
The problem arises when using:
* [ ] the official example scripts: (give details below)
* [ ] my own modified scripts: (give details below)
The tasks I am working on is:
* [ ] my own task or dataset: (give details below)
## To reproduce
Steps to reproduce the behavior:
1. I follow the official guide (https://huggingface.co/docs/tokenizers/python/latest/pipeline.html#example) to train and save a BERT WordPiece tokenizer on a custom corpus.
2. I use this tokenizer to train a bert model from scratch using the run_mlm script
3.
`python run_mlm.py --output_dir=../Data/model/ --model_type=bert --mlm_probability 0.1 --tokenizer_name=../Data/tokenizer --learning_rate 1e-4 --do_train --train_file ../Data/corpus.txt --gradient_accumulation_steps=4 --num_train_epochs 100 --per_gpu_train_batch_size 2 --save_steps 50000 --seed 42 --config_name=../Data/config/ --line_by_line --do_eval --max_seq_length=8 --logging_steps 5000 --validation_split_percentage 20 --save_steps 50000 --save_total_limit 10`
```python
import pandas as pd
from tokenizers import Tokenizer
from tokenizers.models import WordPiece
from tokenizers.pre_tokenizers import Whitespace
from tokenizers import normalizers
from tokenizers.normalizers import NFD, StripAccents
from tokenizers.processors import TemplateProcessing
from tokenizers.trainers import WordPieceTrainer

vocab_file = '../Data/tokenizer/config.json'
corpus_file = '../Data/corpus.txt'
df = pd.read_csv(corpus_file)

bert_tokenizer = Tokenizer(WordPiece(unk_token="[UNK]"))
bert_tokenizer.normalizer = normalizers.Sequence([NFD(), StripAccents()])
bert_tokenizer.pre_tokenizer = Whitespace()
bert_tokenizer.post_processor = TemplateProcessing(
    single="[CLS] $A [SEP]",
    pair="[CLS] $A [SEP] $B:1 [SEP]:1",
    special_tokens=[("[CLS]", 1), ("[SEP]", 2)],
)
trainer = WordPieceTrainer(vocab_size=25000, min_frequency=3, special_tokens=["[UNK]", "[CLS]", "[SEP]", "[PAD]", "[MASK]"])
bert_tokenizer.train_from_iterator(df.query_text.to_list(), trainer)
bert_tokenizer.save(vocab_file)
```
My training configuration is as follows:
```json
{
  "architectures": [
    "BertForMaskedLM"
  ],
  "attention_probs_dropout_prob": 0.1,
  "gradient_checkpointing": false,
  "hidden_act": "gelu",
  "hidden_dropout_prob": 0.1,
  "hidden_size": 128,
  "initializer_range": 0.02,
  "intermediate_size": 256,
  "layer_norm_eps": 1e-12,
  "max_position_embeddings": 1536,
  "model_type": "bert",
  "num_attention_heads": 4,
  "num_hidden_layers": 4,
  "pad_token_id": 0,
  "position_embedding_type": "absolute",
  "transformers_version": "4.3.2",
  "type_vocab_size": 2,
  "use_cache": true,
  "vocab_size": 25000
}
```
## Expected behavior
<!-- A clear and concise description of what you would expect to happen. -->
I am getting the error
` loading configuration file ../Data/tokenizer/config.json
Traceback (most recent call last):
File "run_mlm.py", line 457, in <module>
main()
File "run_mlm.py", line 276, in main
tokenizer = AutoTokenizer.from_pretrained(model_args.tokenizer_name, **tokenizer_kwargs)
File ".../lib/python3.8/site-packages/transformers/models/auto/tokenization_auto.py", line 362, in from_pretrained
config = AutoConfig.from_pretrained(pretrained_model_name_or_path, **kwargs)
File ".../lib/python3.8/site-packages/transformers/models/auto/configuration_auto.py", line 379, in from_pretrained
raise ValueError(
ValueError: Unrecognized model in ../Data/tokenizer. Should have a `model_type` key in its config.json, or contain one of the following strings in its name: wav2vec2, convbert, led, blenderbot-small, retribert, mt5, t5, mobilebert, distilbert, albert, bert-generation, camembert, xlm-roberta, pegasus, marian, mbart, mpnet, bart, blenderbot, reformer, longformer, roberta, deberta, flaubert, fsmt, squeezebert, bert, openai-gpt, gpt2, transfo-xl, xlnet, xlm-prophetnet, prophetnet, xlm, ctrl, electra, encoder-decoder, funnel, lxmert, dpr, layoutlm, rag, tapas` | 02-23-2021 12:07:11 | 02-23-2021 12:07:11 | Hi! The issue here is that the `AutoTokenizer` has no idea what is the type of your tokenizer: it's looking for the `model_type` specified in the `config.json`, but it seems it cannot find it.
Could you show us the results of `ls ../Data/tokenizer/`, and if the file `config.json` is in it, could you show us the exact content of the JSON file?
Thanks a lot!<|||||>I am expecting the config.json and vocabulary files to be saved by running `bert_tokenizer.save(vocab_file)` (Please check the attached code). But unfortunately it saves a json file containing only the vocabulary. I tried the function `bert_tokenizer.save_model`, but got an error saying Tokenizer don't have such a function. So there is no configuration files. But only a vocabulary json file. If I give a directory path as input to `bert_tokenizer.save`, it gives me error `Exception: Is a directory (os error 21)`.<|||||>The `bert_tokenizer.save(vocab_file)` method does not save the configuration as the configuration is linked to the model. It is unfortunately currently impossible to use the `AutoTokenizer` without having the model `config.json` in the same folder, which is a hard limitation of the `AutoTokenizer`.
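(For what it's worth, a possible stop-gap, shown here only as an untested sketch, is to load the saved `tokenizers` file directly with `PreTrainedTokenizerFast` instead of `AutoTokenizer`:)
```python
from transformers import PreTrainedTokenizerFast

# the path matches the `vocab_file` used above; special tokens must be re-declared explicitly
tokenizer = PreTrainedTokenizerFast(
    tokenizer_file="../Data/tokenizer/config.json",
    unk_token="[UNK]", cls_token="[CLS]", sep_token="[SEP]",
    pad_token="[PAD]", mask_token="[MASK]",
)
```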
We are aware of this limitation and it is part of the immediate roadmap. Expect a change in the coming weeks related to that issue.
Thank you for your understanding.<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored.<|||||>Hi @LysandreJik! Has there been any change on this subject?<|||||>There hasn't been any change - but we've been freeing some time to work on this subject. I would expect this to be resolved in 2 or 3 weeks.<|||||>Awesome, thanks a lot for your reply :) <|||||>I also encountered this problem, how to solve it<|||||>Using a recent version of the library should now work for these use-cases.
Could you try using the `master` branch to see if it fixes your issue? You should use it to both save your tokenizer, as well as to load it in the script. If it doesn't work, please provide the code you're using as well as the full stack trace. Thank you! |
transformers | 10,345 | closed | MarianMT - ONNX only accepts fixed input despite setting dynamic axes | ## Environment info
<!-- You can run the command `transformers-cli env` and copy-and-paste its output below.
Don't forget to fill out the missing fields in that output! -->
- `transformers` version: 3.1.0
- Platform: Linux-5.8.0-43-generic-x86_64-with-glibc2.29 (Ubuntu 20.04.2)
- Python version: 3.8.3
- PyTorch version (GPU?): 1.6.0 (True)
- Tensorflow version (GPU?): not installed (NA)
- Using GPU in script?: Yes (No too, error persists in both cases)
- Using distributed or parallel set-up in script?: No
### Who can help
marian: @patrickvonplaten, @patil-suraj
## Information
Model I am using (Bert, XLNet ...): Helsinki-NLP/opus-mt-en-hi (bug persists for other languages too)
The problem arises when using:
* [ ] my own modified scripts: (give details below)
I slightly modified the `convert_graph_to_onnx.py` script with the code snippet given below (the call to `export` is exactly the same). Apparently `torch.triu()` is not supported for ONNX conversion, so following prior PyTorch issues such as [#32968](https://github.com/pytorch/pytorch/issues/32968), I modified the script, resulting in successful ONNX conversion of the models.
```python
torch_triu = torch.triu
def triu_onnx(x, diagonal=0):
    l = x.shape[0]
    arange = torch.arange(l, device=x.device)
    mask = arange.expand(l, l)
    arange = arange.unsqueeze(-1)
    if diagonal:
        arange = arange + diagonal
    mask = mask >= arange
    return x.masked_fill(mask == 0, 0)
torch.triu = triu_onnx

export(
    nlp.model,
    model_args,
    f=output.as_posix(),
    input_names=ordered_input_names,
    output_names=output_names,
    dynamic_axes=dynamic_axes,
    do_constant_folding=True,
    use_external_data_format=use_external_format,
    enable_onnx_checker=True,
    opset_version=opset,
)
torch.triu = torch_triu
```
The tasks I am working on is:
* Simple Machine Translation
## To reproduce
Steps to reproduce the behavior:
1. ```python convert_graph_to_onnx.py --framework pt --model Helsinki-NLP/opus-mt-en-hi onnx-models/opus-mt-en-hi.onnx```
2.
```
from transformers import AutoTokenizer
from onnxruntime import ExecutionMode, InferenceSession, SessionOptions
import numpy as np
tok_name = 'Helsinki-NLP/opus-mt-en-hi'
model_name = 'onnx-models/opus-mt-en-hi.onnx'
tokenizer = AutoTokenizer.from_pretrained(tok_name)
options = SessionOptions()
options.intra_op_num_threads = 1
options.execution_mode = ExecutionMode.ORT_SEQUENTIAL
session = InferenceSession(model_name, options)
tokens = tokenizer.encode_plus('Testing onnx conversion through a sample input for machine translation.')
tokens = {name: np.atleast_2d(value) for name, value in tokens.items()}
op = session.run(None, tokens)
```
3. Stack Trace
```
---------------------------------------------------------------------------
RuntimeException Traceback (most recent call last)
<ipython-input-96-d76889a45083> in <module>
14 tokens = tokenizer.encode_plus('Testing onnx conversion through a sample input for machine translation.')
15 tokens = {name: np.atleast_2d(value) for name, value in tokens.items()}
---> 16 op = session.run(None, tokens)
~/.pyenv/versions/3.8.3/lib/python3.8/site-packages/onnxruntime/capi/onnxruntime_inference_collection.py in run(self, output_names, input_feed, run_options)
122 output_names = [output.name for output in self._outputs_meta]
123 try:
--> 124 return self._sess.run(output_names, input_feed, run_options)
125 except C.EPFail as err:
126 if self._enable_fallback:
```
```
RuntimeException: [ONNXRuntimeError] : 6 : RUNTIME_EXCEPTION : Non-zero status code returned while running Reshape node. Name:'Reshape_62' Status Message: /onnxruntime_src/onnxruntime/core/providers/cpu/tensor/reshape_helper.h:42 onnxruntime::ReshapeHelper::ReshapeHelper(const onnxruntime::TensorShape&, std::vector<long int>&) gsl::narrow_cast<int64_t>(input_shape.Size()) == size was false. The input tensor cannot be reshaped to the requested shape. Input shape:{1,13}, requested shape:{5}
```
<!-- If you have code snippets, error messages, stack traces please provide them here as well.
Important! Use code tags to correctly format your code. See https://help.github.com/en/github/writing-on-github/creating-and-highlighting-code-blocks#syntax-highlighting
Do not use screenshots, as they are hard to read and (more importantly) don't allow others to copy-and-paste your code.-->
## Expected behavior
<!-- A clear and concise description of what you would expect to happen. -->
Ideally the input should have smoothly passed through the onnx converted model during inference, but it doesn't.
Possible useful information
```
Dynamic Axes:
{'input_ids': {0: 'batch', 1: 'sequence'},
'attention_mask': {0: 'batch', 1: 'sequence'},
'output_0': {0: 'batch', 1: 'sequence'},
'output_1': {0: 'batch', 1: 'sequence'}}
```
```
Generated inputs order: ['input_ids', 'attention_mask']
```
**My lead is that** -
Exporting is not taking into account the dynamic axes for some reason, when any Marian Mt model is being used.
The error also notes a requested shape of size 5, which is the sequence length of the dummy input (line 196 in `convert_graph_to_onnx.py`) used while converting to ONNX.
Notably, passing an input with sequence length 5 works perfectly fine.
Moreover, this script works perfectly for standard models like distilbert for both model conversion, and inference. So it's surely some model-specific problem.
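One way to sanity-check this lead (a sketch; the model path matches the export command above):
```python
import onnx

model = onnx.load("onnx-models/opus-mt-en-hi.onnx")
for inp in model.graph.input:
    dims = [d.dim_param or d.dim_value for d in inp.type.tensor_type.shape.dim]
    print(inp.name, dims)  # dynamic axes show up as names like 'batch'/'sequence', fixed ones as plain ints
```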
Will be really helpful to get a fix for this, especially since there are numerous Marian mt models so it can have a larger impact! | 02-23-2021 11:23:46 | 02-23-2021 11:23:46 | pinging @mfuntowicz, our `onnx` expert.<|||||>Any lead on the solution? @mfuntowicz, @patrickvonplaten, @patil-suraj <|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored. |
transformers | 10,344 | closed | Fix broken examples/seq2seq/README.md markdown | # What does this PR do?
This PR fixes broken markdown in examples/seq2seq/README.md.
Fixes # (issue)
## Before submitting
- [x] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/master/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/master/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors which may be interested in your PR.
| 02-23-2021 10:58:43 | 02-23-2021 10:58:43 | |
transformers | 10,343 | closed | Where can we find the `RAG` implementation? | I noticed that `transformers` included the implementation for `DPR`. But for `RAG`, I only find a [demo](https://huggingface.co/rag/). Is there source code for `RAG`? Or do you know where Facebook's source code for `RAG` is? Thanks | 02-23-2021 06:41:28 | 02-23-2021 06:41:28 | The implementation can be found [here](https://github.com/huggingface/transformers/tree/master/src/transformers/models/rag).<|||||>@NielsRogge Thanks! That's it. |
transformers | 10,342 | closed | DialoGPT tokenizer config issue | ## Environment info
- `transformers` version: 4.3.2
- Platform: Darwin-19.3.0-x86_64-i386-64bit
- Python version: 3.6.5
- PyTorch version (GPU?): 1.6.0 (False)
- Tensorflow version (GPU?): not installed (NA)
- Using GPU in script?: No
- Using distributed or parallel set-up in script?: No
### Who can help
Models:
- gpt2: @patrickvonplaten, @LysandreJik
Library:
- tokenizers: @n1t0, @LysandreJik
## Information
Model I am using (DialoGPT-small.):
When I load the tokenizer, its `model_max_length` comes back as infinite.
```
from transformers import AutoModelForCausalLM, AutoTokenizer
import torch
tokenizer = AutoTokenizer.from_pretrained("microsoft/DialoGPT-small")
model = AutoModelForCausalLM.from_pretrained("microsoft/DialoGPT-small")
max_len = tokenizer.model_max_length
```
## Expected behavior
Before, it was 1024.
Is this some recent change?
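For now I am working around it by overriding the value after loading (assuming 1024 is still the right limit for this model):
```python
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("microsoft/DialoGPT-small")
if tokenizer.model_max_length > 10**6:  # currently comes back as a huge sentinel value
    tokenizer.model_max_length = 1024
```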
| 02-23-2021 06:36:55 | 02-23-2021 06:36:55 | Hi! This change may have originated from the move to the git-based repos. @patrickvonplaten and I have just modified the DialoGPT tokenizer configuration to have 1024 as model max length, you shouldn't have to do anything but re-run your script to see it updated.<|||||>@LysandreJik Don't hesitate to reference the url for the commit on huggingface.co for future reference
Here I believe it's https://huggingface.co/microsoft/DialoGPT-small/commit/9fb5c2d6a01395898bfd90acce2dbec1537730f1<|||||>Indeed! Here are the commits:
`DialoGPT-small`: [huggingface@364722e](https://huggingface.co/microsoft/DialoGPT-small/commit/364722ef15f5c04dcb9a57d3b77815bbc1d51efc) and [huggingface@9fb5c2d](https://huggingface.co/microsoft/DialoGPT-small/commit/364722ef15f5c04dcb9a57d3b77815bbc1d51efc)
`DialoGPT-medium`: [huggingface@e84a3e](https://huggingface.co/microsoft/DialoGPT-medium/commit/e84a3e0adc90aabc6b57e59318e15bf4b733eedc)
`DialoGPT-large`: [huggingface@acc7ea](https://huggingface.co/microsoft/DialoGPT-large/commit/acc7eaf98122bc6922976182b4d365d650f179b3)<|||||>Thanks @LysandreJik for quick fix. |
transformers | 10,341 | closed | Translate English into Japanese using mbart | Transformers version: 4.4.0.dev0
Hello, I am trying to translate English into Japanese (en-ja). I have confirmed that there is no error in my target and source content. However, when I try to translate, the predictions are always in English. What should I do?
I only changed config.json as shown below:
"task_specific_params": {
"translation_en_to_ja": { "decoder_start_token_id": 250020}
}
I ran run_seq2seq.py with the following command:
python run_seq2seq.py
--model_name_or_path facebook/mbart-large-cc25
--do_train
--do_eval
--do_predict
--task translation_en_to_ja
--source_lang en_XX
--target_lang ja_XX
--train_file train.json
--validation_file val.json
--test_file test.json
--output_dir result
--per_device_train_batch_size=4
--per_device_eval_batch_size=4
--overwrite_output_dir
--predict_with_generate
The results of the model prediction are as follows:
test_bleu = 1.665
test_gen_len = 41.0
test_loss = 4.2494
test_mem_cpu_alloc_delta = 0MB
test_mem_cpu_peaked_delta = 8MB
test_runtime = 77.6885
test_samples = 4
test_samples_per_second = 0.051
Am I not setting the config.json correctly? Or are there other things that need to be set up?
Looking forward to your reply.
@patil-suraj
| 02-23-2021 05:12:56 | 02-23-2021 05:12:56 | Hi @DUT-Tjy
`facebook/mbart-large-cc25` is not fine-tuned for translation, it's a pretrained model, which should be fine-tuned if you want to use it for translation. You could use the `mbart-large-50-one-to-many-mmt` or `mbart-large-50-many-to-many-mmt` model for `en-ja` translation, these are fine-tuned multilingual translation models.
https://huggingface.co/models?filter=mbart-50
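A rough usage sketch for `en-ja` with the many-to-many checkpoint (the pattern follows the model card; treat the exact details as indicative):
```python
from transformers import MBartForConditionalGeneration, MBart50TokenizerFast

model = MBartForConditionalGeneration.from_pretrained("facebook/mbart-large-50-many-to-many-mmt")
tokenizer = MBart50TokenizerFast.from_pretrained("facebook/mbart-large-50-many-to-many-mmt")

tokenizer.src_lang = "en_XX"
inputs = tokenizer("Like just searching online?", return_tensors="pt")
generated = model.generate(**inputs, forced_bos_token_id=tokenizer.lang_code_to_id["ja_XX"])
print(tokenizer.batch_decode(generated, skip_special_tokens=True))
```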
We have stopped supporting the `task_specific_params` params. You should directly set the `decoder_start_token_id` in `config`, instead of `config.task_specific_params`. <|||||>Thank you for your reply!
I ran run_seq2seq.py again with the following command:
python transformers/examples/seq2seq/run_seq2seq.py
--model_name_or_path facebook/mbart-large-50-one-to-many-mmt
--do_predict
--task translation_en_to_ja
--source_lang en_XX
--target_lang ja_XX
--train_file train.json
--validation_file val.json
--test_file test.json
--output_dir predict
--per_device_train_batch_size=4
--per_device_eval_batch_size=4
--overwrite_output_dir
--predict_with_generate
However, the generated prediction content is not Japanese, but a mixture of languages, as follows:
Ik zal jullie vandaag leren hoe je onderzoek doet.
[وکٹر] بس انلاین تلاش کرنے کی طرح؟
En nee.
The contents of test.json are as follows
{"translation": {"ja": "オンライン調査みたいなものですか?", "en": "Like just searching online?"}}
{"translation": {"ja": "それも含みます。", "en": "Yes and no."}}
Is there something wrong with the command I set?
Looking forward to your reply.
@patil-suraj<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored.<|||||>@DUT-Tjy Hi, I also face this problem. I expect my generated sentences are only in the Vietnamese language, but they are mixed in both English and Vietnamese. Have you resolved it? If yes, could you please share the solution with me? |
transformers | 10,340 | open | tokenizer.Tokenizer compatibility with Inference API or Auto* classes | # 🚀 Feature request
Make tokenizers created with the tokenizers library compatible with the Inference API or Auto classes
## Motivation
I have trained a model on a specific domain by modeling a sequence generation problem as a language modeling problem to predict the next token in the set. The tokenizer associated with the model I used (TransformerXL) was not compatible with my domain since my tokens contained whitespace so I created my own using the `WordLevelTrainer` class in the `tokenizers` library. **Now that I have a complete working solution I would like to use this tokenizer and model in the huggingface Inference API, however it does not work because it requires the tokenizer associated with the model**. Making the `transformers` models compatible with `tokenizers` library could make all kinds of use cases outside of NLP possible with these libraries.
## Your contribution
Is it possible to hack the saved config for a tokenizer created through the `tokenizers` library to work directly with the `Auto` classes? If so I can document this approach for other users.
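For reference, the workaround I have been experimenting with (I am not sure this is the intended path) is to serialize the trained tokenizer and wrap it in `PreTrainedTokenizerFast`, so the rest of the stack sees a regular tokenizer:
```python
from transformers import PreTrainedTokenizerFast

# word_level_tokenizer is the tokenizers.Tokenizer I trained with WordLevelTrainer (placeholder name)
word_level_tokenizer.save("tokenizer.json")

wrapped = PreTrainedTokenizerFast(
    tokenizer_file="tokenizer.json",
    unk_token="[UNK]",  # assuming these special tokens match what the trainer used
    pad_token="[PAD]",
)
wrapped.save_pretrained("my-model-dir")  # saved alongside the model weights
```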
| 02-23-2021 01:39:13 | 02-23-2021 01:39:13 | |
transformers | 10,339 | closed | Problem with GPT2/DistilGPT2 prediction - dimension mismatch | ## Environment info
- `transformers` version: 3.4.0
- Platform: Linux-4.18.0-193.el8.x86_64-x86_64-with-glibc2.10
- Python version: 3.8.3
- PyTorch version (GPU?): 1.5.1+cu101 (True)
- Using GPU in script?: yes
- Using distributed or parallel set-up in script?: single gpu
### Who can help
Perhaps @patrickvonplaten, @LysandreJik could help?
## Information
Model I am using: GPT2/DistilGPT2
The problem arises when using:
* [ ] the official example scripts: (give details below)
* [x] my own modified scripts: (give details below)
The tasks I am working on is:
* [ ] an official GLUE/SQUaD task: (give the name)
* [x] my own task or dataset: (give details below)
## To reproduce
Steps to reproduce the behavior:
I'm able to train GPT2/DistilGPT2 successfully. However, during prediction, I consistently get the following error:
```
***** Running Prediction *****
Num examples = 1922
Batch size = 1
0%| | 0/1922 [00:00<?, ?it/s]Traceback (most recent call last):
File "../../models/jigsaw/tr-3.4//run_puppets.py", line 283, in <module>
main()
File "../../models/jigsaw/tr-3.4//run_puppets.py", line 212, in main
pred_results = trainer.predict(test_dataset = eval_dataset) # call predict to get access to both metrics and predictions
File "/u/ioana/.conda/envs/tr34/lib/python3.8/site-packages/transformers/trainer.py", line 1287, in predict
return self.prediction_loop(test_dataloader, description="Prediction")
File "/u/ioana/.conda/envs/tr34/lib/python3.8/site-packages/transformers/trainer.py", line 1353, in prediction_loop
preds_host = logits if preds_host is None else nested_concat(preds_host, logits, dim=0)
File "/u/ioana/.conda/envs/tr34/lib/python3.8/site-packages/transformers/trainer_pt_utils.py", line 47, in nested_concat
return type(tensors)(nested_concat(t, n, dim) for t, n in zip(tensors, new_tensors))
File "/u/ioana/.conda/envs/tr34/lib/python3.8/site-packages/transformers/trainer_pt_utils.py", line 47, in <genexpr>
return type(tensors)(nested_concat(t, n, dim) for t, n in zip(tensors, new_tensors))
File "/u/ioana/.conda/envs/tr34/lib/python3.8/site-packages/transformers/trainer_pt_utils.py", line 47, in nested_concat
return type(tensors)(nested_concat(t, n, dim) for t, n in zip(tensors, new_tensors))
File "/u/ioana/.conda/envs/tr34/lib/python3.8/site-packages/transformers/trainer_pt_utils.py", line 47, in <genexpr>
return type(tensors)(nested_concat(t, n, dim) for t, n in zip(tensors, new_tensors))
File "/u/ioana/.conda/envs/tr34/lib/python3.8/site-packages/transformers/trainer_pt_utils.py", line 49, in nested_concat
return torch.cat((tensors, new_tensors), dim=dim)
RuntimeError: Sizes of tensors must match except in dimension 3. Got 53 and 23
```
It doesn't seem to be a function of training (e.g., I've trained for 1-2-3 epochs, same results for prediction; the good news is that training seems to work perfectly, or at least, it seems to train and I get a model checkpoint).
Any hunch on where I should start looking for a problem? I have no experience with these models. I checked other issues that were closed and some indicated that may be an attention problem. Thanks in advance!
I've used the same scripts successfully with 10+ different models, including GPT.
PS: I may upgrade to the newer version of the library, but that requires some work on my side to update my code as sometimes the upgrades are not backward compatible... | 02-22-2021 22:35:25 | 02-22-2021 22:35:25 | Maybe @sgugger has seen that error previously?<|||||>Yes, and it has been fixed... but only in the more recent versions of Transformers.<|||||>Excellent, thank you guys, I'll work on upgrading tomorrow.<|||||>If it can help you, we have a [migration guide from versions v3 to v4](https://huggingface.co/transformers/migration.html#migrating-from-transformers-v3-x-to-v4-x).
Please let us know if you run into any issues not described here!<|||||>will take a look, thank you! My tasks are similar to text classification in Glue, so I usually start from the sample code and modify it accordingly.<|||||>These are some different bits that I found by glancing at the new sample code to run GLUE. @LysandreJik
- Columns in the train/dev/test datasets: Fill in the `task_to_keys` dictionary with appropriate column names
- The scripts assume the datasets have a column called `label`; label_to_id needs attention
- Data processing/tokenization changed the api
- The dataset ingestion changed (the code is in there, though, on an else branch if the task is not a glue task)
- Metric computation also changed<|||||>It also looks like I have to install `pip install datasets` separately. I don't think I had to do that before.<|||||>Thank you for mentioning all of this. I believe the changes you're seeing are related to the example scripts, which are not static, and not related to the core of the library.
The changes related to the core of the library here would be those that were applied to the `Trainer`; did you manage to run your previous example script with the latest version, after updating it w.r.t the migration guide?<|||||>Correct. Training works, I have to fix a few things related to prediction & metrics computation. I have some home-brewed code that computes metrics for test (which is missing in the original sample code). I'll figure it out soon. So far, easier than expected. The data ingestion simplified a lot and I'm actually surprised it worked :D <|||||>I figured out the (one of the?) problem: some name collapse on my side. Fixed. Fingers crossed it works now. In any case, not too painful to upgrade. But now I expect backward compatibility 🥇 since most of the big pieces have gone through lots of refactoring. <|||||>I fixed all the problems and I managed to reproduce some old results with BERT-finetuned model. Thanks for your help!<|||||>Glad you could get it to work!<|||||>@LysandreJik It turns out I still have a problem with fine-tuning GPT with padding and batching. I have the following lines in my code:
```
if tokenizer.pad_token is None:
tokenizer.pad_token = tokenizer.eos_token
config.pad_token_id = config.eos_token_id
```
which work fine for GPT2*. However, the padding fails in GPT. After some digging, I realized there is no eos_token so the statements above have no effect.
This is the tokenizer configuration:
```
{"unk_token": "<unk>", "model_max_length": 512, "name_or_path": "openai-gpt"}
```
Any advice on how to fix this? I'd like to run GPT with padding & batching if at all possible. Thanks!<|||||>The open-ai GPT model has neither a pad, nor bos, nor eos token, which means that you will have to set them yourself. I'd advise to either set the `<unk_token>` to the EOS token:
```python
tokenizer.eos_token = tokenizer.unk_token
```
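Applied to the padding logic in your script, that would look roughly like this (untested sketch):
```python
from transformers import OpenAIGPTConfig, OpenAIGPTTokenizer

tokenizer = OpenAIGPTTokenizer.from_pretrained("openai-gpt")
config = OpenAIGPTConfig.from_pretrained("openai-gpt")

# Reuse <unk> for both EOS and padding, since openai-gpt defines neither
tokenizer.eos_token = tokenizer.unk_token
tokenizer.pad_token = tokenizer.unk_token
config.pad_token_id = tokenizer.pad_token_id
```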
The other solution is to add a special token before fine-tuning as explained here: https://huggingface.co/transformers/internal/tokenization_utils.html?highlight=add_special_tokens#transformers.tokenization_utils_base.SpecialTokensMixin.add_special_tokens<|||||>Thank you for this suggestion, I'll give it a try. |
transformers | 10,338 | closed | Fix evaluation with label smoothing in Trainer | # What does this PR do?
There was a bug in Trainer when using label smoothing: the `compute_loss` function pops the labels out of the inputs so they couldn't be gathered. This PR fixes that.
Fixes #10309 | 02-22-2021 21:14:14 | 02-22-2021 21:14:14 | |
transformers | 10,337 | closed | [trainer] port metrics logging and saving methods to all example scripts | In an effort to make the examples easier to read, in https://github.com/huggingface/transformers/pull/10266 we added new trainer methods:
* `trainer.log_metrics` - to perform consistent formatting for logged metrics
* `trainer.save_metrics` - to save the metrics into a corresponding json file.
and deployed them in `run_seq2seq.py`.
The next task is do the same for all the other `examples/*/run_*.py` scripts.
Steps:
1. Study the diff for `run_seq2seq.py`. https://github.com/huggingface/transformers/pull/10266/files#diff-82bfb61a8b91894c2c2101734a6ab7b415be4ace5cd1e01b4c37663020d924ae
2. pick a script, e.g. `examples/multiple-choice/run_swag.py`
3. apply the same changes as in step 1 removing the explicit metrics printing lines and replacing them with the 2 new methods
4. test the modified script (usually `README.md` for that folder should have the instructions to do so) and see that your change works - train/eval/test metrics are printed using the new way and that `(train|eval|test|all)_results.json` are generated.
You can use a very short data sample (5 records is enough) by just adding: `--max_train_samples 5 --max_val_samples 5 --max_test_samples 5`
repeat for other scripts.
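In each script the hand-rolled metrics printing/saving block should collapse to just a couple of lines; a sketch of the target shape (see the `run_seq2seq.py` diff in step 1 for the exact form):
```python
train_result = trainer.train(resume_from_checkpoint=checkpoint)
metrics = train_result.metrics
metrics["train_samples"] = len(train_dataset)

trainer.log_metrics("train", metrics)
trainer.save_metrics("train", metrics)
```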
Thank you very much!
The metrics log should be similar to this, with the exception of using different scoring metrics:
```
02/16/2021 17:06:39 - INFO - __main__ - ***** train metrics *****
02/16/2021 17:06:39 - INFO - __main__ - epoch = 1.0
02/16/2021 17:06:39 - INFO - __main__ - init_mem_cpu_alloc_delta = 2MB
02/16/2021 17:06:39 - INFO - __main__ - init_mem_cpu_peaked_delta = 0MB
02/16/2021 17:06:39 - INFO - __main__ - init_mem_gpu_alloc_delta = 230MB
02/16/2021 17:06:39 - INFO - __main__ - init_mem_gpu_peaked_delta = 0MB
02/16/2021 17:06:39 - INFO - __main__ - total_flos = 2128GF
02/16/2021 17:06:39 - INFO - __main__ - train_mem_cpu_alloc_delta = 55MB
02/16/2021 17:06:39 - INFO - __main__ - train_mem_cpu_peaked_delta = 0MB
02/16/2021 17:06:39 - INFO - __main__ - train_mem_gpu_alloc_delta = 692MB
02/16/2021 17:06:39 - INFO - __main__ - train_mem_gpu_peaked_delta = 661MB
02/16/2021 17:06:39 - INFO - __main__ - train_runtime = 2.3114
02/16/2021 17:06:39 - INFO - __main__ - train_samples = 100
02/16/2021 17:06:39 - INFO - __main__ - train_samples_per_second = 3.028
02/16/2021 17:06:43 - INFO - __main__ - ***** val metrics *****
02/16/2021 17:13:05 - INFO - __main__ - epoch = 1.0
02/16/2021 17:13:05 - INFO - __main__ - eval_bleu = 24.6502
02/16/2021 17:13:05 - INFO - __main__ - eval_gen_len = 32.9
02/16/2021 17:13:05 - INFO - __main__ - eval_loss = 3.7533
02/16/2021 17:13:05 - INFO - __main__ - eval_mem_cpu_alloc_delta = 0MB
02/16/2021 17:13:05 - INFO - __main__ - eval_mem_cpu_peaked_delta = 0MB
02/16/2021 17:13:05 - INFO - __main__ - eval_mem_gpu_alloc_delta = 0MB
02/16/2021 17:13:05 - INFO - __main__ - eval_mem_gpu_peaked_delta = 510MB
02/16/2021 17:13:05 - INFO - __main__ - eval_runtime = 3.9266
02/16/2021 17:13:05 - INFO - __main__ - eval_samples = 100
02/16/2021 17:13:05 - INFO - __main__ - eval_samples_per_second = 25.467
02/16/2021 17:06:48 - INFO - __main__ - ***** test metrics *****
02/16/2021 17:06:48 - INFO - __main__ - test_bleu = 27.146
02/16/2021 17:06:48 - INFO - __main__ - test_gen_len = 41.37
02/16/2021 17:06:48 - INFO - __main__ - test_loss = 3.6682
02/16/2021 17:06:48 - INFO - __main__ - test_mem_cpu_alloc_delta = 0MB
02/16/2021 17:06:48 - INFO - __main__ - test_mem_cpu_peaked_delta = 0MB
02/16/2021 17:06:48 - INFO - __main__ - test_mem_gpu_alloc_delta = 0MB
02/16/2021 17:06:48 - INFO - __main__ - test_mem_gpu_peaked_delta = 645MB
02/16/2021 17:06:48 - INFO - __main__ - test_runtime = 5.1136
02/16/2021 17:06:48 - INFO - __main__ - test_samples = 100
02/16/2021 17:06:48 - INFO - __main__ - test_samples_per_second = 19.556
``` | 02-22-2021 21:13:28 | 02-22-2021 21:13:28 | Can I work on this issue?<|||||>Yes, please and thank you!<|||||>Hi @stas00,
Sometimes it is saved as a .txt file instead of a JSON file, as in the code below:
```python
output_train_file = os.path.join(training_args.output_dir, "train_results.txt")
if trainer.is_world_process_zero():
with open(output_train_file, "w") as writer:
logger.info("***** Train results *****")
for key, value in sorted(train_result.metrics.items()):
logger.info(f" {key} = {value}")
writer.write(f"{key} = {value}\n")
```
Should we keep it in JSON format only or write code for saving it as a txt file?
I have seen such behavior in [run_qa.py](https://github.com/huggingface/transformers/blob/master/examples/question-answering/run_qa.py#L483), [run_mlm.py](https://github.com/huggingface/transformers/blob/master/examples/language-modeling/run_mlm.py#L416), run_clm.py, run_plm.py, and many other run_**.py files<|||||>`.json` format everywhere please, as this method writes out is a json data that it writes.
You don't need to keep the code that did .txt files writing.
These are examples and by definition they have no API as such to maintain, other than ensuring we don't drop functionality if someone uses these examples for something. And this effort will make things consistent on the metrics logging/saving front.
Thank you.<|||||>Hi @stas00
In [run_ner.py](https://github.com/huggingface/transformers/blob/master/examples/token-classification/run_ner.py#L432) we are saving two things, `test_result` and `test_predictions`. Shall we add another function to the trainer for saving results apart from metrics, or just add an extra argument for saving it as a `test_predictions.json` file?
```python
output_test_results_file = os.path.join(training_args.output_dir, "test_results.txt")
if trainer.is_world_process_zero():
with open(output_test_results_file, "w") as writer:
for key, value in sorted(metrics.items()):
logger.info(f" {key} = {value}")
writer.write(f"{key} = {value}\n")
# Save predictions
output_test_predictions_file = os.path.join(training_args.output_dir, "test_predictions.txt")
if trainer.is_world_process_zero():
with open(output_test_predictions_file, "w") as writer:
for prediction in true_predictions:
writer.write(" ".join(prediction) + "\n")
```<|||||>Yes, `test_predictions.txt` is a different feature that was unintentionally dropped in some of the scripts and ideally should be restored as well.
Here is a request to restore it: https://github.com/huggingface/transformers/issues/10381
So if you'd like to tackle it as well together inside this one PR or as a separate PR that would be fantastic!
I'd say, it probably could be `trainer.save_predictions("test", predictions)`. And in which case please put it in all example scripts where it's relevant.
It probably should remain `test_predictions.txt`, as there is no data structure to it.
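A rough sketch of what `save_predictions` could look like (just to convey the idea; the name and signature are still up for discussion):
```python
import os

def save_predictions(self, split, predictions):
    # hypothetical Trainer helper mirroring save_metrics: writes one prediction per line
    if not self.is_world_process_zero():
        return
    path = os.path.join(self.args.output_dir, f"{split}_predictions.txt")
    with open(path, "w") as writer:
        writer.write("\n".join(predictions))
```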
Note, that the secondary helper Trainer methods have been just moved in master (so rebase your branch), e.g. `save_metrics`:
```
src/transformers/trainer.py: from .trainer_pt_utils import _get_learning_rate, log_metrics, metrics_format, save_metrics
src/transformers/trainer_pt_utils.py:def save_metrics(self, split, metrics):
```
<|||||>Sure @stas00,
I would love to work on that as well if possible!
I would create a separate PR because it will help me to make fewer mistakes.
Is there any way to test these four files faster: `run_tf_multiple_choice.py`, `run_xnli.py`, `run_tf_glue.py`, `run_tf_text_classification.py`? For the other files I figured out the testing from [test_examples.py](https://github.com/huggingface/transformers/blob/master/examples/test_examples.py)
<|||||>Great! thank you, @bhadreshpsavani!
> Is there any way to test these four files faster: `run_tf_multiple_choice.py`, `run_xnli.py`, `run_tf_glue.py`, `run_tf_text_classification.py`. For Other files testing script, I figure out from [test_examples.py](https://github.com/huggingface/transformers/blob/master/examples/test_examples.py)
If you'd like this could be your next challenge after this task. Ideally all examples should be tested, so missing tests, even one or two would be very welcome. We can create a separate issue and discuss the specifics if that appeals to you. If not, then please do not worry about it.
But otherwise, if you're testing manually, just use very short `--max_train_samples 5 --max_val_samples 5 --max_test_samples 5`<|||||>Ya @stas00, we can add the missing tests after this task.
I have noticed that `--max_train_samples 5 --max_val_samples 5 --max_test_samples 5` is mostly not working for scripts other than `run_seq2seq.py`.
It mostly giving this error
`ValueError: Some specified arguments are not used by the HfArgumentParser: ['--max_train_samples', '5', '--max_val_samples', '5', '--max_test_samples', '5']`<|||||>As you can tell I only ever use seq2seq for testing. You're absolutely correct that other examples don't have those.
I think it'd be greatly appreciated and very useful if other examples had a way to do the same. Let me check if others agree with that.<|||||>Oh and as you are doing an amazingly useful work syncing all examples to look and feel similar, there is one very crucial thing to sync and it's `templates/adding_a_new_example_script/` on which all new examples will be based, so we better have a good template to start with. I forgot to mention that earlier. Thank you!
<|||||>> As you can tell I only ever use seq2seq for testing. You're absolutely correct that other examples don't have those.
>
> I think it'd be greatly appreciated and very useful if other examples had a way to do the same. Let me check if others agree with that.
Created a dedicated issue for that now, should you be interested, @bhadreshpsavani
https://github.com/huggingface/transformers/issues/10423
Thank you!<|||||>Sure @stas00,
I will be happy to work on it! |
transformers | 10,336 | closed | [Benchmark] Converting a QA distilbert model to onnx - the f1 score plummet | # 🖥 Benchmarking onnx QA `transformers`
## Issue
Poor benchmark result (squadv2) of converted onnx QA model using run_squad_onnx benchmark.
## Context
As part of my day job, my goal is to convert our QA model to onnx to make them available in production in Java.
My teammate generated a distilbert model trained on SQUADv2. He reproduced SOTA result on the squad benchmark.
I want to evaluate the quality of my converted model.
## Set-up
### Hardware and OS
Training of pytorch model:
- OS: Debian
- pytorch version: 1.7.0 on GPU
Conversion and benchmarking:
- OS: MacOS
- onnxruntime: 1.6.0 (CPUExecutioner)
- onnx: 1.8.1
### To reproduce
To convert my model I used the convert fucntion as follow:
`convert('pt', <path-to-distilbert-model>, '/tmp/onnx', 12, 'distilbert-base-uncased-distilled-squad')`
This generate an onnx model succesfully. To benchmark the model I modified the legacy run_squad with onnx inference. You can find the source here: https://gist.github.com/pievalentin/c0007be4c2483bb113326fed0b1bddb2
I modified the inputs to make it ort compatible and I update the start_logit and end_logit to match the output generated by the inference session.
I run the script with the following config:
python run_squad_ort.py --framework=ort --ort_model_path=<path-to-onnx-model> --model_name_or_path=<path-to-original-pt-model> --model_type=question-answering --output_dir=/tmp/qa --max_seq_length=384 --doc_stride=128 --n_best_size=20 --max_answer_length=30 --data_dir=/eai/datasets/squad2 --tokenizer_name=distilbert-base-uncased-distilled-squad
here is the config.json of the pytorch model:
```
{
"_name_or_path": "distilbert-base-uncased-distilled-squad",
"activation": "gelu",
"architectures": [
"DistilBertForQuestionAnswering"
],
"attention_dropout": 0.1,
"dim": 768,
"dropout": 0.1,
"hidden_dim": 3072,
"initializer_range": 0.02,
"max_position_embeddings": 512,
"model_type": "distilbert",
"n_heads": 12,
"n_layers": 6,
"pad_token_id": 0,
"qa_dropout": 0.1,
"seq_classif_dropout": 0.2,
"sinusoidal_pos_embds": false,
"tie_weights_": true,
"vocab_size": 30522,
"return_dict": false
}
```
## Results
With the previous config, I get those poor result:
```
{
"exact": 8.641455403015245,
"f1": 11.271879679607238,
"total": 11873,
"HasAns_exact": 0.1349527665317139,
"HasAns_f1": 5.403344709172896,
"HasAns_total": 5928,
"NoAns_exact": 17.12363330529857,
"NoAns_f1": 17.12363330529857,
"NoAns_total": 5945,
"best_exact": 50.07159100480081,
"best_exact_thresh": 0,
"best_f1": 50.07310704960835,
"best_f1_thresh": 0
}
```
As if the weights of the model were discarded. So I am wondering what I am missing: is it a poor configuration, or is the benchmarking not done the right way? I saw this script, which is pretty similar to mine: https://github.com/onnx/models/blob/f6779d235046f28c0d3bf4ec25e4456c4689d2ce/text/machine_comprehension/bert-squad/dependencies/run_onnx_squad.py
So I would guess I must be missing something either in the conversion or the benchmarking.
| 02-22-2021 19:08:41 | 02-22-2021 19:08:41 | After further investigating, I found this useful repo: https://github.com/airKlizz/benchmark-for-transformers
Working on adding squad to this |
transformers | 10,335 | closed | Return cross-attention weights in generation function | # 🚀 Feature request
With the v4.2.0 release, generation can now return encoder and decoder self-attention weights, but it still doesn't return cross-attention weights. These weights are already computed and returned by the model's `forward` method, and just need to be returned by the `generate` method.
## Motivation
Visualizing cross-attention weights is useful for many applications such as token-alignment.
## Your contribution
I can submit a PR.
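For reference, once the weights are exposed the intended usage would look something like the sketch below (`cross_attentions` is the field name I am proposing, not an existing output):
```python
from transformers import AutoModelForSeq2SeqLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("facebook/bart-base")
model = AutoModelForSeq2SeqLM.from_pretrained("facebook/bart-base")

inputs = tokenizer("Some input text to align.", return_tensors="pt")
outputs = model.generate(
    inputs["input_ids"],
    return_dict_in_generate=True,
    output_attentions=True,
)
# Already available today: outputs.encoder_attentions and outputs.decoder_attentions.
# Proposed here: outputs.cross_attentions holding the decoder-over-encoder weights per step and layer.
```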
| 02-22-2021 18:58:23 | 02-22-2021 18:58:23 | Hi @patrickvonplaten!
Please let me know what you think about the feature. I can send a PR for it once you confirm.<|||||>Yes, this would be a nice addition indeed :-)<|||||>Happy to help you in your PR! |
transformers | 10,334 | closed | Loading from last checkpoint functionality in Trainer.train | Enhance resume_from_checkpoint argument of Trainer.train to accept
bool type. If True given, last saved checkpoint in self.args.output_dir
will be loaded. (#10280)
# What does this PR do?
Please look at [the feature request](https://github.com/huggingface/transformers/issues/10280) for full description of the changes. Thanks.
Fixes #10280
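Minimal intended usage, given an already-constructed `trainer` whose `args.output_dir` contains checkpoints (sketch):
```python
# New behaviour: pick up the most recent checkpoint found in args.output_dir.
trainer.train(resume_from_checkpoint=True)

# Unchanged behaviour: an explicit checkpoint path still works.
trainer.train(resume_from_checkpoint="output_dir/checkpoint-500")
```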
## Before submitting
- [x] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [x] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [x] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/master/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/master/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors which may be interested in your PR.
@sgugger | 02-22-2021 15:43:34 | 02-22-2021 15:43:34 | Raised changes. 1 reply to your review comment.
Do let me know if any other changes are required.
Thanks.<|||||>Thanks a lot for your contribution! |
transformers | 10,333 | closed | Clean TF ConvBert | # What does this PR do?
This PR aims to clean up TF ConvBert by adding explicit keyword arguments and typing, and by updating the documentation in the model implementation to make it easier to understand and read.
| 02-22-2021 15:03:23 | 02-22-2021 15:03:23 | I don't understand this PR, I disagree with your proposal of adding keyword names when they're not required and don't help readability.
Mentioned here https://github.com/huggingface/transformers/pull/9788#discussion_r564365003 and here https://github.com/huggingface/transformers/pull/9788#discussion_r564373664.
Same goes for the BART refactor.<|||||>I have applied the same changes as in the #9788 PR. Should I remove all the keyword names and keep only the typing parts? |
transformers | 10,332 | closed | bug in bert pretraining | ## Environment info
- `transformers` version: 4.3.2
- Platform: linux
- Python version: 3.7
- PyTorch version (GPU?): 1.7
- Tensorflow version (GPU?): -
- Using GPU in script?: -
- Using distributed or parallel set-up in script?: -
### Who can help
- albert, bert, xlm: @LysandreJik
## Information
In this line https://github.com/huggingface/transformers/blob/e73a3e1891775a915846cc0f24b7e9a26d6688fb/src/transformers/data/data_collator.py#L381 you need to change 0.5 to 0.1 to match the written description that only 10% of the time tokens should be replaced with randomly selected tokens.
## To reproduce
Nothing to reproduce.
## Expected behavior
The probabilities should match the BERT paper. | 02-22-2021 14:51:25 | 02-22-2021 14:51:25 | Duplicate of https://github.com/huggingface/transformers/issues/10285 |
transformers | 10,331 | closed | Add note to resize token embeddings matrix when adding new tokens to voc | Closes https://github.com/huggingface/transformers/issues/10319 | 02-22-2021 14:09:27 | 02-22-2021 14:09:27 | |
transformers | 10,330 | closed | [DeepSpeed] strange learning rate schedule in linear_schedule_with_warmup | ## Environment info
- `transformers` version: 4.3.2
- Platform: Linux
- Python version: 3.7.3
- PyTorch version (GPU?): 1.7 (yes)
- Tensorflow version (GPU?): N/A
- Using GPU in script?: yes
- Using distributed or parallel set-up in script?: yes (DeepSpeed)
### Who can help
@stas00
## Information
Model I am using (Bert, XLNet ...): GPT-2
The problem arises when using:
* [x] the official example scripts: (give details below)
* [ ] my own modified scripts: (give details below)
The tasks I am working on is:
* [ ] an official GLUE/SQUaD task: (give the name)
* [ ] my own task or dataset: (give details below)
## To reproduce
I am trying to use DeepSpeed with run_clm.py to train GPT-2 (from scratch).
I want to use the same scheduler (`linear_schedule_with_warmup`) and `optimizer` as the ones used in run_clm.py.
So, the `scheduler` and `optimizer` sections are removed in `examples/tests/deepspeed/ds_config.json`,
and the original ones are used.
My `ds_config.json` is as follows:
```
{
"fp16": {
"enabled": true,
"loss_scale": 0,
"loss_scale_window": 1000,
"hysteresis": 2,
"min_loss_scale": 1
},
"zero_optimization": {
"stage": 2,
"allgather_partitions": true,
"allgather_bucket_size": 2e8,
"overlap_comm": true,
"reduce_scatter": true,
"reduce_bucket_size": 2e8,
"contiguous_gradients": true,
"cpu_offload": true
},
"zero_allow_untested_optimizer": true,
"steps_per_print": 2000,
"wall_clock_breakdown": false
}
```
I ran the following command (using 4GPUs in one node):
$ cd examples/language-modeling/
$ deepspeed run_clm.py \
--output_dir=/somewhere \
--model_type=gpt2 \
--do_train \
--dataset_name wikitext \
--dataset_config_name wikitext-2-raw-v1 \
--tokenizer_name gpt2 \
--block_size=512 \
--num_train_epochs=5 \
--warmup_steps=100 \
--learning_rate=2e-5 \
--per_device_train_batch_size=32 \
--per_device_eval_batch_size=32 \
--save_steps=10000 \
--save_total_limit=5 \
--dataloader_drop_last \
--deepspeed ds_config.json \
--logging_steps=10
The learning rate schedule was strange. The following is a screenshot of tensorboard.

The initial learning rate was 1e-5, which should be 0. The learning rate went up to 2e-5 (it was OK), and went down to 0 around the middle (before the end), which was strange.
I tested a `WarmupDecayLR` scheduler in `deepspeed` (without `transformers`), and it seemed OK.
So, I think the utilization of this scheduler in `transformers` is strange.
## Expected behavior
The learning rate schedule through `deepspeed` should be the same as the original one used in `run_clm.py`.
| 02-22-2021 13:45:34 | 02-22-2021 13:45:34 | Incidentally a bug fix was just merged as part of: https://github.com/huggingface/transformers/pull/10310
- the scheduler step was getting run twice.
Could you please re-test with `transformers` master?
Thank you!
<|||||>Thank you for your response.
I have tested the latest version, but the following error occurred. There is something wrong in the lr initialization.
```
..
File "run_clm.py", line 376, in main
train_result = trainer.train(resume_from_checkpoint=checkpoint)
File "/home/.../anaconda3/envs/transformers-4.3.2/lib/python3.7/site-packages/transformers/trainer.py", line 1054, in train
train_result = trainer.train(resume_from_checkpoint=checkpoint)
..
File "/home/.../anaconda3/envs/transformers-4.3.2/lib/python3.7/site-packages/deepspeed/runtime/lr_schedules.py", line 728, in get_last_lr
assert getattr(self, '_last_lr', None) is not None, "need to call step() first"
AssertionError: need to call step() first
``` <|||||>Thank you for testing with the master version.
Please always post the full backtrace and the full command line you used - otherwise it's impossible to reproduce the problem and know how to fix it.
Thank you.<|||||>Sorry. I just ran the same command shown above, and the full error is as follows:
```
File "run_clm.py", line 417, in <module>
main()
File "run_clm.py", line 376, in main
train_result = trainer.train(resume_from_checkpoint=checkpoint)
File "/home/.../anaconda3/envs/transformers-4.3.2/lib/python3.7/site-packages/transformers/trainer.py", line 1054, in train
Traceback (most recent call last):
File "run_clm.py", line 417, in <module>
Traceback (most recent call last):
File "run_clm.py", line 417, in <module>
self._maybe_log_save_evaluate(tr_loss, model, trial, epoch)
File "/home/.../anaconda3/envs/transformers-4.3.2/lib/python3.7/site-packages/transformers/trainer.py", line 1135, in _maybe_log_save_evaluate
Traceback (most recent call last):
File "run_clm.py", line 417, in <module>
main()
File "run_clm.py", line 376, in main
if version.parse(torch.__version__) >= version.parse("1.4")
File "/home/.../anaconda3/envs/transformers-4.3.2/lib/python3.7/site-packages/deepspeed/runtime/lr_schedules.py", line 728, in get_last_lr
main()
File "run_clm.py", line 376, in main
assert getattr(self, '_last_lr', None) is not None, "need to call step() first"
AssertionErrortrain_result = trainer.train(resume_from_checkpoint=checkpoint): need to call step() first
File "/home/.../anaconda3/envs/transformers-4.3.2/lib/python3.7/site-packages/transformers/trainer.py", line 1054, in train
main()
File "run_clm.py", line 376, in main
train_result = trainer.train(resume_from_checkpoint=checkpoint)
File "/home/.../anaconda3/envs/transformers-4.3.2/lib/python3.7/site-packages/transformers/trainer.py", line 1054, in train
train_result = trainer.train(resume_from_checkpoint=checkpoint)
File "/home/.../anaconda3/envs/transformers-4.3.2/lib/python3.7/site-packages/transformers/trainer.py", line 1054, in train
self._maybe_log_save_evaluate(tr_loss, model, trial, epoch)
File "/home/.../anaconda3/envs/transformers-4.3.2/lib/python3.7/site-packages/transformers/trainer.py", line 1135, in _maybe_log_save_evaluate
self._maybe_log_save_evaluate(tr_loss, model, trial, epoch)
File "/home/.../anaconda3/envs/transformers-4.3.2/lib/python3.7/site-packages/transformers/trainer.py", line 1135, in _maybe_log_save_evaluate
self._maybe_log_save_evaluate(tr_loss, model, trial, epoch)
File "/home/.../anaconda3/envs/transformers-4.3.2/lib/python3.7/site-packages/transformers/trainer.py", line 1135, in _maybe_log_save_evaluate
if version.parse(torch.__version__) >= version.parse("1.4")
File "/home/.../anaconda3/envs/transformers-4.3.2/lib/python3.7/site-packages/deepspeed/runtime/lr_schedules.py", line 728, in get_last_lr
if version.parse(torch.__version__) >= version.parse("1.4")
File "/home/.../anaconda3/envs/transformers-4.3.2/lib/python3.7/site-packages/deepspeed/runtime/lr_schedules.py", line 728, in get_last_lr
if version.parse(torch.__version__) >= version.parse("1.4")
File "/home/.../anaconda3/envs/transformers-4.3.2/lib/python3.7/site-packages/deepspeed/runtime/lr_schedules.py", line 728, in get_last_lr
assert getattr(self, '_last_lr', None) is not None, "need to call step() first"
AssertionError: need to call step() first
assert getattr(self, '_last_lr', None) is not None, "need to call step() first"
AssertionError: need to call step() first
assert getattr(self, '_last_lr', None) is not None, "need to call step() first"
AssertionError: need to call step() first
1%|▋ | 10/1095 [00:08<16:13, 1.11it/s]
```
Thanks.<|||||>Great, thank you, I'm able to reproduce this problem. Let me investigate and I will get back to you with a solution. <|||||>I understand the problem.
The optimizer doesn't kick in until a much later step, so `lr_scheduler` doesn't get its first `step()` yet and `._maybe_log_save_evaluate` fails to retrieve `get_last_lr` since there wasn't any yet.
A quick workaround is to add `"initial_scale_power": 1,`, which will force the optimizer to churn from step one.
```
"fp16": {
"enabled": true,
"loss_scale": 0,
"loss_scale_window": 1000,
"initial_scale_power": 1,
"hysteresis": 2,
"min_loss_scale": 1
},
```
but it might not be an optimal solution. https://www.deepspeed.ai/docs/config-json/#fp16-training-options
I will think of how to resolve this correctly, but meanwhile please let me know if that resolves the scheduler issue.
to explain - when you use deepspeed's fp16 it skips the optimizer/scheduler calls until the OVERFLOW is no more. And you'd see the following in the log:
```
OVERFLOW! Rank 0 Skipping step. Attempted loss scale: 4294967296, reducing to 4294967296
OVERFLOW! Rank 0 Skipping step. Attempted loss scale: 4294967296, reducing to 2147483648.0
OVERFLOW! Rank 0 Skipping step. Attempted loss scale: 2147483648.0, reducing to 1073741824.0
```
This is also probably why you see an odd behavior as you reported originally (besides the double step bug I fixed)<|||||>OK, this PR should work too if you would like to try it instead: https://github.com/huggingface/transformers/pull/10362
<|||||>Thank you for your work.
I set `"initial_scale_power": 1`, re-ran the command, and the training finished without errors.

After the PR #10362 is merged into master, I will try it. Thanks.<|||||>FYI, it has been merged.<|||||>Thanks.
I have tested the latest version (without setting `"initial_scale_power": 1`), and the learning rate behavior is as expected!

Thanks for your work. It is very useful to use deepspeed in transformers.<|||||>Thank you for your feedback and supporting this problem fixing process, @tomohideshibata |
transformers | 10,329 | closed | Raise an error instead of a warning when model files are not loaded correctly | # 🚀 Feature request
Currently, when I initialize a model and my pre-trained model files don't fully match my model architecture, the code silently logs the event and warns the user. I think it would be better to have a flag that stops training if the model weights are not loaded as expected.
## Motivation
With the current implementation, a user can train a model from scratch without knowing it if they don't look at the logs carefully (even though they should).
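For now I guard against this manually with something along these lines (relying on `output_loading_info`, which may be stricter than needed since freshly initialized heads also show up as missing keys):
```python
from transformers import AutoModelForSequenceClassification

model, loading_info = AutoModelForSequenceClassification.from_pretrained(
    "path/to/checkpoint",  # placeholder
    output_loading_info=True,
)
if loading_info["missing_keys"] or loading_info["unexpected_keys"]:
    raise RuntimeError(f"Checkpoint did not load cleanly: {loading_info}")
```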
## Your contribution
I am open to work for this issue. If you have any idea about how to implement it, let me know. I can start working on this issue in the coming weeks (not for now). | 02-22-2021 13:26:25 | 02-22-2021 13:26:25 | This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored.<|||||>Today, I have encountered a bug in our codebase due to this issue. The model loading wasn't ok for some reason and the code just logged the warnings. I would like to get an error instead of a warning. I can't monitor thousands of models that are running in production for this particular warning message. Could you help me to work on this issue? |
transformers | 10,328 | closed | DeBERTa-v2 fixes | Applying @BigBird01's fixes. | 02-22-2021 12:33:29 | 02-22-2021 12:33:29 | |
transformers | 10,327 | closed | mBART 50 models not found in model shortcut name list | Transformers version: 4.4.0.dev0
Hello, I'm trying to fine-tune mBART 50 with your seq2seq examples.
Getting this error:
Model name 'facebook/mbart-large-50' not found in model shortcut name list (facebook/mbart-large-en-ro, facebook/mbart-large-cc25).
Traceback (most recent call last):
File "/content/transformers/examples/seq2seq/run_seq2seq.py", line 668, in <module>
main()
File "/content/transformers/examples/seq2seq/run_seq2seq.py", line 349, in main
use_auth_token=True if model_args.use_auth_token else None,
File "/usr/local/lib/python3.6/dist-packages/transformers/models/auto/tokenization_auto.py", line 399, in from_pretrained
return tokenizer_class_fast.from_pretrained(pretrained_model_name_or_path, *inputs, **kwargs)
File "/usr/local/lib/python3.6/dist-packages/transformers/tokenization_utils_base.py", line 1789, in from_pretrained
resolved_vocab_files, pretrained_model_name_or_path, init_configuration, *init_inputs, **kwargs
File "/usr/local/lib/python3.6/dist-packages/transformers/tokenization_utils_base.py", line 1806, in _from_pretrained
**(copy.deepcopy(kwargs)),
File "/usr/local/lib/python3.6/dist-packages/transformers/tokenization_utils_base.py", line 1860, in _from_pretrained
tokenizer = cls(*init_inputs, **init_kwargs)
File "/usr/local/lib/python3.6/dist-packages/transformers/models/mbart/tokenization_mbart.py", line 109, in __init__
self.set_src_lang_special_tokens(kwargs.get("src_lang", "en_XX"))
File "/usr/local/lib/python3.6/dist-packages/transformers/models/mbart/tokenization_mbart.py", line 199, in set_src_lang_special_tokens
self.cur_lang_code = self.lang_code_to_id[src_lang]
KeyError: None
Any ideas on how to fix this?
Thanks | 02-22-2021 11:28:59 | 02-22-2021 11:28:59 | Hi @codingnoobneedshelp , thanks for reporting this issue. Right now MBart50Tokenizer does not work with `AutoTokenizer`.
There will be a new script for translation in the next ~2 weeks that will handle this issue. For now, you could just modify the script to use `MBart50Tokenizer`, instead of `AutoTokenizer`.<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored.<|||||>Hi @codingnoobneedshelp
This is now resolved, the `run_translation.py` script now supports fine-tuning mBART-50. |
transformers | 10,326 | closed | [DeepSpeed] unable to increase batch size from 4 for T5-3b with 2x 32GB V100 GPUs | Hi,
I'm trying T5-3b with DeepSpeed on 2 V100-32GB GPU's. But I'm unable to increase batch size beyond 4 with max i/p sequence length of 512 and max o/p sequence length of 4.
Previously I tried with t5.parallelize() [ i.e, without DeepSpeed ] on same setup and was able to train with batch size of 2.
Below are my training args and DeepSpeed's config -
```
training_args = Seq2SeqTrainingArguments(
output_dir='./seq_out/results',
overwrite_output_dir=True,
evaluation_strategy="epoch",
per_device_train_batch_size=4,
per_device_eval_batch_size=4,
learning_rate=3e-5,
weight_decay=0.01,
num_train_epochs=2,
warmup_steps=500,
logging_dir='./seq_out/logs',
logging_steps=10,
load_best_model_at_end=True,
deepspeed='ds_config.json'
)
```
```
{
"fp16": {
"enabled": true,
"loss_scale": 0,
"loss_scale_window": 1000,
"hysteresis": 2,
"min_loss_scale": 1
},
"zero_optimization": {
"stage": 2,
"allgather_partitions": true,
"allgather_bucket_size": 1.5e8,
"overlap_comm": true,
"reduce_scatter": true,
"reduce_bucket_size": 1.5e8,
"contiguous_gradients": true,
"cpu_offload": true
},
"zero_allow_untested_optimizer": true,
"optimizer": {
"type": "AdamW",
"params": {
"lr": 3e-5,
"betas": [
0.8,
0.999
],
"eps": 1e-8,
"weight_decay": 3e-7
}
},
"scheduler": {
"type": "WarmupLR",
"params": {
"warmup_min_lr": 0,
"warmup_max_lr": 3e-5,
"warmup_num_steps": 500
}
},
"steps_per_print": 2000,
"wall_clock_breakdown": false
}
```
I observed similar behavior with T5-large as well. I was able to train with batch size 14 (same input/output sequence lengths and config.json as above) on a single GPU (32 GB V100), but when running the same on 2x 32 GB GPUs I was not able to go beyond batch size 14 (which I could already reach with 1 GPU), and memory on both GPUs was consumed (31 GB and 28 GB).
Reducing `allgather_bucket_size` and `reduce_bucket_size` didn't help in increasing the batch size.
But I expected a larger batch size with DeepSpeed and CPU offloading. Is this expected, or am I doing something wrong that is hindering DeepSpeed's capability?
| 02-22-2021 10:09:08 | 02-22-2021 10:09:08 | |
transformers | 10,325 | closed | Input mismatch with TFDistilBert training from scratch despite cross-checking input dimensions | ## Environment info
- `transformers` version: 4.3.2
- Platform: Colab
- Python version: 3.6
- PyTorch version (GPU?): None
- Tensorflow version (GPU?): 2.4.1
- Using GPU in script?: Yes
- Using distributed or parallel set-up in script?: No
### Who can help
@jplu
## Information
Model I am using (Bert, XLNet ...): TFDistilBert
The problem arises when using:
* [ ] the official example scripts: (give details below)
* [X] my own modified scripts: (give details below)
The tasks I am working on is:
* [ ] an official GLUE/SQUaD task: (give the name)
* [X] my own task or dataset: (give details below)
## To reproduce
Steps to reproduce the behavior:
```
tokenizer = tokenizers.BertWordPieceTokenizer("/content/drive/Shareddrives/Darshan's Shared Driver/NewTrainingData/Tokenizer/vocab.txt", strip_accents=False)
tokenizer.enable_padding(length=128)
tokenizer.enable_truncation(max_length=128)
def tokenize(sentence):
sentence = sentence.numpy().decode('utf-8')
a = tokenizer.encode(sentence)
return tf.constant(a.ids,tf.int32), tf.constant(a.attention_mask, tf.int32)
def get_tokenized(sentence):
return tf.py_function(tokenize, inp=[sentence], Tout=[tf.int32,tf.int32])
with open("TextFile.txt") as f:
lines = f.readlines()
dataset = tf.data.Dataset.from_tensor_slices(lines)
dataset = dataset.map(get_tokenized, num_parallel_calls=tf.data.AUTOTUNE)
config = DistilBertConfig(vocab_size=30000)
model = TFDistilBertForMaskedLM(config)
inp1 = tf.keras.layers.Input(shape=(128,), dtype=tf.int32, name="input_ids")
inp2 = tf.keras.layers.Input(shape=(128,), dtype=tf.int32, name="attention_mask")
op = model([inp1, inp2])
model = tf.keras.models.Model(inputs=[inp1, inp2], outputs=op)
model.compile(tf.keras.optimizers.Adam(1e-4))
model.fit(dataset.batch(32).prefetch(tf.data.AUTOTUNE), epochs=1)
```
Error:
```
/usr/local/lib/python3.6/dist-packages/tensorflow/python/keras/engine/training.py:805 train_function *
return step_function(self, iterator)
/usr/local/lib/python3.6/dist-packages/tensorflow/python/keras/engine/training.py:795 step_function **
outputs = model.distribute_strategy.run(run_step, args=(data,))
/usr/local/lib/python3.6/dist-packages/tensorflow/python/distribute/distribute_lib.py:1259 run
return self._extended.call_for_each_replica(fn, args=args, kwargs=kwargs)
/usr/local/lib/python3.6/dist-packages/tensorflow/python/distribute/distribute_lib.py:2730 call_for_each_replica
return self._call_for_each_replica(fn, args, kwargs)
/usr/local/lib/python3.6/dist-packages/tensorflow/python/distribute/distribute_lib.py:3417 _call_for_each_replica
return fn(*args, **kwargs)
/usr/local/lib/python3.6/dist-packages/tensorflow/python/keras/engine/training.py:788 run_step **
outputs = model.train_step(data)
/usr/local/lib/python3.6/dist-packages/tensorflow/python/keras/engine/training.py:754 train_step
y_pred = self(x, training=True)
/usr/local/lib/python3.6/dist-packages/tensorflow/python/keras/engine/base_layer.py:998 __call__
input_spec.assert_input_compatibility(self.input_spec, inputs, self.name)
/usr/local/lib/python3.6/dist-packages/tensorflow/python/keras/engine/input_spec.py:207 assert_input_compatibility
' input tensors. Inputs received: ' + str(inputs))
ValueError: Layer model expects 2 input(s), but it received 1 input tensors. Inputs received: [<tf.Tensor 'IteratorGetNext:0' shape=<unknown> dtype=int32>]
```
I have cross-checked the output shape and input dimensions. If this is not the correct way, then how exactly do I train a TF DistilBert model from scratch?
## Expected behavior
Training should start as soon as fit is called
| 02-22-2021 09:37:21 | 02-22-2021 09:37:21 | Hello!
First, I can see several issues with the way you want to train the model:
1. The way you build your dataset is not correct. More precisely, in the `tokenize` function, the first element of the tuple (`a.ids`) is taken as the input, and the second (`a.attention_mask`) is taken as the label. Hence the error you get.
2. When you instantiate your `tf.keras.models.Model` you define the `inputs` and the `outputs` to be the same, this is not correct either, you have to run the model once and then give this output.<|||||>@jplu I realized my mistake and I changed the code to this
```
def tokenize(sentence):
sentence = sentence.numpy().decode('utf-8')
a = tokenizer.encode(sentence)
return tf.constant(a.ids,tf.int32), tf.constant(a.attention_mask, tf.int32)
def get_tokenized(sentence):
return tf.py_function(tokenize, inp=[sentence], Tout=[tf.int32,tf.int32])
def get_tokenized_final(a,b):
return (a,b), None
dataset = tf.data.Dataset.from_tensor_slices(lines)
dataset = dataset.map(get_tokenized, num_parallel_calls=tf.data.AUTOTUNE).map(get_tokenized_final, num_parallel_calls=tf.data.AUTOTUNE)
import tensorflow as tf
config = DistilBertConfig(vocab_size=30000)
model = TFDistilBertForMaskedLM(config)
inp1 = tf.keras.layers.Input(shape=(128,), dtype=tf.int32, name="input_ids")
inp2 = tf.keras.layers.Input(shape=(128,), dtype=tf.int32, name="attention_mask")
op = model([inp1,inp2])
model = tf.keras.models.Model(inputs=[inp1, inp2], outputs=model.output)
```
Now the model throws two warnings
```
WARNING:tensorflow:The parameters `output_attentions`, `output_hidden_states` and `use_cache` cannot be updated when calling a model.They have to be set to True/False in the config object (i.e.: `config=XConfig.from_pretrained('name', output_attentions=True)`).
WARNING:tensorflow:The parameter `return_dict` cannot be set in graph mode and will always be set to `True`.
```
and then throws the final error
```
ValueError: in user code:
/usr/local/lib/python3.6/dist-packages/tensorflow/python/keras/engine/training.py:805 train_function *
return step_function(self, iterator)
/usr/local/lib/python3.6/dist-packages/tensorflow/python/keras/engine/training.py:795 step_function **
outputs = model.distribute_strategy.run(run_step, args=(data,))
/usr/local/lib/python3.6/dist-packages/tensorflow/python/distribute/distribute_lib.py:1259 run
return self._extended.call_for_each_replica(fn, args=args, kwargs=kwargs)
/usr/local/lib/python3.6/dist-packages/tensorflow/python/distribute/distribute_lib.py:2730 call_for_each_replica
return self._call_for_each_replica(fn, args, kwargs)
/usr/local/lib/python3.6/dist-packages/tensorflow/python/distribute/distribute_lib.py:3417 _call_for_each_replica
return fn(*args, **kwargs)
/usr/local/lib/python3.6/dist-packages/tensorflow/python/keras/engine/training.py:788 run_step **
outputs = model.train_step(data)
/usr/local/lib/python3.6/dist-packages/tensorflow/python/keras/engine/training.py:757 train_step
self.optimizer.minimize(loss, self.trainable_variables, tape=tape)
/usr/local/lib/python3.6/dist-packages/tensorflow/python/keras/optimizer_v2/optimizer_v2.py:498 minimize
return self.apply_gradients(grads_and_vars, name=name)
/usr/local/lib/python3.6/dist-packages/tensorflow/python/keras/optimizer_v2/optimizer_v2.py:598 apply_gradients
grads_and_vars = optimizer_utils.filter_empty_gradients(grads_and_vars)
/usr/local/lib/python3.6/dist-packages/tensorflow/python/keras/optimizer_v2/utils.py:79 filter_empty_gradients
([v.name for _, v in grads_and_vars],))
ValueError: No gradients provided for any variable: ['tf_distil_bert_for_masked_lm_1/distilbert/embeddings/word_embeddings/weight:0', 'tf_distil_bert_for_masked_lm_1/distilbert/embeddings/position_embeddings/embeddings:0', 'tf_distil_bert_for_masked_lm_1/distilbert/embeddings/LayerNorm/gamma:0', 'tf_distil_bert_for_masked_lm_1/distilbert/embeddings/LayerNorm/beta:0', 'tf_distil_bert_for_masked_lm_1/distilbert/transformer/layer_._0/attention/q_lin/kernel:0', 'tf_distil_bert_for_masked_lm_1/distilbert/transformer/layer_._0/attention/q_lin/bias:0', 'tf_distil_bert_for_masked_lm_1/distilbert/transformer/layer_._0/attention/k_lin/kernel:0', 'tf_distil_bert_for_masked_lm_1/distilbert/transformer/layer_._0/attention/k_lin/bias:0', 'tf_distil_bert_for_masked_lm_1/distilbert/transformer/layer_._0/attention/v_lin/kernel:0', 'tf_distil_bert_for_masked_lm_1/distilbert/transformer/layer_._0/attention/v_lin/bias:0', 'tf_distil_bert_for_masked_lm_1/distilbert/transformer/layer_._0/attention/out_lin/kernel:0', 'tf_distil_bert_for_masked_lm_1/distilbert/transformer/layer_._0/attention/out_lin/bias:0', 'tf_distil_bert_for_masked_lm_1/distilbert/transformer/layer_._0/sa_layer_norm/gamma:0', 'tf_distil_bert_for_masked_lm_1/distilbert/transformer/layer_._0/sa_layer_norm/beta:0', 'tf_distil_bert_for_masked_lm_1/distilbert/transformer/layer_._0/ffn/lin1/kernel:0', 'tf_distil_bert_for_masked_lm_1/distilbert/transformer/layer_._0/ffn/lin1/bias:0', 'tf_distil_bert_for_masked_lm_1/distilbert/tra...
```
Any idea what I am doing wrong?<|||||>You cannot do `model.output`, as said in my previous message you have to run the model once to get how the output looks like :)<|||||>@jplu Could you tell me exactly what you mean by "run" the model? If I pass a sample array with all ones, it gives me a Broadcasting error as follows
```
config = DistilBertConfig(vocab_size=30000)
model = TFDistilBertForMaskedLM(config)
inp1 = tf.keras.layers.Input(shape=(128,), dtype=tf.int32, name="input_ids")
inp2 = tf.keras.layers.Input(shape=(128,), dtype=tf.int32, name="attention_mask")
_ = model([inp1,inp2])
# Error is thrown for this call
a = tf.ones((128,),dtype=tf.int32)
model((a,a))
```
Error is as attached
```
InvalidArgumentError: Incompatible shapes: [512,768] vs. [128,768] [Op:BroadcastTo]
```
More specifically, the error is raised in `modeling_tf_distilbert.py`:
```
183 if position_ids is None:
--> 184 position_embeds = self.position_embeddings(position_ids=inputs_embeds)
185 else:
186 position_embeds = self.position_embeddings(position_ids=position_ids)
```
-----------------------------------------------------------------------------------------
If by "run" you mean calling fit on the model then it raises the same gradient error
<|||||>Here a dummy example:
```python
import tensorflow as tf
from transformers import TFDistilBertForMaskedLM, DistilBertTokenizer, DistilBertConfig
config = DistilBertConfig(vocab_size=30000)
model = TFDistilBertForMaskedLM(config)
inp1 = tf.keras.layers.Input(shape=(128,), dtype=tf.int32, name="input_ids")
inp2 = tf.keras.layers.Input(shape=(128,), dtype=tf.int32, name="attention_mask")
output = model([inp1,inp2])
model = tf.keras.models.Model(inputs=[inp1,inp2], outputs=[output])
tokenizer = DistilBertTokenizer.from_pretrained("distilbert-base-uncased")
data = tokenizer(["Hello1", "Hello2", "Hello3"], truncation=True, max_length=128, padding="max_length", return_tensors="tf")
labels = tf.ones((3, 128), dtype=tf.int32)
X = tf.data.Dataset.from_tensor_slices((dict(data), labels)).batch(1)
loss = tf.keras.losses.SparseCategoricalCrossentropy(from_logits=True)
model.compile(loss=loss, optimizer="adam")
model.fit(X, epochs=1)
```<|||||>@jplu Thanks for this but this tokenizes the data and then loads it as a tf.data.Dataset. I was looking for an implementation where the tokenization can be integrated in the pipeline itself and can be done on the fly. I found [this](https://github.com/tensorflow/tensorflow/issues/38762) issue on tensorflow but there are no fixes for it yet. Do you have any idea how to do this because my dataset is big enough to fit in colab memory but cannot be fully tokenized in memory?<|||||>Sorry you cannot do this.<|||||>Okay. Thanks for all the help! |
transformers | 10,324 | closed | [PretrainedFeatureExtractor] + Wav2Vec2FeatureExtractor, Wav2Vec2Processor, Wav2Vec2Tokenizer | # 🚨🚨🚨**IMPORTANT** Wav2Vec2 repositories that were added before 4.4 should make sure to manually add a feature extractor class.
This can be done as easily as doing:
```
git clone <your/repo/>
cd <your/repo/>
```
```python
from transformers import Wav2Vec2FeatureExtractor
feat_extract = Wav2Vec2FeatureExtractor() # or feat_extract = Wav2Vec2FeatureExtractor(return_attention_mask=True) for lv60 models
feat_extract.save_pretrained("./")
```
```
git add . && git commit -m "add feature processor file" && git push
```
# What does this PR do?
This is a new design for how to handle the feature extraction + tokenization functionality for speech models in a single class.
Speech models connect the two different formats `speech` and `text`. In order to have more flexibility when extending Transformers to speech tasks, such as ASR, I propose a composite `Processor` class that has both a `tokenizer` and a `feature_extractor` attribute, similar to how composite tokenizer are currently handled for models, such as RAG, [see](https://github.com/huggingface/transformers/blob/88605f37a6fe7bde336f52700229d619b5ffa0f6/src/transformers/models/rag/tokenization_rag.py#L28).
For ASR models the output of the model is text so that a `tokenizer` is required and the input is a sequence of `feature_vectors` (which includes raw waveform features) so that a `feature_extractor` is required.
The tokenizer is hereby of the exact same format as our current tokenizer implementations (*e.g.* Speech2TextTransformer models train their tokenizers the same way NLP models train their tokenizers, see section 4.1 [here](https://arxiv.org/pdf/2007.10310.pdf)). Feature processors on the other hand are of a completely new format and therefore deserve a `PreTrainedFeatureExtractor` class that mostly handles the loading & saving for all feature extractors and in addition, provides padding functionality. Since feature extractors are deterministic by nature (feature extractors are not trained, as tokenizers can be), we only need a single `feature_extractor_config.json` file to load and save the class IMO.
To meet the demands of a single model processing class that can handle both the text and speech modality while being flexible enough for different kinds of speech models, I propose to add a composite `SpeechProcessor` class for each speech model that has both a `tokenizer` and `feature_extractor` attribute and in short, would look as follows for Wav2Vec2:
```python
Wav2Vec2Processor:
def __init__(feature_extractor: Wav2Vec2FeatureExtractor, tokenizer: Wav2Vec2CTCTokenizer):
self.feature_extractor = feature_extractor
self.tokenizer = tokenizer
Wav2Vec2CTCTokenizer(PreTrainedTokenizer):
...
Wav2Vec2FeatureExtractor(PreTrainedFeatureExtractor):
...
```
So this means we leverage all the existing functionalities of the tokenizers for the tokenizer part of the speech models and
create a new `PreTrainedFeatureExtractor` to handle general feature extraction functionality. The composite `Wav2Vec2Processor` is then in style very similar to `RagTokenizer` and would provide the following functionality to the user:
```python
from transformers import Wav2Vec2SpeechProcessor, Wav2Vec2ForCTC
model = Wav2Vec2ForCTC.from_pretrained("facebook/wav2vec2-base-960h")
processor = Wav2Vec2SpeechProcessor.from_pretrained("facebook/wav2vec2-base-960h")
inputs = processor(raw_waveform, return_tensors="pt", padding="longest")
logits = model(**inputs)
predicted_ids = torch.argmax(logits, dim=-1)
pred_transcription = model.batch_decode(predicted_ids)
# Also the processor can then later be used to encode & decode labels, *e.g.*
with processor.as_tokenizer():
label_ids = processor(label_str)
```
A couple of advantages of the following design:
- It makes sense logically. When we add multi-modal models, it is quite natural for me to add compotise `...Processor` classes to the library as well
- It is general enough to handle a bunch of different use cases. E.g. `Speech2TextTransformers` will have more or less the same feature extractor for the different tasks it was trained on, but will have different tokenizers depending on whether the model was trained on Librispeech/Must-C or Covost (cc @patil-suraj). The current design can handle this very nicely by simply changing the tokenizer
- We only need to create a `PretrainedFeatureExtractor` class, all the Speech model's tokenization functionality is handled by the already existing `PreTrainedTokenizer` class.
- It's general enough to handle all speech models IMO
## Backwards breaking compatibility
`Wav2Vec2Tokenzier` is deprecated and is replaced by a better `Wav2Vec2CTCTokenizer` class that actually can inherit the full tokenizer test suit. `Wav2Vec2Tokenizer` can still be used by is not be found in the docs anymore. It was made sure that the tokenizer configs stay the same for bcp so that I only had to add files for the `Wav2Vec2FeatureProcessor` (see: https://huggingface.co/facebook/wav2vec2-base-960h/commit/dbdb8c54a01c6b0ca8ec79f811970214fb72cecc).
Essentially, one is advised to replace `Wav2Vec2Tokenizer` with `Wav2Vec2Processor` in all scripts from now on; the API of `Wav2Vec2Processor` is identical to the API of the old `Wav2Vec2Tokenizer`.
**The only big breaking change is that the AutoTokenizer now loads `Wav2Vec2CTCTokenizer` instead of `Wav2Vec2Tokenizer`**
## Review
@LysandreJik, @patil-suraj, @sgugger, @thomwolf - this PR is now ready for a complete review.
@patil-suraj, it would be very nice, if you could do a very thorough review and make sure that this design is 100% compatible with the `Speech2TextTransformersProcessor` that we'll add soon. | 02-22-2021 08:19:10 | 02-22-2021 08:19:10 | > This approach looks great and doesn't seem limiting at all. Implementing it for Wav2Vec2/SpeechToTextTransformer and refactoring/upstreaming methods down the road seems like a good implementation roadmap.
>
> Regarding the implementation of `FeatureProcessors`, what do you have in mind regarding understandability/explicitness? Do you expect something like models, where we aim for maximum accessibility, with copy/pastes and single-file containers, or do you expect something like tokenizers, where some tokenizers inherit from others while modifying certain aspects, and some level of abstraction, making them harder to decypher?
>
> I'm asking because I think it's relevant to the different preprocessing that can be handled by the feature processors. For example, normalizing or converting to MFCCs seems like it would be something quite widespread among speech-based feature processors, do we want to have that in each implementation (abstraction-free) or will the goal be to upstream these methods in the parent class once we identify similarities among feature processors?
Yeah good question! To be honest, I'm not really sure yet. I would like to enforce the rule that feature extractors can only inherit from `PreTrainedFeatureExtractor` and no other feature extractor. IMO, the best approach to begin with is to limit (as you've suggested) the user-facing API for `FeatureProcessor` to `__call__`, `from_pretrained()`, `save_pretrained()` and maybe something like `from_file()` and then in the beginning.
I think a method like `pad()` is general enough to have this method in the beginning be implemented in `PreTrainedFeatureExtractor` because every extractor will need to do padding no matter what.
For pretty much all other methods (actually including `normalization()`), I would copy-paste them into each feature processor and make sure that they are private methods `_normalize()` so that we can later still do some refactoring here if needed.
So in general my strategy would be to have as little abstraction as possible - *e.g.* copy-paste classes such as those: https://github.com/huggingface/transformers/blob/19c14579f0c7f5f15c5a5115b2fd18582e61ac3b/src/transformers/models/speech_to_text_transformer/tokenization_speech_to_text_transformer.py#L239 to each feature extractor - and then when having more models maybe move some things upstream into the `PretrainedFeatureExtractor` file <|||||>Thanks for explaining, sounds like a good approach to me! Thanks for drafting the proposal. |
transformers | 10,321 | open | [Tensor Parallelism] Megatron-LM to transformers | # 🚀 Feature request
Splitting the discussion that started here: https://github.com/huggingface/transformers/pull/10301#issuecomment-782917393 to add the potential future feature of transformers and it's Tensor Parallelism (Horizontal Model Parallelism) - for bigger context please see [Parallelism notes](https://github.com/huggingface/transformers/issues/9766).
Let's start with important clarification: MP can mean many different things
1. Vertical MP - slice the layers vertically - one or more full layers placed on each gpu = Vertical MP - in which case VertMP is a simple version of PP with chunks=1
2. Horizontal MP - slice the layers horizontally - place a slice of a full model on each gpu - Example Megatron-LM
At the moment I think it's only Megatron-LM that implements Horizontal MP. @anton-l has ported that model to `transformers`, except for the Horizontal MP parts, since `transformers` doesn't yet have support for it. There is already naive Vertical MP in t5 and gpt2 thanks to @alexorona's work, I ported Bart too but it's unmerged, and there is an ongoing effort to figure out how to implement the Pipeline. All of these will have to co-operate with each other and also share common tools.
@anton-l [started sharing](https://github.com/huggingface/transformers/pull/10301#issuecomment-782917393) what needs to be done to make that important feature available - and then down the road potentially make it available to other (all?) `transformers` models.
@anton-l, the floor is yours. | 02-21-2021 21:57:17 | 02-21-2021 21:57:17 | @stas00 thanks for starting this thread!
I guess, in order for everyone to be on the same page, a brief explanation of horizontal parallelism is needed. This would be a good place for future reference and introduce other contributors to the core concepts.
**NOTE for everyone reading:** If you find any of the explanations below confusing, you can read about Megatron-LM in much more detail in its original paper: https://arxiv.org/pdf/1909.08053.pdf
## The core idea
The main thing that separates Megatron-style (horizontal) parallelism from vertical parallelism is the way that it splits the model layers between GPUs without the need for idle time during training/inference (i.e. waiting while the previous GPUs complete their work on the previous layers of the model). This makes the whole process much more asynchronous, just like in MapReduce. Here's my rough sketch of how it looks:

Now the question is, how do we split the computation of those layers so that the parallelized model weights would be equivalent to the CPU ones?
## Parallelized layers
Let's start with a simple building block of any transformer: a fully connected layer (nn.Linear) followed by a nonlinear activation (GeLU). Following the Megatron's paper notation, we can write the dot-product part of it as `Y = GeLU(XA)`, where `X` and `Y` are the input and output vectors, and `A` is the weight matrix.
If we look at the computation in matrix form, it's easy to see how the matrix multiplication can be split between multiple GPUs:

Basically, if we split the weight matrix `A` column-wise across `N` GPUs and perform matrix multiplications `XA_1` through `XA_n` in parallel, then we will end up with `N` output vectors `Y_1, Y_2, ..., Y_n` which can be fed into GeLU independently:

Using this principle, we can update an MLP of arbitrary depth, without the need for any synchronization between GPUs until the very end, where we need to reconstruct the output vector from shards. The authors provide a helpful illustration for that:

### Quick note on self-attention
Parallelizing the multiheaded attention layers is even simpler, since they are already inherently parallel, due to having multiple independent heads!

## Practical implementation
If you want to just dive right in, here are the basic building blocks implemented in Megatron-LM:
- [ColumnParallelLinear](https://github.com/NVIDIA/Megatron-LM/blob/main/megatron/mpu/layers.py#L195)
- [RowParallelLinear](https://github.com/NVIDIA/Megatron-LM/blob/main/megatron/mpu/layers.py#L290)
- [ParallelMLP](https://github.com/NVIDIA/Megatron-LM/blob/main/megatron/model/transformer.py#L58)
- [ParallelSelfAttention](https://github.com/NVIDIA/Megatron-LM/blob/main/megatron/model/transformer.py#L112)
All of these rely on basic `Scatter`, `Gather` and `Reduce` ops to split and aggregate the weight matrices. Thanks to [PyTorch Distributed](https://pytorch.org/tutorials/intermediate/dist_tuto.html), we can use `torch.distributed.all_reduce` and `all_gather` for that, without having to worry about GPU synchronization. The scatter and gather layers just have to define appropriate forward and backward passes like so:
```python
def _split(input_):
world_size = get_tensor_model_parallel_world_size()
input_list = split_tensor_along_last_dim(input_, world_size)
rank = get_tensor_model_parallel_rank()
output = input_list[rank].contiguous()
return output
def _gather(input_):
world_size = get_tensor_model_parallel_world_size()
last_dim = input_.dim() - 1
rank = get_tensor_model_parallel_rank()
tensor_list = [torch.empty_like(input_) for _ in range(world_size)]
tensor_list[rank] = input_
torch.distributed.all_gather(tensor_list, input_, group=get_tensor_model_parallel_group())
output = torch.cat(tensor_list, dim=last_dim).contiguous()
return output
class ScatterToModelParallelRegion(torch.autograd.Function):
    # forward: keep only this rank's shard of the input; backward: gather the gradient shards
    @staticmethod
    def forward(ctx, input_):
        return _split(input_)

    @staticmethod
    def backward(ctx, grad_output):
        return _gather(grad_output)

class GatherFromModelParallelRegion(torch.autograd.Function):
    # forward: gather shards from all ranks; backward: split the gradient back per rank
    @staticmethod
    def forward(ctx, input_):
        return _gather(input_)

    @staticmethod
    def backward(ctx, grad_output):
        return _split(grad_output)
```
In a single transformer layer, there are 4 communication operations in total, for the forward and backward passes:

## Other things to consider
#### Parallelized embeddings and output logits
Since the weights of input and output embeddings of BERT/GPT2 are tied, they require a coordinated modification. In the original implementation, the input embedding matrix is parallelized along the vocabulary dimension (column-wise), and the output embeddings' matrix multiplications is parallelized _together with the cross-entropy loss_ to reduce the communication size (see end of section 3 in the paper):
- [VocabParallelEmbedding](https://github.com/NVIDIA/Megatron-LM/blob/main/megatron/mpu/layers.py#L123)
- [parallel_lm_logits](https://github.com/NVIDIA/Megatron-LM/blob/main/megatron/model/language_model.py#L28)
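To make the vocab-parallel embedding idea concrete, here is a single-process sketch (the loop over `rank` stands in for what each GPU would compute; in Megatron-LM the final sum is a `torch.distributed.all_reduce` over the tensor-parallel group):

```python
import torch

torch.manual_seed(0)
vocab_size, hidden_size, n_ranks = 16, 8, 4
weight = torch.randn(vocab_size, hidden_size)      # full embedding table
input_ids = torch.randint(0, vocab_size, (2, 5))

rows_per_rank = vocab_size // n_ranks
partial_outputs = []
for rank in range(n_ranks):                        # stands in for the per-GPU computation
    start, end = rank * rows_per_rank, (rank + 1) * rows_per_rank
    shard = weight[start:end]                      # only this rank's slice of the vocabulary
    out_of_range = (input_ids < start) | (input_ids >= end)
    local_ids = (input_ids - start).clamp(0, rows_per_rank - 1)
    out = shard[local_ids]
    out[out_of_range] = 0.0                        # tokens owned by other ranks contribute zeros
    partial_outputs.append(out)

# In Megatron-LM this sum is done with torch.distributed.all_reduce
combined = torch.stack(partial_outputs).sum(dim=0)
print(torch.allclose(combined, weight[input_ids]))  # True
```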
#### Model parallelism-aware Dropout
Transformers have dropout layers outside the model parallel regions before residual connections and within model parallel regions in the self attention block. Because some dropout layers are in a model parallel region, while others are not, we need to treat random number generation carefully to ensure dropout works correctly. See appendix B.2 in the paper for reference.
The necessary RNG state tracking is implemented in [random.py](https://github.com/NVIDIA/Megatron-LM/blob/main/megatron/mpu/random.py)
#### Hybrid model and data parallelism
Combining horizontal parallelism with data parallelism requires grouping the GPUs in a specific way, as described in appendix B.1:

<|||||>Phew! That felt like a start of a whole blog post :smile:
As for porting all of this, I would follow [fairseq's example](https://github.com/pytorch/fairseq/blob/master/fairseq/model_parallel/models/transformer.py) and copy Megatron-LM's parallel layers verbatim into an existing (but separate) implementation of `BertModel` or `GPT2Model` as a proof-of-concept and then work from there.
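For reference, a deliberately simplified sketch of what such a ported layer could look like (the real Megatron version also handles weight initialization, gradient flow through the gather via the autograd functions above, and skipping the gather when the next layer is row-parallel):

```python
import torch
import torch.nn as nn
import torch.distributed as dist

class ColumnParallelLinear(nn.Module):
    """Each rank owns a slice of the output columns; a gather reassembles the full output."""

    def __init__(self, in_features: int, out_features: int, world_size: int):
        super().__init__()
        assert out_features % world_size == 0
        self.out_per_rank = out_features // world_size
        self.weight = nn.Parameter(torch.empty(self.out_per_rank, in_features))
        self.bias = nn.Parameter(torch.zeros(self.out_per_rank))
        nn.init.normal_(self.weight, std=0.02)

    def forward(self, x, gather_output: bool = True):
        y_local = nn.functional.linear(x, self.weight, self.bias)
        if not gather_output or not dist.is_initialized():
            return y_local  # e.g. when the next layer is row-parallel, keep the shard
        shards = [torch.empty_like(y_local) for _ in range(dist.get_world_size())]
        dist.all_gather(shards, y_local)
        return torch.cat(shards, dim=-1)
```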
After the first semi-working prototype we could figure out how to implement the switching mechanism between a homogeneous model and a parallelized one, but it's too early to think about that, IMO. What do you think, @stas00 ?<|||||>Amazing! Thank you for this awesome presentation, @anton-l! This could totally be a great blog post - I agree!
Let me study the information you shared and I will follow up then!
Until then I have a quick suggestion: Do you have an easy access to 2 gpus? That would be enough to make a
PoC work and then we can find a larger cluster with more gpus to experiment on and eventually port the 8 splits from fairseq.
I suppose it'd be easier to implement this for Megatron-LM, but the main use would be t5 and gpt2 where we have most huge models at the moment. So we could start there as well. If it works for you. Which also can be worked on independently of your Megatron-LM PR.<|||||>Regarding the setup: I can borrow a second gpu for the time being, that shouldn't be a problem :)
As for the models, I think GPT2 is a good candidate for our experiments, since the transformers' implementation is already stable and has multiple smaller checkpoints for quick demos.
Also, I don't think we should even be too concerned about porting the 8 original splits of fairseq's megatron, since I've already concatenated them for the model's PR. If everything was done correctly, this potentially allows us to create an arbitrary split across 2^n devices, not just 8.<|||||>Sounds good on all accounts. GPT2 would be perfect, @anton-l!
I had the same thought about just splitting your merged model if needed.
Please let us know how we can support you in this endeavor.
just for you to be aware, I mentioned in the other thread the DeepSpeed version of their Megatron-LM port - perhaps theirs is newer - I haven't had a chance to study it yet. https://github.com/jeffra/DSE/tree/master/megatron-lm . You can diff the different versions against the baseline - that is I assume it has been changed - perhaps it hasn't. If you want to have a look, if not, it is good too. It will be good to start anywhere.
<|||||>@anton-l Thanks for the great work on this, its really nice to be able to load the pretrained model so thanks for that too! Did you have any progress on fine-tuning across multiple GPUs? Would love to see if the results get any better with some fine-tuning...<|||||>@anton-l, let's do it if you have resources and interest? Let me know how I can be of help.
Now having used Megatron-LM in [big science experiments](https://github.com/bigscience-workshop/bigscience/blob/master/experiments/gpt2.md) it's time to port it to transformers.<|||||>@stas00 @anton-l Just curious, is Megatron-LM now ported to transformers? Or the proof of concept mentioned in:
> As for porting all of this, I would follow [fairseq's example](https://github.com/pytorch/fairseq/blob/master/fairseq/model_parallel/models/transformer.py) and copy Megatron-LM's parallel layers verbatim into an existing (but separate) implementation of `BertModel` or `GPT2Model` as a proof-of-concept and then work from there.
I would love to work on this issue, if there is anything I could do!
|
transformers | 10,320 | closed | BERT for speech | How can I use HF's BERT models for speech-to-text training? | 02-21-2021 17:21:16 | 02-21-2021 17:21:16 | Hello, thanks for opening an issue! We try to keep the github issues for bugs/feature requests.
Could you ask your question on the [forum](https://discuss.huggingface.co) instead?
Also, [here's the doc](https://huggingface.co/transformers/model_doc/wav2vec2.html#transformers.Wav2Vec2ForCTC) for `Wav2Vec2ForCTC` which seems to be the model you're interested in.
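For a quick start, inference with a pretrained checkpoint looks roughly like this (sketch only: the audio below is a silent placeholder, so pass your own 1-D 16 kHz waveform; fine-tuning for speech-to-text would build on the same `Wav2Vec2ForCTC` class):

```python
import torch
from transformers import Wav2Vec2ForCTC, Wav2Vec2Tokenizer

tokenizer = Wav2Vec2Tokenizer.from_pretrained("facebook/wav2vec2-base-960h")
model = Wav2Vec2ForCTC.from_pretrained("facebook/wav2vec2-base-960h")

speech = torch.zeros(16000).numpy()  # placeholder: one second of silence; use a real 16 kHz waveform
inputs = tokenizer(speech, return_tensors="pt", padding="longest")

with torch.no_grad():
    logits = model(inputs.input_values).logits
predicted_ids = torch.argmax(logits, dim=-1)
print(tokenizer.batch_decode(predicted_ids))
```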
Thanks! |
transformers | 10,319 | closed | [Question] Add a new token to tokenizer and bart model | Hi,
I have extended the word embedding of a tokenizer and a BART model through `tokenizer.add_token()` and `model.resize_token_embeddings(len(tokenizer))`. Because the ground truth consists of a newly added token, the dimension of the decoder output should be extended as well.
But I can't figure out how to extend the model. Can anyone give me some help?
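For reference, the kind of setup in question looks roughly like this (model name and the new token are placeholders):

```python
from transformers import BartForConditionalGeneration, BartTokenizer

tokenizer = BartTokenizer.from_pretrained("facebook/bart-base")
model = BartForConditionalGeneration.from_pretrained("facebook/bart-base")

num_added = tokenizer.add_tokens(["<new_token>"])  # extends the vocabulary
model.resize_token_embeddings(len(tokenizer))      # grows the (tied) embedding matrix
```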
Thanks in advance ! 💯 | 02-21-2021 17:11:05 | 02-21-2021 17:11:05 | Hello! You should use the [resize_token_embeddings](https://huggingface.co/transformers/main_classes/model.html?highlight=resize_token_embeddings#transformers.PreTrainedModel.resize_token_embeddings) method for that. Will add that to the documentation. |
transformers | 10,318 | closed | Guidance for continued pre-training of BART with de-noising. | # 🚀 Feature request
An example of continued pre-training of BART with de-noising.
## Motivation
I'm using the run causal LM [script](https://github.com/huggingface/transformers/blob/master/examples/language-modeling/run_clm.py), but it seems on line 340 it's simply copying input to output (learning the identity function).
1. Would line 340 be the best place to add the de-noising or should I introduce it as part of the collator?
2. Is there any code which implements de-noising in HuggingFace? BART defines 4-5 main operations which should be easy to reproduce - I just don't want to introduce new code if well-tested code already exists.
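Regarding (1) above, the collator-level version I have in mind would look roughly like the following. This is a sketch of just the text-infilling operation, with Poisson span lengths only approximated; as far as I know this is not an existing, well-tested HuggingFace implementation, which is exactly what I'd like to confirm:

```python
import random
import torch

def infill_spans(token_ids, mask_token_id, mask_ratio=0.3, mean_span=3):
    """Replace random spans of tokens with a single mask token (BART-style text infilling)."""
    ids = list(token_ids)
    num_to_mask = max(1, int(len(ids) * mask_ratio))
    masked = 0
    while masked < num_to_mask and len(ids) > 1:
        span = max(1, int(random.expovariate(1.0 / mean_span)))  # rough stand-in for Poisson(3)
        start = random.randrange(len(ids))
        span = min(span, len(ids) - start)
        ids[start:start + span] = [mask_token_id]  # the whole span collapses to one mask token
        masked += span
    return ids

def denoising_collator(batch_token_ids, pad_token_id, mask_token_id):
    corrupted = [infill_spans(seq, mask_token_id) for seq in batch_token_ids]
    src_len = max(len(s) for s in corrupted)
    tgt_len = max(len(s) for s in batch_token_ids)
    input_ids = torch.tensor([s + [pad_token_id] * (src_len - len(s)) for s in corrupted])
    labels = torch.tensor([list(s) + [-100] * (tgt_len - len(s)) for s in batch_token_ids])
    return {"input_ids": input_ids, "labels": labels}
```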
Thanks!!!
| 02-21-2021 16:15:35 | 02-21-2021 16:15:35 | also it is my questions, thanks <|||||>denoising function is a part of T5 pretraining as well, is there a denoising function implementation in Huggingface repo ? Any advice is appreciated. thanks <|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored. |
transformers | 10,317 | closed | ForTokenClassification head on BART | # 🚀 Feature request
Hello guys! I'm trying to reproduce the token classification experiments from the [BART paper](https://arxiv.org/abs/1910.13461) using HF Transformers and found that a token classification head is missing from the current HF BART implementation.
The current BART implementation only has the "BartForConditionalGeneration" and "BartForSequenceClassification". Are there any plans to add a "BartForTokenClassification" head too?
| 02-21-2021 14:30:22 | 02-21-2021 14:30:22 | It would be great to have a `BartForTokenClassification`. Does it use the same head as `BertForTokenClassification`, etc.?
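For reference, a minimal sketch of what such a head could look like. The paper uses the top decoder hidden states as per-token representations, so something along the lines of the other `...ForTokenClassification` heads seems plausible (this is an assumption about the eventual design, not the final implementation):

```python
import torch.nn as nn
from transformers import BartConfig, BartModel

class BartForTokenClassificationSketch(nn.Module):
    """Hypothetical head: classify each token from the decoder's top hidden states."""

    def __init__(self, config: BartConfig, num_labels: int):
        super().__init__()
        self.model = BartModel(config)
        self.dropout = nn.Dropout(config.dropout)
        self.classifier = nn.Linear(config.d_model, num_labels)

    def forward(self, input_ids, attention_mask=None):
        outputs = self.model(input_ids, attention_mask=attention_mask)
        hidden = outputs[0]  # last decoder hidden state, shape (batch, seq_len, d_model)
        return self.classifier(self.dropout(hidden))
```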
Feel free to open a PR :) <|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored. |
transformers | 10,316 | closed | fix typo in conversion script | # What does this PR do?
Fix typo in `convert_fsmt_original_pytorch_checkpoint_to_pytorch.py`
## Before submitting
- [x] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors which may be interested in your PR.
@stas00
| 02-21-2021 13:15:07 | 02-21-2021 13:15:07 | Wonderful! Thank you for this fix, @tagucci!
(I tweaked your PR to run `make style` to appease to auto-formatters to have CI pass) |
transformers | 10,315 | closed | Huggingface mt5 does not reach the performance of original mt5 on paws-x | ## Environment info
- `transformers` version: 4.3.2
- Platform: linux
- Python version: 3.7
- PyTorch version (GPU?): 1.7
- Tensorflow version (GPU?): -
- Using GPU in script?: yes
- Using distributed or parallel set-up in script?: yes
### Who can help
t5: @patrickvonplaten, @patil-suraj
## Information
Hi
I ran the model of mT5-small on paws-x in zero-shot cross-lingual setup where we tune on english and evaluate on all languages in paws-x dataset and obtain only 80.2 while the reported original mt5-small on this datasets is 82.4. (see table 2 in mt5 paper) I used the setup in mt5 paper, is there any missing details from original mt5 work in the huggingface implementation? thanks
## Expected behavior
reaching the performance of original model. | 02-21-2021 11:59:18 | 02-21-2021 11:59:18 | Hey @dorost1234,
could you post your question on the [forum](https://discuss.huggingface.co/) and see whether you can get help from the community there? We try to keep GitHub issues for bug reports mostly.
It would also be very important that you attach a notebook or something that allows people to understand what you have done and what you might have missed...<|||||>Hi Patrick
Thanks, sure. I posted here because I mainly wanted to ask whether you have, in the past, compared the performance of the original mT5 with the HuggingFace model side by side in one setting.
This is unfortunately a lot of code for me to share, since I build on top of my own codebase, and it is not easy to make a small example showing these differences.
Overall, knowing whether such a comparison has been done in the past would be great.
Thanks.
> .
>
<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored. |
transformers | 10,314 | closed | ConvBERT fix torch <> tf weights conversion | (from @patrickvonplaten):
This PR corrects the shape of the grouped linear layer weight so that the general conversion function does not have to be changed.
All models are tested to work correctly as follows:
```python
from transformers import ConvBertModel, TFConvBertModel
import tensorflow as tf
import torch
input_ids = [[1, 2, 3, 4, 5]]
tf_input_ids = tf.convert_to_tensor(input_ids)
pt_input_ids = torch.tensor(input_ids)
for name in ["conv-bert-base", "conv-bert-medium-small", "conv-bert-small"]:
model_tf = TFConvBertModel.from_pretrained(f"YituTech/{name}", from_pt=True)
model = ConvBertModel.from_pretrained(f"YituTech/{name}")
assert abs(model_tf(tf_input_ids)[0].cpu().numpy().sum() - model(pt_input_ids)[0].cpu().numpy().sum()) < 1e-2, "Error"
```
Changing the size and the name of a weight means that all tf weights have to be updated, but I think this is ok here since the TF models (if I understood correctly) were not behaving as expected before anyways.
I also checked that the conversion the other way around works as expected (`...from_tf=True`) | 02-21-2021 11:37:13 | 02-21-2021 11:37:13 | I'll push the fixed weights and remove from_pt in test.<|||||>> Ok with the change!
>
> Should we do a patch release for this?
Think it's a good idea<|||||>Thanks @patrickvonplaten <|||||>Ok will do a patch this afternoon<|||||>cc @stefan-it and @mrm8488 that have been playing with the model. We'll release v4.3.3 very soon which will contain that patch.<|||||>Should I re-convert the TF model then :thinking: <|||||>I think you should, once v4.3.3 is released (in a few minutes)<|||||>v4.3.3 has been released! |
transformers | 10,313 | closed | ValueError: too many values to unpack (expected 2) | ## Environment info
`transformers` version: 3.0.2
- Platform: Linux-4.19.112+-x86_64-with-Ubuntu-18.04-bionic
- Python version: 3.6.9
- PyTorch version (GPU?): 1.7.0+cu101 (True)
- Tensorflow version (GPU?): 2.4.1 (True)
- Using GPU in script?: True
- Using distributed or parallel set-up in script?: No
### Who can help
albert, bert, xlm: @LysandreJik
trainer: @sgugger
## Information
Model I am using (XLMRobertaForSequenceClassification):
The problem arises when using:
* [ *] the official example scripts: (give details below)
* [ ] my own modified scripts: (give details below)
The tasks I am working on is:
* [ ] an official GLUE/SQUaD task: (give the name)
* [* ] my own task or dataset: (give details below)
## To reproduce
# This training code is based on the `run_glue.py` script here:
# https://github.com/huggingface/transformers/blob/5bfcd0485ece086ebcbed2d008813037968a9e58/examples/run_glue.py#L128
for epoch_i in range(0, epochs):
# ========================================
# Training
# ========================================
# Perform one full pass over the training set.
print("")
print('======== Epoch {:} / {:} ========'.format(epoch_i + 1, epochs))
print('Training...')
# Measure how long the training epoch takes.
t0 = time.time()
# Reset the total loss for this epoch.
total_train_loss = 0
# Put the model into training mode. Don't be mislead--the call to
# `train` just changes the *mode*, it doesn't *perform* the training.
# `dropout` and `batchnorm` layers behave differently during training
# vs. test (source: https://stackoverflow.com/questions/51433378/what-does-model-train-do-in-pytorch)
model.train()
# For each batch of training data...
for step, batch in enumerate(train_dataloader):
# Progress update every 40 batches.
if step % 40 == 0 and not step == 0:
# Calculate elapsed time in minutes.
elapsed = format_time(time.time() - t0)
# Report progress.
print(' Batch {:>5,} of {:>5,}. Elapsed: {:}.'.format(step, len(train_dataloader), elapsed))
# Unpack this training batch from our dataloader.
#
# As we unpack the batch, we'll also copy each tensor to the GPU using the
# `to` method.
#
# `batch` contains three pytorch tensors:
# [0]: input ids
# [1]: attention masks
# [2]: labels
b_input_ids = batch[0].to(device)
b_input_mask = batch[1].to(device)
b_labels = batch[2].to(device)
# Always clear any previously calculated gradients before performing a
# backward pass. PyTorch doesn't do this automatically because
# accumulating the gradients is "convenient while training RNNs".
# (source: https://stackoverflow.com/questions/48001598/why-do-we-need-to-call-zero-grad-in-pytorch)
model.zero_grad()
# Perform a forward pass (evaluate the model on this training batch).
# The documentation for this `model` function is here:
# https://huggingface.co/transformers/v2.2.0/model_doc/bert.html#transformers.BertForSequenceClassification
# It returns different numbers of parameters depending on what arguments
# arge given and what flags are set. For our useage here, it returns
# the loss (because we provided labels) and the "logits"--the model
# outputs prior to activation.
loss, logits = model(b_input_ids,
token_type_ids=None,
attention_mask=b_input_mask,
labels=b_labels)
# Accumulate the training loss over all of the batches so that we can
# calculate the average loss at the end. `loss` is a Tensor containing a
# single value; the `.item()` function just returns the Python value
# from the tensor.
total_train_loss += loss.item()
# Perform a backward pass to calculate the gradients.
loss.backward()
# Clip the norm of the gradients to 1.0.
# This is to help prevent the "exploding gradients" problem.
torch.nn.utils.clip_grad_norm_(model.parameters(), 1.0)
# Update parameters and take a step using the computed gradient.
# The optimizer dictates the "update rule"--how the parameters are
# modified based on their gradients, the learning rate, etc.
optimizer.step()
# Update the learning rate.
scheduler.step()
# Calculate the average loss over all of the batches.
avg_train_loss = total_train_loss / len(train_dataloader)
# Measure how long this epoch took.
training_time = format_time(time.time() - t0)
print("")
print(" Average training loss: {0:.2f}".format(avg_train_loss))
print(" Training epcoh took: {:}".format(training_time))
# ========================================
# Validation
# ========================================
# After the completion of each training epoch, measure our performance on
# our validation set.
print("")
print("Running Validation...")
t0 = time.time()
# Put the model in evaluation mode--the dropout layers behave differently
# during evaluation.
model.eval()
# Tracking variables
total_eval_accuracy = 0
total_eval_loss = 0
nb_eval_steps = 0
# Evaluate data for one epoch
for batch in validation_dataloader:
# Unpack this training batch from our dataloader.
#
# As we unpack the batch, we'll also copy each tensor to the GPU using
# the `to` method.
#
# `batch` contains three pytorch tensors:
# [0]: input ids
# [1]: attention masks
# [2]: labels
b_input_ids = batch[0].to(device)
b_input_mask = batch[1].to(device)
b_labels = batch[2].to(device)
# Tell pytorch not to bother with constructing the compute graph during
# the forward pass, since this is only needed for backprop (training).
with torch.no_grad():
# Forward pass, calculate logit predictions.
# token_type_ids is the same as the "segment ids", which
# differentiates sentence 1 and 2 in 2-sentence tasks.
# The documentation for this `model` function is here:
# https://huggingface.co/transformers/v2.2.0/model_doc/bert.html#transformers.BertForSequenceClassification
# Get the "logits" output by the model. The "logits" are the output
# values prior to applying an activation function like the softmax.
(loss, logits) = model(b_input_ids,
token_type_ids=None,
attention_mask=b_input_mask,
labels=b_labels)
# Accumulate the validation loss.
total_eval_loss += loss.item()
# Move logits and labels to CPU
logits = logits.detach().cpu().numpy()
label_ids = b_labels.to('cpu').numpy()
# Calculate the accuracy for this batch of test sentences, and
# accumulate it over all batches.
total_eval_accuracy += flat_accuracy(logits, label_ids)
# Report the final accuracy for this validation run.
avg_val_accuracy = total_eval_accuracy / len(validation_dataloader)
print(" Accuracy: {0:.2f}".format(avg_val_accuracy))
# Calculate the average loss over all of the batches.
avg_val_loss = total_eval_loss / len(validation_dataloader)
# Measure how long the validation run took.
validation_time = format_time(time.time() - t0)
print(" Validation Loss: {0:.2f}".format(avg_val_loss))
print(" Validation took: {:}".format(validation_time))
# Record all statistics from this epoch.
training_stats.append(
{
'epoch': epoch_i + 1,
'Training Loss': avg_train_loss,
'Valid. Loss': avg_val_loss,
'Valid. Accur.': avg_val_accuracy,
'Training Time': training_time,
'Validation Time': validation_time
}
)
print("")
print("Training complete!")
print("Total training took {:} (h:mm:ss)".format(format_time(time.time()-total_t0)))
## Expected behavior
THE ERROR:
======== Epoch 1 / 4 ========
Training...
---------------------------------------------------------------------------
ValueError Traceback (most recent call last)
<ipython-input-92-840aefe69c26> in <module>()
85 token_type_ids=None,
86 attention_mask=b_input_mask,
---> 87 labels=b_labels)
88
89 # Accumulate the training loss over all of the batches so that we can
ValueError: too many values to unpack (expected 2)
I am working on a four-label classification task (on an Arabic dataset). I have run this code before but it is not working now. I get this error: ValueError: too many values to unpack (expected 2). I have not changed any of the steps or the preprocessing, but it raises this error this time. The tensor for each label instance is [0,1,2,3]. The error points to the labels parameter. Could you please suggest solutions?
| 02-21-2021 09:32:52 | 02-21-2021 09:32:52 | Please post the full error stacktrace. Which version of the script are you using?<|||||>Hi thank you for your reply. I did post the trace back for the error.
Looking forward to your reply
Hadeel
> On 21 Feb 2021, at 10:01 am, cronoik <[email protected]> wrote:
>
>
> Please post the full error stacktrace. Which version of the script are you using?
>
> —
> You are receiving this because you authored the thread.
> Reply to this email directly, view it on GitHub, or unsubscribe.
<|||||>Please use the [forums](https://discuss.huggingface.co/) to debug custom code with help from the community and only when you have isolated a part that is a bug in the library use a GitHub issue.
In this particular case, you should include the full stack trace, as pointed out before (it's not in your message, just the last frame) and the code that created your model: the error seems to point out that it only returns one value when you need two (since you write `loss, logits = model(...)`). You should debug that by looking at the model return on one batch.<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored. |
transformers | 10,312 | closed | LayoutLM Tensorflow model | # 🚀 Feature request
It would be great if there was a TF version of LayoutLM. I see there are scripts in the repo to convert PyTorch checkpoints to TF models, but I think the requirement is to have a TF model architecture to be able to load the PyTorch model's weights into it.
## Motivation
We are using TF in production and we'd love to be able to use the layoutlm.
## Your contribution
I am happy to tackle the conversion.
I was wondering if there are instructions on how to do the conversion properly so that it can be added to the repo.
| 02-21-2021 09:30:10 | 02-21-2021 09:30:10 | Sure, I can guide you if you want.
As LayoutLM is only a slight adaptation from BERT, I guess you can define `modeling_tf_layoutlm.py` based on `modeling_tf_bert.py`. Note that all layers should be renamed, e.g. `TFBertEmbeddings` -> `TFLayoutLMEmbeddings`. LayoutLM adds position embeddings for the tokens based on the bounding boxes, so I guess this is the only thing that needs to be added in the [embedding layer](https://github.com/huggingface/transformers/blob/88605f37a6fe7bde336f52700229d619b5ffa0f6/src/transformers/models/bert/modeling_tf_bert.py#L131).
In PyTorch, we have:
https://github.com/huggingface/transformers/blob/88605f37a6fe7bde336f52700229d619b5ffa0f6/src/transformers/models/layoutlm/modeling_layoutlm.py#L65-L68
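A rough TF sketch of those extra embeddings could look like this (the attribute names mirror the PyTorch side, which matters for the automatic weight conversion mentioned below; treat the exact layer signature as an assumption):

```python
import tensorflow as tf

class TFLayoutLMBboxEmbeddings(tf.keras.layers.Layer):
    """Hypothetical sketch of the extra bounding-box embeddings for TFLayoutLMEmbeddings."""

    def __init__(self, max_2d_positions: int, hidden_size: int, **kwargs):
        super().__init__(**kwargs)
        self.x_position_embeddings = tf.keras.layers.Embedding(max_2d_positions, hidden_size)
        self.y_position_embeddings = tf.keras.layers.Embedding(max_2d_positions, hidden_size)
        self.h_position_embeddings = tf.keras.layers.Embedding(max_2d_positions, hidden_size)
        self.w_position_embeddings = tf.keras.layers.Embedding(max_2d_positions, hidden_size)

    def call(self, bbox):
        # bbox: (batch, seq_len, 4) with (x0, y0, x1, y1) coordinates
        left = self.x_position_embeddings(bbox[..., 0])
        upper = self.y_position_embeddings(bbox[..., 1])
        right = self.x_position_embeddings(bbox[..., 2])
        lower = self.y_position_embeddings(bbox[..., 3])
        height = self.h_position_embeddings(bbox[..., 3] - bbox[..., 1])
        width = self.w_position_embeddings(bbox[..., 2] - bbox[..., 0])
        return left + upper + right + lower + height + width
```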
So this will need to be added to the `TFLayoutLMEmbeddings` class. Regarding a conversion script to convert the PyTorch weights into the TF version, there's a general script to convert PyTorch weights to TF 2 models: https://github.com/huggingface/transformers/blob/master/src/transformers/convert_pytorch_checkpoint_to_tf2.py
This works if all weights names are identical between the PT and TF implementations.<|||||>@NielsRogge Thanks for the guidance, I think I know where to start.
I'll comment here if I had more questions. I am hoping to have a PR by the end of this week <|||||>Should I upload TF weights under `https://huggingface.co/microsoft/` in the same place PT weights are stored?<|||||>Yes, pinging @julien-c here to give you access<|||||>pinging @LysandreJik and @sgugger <|||||>We don't have granular write access to model repos so (unless you're affiliated with Microsoft in some way) I would suggest you upload the files to a model repo under your HF user account and then we (or them) can copy the relevant files to the main repos!
Let me know if this is a suitable workflow.<|||||>@julien-c Thanks, the suggested workflow sounds good.<|||||>closing this issue as the code is already merged. |