repo | number | state | title | body | created_at | closed_at | comments
---|---|---|---|---|---|---|---|
transformers | 15,043 | closed | [SpeechEncoderDecoder] Fix from pretrained | # What does this PR do?
Fixes an edge case that leads to failing CI for: `tests/test_modeling_speech_encoder_decoder.py::Wav2Vec2BertModelTest::test_real_model_save_load_from_pretrained`
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/master/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/master/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
| 01-05-2022 15:30:02 | 01-05-2022 15:30:02 | The other two `VisionEncoderDecoderModel` and `EncoderDecoderModel` were correct |
transformers | 15,042 | closed | [CLIP] Fix TF test | # What does this PR do?
Fixes the slow TFCLIP tests by using the TF checkpoint instead of converting from `PT`. | 01-05-2022 13:50:55 | 01-05-2022 13:50:55 | |
transformers | 15,041 | closed | [CLIP] Fix PT test | # What does this PR do?
Fixes `tests/test_modeling_clip.py::CLIPModelIntegrationTest::test_inference` so that it is not run as part of the TF suite.
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/master/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/master/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
| 01-05-2022 12:33:47 | 01-05-2022 12:33:47 | |
transformers | 15,040 | closed | [Wav2Vec2ProcessorWithLM] improve decoder download | # What does this PR do?
This PR significantly speeds up the `.from_pretrained(...)` method of `Wav2Vec2ProcessorWithLM`.
Currently the tests will fail. They should pass as soon as https://github.com/huggingface/huggingface_hub/commit/010db7570ffaa131666a000189c6c3962aa24e32 is released.
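As a rough sketch of the call path this PR speeds up (the repo id below is only for illustration, any Wav2Vec2 checkpoint that ships an LM decoder is loaded the same way):
```python
from transformers import Wav2Vec2ProcessorWithLM

# illustrative repo id; from_pretrained downloads the tokenizer, feature extractor and LM decoder files
processor = Wav2Vec2ProcessorWithLM.from_pretrained("patrickvonplaten/wav2vec2-base-100h-with-lm")
print(type(processor.decoder))
```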
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/master/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/master/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
| 01-05-2022 11:30:02 | 01-05-2022 11:30:02 | This PR is not super urgent - it would be good to merge it by early/mid next week.<|||||>Just released version v0.4.0 of `huggingface_hub` - reran the test suite, everything passes.
Merging! |
transformers | 15,039 | closed | Wav2Vec2 bart-large - finetuning failing with ValueError: tokenizer has to be of type....but is <class 'transformers.models.bart.tokenization_bart_fast.BartTokenizerFast' | Hii,
I am trying to understand how to fine-tune wav2vec2 models with my own decoder. I am trying to replicate the fine-tuning for the
patrickvonplaten/wav2vec2-2-bart-large model with the run script from
https://github.com/huggingface/transformers/tree/master/examples/pytorch/speech-recognition
The create_model.py script as well as the config contain 'Wav2Vec2Processor' as the processor class.
But on line 429 (run_speech_recognition_seq2seq.py) this command
processor = AutoProcessor.from_pretrained(training_args.output_dir)
fails with -
```
AttributeError: 'str' object has no attribute 'from_pretrained'
```
And if I pass the processor class directly, it then fails with
```
ValueError: tokenizer has to be of type <class 'type'>, but is <class 'transformers.models.bart.tokenization_bart_fast.BartTokenizerFast'
```
I am still debugging but unable to figure out what the issue is.
--- Update
The internal error is that the tokenizer has to be of type PreTrainedTokenizer, but Bart provides a PreTrainedTokenizerFast, which is why it fails.
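A small sketch of the type relationship behind the error above (illustrative only, not from the original report):
```python
from transformers import BartTokenizerFast, PreTrainedTokenizer, PreTrainedTokenizerFast

tokenizer = BartTokenizerFast.from_pretrained("facebook/bart-large")
print(isinstance(tokenizer, PreTrainedTokenizerFast))  # True
print(isinstance(tokenizer, PreTrainedTokenizer))      # False - fast tokenizers are a sibling class,
                                                       # so a check that requires PreTrainedTokenizer fails
```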
Thanks! | 01-05-2022 07:19:07 | 01-05-2022 07:19:07 | Hey @programmeddeath1,
Sorry to answer so late. Could you please provide a short code snippet that reproduces the error? Thanks!<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored. |
transformers | 15,038 | closed | BERT model from pipeline hangs with multiprocessing pool | ## Environment info
<!-- You can run the command `transformers-cli env` and copy-and-paste its output below.
Don't forget to fill out the missing fields in that output! -->
- `transformers` version: transformers==4.9.1
- Platform: macOS Catalina 10.15.7 (19H2)
- Python version: 3.7.5
- PyTorch version (GPU?):
- Tensorflow version (GPU?): tensorflow==2.7.0
- Using GPU in script?: No
- Using distributed or parallel set-up in script?: Yes
### Who can help
@sgugger
Models:
- BERT: @LysandreJik
Library:
- Pipelines: @Narsil
## Information
The model I am using is BERT. I can run the model without problems when not using multiprocessing. However, when I try to use multiprocessing and run the model in another process, the process hangs and never returns.
## To reproduce
Steps to reproduce the behavior:
helper.py
```python
import multiprocessing
import os
from transformers import DistilBertTokenizer, TFDistilBertForQuestionAnswering, pipeline
qa = pipeline('question-answering',
model=TFDistilBertForQuestionAnswering.from_pretrained("./bert"),
tokenizer=DistilBertTokenizer.from_pretrained("./bert"))
def compute(query, context):
question_list = [{"question": query, "context": context}]
print(f"start qa: {os.getpid()}")
result = qa(question_list)
print("end qa")
return result
class NoDaemonProcess(multiprocessing.Process):
@property
def daemon(self):
return False
@daemon.setter
def daemon(self, value):
pass
class NoDaemonContext(type(multiprocessing.get_context("fork"))):
Process = NoDaemonProcess
```
main.py
```python
import multiprocessing
from helper import NoDaemonContext, compute
import os
if __name__ == "__main__":
print(f"current pid: {os.getpid()}")
query = "what is a potato?"
context = "A potato is a starchy vegetable. A potato is nothing"
mp_context = NoDaemonContext()
pool = mp_context.Pool(multiprocessing.cpu_count())
async_res = pool.apply_async(compute, (query, context))
async_res.wait()
res = async_res.get()
```
1. run `python main.py`
(I have to use the class NoDaemonContext, otherwise the multiprocessing module will complain: `AssertionError: daemonic processes are not allowed to have children`)
## Expected behavior
the code will hang on the line calling the model:
```
result = qa(question_list)
```
| 01-05-2022 07:05:57 | 01-05-2022 07:05:57 | Similar issue: https://github.com/huggingface/transformers/issues/14919<|||||>The error seems to come from Tensorflow itself; here is an example without using `pipeline` whatsoever.
```python
import multiprocessing
import os
from transformers import DistilBertTokenizer, TFDistilBertForQuestionAnswering, pipeline
import tensorflow as tf
model = TFDistilBertForQuestionAnswering.from_pretrained("bert-base-uncased")
def compute(query, context):
question_list = [{"question": query, "context": context}]
print(f"start qa: {os.getpid()}")
tokens = tf.zeros((1, 10), dtype=tf.int32)
result = model(input_ids=tokens)
print("end qa")
return result
class NoDaemonProcess(multiprocessing.Process):
@property
def daemon(self):
return False
@daemon.setter
def daemon(self, value):
pass
class NoDaemonContext(type(multiprocessing.get_context("fork"))):
Process = NoDaemonProcess
if __name__ == "__main__":
print(f"current pid: {os.getpid()}")
query = "what is a potato?"
context = "A potato is a starchy vegetable. A potato is nothing"
mp_context = NoDaemonContext()
pool = mp_context.Pool(multiprocessing.cpu_count())
async_res = pool.apply_async(compute, (query, context))
async_res.wait()
```
Other hints that it's tricky to share tf session across processes: https://stackoverflow.com/questions/36610290/tensorflow-and-multiprocessing-passing-sessions
The easiest way to solve ALL your problems related to threading or multiprocessing is to load the pipeline DIRECTLY on the thread or process instead of trying to share it afterwards. If you are doing GPU processing you might have different issues; for CPU processing it will work, but the model will be loaded N times (no real way around it with TF, I think). Also note that using CPU parallelism this way might not even be a win if TF is already able to use all your cores for matrix multiplications (OnnxRuntime does that, for instance), so just keep measuring.
Keep in mind that if you are controlling the parallelism yourself, you should deactivate it everywhere else it might be activated (otherwise there's a high risk that threads/processes steal work on individual cores and context switches hurt performance instead of helping). For instance, `slow` tokenizers use a threading pool to encode questions; the easy fix is to use `DistilBertTokenizerFast`, which will not use the old code and works out of the box (no need to handle daemonic vs non-daemonic processes, for instance).
Here is the solution closest to your original one (I recommend removing the NoDaemon thing, and using DistilBertTokenizerFast instead if you can though)
```python
import multiprocessing
import os
from transformers import DistilBertTokenizer, TFDistilBertForQuestionAnswering, pipeline
QA = None
def get_qa():
if QA is not None:
return QA
qa = pipeline(
"question-answering",
model=TFDistilBertForQuestionAnswering.from_pretrained("bert-base-uncased"),
tokenizer=DistilBertTokenizer.from_pretrained("bert-base-uncased"),
)
return qa
def compute(query, context):
question_list = [{"question": query, "context": context}]
print(f"start qa: {os.getpid()}")
qa = get_qa()
result = qa(question_list)
print("end qa", flush=True)
return result
class NoDaemonProcess(multiprocessing.Process):
@property
def daemon(self):
return False
@daemon.setter
def daemon(self, value):
pass
class NoDaemonContext(type(multiprocessing.get_context("fork"))):
Process = NoDaemonProcess
if __name__ == "__main__":
print(f"current pid: {os.getpid()}")
query = "what is a potato?"
context = "A potato is a starchy vegetable. A potato is nothing"
mp_context = NoDaemonContext()
# compute(query, context=context)
with mp_context.Pool(multiprocessing.cpu_count()) as pool:
async_res = pool.apply_async(compute, (query, context))
print(async_res.get())
# async_res = pool.apply(compute, (query, context))
# print(async_res)
```
I will close this in favor of : https://github.com/huggingface/transformers/issues/14919 for further discussion.
<|||||>@Narsil Thank you so much for your help. I think you are right: after reloading the model in every process/thread, the code works as expected without the hanging issue. |
transformers | 15,037 | closed | Wrap Roberta integration test forward passes with torch.no_grad() | # What does this PR do?
This PR wraps forward passes in Roberta integration tests with torch.no_grad(). See issue #14642
Fixes #14642
[Issue link](https://github.com/huggingface/transformers/issues/14642)
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [x ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [x ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/master/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/master/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
@NielsRogge @LysandreJik @sgugger
| 01-05-2022 02:01:01 | 01-05-2022 02:01:01 | |
transformers | 15,036 | closed | use block_size instead of max_seq_length in tf run_clm example | # What does this PR do?
The TensorFlow version of `run_clm.py` does not make appropriate use of the `block_size` flag.
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/master/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/master/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
@sgugger, @patil-suraj | 01-05-2022 00:30:30 | 01-05-2022 00:30:30 | |
transformers | 15,035 | closed | Issue with `stas/tiny-wmt19-en-de` model | ## Environment info
The environment in the Hosted inference API
### Who can help
@stas00
Models:
`stas/tiny-wmt19-en-de` (I assume the issue might be there with other language pairs)
## Information
I am unable to use the model to generate translations; it seems to be generating gibberish (see the screenshot attached).
I tried it both on the Hosted API and on a local Python setup and get similar results.
## To reproduce
Steps to reproduce the behavior:
1. Open this link: https://huggingface.co/stas/tiny-wmt19-en-de?text=This+is+a+sentence+in+English.
2. Click Generate and the output should be strange.
# Screenshot
<img width="1316" alt="Screen Shot 2022-01-04 at 4 47 53 PM" src="https://user-images.githubusercontent.com/2609265/148128671-eb021720-efcb-4dd5-95a1-71f68717cf38.png">
| 01-04-2022 21:53:00 | 01-04-2022 21:53:00 | Hi, tingofurro
Have a look at what the model card says:
> This is a tiny model that is used in the transformers test suite. It doesn't do anything useful, other than testing that modeling_fsmt.py is functional.
>
> Do not try to use it for anything that requires quality.
Thus closing this.
<|||||>Oops! Thanks for the clarification. |
transformers | 15,034 | closed | ValueError: Layer weight shape (30522, 768) not compatible with provided weight shape torch.Size([1, 15, 3072]) | I tried to use word embedding from using BERT as an embedding layer using Keras like that
```
inputs2 = Input(shape=(max_length,))
sent = Embedding(vocab_size, 768, mask_zero=True)(inputs2)
model.layers[1].set_weights([embedding_matrix])
model.layers[1].trainable = False
```
but I got the following error:
```
model.layers[1].set_weights([embedding_matrix])
File "/home/user/.local/lib/python3.6/site-packages/keras/engine/base_layer.py", line 1801, in set_weights
'shape %s' % (ref_shape, weight_shape))
ValueError: Layer weight shape (30522, 768) not compatible with provided weight shape torch.Size([1, 15, 3072])
``` | 01-04-2022 19:44:26 | 01-04-2022 19:44:26 | Hi @mathshangw, is there a way for you to share more? We don't have knowledge of most of the variables you're showing here.<|||||>Hi @mathshangw! Thank you for sharing your issue with us.
I see two potential issues, given the information you gave us so far:
1. the `inputs2` variable in your initial script contains a pytorch tensor. I am not sure whether Keras accept pytorch tensors as inputs, try to convert it to a TF tensor. Alternatively, you can change Bert to be a tensorflow model, by changing `AutoModel` to `TFAutoModel` (then the output would be a TF tensor);
2. The output of your `get_bert_embed_matrix` function is already an embedding, so you don't need an Embedding layer. You can use that as an input to a downstream model.
Let me know if it helps.
Finally -- I know this is your first issue here, but check our [guide for issues](https://github.com/huggingface/transformers/blob/master/ISSUES.md) if you haven't so far. It helps you find the answer faster (if it exists out there) and it helps us and the community to deliver the best experience possible (if it doesn't) 🤗 <|||||>Thanks a lot for replying .. so excuse me does this function write for return word-embedding using BERT ? <|||||>Yes, the concatenation of the last 4 hidden states, returned by your `get_bert_embed_matrix` function, is one of the most well-known methods ([table 7 in the original BERT paper](https://arxiv.org/pdf/1810.04805.pdf)) to obtain word-level embeddings for BERT :) <|||||>> Hi @mathshangw! Thank you for sharing your issue with us.
>
> I see two potential issues, given the information you gave us so far:
>
> 1. the `inputs2` variable in your initial script contains a pytorch tensor. I am not sure whether Keras accept pytorch tensors as inputs, try to convert it to a TF tensor. Alternatively, you can change Bert to be a tensorflow model, by changing `AutoModel` to `TFAutoModel` (then the output would be a TF tensor);
>
> 2. The output of your `get_bert_embed_matrix` function is already an embedding, so you don't need an Embedding layer. You can use that as an input to a downstream model.
>
>
> Let me know if it helps.
>
> Finally -- I know this is your first issue here, but check our [guide for issues](https://github.com/huggingface/transformers/blob/master/ISSUES.md) if you haven't so far. It helps you find the answer faster (if it exists out there) and it helps us and the community to deliver the best experience possible (if it doesn't) hugs
thanks a lot but how can i use the function as an input to a downstream model , please <|||||>You can do as follows (there are actually many ways to do it):
1. Ensure you get the output of `get_bert_embed_matrix` as a tf.Tensor ([docs](https://www.tensorflow.org/api_docs/python/tf/Tensor))
2. Define your model using the sequential API ([docs](https://www.tensorflow.org/api_docs/python/tf/keras/Sequential))
3. Ensure the first later is a `Flatten` keras layer, so it handles the input shapes for you (see the examples in the [docs](https://www.tensorflow.org/api_docs/python/tf/keras/layers/Flatten))
4. Define the rest of your downstream model<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored. |
transformers | 15,033 | closed | Fix doc example: mask_time_indices (numpy) has no attribute 'to' | # What does this PR do?
In speech to text models, there are doc examples
```
>>> mask_time_indices = _compute_mask_indices((batch_size, sequence_length), mask_prob=0.2, mask_length=2)
>>> with torch.no_grad():
... outputs = model(input_values, mask_time_indices=mask_time_indices)
```
which gives `numpy.ndarray`, and the following line fails
https://github.com/huggingface/transformers/blob/19d37c2dd36d73537a6855c56808e524fb584459/src/transformers/models/wav2vec2/modeling_wav2vec2.py#L1481
with `AttributeError: 'numpy.ndarray' object has no attribute 'to'`.
This PR add
```
mask_time_indices = torch.tensor(mask_time_indices, device=input_values.device, dtype=torch.bool)
```
in the examples to make them work.
## Who can review?
@patrickvonplaten | 01-04-2022 16:23:22 | 01-04-2022 16:23:22 | Thank you! |
transformers | 15,032 | closed | Removing tokens from the tokenizer | Are there any methods I can use to remove unwanted tokens from the tokenizer?
Referring to #4827 , I tried to remove tokens from the tokenizer with the following code.
First, I fetch the tokenizer from huggingface hub.
```python
from transformers import AutoTokenizer, AutoModelForMaskedLM
tokenizer = AutoTokenizer.from_pretrained("klue/roberta-base")
print(len(tokenizer.vocab))
```
```
32000
```
From the fetched tokenizer, I tried to remove tokens such as `[unused363]`.
So I first extracted the tokens containing 'unused' and deleted them afterwards.
```python
# get all tokens with "unused" in target_tokenizer
unwanted_words = []
for word in tokenizer.vocab:
if "unused" in word:
unwanted_words.append(word)
# remove all unwanted tokens from target_tokenizer
for word in unwanted_words:
del tokenizer.vocab[word]
print(len(tokenizer.vocab))
```
```
32000
```
Apparently, `del` didn't do its job.
The list `unwanted_words` has 500 elements, but none of them were removed from the tokenizer.
Any other methods that I can refer to? | 01-04-2022 15:59:34 | 01-04-2022 15:59:34 | @snoop2head , I will answer your question purely "technically" because in general it can be quite complicated to remove tokens from a tokenizer that has been trained with a certain algorithm.
Indeed, it can seem a bit complicated to modify a fast tokenizer because it wraps the [tokenizers](https://huggingface.co/docs/tokenizers/python/latest/index.html) library, which is not executed in Python but in Rust. In particular, the `vocab` attribute cannot be modified because it is in fact just reading an attribute of an object that runs in Rust. So when we want to modify a part of the tokenizer that runs in Rust, we have to recreate it.
Here are the steps to remove tokens from your vocabulary:
1) Get your tokenizer and the list of tokens you want to remove
```python
import json
from transformers import AutoTokenizer
tokenizer = AutoTokenizer.from_pretrained("klue/roberta-base")
# get all tokens with "unused" in target_tokenizer
unwanted_words = []
for word in tokenizer.vocab:
if "unused" in word:
unwanted_words.append(word)
```
2) Get the arguments that allowed to initialize the "model" component of the `backend_tokenizer`.
```python
model_state = json.loads(tokenizer.backend_tokenizer.model.__getstate__())
print(len(model_state["vocab"]))
# 32000
```
3) Modify the initialization arguments, in particular the vocabulary to remove the tokens we don't want
```python
# remove all unwanted tokens from the vocabulary
for word in unwanted_words:
del model_state["vocab"][word]
print(len(model_state["vocab"]))
# 31500
```
4) Initialize the "model" component of the `backend_tokenizer` again with the new vocabulary
```python
from tokenizers import models
model_class = getattr(models, model_state.pop("type"))
tokenizer.backend_tokenizer.model = model_class(**model_state)
print(len(tokenizer.vocab))
# 31500
```
I take this opportunity, could you tell us more about why you want to be able to remove tokens from your vocabulary?<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored.<|||||>I will close this issue as there is not much activity. But don't hesitate to answer later if you want to bring us more information. :slightly_smiling_face: <|||||>@SaulLu How can we remove stale terms and add newer terms? I am interested to train a model on evolving text where words go through a vocabulary and semantic shift. In such a case, I would train a new tokenizer on the latest data and create a new vocabulary of common terms between old and new tokenizer along with removal of stale tokens and adding newer ones from the new tokenizer.<|||||>@SaulLu, thanks for the example you gave above. I tried using the same approach with GPT but it fails at the point of initializing the "model" component of the backend_tokenizer with the new vocabulary
```
#1. Get your tokenizer and the list of tokens you want to remove
import json
from transformers import AutoTokenizer
tokenizer = AutoTokenizer.from_pretrained("gpt2")
# get all tokens with "unused" in target_tokenizer
unwanted_words = [ 'ply', 'Ġmor','Ġprovide','IC','ung','Ġparty', 'Ġexist', 'Ġmag',]
#2. Get the arguments that allowed to initialize the "model" component of the backend_tokenizer.
model_state = json.loads(tokenizer.backend_tokenizer.model.__getstate__())
print(len(model_state["vocab"]))
#3. Modify the initialization arguments, in particular the vocabulary to remove the tokens we don't want
# remove all unwanted tokens from the vocabulary
for word in unwanted_words:
del model_state["vocab"][word]
print(len(model_state["vocab"]))
#4. Initialize the "model" component of the backend_tokenizer again with the new vocabulary
from tokenizers import models
model_class = getattr(models, model_state.pop("type"))
tokenizer.backend_tokenizer.model = model_class(**model_state)
print(len(tokenizer.vocab))
```
The error I have is below:
```
---------------------------------------------------------------------------
TypeError Traceback (most recent call last)
[<ipython-input-21-fa908d23c419>](https://localhost:8080/#) in <module>
30 model_class = getattr(models, model_state.pop("type"))
31
---> 32 tokenizer.backend_tokenizer.model = model_class(**model_state)
33
34 print(len(tokenizer.vocab))
TypeError: argument 'merges': failed to extract enum PyMerges ('Merges | Filename')
- variant Merges (Merges): TypeError: failed to extract field PyMerges::Merges.0, caused by TypeError: 'str' object cannot be converted to 'PyTuple'
- variant Filename (Filename): TypeError: failed to extract field PyMerges::Filename.0, caused by TypeError: 'list' object cannot be converted to 'PyString'
```
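One possible direction (a sketch, not a confirmed fix, reusing the `model_state`, `unwanted_words` and `model_class` variables from the snippet above): for a BPE model such as GPT-2, the `merges` come back as plain strings and may still reference removed tokens, so they need to be filtered and converted to tuples before re-instantiating the model:
```python
unwanted = set(unwanted_words)
cleaned_merges = []
for merge in model_state["merges"]:
    left, right = merge.split(" ", 1) if isinstance(merge, str) else merge
    # drop merges that involve or produce a removed token
    if left in unwanted or right in unwanted or (left + right) in unwanted:
        continue
    cleaned_merges.append((left, right))
model_state["merges"] = cleaned_merges

# then re-run the failing line
tokenizer.backend_tokenizer.model = model_class(**model_state)
```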
Is there anything I can do to get it to work? |
transformers | 15,031 | closed | [DocTests Speech] Add doc tests for all speech models | # What does this PR do?
This PR revives the doc tests and adds doc tests for all speech models now that the new docs are finished.
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/master/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/master/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
| 01-04-2022 14:32:26 | 01-04-2022 14:32:26 | |
transformers | 15,030 | closed | Enabling `TF` on `image-classification` pipeline. | # What does this PR do?
Fixes # (issue)
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/master/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/master/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
@philschmid
@LysandreJik
<!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @
If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**.
Please tag fewer than 3 people.
Models:
- albert, bert, xlm: @LysandreJik
- blenderbot, bart, marian, pegasus, encoderdecoder, t5: @patrickvonplaten, @patil-suraj
- longformer, reformer, transfoxl, xlnet: @patrickvonplaten
- fsmt: @stas00
- funnel: @sgugger
- gpt2: @patrickvonplaten, @LysandreJik
- rag: @patrickvonplaten, @lhoestq
- tensorflow: @LysandreJik
Library:
- benchmarks: @patrickvonplaten
- deepspeed: @stas00
- ray/raytune: @richardliaw, @amogkam
- text generation: @patrickvonplaten
- tokenizers: @n1t0, @LysandreJik
- trainer: @sgugger
- pipelines: @LysandreJik
Documentation: @sgugger
HF projects:
- datasets: [different repo](https://github.com/huggingface/datasets)
- rust tokenizers: [different repo](https://github.com/huggingface/tokenizers)
Examples:
- maintained examples (not research project or legacy): @sgugger, @patil-suraj
- research_projects/bert-loses-patience: @JetRunner
- research_projects/distillation: @VictorSanh
--> | 01-04-2022 09:41:59 | 01-04-2022 09:41:59 | |
transformers | 15,029 | closed | Hotfix `chunk_length_s` instead of `_ms`. | This also fixes an issue where the filled-in token was the padded token, which could lead to incorrect decoding.
Using the same token for CTC is better (it prevents extra repetition).
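A minimal usage sketch of the renamed argument (the file path is just an example):
```python
from transformers import pipeline

asr = pipeline("automatic-speech-recognition", model="facebook/wav2vec2-base-960h")
# the chunk length is now given in seconds via `chunk_length_s`
asr("sample.flac", chunk_length_s=10)
```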
# What does this PR do?
Fixes # (issue)
@patrickvonplaten
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/master/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/master/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
<!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @
If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**.
Please tag fewer than 3 people.
Models:
- albert, bert, xlm: @LysandreJik
- blenderbot, bart, marian, pegasus, encoderdecoder, t5: @patrickvonplaten, @patil-suraj
- longformer, reformer, transfoxl, xlnet: @patrickvonplaten
- fsmt: @stas00
- funnel: @sgugger
- gpt2: @patrickvonplaten, @LysandreJik
- rag: @patrickvonplaten, @lhoestq
- tensorflow: @LysandreJik
Library:
- benchmarks: @patrickvonplaten
- deepspeed: @stas00
- ray/raytune: @richardliaw, @amogkam
- text generation: @patrickvonplaten
- tokenizers: @n1t0, @LysandreJik
- trainer: @sgugger
- pipelines: @LysandreJik
Documentation: @sgugger
HF projects:
- datasets: [different repo](https://github.com/huggingface/datasets)
- rust tokenizers: [different repo](https://github.com/huggingface/tokenizers)
Examples:
- maintained examples (not research project or legacy): @sgugger, @patil-suraj
- research_projects/bert-loses-patience: @JetRunner
- research_projects/distillation: @VictorSanh
--> | 01-04-2022 09:06:21 | 01-04-2022 09:06:21 | Feel free to merge as soon as the tests are passing :-) |
transformers | 15,028 | closed | add test checking the offsets for an input splitted into words for different `add_prefix_space` and `trim_offsets` args | # What does this PR do?
Added a new test that checks the offsets returned by the fast tokenizers when the input is already pre-tokenized, for the RoBERTa tokenizer. To make this test pass, use the new version of the tokenizers library, v0.11.0 (thanks to [this PR](https://github.com/huggingface/tokenizers/pull/844)).
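A small sketch of the behaviour the new test exercises (flag values chosen for illustration):
```python
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("roberta-base", add_prefix_space=True, trim_offsets=True)
encoding = tokenizer(["hello", "world"], is_split_into_words=True, return_offsets_mapping=True)
# offsets are relative to each word of the pre-tokenized input
print(encoding["offset_mapping"])
```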
This addition was discussed [here](https://github.com/huggingface/transformers/pull/14752#issuecomment-998144055)
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [x] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/master/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/master/docs#writing-source-documentation).
- [x] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
<!-- Would love a review by @LysandreJik , @Narsil or @sgugger --> | 01-04-2022 09:04:31 | 01-04-2022 09:04:31 | This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored. |
transformers | 15,027 | closed | Adding QoL for `batch_size` arg (like others enabled everywhere). | # What does this PR do?
Adds a `pipeline(.. batch_size=16)` argument.
Previously this was a `__call__`-only argument: `pipe = pipeline(....); pipe(..., batch_size=16)`.
This PR makes both supported (like all other pipeline arguments).
Some linked issues: https://github.com/huggingface/transformers/issues/14327
https://github.com/huggingface/transformers/issues/14333#issuecomment-973007453
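A minimal usage sketch of the two call styles (the model name is only for illustration):
```python
from transformers import pipeline

# batch_size accepted at construction time (what this PR adds)...
pipe = pipeline("text-classification", model="distilbert-base-uncased-finetuned-sst-2-english", batch_size=16)
# ...or per call, as before
pipe(["first sentence", "second sentence"], batch_size=16)
```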
Fixes # (issue)
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/master/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/master/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
<!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @
If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**.
Please tag fewer than 3 people.
Models:
- albert, bert, xlm: @LysandreJik
- blenderbot, bart, marian, pegasus, encoderdecoder, t5: @patrickvonplaten, @patil-suraj
- longformer, reformer, transfoxl, xlnet: @patrickvonplaten
- fsmt: @stas00
- funnel: @sgugger
- gpt2: @patrickvonplaten, @LysandreJik
- rag: @patrickvonplaten, @lhoestq
- tensorflow: @LysandreJik
Library:
- benchmarks: @patrickvonplaten
- deepspeed: @stas00
- ray/raytune: @richardliaw, @amogkam
- text generation: @patrickvonplaten
- tokenizers: @n1t0, @LysandreJik
- trainer: @sgugger
- pipelines: @LysandreJik
Documentation: @sgugger
HF projects:
- datasets: [different repo](https://github.com/huggingface/datasets)
- rust tokenizers: [different repo](https://github.com/huggingface/tokenizers)
Examples:
- maintained examples (not research project or legacy): @sgugger, @patil-suraj
- research_projects/bert-loses-patience: @JetRunner
- research_projects/distillation: @VictorSanh
--> | 01-04-2022 08:46:21 | 01-04-2022 08:46:21 | Gentle reminder to fill the description of PRs pretty please, before merging @Narsil :-)<|||||>@sgugger Oups, sorry ! Done. :D |
transformers | 15,026 | open | [Benchmark] HF Trainer on A100 | # 🖥 Benchmarking `transformers` w/ HF Trainer on a single A100 40GB
We are going to use a special benchmarking tool that will do all the work for us. https://github.com/huggingface/transformers/pull/14934
This is the index post and specific benchmarks are in their own posts below:
1. [fp16 vs bf16 vs tf32 vs fp32](https://github.com/huggingface/transformers/issues/15026#issuecomment-1004543189)
2. [gradient accumulation steps](https://github.com/huggingface/transformers/issues/15026#issuecomment-1004592231)
3. [batch size](https://github.com/huggingface/transformers/issues/15026#issuecomment-1005033957)
4. [gradient checkpointing](https://github.com/huggingface/transformers/issues/15026#issuecomment-1005034578)
5. [optimizers](https://github.com/huggingface/transformers/issues/15026#issuecomment-1005220263)
6. [combining winning strategies](https://github.com/huggingface/transformers/issues/15026#issuecomment-1005227577) **~3x speed improvement!**
7. [RTX-3090 vs A100](https://github.com/huggingface/transformers/issues/15026#issuecomment-1005235845)
Note that each benchmark was run only once, so multiple runs and averaging is probably going to give slightly different results. The purpose here though is to see relative differences roughly and not try to give an exact number.
See also the [same benchmarks for RTX-3090](https://github.com/huggingface/transformers/issues/14608)
| 01-04-2022 05:41:33 | 01-04-2022 05:41:33 | # precision: fp16 vs bf16 vs tf32 vs fp32
Main interest: benchmarking the new --bf16 and --tf32 modes on Ampere/A100, compared to the fp16 and fp32 modes.
- bf16 is `autocast(dtype=torch.bfloat16)`
- tf32 is `torch.backends.cuda.matmul.allow_tf32 = True`
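For reference, a minimal plain-PyTorch sketch of these two switches (assuming an Ampere GPU is available):
```python
import torch

# TF32: let matmuls/convolutions use TensorFloat-32 on Ampere
torch.backends.cuda.matmul.allow_tf32 = True
torch.backends.cudnn.allow_tf32 = True

# BF16: run the forward pass under bfloat16 autocast
model = torch.nn.Linear(16, 16).cuda()
x = torch.randn(4, 16, device="cuda")
with torch.autocast(device_type="cuda", dtype=torch.bfloat16):
    y = model(x)
print(y.dtype)  # torch.bfloat16
```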
## Benchmark
The benchmark uses 3 different t5 models, and at the end of the section also gpt2. For t5 the main script is:
```
CUDA_VISIBLE_DEVICES=0 python \
examples/pytorch/translation/run_translation.py --model_name_or_path t5-small \
--output_dir output_dir --do_train --label_smoothing 0.1 --logging_strategy no \
--save_strategy no --per_device_train_batch_size 64 --max_source_length 512 \
--max_target_length 512 --num_train_epochs 1 --overwrite_output_dir \
--source_lang en --target_lang ro --dataset_name wmt16 --dataset_config "ro-en" \
--source_prefix "translate English to Romanian: " --warmup_steps 50 \
--max_train_samples 40000 --dataloader_num_workers 2
```
and now adding one of:
```
--tf32 0 # fp32
--tf32 0 --fp16
--tf32 0 --bf16
--tf32 1
--tf32 1 --fp16
--tf32 1 --bf16
```
But we are going to use a special benchmarking tool that will do all the work for us. https://github.com/huggingface/transformers/pull/14934
Important notes:
1. `--tf32 0 --fp16 0` combo is just fp32 (which is the default mode - we don't have this option per se)
2. I changed `--per_device_train_batch_size` in the base command from 32 (`t5-small`) to 16 (`t5-base`) to 8 (`t5-large`) to be able to fit into the GPU memory while keeping it as occupied as possible.
3. I changed `--max_train_samples` in the base command from 20k (`t5-small`) to 10k (`t5-base`) to 5k (`t5-large`) to give each run about 1-3min of run time so that the benchmark doesn't take too too long, but is long enough to put strain on the card.
## Setup
```
Datetime : 2022-01-03 22:43:38
Software:
transformers: 4.16.0.dev0
torch : 1.10.1
cuda : 11.3
python : 3.8.12
Hardware:
1 GPUs : NVIDIA A100-SXM4-40GB, 39.59GB
```
## Benchmark 1: t5-small
| Variation | Train<br>samples<br>per<br>second | Diff<br>% | Train<br>loss |
|:----------------|------------------------------------:|------------:|----------------:|
| --tf32 0 | 272.59 | 0 | 2.49 |
| --tf32 1 | 581.61 | 113 | 2.49 |
| --fp16 --tf32 0 | 643.07 | 136 | 2.49 |
| --fp16 --tf32 1 | 635.24 | 133 | 2.49 |
| --bf16 --tf32 0 | 616.23 | 126 | 2.50 |
| --bf16 --tf32 1 | 612.59 | 125 | 2.50 |
Conclusions:
- fp16 is 136% faster than fp32
- bf16 is ~4% slower than fp16
- tf32 is 113% faster than fp32, and only ~10% slower than fp16
- tf32 makes ~1% impact on bf16 and fp16 modes
```
CUDA_VISIBLE_DEVICES=3 python ./scripts/benchmark/trainer-benchmark.py \
--base-cmd \
' examples/pytorch/translation/run_translation.py --model_name_or_path t5-small \
--output_dir output_dir --do_train --label_smoothing 0.1 --logging_strategy no \
--save_strategy no --per_device_train_batch_size 64 --max_source_length 512 \
--max_target_length 512 --num_train_epochs 1 --overwrite_output_dir \
--source_lang en --target_lang ro --dataset_name wmt16 --dataset_config "ro-en" \
--source_prefix "translate English to Romanian: " --warmup_steps 50 \
--max_train_samples 40000 --dataloader_num_workers 2 ' \
--target-metric-key train_samples_per_second --repeat-times 1 --variations \
'|--fp16|--bf16' '--tf32 0|--tf32 1' --report-metric-keys train_loss \
--repeat-times 1 --base-variation '--tf32 0'
```
## Benchmark 2: t5-base
| Variation | Train<br>samples<br>per<br>second | Diff<br>% | Train<br>loss |
|:----------------|------------------------------------:|------------:|----------------:|
| --tf32 0 | 80.10 | 0 | 2.21 |
| --tf32 1 | 214.10 | 167 | 2.21 |
| --fp16 --tf32 0 | 219.20 | 174 | 2.21 |
| --fp16 --tf32 1 | 218.46 | 173 | 2.21 |
| --bf16 --tf32 0 | 214.17 | 167 | 2.22 |
| --bf16 --tf32 1 | 225.44 | 181 | 2.22 |
Conclusions:
- fp16 is 173% faster than fp32
- bf16 is about the same as fp16
- tf32 is 167% faster than fp32, and is about the same as fp16
```
CUDA_VISIBLE_DEVICES=0 python ./scripts/benchmark/trainer-benchmark.py \
--base-cmd \
' examples/pytorch/translation/run_translation.py --model_name_or_path t5-base \
--output_dir output_dir --do_train --label_smoothing 0.1 --logging_strategy no \
--save_strategy no --per_device_train_batch_size 32 --max_source_length 512 \
--max_target_length 512 --num_train_epochs 1 --overwrite_output_dir \
--source_lang en --target_lang ro --dataset_name wmt16 --dataset_config "ro-en" \
--source_prefix "translate English to Romanian: " --warmup_steps 50 \
--max_train_samples 20000 --dataloader_num_workers 2 ' \
--target-metric-key train_samples_per_second --repeat-times 1 --variations \
'|--fp16|--bf16' '--tf32 0|--tf32 1' --report-metric-keys train_loss \
--repeat-times 1 --base-variation '--tf32 0'
```
## Benchmark 3: t5-large
| Variation | Train<br>samples<br>per<br>second | Diff<br>% | Train<br>loss |
|:----------------|------------------------------------:|------------:|----------------:|
| --tf32 0 | 31.59 | 0 | 2.03 |
| --tf32 1 | 36.13 | 14 | 2.03 |
| --fp16 --tf32 0 | 34.86 | 10 | 0.00 |
| --fp16 --tf32 1 | 36.77 | 16 | 0.00 |
| --bf16 --tf32 0 | 31.35 | -1 | 2.04 |
| --bf16 --tf32 1 | 31.30 | -1 | 2.04 |
Conclusions:
- **fp16 overflows here** (loss=0). (this is a very well [known issue](https://github.com/huggingface/transformers/pull/10956) with many bf16-pretrained models that are being attempted to be finetuned in fp16).
- tf32 is only 14% faster than fp32
- **bf16 for some reason performs really badly** - same as fp32 (this same benchmark on RTX-3090 doesn't have this problem)
```
CUDA_VISIBLE_DEVICES=3 python ./scripts/benchmark/trainer-benchmark.py \
--base-cmd \
' examples/pytorch/translation/run_translation.py --model_name_or_path t5-large \
--output_dir output_dir --do_train --label_smoothing 0.1 --logging_strategy no \
--save_strategy no --per_device_train_batch_size 8 --max_source_length 512 \
--max_target_length 512 --num_train_epochs 1 --overwrite_output_dir \
--source_lang en --target_lang ro --dataset_name wmt16 --dataset_config "ro-en" \
--source_prefix "translate English to Romanian: " --warmup_steps 50 \
--max_train_samples 5000 --dataloader_num_workers 2 ' \
--target-metric-key train_samples_per_second --repeat-times 1 --variations \
'|--fp16|--bf16' '--tf32 0|--tf32 1' --report-metric-keys train_loss \
--repeat-times 1 --base-variation '--tf32 0'
```
If I use a higher bs=16 instead of 8, bf16 does deliver better performance than fp32, but still not on par with fp16:
| Variation | Train<br>samples<br>per<br>second | Diff<br>% | Train<br>loss |
|:----------------|------------------------------------:|------------:|----------------:|
| --tf32 0 | 39.59 | 0 | 2.10 |
| --tf32 1 | 67.24 | 70 | 2.10 |
| --fp16 --tf32 0 | 70.88 | 79 | 0.00 |
| --fp16 --tf32 1 | 70.38 | 78 | 0.00 |
| --bf16 --tf32 0 | 61.37 | 55 | 2.12 |
| --bf16 --tf32 1 | 59.95 | 51 | 2.12 |
It'd be great to know why CUDA doesn't activate some optimization since not everybody is going to run benchmarks, but if you do run benchmarks and find yourself in this situation, Eddie Yan proposed adding `--gradient_accumulation_steps` to create a much larger batch for the scheduler to step with, which should help a lot.
So let's try `--per_device_train_batch_size 16 --gradient_accumulation_steps 4` for a total of effective bs=64:
| Variation | Train<br>samples<br>per<br>second | Diff<br>% | Train<br>loss |
|:----------------|------------------------------------:|------------:|----------------:|
| --tf32 0 | 45.60 | 0 | 2.35 |
| --tf32 1 | 86.78 | 90 | 2.36 |
| --fp16 --tf32 0 | 77.47 | 70 | 0.00 |
| --fp16 --tf32 1 | 79.63 | 75 | 0.00 |
| --bf16 --tf32 0 | 75.85 | 66 | 2.41 |
| --bf16 --tf32 1 | 73.19 | 61 | 2.41 |
Both bf16 and tf32 show a much better performance here.
## Benchmark 4: gpt2
| Variation | Train<br>samples<br>per<br>second | Diff<br>% | Train<br>loss |
|:----------------|------------------------------------:|------------:|----------------:|
| --tf32 0 | 28.77 | 0 | 3.34 |
| --tf32 1 | 63.51 | 121 | 3.34 |
| --fp16 --tf32 0 | 69.60 | 142 | 3.34 |
| --fp16 --tf32 1 | 69.98 | 143 | 3.34 |
| --bf16 --tf32 0 | 70.37 | 145 | 3.34 |
| --bf16 --tf32 1 | 69.88 | 143 | 3.34 |
Conclusions:
- fp16 is ~140% faster than fp32
- bf16 is on par with fp16
- tf32 is 121% faster than fp32, and only ~8% slower than fp16
- tf32 makes almost no impact on bf16 and fp16 modes
```
*** The benchmark command line was:
CUDA_VISIBLE_DEVICES=0 python ./scripts/benchmark/trainer-benchmark.py \
--base-cmd \
' examples/pytorch/language-modeling/run_clm.py --model_name_or_path gpt2 \
--dataset_name wikitext --dataset_config_name wikitext-2-raw-v1 \
--logging_strategy no --save_strategy no --do_train --max_train_samples 5000 \
--per_device_train_batch_size 16 --num_train_epochs 1 --warmup_steps 8 \
--block_size 512 --report_to none ' \
--target-metric-key train_samples_per_second --repeat-times 1 --variations \
'|--fp16|--bf16' '--tf32 0|--tf32 1' --report-metric-keys train_loss \
--repeat-times 1 --base-variation '--tf32 0'
```
## Benchmark 5: gpt2-medium
| Variation | Train<br>samples<br>per<br>second | Diff<br>% | Train<br>loss |
|:----------------|------------------------------------:|------------:|----------------:|
| --tf32 0 | 10.60 | 0 | 2.98 |
| --tf32 1 | 24.81 | 134 | 2.98 |
| --fp16 --tf32 0 | 27.67 | 161 | 2.99 |
| --fp16 --tf32 1 | 27.62 | 160 | 2.99 |
| --bf16 --tf32 0 | 27.57 | 160 | 2.99 |
| --bf16 --tf32 1 | 27.55 | 160 | 2.99 |
Conclusions:
- fp16 is ~160% faster than fp32
- bf16 is on par with fp16
- tf32 is 134% faster than fp32, and only ~10% slower than fp16
- tf32 makes no impact on bf16 and fp16 modes
```
CUDA_VISIBLE_DEVICES=3 python ./scripts/benchmark/trainer-benchmark.py \
--base-cmd \
' examples/pytorch/language-modeling/run_clm.py --model_name_or_path gpt2-medium \
--dataset_name wikitext --dataset_config_name wikitext-2-raw-v1 \
--logging_strategy no --save_strategy no --do_train --max_train_samples 2500 \
--per_device_train_batch_size 8 --num_train_epochs 1 --warmup_steps 8 \
--block_size 512 --report_to none ' \
--target-metric-key train_samples_per_second --repeat-times 1 --variations \
'|--fp16|--bf16' '--tf32 0|--tf32 1' --report-metric-keys train_loss \
--repeat-times 1 --base-variation '--tf32 0'
```<|||||># gradient accumulation steps
Let's choose the `t5-base` model to test with, as it's pretty large yet doesn't overflow like t5-large.
Let's measure `--gradient_accumulation_steps` 1, 2, 4, 8 and 16 with different precision configurations.
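For context, here is a minimal self-contained sketch of the accumulation pattern being benchmarked (a toy model and made-up data, not the Trainer's actual implementation):
```python
import torch

# hypothetical toy setup just to illustrate the pattern
model = torch.nn.Linear(16, 2)
optimizer = torch.optim.AdamW(model.parameters(), lr=1e-3)
loss_fn = torch.nn.CrossEntropyLoss()
data = [(torch.randn(4, 16), torch.randint(0, 2, (4,))) for _ in range(32)]

accumulation_steps = 8
optimizer.zero_grad()
for step, (x, y) in enumerate(data):
    loss = loss_fn(model(x), y)
    (loss / accumulation_steps).backward()   # average gradients over the effective batch
    if (step + 1) % accumulation_steps == 0:
        optimizer.step()                     # one optimizer step per 8 micro-batches
        optimizer.zero_grad()
```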
| Variation | Train<br>samples<br>per<br>second | Diff<br>% | Train<br>loss |
|:-------------------------------------------------|------------------------------------:|------------:|----------------:|
| --gradient_accumulation_steps 1 --tf32 0 | 93.68 | 0 | 2.21 |
| --gradient_accumulation_steps 1 --tf32 1 | 210.53 | 125 | 2.21 |
| --gradient_accumulation_steps 1 --tf32 0 --fp16 | 217.75 | 132 | 2.21 |
| --gradient_accumulation_steps 1 --tf32 0 --bf16 | 224.09 | 139 | 2.22 |
| --gradient_accumulation_steps 2 --tf32 0 | 97.48 | 4 | 2.28 |
| --gradient_accumulation_steps 2 --tf32 1 | 236.39 | 152 | 2.28 |
| --gradient_accumulation_steps 2 --tf32 0 --fp16 | 244.81 | 161 | 2.28 |
| --gradient_accumulation_steps 2 --tf32 0 --bf16 | 246.08 | 163 | 2.29 |
| --gradient_accumulation_steps 4 --tf32 0 | 99.68 | 6 | 2.39 |
| --gradient_accumulation_steps 4 --tf32 1 | 248.24 | 165 | 2.40 |
| --gradient_accumulation_steps 4 --tf32 0 --fp16 | 259.20 | 177 | 2.41 |
| --gradient_accumulation_steps 4 --tf32 0 --bf16 | 263.39 | 181 | 2.42 |
| --gradient_accumulation_steps 8 --tf32 0 | 100.67 | 7 | 2.58 |
| --gradient_accumulation_steps 8 --tf32 1 | 252.45 | 169 | 2.58 |
| --gradient_accumulation_steps 8 --tf32 0 --fp16 | 261.59 | 179 | 2.58 |
| --gradient_accumulation_steps 8 --tf32 0 --bf16 | 267.37 | 185 | 2.62 |
| --gradient_accumulation_steps 16 --tf32 0 | 100.97 | 8 | 2.83 |
| --gradient_accumulation_steps 16 --tf32 1 | 253.68 | 171 | 2.84 |
| --gradient_accumulation_steps 16 --tf32 0 --fp16 | 256.13 | 173 | 2.84 |
| --gradient_accumulation_steps 16 --tf32 0 --bf16 | 274.14 | 193 | 2.89 |
Let's filter out just one subset so that it's easier to compare the gradient accumulation differences alone, so re-running with just bf16 enabled (`--tf32 0 --bf16`):
| Variation | Train<br>samples<br>per<br>second | Diff<br>% | Train<br>loss |
|:---------------------------------|------------------------------------:|------------:|----------------:|
| --gradient_accumulation_steps 1 | 228.41 | 0 | 2.22 |
| --gradient_accumulation_steps 2 | 248.77 | 9 | 2.29 |
| --gradient_accumulation_steps 4 | 263.54 | 15 | 2.42 |
| --gradient_accumulation_steps 8 | 270.12 | 18 | 2.62 |
| --gradient_accumulation_steps 16 | 271.99 | 19 | 2.89 |
Conclusions:
* that's a significant speed up even for 4 steps
* at 16 the impact is almost negligible over 8
* the loss gets much bigger with the higher accumulation steps - my benchmark is very short and with fewer steps to take when the batches are larger, the model simply doesn't have a chance to step down far enough. The same can be observed with just [normal batch size changes](https://github.com/huggingface/transformers/issues/15026#issuecomment-1005033957).
The non-zero lr warmup also plays a role here since it's a very short run.
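To make the "fewer optimizer steps" point concrete, here is the illustrative arithmetic for this 20k-sample, bs=32 run:
```python
# optimizer steps shrink as the effective batch grows
train_samples, per_device_bs = 20_000, 32
for accum in (1, 2, 4, 8, 16):
    effective_bs = per_device_bs * accum
    print(f"accum={accum:>2}  effective_bs={effective_bs:>4}  optimizer_steps={train_samples // effective_bs}")
```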
```
1.
CUDA_VISIBLE_DEVICES=3 python ./scripts/benchmark/trainer-benchmark.py \
--base-cmd \
' examples/pytorch/translation/run_translation.py --model_name_or_path t5-base \
--output_dir output_dir --do_train --label_smoothing 0.1 --logging_strategy no \
--save_strategy no --per_device_train_batch_size 32 --max_source_length 512 \
--max_target_length 512 --num_train_epochs 1 --overwrite_output_dir \
--source_lang en --target_lang ro --dataset_name wmt16 --dataset_config "ro-en" \
--source_prefix "translate English to Romanian: " --warmup_steps 50 \
--max_train_samples 20000 --dataloader_num_workers 2 ' \
--target-metric-key train_samples_per_second --repeat-times 1 --variations \
'--gradient_accumulation_steps 1|--gradient_accumulation_steps 2|--gradient_accumulation_steps 4|--gradient_accumulation_steps 8|--gradient_accumulation_steps 16' \
'--tf32 0|--tf32 1|--tf32 0 --fp16|--tf32 0 --bf16' --report-metric-keys \
train_loss --repeat-times 1
2.
CUDA_VISIBLE_DEVICES=3 python ./scripts/benchmark/trainer-benchmark.py \
--base-cmd \
' examples/pytorch/translation/run_translation.py --model_name_or_path t5-base \
--output_dir output_dir --do_train --label_smoothing 0.1 --logging_strategy no \
--save_strategy no --per_device_train_batch_size 32 --max_source_length 512 \
--max_target_length 512 --num_train_epochs 1 --overwrite_output_dir \
--source_lang en --target_lang ro --dataset_name wmt16 --dataset_config "ro-en" \
--source_prefix "translate English to Romanian: " --warmup_steps 50 \
--max_train_samples 20000 --dataloader_num_workers 2 --tf32 0 --bf16 ' \
--target-metric-key train_samples_per_second --repeat-times 1 --variations \
'--gradient_accumulation_steps 1|--gradient_accumulation_steps 2|--gradient_accumulation_steps 4|--gradient_accumulation_steps 8|--gradient_accumulation_steps 16' \
--report-metric-keys train_loss --repeat-times 1
```
<|||||># batch size
| Variation | Train<br>samples<br>per<br>second | Diff<br>% | Train<br>loss |
|:---------------------------------|------------------------------------:|------------:|----------------:|
| --per_device_train_batch_size 1 | 7.77 | 0 | 1.90 |
| --per_device_train_batch_size 2 | 15.51 | 100 | 2.01 |
| --per_device_train_batch_size 4 | 29.66 | 282 | 2.09 |
| --per_device_train_batch_size 8 | 61.16 | 687 | 2.16 |
| --per_device_train_batch_size 16 | 115.84 | 1392 | 2.25 |
| --per_device_train_batch_size 32 | 224.96 | 2797 | 2.38 |
Conclusions:
- No surprise here: the speed is directly proportional to the GPU capacity utilization. In this particular configuration BS=32 is the highest BS we can fit. So when we use BS=1 we greatly underutilize the GPU. The speed up is almost directly proportional to the batch size.
- as with [gradient accumulation steps](https://github.com/huggingface/transformers/issues/15026#issuecomment-1004592231) the lm loss gets worse with the increase in the batch size because my benchmark is very short and with fewer steps to take when the batches are larger, the model simply doesn't have a chance to step down far enough.
```
CUDA_VISIBLE_DEVICES=3 python ./scripts/benchmark/trainer-benchmark.py \
--base-cmd \
' examples/pytorch/translation/run_translation.py --model_name_or_path t5-base \
--output_dir output_dir --do_train --label_smoothing 0.1 --logging_strategy no \
--save_strategy no --max_source_length 512 \
--max_target_length 512 --num_train_epochs 1 --overwrite_output_dir \
--source_lang en --target_lang ro --dataset_name wmt16 --dataset_config "ro-en" \
--source_prefix "translate English to Romanian: " --warmup_steps 50 \
--max_train_samples 5000 --dataloader_num_workers 2 --bf16' \
--target-metric-key train_samples_per_second --repeat-times 1 --variations \
'--per_device_train_batch_size 1|--per_device_train_batch_size 2|--per_device_train_batch_size 4|--per_device_train_batch_size 8|--per_device_train_batch_size 16|--per_device_train_batch_size 32' \
--report-metric-keys train_loss --repeat-times 1
```
<|||||># gradient checkpointing
| Variation | Train<br>samples<br>per<br>second | Diff<br>% | Train<br>loss |
|:---------------------------|------------------------------------:|------------:|----------------:|
| --gradient_checkpointing 0 | 225.67 | 24 | 2.30 |
| --gradient_checkpointing 1 | 182.68 | 0 | 2.30 |
Conclusions:
- as expected, since gradient checkpointing recomputes the forward activations it should be slower - here the run without checkpointing is ~24% faster.
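For reference, a minimal sketch of what activation checkpointing does (drop the stored activations and recompute them during backward), using a toy model rather than the actual T5 internals:
```python
import torch
from torch.utils.checkpoint import checkpoint

layer1 = torch.nn.Linear(128, 128)
layer2 = torch.nn.Linear(128, 2)
x = torch.randn(4, 128, requires_grad=True)

# layer1's activations are not kept; they are recomputed during the backward pass,
# trading extra compute for lower peak memory
h = checkpoint(layer1, x)
loss = layer2(h).sum()
loss.backward()
```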
Let's look at memory:
| Variation | Train<br>samples<br>per<br>second | Diff<br>% | Train<br>loss | Train<br>mem<br>gpu<br>alloc<br>delta | Train<br>mem<br>gpu<br>peaked<br>delta |
|:---------------------------|------------------------------------:|------------:|----------------:|----------------------------------------:|-----------------------------------------:|
| --gradient_checkpointing 0 | 122.81 | 35 | 2.38 | 2739MB | 1155MB |
| --gradient_checkpointing 1 | 90.92 | 0 | 2.38 | 2697MB | 3229MB |
We can clearly see that peak GPU memory is ~2/3 less.
note: I had to halve the BS in the 2nd benchmark as I was getting OOM.
```
CUDA_VISIBLE_DEVICES=3 python ./scripts/benchmark/trainer-benchmark.py \
--base-cmd \
' examples/pytorch/translation/run_translation.py --model_name_or_path t5-base \
--output_dir output_dir --do_train --label_smoothing 0.1 --logging_strategy no \
--save_strategy no --per_device_train_batch_size 32 --max_source_length 512 \
--max_target_length 512 --num_train_epochs 1 --overwrite_output_dir \
--source_lang en --target_lang ro --dataset_name wmt16 --dataset_config "ro-en" \
--source_prefix "translate English to Romanian: " --warmup_steps 50 \
--max_train_samples 10000 --dataloader_num_workers 2 --bf16' \
--target-metric-key train_samples_per_second --repeat-times 1 --variations \
'--gradient_checkpointing 0|--gradient_checkpointing 1' --report-metric-keys \
train_loss --repeat-times 1
CUDA_VISIBLE_DEVICES=3 python ./scripts/benchmark/trainer-benchmark.py \
--base-cmd \
' examples/pytorch/translation/run_translation.py --model_name_or_path t5-base \
--output_dir output_dir --do_train --label_smoothing 0.1 --logging_strategy no \
--save_strategy no --per_device_train_batch_size 32 --max_source_length 512 \
--max_target_length 512 --num_train_epochs 1 --overwrite_output_dir \
--source_lang en --target_lang ro --dataset_name wmt16 --dataset_config "ro-en" \
--source_prefix "translate English to Romanian: " --warmup_steps 50 \
--max_train_samples 5000 --dataloader_num_workers 2 --bf16 --skip_memory_metrics 0' \
--target-metric-key train_samples_per_second --repeat-times 1 --variations \
'--gradient_checkpointing 0|--gradient_checkpointing 1' --report-metric-keys \
'train_loss train_mem_gpu_alloc_delta train_mem_gpu_peaked_delta' \
--repeat-times 1
```
<|||||># optimizers
Let's do fp32 first:
| Variation | Train<br>samples<br>per<br>second | Diff<br>% | Train<br>loss |
|:------------------------|------------------------------------:|------------:|----------------:|
| --optim adamw_hf | 214.55 | 2 | 2.21 |
| --optim adamw_torch | 209.72 | 0 | 2.21 |
| --optim adafactor | 158.56 | -24 | 2.21 |
| --optim adamw_apex_fused | 227.96 | 9 | 2.21 |
Observations:
- apex's FusedAdam is the fastest.
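For reference, a rough sketch of how these four options map to actual optimizer classes when constructed by hand (the Trainer selects them via `--optim`; the exact constructor arguments it uses may differ, and the apex import only works if apex is installed):
```python
import torch
from transformers import AdamW, Adafactor

params = [torch.nn.Parameter(torch.randn(10, 10))]

opt_adamw_hf = AdamW(params, lr=3e-5)                  # --optim adamw_hf
opt_adamw_torch = torch.optim.AdamW(params, lr=3e-5)   # --optim adamw_torch
opt_adafactor = Adafactor(
    params, lr=3e-5, relative_step=False, scale_parameter=False
)                                                      # --optim adafactor

# --optim adamw_apex_fused (requires NVIDIA apex)
# from apex.optimizers import FusedAdam
# opt_fused = FusedAdam(params, lr=3e-5)
```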
fp16:
| Variation | Train<br>samples<br>per<br>second | Diff<br>% | Train<br>loss |
|:------------------------|------------------------------------:|------------:|----------------:|
| --optim adamw_hf | 221.08 | 5 | 2.21 |
| --optim adamw_torch | 209.85 | 0 | 2.21 |
| --optim adafactor | 160.69 | -23 | 2.21 |
| --optim adamw_apex_fused | 231.71 | 10 | 2.21 |
bf16:
| Variation | Train<br>samples<br>per<br>second | Diff<br>% | Train<br>loss |
|:------------------------|------------------------------------:|------------:|----------------:|
| --optim adamw_hf | 221.28 | 4 | 2.22 |
| --optim adamw_torch | 212.83 | 0 | 2.22 |
| --optim adafactor | 164.21 | -23 | 2.22 |
| --optim adamw_apex_fused | 237.31 | 12 | 2.22 |
Observations:
- The relative speed up is similar
```
# fp32
CUDA_VISIBLE_DEVICES=3 python scripts/benchmark/trainer-benchmark.py --base-cmd \
' \
examples/pytorch/translation/run_translation.py --model_name_or_path t5-base --output_dir output_dir \
--do_train --label_smoothing 0.1 --logging_strategy no --save_strategy no --per_device_train_batch_size 32 \
--max_source_length 512 --max_target_length 512 --num_train_epochs 1 --overwrite_output_dir \
--source_lang en --target_lang ro --dataset_name wmt16 --dataset_config "ro-en" \
--source_prefix "translate English to Romanian: " --warmup_steps 50 \
--max_train_samples 20000 --dataloader_num_workers 2 \
' \
--target-metric-key train_samples_per_second --repeat-times 1 --variations \
'--optim adamw_hf|--optim adamw_torch|--optim adafactor|--optim adamw_apex_fused' \
--report-metric-keys train_loss --base-variation '--optim adamw_torch'
# fp16 - just add --fp16 to base-cmd
# bf16 - just add --bf16 to base-cmd
```<|||||># combining winning strategies
Now let's combine the winning strategies from each individual benchmark above and compare with the baseline:
| Variation | Train<br>samples<br>per<br>second | Diff<br>% | Train<br>loss |
|:----------------------------------------------------------------------|------------------------------------:|------------:|----------------:|
| --optim adamw_torch --gradient_accumulation_steps 1 --tf32 0 | 92.15 | 0 | 2.21 |
| --optim adamw_apex_fused --gradient_accumulation_steps 8 --tf32 --bf16 | 267.21 | 190 | 2.62 |
**Getting an almost 3x improvement in speed!**
```
CUDA_VISIBLE_DEVICES=3 python \
../transformers-stas/scripts/benchmark/trainer-benchmark.py --base-cmd \
' \
examples/pytorch/translation/run_translation.py --model_name_or_path t5-base --output_dir output_dir \
--do_train --label_smoothing 0.1 --logging_strategy no --save_strategy no --per_device_train_batch_size 32 \
--max_source_length 512 --max_target_length 512 --num_train_epochs 1 --overwrite_output_dir \
--source_lang en --target_lang ro --dataset_name wmt16 --dataset_config "ro-en" \
--source_prefix "translate English to Romanian: " --warmup_steps 50 \
--max_train_samples 20000 --dataloader_num_workers 2 \
' \
--target-metric-key train_samples_per_second --repeat-times 1 --variations \
'--optim adamw_torch --gradient_accumulation_steps 1 --tf32 0|--optim adamw_apex_fused --gradient_accumulation_steps 8 --tf32 --bf16' \
--report-metric-keys train_loss --base-variation '--optim adamw_torch'
```<|||||># RTX-3090 vs A100
In all the benchmarks above I was making the batch size bigger and running more samples compared to the same [RTX-3090 benchmarks](https://github.com/huggingface/transformers/issues/14608), as the A100 40GB card can handle more than the RTX-3090 24GB, but let's now compare the two using the same config. So we will have the RTX-3090 fully loaded, but the A100 only partially loaded.
Also each card is running on a different machine, so there is a bit of hardware difference as well.
**A100**
| Variation | Train<br>samples<br>per<br>second | Diff<br>% | Train<br>loss |
|:----------------------------------------------------------------------|------------------------------------:|------------:|----------------:|
| --optim adamw_torch --gradient_accumulation_steps 1 --tf32 0 | 85.99 | 0 | 2.16 |
| --optim adamw_apex_fused --gradient_accumulation_steps 8 --tf32 --bf16 | 153.72 | 79 | 2.42 |
**RTX-3090**
| Variation | Train<br>samples<br>per<br>second | Diff<br>% | Train<br>loss |
|:----------------------------------------------------------------------|------------------------------------:|------------:|----------------:|
| --optim adamw_torch --gradient_accumulation_steps 1 --tf32 0 | 88.94 | 0 | 2.16 |
| --optim adamw_apex_fused --gradient_accumulation_steps 8 --tf32 --bf16 | 173.15 | 95 | 2.42 |
Observations:
- This is very unexpected: the RTX-3090 appears to be faster when we use only 1/2 of the A100's capacity in order to match the two setups.
- as the machines aren't the same it'd be good to find a machine that has both cards and test them on equal hardware.
Same software was used for both setups:
```
transformers: 4.16.0.dev0
torch : 1.10.1
cuda : 11.3
python : 3.8.12
```
```
CUDA_VISIBLE_DEVICES=0 python \
/hf/transformers-trainer-benchmark/scripts/benchmark/trainer-benchmark.py \
--base-cmd \
' \
examples/pytorch/translation/run_translation.py --model_name_or_path t5-base --output_dir output_dir \
--do_train --label_smoothing 0.1 --logging_strategy no --save_strategy no --per_device_train_batch_size 16 \
--max_source_length 512 --max_target_length 512 --num_train_epochs 1 --overwrite_output_dir \
--source_lang en --target_lang ro --dataset_name wmt16 --dataset_config "ro-en" \
--source_prefix "translate English to Romanian: " --warmup_steps 50 \
--max_train_samples 20000 --dataloader_num_workers 2 \
' \
--target-metric-key train_samples_per_second --repeat-times 1 --variations \
'--optim adamw_torch --gradient_accumulation_steps 1 --tf32 0|--optim adamw_apex_fused --gradient_accumulation_steps 8 --tf32 --bf16' \
--report-metric-keys train_loss
```
I thought that perhaps this had to do with bf16, so I re-did the same with `--fp16` instead of `--bf16`, but the outcome is similar: the RTX-3090 appears to be faster on the same benchmark:
**A100**
| Variation | Train<br>samples<br>per<br>second | Diff<br>% | Train<br>loss |
|:----------------------------------------------------------------------|------------------------------------:|------------:|----------------:|
| --optim adamw_torch --gradient_accumulation_steps 1 --tf32 0 | 85.89 | 0 | 2.16 |
| --optim adamw_apex_fused --gradient_accumulation_steps 8 --tf32 --fp16 | 144.20 | 68 | 2.39 |
**RTX-3090**
| Variation | Train<br>samples<br>per<br>second | Diff<br>% | Train<br>loss |
|:----------------------------------------------------------------------|------------------------------------:|------------:|----------------:|
| --optim adamw_torch --gradient_accumulation_steps 1 --tf32 0 | 88.95 | 0 | 2.16 |
| --optim adamw_apex_fused --gradient_accumulation_steps 8 --tf32 --fp16 | 168.28 | 89 | 2.39 |
Still not good for A100. Let's try w/o tf32:
**A100**
| Variation | Train<br>samples<br>per<br>second | Diff<br>% | Train<br>loss |
|:------------------------------------------------------------------------|------------------------------------:|------------:|----------------:|
| --optim adamw_torch --gradient_accumulation_steps 1 --tf32 0 | 86.35 | 0 | 2.16 |
| --optim adamw_apex_fused --gradient_accumulation_steps 8 --tf32 0 --fp16 | 156.16 | 81 | 2.39 |
**RTX-3090**
| Variation | Train<br>samples<br>per<br>second | Diff<br>% | Train<br>loss |
|:------------------------------------------------------------------------|------------------------------------:|------------:|----------------:|
| --optim adamw_torch --gradient_accumulation_steps 1 --tf32 0 | 88.87 | 0 | 2.16 |
| --optim adamw_apex_fused --gradient_accumulation_steps 8 --tf32 0 --fp16 | 167.36 | 88 | 2.39 |
This is better for A100. So tf32 was making things worse here for some reason.
Eddie Yan explained the reason for RTX-3090 being faster:
> **When there is underutilization on GPUs of similar architectures, it may come down to clock rates, and 3090 does have faster peak clock rates than A100**.
<|||||>@stas00 Another interesting issue that I found regarding batch size is that it is an important parameter when the model is mostly in fp32 and relies on autocast to dispatch to fp16 or bf16. I believe this is because the overhead of casting back and forth can dominate the total runtime compared to the actual kernel/operator.
Consider the following microbenchmark:
```
import time
import torch
from torch.cuda.amp import autocast

def bench(dtype, dim, shape, auto=True):
    linear = torch.nn.Linear(shape[-1], dim, device='cuda')
    inp = torch.randn(shape, device='cuda')
    ctx_manager = None
    if not auto:
        inp = inp.to(dtype)
        linear = linear.to(dtype)
    else:
        ctx_manager = autocast(dtype=dtype)

    def run(inp, layer, ctx_manager):
        if ctx_manager is not None:
            with ctx_manager:
                layer(inp)
        else:
            layer(inp)

    run(inp, linear, ctx_manager)
    torch.cuda.synchronize()
    t1 = time.time()
    for i in range(1000):
        run(inp, linear, ctx_manager)
    torch.cuda.synchronize()
    t2 = time.time()
    return t2 - t1

if __name__ == '__main__':
    hidden_dim = 1024
    for auto in (True, False):
        print(f"autocast: {auto}")
        for batch_size in (8, 16, 32, 64):
            shape = [batch_size, 56, hidden_dim]
            times = list()
            for dtype in (torch.float32, torch.float16, torch.bfloat16):
                times.append(bench(dtype, hidden_dim, shape, auto))
            print(f"batch_size: {batch_size} fp32 {times[0]:3f} fp16 {times[1]:3f} bf16 {times[2]:3f}")
            print(f"speedup: fp32 {(times[0]/times[0]):3f} fp16 {(times[0]/times[1]):3f} bf16 {(times[0]/times[2]):3f}")
```
I get the following times on A6000 (similar architecture to 3090):
```
autocast: True
batch_size: 8 fp32 0.040668 fp16 0.061515 bf16 0.060857
speedup: fp32 1.000000 fp16 0.661102 bf16 0.668253
batch_size: 16 fp32 0.050241 fp16 0.061965 bf16 0.061339
speedup: fp32 1.000000 fp16 0.810793 bf16 0.819065
batch_size: 32 fp32 0.109936 fp16 0.089657 bf16 0.091546
speedup: fp32 1.000000 fp16 1.226184 bf16 1.200876
batch_size: 64 fp32 0.189083 fp16 0.150391 bf16 0.150227
speedup: fp32 1.000000 fp16 1.257275 bf16 1.258648
autocast: False
batch_size: 8 fp32 0.038590 fp16 0.031647 bf16 0.030893
speedup: fp32 1.000000 fp16 1.219398 bf16 1.249145
batch_size: 16 fp32 0.049446 fp16 0.032320 bf16 0.031509
speedup: fp32 1.000000 fp16 1.529887 bf16 1.569281
batch_size: 32 fp32 0.111689 fp16 0.056600 bf16 0.060192
speedup: fp32 1.000000 fp16 1.973323 bf16 1.855555
batch_size: 64 fp32 0.190082 fp16 0.103095 bf16 0.104311
speedup: fp32 1.000000 fp16 1.843766 bf16 1.822261
``` <|||||>Thank you, @eqy.
update: edited out the original note on casting back, since the explicit casting is not being measured
Added a nicely formatted table output so it's much easier to analyze. Updated script attached: [bench.txt](https://github.com/huggingface/transformers/files/7873072/bench.txt)
On RTX-3090 I get:
Autocast: True
Results:
| bs | torch.float32 | torch.float16 | torch.bfloat16 |
|-----:|----------------:|----------------:|-----------------:|
| 8 | 0.057 | 0.070 | 0.070 |
| 16 | 0.082 | 0.082 | 0.070 |
| 32 | 0.169 | 0.103 | 0.119 |
| 64 | 0.267 | 0.190 | 0.191 |
Speedup:
| bs | torch.float32 | torch.float16 | torch.bfloat16 |
|-----:|----------------:|----------------:|-----------------:|
| 8 | 1.000 | 0.810 | 0.818 |
| 16 | 1.000 | 0.997 | 1.179 |
| 32 | 1.000 | 1.639 | 1.421 |
| 64 | 1.000 | 1.411 | 1.398 |
Autocast: False
Results:
| bs | torch.float32 | torch.float16 | torch.bfloat16 |
|-----:|----------------:|----------------:|-----------------:|
| 8 | 0.052 | 0.040 | 0.040 |
| 16 | 0.082 | 0.045 | 0.045 |
| 32 | 0.170 | 0.073 | 0.090 |
| 64 | 0.268 | 0.143 | 0.148 |
Speedup:
| bs | torch.float32 | torch.float16 | torch.bfloat16 |
|-----:|----------------:|----------------:|-----------------:|
| 8 | 1.000 | 1.320 | 1.306 |
| 16 | 1.000 | 1.849 | 1.820 |
| 32 | 1.000 | 2.338 | 1.895 |
| 64 | 1.000 | 1.866 | 1.814 |
<|||||>I believe there are "speed-of-light" cases where the cast-back wouldn't be necessary, though this may not be possible for the architectures we're interested in. Here, I think the big picture is that once the batch-size falls below a certain amount, the "building-block" operations like GEMMs will be slower in reduced precision vs. fp32 when casts are needed.<|||||>why do you think bs=32 is an oddball relative to other bs for speedup? in both cases w/ and w/o amp it's relatively faster for bf16 and fp16 than bs=64, and much more significantly for fp16. One would expect 8 < 16 < 32 < 64, but here it is 8 < 16 < 64 < 32.
so actual results are proportionally in line, but the speed ups aren't.<|||||>That's interesting, I didn't see quite so dramatic results on an A100 (80GB), 2 runs:
## Autocast: True
Results:
| bs | torch.float32 | torch.float16 | torch.bfloat16 |
|-----:|----------------:|----------------:|-----------------:|
| 8 | 0.062 | 0.086 | 0.077 |
| 16 | 0.043 | 0.089 | 0.085 |
| 32 | 0.073 | 0.084 | 0.084 |
| 64 | 0.119 | 0.101 | 0.112 |
Speedup:
| bs | torch.float32 | torch.float16 | torch.bfloat16 |
|-----:|----------------:|----------------:|-----------------:|
| 8 | 1.000 | 0.714 | 0.805 |
| 16 | 1.000 | 0.486 | 0.508 |
| 32 | 1.000 | 0.865 | 0.871 |
| 64 | 1.000 | 1.181 | 1.058 |
## Autocast: False
Results:
| bs | torch.float32 | torch.float16 | torch.bfloat16 |
|-----:|----------------:|----------------:|-----------------:|
| 8 | 0.045 | 0.049 | 0.040 |
| 16 | 0.041 | 0.048 | 0.047 |
| 32 | 0.073 | 0.046 | 0.044 |
| 64 | 0.120 | 0.063 | 0.076 |
Speedup:
| bs | torch.float32 | torch.float16 | torch.bfloat16 |
|-----:|----------------:|----------------:|-----------------:|
| 8 | 1.000 | 0.908 | 1.129 |
| 16 | 1.000 | 0.855 | 0.873 |
| 32 | 1.000 | 1.570 | 1.638 |
| 64 | 1.000 | 1.913 | 1.580 |
## Autocast: True
Results:
| bs | torch.float32 | torch.float16 | torch.bfloat16 |
|-----:|----------------:|----------------:|-----------------:|
| 8 | 0.062 | 0.086 | 0.077 |
| 16 | 0.059 | 0.089 | 0.085 |
| 32 | 0.073 | 0.084 | 0.084 |
| 64 | 0.119 | 0.101 | 0.114 |
Speedup:
| bs | torch.float32 | torch.float16 | torch.bfloat16 |
|-----:|----------------:|----------------:|-----------------:|
| 8 | 1.000 | 0.720 | 0.802 |
| 16 | 1.000 | 0.660 | 0.691 |
| 32 | 1.000 | 0.871 | 0.866 |
| 64 | 1.000 | 1.182 | 1.051 |
## Autocast: False
Results:
| bs | torch.float32 | torch.float16 | torch.bfloat16 |
|-----:|----------------:|----------------:|-----------------:|
| 8 | 0.044 | 0.048 | 0.041 |
| 16 | 0.041 | 0.047 | 0.048 |
| 32 | 0.073 | 0.045 | 0.047 |
| 64 | 0.120 | 0.063 | 0.077 |
Speedup:
| bs | torch.float32 | torch.float16 | torch.bfloat16 |
|-----:|----------------:|----------------:|-----------------:|
| 8 | 1.000 | 0.929 | 1.094 |
| 16 | 1.000 | 0.883 | 0.861 |
| 32 | 1.000 | 1.615 | 1.541 |
| 64 | 1.000 | 1.907 | 1.562 |
<|||||>> # 🖥 Benchmarking `transformers` w/ HF Trainer on A100 40GB
> We are going to use a special benchmarking tool that will do all the work for us. #14934
>
> This is the index post and specific benchmarks are in their own posts below:
>
> 1. [fp16 vs bf16 vs tf32 vs fp32](https://github.com/huggingface/transformers/issues/15026#issuecomment-1004543189)
> 2. [gradient accumulation steps](https://github.com/huggingface/transformers/issues/15026#issuecomment-1004592231)
> 3. [batch size](https://github.com/huggingface/transformers/issues/15026#issuecomment-1005033957)
> 4. [gradient checkpointing](https://github.com/huggingface/transformers/issues/15026#issuecomment-1005034578)
> 5. [optimizers](https://github.com/huggingface/transformers/issues/15026#issuecomment-1005220263)
> 6. [combining winning strategies](https://github.com/huggingface/transformers/issues/15026#issuecomment-1005227577) **~3x speed improvement!**
> 7. [RTX-3090 vs A100](https://github.com/huggingface/transformers/issues/15026#issuecomment-1005235845)
>
> Note that each benchmark was run only once, so multiple runs and averaging is probably going to give slightly different results. The purpose here though is to see relative differences roughly and not try to give an exact number.
>
> See also the [same benchmarks for RTX-3090](https://github.com/huggingface/transformers/issues/14608)
Is all benchmarking done on A100 ("NVIDIA_TESLA_A100") single GPU? Can you also include CUDA memory required Vs Data points for training Vs No. of GPU's. <|||||>> Is all benchmarking done on A100 ("NVIDIA_TESLA_A100") single GPU?
Yes.
> Can you also include CUDA memory required Vs Data points for training Vs No. of GPU's.
I don't understand your question.<|||||>> > Is all benchmarking done on A100 ("NVIDIA_TESLA_A100") single GPU?
>
> Yes.
>
> > Can you also include CUDA memory required Vs Data points for training Vs No. of GPU's.
>
> I don't understand your question.
On how many data points and epochs is it benchmarked on with Single GPU?<|||||>> > > Is all benchmarking done on A100 ("NVIDIA_TESLA_A100") single GPU?
> >
> >
> > Yes.
> > > Can you also include CUDA memory required Vs Data points for training Vs No. of GPU's.
> >
> >
> > I don't understand your question.
>
> On how many data points and epochs is it benchmarked on with Single GPU?
I get error with 4 GPU's, 20 epochs on A100 with 700000 data points
`python -m torch.distributed.launch --nproc_per_node 4 train.py --gradient_accumulation_steps 8 --per_device_train_batch_size 8 --optim adamw_hf --tf32 --bf16"])`
`'Traceback (most recent call last):\n', ' File "train.py", line 160, in <module>\n trainer.train()\n', ' File "/opt/conda/lib/python3.7/site-packages/transformers/trainer.py", line 1398, in train\n tr_loss_step = self.training_step(model, inputs)\n', ' File "/opt/conda/lib/python3.7/site-packages/transformers/trainer.py", line 1994, in training_step\n self.scaler.scale(loss).backward()\n', ' File "/opt/conda/lib/python3.7/site-packages/torch/_tensor.py", line 363, in backward\n torch.autograd.backward(self, gradient, retain_graph, create_graph, inputs=inputs)\n', ' File "/opt/conda/lib/python3.7/site-packages/torch/autograd/__init__.py", line 175, in backward\n allow_unreachable=True, accumulate_grad=True) # Calls into the C++ engine to run the backward pass\n', 'RuntimeError: CUDA out of memory. Tried to allocate 3.82 GiB (GPU 0; 39.59 GiB total capacity; 17.07 GiB already allocated; 2.75 GiB free; 21.43 GiB reserved in total by PyTorch) If reserved memory is >> allocated memory try setting max_split_size_mb to avoid fragmentation. See documentation for Memory Management and PYTORCH_CUDA_ALLOC_CONF\n'`<|||||>> On how many data points and epochs is it benchmarked on with Single GPU?
It's defined by `--max_train_samples`
To your last OOM comment - please let's not derail this Benchmark Issue. If you want to discuss an unrelated question please open a new issue. Best to delete it from here and post in another Issue. Thank you. |
transformers | 15,025 | closed | Which model checkpoint should be selected for evaluation? | According to this sample:
https://github.com/huggingface/transformers/blob/master/examples/pytorch/summarization/run_summarization.py
I get many checkpoints.

Which should be selected for testing | 01-04-2022 05:35:21 | 01-04-2022 05:35:21 | This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored. |
transformers | 15,024 | closed | Training causal language models from scratch without grouping independent data samples into blocks | Trying to train a causal language model from scratch in Pytorch using run_clm_no_trainer script.
My dataset has independent sentences separated by newlines.
The current script groups these sentences (which are image captions) during training, which creates erroneous results when sampling for generation.
Currently I'm trying to avoid this by adding a period mark between these sentences before training and, at inference, splitting the generated result at the period. This is more of a hack and has obvious shortcomings.
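(For context, the grouping step in the script conceptually looks like the minimal sketch below - all tokenized examples are concatenated and re-sliced into fixed-size blocks, so a block can mix tokens from different captions; the toy data here is made up:)
```python
# minimal sketch of the grouping behaviour, assuming already-tokenized examples
block_size = 8
examples = {"input_ids": [[1, 2, 3], [4, 5], [6, 7, 8, 9, 10, 11]]}

def group_texts(examples):
    concatenated = sum(examples["input_ids"], [])   # captions get glued together
    total_length = (len(concatenated) // block_size) * block_size
    return {"input_ids": [concatenated[i : i + block_size] for i in range(0, total_length, block_size)]}

print(group_texts(examples))
```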
Is there a way to avoid grouping of text while training. I've gone through the script and I may be missing something really obvious. | 01-04-2022 05:28:53 | 01-04-2022 05:28:53 | This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored. |
transformers | 15,023 | closed | [doc] normalize HF Transformers string | I noticed we use 🤗 Transformer instead of 🤗 Transformers in a couple of places, this PR fixes it.
That was a tricky one with UTF32 emoji. This worked.
```
LC_ALL=C find . -type d -name ".git" -prune -o -type f -exec perl -pi -e 's|(\xF0\x9F\xA4\x97 Transformer) |$1s |' {} \;
```
To grep I had to do:
```
LC_ALL=C grep -Ir -P '\xF0\x9F\xA4\x97 Transformer ' .
```
@sgugger
| 01-04-2022 04:36:19 | 01-04-2022 04:36:19 | |
transformers | 15,022 | closed | Enable AMP for xla:gpu device in trainer class | # What does this PR do?
This PR enables AMP in trainer class for xla:gpu device.
# Discussion
It looks like the torch_xla support in trainer class is primarily for xla:tpu device.
I found the following features may be useful but not essential and I can include them in this PR if necessary:
1. Rename `tpu` to `xla` in the codebase.
2. Currently xla device is always turned on when torch_xla is installed. It may be useful to allow users to optionally turn it off without uninstalling torch_xla.
3. Currently users need to set `GPU_NUM_DEVICES` manually when using xla:gpu device. It may be useful to set a default value for it when torch_xla and cuda devices are available.
<!-- Remove if not applicable --> | 01-04-2022 04:19:15 | 01-04-2022 04:19:15 | Oh, interesting! Thanks for your contribution, pinging @sgugger on the issue.<|||||>> I'm not entirely sure about this PR in the sense that PyTorch XLA support is mainly for TPU, and I don't know if traditional mixed precision training with the gradient scaler will work on TPUs.
>
> So we should probably split the test to detect if we have GPUs available or TPUs. Some of the logic will stay common between the two, but the mixed precision part might only work for XLA GPUs?
Right, XLA:TPU does not support AMP and only XLA:GPU supports it.<|||||>> Right, XLA:TPU does not support AMP and only XLA:GPU supports it.
So as I said in my previous comment, could you add a new test `is_gpu_xla_available` and use this one for the part where you add grad scalers? Otherwise the changes will make the Trainer stop working on TPU.<|||||>@sgugger Maybe I'm missing something. Could you elaborate why the changes will make the Trainer stop working on TPU? The code inside `self.do_grad_scaling` is unreachable if running on TPU since either `--fp16` or `--bf16` option will raise error on TPU. I tested the following training script on TPU:
```bash
python run_mlm.py \
--model_name_or_path bert-base-uncased \
--dataset_name wikitext \
--dataset_config_name wikitext-2-raw-v1 \
--overwrite_output_dir true \
--output_dir /tmp/test-mlm \
--per_device_train_batch_size 10 \
--do_eval \
--do_train
```
With master branch:
```bash
WARNING:root:TPU has started up successfully with version pytorch-1.9
WARNING:__main__:Process rank: -1, device: xla:1, n_gpu: 0distributed training: False, 16-bits training: False
...
***** train metrics *****
epoch = 3.0
train_loss = 1.7568
train_runtime = 0:12:23.47
train_samples = 4627
train_samples_per_second = 18.67
train_steps_per_second = 1.868
```
With this PR:
```bash
WARNING:root:TPU has started up successfully with version pytorch-1.9
WARNING:__main__:Process rank: -1, device: xla:1, n_gpu: 0distributed training: False, 16-bits training: False
...
***** train metrics *****
epoch = 3.0
train_loss = 1.7577
train_runtime = 0:10:19.70
train_samples = 4627
train_samples_per_second = 22.399
train_steps_per_second = 2.241
```<|||||>Ah yes, you're right. Thanks for testing! |
transformers | 15,021 | closed | AutoTokenizer unable to load pre-trained bert-base-uncased tokenizer | Hello! I am running the following code to load the bert-base-uncased tokenizer:
```py
from transformers import AutoTokenizer
tokenizer = AutoTokenizer.from_pretrained('bert-base-uncased')
```
which results in the following error:
```
---------------------------------------------------------------------------
ValueError Traceback (most recent call last)
<ipython-input-6-ff60bf5b2785> in <module>
----> 1 tokenizer = AutoTokenizer.from_pretrained('bert-base-uncased')
~/opt/miniconda3/lib/python3.8/site-packages/transformers/models/auto/tokenization_auto.py in from_pretrained(cls, pretrained_model_name_or_path, *inputs, **kwargs)
548 tokenizer_class_py, tokenizer_class_fast = TOKENIZER_MAPPING[type(config)]
549 if tokenizer_class_fast and (use_fast or tokenizer_class_py is None):
--> 550 return tokenizer_class_fast.from_pretrained(pretrained_model_name_or_path, *inputs, **kwargs)
551 else:
552 if tokenizer_class_py is not None:
~/opt/miniconda3/lib/python3.8/site-packages/transformers/tokenization_utils_base.py in from_pretrained(cls, pretrained_model_name_or_path, *init_inputs, **kwargs)
1639 if os.path.isfile(pretrained_model_name_or_path) or is_remote_url(pretrained_model_name_or_path):
1640 if len(cls.vocab_files_names) > 1:
-> 1641 raise ValueError(
1642 f"Calling {cls.__name__}.from_pretrained() with the path to a single file or url is not "
1643 "supported for this tokenizer. Use a model identifier or the path to a directory instead."
ValueError: Calling BertTokenizerFast.from_pretrained() with the path to a single file or url is not supported for this tokenizer. Use a model identifier or the path to a directory instead.
```
The same error does not occur when I use `bert-large-uncased` or any other model, which is very weird!
System/Package Spec:
```
OS: macOS Big Sur
transformers: 4.15.0
torch: 1.10.0 (cpu)
python: 3.8.3
```
| 01-04-2022 02:12:37 | 01-04-2022 02:12:37 | do you have a folder named `bert-large-uncased` where you're running this code?<|||||>OMG! I just realized there's a file I downloaded long ago in this directory that's named `bert-base-uncased` wow is this embarrassing! |
transformers | 15,020 | closed | [Trainer] finetuning: larger batch-size leading to a worse train loss | I was just running a [benchmark to compare the speed up of enabling various `--gradient_accumulation_steps` levels](https://github.com/huggingface/transformers/issues/14608#issuecomment-1004392537) and I have noticed that the lm loss gets progressively worse and by much with enlarging of `gradient_accumulation_steps`:
| Variation | Train<br>samples<br>per<br>second | Diff<br>% | Train<br>loss |
|:---------------------------------|------------------------------------:|------------:|----------------:|
| --gradient_accumulation_steps 1 | 135.85 | 100 | 2.21 |
| --gradient_accumulation_steps 2 | 156.95 | 116 | 2.29 |
| --gradient_accumulation_steps 4 | 167.65 | 123 | 2.42 |
| --gradient_accumulation_steps 8 | 175.02 | 129 | 2.62 |
| --gradient_accumulation_steps 16 | 179.15 | 132 | 2.86 |
(this is with `--per_device_train_batch_size 16`)
So something is strange here.
But re-testing with just the batchsize differences, it appears to exhibit a very similar behavior:
| Variation | Train<br>samples<br>per<br>second | Diff<br>% | Train<br>loss |
|:---------------------------------|------------------------------------:|------------:|----------------:|
| --per_device_train_batch_size 1 | 10.04 | 100 | 1.90 |
| --per_device_train_batch_size 2 | 19.39 | 193 | 2.01 |
| --per_device_train_batch_size 4 | 38.66 | 385 | 2.09 |
| --per_device_train_batch_size 8 | 77.52 | 772 | 2.17 |
| --per_device_train_batch_size 16 | 144.12 | 1435 | 2.26 |
So `--gradient_accumulation_steps` doesn't appear to be the culprit, but somehow the model is super-sensitive to the batch size.
Any suggestions to why this is so?
The original cmd was:
```
CUDA_VISIBLE_DEVICES=0 examples/pytorch/translation/run_translation.py --model_name_or_path t5-base \
--output_dir output_dir --do_train --label_smoothing 0.1 --logging_strategy no \
--save_strategy no --per_device_train_batch_size 16 --max_source_length 512 \
--max_target_length 512 --num_train_epochs 1 --overwrite_output_dir \
--source_lang en --target_lang ro --dataset_name wmt16 --dataset_config "ro-en" \
--source_prefix "translate English to Romanian: " --warmup_steps 50 \
--max_train_samples 10000 --dataloader_num_workers 2 --gradient_accumulation_steps 1
```
and then just changing `--gradient_accumulation_steps` to higher numbers.
```
Software:
transformers: 4.16.0.dev0
torch : 1.10.1
cuda : 11.3
python : 3.8.11
Hardware:
1 GPUs : NVIDIA GeForce RTX 3090, 23.70GB
```
| 01-03-2022 23:15:04 | 01-03-2022 23:15:04 | OK, the issue was that my benchmark is very short and with less steps to take when the batches are larger, the model simply doesn't have a chance to step down far enough.
So such changes will require raising lr, but more realistically to increase the dataset size, since one can't make LR bigger proportionally to batch size increase and not get an overflow.
|
transformers | 15,019 | closed | Make OpenAIGPTTokenizer work with SpaCy 2.x and 3.x | SpaCy 3.x introduced an API change to creating the tokenizer that
breaks OpenAIGPTTokenizer. The old API for creating the tokenizer in
SpaCy 2.x no longer works under SpaCy 3.x, but the new API for creating
the tokenizer in SpaCy 3.x DOES work under SpaCy 2.x. Switching to the
new API should allow OpenAIGPTTokenizer to work under both SpaCy 2.x and
SpaCy 3.x versions.
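A minimal sketch of the kind of change involved (the exact lines in `tokenization_openai.py` may differ; this only illustrates the SpaCy 2.x-vs-3.x tokenizer-creation APIs being discussed):
```python
from spacy.lang.en import English

nlp = English()

# SpaCy 2.x-only way of building the bare tokenizer (removed in 3.x):
# tokenizer = nlp.Defaults.create_tokenizer(nlp)

# works under both SpaCy 2.x and 3.x:
tokenizer = nlp.tokenizer
print([t.text for t in tokenizer("Hello there, world!")])
```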
Fixes https://github.com/huggingface/transformers/issues/14449
| 01-03-2022 22:44:39 | 01-03-2022 22:44:39 | I'm not able to test the changes locally. When I run `pip install -e ".[dev]"`, I encounter a build error and the install fails. It doesn't look like pytest coverage is sufficient here either since SpaCy and ftfy are not required by transformers, and the code can execute correctly without them being installed.<|||||>@patrickvonplaten @LysandreJik Not sure why but I can't tag you as reviewers on GitHub, so tagging you in comments here.<|||||>Yes, why not! GPT-2 is among our most used models, so I think testing that the tokenization behaves correctly across all possibilities is important. Would you like to take a stab at it @cody-moveworks?
In order to do so, you could start by adding an `is_spacy_available` method in `file_utils.py`, analog to other methods such as `is_vision_available` here: https://github.com/huggingface/transformers/blob/2e9af294940083915ccb2740a7c8d5b154194f15/src/transformers/file_utils.py#L507-L508
Then it would require defining a `require_spacy` unittest decorator in `testing_utils.py`, such as the `require_vision` here: https://github.com/huggingface/transformers/blob/2e9af294940083915ccb2740a7c8d5b154194f15/src/transformers/testing_utils.py#L404-L412
Thirdly, you can add a test in the `tests/test_tokenization_gpt2.py` file with the `@require_spacy` decorator, which will only run when SpaCy is installed.
And finally, we can modify this CircleCI run: https://github.com/huggingface/transformers/blob/2e9af294940083915ccb2740a7c8d5b154194f15/.circleci/config.yml#L538-L565
So that it:
1. Installs SpaCy
2. Runs the tokenization GPT-2 test file!<|||||>@LysandreJik Thanks for the detailed walkthrough of the changes needed to add testing. I'll take a stab at it.<|||||>@LysandreJik I made the changes and all checks are passing. Can you take a look? |
transformers | 15,018 | closed | [doc] Update parallelism.mdx | - I found some mistakes and fixed them. sorry @stas00
- In addition, `DataParallel` and `DistributedDataParallel` are marked as code because they are class names. (I was trying to switch DataParallel to DataParallelism, but I realized that this means name of class)
## Reviewers
@stas00 | 01-03-2022 20:04:43 | 01-03-2022 20:04:43 | updated.<|||||>@stas00 This can be merged ! |
transformers | 15,017 | closed | [Examples] Correct run ner label2id for fine-tuned models | # What does this PR do?
Fixes #14230 . I'm really not sure about this PR however, so I'd like to wait for @sgugger's review here.
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/master/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/master/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
| 01-03-2022 19:06:05 | 01-03-2022 19:06:05 | @sgugger, could you take a 2nd look here? :-) |
transformers | 15,016 | closed | Fix doc examples: name 'torch' is not defined | # What does this PR do?
A one-line fix for a doc example in `modeling_wav2vec2.py`.
## Who can review?
@patrickvonplaten | 01-03-2022 17:41:58 | 01-03-2022 17:41:58 | Thank you! |
transformers | 15,015 | closed | [Tests] Correct Wav2Vec2 & WavLM tests | # What does this PR do?
Fixes # (issue)
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/master/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/master/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
<!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @
If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**.
Please tag fewer than 3 people.
Models:
- albert, bert, xlm: @LysandreJik
- blenderbot, bart, marian, pegasus, encoderdecoder, t5: @patrickvonplaten, @patil-suraj
- longformer, reformer, transfoxl, xlnet: @patrickvonplaten
- fsmt: @stas00
- funnel: @sgugger
- gpt2: @patrickvonplaten, @LysandreJik
- rag: @patrickvonplaten, @lhoestq
- tensorflow: @LysandreJik
Library:
- benchmarks: @patrickvonplaten
- deepspeed: @stas00
- ray/raytune: @richardliaw, @amogkam
- text generation: @patrickvonplaten
- tokenizers: @n1t0, @LysandreJik
- trainer: @sgugger
- pipelines: @LysandreJik
Documentation: @sgugger
HF projects:
- datasets: [different repo](https://github.com/huggingface/datasets)
- rust tokenizers: [different repo](https://github.com/huggingface/tokenizers)
Examples:
- maintained examples (not research project or legacy): @sgugger, @patil-suraj
- research_projects/bert-loses-patience: @JetRunner
- research_projects/distillation: @VictorSanh
-->
| 01-03-2022 17:21:27 | 01-03-2022 17:21:27 | |
transformers | 15,014 | closed | Update check_repo.py | added new line

# What does this PR do?
<!--
Congratulations! You've made it this far! You're not quite done yet though.
Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution.
Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change.
Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost.
-->
<!-- Remove if not applicable -->
Fixes # (issue)
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/master/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/master/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
<!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @
If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**.
Please tag fewer than 3 people.
Models:
- albert, bert, xlm: @LysandreJik
- blenderbot, bart, marian, pegasus, encoderdecoder, t5: @patrickvonplaten, @patil-suraj
- longformer, reformer, transfoxl, xlnet: @patrickvonplaten
- fsmt: @stas00
- funnel: @sgugger
- gpt2: @patrickvonplaten, @LysandreJik
- rag: @patrickvonplaten, @lhoestq
- tensorflow: @LysandreJik
Library:
- benchmarks: @patrickvonplaten
- deepspeed: @stas00
- ray/raytune: @richardliaw, @amogkam
- text generation: @patrickvonplaten
- tokenizers: @n1t0, @LysandreJik
- trainer: @sgugger
- pipelines: @LysandreJik
Documentation: @sgugger
HF projects:
- datasets: [different repo](https://github.com/huggingface/datasets)
- rust tokenizers: [different repo](https://github.com/huggingface/tokenizers)
Examples:
- maintained examples (not research project or legacy): @sgugger, @patil-suraj
- research_projects/bert-loses-patience: @JetRunner
- research_projects/distillation: @VictorSanh
-->
@sgugger @LysandreJik | 01-03-2022 17:05:20 | 01-03-2022 17:05:20 | |
transformers | 15,013 | closed | [doc] Update parallelism.mdx | - Add OSLO to `parallelism` document.
- Small changes.
## Reviewers
@stas00 | 01-03-2022 16:07:34 | 01-03-2022 16:07:34 | updated. @stas00 |
transformers | 15,012 | closed | Remove old asserts. | # What does this PR do?
<!--
Congratulations! You've made it this far! You're not quite done yet though.
Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution.
Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change.
Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost.
-->
<!-- Remove if not applicable -->
Fixes # (issue)
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/master/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/master/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
@sgugger
<!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @
If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**.
Please tag fewer than 3 people.
Models:
- albert, bert, xlm: @LysandreJik
- blenderbot, bart, marian, pegasus, encoderdecoder, t5: @patrickvonplaten, @patil-suraj
- longformer, reformer, transfoxl, xlnet: @patrickvonplaten
- fsmt: @stas00
- funnel: @sgugger
- gpt2: @patrickvonplaten, @LysandreJik
- rag: @patrickvonplaten, @lhoestq
- tensorflow: @LysandreJik
Library:
- benchmarks: @patrickvonplaten
- deepspeed: @stas00
- ray/raytune: @richardliaw, @amogkam
- text generation: @patrickvonplaten
- tokenizers: @n1t0, @LysandreJik
- trainer: @sgugger
- pipelines: @LysandreJik
Documentation: @sgugger
HF projects:
- datasets: [different repo](https://github.com/huggingface/datasets)
- rust tokenizers: [different repo](https://github.com/huggingface/tokenizers)
Examples:
- maintained examples (not research project or legacy): @sgugger, @patil-suraj
- research_projects/bert-loses-patience: @JetRunner
- research_projects/distillation: @VictorSanh
--> | 01-03-2022 15:26:12 | 01-03-2022 15:26:12 | |
transformers | 15,011 | closed | ViTFeatureExtractor PyTorch Batch problem | ## Environment info
<!-- You can run the command `transformers-cli env` and copy-and-paste its output below.
Don't forget to fill out the missing fields in that output! -->
- `transformers` version: 4.15.0
- Platform: macOS
- Python version: 3.8
- PyTorch version (GPU?): CPU
- Tensorflow version (GPU?):
- Using GPU in script?: no
- Using distributed or parallel set-up in script?:
### Who can help
Models: ViTFeaturesExtractor
Model hub: google/vit-base-patch16-224-in21k
I'm trying to use this model with a Batch of images but every time I'm getting the error: "TypeError: Cannot handle this data type: (1, 1, 427, 320), |u1". If I do not use a Dataloader for the images It works well. If I not adding a dimension to the tensor it works well but when I use the batch size in the position 0 of a torch tensor it does not work. With some debugging inxpection I have found the fact that this string of code return False : `is_batched = bool(
isinstance(images, (list, tuple))
and (isinstance(images[0], (Image.Image, np.ndarray)) or is_torch_tensor(images[0]))
)`
But if I use the `is_torch_tensor` method, it tells me that the images I passed are a tensor of images, not a list. I think the check has to be changed like this:
`is_batched = bool(
(isinstance(images, (list, tuple))
and (isinstance(images[0], (Image.Image, np.ndarray))) or is_torch_tensor(images[0]))
)`
With this change the model works. | 01-03-2022 14:45:35 | 01-03-2022 14:45:35 | Hi,
Currently the feature extractors only accept a list of individual PyTorch tensors/Numpy arrays/PIL images in case you want to preprocess a batch of images. One cannot provide a batch of images as a single PyTorch tensor to the feature extractor.
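For what it's worth, here is a minimal sketch of that pattern (my own illustration, not code from this thread; the image file names are placeholders): preprocess a list of PIL images on CPU with the feature extractor, then move the resulting batch to the GPU for the model.
```python
import torch
from PIL import Image
from transformers import ViTFeatureExtractor, ViTModel

device = torch.device("cuda" if torch.cuda.is_available() else "cpu")

feature_extractor = ViTFeatureExtractor.from_pretrained("google/vit-base-patch16-224-in21k")
model = ViTModel.from_pretrained("google/vit-base-patch16-224-in21k").to(device)

# A list of individual PIL images is what the feature extractor expects for a batch
images = [Image.open("img1.jpg"), Image.open("img2.jpg")]  # placeholder file names

inputs = feature_extractor(images=images, return_tensors="pt")
inputs = {k: v.to(device) for k, v in inputs.items()}  # move pixel_values to the GPU

outputs = model(**inputs)
print(outputs.last_hidden_state.shape)
```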
However, there's a feature request to add this (see #14650).
Note that feature extractors use PIL to resize images, so it's most efficient to pass a list of PIL images.<|||||>Thank you for your answer but in this way I can't use the GPU for training a model that uses the ViT and is moved on the GPU because every time I give it a list of Cuda tensor it says it is not possible and It suggest me to use Tensor.cpu() at the same time if I send a CPU tensor and the complete model is on the GPU It gives me the error that I have to use the same type of tensor<|||||>Have you taken a look at the [notebooks](https://github.com/NielsRogge/Transformers-Tutorials/tree/master/VisionTransformer) to fine-tune ViT?<|||||>I proposed a way to use DataLoader in #15055<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored. |
transformers | 15,010 | closed | update the file list for doc testing | # What does this PR do?
As discussed in a previous email, I updated the file `utils/documentation_tests.txt`:
- change the current 2 `.rst` files to the new `.mdx` files
- add the pytorch modeling files that have no example issue (I tested them locally)
There are still some modeling files containing example issues, but we can fix them progressively.
I think it is a good start for 2022 :-)
## Who can review?
@LysandreJik @sgugger
| 01-03-2022 14:35:22 | 01-03-2022 14:35:22 | Oh wow that's quite a big change! Also cc @patrickvonplaten <|||||>Thanks for the change @ydshieh - I'm quite sure that most of the docstring tests in the mentioned files fail at the moment. Could you maybe do those changes: https://github.com/huggingface/transformers/pull/15031/files#r778215725 in your PR as well so that we can see which models have failing doc tests? :-)<|||||>> Thanks for the change @ydshieh - I'm quite sure that most of the docstring tests in the mentioned files fail at the moment. Could you maybe do those changes: https://github.com/huggingface/transformers/pull/15031/files#r778215725 in your PR as well so that we can see which models have failing doc tests? :-)
Hi, @patrickvonplaten , do you mean merge that PR into this one? Or just apply a specific change in that PR (if so, which one)?
I am not familiar with the test. The way I checked the doc examples locally is to extract those examples and run them (in some automatic way), and I only added the files without having doc example issue.
Thanks! <|||||>> Thanks for the change @ydshieh - I'm quite sure that most of the docstring tests in the mentioned files fail at the moment. Could you maybe do those changes: https://github.com/huggingface/transformers/pull/15031/files#r778215725 in your PR as well so that we can see which models have failing doc tests? :-)
Hi, I rebased my PR on your PR #15031. Let's see what it gives.
<|||||>@patrickvonplaten , in your PR, you added new lines in some doc examples, like
```
>>> list(last_hidden_states.shape)
{expected_output}
```
Is this necessary to make doctest work? Should I do the same for the doc examples in the files I added here?
(Could you point me a guide to deal with this doctest thing, please? Thanks)<|||||>Hey @ydshieh,
sorry for answering so late here. Let me discuss the doctests quickly internally and come back to you :-)<|||||>Hey @ydshieh,
We are currently working on adding a section that explains in detail how to test/check the doc tests. Would it be ok to close this PR for now and open a new one, once the documentation is up? <|||||>> Hey @ydshieh,
>
> We are currently working on adding a section that explains in detail how to test/check the doc tests. Would it be ok to close this PR for now and open a new one, once the documentation is up?
Sure! Thanks for this update :-) |
transformers | 15,009 | closed | Recovering on corrupted files on disk. | # What does this PR do?
Fixes #14603
When an error is triggered when loading weights, before finally
crashing, attempt to check disk corruption by actually checking the
SHA signature of files.
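The general idea looks roughly like the following sketch (my own illustration, not the code in this PR; the helper names and where the expected digest comes from are assumptions):
```python
import hashlib

def sha256_of_file(path, chunk_size=1 << 20):
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(chunk_size), b""):
            h.update(chunk)
    return h.hexdigest()

def raise_if_corrupted(path, expected_sha256):
    # Only called after loading has already failed, so the hashing cost is
    # paid on the error path, not on every successful load.
    actual = sha256_of_file(path)
    if actual != expected_sha256:
        raise OSError(
            f"{path} appears to be corrupted on disk "
            f"(sha256 {actual} != expected {expected_sha256}); delete the cached file and retry."
        )
```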
<!--
Congratulations! You've made it this far! You're not quite done yet though.
Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution.
Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change.
Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost.
-->
<!-- Remove if not applicable -->
Fixes # (issue)
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/master/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/master/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
<!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @
If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**.
Please tag fewer than 3 people.
Models:
- albert, bert, xlm: @LysandreJik
- blenderbot, bart, marian, pegasus, encoderdecoder, t5: @patrickvonplaten, @patil-suraj
- longformer, reformer, transfoxl, xlnet: @patrickvonplaten
- fsmt: @stas00
- funnel: @sgugger
- gpt2: @patrickvonplaten, @LysandreJik
- rag: @patrickvonplaten, @lhoestq
- tensorflow: @LysandreJik
Library:
- benchmarks: @patrickvonplaten
- deepspeed: @stas00
- ray/raytune: @richardliaw, @amogkam
- text generation: @patrickvonplaten
- tokenizers: @n1t0, @LysandreJik
- trainer: @sgugger
- pipelines: @LysandreJik
Documentation: @sgugger
HF projects:
- datasets: [different repo](https://github.com/huggingface/datasets)
- rust tokenizers: [different repo](https://github.com/huggingface/tokenizers)
Examples:
- maintained examples (not research project or legacy): @sgugger, @patil-suraj
- research_projects/bert-loses-patience: @JetRunner
- research_projects/distillation: @VictorSanh
--> | 01-03-2022 13:53:44 | 01-03-2022 13:53:44 | How long does this check take for large files?<|||||>@LysandreJik on my modest box (i7-4790, old SSD) it takes 800ms for 300 MB, which is on par with the load time.
This only happens on failure, not when loading succeeds.<|||||>Sorry, coming back to this now.
If this only happens on failure then I think it's a nice added feature. An issue I have with the current implementation is that it's PyTorch-specific, whereas it should work for all types of files that we load.
Could you check if it's easy to do so for the other frameworks/tokenizers as well? If it requires a refactor, please let me know and I'll take a look at it to re-evaluate the situation.
Thank you!
Also cc @patrickvonplaten @sgugger @patil-suraj as it is very central<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored.<|||||>Unstale. <|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored.<|||||>unstale
<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored. |
transformers | 15,008 | closed | Fixing t2t pipelines lists outputs. | # What does this PR do?
Backward compatibility broken in
https://github.com/huggingface/transformers/pull/14988
This re-establishes the correct outputs.
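For reference, the shape being restored is the usual list-of-dicts pipeline output, along the lines of this sketch (illustrative only; the exact generated text will vary):
```python
from transformers import pipeline

pipe = pipeline("text2text-generation", model="t5-small")
result = pipe("translate English to German: Hello, how are you?")
print(result)  # e.g. [{'generated_text': '...'}] -- a list, one dict per generated sequence
```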
<!--
Congratulations! You've made it this far! You're not quite done yet though.
Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution.
Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change.
Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost.
-->
<!-- Remove if not applicable -->
Fixes # (issue)
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/master/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/master/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
<!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @
If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**.
Please tag fewer than 3 people.
Models:
- albert, bert, xlm: @LysandreJik
- blenderbot, bart, marian, pegasus, encoderdecoder, t5: @patrickvonplaten, @patil-suraj
- longformer, reformer, transfoxl, xlnet: @patrickvonplaten
- fsmt: @stas00
- funnel: @sgugger
- gpt2: @patrickvonplaten, @LysandreJik
- rag: @patrickvonplaten, @lhoestq
- tensorflow: @LysandreJik
Library:
- benchmarks: @patrickvonplaten
- deepspeed: @stas00
- ray/raytune: @richardliaw, @amogkam
- text generation: @patrickvonplaten
- tokenizers: @n1t0, @LysandreJik
- trainer: @sgugger
- pipelines: @LysandreJik
Documentation: @sgugger
HF projects:
- datasets: [different repo](https://github.com/huggingface/datasets)
- rust tokenizers: [different repo](https://github.com/huggingface/tokenizers)
Examples:
- maintained examples (not research project or legacy): @sgugger, @patil-suraj
- research_projects/bert-loses-patience: @JetRunner
- research_projects/distillation: @VictorSanh
--> | 01-03-2022 13:23:57 | 01-03-2022 13:23:57 | |
transformers | 15,007 | closed | Unexpected outputs of randomly initialized `T5ForConditionalGeneration`. | null | 01-03-2022 09:58:31 | 01-03-2022 09:58:31 | |
transformers | 15,006 | closed | "total_flos" showing much bigger number than expected | ## Environment info
<!-- You can run the command `transformers-cli env` and copy-and-paste its output below.
Don't forget to fill out the missing fields in that output! -->
- `transformers` version: 4.14.1
- Platform: Ubuntu 20.04.1
- Python version: 3.8.5
- PyTorch version (GPU?): 1.8.0 (True)
- Tensorflow version (GPU?): n/a
- Using GPU in script?: yes
- Using distributed or parallel set-up in script?: DDP
### Who can help
Models:
- TAPAS, LayoutLM, LayoutLMv2, LUKE, ViT, BEiT, DEiT, DETR, CANINE: @NielsRogge
If the model isn't in the list, ping @LysandreJik who will redirect you to the correct contributor.
Library:
- Trainer: @sgugger
Documentation: @sgugger
## Information
Model I am using (Bert, XLNet ...): ViT
The problem arises when using:
* [o ] the official example scripts: (give details below)
* [ ] my own modified scripts: (give details below)
The tasks I am working on is:
* [o ] an official GLUE/SQUaD task: (give the name)
* [ ] my own task or dataset: (give details below)
## To reproduce
Steps to reproduce the behavior:
1. download transformers with git clone and pip install
2. run example script in transformers/examples/pytorch/image-classification/run_image_classification.py
3. disable pretrained weights loading to enable start pretraining of ViT-L/16
4. pretraining ViT-L/16 on 1 A100 GPU for 2 images (1 epoch only). In the below result, dividing "total_flos" by 2 would result in 273,934,171,766,784 floating point operations, or around 274 teraFLOPs. This is much bigger than the reported 122.9 GFLOPs in the "Scaling Vision Transformer" paper (https://arxiv.org/pdf/2106.04560.pdf).
```
{
"best_metric": null,
"best_model_checkpoint": null,
"epoch": 1.0,
"global_step": 2,
"is_hyper_param_search": false,
"is_local_process_zero": true,
"is_world_process_zero": true,
"log_history": [
{
"epoch": 1.0,
"step": 2,
"total_flos": 547868343533568.0,
"train_loss": 0.06623733788728714,
"train_runtime": 2.6909,
"train_samples_per_second": 0.743,
"train_steps_per_second": 0.743
}
],
"max_steps": 2,
"num_train_epochs": 1,
"total_flos": 547868343533568.0,
"trial_name": null,
"trial_params": null
}
```
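For reference, the arithmetic being done with these numbers is roughly the following sketch (my own illustration; the peak figure is a placeholder to take from the GPU datasheet):
```python
total_flos = 547868343533568.0  # from the trainer state above
train_runtime = 2.6909          # seconds, from the trainer state above
num_train_images = 2

flos_per_image = total_flos / num_train_images      # ~2.74e14, i.e. ~274 TFLOPs per image
achieved_flops = total_flos / train_runtime          # ~2.04e14 FLOP/s over the whole run

def pct_of_peak(achieved_flops_per_s, peak_flops_per_s):
    return 100.0 * achieved_flops_per_s / peak_flops_per_s
```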
## Expected behavior
"total_flos" divided by number of train images would match the reported model's GFLOPs in "Scaling Vision Transformer" paper. | 01-03-2022 07:47:54 | 01-03-2022 07:47:54 | cc @TevenLeScao since he implemented the feature.<|||||>Hey, sorry for the incorrect number - this function was originally written for language models, and might not transfer to vision models, for example. Do you need this number or is it just for checking?<|||||>Hello, I was trying to benchmark my A100 server, so having this FLOS number would be helpful for me to calculate the % of peak GPU FLOPS attained by ViT models.<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored. |
transformers | 15,005 | closed | Add Flax RoFormer | # What does this PR do?
This PR adds the Flax implementation of the `RoFormer` model.
Fixes #14605
## Before submitting
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [x] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [x] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/master/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/master/docs#writing-source-documentation).
- [x] Did you write any new necessary tests?
## Who can review?
@patrickvonplaten @patil-suraj
| 01-02-2022 16:15:56 | 01-02-2022 16:15:56 | Waiting for the CI to be green and then we can merge<|||||>Merging - thanks a lot for adding this model @stancld ! |
transformers | 15,004 | closed | How to define the self-attention layer with transformers | I want to use 'BertLayer'or 'RobertaLayer' to define a self-attention layer,but I can't find the definition of 'RobertaLayer'.Can you provide some relevant documents?Is this API cancelled? Thanks!
“”“
self.basicblocks = nn.ModuleList()
self.n_layers = 3
trans_heads = 8
trans_drop = 0.1
bert_config = BertConfig(hidden_size=self.config.hidden_size, num_attention_heads=trans_heads, attention_probs_dropout_prob=trans_drop)
for layer in range(self.n_layers):
self.basicblocks.append(BertLayer(bert_config))
”“” | 01-02-2022 15:15:28 | 01-02-2022 15:15:28 | Hey @193769981, `BertLayer` is defined here:
https://github.com/huggingface/transformers/blob/e68c3756fea7c811d02b8470539ae17ec3ec0e71/src/transformers/models/bert/modeling_bert.py#L445
`RobertaLayer` is a copy of `BertLayer`, so the two should be identical.
https://github.com/huggingface/transformers/blob/e68c3756fea7c811d02b8470539ae17ec3ec0e71/src/transformers/models/roberta/modeling_roberta.py#L385<|||||>Hello, I have solved this problem.
I want to know what form the attention_mask should be when I use Roberta_Layer, and what values the <pad> and non-pad should be set.

<|||||>HF has a [glossary](https://huggingface.co/docs/transformers/glossary/) that provides explanations and examples of terms like `attention_mask`. I hope this helps!
On a secondary note, I recommend that you check out the [HF discussion forum](https://discuss.huggingface.co), which is more suitable for asking questions. The GitHub issue page is dedicated to bug reports, feature requests, etc. |
transformers | 15,003 | closed | AlbertTokenizer doesn't decode special tokens properly | Related to #5142, `AlbertTokenizer` (which uses SentencePiece) doesn't decode special tokens (like [CLS], [MASK]) properly. This issue was discovered when adding the Nystromformer model (#14659), which uses this tokenizer.
To reproduce (Transformers v4.15 or below):
```
!pip install -q transformers sentencepiece
from transformers import AlbertTokenizer
tokenizer = AlbertTokenizer.from_pretrained("albert-base-v1")
text = "hello world"
encoding = tokenizer(text)
for id in encoding.input_ids:
print(id, tokenizer.decode([id]))
```
This prints:
```
2
10975 hello
126 world
3
```
As can be seen, the special tokens are added ([CLS] with ID=2 and [SEP] with id=3), but they are decoded to an empty string. This is because the `convert_tokens_to_string` [method](https://github.com/huggingface/transformers/blob/e68c3756fea7c811d02b8470539ae17ec3ec0e71/src/transformers/models/albert/tokenization_albert.py#L252) of `AlbertTokenizer` uses the decode method of Google's SentencePiece library, but this doesn't take into account special tokens.
The issue does not occur with the fast tokenizer:
```
from transformers import AlbertTokenizerFast
tokenizer = AlbertTokenizerFast.from_pretrained("albert-base-v1")
text = "hello world"
encoding = tokenizer(text)
for id in encoding.input_ids:
print(id, tokenizer.decode([id]))
```
Which prints:
```
2 [CLS]
10975 hello
126 world
3 [SEP]
```
A similar issue happened for T5, and this was fixed in #8435. | 01-02-2022 14:00:36 | 01-02-2022 14:00:36 | Update: there are many more SentencePiece tokenizers that may require an update, see #11716<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored.<|||||>Hello Niels! 👋
I started taking a look at this issue, and I do think that @patrickvonplaten's method would work in this instance as well, but as you have warned, this problem does indeed creep in at least 8 other SentencePiece-based tokenizers, not all of which have a `*Fast` flavor in order to mimic the test in the T5 case.
First, I imported all mentioned tokenizers in #11716 except `XLMRobertaTokenizer` (which I couldn't find a suitable checkpoint for) like so:
```python
albert_tokenizer = AlbertTokenizer.from_pretrained("albert-base-v1")
barthez_tokenizer = BarthezTokenizer.from_pretrained("moussaKam/barthez")
bert_generation_tokenizer = BertGenerationTokenizer.from_pretrained("google/roberta2roberta_L-24_discofuse")
bigbird_tokenizer = BigBirdTokenizer.from_pretrained("google/bigbird-roberta-base")
camembert_tokenizer = CamembertTokenizer.from_pretrained("camembert-base")
debertav2_tokenizer = DebertaV2Tokenizer.from_pretrained("microsoft/deberta-v2-xlarge")
m2m100_tokenizer = M2M100Tokenizer.from_pretrained("facebook/m2m100_418M")
marian_tokenizer = MarianTokenizer.from_pretrained("Helsinki-NLP/opus-mt-en-de")
mbart50_tokenizer = MBart50Tokenizer.from_pretrained("facebook/mbart-large-50", src_lang="en_XX", tgt_lang="ro_RO")
pegasus_tokenizer = PegasusTokenizer.from_pretrained("google/pegasus-large")
reformer_tokenizer = ReformerTokenizer.from_pretrained("google/reformer-crime-and-punishment")
speech2text_tokenizer = Speech2TextTokenizer.from_pretrained("facebook/s2t-small-librispeech-asr")
t5_tokenizer = T5Tokenizer.from_pretrained("t5-small")
xlm_prophet_net_tokenizer = XLMProphetNetTokenizer.from_pretrained("microsoft/xprophetnet-large-wiki100-cased")
xlnet_tokenizer = XLNetTokenizer.from_pretrained("xlnet-base-cased")
```
I encoded and decoded `"hello world"` and got the following table:
| Tokenizer | Output |
| --- | --- |
| AlbertTokenizer | 'hello world' |
| BarthezTokenizer | 'hello world' |
| BertGenerationTokenizer | 'hello world' |
| BigBirdTokenizer | '[CLS] hello world[SEP]' |
| CamembertTokenizer | 'hello world' |
| DebertaV2Tokenizer | 'hello world' |
| M2M100Tokenizer | '__en__ hello world' |
| MarianTokenizer | '▁hello▁world' |
| MBart50Tokenizer | 'en_XX hello world' |
| PegasusTokenizer | 'hello world' |
| ReformerTokenizer | 'hello world' |
| Speech2TextTokenizer | 'hello world' |
| T5Tokenizer | 'hello world\</s\>' |
| XLMProphetNetTokenizer | 'hello world[SEP]' |
| XLNetTokenizer | 'hello world\<sep\>\<cls\>' |
I can hack my way through this using an `if tokenizer in problematic_sp_tokenizers: ...`, but it'll hurt my soul a bit 😉
How do you think I should proceed from here?
Thanks for the opportunity to help,
Ben<|||||>Hi @NielsRogge 🙂
I hope it's okay I'm tagging you, I just don't know if this issue is still relevant. If it is, I'll gladly work on it, and I'd love your opinion on the above message (the code was ran using Transformers v4.17.0.dev0).
Thanks,
Ben<|||||>Hi Ben!
Sorry forgot to reply. Great summary table over there! And as you can see, there's some work to do. 😂
Feel free to open a PR!
Btw, `XLMRobertaTokenizer`'s corresponding checkpoint is xlm-roberta-base.<|||||>Great, will do!<|||||>Hi everyone!
I'm looking for some **Good first issues** for this project. It seems that the PR #15775 fixed this issue which can be **closed**.
I tested the snippet of code provided by @NielsRogge and it works fine!
Here the output:
```
2 [CLS]
10975 hello
126 world
3 [SEP]
```
* Python 3.9
* transformers 4.26.1
* sentencepiece 0.1.97
Thanks,
Damien G.<|||||>Thanks for flagging, closing this issue then :-) |
transformers | 15,002 | closed | Fix a little typo | # What does this PR do?
<!--
Congratulations! You've made it this far! You're not quite done yet though.
Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution.
Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change.
Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost.
-->
<!-- Remove if not applicable -->
Fixes a little typo in the docstring of a function.
## Before submitting
- [x] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/master/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/master/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
<!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @
If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**.
Please tag fewer than 3 people.
Models:
- albert, bert, xlm: @LysandreJik
- blenderbot, bart, marian, pegasus, encoderdecoder, t5: @patrickvonplaten, @patil-suraj
- longformer, reformer, transfoxl, xlnet: @patrickvonplaten
- fsmt: @stas00
- funnel: @sgugger
- gpt2: @patrickvonplaten, @LysandreJik
- rag: @patrickvonplaten, @lhoestq
- tensorflow: @LysandreJik
Library:
- benchmarks: @patrickvonplaten
- deepspeed: @stas00
- ray/raytune: @richardliaw, @amogkam
- text generation: @patrickvonplaten
- tokenizers: @n1t0, @LysandreJik
- trainer: @sgugger
- pipelines: @LysandreJik
Documentation: @sgugger
HF projects:
- datasets: [different repo](https://github.com/huggingface/datasets)
- rust tokenizers: [different repo](https://github.com/huggingface/tokenizers)
Examples:
- maintained examples (not research project or legacy): @sgugger, @patil-suraj
- research_projects/bert-loses-patience: @JetRunner
- research_projects/distillation: @VictorSanh
-->
| 01-01-2022 21:12:22 | 01-01-2022 21:12:22 | Thank you |
transformers | 15,001 | closed | Fix TFEncoderDecoder labels handling #14357 | # What does this PR do?
Fixes #14357
## Who can review?
@Rocketknight1 | 01-01-2022 16:02:42 | 01-01-2022 16:02:42 | |
transformers | 15,000 | closed | Fixing QA when `sequence_ids` is None (instead of 0, 1). | # What does this PR do?
Attempts to fix flaky test (and bug).
`tokenizer.sequence_ids(span)` was returning [None, 0, 0, 0, None, 1, 1, 1, 1, None]
and the test to make the mask for answers in question-answering is
[tok != 1 if question_first else 0].
1 and 0 are both != None so all special tokens would be included in
potential answers. With tests on random models this could happen, triggering
a bug where offsets would be out of bounds since the last None is out of bounds
for the offsets of the question for instance.
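A tiny illustration of the comparison pitfall described above (values taken from the description; not the actual patch):
```python
sequence_ids = [None, 0, 0, 0, None, 1, 1, 1, 1, None]

print([tok != 1 for tok in sequence_ids])
# [True, True, True, True, True, False, False, False, False, True]
# None compares as "!= 1" and "!= 0" just like regular ids, so a 0/1-based test
# alone cannot single out the special tokens; they need an explicit `tok is None` check.
```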
<!--
Congratulations! You've made it this far! You're not quite done yet though.
Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution.
Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change.
Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost.
-->
<!-- Remove if not applicable -->
Fixes # (issue)
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/master/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/master/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
@stas00
@sgugger
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
<!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @
If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**.
Please tag fewer than 3 people.
Models:
- albert, bert, xlm: @LysandreJik
- blenderbot, bart, marian, pegasus, encoderdecoder, t5: @patrickvonplaten, @patil-suraj
- longformer, reformer, transfoxl, xlnet: @patrickvonplaten
- fsmt: @stas00
- funnel: @sgugger
- gpt2: @patrickvonplaten, @LysandreJik
- rag: @patrickvonplaten, @lhoestq
- tensorflow: @LysandreJik
Library:
- benchmarks: @patrickvonplaten
- deepspeed: @stas00
- ray/raytune: @richardliaw, @amogkam
- text generation: @patrickvonplaten
- tokenizers: @n1t0, @LysandreJik
- trainer: @sgugger
- pipelines: @LysandreJik
Documentation: @sgugger
HF projects:
- datasets: [different repo](https://github.com/huggingface/datasets)
- rust tokenizers: [different repo](https://github.com/huggingface/tokenizers)
Examples:
- maintained examples (not research project or legacy): @sgugger, @patil-suraj
- research_projects/bert-loses-patience: @JetRunner
- research_projects/distillation: @VictorSanh
--> | 01-01-2022 11:02:50 | 01-01-2022 11:02:50 | This is incorrect actually |
transformers | 14,999 | closed | fix model table cell text alignment | # What does this PR do?
The current new model table in [index.mdx](https://github.com/ydshieh/transformers/blob/master/docs/source/index.mdx) left align table cell texts. This PR makes them center-aligned.
On master:

On this PR [index.mdx](https://github.com/ydshieh/transformers/blob/center-align-model-table/docs/source/index.mdx):

(Hope centralized instead of decentralized is intended in this context 🤗 )
## Who can review?
@sgugger @LysandreJik | 12-31-2021 21:42:59 | 12-31-2021 21:42:59 | |
transformers | 14,998 | closed | How to boost the speed of one sentence Marian Translation(no batches)? | I am using a Marian Model for translating from English to Arabic. I want to use this translation per sentence (no batching).
I am using this simple code:
```
en_ar_tokenizer = AutoTokenizer.from_pretrained("Helsinki-NLP/opus-mt-en-ar")
en_ar_model = AutoModelForSeq2SeqLM.from_pretrained("Helsinki-NLP/opus-mt-en-ar")
def transformer_en_ar(sen):
tokens = en_ar_tokenizer('>>ara<<' + sen, return_tensors="pt")
translated = en_ar_model.generate(**tokens)
decoded = en_ar_tokenizer.decode(translated[0], skip_special_tokens=True)
return decoded
queries = [ 'some dummy sentence1', 'some dummy sentence2']
for query in queries:
translation = transformer_en_ar(query)
```
Is there a way to increase the speed of inferring one sentence? Is there any software or hardware solution?
For example:
Shall I get rid of the for loop of the queries? How?
Will changing either (or both) interop and intraop threads have an effect?
Using GPU did not boost the speed of one sentence at a time? Am I doing something wrong?
Thanks | 12-31-2021 14:57:54 | 12-31-2021 14:57:54 | Not sure why you don't want to leverage batching, because it's always faster than passing each sentence at a time through the model:
```
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM
import torch
device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
tokenizer = AutoTokenizer.from_pretrained("Helsinki-NLP/opus-mt-en-ar")
model = AutoModelForSeq2SeqLM.from_pretrained("Helsinki-NLP/opus-mt-en-ar")
model.to(device)
sentences = ["Hello world",
"How are you doing?", # use different length sentences to test batching
]
inputs = tokenizer(sentences, padding=True, return_tensors="pt").to(device)  # move the batch to the same device as the model
output = model.generate(
input_ids=inputs['input_ids'],
attention_mask=inputs['attention_mask'],
)
tokenizer.batch_decode(output, skip_special_tokens=True)
```
If you're interested in optimizing the inference speed on CPU, take a look at [ONNX](https://huggingface.co/docs/transformers/serialization) or [Torchscript](https://huggingface.co/docs/transformers/serialization#torchscript). We [recently added support](https://github.com/huggingface/transformers/pull/14586) for ONNX for MarianMT.<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored. |
transformers | 14,997 | closed | No module named 'transformers.models.fnet.modeling_fnet' | transformers 4.12.5
from transformers import AutoModelForTokenClassification,AutoTokenizer,pipeline
model = AutoModelForTokenClassification.from_pretrained('uer/roberta-base-finetuned-cluener2020-chinese')
tokenizer = AutoTokenizer.from_pretrained('uer/roberta-base-finetuned-cluener2020-chinese')
ner = pipeline('ner', model=model, tokenizer=tokenizer)
print(ner("江苏警方通报特斯拉冲进店铺"))

| 12-31-2021 08:39:39 | 12-31-2021 08:39:39 | Hey @lonngxiang, can you double check your transformers package version? I'm unable to reproduce on a fresh [Colab notebook](https://colab.research.google.com/drive/1-FUJSSFAl88foTbyqGHyFtwgvHMXTyBV?usp=sharing).<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored. |
transformers | 14,996 | open | [Benchmarks] index | This issue is to document the important `transformers` benchmarks in one place, so that they are easy to find.
To add a new benchmark entry post it in an Issue (separately or as a comment in an existing issue) and then link from here. If you have edit rights please add a link directly to this post, otherwise please add a note in the comments and I will update this post.
Please do not post actual benchmarks in the comments of this Issue. This is only an index.
Thank you!
## Fastest speed combinations
- [RTX-3090](https://github.com/huggingface/transformers/issues/14608#issuecomment-1005229426)
- [A100](https://github.com/huggingface/transformers/issues/15026#issuecomment-1005227577)
## Precision: fp16 vs bf16 vs tf32 vs fp32
- [RTX-3090](https://github.com/huggingface/transformers/issues/14608#issuecomment-1004390803)
- [A100](https://github.com/huggingface/transformers/issues/15026#issuecomment-1004543189)
## Batch size / gradient accumulation steps
- gradient accumulation steps: [RTX-3090](https://github.com/huggingface/transformers/issues/14608#issuecomment-1004392537), [A100](https://github.com/huggingface/transformers/issues/15026#issuecomment-1004592231)
- batch size [RTX-3090](https://github.com/huggingface/transformers/issues/14608#issuecomment-1004470417), [A100](https://github.com/huggingface/transformers/issues/15026#issuecomment-1005033957)
## Gradient checkpointing
- [RTX-3090](https://github.com/huggingface/transformers/issues/14608#issuecomment-1004422281)
- [A100](https://github.com/huggingface/transformers/issues/15026#issuecomment-1005034578)
## Optimizers:
- Adam torch vs. apex vs HF vs adafactor: [RTX-3090](https://github.com/huggingface/transformers/issues/14608#issuecomment-1005219385), [A100](https://github.com/huggingface/transformers/issues/15026#issuecomment-1005220263)
- re-run the above a year later with the same list of optimizers, plus BNB's 8bit optimizer and fused torch AdamW [PCIe 80GB A100](https://github.com/huggingface/transformers/issues/22101)
## Network / Interconnects:
- [DP/DDP/NVLink](https://github.com/huggingface/transformers/issues/9371#issuecomment-768656711)
| 12-31-2021 06:41:00 | 12-31-2021 06:41:00 | |
transformers | 14,995 | closed | BertTokenizer can't split the string in the form of "word+special_token" correctly. | ## Environment info
<!-- You can run the command `transformers-cli env` and copy-and-paste its output below.
Don't forget to fill out the missing fields in that output! -->
- `transformers` version: 4.2.1
- Platform: Linux-3.10.0-957.el7.x86_64-x86_64-with-debian-buster-sid
- Python version: 3.6.13
- PyTorch version (GPU?): 1.10.0+cu102 (True)
- Tensorflow version (GPU?): not installed (NA)
- Using GPU in script?: <yes>
- Using distributed or parallel set-up in script?: <no>
## To reproduce
Steps to reproduce the behavior: just run some tokenizing examples.
1.tokenizer = BertTokenizer.from_pretrained("models/pretrained_models/bert-base-chinese")
2.tokenizer.tokenize("↗↖")
`from transformers import BertTokenizer
if __name__ == "__main__":
    tokenizer = BertTokenizer.from_pretrained("models/pretrained_models/bert-base-chinese")
    # tokenizer.tokenize("↗↖")
    # tokenizer.tokenize("happy↖")
    tokenizer.tokenize("中↖")
`
more examples:
tokenizer.tokenize("中↖")
Out[58]: ['中', '[UNK]']
tokenizer.tokenize("a↖")
Out[59]: ['[UNK]']
tokenizer.tokenize("b↖")
Out[60]: ['[UNK]']
tokenizer.tokenize("abc↖")
Out[61]: ['[UNK]']
tokenizer.tokenize("happy↖")
Out[62]: ['[UNK]']
tokenizer.tokenize("↗↖")
Out[63]: ['[UNK]']
tokenizer.tokenize(",↖")
Out[64]: [',', '[UNK]']
tokenizer.tokenize("。↖")
Out[65]: ['。', '[UNK]']
tokenizer.tokenize("咕咕↖")
Out[66]: ['咕', '咕', '[UNK]']
tokenizer.tokenize("|↖")
Out[67]: ['|', '[UNK]']
tokenizer.tokenize("ك↖")
Out[68]: ['[UNK]']
tokenizer.tokenize("happy😚")
Out[69]: ['[UNK]']
<!-- If you have code snippets, error messages, stack traces please provide them here as well.
Important! Use code tags to correctly format your code. See https://help.github.com/en/github/writing-on-github/creating-and-highlighting-code-blocks#syntax-highlighting
Do not use screenshots, as they are hard to read and (more importantly) don't allow others to copy-and-paste your code.-->
## Expected behavior
tokenizer.tokenize("happy↖")-->[happy, unk]
But the tokenizer currently treats "happy↖" as a whole token, so it returns only one unk.
And the tokenizer adds whitespace around Chinese characters when tokenizing, so it can split "中↖" into [中, unk].
<!-- A clear and concise description of what you would expect to happen. -->
| 12-31-2021 04:17:23 | 12-31-2021 04:17:23 | Hello @catqaq,
Regarding the fact that `tokenizer.tokenize("happy↖")` outputs `['[UNK]']`, this behavior is specific to the **WordPiece** Tokenization algorithm, which is the algorithm used by `BertTokenizer`. If you want to find out more, I refer you to our course dealing exactly with these concepts ([chapter on wordpiece](https://huggingface.co/course/chapter6/7?fw=pt)).
Regarding the fact that `tokenizer.tokenize("中↖")` outputs `['中', '[UNK]']`, this behavior is due to the fact that by default the argument `tokenize_chinese_chars` is set to True in `BertTokenizer`. (see the corresponding doc [here](https://huggingface.co/docs/transformers/model_doc/bert#transformers.BertTokenizer.tokenize_chinese_chars)). If you set this argument to `False`, the outputs will become `['[UNK]']`.
I hope this has helped you!<|||||>Let me close this issue. Please feel free to reopen it if something is not clear in my answer. :blush: <|||||>@SaulLu Thanks for your advice. I know that tokenizer.tokenize("happy↖") outputs ['[UNK]'] is a feature of WordPiece Tokenization algorithm instead of a bug. But maybe tokenizer.tokenize("happy↖")-->[happy, unk] is more appropriate. I just wanna split the meaningful words with some special characters.<|||||>You can always modify the tokenizer by adding a custom pre-tokenizer. The pre-tokenizer will have the effect of splitting your sentence and WordPiece will only be applied to the individual parts of the sentence.
For Bert, the [BertPreTokenizer](https://huggingface.co/docs/tokenizers/python/latest/api/reference.html?highlight=bertpretokenizer#tokenizers.pre_tokenizers.BertPreTokenizer) pre-tokenizer is already used but you could imagine adding a [Split pre-tokenization](https://huggingface.co/docs/tokenizers/python/latest/api/reference.html?highlight=bertpretokenizer#tokenizers.pre_tokenizers.Split) afterwards. You will however need to qualify in regex terms what you want to include in the "some special characters".
```python
from transformers import AutoTokenizer
import tokenizers
import re
tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
text = "happy↖"
print(tokenizer.tokenize(text))
# ['[UNK]']
tokenizer.backend_tokenizer.pre_tokenizer = tokenizers.pre_tokenizers.Sequence(
[
tokenizers.pre_tokenizers.BertPreTokenizer(),
tokenizers.pre_tokenizers.Split(r'↖', "isolated")
]
)
print(tokenizer.backend_tokenizer.pre_tokenizer.pre_tokenize_str(text))
# [('happy', (0, 5)), ('↖', (5, 6))]
print(tokenizer.tokenize(text))
# ['happy', '[UNK]']
```<|||||>Thanks for your detailed code. Yes, adding a custom pre-tokenizer is more suitable for spliting such special cases rather than being coupled to the base tokenizer. |
transformers | 14,994 | closed | Allow training to resume even if RNG states are not properly loaded | # What does this PR do?
This PR allows training to resume even if loading the RNG state fails in multi-GPU DataParallel mode because fewer GPUs are used than during the original training.
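The general shape of the change is roughly this sketch (hypothetical helper, not the actual Trainer diff): catch the failure and warn instead of crashing.
```python
import logging
import torch

logger = logging.getLogger(__name__)

def maybe_restore_cuda_rng(cuda_rng_states):
    try:
        torch.cuda.random.set_rng_state_all(cuda_rng_states)
    except Exception as exc:  # e.g. checkpoint saved on 8 GPUs, resuming on 4
        logger.warning(
            "Could not restore the CUDA RNG states (%s); training will resume, "
            "but results will not be fully reproducible.", exc
        )
```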
Fixes #14554 | 12-30-2021 21:40:02 | 12-30-2021 21:40:02 | |
transformers | 14,993 | closed | Unexpected usage of `next_token_scores` and `beam_scores` in `beam_sample()` | ## Environment info
<!-- You can run the command `transformers-cli env` and copy-and-paste its output below.
Don't forget to fill out the missing fields in that output! -->
- `transformers` version: 4.9.2
- Platform: Linux
- Python version: 3.8.10
- PyTorch version (GPU?): 1.9.0+cu111 (True)
- Tensorflow version (GPU?): not installed (NA)
- Flax version (CPU?/GPU?/TPU?): not installed (NA)
- Jax version: not installed
- JaxLib version: not installed
- Using GPU in script?: No
- Using distributed or parallel set-up in script?: No
### Who can help
<!-- Your issue will be replied to more quickly if you can figure out the right person to tag with @
If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**.
Please tag fewer than 3 people.
Models:
- ALBERT, BERT, XLM, DeBERTa, DeBERTa-v2, ELECTRA, MobileBert, SqueezeBert: @LysandreJik
- T5, BART, Marian, Pegasus, EncoderDecoder: @patrickvonplaten
- Blenderbot, MBART: @patil-suraj
- Longformer, Reformer, TransfoXL, XLNet, FNet, BigBird: @patrickvonplaten
- FSMT: @stas00
- Funnel: @sgugger
- GPT-2, GPT: @patrickvonplaten, @LysandreJik
- RAG, DPR: @patrickvonplaten, @lhoestq
- TensorFlow: @Rocketknight1
- JAX/Flax: @patil-suraj
- TAPAS, LayoutLM, LayoutLMv2, LUKE, ViT, BEiT, DEiT, DETR, CANINE: @NielsRogge
- GPT-Neo, GPT-J, CLIP: @patil-suraj
- Wav2Vec2, HuBERT, SpeechEncoderDecoder, UniSpeech, UniSpeechSAT, SEW, SEW-D, Speech2Text: @patrickvonplaten, @anton-l
If the model isn't in the list, ping @LysandreJik who will redirect you to the correct contributor.
Library:
- Benchmarks: @patrickvonplaten
- Deepspeed: @stas00
- Ray/raytune: @richardliaw, @amogkam
- Text generation: @patrickvonplaten @narsil
- Tokenizers: @LysandreJik
- Trainer: @sgugger
- Pipelines: @Narsil
- Speech: @patrickvonplaten, @anton-l
- Vision: @NielsRogge, @sgugger
Documentation: @sgugger
Model hub:
- for issues with a model, report at https://discuss.huggingface.co/ and tag the model's creator.
HF projects:
- datasets: [different repo](https://github.com/huggingface/datasets)
- rust tokenizers: [different repo](https://github.com/huggingface/tokenizers)
Examples:
- maintained examples (not research project or legacy): @sgugger, @patil-suraj
For research projetcs, please ping the contributor directly. For example, on the following projects:
- research_projects/bert-loses-patience: @JetRunner
- research_projects/distillation: @VictorSanh
-->
@patrickvonplaten
## Information
Not sure if the `beam_sample()` function called by `generate()` is using `next_token_scores` and `beam_scores` as intended for sampling and score accumulation.
Model I am using (Bert, XLNet ...): distilgpt2
The problem arises when using:
* [ ] the official example scripts: (give details below)
* [x] my own modified scripts: (give details below)
The tasks I am working on is:
* [ ] an official GLUE/SQUaD task: (give the name)
* [x] my own task or dataset: (give details below)
## To reproduce
```python3
from transformers import AutoModelForCausalLM, AutoTokenizer
model_name = "distilgpt2"
model = AutoModelForCausalLM.from_pretrained(model_name)
tokenizer = AutoTokenizer.from_pretrained(model_name)
text = "Cats meow, dogs"
input_ids = tokenizer.encode(text, return_tensors="pt")
input_len = input_ids.size()[-1]
max_new_tokens = 3
temperature = 0.2
sample_outputs = model.generate(
input_ids,
max_length=input_len + max_new_tokens,
do_sample=True,
num_beams=5,
top_k=0,
temperature=temperature,
return_dict_in_generate=True,
output_scores=True,
)
# Expect the generated sequence to be "Cats meow, dogs, and cats".
tokenizer.batch_decode(sample_outputs.sequences) # => ['Cats meow, dogs, and cats']
# The generated tokens are picked as the ones with the top `beam_scores`.
top_score_ids = [score.argmax().item() for score in sample_outputs.scores]
generated_token_ids = sample_outputs.sequences[0][input_len:].tolist()
top_score_ids == generated_token_ids # => True
beam_score = [score.max().item() for score in sample_outputs.scores]
beam_score # => [-6.53633451461792, -41.86376953125, -221.3215789794922]
log_probs = []
warped_log_probs = []
for i in range(max_new_tokens):
    outputs = model(sample_outputs.sequences[:, : input_len + i])
    sampled_id = sample_outputs.sequences[0][input_len + i]
    next_token_logits = outputs.logits[0][-1].detach()
    log_probs.append(next_token_logits.log_softmax(-1)[sampled_id].item())
    warped_log_probs.append(
        (next_token_logits / temperature).log_softmax(-1)[sampled_id].item()
    )
log_probs # => [-1.3072705268859863, -1.8364229202270508, -2.400545835494995]
warped_log_probs # => [-0.005173391196876764, -0.08735305070877075, -0.016386810690164566]
def approx_eq_list(l1, l2):
    return all(abs(x / y - 1) < 1e-4 for x, y in zip(l1, l2))
# The expected `beam_score` based on Algorithm (1).
beam_score_1 = [0]
for p in log_probs:
    beam_score_1.append((beam_score_1[-1] + p) / temperature)
beam_score_1 # => [0, -6.536352634429932, -41.86387777328491, -221.32211804389954]
approx_eq_list(beam_score_1[1:], beam_score) # => True
# This is fine if Algorithm (1) is intended.
# The expected `beam_score` based on Algorithm (2).
beam_score_2 = [0]
for p in warped_log_probs:
    beam_score_2.append(beam_score_2[-1] + p)
beam_score_2 # => [0, -0.005173391196876764, -0.09252644190564752, -0.10891325259581208]
approx_eq_list(beam_score_2[1:], beam_score) # => False
# Expect this to be True if Algorithm (2) is intended.
sample_outputs.sequences_scores.tolist() # => [-24.591285705566406]
# (?) Seems unrelated to any of the values above.
```
<!-- If you have code snippets, error messages, stack traces please provide them here as well.
Important! Use code tags to correctly format your code. See https://help.github.com/en/github/writing-on-github/creating-and-highlighting-code-blocks#syntax-highlighting
Do not use screenshots, as they are hard to read and (more importantly) don't allow others to copy-and-paste your code.-->
## Expected behavior
<!-- A clear and concise description of what you would expect to happen. -->
I found that `logits_warper`[ is applied after adding ](https://github.com/huggingface/transformers/blob/27b3031de2fb8195dec9bc2093e3e70bdb1c4bff/src/transformers/generation_utils.py#L2227)`beam_scores`, within which [the temperature is handled](https://github.com/huggingface/transformers/blob/27b3031de2fb8195dec9bc2093e3e70bdb1c4bff/src/transformers/generation_logits_process.py#L127). This way, we sample the next tokens using the beam_scores *plus* the current next_token_scores. The temperature term will telescope along the sequence, instead of applying to just the next token distribution.
That is, let `next_token_logits[i]` be the model's output logits at the `i`-th new token, `beam_score[i]` be the beam score after the `i`-th generated token, `beam_score[0] = 0`, and `sequence[i]` be the id of the `i`-th generated token. The current implementation seems to be doing `beam_scores[i] = (next_token_logits[i].log_softmax(-1) + beam_score[i-1]) / temperature`, and use `beam_scores[i]` to [sample next tokens](https://github.com/huggingface/transformers/blob/27b3031de2fb8195dec9bc2093e3e70bdb1c4bff/src/transformers/generation_utils.py#L2251) to get `sequence[i]` and `beam_score[i] = beam_scores[i][sequence[i]]`. The resulting `beam_score[num_new_tokens - 1]` would be equal to `next_token_logits[i].log_softmax(-1)[sequence[i]] / temperature ** (num_new_tokens - i)` summing over `i = 0 ... num_new_tokens - 1`. Let's call this Algorithm (1).
I'd like to know if this is the intended way of using temperature term for beam search. Based on what I understand from reading posts and papers, it seems like the temperature term should be applied to just the probability of each of the next tokens. That is, to use `(next_token_logits[i] / temperature).log_softmax(-1)` to sample next tokens to get `sequence[i]`, and set `beam_scores[i] = (next_token_logits[i] / temperature).log_softmax(-1) + beam_score[i-1]` and `beam_score[i] = beam_scores[i][sequence[i]]`. The resulting `beam_score[num_new_tokens - 1]` would be equal to `(next_token_logits[i] / temperature).log_softmax(-1)[sequence[i]]` summing over `i = 0 ... num_new_tokens - 1`. Let's call this Algorithm (2).
In code, this would mean to apply `log_softmax` after the [`logits_warper`](https://github.com/huggingface/transformers/blob/27b3031de2fb8195dec9bc2093e3e70bdb1c4bff/src/transformers/generation_utils.py#L2227) and delay [the addition of `beam_scores`](https://github.com/huggingface/transformers/blob/27b3031de2fb8195dec9bc2093e3e70bdb1c4bff/src/transformers/generation_utils.py#L2226) to [right before passing it to `beam_scorer`](https://github.com/huggingface/transformers/blob/27b3031de2fb8195dec9bc2093e3e70bdb1c4bff/src/transformers/generation_utils.py#L2263).
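To make the two update rules concrete, here is a small runnable sketch of the difference described above (toy tensors only; this is not the library's actual code and the values are arbitrary):
```python
import torch

next_token_logits = torch.randn(1, 10)   # logits for one beam at step i
beam_score_prev = torch.tensor([-1.5])   # accumulated beam score up to step i-1
temperature = 0.2

# Algorithm (1): temperature is applied after adding the running beam score,
# so the division telescopes over the whole generated sequence.
scores_algo1 = (next_token_logits.log_softmax(-1) + beam_score_prev) / temperature

# Algorithm (2): temperature warps only the next-token distribution, then accumulate.
scores_algo2 = (next_token_logits / temperature).log_softmax(-1) + beam_score_prev
```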
Algorithm (1) is also likely the cause for [issue#11267](https://github.com/huggingface/transformers/issues/11267), where the telescopic `/ temperature` leads to all `-inf` `beam_scores` at some point, which leads to all `-inf` `next_token_scores`, which leads to an all-`nan` `probs` tensor, which leads to the reported "RuntimeError: probability tensor contains either `inf`, `nan` or element < 0". | 12-30-2021 21:35:36 | 12-30-2021 21:35:36 | Hey @jmzhao,
Thanks for the detailed description. What you are saying is correct, and we have chosen to stick to this algorithm simply because that's how we implemented it the first time and we didn't want to break backwards compatibility afterwards. The problem with `beam_sample()` IMO is that there is no clear consensus on how exactly to implement this algorithm. Only very few people seem to use it in the first place, because the other sampling algorithms like `beam_search`, `sample`, and `greedy_search` are very well-defined and mathematically sound algorithms.
In case you know of a paper that clearly defines beam sampling or even better the paper that created beam sampling I think we could change the algorithm here. IMO, there is no clear advantage why Algorithm 1 should be better than Algorithm 2. E.g., if we follow Algorithm 2, it would mean that previous beam scores are not taken into account when shaping the probability distribution of the next word, *e.g.* topk and topp would not take into account any previous beam scores in which case the algorithm would be very similar to just the `sample()` algorithm. <|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored. |
transformers | 14,992 | closed | Add model like | # What does this PR do?
This PR adds a new command `transformers-cli add-new-model-like` that creates a new model in the Transformers library exactly like an existing one, for models we want to add that are just tweaked versions of an existing model.
Fixes #14032
Note that this PR is still a draft, I'm just putting all of this here so that @stas00 can start playing around with it but there are still some rough edges, including:
- it only works with a config right now, plan is to create a simple questionnaire for the user to fill
- the doc page of the model is not created
- the command should issue some warnings/recommendations (see below)
- we can't filter out the frameworks yes (to just include PyTorch/TF/Flax)
Also it needs to be cleaned up and tested before a merge, but I'm going on vacation and this way there is a prototype available to play with while I'm not here :-)
The whole thing works with string matches/replacements, so it can easily fall apart when a model has the same string of characters for its model type and checkpoint (like `gpt2`) or model camel-cased and upper-cased names (like GPT2). I have used gpt2 as a test model to make sure those are not out of control, but it's still a good idea to proofread the result.
Other than that the command:
- creates the necessary submodules and test files
- re-uses the tokenizer of the model we copy from if the user indicates they are the same (probably the most common use case)
- puts everything in the right places in the inits and auto modules
- add copied from statements everywhere it can (some will need to be removed in the submodules tweaked by the user)
- adds a draft of the doc file
- filters per selected frameworks (if given)
There are some rough edges still:
- the tokenization files won't contain the proper mappings if created, so they need to be manually fixed. Same if there is a fast tokenizer, the converter needs to be manually added (the command warns the user they have to do this in that case)
- some models import objects that are not prefixed with the model name in the main init like BERT (which imports BasicTokenizer), in that case the init needs to be manually fixed to avoid duplicate imports.
- with the copied from option activated, there might be some Copied from to manually remove because of fights with black/the doc-styler
Those can be fixed in followup PRs if necessary, but I think they are acceptable steps for a user to fix manually for now. The added test creates a DistilBert-like model (we can't use Bert, as seen above) and checks that the added model passes all the quality checks and that its common PyTorch tests pass (running the TF/Flax tests is significantly slower but they pass too).
To test right now, use an env where the clone of the transformers repo you are working on is the registered transformers library (it might work if it isn't, since the transformers module is only used to get the constants in the auto modules, but this is untested).
The easiest is to run `transformers-cli add-new-model-like` and follow the prompts.
Otherwise, create a config.json file like the one "add_new_model_config.json" in the first commits of this PR (provided as an example, it has been removed in the final version) with content like this:
```python
{
"add_copied_from": true, # If false, won't add any Copied from statements
"old_model_type": "gpt2", # Needs to be a valid model type
"new_model_patterns": {
"model_name": "GPT-New new", # Model name for the doc
"checkpoint": "huggingface/gpt-new-base", # checkpoint to use in all examples
"model_type": "gpt-new-new", # Model type as saved in the configs
"model_lower_cased": "gpt_new_new", # Used for the function names and module name
"model_camel_cased": "GPTNewNew", # Used for the class names
"model_upper_cased": "GPT_NEW_NEW", # Used for the constant names
"config_class": "GPTNewNewConfig", # Config class, will default to {model_camel_cased}Config if not provided
"tokenizer_class": "GPT2Tokenizer" # Tokenizer class, will default to {model_camel_cased}Tokenizer if not provided (which creates a new tokenizer)
},
"frameworks": [
"pt",
"tf",
"flax"
]
}
```
then run
```bash
transformers-cli add-new-model-like --config_file path_to_config
```
Note that it's possible you have to redo an editable install of the repo with this branch checked out to properly register the new CLI command. | 12-30-2021 21:25:56 | 12-30-2021 21:25:56 | This is really cool, looking forward to it!<|||||>Once you're happy with it, we can then put it to practical use and give it a good real case test by re-doing https://github.com/huggingface/transformers/pull/14084 which got out of sync with all the recent revamps.
So basically cloning `GPT2` to create `GPTMeg` and then adding to it 3 small changes that is the real difference over `GPT2`<|||||>Note that GPT-2 is the model where this command is the most likely to fail as the checkpoint name is the same as the model type/model lower cased which then creates bad replacements I can't really control.
It will also duplicate the `GPT2DoubleHeadsModel` class, which I'm not sure you want for your new model. I would advise duplicating GPT-Neo or GPT-J<|||||>But I'm not modifying GPT-Neo or GPT-J. I don't understand why GPT to GPTMeg is different from GPTNeo to GPTMeg? Is it because GPTNeo has a postfix over the prefix GPT.
Could you please give an example of what you mean when you say it'd fail?
Perhaps it can be done in 2 steps? GPT to XYZ, and then a few one-liner to rename XYZ to the target?<|||||>Like I said, it's because the checkpoint for GPT2 is named gpt2, which is also the model type prefix for GPT2. It's the only model that conflates the two of them, which results in instances of the checkpoint not being replaced by the checkpoint of the new model, but the model type of the new model. So it needs careful proofreading of the generated files.
> Could you please give an example of what you mean when you say it'd fail?
Just run the command and look at the result.<|||||>This PR is now in a state that is good in my opinion. I have:
- revamped the core of the replacement script, to make it more resilient. This solves the issues you pointed out for RoBERTa naming and wrong checkpoints
- added support for non-NLP models
- added a whole test suite for the utilities the command uses.
Also there was a fix on the check-copies script on master which make all the quality tests pass when the new model added includes the `# Copied from` comments.
I don't have much more time to spend on this so for the following problems (or potential bugs) I'd like us to rely on the community. The test suite added is there to catch any regression.
> * It generates the conversion script, but I'm not sure this is relevant/exact in most cases. I would put it behind a question as well "Would you like to have the same conversion script as XXX in order to convert from an original checkpoint?"
This could be a nice feature to add, but it's also super easy to just remove the conversion scripts since they are completely independent.
> * `# Copied from` statements appear twice in case we're adding a model like another which already has copied from statements. The two statements seem to copy to the appropriate model name chosen, however.
This is fixed.
> * The copyright at the top is a complex question IMO, as the code is definitely inheriting from another so the copyright should be kept - but shouldn't the script ask for the organization which authored the model so that the copyright to that org is also respected? Maybe not, if they don't modify much of it. Open question!
> * All copyrights should be changed to 2022
I haven't touched the copyrights at all. The new model author should add themselves manually, but if nothing much is changed, I think it's good to keep the defaults as the authors of the copied model. For the change of year, we can make it a good first issue.
<|||||>Noted for the bug in `VisionEncoderDecoder`. It doesn't block the merge as it's not really a target of this PR, but good to keep it in mind!<|||||>Failure is unrelated so merging this! |
transformers | 14,991 | closed | Fix saving FlaubertTokenizer configs | All specific tokenizer config properties must be passed to its base
class (XLMTokenizer) in order to be saved. This was not the case for
do_lowercase config. Thus it was not saved by save_pretrained() method
and saving and reloading the tokenizer changed its behaviour.
This commit fixes it.
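For illustration, a minimal sketch of the pattern this commit enforces (simplified and not the exact diff; the class name and argument list here are abbreviated placeholders):
```python
from transformers import XLMTokenizer

class FlaubertTokenizerSketch(XLMTokenizer):
    def __init__(self, do_lowercase=False, **kwargs):
        # Forwarding the model-specific option to the base class is what gets it recorded
        # in the tokenizer's init kwargs and therefore written out by save_pretrained().
        super().__init__(do_lowercase=do_lowercase, **kwargs)
        self.do_lowercase = do_lowercase
```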
# What does this PR do?
Fixes # 14489
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [x] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/master/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/master/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
@n1t0, @LysandreJik
## Comment:
As I have said in the issue, a similar problem may happen to other models. For me, it is not obvious, that one should pass a model-specific config to the general base class constructor for the config to be saved later.
I feel like some comment should be left somewhere for future developers, but I cannot come up with a good place for it. On the other hand, maybe it is obvious from all the base class' descriptions, and I'm just not experienced enough.
| 12-30-2021 16:20:39 | 12-30-2021 16:20:39 | |
transformers | 14,990 | closed | When using bert-base-chinese model, except for the first one, other uppercase English letters that are the same in succession will be ignored. And the input_id of different uppercase English characters is the same | ## Environment info
<!-- You can run the command `transformers-cli env` and copy-and-paste its output below.
Don't forget to fill out the missing fields in that output! -->
- `transformers` version: **4.12.3**
- Platform: **macos**
- Python version: 1.7.0
- PyTorch version (GPU?):No
- Tensorflow version (GPU?):-
- Using GPU in script?: No
- Using distributed or parallel set-up in script?: No
-
### Who can help
@LysandreJik
<!-- Your issue will be replied to more quickly if you can figure out the right person to tag with @
If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**.
Please tag fewer than 3 people.
Models:
- ALBERT, BERT, XLM, DeBERTa, DeBERTa-v2, ELECTRA, MobileBert, SqueezeBert: @LysandreJik
- T5, BART, Marian, Pegasus, EncoderDecoder: @patrickvonplaten
- Blenderbot, MBART: @patil-suraj
- Longformer, Reformer, TransfoXL, XLNet, FNet, BigBird: @patrickvonplaten
- FSMT: @stas00
- Funnel: @sgugger
- GPT-2, GPT: @patrickvonplaten, @LysandreJik
- RAG, DPR: @patrickvonplaten, @lhoestq
- TensorFlow: @Rocketknight1
- JAX/Flax: @patil-suraj
- TAPAS, LayoutLM, LayoutLMv2, LUKE, ViT, BEiT, DEiT, DETR, CANINE: @NielsRogge
- GPT-Neo, GPT-J, CLIP: @patil-suraj
- Wav2Vec2, HuBERT, SpeechEncoderDecoder, UniSpeech, UniSpeechSAT, SEW, SEW-D, Speech2Text: @patrickvonplaten, @anton-l
If the model isn't in the list, ping @LysandreJik who will redirect you to the correct contributor.
Library:
- Benchmarks: @patrickvonplaten
- Deepspeed: @stas00
- Ray/raytune: @richardliaw, @amogkam
- Text generation: @patrickvonplaten @narsil
- Tokenizers: @LysandreJik
- Trainer: @sgugger
- Pipelines: @Narsil
- Speech: @patrickvonplaten, @anton-l
- Vision: @NielsRogge, @sgugger
Documentation: @sgugger
Model hub:
- for issues with a model, report at https://discuss.huggingface.co/ and tag the model's creator.
HF projects:
- datasets: [different repo](https://github.com/huggingface/datasets)
- rust tokenizers: [different repo](https://github.com/huggingface/tokenizers)
Examples:
- maintained examples (not research project or legacy): @sgugger, @patil-suraj
For research projects, please ping the contributor directly. For example, on the following projects:
- research_projects/bert-loses-patience: @JetRunner
- research_projects/distillation: @VictorSanh
-->
## Information
Model I am using (Bert, XLNet ...):
The problem arises when using:
* [x] the official example scripts: (give details below)
* [ ] my own modified scripts: (give details below)
The tasks I am working on is:
* [ ] an official GLUE/SQUaD task: (give the name)
* [x] my own task or dataset: (give details below)
## To reproduce
Steps to reproduce the behavior:
here is code:
1.tokenizer = BertTokenizer.from_pretrained("bert-base-chinese")
2.encode = tokenizer(["A", "B", "AAA"])
result:
1. The input_ids of 'A' and 'B' are both 100.
2. After "AAA" is tokenized, only a single "A" is left.
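For reference, a small diagnostic sketch (not from the report) that makes the behaviour above easier to inspect; the comments describe what is typically expected rather than verified output:
```python
from transformers import BertTokenizer

tokenizer = BertTokenizer.from_pretrained("bert-base-chinese")
print(tokenizer.unk_token, tokenizer.unk_token_id)       # id 100 usually corresponds to [UNK] in BERT vocabs
print(tokenizer.tokenize("A"), tokenizer.tokenize("B"))  # characters missing from the vocab map to [UNK]
print(tokenizer.tokenize("AAA"))                         # an out-of-vocabulary word typically collapses to a single [UNK]
```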
<!-- If you have code snippets, error messages, stack traces please provide them here as well.
Important! Use code tags to correctly format your code. See https://help.github.com/en/github/writing-on-github/creating-and-highlighting-code-blocks#syntax-highlighting
Do not use screenshots, as they are hard to read and (more importantly) don't allow others to copy-and-paste your code.-->
## Expected behavior
1. No English letters should be omitted after tokenization.
2. The input_ids corresponding to different English letters should be different.
<!-- A clear and concise description of what you would expect to happen. -->
| 12-30-2021 13:57:04 | 12-30-2021 13:57:04 | hi!
@vanpelt @pvl @arfon @xeb
can someone hele me?
thank you all very much.<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored. |
transformers | 14,989 | closed | EncoderDecoderModel loss decreased unbelievably and the generated text were repetitive 4.12.0 | Hi,
I am training an EncoderDecoderModel with BertGenerationEncoder and BertGenerationDecoder from scratch (no pretraining) on a custom dataset with about 10k sequences. I found that the loss decreased unbelievably fast, to a level which I thought was impossible for a seq2seq task, and the generated texts were just repetitions of the same token. When I downgrade from v4.12.0 to v4.11.3, this problem disappears.
I understand that in v4.12.0 I don't need to pass decoder_input_ids to EncoderDecoderModel as they will be generated automatically, but I wonder whether it introduced other changes/bugs.
## Environment info
<!-- You can run the command `transformers-cli env` and copy-and-paste its output below.
Don't forget to fill out the missing fields in that output! -->
- `transformers` version: 4.12.0
- Platform: Linux-4.18.0-305.12.1.el8_4.x86_64-x86_64-with-glibc2.28
- Python version: 3.9.5
- PyTorch version (GPU?): 1.10.1 (True)
- Tensorflow version (GPU?): not installed (NA)
- Flax version (CPU?/GPU?/TPU?): not installed (NA)
- Jax version: not installed
- JaxLib version: not installed
- Using GPU in script?: yes
- Using distributed or parallel set-up in script?: no
### Who can help
<!-- Your issue will be replied to more quickly if you can figure out the right person to tag with @
If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**.
Please tag fewer than 3 people.
Models:
- ALBERT, BERT, XLM, DeBERTa, DeBERTa-v2, ELECTRA, MobileBert, SqueezeBert: @LysandreJik
- T5, BART, Marian, Pegasus, EncoderDecoder: @patrickvonplaten
- Blenderbot, MBART: @patil-suraj
- Longformer, Reformer, TransfoXL, XLNet, FNet, BigBird: @patrickvonplaten
- FSMT: @stas00
- Funnel: @sgugger
- GPT-2, GPT: @patrickvonplaten, @LysandreJik
- RAG, DPR: @patrickvonplaten, @lhoestq
- TensorFlow: @Rocketknight1
- JAX/Flax: @patil-suraj
- TAPAS, LayoutLM, LayoutLMv2, LUKE, ViT, BEiT, DEiT, DETR, CANINE: @NielsRogge
- GPT-Neo, GPT-J, CLIP: @patil-suraj
- Wav2Vec2, HuBERT, SpeechEncoderDecoder, UniSpeech, UniSpeechSAT, SEW, SEW-D, Speech2Text: @patrickvonplaten, @anton-l
If the model isn't in the list, ping @LysandreJik who will redirect you to the correct contributor.
Library:
- Benchmarks: @patrickvonplaten
- Deepspeed: @stas00
- Ray/raytune: @richardliaw, @amogkam
- Text generation: @patrickvonplaten @narsil
- Tokenizers: @LysandreJik
- Trainer: @sgugger
- Pipelines: @Narsil
- Speech: @patrickvonplaten, @anton-l
- Vision: @NielsRogge, @sgugger
Documentation: @sgugger
Model hub:
- for issues with a model, report at https://discuss.huggingface.co/ and tag the model's creator.
HF projects:
- datasets: [different repo](https://github.com/huggingface/datasets)
- rust tokenizers: [different repo](https://github.com/huggingface/tokenizers)
Examples:
- maintained examples (not research project or legacy): @sgugger, @patil-suraj
For research projects, please ping the contributor directly. For example, on the following projects:
- research_projects/bert-loses-patience: @JetRunner
- research_projects/distillation: @VictorSanh
-->
@patrickvonplaten
## Information
Model I am using (Bert, XLNet ...):
EncoderDecoder, BertGeneration
The problem arises when using:
* [ ] the official example scripts: (give details below)
* [x] my own modified scripts: (give details below)
My model was written like this:
import torch.nn as nn

import transformers as T

class Translation(nn.Module):
    def __init__(self):
        super().__init__()
        self.encoder_config = T.BertConfig(vocab_size=30,
                                           hidden_size=128,
                                           num_hidden_layers=4,
                                           num_attention_heads=4,
                                           intermediate_size=512,
                                           max_position_embeddings=512,
                                           output_hidden_states=True)
        self.decoder_config = T.BertConfig(vocab_size=30,
                                           hidden_size=128,
                                           num_hidden_layers=4,
                                           num_attention_heads=4,
                                           intermediate_size=512,
                                           max_position_embeddings=512,
                                           output_hidden_states=True,
                                           is_decoder=True,
                                           add_cross_attention=True)
        self.encoder = T.models.bert_generation.BertGenerationEncoder(self.encoder_config)
        self.decoder = T.models.bert_generation.BertGenerationDecoder(self.decoder_config)
        self.transformer = T.models.encoder_decoder.EncoderDecoderModel(encoder=self.encoder, decoder=self.decoder)

    def forward(self, input_ids, encoder_attention_mask=None, labels=None):
        output = self.transformer(input_ids=input_ids,
                                  attention_mask=encoder_attention_mask,
                                  # decoder_input_ids=decoder_input_ids,  # add this line in v4.11.3
                                  labels=labels)
        return output.loss
The tasks I am working on is:
* [ ] an official GLUE/SQUaD task: (give the name)
* [x] my own task or dataset: (give details below)
I am working on a seq2seq task which "summarizes" a long input sequence into a short one.
## To reproduce
Steps to reproduce the behavior:
I trained the model in v4.11.3, v4.12.0 and v 4.15.0, the problem seemed to only appear since v4.12.0
I used the following tokenization:
{'CLS': 1, 'SEP': 2, 'PAD': 0, 'A': 3, 'B': 4, 'C': 5...}
1. in v4.11.3, I set `labels` to [1,3,4,5,...,2], `decoder_input_ids` to [1,3,4,5,...,2], when calculating loss in BertGenerationDecoder, the model finds the crossEntropy between `labels[:, 1:]` and `prediction_scores[:, :-1, :]` as stated in the source code of BertGenerationDecoder
2. since v4.12.0, I changed `labels` to [3,4,5,...,2] and the automatically generated `decoder_input_ids` are [1,3,4,5,...], when calculating loss in EncoderDecoderModel, the model finds the crossEntropy between `labels` and `decoder_outputs.logits` as stated in the source code of EncoderDecoderModel. So I think 1 and 2 are calculating the same thing.
3. I trained the model using pytorch-lightning trainer, but I don't think it caused the problem
4. I've also tried to provide `decoder_input_ids` to the model in v4.12.0, which should disable automatic generation of `decoder_input_ids` from `labels`, but the problem still exist.
5. I've also tried to calculate the loss myself in both v4.11.3 and v4.12.0; there was no problem in v4.11.3, but the problem was present in v4.12.0
<!-- If you have code snippets, error messages, stack traces please provide them here as well.
Important! Use code tags to correctly format your code. See https://help.github.com/en/github/writing-on-github/creating-and-highlighting-code-blocks#syntax-highlighting
Do not use screenshots, as they are hard to read and (more importantly) don't allow others to copy-and-paste your code.-->
## Expected behavior
When I trained the model in v4.11.3, everything was fine, but in v4.12.0, the loss curve became strange:


blue curve is v4.11.3, red curve is v4.12.0. Both curves become flat after several epochs, so I don't think the blue curve means the model is underfitting.
Generated text also changed from ['CLS', 'B', 'C', 'A', ..., 'Sep'] to ['CLS', 'A', 'A', 'A', ..., 'A'] in v4.12.0
I can use v4.11.3 because it works well but I am just wondering why the update of EncoderDecoderModel in v4.12.0 changed the behavior.
Thanks a lot!
<!-- A clear and concise description of what you would expect to happen. -->
| 12-30-2021 12:19:17 | 12-30-2021 12:19:17 | Hey @zzhongzz,
Thanks for the issue! Sorry this will be quite difficult to debug from my side as I would have to guess what the difference might be. Could you try to find an **easy to reproduce** difference between 4.11.3 and 4.12.0? E.g. something like
```python
dummy_inputs = # some input_ids
lables = # some labels
encoder_decoder_model = EncoderDecoderModel.from_pretrained("path/to/your/model")
loss = encoder_decoder(input_ids=dummy_input_ids, labels=labels).loss
```
where the loss is different between 4.11.3 and 4.12.0?
Thank you!<|||||>I tried to train a model in v4.11.3, then load it and calculate the loss:
encoder = transformers.models.bert_generation.BertGenerationEncoder.from_pretrained('encoderModel.pt')
decoder = transformers.models.bert_generation.BertGenerationDecoder.from_pretrained('decoderModel.pt')
encoder_decoder_model = transformers.models.encoder_decoder.EncoderDecoderModel(encoder=encoder, decoder=decoder)
dummy_input_ids = torch.tensor([[2, 21, 8, 11, 8, 10, 10, 17, 8, 10, 10, 17, 7, 13, 16, 3]])
decoder_input_ids = torch.tensor([[2, 21, 12, 13, 10, 5, 11, 6, 19, 18, 6, 5, 3]])
labels = torch.tensor([[2, 21, 12, 13, 10, 5, 11, 6, 19, 18, 6, 5, 3]])
loss = encoder_decoder_model(input_ids=dummy_input_ids, decoder_input_ids=decoder_input_ids, labels=labels).loss
print(loss)
The output in v4.11.3 is like this:
You are using a model of type bert to instantiate a model of type bert-generation. This is not supported for all configurations of models and can yield errors.
You are using a model of type bert to instantiate a model of type bert-generation. This is not supported for all configurations of models and can yield errors.
tensor(3.0904, grad_fn=<NllLossBackward0>)
The output in v4.15.0 is like this:
You are using a model of type bert to instantiate a model of type bert-generation. This is not supported for all configurations of models and can yield errors.
You are using a model of type bert to instantiate a model of type bert-generation. This is not supported for all configurations of models and can yield errors.
/home/zzhong/miniconda3/lib/python3.9/site-packages/transformers/models/encoder_decoder/modeling_encoder_decoder.py:524: FutureWarning: Version v4.12.0 introduces a better way to train encoder-decoder models by computing the loss inside the encoder-decoder framework rather than in the decoder itself. You may observe training discrepancies if fine-tuning a model trained with versions anterior to 4.12.0. The decoder_input_ids are now created based on the labels, no need to pass them yourself anymore.
warnings.warn(DEPRECATION_WARNING, FutureWarning)
tensor(3.2755, grad_fn=<NllLossBackward0>)
The loss is different, then I remove `decoder_input_ids`, and remove `CLS` from `label`:
dummy_input_ids = torch.tensor([[2, 21, 8, 11, 8, 10, 10, 17, 8, 10, 10, 17, 7, 13, 16, 3]])
labels = torch.tensor([[21, 12, 13, 10, 5, 11, 6, 19, 18, 6, 5, 3]])
encoder_decoder_model.config.decoder_start_token_id = 2
encoder_decoder_model.config.pad_token_id = 0
loss = encoder_decoder_model(input_ids=dummy_input_ids, labels=labels).loss
print(loss)
You are using a model of type bert to instantiate a model of type bert-generation. This is not supported for all configurations of models and can yield errors.
You are using a model of type bert to instantiate a model of type bert-generation. This is not supported for all configurations of models and can yield errors.
/home/zzhong/miniconda3/lib/python3.9/site-packages/transformers/models/encoder_decoder/modeling_encoder_decoder.py:524: FutureWarning: Version v4.12.0 introduces a better way to train encoder-decoder models by computing the loss inside the encoder-decoder framework rather than in the decoder itself. You may observe training discrepancies if fine-tuning a model trained with versions anterior to 4.12.0. The decoder_input_ids are now created based on the labels, no need to pass them yourself anymore.
warnings.warn(DEPRECATION_WARNING, FutureWarning)
tensor(2.9198, grad_fn=<NllLossBackward0>)
The loss is still different, then I tried to calculate the loss as it in BertGenerationDecoder:
dummy_input_ids = torch.tensor([[2, 21, 8, 11, 8, 10, 10, 17, 8, 10, 10, 17, 7, 13, 16, 3]])
decoder_input_ids = torch.tensor([[2, 21, 12, 13, 10, 5, 11, 6, 19, 18, 6, 5, 3]])
labels = torch.tensor([[2, 21, 12, 13, 10, 5, 11, 6, 19, 18, 6, 5, 3]])
logits = encoder_decoder_model(input_ids=dummy_input_ids, decoder_input_ids=decoder_input_ids, labels=labels).logits
# we are doing next-token prediction; shift prediction scores and input ids by one
shifted_prediction_scores = logits[:, :-1, :].contiguous()
labels = labels[:, 1:].contiguous()
loss_fct = nn.CrossEntropyLoss()
loss = loss_fct(shifted_prediction_scores.view(-1, decoder.config.vocab_size), labels.view(-1))
print(loss)
You are using a model of type bert to instantiate a model of type bert-generation. This is not supported for all configurations of models and can yield errors.
You are using a model of type bert to instantiate a model of type bert-generation. This is not supported for all configurations of models and can yield errors.
/home/zzhong/miniconda3/lib/python3.9/site-packages/transformers/models/encoder_decoder/modeling_encoder_decoder.py:524: FutureWarning: Version v4.12.0 introduces a better way to train encoder-decoder models by computing the loss inside the encoder-decoder framework rather than in the decoder itself. You may observe training discrepancies if fine-tuning a model trained with versions anterior to 4.12.0. The decoder_input_ids are now created based on the labels, no need to pass them yourself anymore.
warnings.warn(DEPRECATION_WARNING, FutureWarning)
tensor(2.8944, grad_fn=<NllLossBackward0>)
Still different, then I switch back to v 4.11.3:
You are using a model of type bert to instantiate a model of type bert-generation. This is not supported for all configurations of models and can yield errors.
You are using a model of type bert to instantiate a model of type bert-generation. This is not supported for all configurations of models and can yield errors.
tensor(3.0904, grad_fn=<NllLossBackward0>)
It's now the same as my first try. So I think maybe there is something different in the model itself between 4.11.3 and 4.15.0 (or 4.12.0, they behave the same).<|||||>Hi,
Thanks for reporting. We indeed changed the behaviour of `EncoderDecoderModel` in v4.12. I've investigated this by training an `EncoderDecoderModel` in native PyTorch with the weights of 'bert-base-uncased' used to initialize the weights of the encoder as well as those of the decoder.
As always, I pick a tiny dataset and see if the model is able to properly overfit it (as this is a great way to [debug neural networks](http://karpathy.github.io/2019/04/25/recipe/)). For me, it seems to work fine, loss is going down nicely and the ROUGE metric is going up, after 30 epochs I get the following:
```
Train ROUGE precision: 0.4337499999999999
Train ROUGE recall: 0.9916624999999999
Train ROUGE F1: 0.5708249999999999
```
Here's the notebook: https://colab.research.google.com/drive/1YSfpbZbrjxpF11PU-b_dtO_OceRRlVBb?usp=sharing
However, I'm not able to reproduce this with `BertGeneration`, cc @patrickvonplaten. I guess that if you're going for 'bert-base-uncased', it makes more sense to use `BertModel` rather than `BertGeneration`. <|||||>Hi @NielsRogge,
Inspired by your last sentence (though I didn't use any pretrained model in my case),
> However, I'm not able to reproduce this with `BertGeneration`, cc @patrickvonplaten. I guess that if you're going for 'bert-base-uncased', it makes more sense to use `BertModel` rather than `BertGeneration`.
I did an experiment using `BertModel` as the encoder and `BertLMHeadModel` as the decoder in v4.15.0, and left all other settings unchanged, the problem seemed to disappear. So would it be a bug in `BertGeneration`?

The blue curve is `BertModel`, the red curve is `BertGeneration`.
<|||||>Hey @zzhongzz,
Could you provide a training command I could run to see what could be the problem with `BertGeneration` ? <|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored. |
transformers | 14,988 | closed | Adding `num_return_sequences` support for text2text generation. | Co-Authored-By: Enze <[email protected]>
# What does this PR do?
Superseed https://github.com/huggingface/transformers/pull/14411 and adds support
for multiple returned sequences from generate in t2t pipeline.
<!--
Congratulations! You've made it this far! You're not quite done yet though.
Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution.
Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change.
Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost.
-->
<!-- Remove if not applicable -->
Fixes # (issue)
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/master/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/master/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
<!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @
If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**.
Please tag fewer than 3 people.
Models:
- albert, bert, xlm: @LysandreJik
- blenderbot, bart, marian, pegasus, encoderdecoder, t5: @patrickvonplaten, @patil-suraj
- longformer, reformer, transfoxl, xlnet: @patrickvonplaten
- fsmt: @stas00
- funnel: @sgugger
- gpt2: @patrickvonplaten, @LysandreJik
- rag: @patrickvonplaten, @lhoestq
- tensorflow: @LysandreJik
Library:
- benchmarks: @patrickvonplaten
- deepspeed: @stas00
- ray/raytune: @richardliaw, @amogkam
- text generation: @patrickvonplaten
- tokenizers: @n1t0, @LysandreJik
- trainer: @sgugger
- pipelines: @LysandreJik
Documentation: @sgugger
HF projects:
- datasets: [different repo](https://github.com/huggingface/datasets)
- rust tokenizers: [different repo](https://github.com/huggingface/tokenizers)
Examples:
- maintained examples (not research project or legacy): @sgugger, @patil-suraj
- research_projects/bert-loses-patience: @JetRunner
- research_projects/distillation: @VictorSanh
--> | 12-30-2021 10:31:20 | 12-30-2021 10:31:20 | |
transformers | 14,987 | closed | the shape of trainer.predict().predictions is inconsistent with the input dataset | when I run under the debug mode, the length of dataset is 500, but the length of prediction is 498, I can't find the causes.... | 12-30-2021 09:40:08 | 12-30-2021 09:40:08 | This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored. |
transformers | 14,986 | closed | can I use different pre-trained model for autoTokenizer and autoModel? | In the train_config.json file, I set:
"model_name_or_path": "ckiplab/albert-tiny-chinese",
"tokenizer_name": "bert-base-chinese",
this also makes sense, right? | 12-30-2021 09:36:58 | 12-30-2021 09:36:58 | This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored.<|||||>Sorry just seeing this now - you can use different tokenizer/models, but be aware that if they were not trained together then the embeddings of the model will not correspond to the IDs output by the tokenizer. The result will be gibberish, unfortunately.<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored. |
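Following up on the note above about mismatched tokenizer/model pairs, a small illustrative check (model names taken from the question; the point is that the two vocabularies were not trained together, so the IDs will not line up):
```python
from transformers import AutoTokenizer, AutoModel

tokenizer = AutoTokenizer.from_pretrained("bert-base-chinese")
model = AutoModel.from_pretrained("ckiplab/albert-tiny-chinese")

# Even if the sizes happen to match, token id i in one vocabulary does not refer to
# the same symbol as row i of the other model's embedding matrix.
print(len(tokenizer), model.get_input_embeddings().num_embeddings)
```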
transformers | 14,985 | closed | very large model on multi gpu | I converted XLMR-xxl from fairseq to transformers, and the model is too large to load on one GPU (the model is about 40 GB). I have 4 V100-32GB GPUs; how can I fix this problem? How can I split the model and load it onto 2 GPUs? | 12-30-2021 05:40:38 | 12-30-2021 05:40:38 | Hi @ahtamjan, I would advise you to read this very complete document that @stas00 created: [Transformers docs: Performance and scalability](https://huggingface.co/docs/transformers/master/en/performance#performance-and-scalability-how-to-fit-a-bigger-model-and-train-it-faster)<|||||>Probably head directly to Deepspeed docs: https://huggingface.co/docs/transformers/master/en/main_classes/deepspeed#trainer-deepspeed-integration
I will update the performance doc to make the link as it mentions it but isn't linking to it directly.<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored. |
transformers | 14,984 | closed | Finetune M2M on multiple language pairs | Hi @all, I am able to finetune the M2M model for one lang pair using the below script.
CUDA_VISIBLE_DEVICES=0,1,2,3,6 python -m torch.distributed.run --nproc_per_node=5 run_translation.py --model_name_or_path=m2m100_418M_new_token --do_train --do_eval --source_lang ja --target_lang en --fp16=True --evaluation_strategy epoch --output_dir bigfrall --per_device_train_batch_size=48 --per_device_eval_batch_size=48 --overwrite_output_dir --forced_bos_token “en” --train_file orig_manga/orig/train_exp_frame_50k.json --validation_file orig_manga/orig/valid_exp_frame_50k.json --tokenizer_name tokenizer_new_token --num_train_epochs 50 --save_total_limit=5 --save_strategy=epoch --load_best_model_at_end=True --predict_with_generate
But I cannot find a way to finetune the model on more than one language pair simultaneously, which I could easily do using FairSeq.
E.g. now I want to finetune it on ja-en and ja-zh pairs. How do I pass both of these language pairs? I could not find an option in the run_translation.py script. Please help. | 12-30-2021 04:47:14 | 12-30-2021 04:47:14 | hi, I don't think it's possible :( But do you plan to make your model public?<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored.<|||||>I also want to do this, do you solve it? @Jourdelune <|||||>No :(<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored. |
transformers | 14,983 | closed | Fix Code block speech pretraining example | # What does this PR do?
<!--
Congratulations! You've made it this far! You're not quite done yet though.
Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution.
Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change.
Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost.
-->
<!-- Remove if not applicable -->
Fixes # (issue)
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/master/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/master/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
<!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @
If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**.
Please tag fewer than 3 people.
Models:
- albert, bert, xlm: @LysandreJik
- blenderbot, bart, marian, pegasus, encoderdecoder, t5: @patrickvonplaten, @patil-suraj
- longformer, reformer, transfoxl, xlnet: @patrickvonplaten
- fsmt: @stas00
- funnel: @sgugger
- gpt2: @patrickvonplaten, @LysandreJik
- rag: @patrickvonplaten, @lhoestq
- tensorflow: @LysandreJik
Library:
- benchmarks: @patrickvonplaten
- deepspeed: @stas00
- ray/raytune: @richardliaw, @amogkam
- text generation: @patrickvonplaten
- tokenizers: @n1t0, @LysandreJik
- trainer: @sgugger
- pipelines: @LysandreJik
Documentation: @sgugger
HF projects:
- datasets: [different repo](https://github.com/huggingface/datasets)
- rust tokenizers: [different repo](https://github.com/huggingface/tokenizers)
Examples:
- maintained examples (not research project or legacy): @sgugger, @patil-suraj
- research_projects/bert-loses-patience: @JetRunner
- research_projects/distillation: @VictorSanh
-->
| 12-29-2021 22:15:04 | 12-29-2021 22:15:04 | |
transformers | 14,982 | closed | Resubmit changes after rebase to master | # What does this PR do?
This PR proposes to add a new section to TorchScript documentation "Deploying HuggingFace TorchScript models on AWS using the Neuron SDK".
Fixes # issue [14425](https://github.com/huggingface/transformers/issues/14425)
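For context, a rough sketch of the TorchScript export that the new section builds on (the checkpoint name is an assumption; the Neuron-specific step would replace the tracing call with the Neuron SDK's tracer, e.g. `torch.neuron.trace`, which should be checked against the AWS Neuron documentation):
```python
import torch
from transformers import BertModel, BertTokenizer

tokenizer = BertTokenizer.from_pretrained("bert-base-uncased")
model = BertModel.from_pretrained("bert-base-uncased", torchscript=True)
model.eval()

inputs = tokenizer("Hello, world!", return_tensors="pt")
# Plain TorchScript tracing; on AWS Inferentia this call would be swapped for the Neuron tracer.
traced = torch.jit.trace(model, (inputs["input_ids"], inputs["attention_mask"]))
torch.jit.save(traced, "traced_bert.pt")
```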
## Before submitting
- [x] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [x] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [x] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/master/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/master/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
@philschmid , @LysandreJik
| 12-29-2021 20:32:29 | 12-29-2021 20:32:29 | @philschmid @LysandreJik I carried over all my work from the previous serialization.rst to serialization.mdx file per rebase to master. Looking forward to the approval and merge to master branch. <|||||>Hi @LysandreJik @philschmid please review this PR and let me know of any suggestions. Thank you. |
transformers | 14,981 | closed | Fixing a pathological case for slow tokenizers | # What does this PR do?
Fixes issue found here for slow tokenizers:
https://github.com/huggingface/tokenizers/issues/848
When using arbitrary tokens, a bug could occur where we would seed the new character, even when the lookahead attempted to skip over that part of the text. Thus we could have an extra match that didn't fit.
The other bug was that, during the lookahead, we could undermatch (since we were iterating on new characters before checking termination).
The test added covers both cases.
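For context, a schematic sketch of the kind of setup involved (the actual pathological strings live in the new test; this snippet is only illustrative):
```python
from transformers import BertTokenizer

tok = BertTokenizer.from_pretrained("bert-base-uncased")
# Arbitrary added tokens that partially overlap each other and the surrounding text
# are what exercised the buggy lookahead in the slow tokenizer's splitting logic.
tok.add_tokens(["[ABC]", "[ABCD]"])
print(tok.tokenize("some text [ABC] and [ABCD] together"))
```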
<!--
Congratulations! You've made it this far! You're not quite done yet though.
Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution.
Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change.
Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost.
-->
<!-- Remove if not applicable -->
Fixes # (issue)
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/master/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/master/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
<!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @
If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**.
Please tag fewer than 3 people.
Models:
- albert, bert, xlm: @LysandreJik
- blenderbot, bart, marian, pegasus, encoderdecoder, t5: @patrickvonplaten, @patil-suraj
- longformer, reformer, transfoxl, xlnet: @patrickvonplaten
- fsmt: @stas00
- funnel: @sgugger
- gpt2: @patrickvonplaten, @LysandreJik
- rag: @patrickvonplaten, @lhoestq
- tensorflow: @LysandreJik
Library:
- benchmarks: @patrickvonplaten
- deepspeed: @stas00
- ray/raytune: @richardliaw, @amogkam
- text generation: @patrickvonplaten
- tokenizers: @n1t0, @LysandreJik
- trainer: @sgugger
- pipelines: @LysandreJik
Documentation: @sgugger
HF projects:
- datasets: [different repo](https://github.com/huggingface/datasets)
- rust tokenizers: [different repo](https://github.com/huggingface/tokenizers)
Examples:
- maintained examples (not research project or legacy): @sgugger, @patil-suraj
- research_projects/bert-loses-patience: @JetRunner
- research_projects/distillation: @VictorSanh
--> | 12-29-2021 18:29:51 | 12-29-2021 18:29:51 | |
transformers | 14,980 | closed | [Generate] correct encoder_outputs are passed without attention_mask | # What does this PR do?
<!--
Congratulations! You've made it this far! You're not quite done yet though.
Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution.
Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change.
Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost.
-->
<!-- Remove if not applicable -->
Very very edge case is solved in this PR which occurs if `"encoder_outputs"` is passed in combination without passing `"attention_mask"`, but a model that accepts an `"attention_mask"`.
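A minimal sketch of the scenario (using t5-small purely as a stand-in checkpoint):
```python
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

tokenizer = AutoTokenizer.from_pretrained("t5-small")
model = AutoModelForSeq2SeqLM.from_pretrained("t5-small")

inputs = tokenizer("translate English to German: hello", return_tensors="pt")
encoder_outputs = model.get_encoder()(inputs.input_ids)

# Precomputed encoder_outputs are passed, but no attention_mask is given even though
# the model accepts one: the edge case addressed here.
generated = model.generate(encoder_outputs=encoder_outputs)
print(tokenizer.batch_decode(generated, skip_special_tokens=True))
```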
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/master/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/master/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
<!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @
If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**.
Please tag fewer than 3 people.
Models:
- albert, bert, xlm: @LysandreJik
- blenderbot, bart, marian, pegasus, encoderdecoder, t5: @patrickvonplaten, @patil-suraj
- longformer, reformer, transfoxl, xlnet: @patrickvonplaten
- fsmt: @stas00
- funnel: @sgugger
- gpt2: @patrickvonplaten, @LysandreJik
- rag: @patrickvonplaten, @lhoestq
- tensorflow: @LysandreJik
Library:
- benchmarks: @patrickvonplaten
- deepspeed: @stas00
- ray/raytune: @richardliaw, @amogkam
- text generation: @patrickvonplaten
- tokenizers: @n1t0, @LysandreJik
- trainer: @sgugger
- pipelines: @LysandreJik
Documentation: @sgugger
HF projects:
- datasets: [different repo](https://github.com/huggingface/datasets)
- rust tokenizers: [different repo](https://github.com/huggingface/tokenizers)
Examples:
- maintained examples (not research project or legacy): @sgugger, @patil-suraj
- research_projects/bert-loses-patience: @JetRunner
- research_projects/distillation: @VictorSanh
-->
| 12-29-2021 17:13:36 | 12-29-2021 17:13:36 | |
transformers | 14,979 | closed | Add `with torch.no_grad()` to DistilBERT integration test forward pass | # What does this PR do?
This PR encapsulates forward passes in DistilBERT unit tests with `with torch.no_grad():` as per #14642.
Also, a unit test seemed to terminate early due to a `return` in a loop; this was replaced with `continue`. (If this was not what was intended, I will rollback the modification.)
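For reference, the pattern being applied, shown on a standalone forward pass (the checkpoint name is illustrative, not the tests' exact inputs):
```python
import torch
from transformers import DistilBertModel, DistilBertTokenizer

tokenizer = DistilBertTokenizer.from_pretrained("distilbert-base-uncased")
model = DistilBertModel.from_pretrained("distilbert-base-uncased")
inputs = tokenizer("Hello, world!", return_tensors="pt")

# Wrapping the forward pass disables gradient tracking, which is all the integration
# tests need and keeps their memory footprint down.
with torch.no_grad():
    outputs = model(**inputs)
print(outputs.last_hidden_state.shape)
```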
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [x] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/master/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/master/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
@NielsRogge @LysandreJik @sgugger | 12-29-2021 16:09:48 | 12-29-2021 16:09:48 | Thank you for the feedback and confirmation! I've reverted the change. <|||||>Hello @jaketae, were your last changes committed? I still see a lot of non-integration tests with `torch.no_grad`<|||||>@LysandreJik Apologies for the delay! I thought I had properly committed and pushed the changes but seems like that wasn't the case. Reverted most of the `no_grad` context and modified only the integration test. Thank you! |
transformers | 14,978 | closed | How to instantiate custom T5ForConditionalGeneration | ## Information
I have problems using `T5ForConditionalGeneration`.
I want to evaluate the performance of the model without any information from the encoder, i.e., figure out the lower bound of the decoder's performance. To do so, I use the model as follows during both training and validation:
```python
bs = labels.shape[0]
outputs = model(
input_ids=torch.zeros(bs, 1).long(),
attention_mask=torch.ones(bs, 1).float(),
labels=labels
)
```
where labels look like this:
``` python
[[ 103, 25, 214, 125, 34, 1416, 114, 3, 58, 1],
[ 11, 125, 103, 62, 217, 3, 5, 1, -100, -100],
[ 131, 600, 631, 21, 80, 568, 3, 5, 1, -100]]
```
so the sequences, as shown, end with `</s>` and have padding positions replaced with `-100`.
I take the loss from outputs as `outputs.loss` and get the most probable tokens as `tokens = outputs.logits.argmax(-1)` to calculate token accuracy and get a better notion about what is happening.
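Concretely, the token accuracy is computed roughly like this (a sketch; it assumes `labels` is a `torch.LongTensor` and excludes the `-100` padding positions):
```python
tokens = outputs.logits.argmax(-1)                      # (batch_size, seq_len)
mask = labels != -100                                   # ignore padded label positions
token_accuracy = (tokens[mask] == labels[mask]).float().mean().item()
```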
**I do not want to use pre-trained weights**, so I instantiate the model as follows:
``` python
config = AutoConfig.from_pretrained('t5-small')
model = T5ForConditionalGeneration(config)
```
At the beginning of training, everything seems to be OK. The token accuracy is **~41%** and the output distributions make sense (the distribution over the first token in the labels should roughly reflect the frequency of first tokens in the dataset, since the decoder has no information from the encoder). However, after a while of training (~1k steps), the model improves rapidly: train and val losses drop, and accuracy hits **80%**, which is not expected at all. It is absolutely certain about the first token and predicts it correctly (which should not be possible!), so I suspect that the input tokens somehow leak into the predictions or that the masking in the decoder is not causal.
**The issue does not occur when using the pre-trained weights**, i.e., instantiating the model as:
``` python
model = T5ForConditionalGeneration.from_pretrained('t5-small')
```
And the token accuracy is constantly around **49%** (verified after 15k steps).
Any ideas why or what am I doing wrong? @patrickvonplaten
## Environment info
- `transformers` version: 4.9.2
- Python version: 3.7.0
- PyTorch version (GPU?): 1.10.1+cu113 (True)
| 12-29-2021 13:52:03 | 12-29-2021 13:52:03 | |
transformers | 14,977 | closed | Converting tf to torch: How to set custom embedding_size when using load_tf_weights_in_bert? | I'm trying to convert a bert-tiny model to the PyTorch version using the `load_tf_weights_in_bert` script from transformers.
When I executed the function and fed it the right checkpoint and config, the script reported:
ValueError: Pointer shape torch.Size([312]) and array shape (128,) mismatched
After debugging the code I realized that transformers uses "hidden_size" to initialize the embedding layer, which is:

However, the BERT model config uses a separate `embedding_size` to indicate the size of the embedding layer, which differs from the `hidden_size` setting, as shown below:

Since the source code uses `config.hidden_size` to initialize the embedding layer, is there any way I can pass a different number to initialize it, so that I can successfully convert the TF model to PyTorch? Thanks! | 12-28-2021 11:04:57 | 12-28-2021 11:04:57 | Hi,
In that case, I would fork the library, and create a new branch that adjusts the word embeddings to be:
`self.word_embeddings = nn.Embedding(config.vocab_size, config.embedding_size, padding_idx=config.pad_token_id)`
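Depending on how the TF checkpoint is laid out, the patched embeddings may also need a projection back up to `hidden_size` (similar to ALBERT's factorized embeddings). A rough sketch, where `embedding_hidden_mapping` is just an illustrative name and not an existing `transformers` attribute:
```python
self.word_embeddings = nn.Embedding(config.vocab_size, config.embedding_size, padding_idx=config.pad_token_id)
self.position_embeddings = nn.Embedding(config.max_position_embeddings, config.embedding_size)
self.token_type_embeddings = nn.Embedding(config.type_vocab_size, config.embedding_size)
self.LayerNorm = nn.LayerNorm(config.embedding_size, eps=config.layer_norm_eps)

# project the embedding output up to hidden_size so the encoder shapes still match
self.embedding_hidden_mapping = nn.Linear(config.embedding_size, config.hidden_size)
```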
If you then convert the checkpoint from your new branch, it will work.<|||||>Thanks. Will it be considered to add this feature into the main branch? Or shall I create a MR for it?<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored.<|||||>I don't think we are going to include this in the main branch. Therefore, closing this issue.
If you have comments, feel free to reopen. |
transformers | 14,976 | closed | Model stopped training once I introduced << report_to = 'wandb' >> in TrainingArguments | I am downloading the model https://huggingface.co/microsoft/Multilingual-MiniLM-L12-H384/tree/main microsoft/Multilingual-MiniLM-L12-H384 and then using it.
Transformer Version: '4.11.3'
I have written the below code:
```
import wandb
import transformers as tr
wandb.login()
%env WANDB_LOG_MODEL=true
model = tr.BertForSequenceClassification.from_pretrained("/home/pc/minilm_model",num_labels=2)
model.to(device)
print("hello")
training_args = tr.TrainingArguments(
report_to = 'wandb',
output_dir='/home/pc/proj/results2', # output directory
num_train_epochs=10, # total number of training epochs
per_device_train_batch_size=16, # batch size per device during training
per_device_eval_batch_size=32, # batch size for evaluation
learning_rate=2e-5,
warmup_steps=1000, # number of warmup steps for learning rate scheduler
weight_decay=0.01, # strength of weight decay
logging_dir='./logs', # directory for storing logs
logging_steps=1000,
evaluation_strategy="epoch",
save_strategy="no"
)
print("hello")
trainer = tr.Trainer(
model=model, # the instantiated 🤗 Transformers model to be trained
args=training_args, # training arguments, defined above
train_dataset=train_data, # training dataset
eval_dataset=val_data, # evaluation dataset
compute_metrics=compute_metrics
)
```
After executing this, the model gets stuck at this point:
***** Running training *****
```
Num examples = 12981
Num Epochs = 20
Instantaneous batch size per device = 16
Total train batch size (w. parallel, distributed & accumulation) = 32
Gradient Accumulation steps = 1
Total optimization steps = 8120
Automatic Weights & Biases logging enabled, to disable set os.environ["WANDB_DISABLED"] = "true"
```
**What could be the possible solution?**
| 12-29-2021 11:00:40 | 12-29-2021 11:00:40 | @pratikchhapolika can you share a minimum example or a colab? Does it run ok with W&B disabled?
```
os.environ["WANDB_DISABLED"] = "true"
```
Also make sure you are on the latest version of W&B,
```
pip install wandb --upgrade
```<|||||>> @pratikchhapolika can you share a minimum example or a colab? Does it run ok with W&B disabled?
>
> ```
> os.environ["WANDB_DISABLED"] = "true"
> ```
>
> Also make sure you are on the latest version of W&B,
>
> ```
> pip install wandb --upgrade
> ```
Yes, when I set os.environ["WANDB_DISABLED"] = "true", it runs fine. <|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored. |
transformers | 14,975 | closed | Custom constructed and trained `tokenizers.Tokenizer` for Albert error. | I created and trained a custom tokenizer for BERT following the example on this page: https://huggingface.co/docs/tokenizers/python/latest/pipeline.html
```py
from tokenizers import Tokenizer
from tokenizers.models import WordPiece
bert_tokenizer = Tokenizer(WordPiece(unk_token="[UNK]"))
from tokenizers import normalizers
from tokenizers.normalizers import Lowercase, NFD, StripAccents
bert_tokenizer.normalizer = normalizers.Sequence([NFD(), Lowercase(), StripAccents()])
from tokenizers.pre_tokenizers import Whitespace
bert_tokenizer.pre_tokenizer = Whitespace()
from tokenizers.processors import TemplateProcessing
bert_tokenizer.post_processor = TemplateProcessing(
single="[CLS] $A [SEP]",
pair="[CLS] $A [SEP] $B:1 [SEP]:1",
special_tokens=[
("[CLS]", 1),
("[SEP]", 2),
],
)
from tokenizers.trainers import WordPieceTrainer
trainer = WordPieceTrainer(
vocab_size=30522, special_tokens=["[UNK]", "[CLS]", "[SEP]", "[PAD]", "[MASK]"]
)
# files = [f"data/wikitext-103-raw/wiki.{split}.raw" for split in ["test", "train", "valid"]]
# bert_tokenizer.train(files, trainer)
# bert_tokenizer.save("data/bert-wiki.json")
bert_tokenizer.train(f_path_list, trainer) # my own file list
bert_tokenizer.save(tkn_path, pretty=True) # some .json on disk
```
Then, I loaded it from disk with this:
```py
tokenizer: Tokenizer = Tokenizer.from_file(tkn_path)
```
But when I use it to tokenize some text, an error appears:
```py
def tokenize_function(examples: Batch):
# Error: 'tokenizers.Tokenizer' object is not callable
return tokenizer(examples['text'], return_special_tokens_mask=True)
with training_args.main_process_first(desc="dataset map tokenization"):
tokenized_datasets = raw_datasets.map(
tokenize_function,
batched=True,
num_proc=8,
remove_columns=raw_datasets["train"].column_names,
desc="...",
)
```
Then I replaced `tokenizer(...)` with `tokenizer.encode(...)`, and another error appears:
```py
def tokenize_function(examples: Batch):
# Error: Provided `function` which is applied to all elements of table returns a variable of type <class 'list'>
return tokenizer.encode(examples['text'])
with training_args.main_process_first(desc="dataset map tokenization"):
tokenized_datasets = raw_datasets.map(
tokenize_function,
batched=True,
num_proc=8,
remove_columns=raw_datasets["train"].column_names,
desc="...",
)
```
Then I changed the way I load the tokenizer to the following:
```py
tokenizer = PreTrainedTokenizerFast(tokenizer_file=tkn_path)
```
But when the tokenized data is passed into the model, this error appears:
```py
ValueError: This tokenizer does not have a mask token which is necessary for masked language modeling. You should pass `mlm=False` to train on causal language modeling instead.
```
How can I actually train ALBERT with my own tokenizer?
| 12-29-2021 09:37:27 | 12-29-2021 09:37:27 | Solved.
A tokenizer created, trained, and saved with the `tokenizers` library can be loaded in the `transformers` library in this way:
```
tokenizer = PreTrainedTokenizerFast(tokenizer_object=Tokenizer.from_file(tkn_path))
```
Since I'm using the tokenizer for MLM tasks, I should load it this way:
```
tokenizer = BertTokenizerFast(tokenizer_object=Tokenizer.from_file(tkn_path))
```
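A quick sanity check (my own sketch, reusing `tkn_path` from above) that the wrapper now exposes the special tokens the MLM data collator needs:
```python
from tokenizers import Tokenizer
from transformers import BertTokenizerFast, DataCollatorForLanguageModeling

tokenizer = BertTokenizerFast(tokenizer_object=Tokenizer.from_file(tkn_path))
print(tokenizer.mask_token, tokenizer.pad_token)  # expect "[MASK]" and "[PAD]" instead of None

# this no longer raises the "does not have a mask token" error
collator = DataCollatorForLanguageModeling(tokenizer=tokenizer, mlm=True, mlm_probability=0.15)
```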
Check this out: https://colab.research.google.com/github/huggingface/notebooks/blob/master/examples/tokenizer_training.ipynb |
transformers | 14,974 | closed | How to save the fine-tuned model | Hi,
I save the fine-tuned model with the `tokenizer.save_pretrained(my_dir)` and `model.save_pretrained(my_dir)`. Meanwhile, the model performed well during the fine-tuning(i.e., the loss remained stable at **0.2790**). And then, I use the `model_name.from_pretrained(my_dir)` and ` tokenizer_name.from_pretrained(my_dir)` to load my fine-tunned model, and test it in the training data. The value of loss surprised me because it was so high, i.e., **4.7**. I don't know why. Can you help me? Please reply to me at your convenience. Thank you very much. :) | 12-29-2021 08:35:53 | 12-29-2021 08:35:53 | This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored. |
transformers | 14,973 | closed | Issue with Jiva/xlm-roberta-large-it-mnli | When running the below piece of code:
```python
from transformers import pipeline

classifier = pipeline("zero-shot-classification",
                      model="Jiva/xlm-roberta-large-it-mnli", device=0, use_fast=True, multi_label=True)
```
the following error occurs:
```
OSError: Can't load config for 'Jiva/xlm-roberta-large-it-mnli'. Make sure that:
- 'Jiva/xlm-roberta-large-it-mnli' is a correct model identifier listed on 'https://huggingface.co/models'
- or 'Jiva/xlm-roberta-large-it-mnli' is the correct path to a directory containing a config.json file
```
Any idea how to fix this issue?
Thank you in advance
Francesca | 12-29-2021 08:35:53 | 12-29-2021 08:35:53 | i was able to fix this issue by installing the correct version |
transformers | 14,972 | closed | Ability to save model outcomes in tabular format CSV file? | I am downloading the model https://huggingface.co/microsoft/Multilingual-MiniLM-L12-H384/tree/main microsoft/Multilingual-MiniLM-L12-H384 and then using it.
Transformer Version: '4.11.3'
I have written the below code:
```
import transformers as tr

model = tr.BertForSequenceClassification.from_pretrained("/home/pc/minilm_model",num_labels=2)
model.to(device)
print("hello")
training_args = tr.TrainingArguments(
output_dir='/home/pc/proj/results2', # output directory
num_train_epochs=10, # total number of training epochs
per_device_train_batch_size=16, # batch size per device during training
per_device_eval_batch_size=32, # batch size for evaluation
learning_rate=2e-5,
warmup_steps=1000, # number of warmup steps for learning rate scheduler
weight_decay=0.01, # strength of weight decay
logging_dir='./logs', # directory for storing logs
logging_steps=1000,
evaluation_strategy="epoch",
save_strategy="no"
)
print("hello")
trainer = tr.Trainer(
model=model, # the instantiated 🤗 Transformers model to be trained
args=training_args, # training arguments, defined above
train_dataset=train_data, # training dataset
eval_dataset=val_data, # evaluation dataset
compute_metrics=compute_metrics
)
```
Is there a way to retrieve the following (**I do not want to use TensorBoard**):
1. **Training loss** for every epoch
2. **Validation loss** for every epoch
In a tabular format (maybe a CSV file)?
Then I want to plot my training and validation loss curves.
**Please help me with what changes I need to make in the above code, if this is feasible.**
| 12-29-2021 07:42:11 | 12-29-2021 07:42:11 | You need to add your own logging handler.
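For example, a minimal sketch of a `TrainerCallback` that appends every logged metrics dict to a CSV file (the `fieldnames` below are assumptions; adjust them to whatever your `Trainer` actually logs):
```python
import csv
import os

from transformers import TrainerCallback


class CSVLoggerCallback(TrainerCallback):
    """Appends each logged metrics dict (training loss, eval loss, ...) to a CSV file."""

    def __init__(self, path="trainer_logs.csv"):
        self.path = path
        self.fieldnames = ["epoch", "step", "loss", "eval_loss"]  # assumed columns of interest

    def on_log(self, args, state, control, logs=None, **kwargs):
        if logs is None:
            return
        row = {"epoch": state.epoch, "step": state.global_step, **logs}
        file_exists = os.path.exists(self.path)
        with open(self.path, "a", newline="") as f:
            writer = csv.DictWriter(f, fieldnames=self.fieldnames, extrasaction="ignore")
            if not file_exists:
                writer.writeheader()
            writer.writerow(row)


# usage: trainer = tr.Trainer(..., callbacks=[CSVLoggerCallback("losses.csv")])
```
Alternatively, after training you can convert `trainer.state.log_history` (a list of dicts) to a pandas DataFrame and save that as a CSV for plotting.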
Check this: https://github.com/huggingface/transformers/issues/10454<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored. |
transformers | 14,971 | closed | [Request] PerceiverForTokenClassification for NER task | # 🚀 Feature request
I'm reading the [Perceiver IO](https://huggingface.co/blog/perceiver) blog post; it says Perceiver IO can also do NER. So I tried to implement a `PerceiverForTokenClassification` model.
I use `conll2003` dataset to test my model. Here is my implement (the script is modified from [run_ner.py](https://github.com/huggingface/transformers/blob/master/examples/pytorch/token-classification/run_ner.py)): [run_perceiver_ner.py](https://github.com/Sanster/transformers-ocr/blob/master/run_perceiver_ner.py)
Train command:
```bash
python3 run_perceiver_ner.py \
--dataset_name conll2003 \
--output_dir /tmp/perceiver_io_ner \
--logging_steps 50 \
--num_train_epochs 50 \
--learning_rate 0.00005 \
--per_device_train_batch_size 32 \
--d_latents 768 \
--d_model 768 \
--do_train \
--do_eval \
--fp16
```
But I have had no luck making it work; the F1 score is very low:
Perceiver IO trained for 10 epochs:

Perceiver IO trained for 50 epochs:

BERT results for reference:
BERT model trained from scratch for 10 epochs:

BERT model with pretrained weights trained for 10 epochs:

| 12-29-2021 05:44:46 | 12-29-2021 05:44:46 | Hi,
Thanks for your interest in Perceiver and for trying it out on a task that is not implemented yet in the library!
However, for a token classification task such as NER, I would actually just take the same decoder settings as `PerceiverForMaskedLM` (as masked language modeling is also solved as a token classification problem). The only thing that you would need to change is replace the embedding decoder (`PerceiverEmbeddingDecoder`) by a simple linear layer (`nn.Linear`) that takes in the decoder output of shape (batch_size, seq_len, hidden_size) and turns it into a tensor of shape (batch_size, seq_len, num_labels). In other words, I would implement it as follows:
```
from torch import nn

from transformers import PerceiverModel, PerceiverPreTrainedModel
from transformers.models.perceiver.modeling_perceiver import PerceiverBasicDecoder, PerceiverTextPreprocessor
class PerceiverForTokenClassification(PerceiverPreTrainedModel):
def __init__(self, config):
super().__init__(config)
self.num_labels = config.num_labels
text_preprocessor = PerceiverTextPreprocessor(config)
trainable_position_encoding_kwargs_decoder = dict(
num_channels=text_preprocessor.num_channels, index_dims=config.max_position_embeddings
)
self.perceiver = PerceiverModel(
config,
input_preprocessor=text_preprocessor,
decoder=PerceiverBasicDecoder(
config,
output_num_channels=config.d_latents,
output_index_dims=config.max_position_embeddings, # we need to define the seq_len of the inputs beforehand
num_channels=text_preprocessor.num_channels,
qk_channels=8 * 32,
v_channels=text_preprocessor.num_channels,
num_heads=8,
use_query_residual=False,
final_project=False,
trainable_position_encoding_kwargs=trainable_position_encoding_kwargs_decoder,
),
)
self.classifier = nn.Linear(text_preprocessor.num_channels, config.num_labels)
# Initialize weights and apply final processing
self.post_init()
```
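A matching forward pass could look roughly like this (my own sketch to complete the class above, not tested; it assumes the labels are padded to the decoder's output length, with `-100` at positions to ignore):
```python
    def forward(self, input_ids=None, attention_mask=None, labels=None):
        outputs = self.perceiver(inputs=input_ids, attention_mask=attention_mask)
        # with a decoder attached, the decoder output lands in `logits`
        sequence_output = outputs.logits                      # (batch_size, seq_len, num_channels)
        logits = self.classifier(sequence_output)             # (batch_size, seq_len, num_labels)

        loss = None
        if labels is not None:
            # CrossEntropyLoss ignores -100 by default
            loss = nn.CrossEntropyLoss()(logits.view(-1, self.num_labels), labels.view(-1))
        return {"loss": loss, "logits": logits}
```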
Let me know how it goes, if it works well, we can add the model to the library.<|||||>Hi,
I did run a [modified version of your script](https://github.com/NielsRogge/transformers/blob/add_perceiver_ner/examples/pytorch/token-classification/run_ner_perceiver.py) with the model defined above. This is what I got:
```
***** eval metrics *****
epoch = 3.0
eval_LOC_f1 = 0.8895
eval_LOC_number = 11884
eval_LOC_precision = 0.8934
eval_LOC_recall = 0.8856
eval_MISC_f1 = 0.8058
eval_MISC_number = 6078
eval_MISC_precision = 0.8182
eval_MISC_recall = 0.7938
eval_ORG_f1 = 0.8215
eval_ORG_number = 8869
eval_ORG_precision = 0.8187
eval_ORG_recall = 0.8243
eval_PER_f1 = 0.8477
eval_PER_number = 10479
eval_PER_precision = 0.8598
eval_PER_recall = 0.8359
eval_loss = 0.1479
eval_overall_accuracy = 0.9594
eval_overall_f1 = 0.848
eval_overall_precision = 0.8539
eval_overall_recall = 0.8421
eval_runtime = 0:01:25.54
eval_samples = 3250
eval_samples_per_second = 37.99
eval_steps_per_second = 4.757
```
This is using the default settings (i.e. 3 epochs, linear learning rate schedule, etc.). These could perhaps still be improved using a validation loss.<|||||>Thanks for your script, I have reproduced your results and I will try it on my own dataset.
Sharing some results on conll2003 (trained from scratch) using the default settings (i.e. 3 epochs, linear learning rate schedule, lr 5e-5, etc.):
- BERT: f1 67.5%
- PerceiverIO: f1 28.4%<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored. |
transformers | 14,970 | closed | Replace assertion with exception | # What does this PR do?
Replaces `assert` with `ValueError` as per #12789.
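The general shape of the change is along these lines (an illustrative sketch, not the exact lines touched by this PR):
```python
# before
assert padding_idx is not None, "padding_idx must be provided"

# after
if padding_idx is None:
    raise ValueError("padding_idx must be provided")
```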
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [x] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/master/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/master/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
@sgugger @patil-suraj | 12-29-2021 04:28:53 | 12-29-2021 04:28:53 | |
transformers | 14,969 | closed | Documentation in CTRL linksto 404 | In https://huggingface.co/docs/transformers/master/en/model_doc/ctrl#overview, when clicking "See reusing the past in generative models", the user is redirected to a 404 (https://huggingface.co/docs/transformers/master/en/quickstart#using-the-past) | 12-28-2021 17:52:21 | 12-28-2021 17:52:21 | Nice catch! @stevhliu, would you like to take a look at that?<|||||>Looks like a [throwback](https://huggingface.co/transformers/v2.2.0/quickstart.html) to `transformers 2.2.0`! Should we point the user to [`CTRLModel.forward()`](https://huggingface.co/docs/transformers/master/en/model_doc/ctrl#transformers.CTRLModel.forward.past_key_values) where they can get more information about the parameter? It looks like I can also update it to say PyTorch accepts `past_key_values` and TF accepts `past`.
Another option could be to also add a code example in [`CTRLModel`](https://huggingface.co/docs/transformers/master/en/model_doc/ctrl#transformers.CTRLModel).<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored.<|||||>What you propose sounds good to me @stevhliu!<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored.<|||||>This is now fixed after #15615 |
transformers | 14,968 | closed | GPT-2 generate degenerates into producing garbage after a while | ## Environment info
<!-- You can run the command `transformers-cli env` and copy-and-paste its output below.
Don't forget to fill out the missing fields in that output! -->
- `transformers` version: 4.9.2
- Platform: Windows 10
- Python version: 3.8.7
- PyTorch version (GPU?): 1.8.1+cu102
- Tensorflow version (GPU?): 2.6.0
- Using GPU in script?: ?
- Using distributed or parallel set-up in script?: No
### Who can help
@patrickvonplaten, @LysandreJik
## Information
Model I am using is GPT-2. When I call `generate` with a max_length of 100, the output starts out looking correct, but then it degenerates into a bunch of nonsense which isn't simply non-grammatical, but barely English at all (it looks badly garbled). As an example:
> In soccer news, Bob Jackson scored 3 goals for Chlesea on October 12, 2021. "It's been a long time coming, but I'm happy to be here," he said. "I'm excited to get back on the field and be a part of this team. I've been working with the staff to put to minute farereens Ahead optimistic429 Congressionaloute moldedtxt variants ralliedinous afflicted unwilling Prediction isn 315 WWElike flown fingertipsazelstripLLOWPayregor utilized confidently028 GamerGate
The workaround I have now, which largely seems to work, is to do an `rfind` for the rightmost period and cut off everything to the right of it. Of course this is not 100% reliable.
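Concretely, the trimming is just (a sketch of my stopgap):
```python
last_period = generated_text.rfind(".")
if last_period != -1:
    generated_text = generated_text[: last_period + 1]  # keep everything up to the last period
```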
The problem arises when using:
* [ ] the official example scripts: (give details below)
* [X ] my own modified scripts: (give details below)
The tasks I am working on is:
* [ ] an official GLUE/SQUaD task: (give the name)
* [ X] my own task or dataset: (give details below)
## To reproduce
Steps to reproduce the behavior:
This is my code:
```
from transformers import GPT2Tokenizer, GPT2LMHeadModel

tokenizer = GPT2Tokenizer.from_pretrained("gpt2")
# add the EOS token as PAD token to avoid warnings
model = GPT2LMHeadModel.from_pretrained("gpt2", pad_token_id=tokenizer.eos_token_id)
input_ids = tokenizer(msg, return_tensors="pt")
beam_output = model.generate(
**input_ids,
do_sample=True,
max_length=100,
num_beams=5,
temperature=0.7,
early_stopping=True,
no_repeat_ngram_size=2
)
#generated_text = tokenizer.decode(beam_output[0], skip_special_tokens=True)[:280].replace('\n', ' ')
generated_text = tokenizer.decode(beam_output[0], skip_special_tokens=True).replace('\n', ' ')
```
## Expected behavior
I expect generated output of length 100 of grammatical text, even if it is repeated or off-topic. | 12-28-2021 16:55:43 | 12-28-2021 16:55:43 | Hey @demongolem-biz,
Could you try to turn off "beam sample" mode and just use "sample" mode instead? Beam sample mode isn't really common for models such as GPT2.
```py
tokenizer = GPT2Tokenizer.from_pretrained("gpt2")
# add the EOS token as PAD token to avoid warnings
model = GPT2LMHeadModel.from_pretrained("gpt2", pad_token_id=tokenizer.eos_token_id)
input_ids = tokenizer(msg, return_tensors="pt")
beam_output = model.generate(
**input_ids,
do_sample=True,
max_length=100,
num_beams=1,
temperature=0.7,
early_stopping=True,
no_repeat_ngram_size=2
)
#generated_text = tokenizer.decode(beam_output[0], skip_special_tokens=True)[:280].replace('\n', ' ')
generated_text = tokenizer.decode(beam_output[0], skip_special_tokens=True).replace('\n', ' ')
```<|||||>Note how `num_beams` is set to 1 above<|||||>Thanks for the clarification @patrickvonplaten. I see that I get proper English words now and the random blobs of text have disappeared.
transformers | 14,967 | closed | Update run_speech_recognition_seq2seq.py (max_eval_samples instead of train_samples) | # What does this PR do?
<!--
Congratulations! You've made it this far! You're not quite done yet though.
Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution.
Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change.
Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost.
-->
<!-- Remove if not applicable -->
Fixes # (issue)
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/master/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/master/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
<!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @
If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**.
Please tag fewer than 3 people.
Models:
- albert, bert, xlm: @LysandreJik
- blenderbot, bart, marian, pegasus, encoderdecoder, t5: @patrickvonplaten, @patil-suraj
- longformer, reformer, transfoxl, xlnet: @patrickvonplaten
- fsmt: @stas00
- funnel: @sgugger
- gpt2: @patrickvonplaten, @LysandreJik
- rag: @patrickvonplaten, @lhoestq
- tensorflow: @LysandreJik
Library:
- benchmarks: @patrickvonplaten
- deepspeed: @stas00
- ray/raytune: @richardliaw, @amogkam
- text generation: @patrickvonplaten
- tokenizers: @n1t0, @LysandreJik
- trainer: @sgugger
- pipelines: @LysandreJik
Documentation: @sgugger
HF projects:
- datasets: [different repo](https://github.com/huggingface/datasets)
- rust tokenizers: [different repo](https://github.com/huggingface/tokenizers)
Examples:
- maintained examples (not research project or legacy): @sgugger, @patil-suraj
- research_projects/bert-loses-patience: @JetRunner
- research_projects/distillation: @VictorSanh
-->
| 12-28-2021 15:25:11 | 12-28-2021 15:25:11 | |
transformers | 14,966 | closed | huggingface_pytorch-pretrained-bert_bert.ipynb -- RuntimeError: Cannot find callable bertTokenizer in hubconf | ## Environment info
Google Colab
https://colab.research.google.com/github/pytorch/pytorch.github.io/blob/master/assets/hub/huggingface_pytorch-pretrained-bert_bert.ipynb#scrollTo=bb1RXLUYj_Wv
- Using GPU in script?: yes
Models:
BERT
## To reproduce
Steps to reproduce the behavior:
```
### First, tokenize the input
import torch
tokenizer = torch.hub.load('huggingface/pytorch-pretrained-BERT', 'bertTokenizer', 'bert-base-cased', do_basic_tokenize=False)
# Tokenized input
text = "[CLS] Who was Jim Henson ? [SEP] Jim Henson was a puppeteer [SEP]"
tokenized_text = tokenizer.tokenize(text)
indexed_tokens = tokenizer.convert_tokens_to_ids(tokenized_text)
```
Get error stack
```
Downloading: "https://github.com/huggingface/pytorch-pretrained-BERT/archive/master.zip" to /root/.cache/torch/hub/master.zip
---------------------------------------------------------------------------
RuntimeError Traceback (most recent call last)
<ipython-input-2-0cb4dd772680> in <module>()
1 ### First, tokenize the input
2 import torch
----> 3 tokenizer = torch.hub.load('huggingface/pytorch-pretrained-BERT', 'bertTokenizer', 'bert-base-cased', do_basic_tokenize=False)
4
5 # Tokenized input
2 frames
/usr/local/lib/python3.7/dist-packages/torch/hub.py in load(repo_or_dir, model, source, force_reload, verbose, skip_validation, *args, **kwargs)
397 repo_or_dir = _get_cache_or_reload(repo_or_dir, force_reload, verbose, skip_validation)
398
--> 399 model = _load_local(repo_or_dir, model, *args, **kwargs)
400 return model
401
/usr/local/lib/python3.7/dist-packages/torch/hub.py in _load_local(hubconf_dir, model, *args, **kwargs)
425 hub_module = import_module(MODULE_HUBCONF, hubconf_path)
426
--> 427 entry = _load_entry_from_hubconf(hub_module, model)
428 model = entry(*args, **kwargs)
429
/usr/local/lib/python3.7/dist-packages/torch/hub.py in _load_entry_from_hubconf(m, model)
233
234 if func is None or not callable(func):
--> 235 raise RuntimeError('Cannot find callable {} in hubconf'.format(model))
236
237 return func
RuntimeError: Cannot find callable bertTokenizer in hubconf
``` | 12-28-2021 15:10:55 | 12-28-2021 15:10:55 | This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored. |
transformers | 14,965 | closed | [Speech Recognition Examples] Update README.md | # What does this PR do?
<!--
Congratulations! You've made it this far! You're not quite done yet though.
Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution.
Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change.
Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost.
-->
<!-- Remove if not applicable -->
Minor updates in the README.md
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/master/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/master/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
<!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @
If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**.
Please tag fewer than 3 people.
Models:
- albert, bert, xlm: @LysandreJik
- blenderbot, bart, marian, pegasus, encoderdecoder, t5: @patrickvonplaten, @patil-suraj
- longformer, reformer, transfoxl, xlnet: @patrickvonplaten
- fsmt: @stas00
- funnel: @sgugger
- gpt2: @patrickvonplaten, @LysandreJik
- rag: @patrickvonplaten, @lhoestq
- tensorflow: @LysandreJik
Library:
- benchmarks: @patrickvonplaten
- deepspeed: @stas00
- ray/raytune: @richardliaw, @amogkam
- text generation: @patrickvonplaten
- tokenizers: @n1t0, @LysandreJik
- trainer: @sgugger
- pipelines: @LysandreJik
Documentation: @sgugger
HF projects:
- datasets: [different repo](https://github.com/huggingface/datasets)
- rust tokenizers: [different repo](https://github.com/huggingface/tokenizers)
Examples:
- maintained examples (not research project or legacy): @sgugger, @patil-suraj
- research_projects/bert-loses-patience: @JetRunner
- research_projects/distillation: @VictorSanh
-->
| 12-28-2021 12:33:58 | 12-28-2021 12:33:58 | Speech Seq2Seq can be advertised as soon as https://github.com/huggingface/transformers/pull/14881 is merged. |
transformers | 14,964 | closed | [Tests] Speed up tokenizer tests | # What does this PR do?
<!--
Congratulations! You've made it this far! You're not quite done yet though.
Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution.
Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change.
Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost.
-->
<!-- Remove if not applicable -->
This PR speeds up the 7 slowest tokenizer tests significantly which should lead to a speed-up of ca. 3 minutes for every test that runs all tokenizer tests
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/master/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/master/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
<!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @
If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**.
Please tag fewer than 3 people.
Models:
- albert, bert, xlm: @LysandreJik
- blenderbot, bart, marian, pegasus, encoderdecoder, t5: @patrickvonplaten, @patil-suraj
- longformer, reformer, transfoxl, xlnet: @patrickvonplaten
- fsmt: @stas00
- funnel: @sgugger
- gpt2: @patrickvonplaten, @LysandreJik
- rag: @patrickvonplaten, @lhoestq
- tensorflow: @LysandreJik
Library:
- benchmarks: @patrickvonplaten
- deepspeed: @stas00
- ray/raytune: @richardliaw, @amogkam
- text generation: @patrickvonplaten
- tokenizers: @n1t0, @LysandreJik
- trainer: @sgugger
- pipelines: @LysandreJik
Documentation: @sgugger
HF projects:
- datasets: [different repo](https://github.com/huggingface/datasets)
- rust tokenizers: [different repo](https://github.com/huggingface/tokenizers)
Examples:
- maintained examples (not research project or legacy): @sgugger, @patil-suraj
- research_projects/bert-loses-patience: @JetRunner
- research_projects/distillation: @VictorSanh
-->
| 12-28-2021 11:37:27 | 12-28-2021 11:37:27 | |
transformers | 14,963 | closed | Add 'with torch.no_grad()' to BertGeneration integration test forward passes | # What does this PR do?
<!--
Congratulations! You've made it this far! You're not quite done yet though.
Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution.
Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change.
Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost.
-->
<!-- Remove if not applicable -->
As proposed in #14642, this encapsulates the forward passes in the BertGeneration integration test with "with torch.no_grad():".
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [x] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/master/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/master/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
@NielsRogge
<!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @
If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**.
Please tag fewer than 3 people.
Models:
- albert, bert, xlm: @LysandreJik
- blenderbot, bart, marian, pegasus, encoderdecoder, t5: @patrickvonplaten, @patil-suraj
- longformer, reformer, transfoxl, xlnet: @patrickvonplaten
- fsmt: @stas00
- funnel: @sgugger
- gpt2: @patrickvonplaten, @LysandreJik
- rag: @patrickvonplaten, @lhoestq
- tensorflow: @LysandreJik
Library:
- benchmarks: @patrickvonplaten
- deepspeed: @stas00
- ray/raytune: @richardliaw, @amogkam
- text generation: @patrickvonplaten
- tokenizers: @n1t0, @LysandreJik
- trainer: @sgugger
- pipelines: @LysandreJik
Documentation: @sgugger
HF projects:
- datasets: [different repo](https://github.com/huggingface/datasets)
- rust tokenizers: [different repo](https://github.com/huggingface/tokenizers)
Examples:
- maintained examples (not research project or legacy): @sgugger, @patil-suraj
- research_projects/bert-loses-patience: @JetRunner
- research_projects/distillation: @VictorSanh
-->
| 12-28-2021 10:44:58 | 12-28-2021 10:44:58 | |
transformers | 14,962 | closed | [Speech recognition examples] num_processing_workers not allowed to be set | ## Environment info
<!-- You can run the command `transformers-cli env` and copy-and-paste its output below.
Don't forget to fill out the missing fields in that output! -->
- `transformers` version: master
- Platform: ubuntu
- Python version:
- PyTorch version (GPU?):
- Tensorflow version (GPU?):
- Using GPU in script?:
- Using distributed or parallel set-up in script?:
### Who can help
<!-- Your issue will be replied to more quickly if you can figure out the right person to tag with @
If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**.
Please tag fewer than 3 people.
Models:
- ALBERT, BERT, XLM, DeBERTa, DeBERTa-v2, ELECTRA, MobileBert, SqueezeBert: @LysandreJik
- T5, BART, Marian, Pegasus, EncoderDecoder: @patrickvonplaten
- Blenderbot, MBART: @patil-suraj
- Longformer, Reformer, TransfoXL, XLNet, FNet, BigBird: @patrickvonplaten
- FSMT: @stas00
- Funnel: @sgugger
- GPT-2, GPT: @patrickvonplaten, @LysandreJik
- RAG, DPR: @patrickvonplaten, @lhoestq
- TensorFlow: @Rocketknight1
- JAX/Flax: @patil-suraj
- TAPAS, LayoutLM, LayoutLMv2, LUKE, ViT, BEiT, DEiT, DETR, CANINE: @NielsRogge
- GPT-Neo, GPT-J, CLIP: @patil-suraj
- Wav2Vec2, HuBERT, SpeechEncoderDecoder, UniSpeech, UniSpeechSAT, SEW, SEW-D, Speech2Text: @patrickvonplaten, @anton-l
If the model isn't in the list, ping @LysandreJik who will redirect you to the correct contributor.
Library:
- Benchmarks: @patrickvonplaten
- Deepspeed: @stas00
- Ray/raytune: @richardliaw, @amogkam
- Text generation: @patrickvonplaten @narsil
- Tokenizers: @LysandreJik
- Trainer: @sgugger
- Pipelines: @Narsil
- Speech: @patrickvonplaten, @anton-l
- Vision: @NielsRogge, @sgugger
Documentation: @sgugger
Model hub:
- for issues with a model, report at https://discuss.huggingface.co/ and tag the model's creator.
HF projects:
- datasets: [different repo](https://github.com/huggingface/datasets)
- rust tokenizers: [different repo](https://github.com/huggingface/tokenizers)
Examples:
- maintained examples (not research project or legacy): @sgugger, @patil-suraj
For research projetcs, please ping the contributor directly. For example, on the following projects:
- research_projects/bert-loses-patience: @JetRunner
- research_projects/distillation: @VictorSanh
-->
@patrickvonplaten
## Information
Model I am using (Bert, XLNet ...):
The problem arises when using:
* [x] the official example scripts: (give details below)
* [ ] my own modified scripts: (give details below)
The tasks I am working on is:
* [x] an official GLUE/SQUaD task: (give the name) commonvoice
* [ ] my own task or dataset: (give details below)
## To reproduce
Steps to reproduce the behavior:
1. run one of the official speech recognition examples with num_processing_workers specified
2.
3.
```
Traceback (most recent call last):
File "/mnt/f/Codes/Python Apps/asr/speech_recognition_seq2seq.py", line 504, in <module>
main()
File "/mnt/f/Codes/Python Apps/asr/speech_recognition_seq2seq.py", line 380, in main
vectorized_datasets = raw_datasets.map(
File "/home/flozi/anaconda3/lib/python3.9/site-packages/datasets/dataset_dict.py", line 494, in map
{
File "/home/flozi/anaconda3/lib/python3.9/site-packages/datasets/dataset_dict.py", line 495, in <dictcomp>
k: dataset.map(
File "/home/flozi/anaconda3/lib/python3.9/site-packages/datasets/arrow_dataset.py", line 2142, in map
shards = [
File "/home/flozi/anaconda3/lib/python3.9/site-packages/datasets/arrow_dataset.py", line 2143, in <listcomp>
self.shard(num_shards=num_proc, index=rank, contiguous=True, keep_in_memory=keep_in_memory)
File "/home/flozi/anaconda3/lib/python3.9/site-packages/datasets/arrow_dataset.py", line 3168, in shard
return self.select(
File "/home/flozi/anaconda3/lib/python3.9/site-packages/datasets/arrow_dataset.py", line 485, in wrapper
out: Union["Dataset", "DatasetDict"] = func(self, *args, **kwargs)
File "/home/flozi/anaconda3/lib/python3.9/site-packages/datasets/fingerprint.py", line 411, in wrapper
out = func(self, *args, **kwargs)
File "/home/flozi/anaconda3/lib/python3.9/site-packages/datasets/arrow_dataset.py", line 2760, in select
return self._new_dataset_with_indices(indices_buffer=buf_writer.getvalue(), fingerprint=new_fingerprint)
File "/home/flozi/anaconda3/lib/python3.9/site-packages/datasets/arrow_dataset.py", line 2671, in _new_dataset_with_indices
return Dataset(
File "/home/flozi/anaconda3/lib/python3.9/site-packages/datasets/arrow_dataset.py", line 661, in __init__
raise ValueError(
ValueError: External features info don't match the dataset:
Got
{'client_id': Value(dtype='string', id=None), 'path': Value(dtype='string', id=None), 'audio': Audio(sampling_rate=16000, mono=True, _storage_dtype='string', id=None), 'sentence': Value(dtype='string', id=None), 'up_votes': Value(dtype='int64', id=None), 'down_votes': Value(dtype='int64', id=None), 'age': Value(dtype='string', id=None), 'gender': Value(dtype='string', id=None), 'accent': Value(dtype='string', id=None), 'locale': Value(dtype='string', id=None), 'segment': Value(dtype='string', id=None)}
with type
struct<client_id: string, path: string, audio: string, sentence: string, up_votes: int64, down_votes: int64, age: string, gender: string, accent: string, locale: string, segment: string>
but expected something like
{'client_id': Value(dtype='string', id=None), 'path': Value(dtype='string', id=None), 'audio': {'path': Value(dtype='string', id=None), 'bytes': Value(dtype='binary', id=None)}, 'sentence': Value(dtype='string', id=None), 'up_votes': Value(dtype='int64', id=None), 'down_votes': Value(dtype='int64', id=None), 'age': Value(dtype='string', id=None), 'gender': Value(dtype='string', id=None), 'accent': Value(dtype='string', id=None), 'locale': Value(dtype='string', id=None), 'segment': Value(dtype='string', id=None)}
with type
struct<client_id: string, path: string, audio: struct<path: string, bytes: binary>, sentence: string, up_votes: int64, down_votes: int64, age: string, gender: string, accent: string, locale: string, segment: string>
```
<!-- If you have code snippets, error messages, stack traces please provide them here as well.
Important! Use code tags to correctly format your code. See https://help.github.com/en/github/writing-on-github/creating-and-highlighting-code-blocks#syntax-highlighting
Do not use screenshots, as they are hard to read and (more importantly) don't allow others to copy-and-paste your code.-->
## Expected behavior
<!-- A clear and concise description of what you would expect to happen. -->
| 12-28-2021 10:31:48 | 12-28-2021 10:31:48 | Actually, I've seen this error before as well and it confused me a lot as well - thanks a lot for flagging it here @flozi00! I think it only happens though when the audio has to be resampled (like for Common Voice).
@lhoestq - it's a weird bug with `datasets` I think. I'll try to get a minimum reproducible code-snippet.<|||||>@lhoestq @albertovilla @mariosasko here is a minimal reproducible bug report:
```python
from datasets import load_dataset, DatasetDict
import datasets
from transformers import AutoFeatureExtractor
raw_datasets = DatasetDict()
raw_datasets["train"] = load_dataset("common_voice", "ab", split="train")
feature_extractor = AutoFeatureExtractor.from_pretrained("facebook/wav2vec2-base")
raw_datasets = raw_datasets.cast_column(
"audio", datasets.features.Audio(sampling_rate=feature_extractor.sampling_rate)
)
num_workers = 16
def prepare_dataset(batch):
sample = batch["audio"]
inputs = feature_extractor(sample["array"], sampling_rate=sample["sampling_rate"])
batch["input_values"] = inputs.input_values[0]
batch["input_length"] = len(batch["input_values"])
return batch
raw_datasets.map(
prepare_dataset,
remove_columns=next(iter(raw_datasets.values())).column_names,
num_proc=16,
desc="preprocess datasets",
)
```<|||||>Opened an issue on datasets: https://github.com/huggingface/datasets/issues/3497 (feel free to disregard issue here)<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored.<|||||>Think we can close this one no @lhoestq ?<|||||>Yes indeed, thanks for the heads up ! |
transformers | 14,961 | closed | Add 'with torch.no_grad()' to BEiT integration test forward passes | # What does this PR do?
<!--
Congratulations! You've made it this far! You're not quite done yet though.
Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution.
Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change.
Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost.
-->
<!-- Remove if not applicable -->
As proposed in #14642, this encapsulates the forward passes in the BEiT integration test with "with torch.no_grad():".
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [x] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/master/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/master/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
@NielsRogge
<!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @
If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**.
Please tag fewer than 3 people.
Models:
- albert, bert, xlm: @LysandreJik
- blenderbot, bart, marian, pegasus, encoderdecoder, t5: @patrickvonplaten, @patil-suraj
- longformer, reformer, transfoxl, xlnet: @patrickvonplaten
- fsmt: @stas00
- funnel: @sgugger
- gpt2: @patrickvonplaten, @LysandreJik
- rag: @patrickvonplaten, @lhoestq
- tensorflow: @LysandreJik
Library:
- benchmarks: @patrickvonplaten
- deepspeed: @stas00
- ray/raytune: @richardliaw, @amogkam
- text generation: @patrickvonplaten
- tokenizers: @n1t0, @LysandreJik
- trainer: @sgugger
- pipelines: @LysandreJik
Documentation: @sgugger
HF projects:
- datasets: [different repo](https://github.com/huggingface/datasets)
- rust tokenizers: [different repo](https://github.com/huggingface/tokenizers)
Examples:
- maintained examples (not research project or legacy): @sgugger, @patil-suraj
- research_projects/bert-loses-patience: @JetRunner
- research_projects/distillation: @VictorSanh
-->
| 12-28-2021 10:16:52 | 12-28-2021 10:16:52 | This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored.<|||||>For some reason this wasn't merged, sorry about that! Just merged it. |
transformers | 14,960 | closed | Missing parameters when iterating over `module.parameters()` | ## Environment info
<!-- You can run the command `transformers-cli env` and copy-and-paste its output below.
Don't forget to fill out the missing fields in that output! -->
- `transformers` version: `4.15.0`
- Python
- Implementation: CPython
- Version : 3.8.12
- Platform
- OS : Darwin
- Release : 20.6.0
- Machine : arm64
- Architecture: 64bit
- PyTorch version (GPU?): `torch==1.10.1` no GPU
- Tensorflow version (GPU?): None
- Using GPU in script?: No
- Using distributed or parallel set-up in script?: No
### Who can help
@patrickvonplaten, @LysandreJik
## Information
For some reason, the parameter count of an entire `GPT2LMHeadModel` does not include the `lm_head`. This behavior might not be specific to this class; however, it is the one I am working with.
## To reproduce
```python
from transformers import AutoModelForCausalLM
def count_params(module):
return sum(p.numel() for p in module.parameters())
name = "gpt2"
model = AutoModelForCausalLM.from_pretrained(name)
print(f"Whole model: {count_params(model)}")
print(f"Transformer: {count_params(model.transformer)}")
print(f"Head: {count_params(model.lm_head)}")
```
The standard output:
```bash
Whole model: 124439808
Transformer: 124439808
Head: 38597376
```
The core of the problem is that the `lm_head.weight` is not inside of `model.named_parameters()`.
## Expected behavior
The expected behavior is for the total number of parameters to be equal to the parameters inside of the `transformer` backbone PLUS the number of parameters inside of `lm_head`.
See relevant code here:
https://github.com/huggingface/transformers/blob/1c121916f3adee769eb43d4656b621be60427bbd/src/transformers/models/gpt2/modeling_gpt2.py#L951-L952
| 12-28-2021 10:06:35 | 12-28-2021 10:06:35 | Hey @jankrepl,
`self.lm_head` is identical to `self.transformer.wte` since those weights are tied (input and output word embeddings are the same for a lot of Transformer models) which is why `self.lm_head` doesn't show up here I think <|||||>```python
from transformers import AutoModelForCausalLM
def count_params(module):
return sum(p.numel() for p in module.parameters())
name = "gpt2"
model = AutoModelForCausalLM.from_pretrained(name, tie_word_embeddings=False)
print(f"Whole model: {count_params(model)}")
print(f"Transformer: {count_params(model.transformer)}")
print(f"Head: {count_params(model.lm_head)}")
```
should give
```
Whole model: 163037184
Transformer: 124439808
Head: 38597376
```<|||||>Oh wow! What a nice trick to reduce the number of parameters!
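To convince myself, a quick check (my own sketch) that the two modules really share the same parameter:
```python
from transformers import AutoModelForCausalLM

model = AutoModelForCausalLM.from_pretrained("gpt2")

# the output projection and the input embedding are literally the same Parameter object
print(model.lm_head.weight is model.transformer.wte.weight)  # True

# parameters() de-duplicates tied weights, so the shared matrix is counted only once
print(sum(p.numel() for p in model.parameters()))  # 124439808
```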
Thank you for the quick help:) |
transformers | 14,959 | closed | [Wav2Vec2] Rename model's feature extractor to feature encoder | # What does this PR do?
In this PR all `<SpeechModel>FeatureExtractor` are renamed to `<SpeechModel>FeatureEncoder` and `--freeze_feature_extractor` is renamed to `--freeze_feature_encoder`
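For illustration, a minimal sketch of the renamed API in use (the checkpoint name is just an example):
```python
from transformers import Wav2Vec2ForCTC

model = Wav2Vec2ForCTC.from_pretrained("facebook/wav2vec2-base-960h")
model.freeze_feature_encoder()  # new name; previously freeze_feature_extractor()
```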
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/master/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/master/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
| 12-28-2021 09:36:22 | 12-28-2021 09:36:22 | |
transformers | 14,958 | closed | [WavLM] give model more precision tolerance in tests | # What does this PR do?
WavLM tests have been flaky over the last week. This PR increases the tolerance for those tests.
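As a rough sketch of what a larger tolerance means in practice (illustrative values, not the actual test code):
```python
import torch

expected = torch.tensor([0.1234, -0.5678])
actual = torch.tensor([0.1233, -0.5681])
# A larger atol keeps the comparison robust to small numerical differences across runs/hardware.
assert torch.allclose(actual, expected, atol=5e-3)
```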
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/master/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/master/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
| 12-28-2021 09:33:04 | 12-28-2021 09:33:04 | cc @sgugger |
transformers | 14,956 | closed | [megatron convert] PYTHONPATH requirements | This PR documents how to tell the megatron conversion scripts to find the `Megatron-LM` repo, which is needed for recent checkpoints as it was reported at https://github.com/huggingface/transformers/issues/14939
Since Megatron-LM can't be installed as a package, we can't make the script require it, so it is documented here and will also be documented in the model cards.
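For example, a minimal way to point Python at a local Megatron-LM clone before running the conversion (the path is a placeholder; the documented approach sets `PYTHONPATH` in the shell instead):
```python
import sys

# Placeholder path to a local clone of https://github.com/NVIDIA/Megatron-LM
sys.path.insert(0, "/path/to/Megatron-LM")
# From here the conversion script can import the Megatron-LM modules it needs.
```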
Fixes: https://github.com/huggingface/transformers/issues/14939
@LysandreJik
----
@jdemouth already updated these:
- https://huggingface.co/nvidia/megatron-gpt2-345m/blob/main/README.md
- https://huggingface.co/nvidia/megatron-bert-uncased-345m/blob/main/README.md
- https://huggingface.co/nvidia/megatron-bert-cased-345m/blob/main/README.md
I need to figure out how to get perms to do so. | 12-28-2021 05:31:55 | 12-28-2021 05:31:55 | |
transformers | 14,955 | closed | [doc] :class: hunt | part 2 Doc clean up continued from https://github.com/huggingface/transformers/pull/14954
- `:meth:`, `:func:` - all is good
- `:class:`
```
find . -type d -name ".git" -prune -o -type f -exec perl -pi -e 's#:class: ?`([^`]+)`#[`$1`]#g' {} \;
find . -type d -name ".git" -prune -o -type f -exec perl -pi -e 's#:class: ?\*([^\*]+)\*#[`$1`]#g' {} \;
git checkout examples utils/check_repo.py
```
let me know if there are any other left-over rst tags to hunt down.
@sgugger
| 12-28-2021 00:01:22 | 12-28-2021 00:01:22 | |
transformers | 14,954 | closed | [doc] :obj: hunt | I think it'd be easier to review if we focus on one type at a time. So this PR is about `obj`:
it catches all the invalid `:obj:` versions, reformats the next field as if they were normal and then removes them all.
```
find . -type d -name ".git" -prune -o -type f -exec perl -pi -e 's#:?obj: ?\*([^\*]+)\*#`$1`#g' {} \;
find . -type d -name ".git" -prune -o -type f -exec perl -pi -e 's#:?obj: ?##g' {} \;
git checkout utils/check_repo.py src/transformers/tokenization_utils_base.py src/transformers/models/marian/convert_marian_tatoeba_to_pytorch.py
git checkout examples
make fixup
```
mods reset for `research_projects` as requested, but it should be easy to replay these later.
@sgugger | 12-27-2021 23:11:13 | 12-27-2021 23:11:13 | Redone w/o examples and utils |
transformers | 14,953 | closed | Doc styler examples | # What does this PR do?
Last bit of the new style-doc script, blackify all code examples which:
- ensures they don't have basic Python errors
- are on par with our usual code format
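A rough illustration of the underlying idea (not the actual script):
```python
import black

snippet = "x=AutoModel.from_pretrained( 'bert-base-uncased' )"
# format_str validates the snippet (it raises on syntax errors) and normalizes its style.
print(black.format_str(snippet, mode=black.Mode()))
```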
This PR fixes a few code examples mistakes (see first commit for the docstrings and [this commit](https://github.com/huggingface/transformers/pull/14953/commits/999b9fd9f4a00da10353b6b95a4c842d0153da0b) for the MDX files) that were identified by black not being happy. Only code samples marked with `py` or `python` are blackified, so it's easy to turn off this feature if it becomes too annoying on some docstrings. | 12-27-2021 22:44:00 | 12-27-2021 22:44:00 | Cool! Good to merge for me once all tests pass |
transformers | 14,952 | closed | Convert last rst file | # What does this PR do?
For some reason we forgot the `auto.rst` doc file in the mass conversion. This PR fixes that problem. | 12-27-2021 21:53:35 | 12-27-2021 21:53:35 | |
transformers | 14,951 | closed | [doc] consistent True/False/None default format | This PR sets True/False/None default format in a consistent manner of backticks.
```
find . -type d -name ".git" -prune -o -type f -exec perl -pi -e 's#\*(True|False|None)\*#`$1`#g' {} \;
```
Fixes: https://github.com/huggingface/transformers/issues/14949
@sgugger
| 12-27-2021 21:12:36 | 12-27-2021 21:12:36 | Done |
transformers | 14,950 | closed | Doc styler v2 | # What does this PR do?
This PR rewrites the doc-styler to work with the Markdown docstrings and re-enables the checks.
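For context, a toy docstring in the Markdown style the doc-styler now targets (illustrative only, not taken from the library):
```python
def resize(image, size: int = 224):
    """Resize an image (toy example of the Markdown docstring conventions).

    Args:
        image (`PIL.Image.Image`): The image to resize.
        size (`int`, *optional*, defaults to `224`): The target size.
    """
    ...
```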
Merging and inspecting the doc is alright as the diff is... consequent :-) | 12-27-2021 20:57:42 | 12-27-2021 20:57:42 | |
transformers | 14,949 | closed | [doc] post-conversion: inconsistent "defaults to" | After the conversion we ended up with inconsistent values for `defaults to` - sometimes it's `formatted`, other times it's *italics*, with the former being the prevailing form. Example:
```
src/transformers/generation_tf_utils.py: use_cache: (`bool`, *optional*, defaults to `True`):
src/transformers/generation_tf_utils.py: output_attentions (`bool`, *optional*, defaults to *False*):
```
but there are a lot more of these
@sgugger | 12-27-2021 20:13:10 | 12-27-2021 20:13:10 | It's not the conversion fault, those were inconsistent before. Meaning some were properly set with an :obj: marker or double backticks while other had simple backticks and were thus converted in italics.
By all means, if you want to do a batch conversion, the proper format is \`False\` (or \`True\`)<|||||>Will do,
Also while fixing this I see there are still quite a few `:obj:` leftovers - is it still a WIP?
```
grep -Ir :obj: src
src/transformers/models/t5/modeling_t5.py: assert self.is_decoder, f":obj:`use_cache` can only be set to `True` if {self} is used as a decoder"
src/transformers/models/tapas/modeling_tapas.py: input_mask_float (:obj: *torch.FloatTensor* of shape `(batch_size, seq
[...]
```<|||||>No. The first example is not a docstring, so not touched by the conversion scripts. The second one seemed to have a space between the :obj: and the \` so was not properly converted (the torch.FloatTensor is not supposed to be in italics).
You should wait a tiny bit more before doing any conversion as I'm in the process of changing lots of docstrings in #14950 (will merge it when it's green).<|||||>That was just a small snippet, the full match is 44 lines. should be easy to reproduce.
Please take your time, I was just flagging these in case it was missed.<|||||>OK PR is merged!
If you want to hunt down all the remaining :obj:/:class:/:meth:/:func: that would be amazing (and leave me more time to work on #14032 this week :-) )<|||||>Sure, once you approve https://github.com/huggingface/transformers/pull/14951 and it is merged to avoid conflicts.<|||||>Done. I think there might also be some incomplete :obj or obj: in the wild (saw one on #14951).<|||||>So what of all these `:obj:` that are not in the doc string - do we simply remove those?
e.g.:
```
src/transformers/models/t5/modeling_t5.py: assert self.is_decoder, f":obj:`use_cache` can only be set to `True` if {self} is used as a decoder"
examples/research_projects/quantization-qdqbert/utils_qa.py: prefix (:obj:`str`, `optional`):
examples/research_projects/deebert/src/modeling_highway_roberta.py: loss (:obj:`torch.FloatTensor` of shape :obj:`(1,)`, `optional`, returned when :obj:`label` is provided):
```
<|||||>Yes, we use Markdown style everywhere. Would just not touch the research projects, as I haven't converted the docstrings there. |
transformers | 14,948 | closed | [deepspeed] saving checkpoint fallback when fp16 weights aren't saved | `_save_checkpoint` saves the deepspeed checkpoint, but this path:
```
push_to_hub => save_model => questionable outcome
```
for z3 if `stage3_gather_fp16_weights_on_model_save=false` will not save the model, so this PR adds a fallback to saving the full checkpoint instead from which weights can be recovered.
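A minimal sketch of the fallback being described, assuming (per the linked DeepSpeed PR) that `save_fp16_model` reports whether it actually wrote the consolidated weights:
```python
def save_model_with_fallback(deepspeed_engine, output_dir: str) -> None:
    """Hypothetical sketch, not the actual Trainer code."""
    saved = deepspeed_engine.save_fp16_model(output_dir)
    if not saved:
        # e.g. ZeRO-3 with stage3_gather_fp16_weights_on_model_save=false:
        # fall back to the full checkpoint, from which the weights can be recovered later.
        deepspeed_engine.save_checkpoint(output_dir)
```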
Blocking events:
- [x] https://github.com/microsoft/DeepSpeed/pull/1663 merged
- [x] new deepspeed version released
- [x] our dep table has deepspeed version adjusted to the above version
all resolved
@sgugger
| 12-27-2021 18:59:07 | 12-27-2021 18:59:07 | So the idea is that `deepspeed.save_fp16_model()` will return True if the weights have already been saved at the given path in which case there is no need to call `deepspeed.save_checkpoint()`?<|||||>That's right. I just want to make sure that someone won't lose their work in case they misconfigured their DS setup.
We could check the DS config on our side as well, but I think all these things should be DeepSpeed's business.<|||||>Looking at your DS pull request, I wonder what you think about using a new method to tell us if a checkpoint actually exists on disk (perhaps using the tag argument and checking the contents of the `latest` file).
That new method could give us more flexibility/control when calling save_checkpoint()<|||||>That would be an ambiguous API, since the checkpoint on disc could pre-exist from an old run. So checking timestamps will be required and it quickly becomes complicated and uncertain.
Given that the saving is super-fast, even for huge 100B models, I think it's no problem if on a rare occasion it will get saved twice.
I think we could have worked out something better in the HF Trainer normal path, but if someone uses parts of the API, that's when things become potentially "unsafe".
If a user uses parts of the HF Trainer API but builds their own training loop and doesn't care for the saved model then they won't call it.
Am I missing some path that will do an inefficient double saving in the normal case?<|||||>It's just my OCD, I need to get that fixed at some point :-). As far as I'm concerned, the normal case works 100% correctly. Again, many thanks for your work!<|||||>IMHO, while OCD in life can be harmful at times, OCD in software should be the standard and not considered a handicap. Especially in a library used by tens of thousands of users.
In other words if you see a path that is invalid we should fix it.<|||||>I'm still interested in this. In the mean time, the associated deepspeed change has been included in release 0.5.9. <|||||>Thank you, @MihaiBalint - the version update has to happen in `setup.py` and the dependency table is automatically updated. I adjusted things. testing now as CI doesn't test deepspeed here. |
transformers | 14,947 | closed | Improve truncation_side | # What does this PR do?
Largely taken from #859, but redone in order to avoid documentation issues since the
switch to MD. @NielsRogge you are credited as co-author, since it's mainly your PR.
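For illustration, the behavior this enables, sketched with an assumed checkpoint:
```python
from transformers import AutoTokenizer

# Keep the end of over-long inputs instead of the beginning.
tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased", truncation_side="left")
enc = tokenizer("a very long sentence " * 100, truncation=True, max_length=16)
print(len(enc["input_ids"]))  # 16
```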
Fixes # (issue)
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/master/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/master/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
<!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @
If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**.
Please tag fewer than 3 people.
Models:
- albert, bert, xlm: @LysandreJik
- blenderbot, bart, marian, pegasus, encoderdecoder, t5: @patrickvonplaten, @patil-suraj
- longformer, reformer, transfoxl, xlnet: @patrickvonplaten
- fsmt: @stas00
- funnel: @sgugger
- gpt2: @patrickvonplaten, @LysandreJik
- rag: @patrickvonplaten, @lhoestq
- tensorflow: @LysandreJik
Library:
- benchmarks: @patrickvonplaten
- deepspeed: @stas00
- ray/raytune: @richardliaw, @amogkam
- text generation: @patrickvonplaten
- tokenizers: @n1t0, @LysandreJik
- trainer: @sgugger
- pipelines: @LysandreJik
Documentation: @sgugger
HF projects:
- datasets: [different repo](https://github.com/huggingface/datasets)
- rust tokenizers: [different repo](https://github.com/huggingface/tokenizers)
Examples:
- maintained examples (not research project or legacy): @sgugger, @patil-suraj
- research_projects/bert-loses-patience: @JetRunner
- research_projects/distillation: @VictorSanh
--> | 12-27-2021 18:36:28 | 12-27-2021 18:36:28 | > Thanks for working on this. I don't think you have properly put Niels as Co-author, there needs to be a special syntax Co-authored-by xxx in a commit description for GitHub to recognize it.
The email I used was wrong, I had to dive into other git commits to find the proper email. |
transformers | 14,946 | closed | Fix duplicate call to save_checkpoint when using deepspeed | # What does this PR do?
Drop the duplicate call to deepspeed.save_checkpoint(); the trainer.save_model() function already handles that case.
Following this change: https://github.com/huggingface/transformers/pull/14652/files#diff-ed55888e6665791fe92cc8fc0c499da54f4ace6738551cd9a2591881cda076deR1986
The call to save_checkpoint() was duplicated.
I found this issue after seeing the following logs (note the last 4 lines):
```
[INFO|trainer.py:2033] 2021-12-26 19:42:00,421 >> Saving model checkpoint to finetuned-ro-en-dev/checkpoint-2
[INFO|configuration_utils.py:425] 2021-12-26 19:42:00,423 >> Configuration saved in finetuned-ro-en-dev/checkpoint-2/config.json
[INFO|modeling_utils.py:1070] 2021-12-26 19:44:09,064 >> Model weights saved in finetuned-ro-en-dev/checkpoint-2/pytorch_model.bin
[INFO|tokenization_utils_base.py:2043] 2021-12-26 19:44:09,110 >> tokenizer config file saved in finetuned-ro-en-dev/checkpoint-2/tokenizer_config.json
[INFO|tokenization_utils_base.py:2049] 2021-12-26 19:44:09,112 >> Special tokens file saved in finetuned-ro-en-dev/checkpoint-2/special_tokens_map.json
[2021-12-26 19:44:09,596] [INFO] [logging.py:69:log_dist] [Rank 0] Saving model checkpoint: finetuned-ro-en-dev/checkpoint-2/global_step2/mp_rank_00_model_states.pt
[2021-12-26 19:59:09,484] [INFO] [engine.py:2964:_save_zero_checkpoint] zero checkpoint saved finetuned-ro-en-dev/checkpoint-2/global_step2/zero_pp_rank_0_mp_rank_00_optim_states.pt
[2021-12-26 19:59:09,575] [INFO] [logging.py:69:log_dist] [Rank 0] Saving model checkpoint: finetuned-ro-en-dev/checkpoint-2/global_step2/mp_rank_00_model_states.pt
[2021-12-26 20:16:17,005] [INFO] [engine.py:2964:_save_zero_checkpoint] zero checkpoint saved finetuned-ro-en-dev/checkpoint-2/global_step2/zero_pp_rank_0_mp_rank_00_optim_states.pt
```
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [X] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/master/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/master/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
- @stas00 @LysandreJik
| 12-27-2021 17:52:26 | 12-27-2021 17:52:26 | sorry, didn't realize that the 2 PRs were the same, just different source branch. ok, let's work on this one.
So indeed there is a duplication as you discovered https://github.com/huggingface/transformers/pull/14652/files#diff-ed55888e6665791fe92cc8fc0c499da54f4ace6738551cd9a2591881cda076deR1986
So it should be removed and not the way this PR proposes. Would you like to fix that and then I will merge it?
Basically revert the change you proposed and then this PR should revert my change you linked to where a duplication was added.
--------------
Notes for myself:
So after merging this the only issue is:
```
push_to_hub => save_model => questionable outcome
```
for z3 if `stage3_gather_fp16_weights_on_model_save=false`.
otherwise this path:
```
_save_checkpoint => save_model
```
conditionally saves the model but certainly saves the deepspeed checkpoint inside `_save_checkpoint`
I'm thinking that perhaps `save_model` should have the logic to use `self.deepspeed.save_checkpoint(output_dir)` as a saving grace for z3+ `stage3_gather_fp16_weights_on_model_save=false`, since weights can be recovered in this case.
I will probably make a change on the deepspeed side and then it'll be easier for the Trainer to know whether to fall back or not.
It will be resolved in this PR https://github.com/huggingface/transformers/pull/14948<|||||>@stas00 many thanks for the review! |
transformers | 14,945 | closed | Unable to see total_flos in ViT training logs | ## Environment info
<!-- You can run the command `transformers-cli env` and copy-and-paste its output below.
Don't forget to fill out the missing fields in that output! -->
- `transformers` version: 4.14.1
- Platform: Ubuntu 20.04.1
- Python version: 3.8.5
- PyTorch version (GPU?): 1.8.0 (True)
- Tensorflow version (GPU?): n/a
- Using GPU in script?: yes
- Using distributed or parallel set-up in script?: DDP
### Who can help
Models:
- TAPAS, LayoutLM, LayoutLMv2, LUKE, ViT, BEiT, DEiT, DETR, CANINE: @NielsRogge
Library:
- Trainer: @sgugger
Examples:
- maintained examples (not research project or legacy): @sgugger, @patil-suraj
## Information
Model I am using (Bert, XLNet ...): ViT
The problem arises when using:
* [ o] the official example scripts: (give details below)
* [ ] my own modified scripts: (give details below)
The tasks I am working on is:
* [ o] an official GLUE/SQUaD task: (give the name)
* [ ] my own task or dataset: (give details below)
## To reproduce
Steps to reproduce the behavior:
1. download transformers with git clone and pip install
2. run example script in transformers/examples/pytorch/image-classification/run_image_classification.py
3. the resulting logs always give "total_flos=0.0"
```
"log_history": [
{
...
"step": 17,
"total_flos: 0.0,
"train_loss": 0.54699146...,
...
```
## Expected behavior
Being able to see the total number of floating point operations for the whole training run (total_flos) in the logs.
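For reference, once it is populated the value can also be read back from the saved trainer state (the path below is an assumption based on `--output_dir`):
```python
import json

# Assumed location: <output_dir>/trainer_state.json written by the Trainer.
with open("output/trainer_state.json") as f:
    state = json.load(f)
print(state["total_flos"])
```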
| 12-27-2021 16:29:45 | 12-27-2021 16:29:45 | Could you update your Transformers to the last version? I think this was fixed recently.<|||||>I updated Transformers to the latest (master) version and I can see total_flos now. Thank you for your help. |
transformers | 14,944 | closed | Map model_type and doc pages names | # What does this PR do?
As discussed on [moon-landing](https://github.com/huggingface/moon-landing/pull/1692#discussion_r774957604), it's not possible to automatically generate the link to the documentation from the configuration of a model, due to multiple inconsistencies between model type and doc page name.
This PR fixes that and adds a script to make sure we have a model type for every documented model (for instance currently we have a mising model type for models having a tokenizer only, like Wav2Vec2-Phoeneme and they don't appear on the AutoTokenizer doc as a result) and that the doc page name matches the model type.
It breaks some links to the doc, but we (I and Julien) think this is acceptable. | 12-27-2021 16:28:23 | 12-27-2021 16:28:23 | very neat!<|||||>Sounds good to me. Will resolve conflicts and merge. |
transformers | 14,943 | closed | Different evaluation results | ## Environment info
- `transformers` version: 4.10.0
- Platform: Linux-4.15.0-142-generic-x86_64-with-debian-stretch-sid
- Python version: 3.7.11
- PyTorch version (GPU?): 1.10.1 (True)
- Tensorflow version (GPU?): not installed (NA)
- Flax version (CPU?/GPU?/TPU?): not installed (NA)
- Jax version: not installed
- JaxLib version: not installed
- Using GPU in script?: <fill in>
- Using distributed or parallel set-up in script?: <fill in>
### Who can help
@patrickvonplaten, @sgugger, @patil-suraj
## Information
Model I am using (Bert, XLNet ...):
The problem arises when using:
* [ ㅇ] the official example scripts: (give details below)
* [ ] my own modified scripts: (give details below)
The tasks I am working on is:
* [ ㅇ ] an official GLUE/SQUaD task: (give the name)
* [ ] my own task or dataset: (give details below)
## To reproduce
Steps to reproduce the behavior:
1. download transformer with `git clone`
2. move to ~/transformers/examples/pytorch/question-answering
3. run the example script with the command below
```
python run_qa.py --model_name_or_path bert-base-uncased --dataset_name squad --do_train --do_eval --per_device_train_batch_size 12 --learning_rate 3e-5 --num_train_epochs 2 --max_seq_length 384 --doc_stride 128 --output_dir /tmp/debug_squad/
```
## Expected behavior
1. My result indicates the results below
```
***** eval metrics *****
epoch = 2.0
eval_exact_match = 79.8108
eval_f1 = 87.5898
eval_samples = 10784
```
but, as mentioned in the question-answering `README.md`, the expected result is `f1 = 88.52 \ exact_match = 81.22`
I want to know why there is a difference between them.
2. When I run
```
python run_qa.py --model_name_or_path bert-base-uncased --dataset_name squad --do_train --do_eval --per_device_train_batch_size 12 --learning_rate 3e-5 --num_train_epochs 2 --max_seq_length 384 --doc_stride 128 --output_dir /tmp/debug_squad/
```
I face the message, ` [INFO|trainer.py:521] 2021-12-27 20:42:58,092 >> The following columns in the evaluation set don't have a corresponding argument in `BertForQuestionAnswering.forward` and have been ignored: example_id, offset_mapping.`.
I want to run `BertForQuestionAnswering.forward` but I don't know how to. ;<
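For reference, a minimal way to invoke the model's `forward` directly, outside the example script (illustrative inputs):
```python
from transformers import AutoTokenizer, AutoModelForQuestionAnswering

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
model = AutoModelForQuestionAnswering.from_pretrained("bert-base-uncased")

inputs = tokenizer(
    "Who wrote Hamlet?",
    "Hamlet was written by William Shakespeare.",
    return_tensors="pt",
)
outputs = model(**inputs)  # BertForQuestionAnswering.forward runs here
print(outputs.start_logits.shape, outputs.end_logits.shape)
```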
| 12-27-2021 12:12:50 | 12-27-2021 12:12:50 | I'm not entirely sure the result section is up to date, so it may have been with slightly different hyperparameters (like a batch size of 8 which is the default). It might also be an error in the base checkpoint used, as I may have used the `bert-base-cased` checkpoint to get those results.
For your second question, this log is completely normal. If you look at the script, it processes the dataset, and the processed dataset contains the columns the model expects plus two it does not, which are dropped when running evaluation.<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored. |