repo | number | state | title | body | created_at | closed_at | comments |
---|---|---|---|---|---|---|---|
transformers | 16,653 | closed | Remove parent/child tests in auto model tests | # What does this PR do?
This PR removes the tests that check whether models/configs/tokenizers are sorted in the auto mappings such that child classes come before parent classes. While it may have been necessary in the past for the `AutoXxx` APIs to work, this is no longer the case, and it actually makes it harder for contributors to add new models.
Also, in the future we could have a script, similar to isort, that enforces that all those auto-mappings are alphabetically sorted (a rough sketch follows below). | 04-07-2022 14:56:32 | 04-07-2022 14:56:32 | The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_16653). All of your documentation changes will be reflected on that endpoint. |
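For illustration, a minimal, hypothetical sketch of such an ordering check (nothing here ships with transformers; the mapping is a toy stand-in):

```python
# Hypothetical sketch of an isort-like ordering check for the auto mappings.
def check_alphabetical(mapping: dict, name: str) -> list:
    keys = list(mapping)
    return [] if keys == sorted(keys) else [f"{name} is not alphabetically sorted"]

# usage sketch with a toy mapping
print(check_alphabetical({"albert": "AlbertConfig", "bart": "BartConfig"}, "CONFIG_MAPPING_NAMES"))  # -> []
```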
transformers | 16,652 | closed | Update run_translation_no_trainer.py | # What does this PR do?
<!--
Congratulations! You've made it this far! You're not quite done yet though.
Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution.
Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change.
Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost.
-->
<!-- Remove if not applicable -->
Fixes # (issue)
https://github.com/huggingface/transformers/blob/main/examples/pytorch/translation/run_translation_no_trainer.py
Line 384: `args.model_name_or_path` -> `args.config_name`.
This PR fixes it.
## Before submitting
- [x] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
<!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @
If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**.
Please tag fewer than 3 people.
Models:
- albert, bert, xlm: @LysandreJik
- blenderbot, bart, marian, pegasus, encoderdecoder, t5: @patrickvonplaten, @patil-suraj
- longformer, reformer, transfoxl, xlnet: @patrickvonplaten
- fsmt: @stas00
- funnel: @sgugger
- gpt2: @patrickvonplaten, @LysandreJik
- rag: @patrickvonplaten, @lhoestq
- tensorflow: @LysandreJik
Library:
- benchmarks: @patrickvonplaten
- deepspeed: @stas00
- ray/raytune: @richardliaw, @amogkam
- text generation: @patrickvonplaten
- tokenizers: @n1t0, @LysandreJik
- trainer: @sgugger
- pipelines: @LysandreJik
Documentation: @sgugger
HF projects:
- datasets: [different repo](https://github.com/huggingface/datasets)
- rust tokenizers: [different repo](https://github.com/huggingface/tokenizers)
Examples:
- maintained examples (not research project or legacy): @sgugger, @patil-suraj
- research_projects/bert-loses-patience: @JetRunner
- research_projects/distillation: @VictorSanh
-->
| 04-07-2022 14:05:05 | 04-07-2022 14:05:05 | _The documentation is not available anymore as the PR was closed or merged._ |
transformers | 16,651 | closed | ModelForSequenceClassification (Roberta/Bert) don't support mean_pool | # 🚀 Feature request
I hope I did not miss it, but it looks like there is no way to use `RobertaForSequenceClassification` or `BertForSequenceClassification` with mean pooling rather than the `[CLS]` or `<s>` token as the pooler.
## Motivation
It would be useful for benchmarking, and I believe in some cases it can work better than the start-of-sequence token.
## Your contribution
I can try to make the PR if it makes sense for the repo to add it | 04-07-2022 13:48:21 | 04-07-2022 13:48:21 | I think @patrickvonplaten might be the person to tag to check if this slipped through.
If this doesn't make sense I'll close the issue!<|||||>Think in this case the best is to do a little tweak to the original modeling file and use this one instead :-) <|||||>cc @sgugger - think that's also a good use case for: https://huggingface.co/docs/transformers/custom_models? <|||||>BERT and RoBERTa supports the pooling that was introduced in their paper. Transformers is not a modular toolbox, so as Patrick said, if you want to tweak this, you will need to edit the modeling file, but we won't accept a change in the code to support this. And as Patrick pointed out, you can still share your tweaked model if it performs better!<|||||>Ok thanks! I will close the issue then |
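For reference, a hedged sketch of what masked mean pooling over the encoder output could look like in a tweaked modeling file (illustrative only; this is not part of `RobertaForSequenceClassification`):

```python
import torch
from transformers import AutoTokenizer, AutoModel

tokenizer = AutoTokenizer.from_pretrained("roberta-base")
model = AutoModel.from_pretrained("roberta-base")

inputs = tokenizer(["a short example"], return_tensors="pt")
with torch.no_grad():
    hidden = model(**inputs).last_hidden_state              # (batch, seq, dim)

# mask out padding tokens, then average over the sequence dimension
mask = inputs["attention_mask"].unsqueeze(-1).float()        # (batch, seq, 1)
mean_pooled = (hidden * mask).sum(dim=1) / mask.sum(dim=1)   # (batch, dim)
print(mean_pooled.shape)
```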
transformers | 16,650 | closed | Finetuned Wav2Vec2+LM Inference on my `wav` audio files | ## Environment info
- `transformers` version: 4.17.0
- Platform: Windows-10-10.0.22000-SP0
- Python version: 3.8.13
- PyTorch version (GPU?): 1.9.1 (True)
- Tensorflow version (GPU?): not installed (NA)
- Flax version (CPU?/GPU?/TPU?): not installed (NA)
- Jax version: not installed
- JaxLib version: not installed
- Using GPU in script?: No
- Using distributed or parallel set-up in script?: No
### Who can help
@patrickvonplaten
## Information
The model I am using (Wav2Vec2.0 Large XLS-R 53 English):
The problem arises when using:
* [ ] the official example scripts: (give details below)
* [x] my own modified scripts: (give details below)
The tasks I am working on is:
* [ ] an official GLUE/SQUaD task: (give the name)
* [x] my own task or dataset: (give details below)
## To reproduce
Steps to reproduce the behavior:
1. I am [fine-tuning Wav2Vec with LM Head](https://huggingface.co/blog/fine-tune-wav2vec2-english) using WikiText to produce a tri-gram LM. I downloaded the fine-tuned model directory locally and tried to perform inference on my audio `.wav` file
2. Please find [here](https://drive.google.com/drive/folders/1IBUTglXLw4IX8uKC0qmGKKhkoCvc3s94?usp=sharing), model files, test audio file, and requirements.txt if needed to reproduce the problem
### Code snippet
```{python}
import torchaudio
import torch
from transformers import Wav2Vec2ForCTC, Wav2Vec2ProcessorWithLM
from os import getcwd
from os.path import join as path_join
model_name = "jonatasgrosman/wav2vec2-large-xlsr-53-english"
model = Wav2Vec2ForCTC.from_pretrained(model_name)
processor_path = path_join(getcwd(), "stt_assets", "stt_model")
processor = Wav2Vec2ProcessorWithLM.from_pretrained(processor_path)
audio_file = 'test_audio_send.wav'
speech_array, sampling_rate = torchaudio.load(audio_file)
resampler = torchaudio.transforms.Resample(48_000, 16_000)
speech = resampler(speech_array).squeeze().numpy()
features = processor(speech, sampling_rate=16_000, return_tensors="pt", padding=True)
with torch.no_grad():
    logits = model(**features).logits
transcription = processor.batch_decode(logits.numpy())
stt_output = transcription.text[0].lower()
print(stt_output)
```
## Expected behavior
- The expected behavior was to print out the transcription but `logits = model(**features).logits` raises the following Exception:
`RuntimeError: Expected 3-dimensional input for 3-dimensional weight [512, 1, 10], but got 4-dimensional input of size [1, 1, 2, 50582] instead`
| 04-07-2022 13:24:32 | 04-07-2022 13:24:32 | Hey @elsheikh21,
We cannot really guarantee that custom models work correctly. For the future, could you instead please post a **minimal** reproducible code snippet here? E.g. the error happens in the line `model(**features).logits`, and it shouldn't be necessary to download a whole directory to reproduce the error. Could you maybe try to shorten your code example to 4-5 lines, including a path to download the `test_audio_sent.wav` file?
Regarding your error, the problem is that the audio file is not mono-channel but has a dual-channel (stereo) format. E.g. the following snippet gives:
```py
import torchaudio
audio_file = 'test_audio_send.wav'
speech_array, sampling_rate = torchaudio.load(audio_file)
speech_array.shape
```
```
torch.Size([2, 151744])
```
So your code would probably work if you replace:
```py
speech_array, sampling_rate = torchaudio.load(audio_file)
```
by
```py
speech_array, sampling_rate = torchaudio.load(audio_file)
speech_array = speech_array[0]
```
<|||||>Hello @patrickvonplaten
Thanks a lot for your help. I thought the code snippet I added in the question was reproducible; please correct me if I am wrong for future instances. I have added the `ipynb` that I followed to produce the `wav2vec2 with LM head` model, in case it is needed.
I have followed your correction and it is working fine, yet I get an error when I try to decode the output.
The first error is that when I use
```
transcription = processor.batch_decode(logits.numpy())
```
it gives me the following exception `cannot find context for 'fork'` which is related to multiprocessing IMHO,
so I changed that to
```
transcription = processor.decode(logits.numpy())
```
and this line raises the following exception
```
Traceback (most recent call last):
File "C:\Users\AhmedElSheikh\AppData\Local\JetBrains\Toolbox\apps\PyCharm-P\ch-0\213.6777.50\plugins\python\helpers\pydev\pydevd.py", line 1483, in _exec
pydev_imports.execfile(file, globals, locals) # execute the script
File "C:\Users\AhmedElSheikh\AppData\Local\JetBrains\Toolbox\apps\PyCharm-P\ch-0\213.6777.50\plugins\python\helpers\pydev\_pydev_imps\_pydev_execfile.py", line 18, in execfile
exec(compile(contents+"\n", file, 'exec'), glob, loc)
File "D:/SVC/stt_server_side/main_stt.py", line 69, in <module>
main_stt()
File "D:/SVC/stt_server_side/main_stt.py", line 55, in main_stt
transcription = processor.decode(logits=logits_)
File "C:\ProgramData\Miniconda3\envs\stt_ws\lib\site-packages\transformers\models\wav2vec2_with_lm\processing_wav2vec2_with_lm.py", line 462, in decode
decoded_beams = self.decoder.decode_beams(
File "C:\ProgramData\Miniconda3\envs\stt_ws\lib\site-packages\pyctcdecode\decoder.py", line 527, in decode_beams
decoded_beams = self._decode_logits(
File "C:\ProgramData\Miniconda3\envs\stt_ws\lib\site-packages\pyctcdecode\decoder.py", line 347, in _decode_logits
char = self._idx2vocab[idx_char]
KeyError: 33
```
After some debugging I found the following,
```
print(logits.shape)
>>> (1, 157, 33)
```
And my tokenizer vocabulary size
```
len(processor.tokenizer.get_vocab())
>>> 33
```
the vocabulary itself is
```
print(processor.tokenizer.get_vocab())
>>> {'<pad>': 0, '<s>': 1, '</s>': 2, '<unk>': 3, '|': 4, "'": 5, '-': 6, 'a': 7, 'b': 8, 'c': 9, 'd': 10, 'e': 11, 'f': 12, 'g': 13, 'h': 14, 'i': 15, 'j': 16, 'k': 17, 'l': 18, 'm': 19, 'n': 20, 'o': 21, 'p': 22, 'q': 23, 'r': 24, 's': 25, 't': 26, 'u': 27, 'v': 28, 'w': 29, 'x': 30, 'y': 31, 'z': 32}
```<|||||>For future readers
we had two problems here:
1. While transcribing the audio, the model expected a 3D input but received a 4D input `[1, 1, 2, 50582]`
2. While decoding the model logits, the problem was that the model produced logits of `shape=(1, 157, 33)` (with an extra batch dimension)
**Solution**:
1. Input should have been parsed and resampled as follows
```{python}
import torchaudio
# audio_file = path/to/your/wav/audio/file
speech_array, sampling_rate = torchaudio.load(audio_file)
# Audio has to be mono-channel
speech_array = speech_array[0]
resample = torchaudio.transforms.Resample(sampling_rate, 16_000)
speech = resample(speech_array).squeeze().numpy()
```
2. Passing audio to the model as follows
```{python}
features = processor(speech, sampling_rate=16_000, return_tensors="pt")
logging.info("Transcribing the audio file")
with torch.no_grad():
    _logits = model(**features).logits
# so instead of logits_ of shape [1, 157, 33], it should be `[157, 33]`
# in order to remove this `1`, I resorted to using `squeeze()`
logits_ = _logits.numpy().squeeze()
transcription = processor.decode(logits=logits_)
print(transcription.text.lower())
``` |
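As a follow-up to step 1 in the solution above, a hedged variation (assuming the same audio file as in the thread): instead of keeping only the first channel, the two channels can be averaged to mono.

```python
import torchaudio

speech_array, sampling_rate = torchaudio.load("test_audio_send.wav")   # (channels, time)
mono = speech_array.mean(dim=0)                                        # average channels -> (time,)
speech = torchaudio.transforms.Resample(sampling_rate, 16_000)(mono).numpy()
```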
transformers | 16,649 | closed | Improve no-CUDA detection, add gloo | # What does this PR do?
Previously, if `no_cuda` was set to False and `local_rank != -1`, `TrainingArguments._setup_devices` would still try to use a `cuda` device despite it being unavailable, causing an exception. This PR fixes this issue and adds `gloo` to the allowed `xpu_backend`s.
<!--
Congratulations! You've made it this far! You're not quite done yet though.
Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution.
Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change.
Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost.
-->
<!-- Remove if not applicable -->
Fixes # (issue)
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [x] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
<!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @
If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**.
Please tag fewer than 3 people.
Models:
- albert, bert, xlm: @LysandreJik
- blenderbot, bart, marian, pegasus, encoderdecoder, t5: @patrickvonplaten, @patil-suraj
- longformer, reformer, transfoxl, xlnet: @patrickvonplaten
- fsmt: @stas00
- funnel: @sgugger
- gpt2: @patrickvonplaten, @LysandreJik
- rag: @patrickvonplaten, @lhoestq
- tensorflow: @LysandreJik
Library:
- benchmarks: @patrickvonplaten
- deepspeed: @stas00
- ray/raytune: @richardliaw, @amogkam
- text generation: @patrickvonplaten
- tokenizers: @n1t0, @LysandreJik
- trainer: @sgugger
- pipelines: @LysandreJik
Documentation: @sgugger
HF projects:
- datasets: [different repo](https://github.com/huggingface/datasets)
- rust tokenizers: [different repo](https://github.com/huggingface/tokenizers)
Examples:
- maintained examples (not research project or legacy): @sgugger, @patil-suraj
- research_projects/bert-loses-patience: @JetRunner
- research_projects/distillation: @VictorSanh
-->
| 04-07-2022 12:35:47 | 04-07-2022 12:35:47 | cc @sgugger <|||||>_The documentation is not available anymore as the PR was closed or merged._<|||||>I am unsure what you are trying to fix, as it is the intended behavior. The user has to explicitly opt-in to DDP on CPU with the flag `--no_cuda` and should receive an error otherwise if the GPUs are not visible for some reason.<|||||>Hmm, I see. From the description of the `no_cuda` argument, I inferred that the default behavior is to use CUDA if available, and fall back to CPU otherwise. This behavior is also seen with the `DataParallel` case. If it is intended that the user has to set this explicitly, then feel free to close the PR. |
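For readers, a hedged sketch of the explicit opt-in discussed in the thread above. The argument names are assumptions based on `TrainingArguments` of that era, and `"gloo"` is only an accepted value once this PR's change is available.

```python
from transformers import TrainingArguments

args = TrainingArguments(
    output_dir="out",
    no_cuda=True,        # explicit opt-in to CPU, as described in the discussion
    xpu_backend="gloo",  # CPU-friendly collective backend for DDP on CPU
)
print(args.device)
```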
transformers | 16,648 | closed | Fix QA sample | # What does this PR do?
Try to address the issue about `PT_QUESTION_ANSWERING_SAMPLE` discussed in the following 2 comments:
https://github.com/huggingface/transformers/pull/16523#issuecomment-1088635272
https://github.com/huggingface/transformers/pull/16523#issuecomment-1090346578 | 04-07-2022 12:29:17 | 04-07-2022 12:29:17 | @patrickvonplaten @sgugger
I would like to have some feedback to see if this is going in the right direction.
@sgugger `make style` will break the line into 2 lines. But when replacing `qa_target_start_index` and `qa_target_end_index` with `14` and `15`, it actually fits in one line. Should I use a shorter name like `qa_target_start_idx` or `qa_start_index`, for example?<|||||>cc @vumichien and @bhadreshpsavani to keep them updated<|||||>_The documentation is not available anymore as the PR was closed or merged._<|||||>I should do the same for `TF_QUESTION_ANSWERING_SAMPLE`. Currently just looking to get some feedback from you, if any :-)<|||||>I also changed `TF_QUESTION_ANSWERING_SAMPLE`.
Ran locally for `Roberta` and `TFRoberta`, and their doctests pass.
Ready for @sgugger to give a final look :-)<|||||>@vumichien @bhadreshpsavani
Could you pull the latest change into the master/main branch of your local clone, rebase your working branch for this sprint, and check if the QA doc example can now pass the doctest, please? Thanks a lot! |
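For context, a hedged illustration of the kind of question-answering example these docstring samples exercise (the checkpoint here is an assumption for illustration, not the one used in the PR):

```python
import torch
from transformers import AutoTokenizer, AutoModelForQuestionAnswering

tokenizer = AutoTokenizer.from_pretrained("deepset/roberta-base-squad2")
model = AutoModelForQuestionAnswering.from_pretrained("deepset/roberta-base-squad2")

question, context = "Who was Jim Henson?", "Jim Henson was a nice puppet"
inputs = tokenizer(question, context, return_tensors="pt")
with torch.no_grad():
    outputs = model(**inputs)

# pick the most likely start/end token positions and decode the answer span
start = int(outputs.start_logits.argmax())
end = int(outputs.end_logits.argmax())
print(tokenizer.decode(inputs.input_ids[0, start : end + 1]))
```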
transformers | 16,647 | closed | added changes in deberta dropout | # What does this PR do?
This PR does the following.
1. Add a task-specific dropout, `cls_dropout`, to the DeBERTa config, similar to what BERT and other models have. This is justified because in the DeBERTa paper adjusting the task dropout is very important, so I think it shouldn't just be set to `hidden_dropout_prob`. While the latter should, in general, not be changed, `cls_dropout` should be tuned to obtain the best results, as stated in: https://arxiv.org/abs/2006.03654
2. Harmonize the use of dropout. In general, DeBERTa uses StableDropout, which was also used for sequence classification. However, I don't understand why normal dropout was used for token classification. In some experiments I carried out, StableDropout also worked better than normal dropout for token classification, and it is the type of dropout used in all other DeBERTa layers, so I think it is better to have the same type of dropout in all task layers.
I also changed a little how the dropout is created, to remove redundant lines. As you can see there are only a few changes and not many lines of code, but for clarity and simplicity I decided to change that as well; e.g. instead of first getting `cls_dropout` if it exists and then taking `hidden_dropout_prob` if it is None, I do it all in one line with `getattr` (see the sketch below).
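For illustration, a minimal sketch of that `getattr` fallback (a simplified approximation, not the exact diff of this PR):

```python
from transformers import DebertaConfig
from transformers.models.deberta.modeling_deberta import StableDropout

def build_task_dropout(config: DebertaConfig) -> StableDropout:
    # fall back to hidden_dropout_prob when cls_dropout is not set in the config
    drop_out = getattr(config, "cls_dropout", None)
    drop_out = config.hidden_dropout_prob if drop_out is None else drop_out
    return StableDropout(drop_out)

print(build_task_dropout(DebertaConfig(cls_dropout=0.15)))
```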
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR
@BigBird01 @LysandreJik @patrickvonplaten | 04-07-2022 12:00:54 | 04-07-2022 12:00:54 | The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_16647). All of your documentation changes will be reflected on that endpoint.<|||||>Hi,
Just as an FYI, changing `nn.Dropout` to `StableDropout` will stop the models from supporting `saved_model` for TensorFlow when trying to save them to disk for TensorFlow Serving. This is already an issue in `TFDebertaV2ForSequenceClassification`, where both `nn.Dropout` and `StableDropout` were tested on 4 different datasets for multi-class classification and the results were pretty much the same between the two. (I already changed it to `nn.Dropout` so I can save the models as `saved_model`; the performance in production is exactly the same as with `StableDropout`, at least for me.)
reference: https://github.com/huggingface/transformers/issues/16484
<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored. |
transformers | 16,646 | closed | [Doctests] Fix all T5 doc tests | # What does this PR do?
<!--
Congratulations! You've made it this far! You're not quite done yet though.
Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution.
Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change.
Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost.
-->
<!-- Remove if not applicable -->
Corrects T5 model docs and adds them to doc tests
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
<!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @
If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**.
Please tag fewer than 3 people.
Models:
- albert, bert, xlm: @LysandreJik
- blenderbot, bart, marian, pegasus, encoderdecoder, t5: @patrickvonplaten, @patil-suraj
- longformer, reformer, transfoxl, xlnet: @patrickvonplaten
- fsmt: @stas00
- funnel: @sgugger
- gpt2: @patrickvonplaten, @LysandreJik
- rag: @patrickvonplaten, @lhoestq
- tensorflow: @LysandreJik
Library:
- benchmarks: @patrickvonplaten
- deepspeed: @stas00
- ray/raytune: @richardliaw, @amogkam
- text generation: @patrickvonplaten
- tokenizers: @n1t0, @LysandreJik
- trainer: @sgugger
- pipelines: @LysandreJik
Documentation: @sgugger
HF projects:
- datasets: [different repo](https://github.com/huggingface/datasets)
- rust tokenizers: [different repo](https://github.com/huggingface/tokenizers)
Examples:
- maintained examples (not research project or legacy): @sgugger, @patil-suraj
- research_projects/bert-loses-patience: @JetRunner
- research_projects/distillation: @VictorSanh
-->
| 04-07-2022 10:04:33 | 04-07-2022 10:04:33 | _The documentation is not available anymore as the PR was closed or merged._ |
transformers | 16,645 | closed | A tool to generate this library with (a) specific model(s) | # 🚀 Feature request
<!-- A clear and concise description of the feature proposal.
Please provide a link to the paper and code in case they exist. -->
A tool that allows users (mostly researchers) to get a specific version of (a) transformer model(s) depending on the framework and the Hugging Face version.
For example, say I want only the source code of the BART model for PyTorch in the latest version of Hugging Face, but I don't want to break other Hugging Face features. The tool should remove any files related to other models, and to other frameworks such as TensorFlow. I could run `gettransformers bart pytorch`. This command should generate for me a huggingface/transformers library that only has the BART model written in PyTorch.
## Motivation
<!-- Please outline the motivation for the proposal. Is your feature request
related to a problem? e.g., I'm always frustrated when [...]. If this is related
to another GitHub issue, please link here too. -->
To let researchers obtain fewer lines of code that they can reuse.
## Your contribution
<!-- Is there any way that you could help, e.g. by submitting a PR?
Make sure to read the CONTRIBUTING.MD readme:
https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md -->
I propose the feature.
| 04-07-2022 10:02:12 | 04-07-2022 10:02:12 | Hey @dinhanhx! The transformers library is made to have models be independent of each other, so that when modifying a model file, you're certain that it will not break other models. You can therefore feel free to modify the code of BART for your use-case, and rest-assured that it will not break other models.
If you'd like to create a brand new model from BART and tweak it to your use-case while keeping BART intact, I recommend you use the `add-new-model-like` command line feature. You can read more about it [here](https://github.com/huggingface/transformers/tree/main/templates/adding_a_new_model#add-new-model-like-command).<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored. |
transformers | 16,644 | closed | [Doctests] Correct task summary | # What does this PR do?
<!--
Congratulations! You've made it this far! You're not quite done yet though.
Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution.
Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change.
Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost.
-->
<!-- Remove if not applicable -->
This PR rounds the results of the image classification pipeline to make it easier to test.
This fixes the `task_summary.mdx` doctest; a rough illustration of the rounding idea is sketched below.
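```python
# Hedged sketch of rounding pipeline scores for stable doctest output.
# The checkpoint and image URL below are placeholders for illustration.
from transformers import pipeline

classifier = pipeline("image-classification", model="google/vit-base-patch16-224")
preds = classifier("http://images.cocodataset.org/val2017/000000039769.jpg")
rounded = [{"label": p["label"], "score": round(p["score"], 4)} for p in preds]
print(rounded[:2])
```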
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
<!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @
If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**.
Please tag fewer than 3 people.
Models:
- albert, bert, xlm: @LysandreJik
- blenderbot, bart, marian, pegasus, encoderdecoder, t5: @patrickvonplaten, @patil-suraj
- longformer, reformer, transfoxl, xlnet: @patrickvonplaten
- fsmt: @stas00
- funnel: @sgugger
- gpt2: @patrickvonplaten, @LysandreJik
- rag: @patrickvonplaten, @lhoestq
- tensorflow: @LysandreJik
Library:
- benchmarks: @patrickvonplaten
- deepspeed: @stas00
- ray/raytune: @richardliaw, @amogkam
- text generation: @patrickvonplaten
- tokenizers: @n1t0, @LysandreJik
- trainer: @sgugger
- pipelines: @LysandreJik
Documentation: @sgugger
HF projects:
- datasets: [different repo](https://github.com/huggingface/datasets)
- rust tokenizers: [different repo](https://github.com/huggingface/tokenizers)
Examples:
- maintained examples (not research project or legacy): @sgugger, @patil-suraj
- research_projects/bert-loses-patience: @JetRunner
- research_projects/distillation: @VictorSanh
-->
| 04-07-2022 07:51:27 | 04-07-2022 07:51:27 | _The documentation is not available anymore as the PR was closed or merged._ |
transformers | 16,643 | closed | Updated _load_pretrained_model_low_mem to check if keys are in the state_dict | # What does this PR do?
This PR checks whether each key is present in the `state_dict` before attempting to load it. When a model is split across multiple checkpoint files, not every key is in every checkpoint, so loading must skip the missing ones (a simplified sketch follows).
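```python
# Toy, self-contained sketch of the idea only; this is NOT the actual
# _load_pretrained_model_low_mem implementation.
import torch

def load_partial_state_dict(model: torch.nn.Module, state_dict: dict) -> None:
    """Copy only the entries of `state_dict` that the model actually has."""
    own_state = model.state_dict()
    with torch.no_grad():
        for name, tensor in state_dict.items():
            if name in own_state:  # keys belonging to other shards are skipped
                own_state[name].copy_(tensor)

# toy usage: a shard that is missing the "bias" key
model = torch.nn.Linear(4, 2)
load_partial_state_dict(model, {"weight": torch.zeros(2, 4)})
print(model.weight.sum().item())  # -> 0.0
```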
TODO
- [ ] tests
| 04-07-2022 06:27:14 | 04-07-2022 06:27:14 | I am wondering what is the correct place to add a test for this function<|||||>_The documentation is not available anymore as the PR was closed or merged._<|||||>Your plan works for me, Sylvain. I will work on the low mem test then today. |
transformers | 16,642 | closed | After using HFTracer, the Bert model can't be trained | ## Environment info
<!-- You can run the command `transformers-cli env` and copy-and-paste its output below.
Don't forget to fill out the missing fields in that output! -->
- `transformers` version: 4.17.0
- Platform: ubuntu18.04
- Python version: 3.6.9
- PyTorch version (GPU?): 1.10.0 GPU
- Tensorflow version (GPU?): none
- Using GPU in script?: yes
- Using distributed or parallel set-up in script?: no
### Who can help
<!-- Your issue will be replied to more quickly if you can figure out the right person to tag with @
If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**.
Please tag fewer than 3 people.
@LysandreJik @sgugger
Models:
- ALBERT, BERT, XLM, DeBERTa, DeBERTa-v2, ELECTRA, MobileBert, SqueezeBert: @LysandreJik
- T5, Pegasus, EncoderDecoder: @patrickvonplaten
- Blenderbot, MBART, BART, Marian, Pegasus: @patil-suraj
- Reformer, TransfoXL, XLNet, FNet: @patrickvonplaten
- Longformer, BigBird: @ydshieh
- FSMT: @stas00
- Funnel: @sgugger
- GPT-2, GPT: @patil-suraj, @patrickvonplaten, @LysandreJik
- RAG, DPR: @patrickvonplaten, @lhoestq
- TensorFlow: @Rocketknight1
- JAX/Flax: @patil-suraj
- TAPAS, LayoutLM, LayoutLMv2, LUKE, ViT, BEiT, DEiT, DETR, CANINE: @NielsRogge
- GPT-Neo, GPT-J, CLIP: @patil-suraj
- Wav2Vec2, HuBERT, SpeechEncoderDecoder, UniSpeech, UniSpeechSAT, SEW, SEW-D, Speech2Text: @patrickvonplaten, @anton-l
If the model isn't in the list, ping @LysandreJik who will redirect you to the correct contributor.
Library:
- Benchmarks: @patrickvonplaten
- Deepspeed: @stas00
- Ray/raytune: @richardliaw, @amogkam
- Text generation: @patrickvonplaten @narsil
- Tokenizers: @SaulLu
- Trainer: @sgugger
- Pipelines: @Narsil
- Speech: @patrickvonplaten, @anton-l
- Vision: @NielsRogge, @sgugger
Documentation: @sgugger
Model hub:
- for issues with a model, report at https://discuss.huggingface.co/ and tag the model's creator.
HF projects:
- datasets: [different repo](https://github.com/huggingface/datasets)
- rust tokenizers: [different repo](https://github.com/huggingface/tokenizers)
Examples:
- maintained examples (not research project or legacy): @sgugger, @patil-suraj
For research projetcs, please ping the contributor directly. For example, on the following projects:
- research_projects/bert-loses-patience: @JetRunner
- research_projects/distillation: @VictorSanh
-->
## Information
Model I am using (Bert, XLNet ...): Bert
The problem arises when using:
* [ ] the official example scripts: (give details below)
* my own modified scripts: (give details below)
The tasks I am working on is:
* an official GLUE/SQUaD task: (give the name)
* [ ] my own task or dataset: (give details below)
## To reproduce
Steps to reproduce the behavior:
1. I traced a BERT model using `HFTracer`, and I removed "labels" from `concrete_args` so that I could use `trainer.train` (see the sketch after this list).
2. After training the modified model, I want to export an ONNX model.
3. But "labels" has been fixed as an input of the modified model, and it is difficult to remove it. May I trace the BERT model (or a part of it) without "labels" and then use `trainer.train`?
<!-- If you have code snippets, error messages, stack traces please provide them here as well.
Important! Use code tags to correctly format your code. See https://help.github.com/en/github/writing-on-github/creating-and-highlighting-code-blocks#syntax-highlighting
Do not use screenshots, as they are hard to read and (more importantly) don't allow others to copy-and-paste your code.-->
## Expected behavior
May I trace the BERT model (or a part of it) without "labels" and then use `trainer.train`?
<!-- A clear and concise description of what you would expect to happen. -->
| 04-07-2022 04:37:04 | 04-07-2022 04:37:04 | This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored. |
transformers | 16,641 | closed | Multiclass evaluation not working | Hello,
I am new to the Transformers library and I'm trying to do sequence classification. I have 24 labels and I am getting the following error:
> ValueError: Target is multiclass but average='binary'. Please choose another average setting, one of [None, 'micro', 'macro', 'weighted'].
Even though I already added the averaging method as such:
`metric = load_metric('precision', average='weighted')`
Would someone kindly point me towards whatever I'm doing wrong? I'm able to fine-tune the pre-trained BERT model if I use 'accuracy' as the metric. But somehow it's not accepting my `average='weighted'` argument. | 04-07-2022 04:01:57 | 04-07-2022 04:01:57 | Hello! Could you provide more information, for example by filling the issue template for bugs, including the code sample and versions? Thanks.<|||||>No problem. Sorry, I wasn't sure if this was a bug or my own mistake so I didn't use the bug template. Here you go:
## Environment info
<!-- You can run the command `transformers-cli env` and copy-and-paste its output below.
Don't forget to fill out the missing fields in that output! -->
- `transformers` version: 4.17.0
- Platform: Linux-4.14.252-131.483.amzn1.x86_64-x86_64-with-glibc2.9
- Python version: 3.6.13
- PyTorch version (GPU?): 1.10.2+cu102 (False)
- Tensorflow version (GPU?): not installed (NA)
- Flax version (CPU?/GPU?/TPU?): not installed (NA)
- Jax version: not installed
- JaxLib version: not installed
- Using GPU in script?: No
- Using distributed or parallel set-up in script?: No
### Who can help
<!-- Your issue will be replied to more quickly if you can figure out the right person to tag with @
If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**.
Please tag fewer than 3 people.
Models:
- ALBERT, BERT, XLM, DeBERTa, DeBERTa-v2, ELECTRA, MobileBert, SqueezeBert: LysandreJik
- T5, Pegasus, EncoderDecoder: @patrickvonplaten
- Blenderbot, MBART, BART, Marian, Pegasus: @patil-suraj
- Reformer, TransfoXL, XLNet, FNet: @patrickvonplaten
- Longformer, BigBird: @ydshieh
- FSMT: @stas00
- Funnel: @sgugger
- GPT-2, GPT: @patil-suraj, @patrickvonplaten, @LysandreJik
- RAG, DPR: @patrickvonplaten, @lhoestq
- TensorFlow: @Rocketknight1
- JAX/Flax: @patil-suraj
- TAPAS, LayoutLM, LayoutLMv2, LUKE, ViT, BEiT, DEiT, DETR, CANINE: @NielsRogge
- GPT-Neo, GPT-J, CLIP: @patil-suraj
- Wav2Vec2, HuBERT, SpeechEncoderDecoder, UniSpeech, UniSpeechSAT, SEW, SEW-D, Speech2Text: @patrickvonplaten, @anton-l
If the model isn't in the list, ping @LysandreJik who will redirect you to the correct contributor.
Library:
- Benchmarks: @patrickvonplaten
- Deepspeed: @stas00
- Ray/raytune: @richardliaw, @amogkam
- Text generation: @patrickvonplaten @narsil
- Tokenizers: @SaulLu
- Trainer: @sgugger
- Pipelines: @Narsil
- Speech: @patrickvonplaten, @anton-l
- Vision: @NielsRogge, @sgugger
Documentation: @sgugger
Model hub:
- for issues with a model, report at https://discuss.huggingface.co/ and tag the model's creator.
HF projects:
- datasets: [different repo](https://github.com/huggingface/datasets)
- rust tokenizers: [different repo](https://github.com/huggingface/tokenizers)
Examples:
- maintained examples (not research project or legacy): @sgugger, @patil-suraj
For research projetcs, please ping the contributor directly. For example, on the following projects:
- research_projects/bert-loses-patience: @JetRunner
- research_projects/distillation: @VictorSanh
-->
1. @sgugger
2. @LysandreJik
3. @sgugger
## Information
Model I am using (Bert, XLNet ...): bert-based-cased
The problem arises when using:
* [ ] the official example scripts: (give details below)
* [x] my own modified scripts: (give details below)
The tasks I am working on is:
* [ ] an official GLUE/SQUaD task: (give the name)
* [x] my own task or dataset: (give details below)
## To reproduce
Steps to reproduce the behavior:
1. Load own data set using `Dataset.from_pandas(my_data)`
2. Tokenize it with `AutoTokenizer.from_pretrained('bert-base-cased')`
3. Create training arguments, metric, and Trainer object to start training.
```
model = AutoModelForSequenceClassification.from_pretrained("bert-base-cased", num_labels=24)
training_args = TrainingArguments(output_dir='./checkpoints/my_model', evaluation_strategy="epoch")
metric = load_metric('precision', average='weighted')
def compute_metrics(eval_pred):
    logits, labels = eval_pred
    predictions = np.argmax(logits, axis=-1)
    return metric.compute(predictions=predictions, references=labels)

trainer = Trainer(model=model,
                  args=training_args,
                  train_dataset=train_dataset,
                  eval_dataset=eval_dataset,
                  compute_metrics=compute_metrics)
trainer.train()
```
and I got this error:
```python
---------------------------------------------------------------------------
ValueError Traceback (most recent call last)
<ipython-input-46-5aef28bcb00d> in <module>
14 eval_dataset=eval_dataset,
15 compute_metrics=compute_metrics)
---> 16 trainer.train()
~/anaconda3/envs/pytorch_p36/lib/python3.6/site-packages/transformers/trainer.py in train(self, resume_from_checkpoint, trial, ignore_keys_for_eval, **kwargs)
1488
1489 self.control = self.callback_handler.on_epoch_end(args, self.state, self.control)
-> 1490 self._maybe_log_save_evaluate(tr_loss, model, trial, epoch, ignore_keys_for_eval)
1491
1492 if DebugOption.TPU_METRICS_DEBUG in self.args.debug:
~/anaconda3/envs/pytorch_p36/lib/python3.6/site-packages/transformers/trainer.py in _maybe_log_save_evaluate(self, tr_loss, model, trial, epoch, ignore_keys_for_eval)
1600 metrics = None
1601 if self.control.should_evaluate:
-> 1602 metrics = self.evaluate(ignore_keys=ignore_keys_for_eval)
1603 self._report_to_hp_search(trial, epoch, metrics)
1604
~/anaconda3/envs/pytorch_p36/lib/python3.6/site-packages/transformers/trainer.py in evaluate(self, eval_dataset, ignore_keys, metric_key_prefix)
2262 prediction_loss_only=True if self.compute_metrics is None else None,
2263 ignore_keys=ignore_keys,
-> 2264 metric_key_prefix=metric_key_prefix,
2265 )
2266
~/anaconda3/envs/pytorch_p36/lib/python3.6/site-packages/transformers/trainer.py in evaluation_loop(self, dataloader, description, prediction_loss_only, ignore_keys, metric_key_prefix)
2503 # Metrics!
2504 if self.compute_metrics is not None and all_preds is not None and all_labels is not None:
-> 2505 metrics = self.compute_metrics(EvalPrediction(predictions=all_preds, label_ids=all_labels))
2506 else:
2507 metrics = {}
<ipython-input-46-5aef28bcb00d> in compute_metrics(eval_pred)
7 logits, labels = eval_pred
8 predictions = np.argmax(logits, axis=-1)
----> 9 return metric.compute(predictions=predictions, references=labels)
10
11 trainer = Trainer(model=model,
~/anaconda3/envs/pytorch_p36/lib/python3.6/site-packages/datasets/metric.py in compute(self, predictions, references, **kwargs)
428 inputs = {input_name: self.data[input_name] for input_name in self.features}
429 with temp_seed(self.seed):
--> 430 output = self._compute(**inputs, **compute_kwargs)
431
432 if self.buf_writer is not None:
~/.cache/huggingface/modules/datasets_modules/metrics/precision/bfadb1cf35fe89242263de7dc028b248827c08ba075659c0e812d0fc6e5237c9/precision.py in _compute(self, predictions, references, labels, pos_label, average, sample_weight)
116 def _compute(self, predictions, references, labels=None, pos_label=1, average="binary", sample_weight=None):
117 score = precision_score(
--> 118 references, predictions, labels=labels, pos_label=pos_label, average=average, sample_weight=sample_weight
119 )
120 return {"precision": float(score) if score.size == 1 else score}
~/anaconda3/envs/pytorch_p36/lib/python3.6/site-packages/sklearn/utils/validation.py in inner_f(*args, **kwargs)
61 extra_args = len(args) - len(all_args)
62 if extra_args <= 0:
---> 63 return f(*args, **kwargs)
64
65 # extra_args > 0
~/anaconda3/envs/pytorch_p36/lib/python3.6/site-packages/sklearn/metrics/_classification.py in precision_score(y_true, y_pred, labels, pos_label, average, sample_weight, zero_division)
1660 warn_for=('precision',),
1661 sample_weight=sample_weight,
-> 1662 zero_division=zero_division)
1663 return p
1664
~/anaconda3/envs/pytorch_p36/lib/python3.6/site-packages/sklearn/utils/validation.py in inner_f(*args, **kwargs)
61 extra_args = len(args) - len(all_args)
62 if extra_args <= 0:
---> 63 return f(*args, **kwargs)
64
65 # extra_args > 0
~/anaconda3/envs/pytorch_p36/lib/python3.6/site-packages/sklearn/metrics/_classification.py in precision_recall_fscore_support(y_true, y_pred, beta, labels, pos_label, average, warn_for, sample_weight, zero_division)
1463 raise ValueError("beta should be >=0 in the F-beta score")
1464 labels = _check_set_wise_labels(y_true, y_pred, average, labels,
-> 1465 pos_label)
1466
1467 # Calculate tp_sum, pred_sum, true_sum ###
~/anaconda3/envs/pytorch_p36/lib/python3.6/site-packages/sklearn/metrics/_classification.py in _check_set_wise_labels(y_true, y_pred, average, labels, pos_label)
1294 raise ValueError("Target is %s but average='binary'. Please "
1295 "choose another average setting, one of %r."
-> 1296 % (y_type, average_options))
1297 elif pos_label not in (None, 1):
1298 warnings.warn("Note that pos_label (set to %r) is ignored when "
ValueError: Target is multiclass but average='binary'. Please choose another average setting, one of [None, 'micro', 'macro', 'weighted'].
```
<!-- If you have code snippets, error messages, stack traces please provide them here as well.
Important! Use code tags to correctly format your code. See https://help.github.com/en/github/writing-on-github/creating-and-highlighting-code-blocks#syntax-highlighting
Do not use screenshots, as they are hard to read and (more importantly) don't allow others to copy-and-paste your code.-->
## Expected behavior
I expect the model to complete training without an error. I am able to do this if I used `metric = load_metric('accuracy')` but not with precision, recall, or f1.
<!-- A clear and concise description of what you would expect to happen. -->
<|||||>Same here<|||||>Hi there!
You are passing the ``average`` argument when you load the metric; instead, you should pass it to the `compute` method like this:
```
metric = load_metric('precision')
metric.compute(predictions=[0,1,2,3,4,4,4,4], references=[2,2,2,3,4,1,1,4], average="weighted")
Output:
----------------------
>>> {'precision': 0.625}
```
Hope this helps!<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored. |
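For future readers, a hedged sketch of how the fix above could be wired back into the original `compute_metrics` (assumes the same setup as in the issue; `load_metric` comes from the `datasets` library):

```python
import numpy as np
from datasets import load_metric

metric = load_metric("precision")

def compute_metrics(eval_pred):
    logits, labels = eval_pred
    predictions = np.argmax(logits, axis=-1)
    # pass `average` to compute(), not to load_metric()
    return metric.compute(predictions=predictions, references=labels, average="weighted")
```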
transformers | 16,640 | closed | Wav2Vec2 Conformer Encoder | # 🚀 Feature request
<!-- A clear and concise description of the feature proposal.
Please provide a link to the paper and code in case they exist. -->
Implementing Conformer version of context encoder for [Wav2Vec2](https://github.com/pytorch/fairseq/blob/main/fairseq/models/wav2vec/wav2vec2.py#L390).
## Motivation
Fairseq has recently updated their Wav2Vec2 codebase, incorporating [Conformer-based](https://github.com/pytorch/fairseq/blob/main/examples/wav2vec/config/pretraining/wav2vec2_conformer_large_librivox.yaml) encoder as an option with already existing Transformer-based version.
Since a decent number of the new SOTA-matching papers utilize Conformers for either CTC or transducer models, it would be a great addition to the Transformers library to have the option of using a transformer- or conformer-based encoder for [Wav2Vec2ForCTC](https://github.com/huggingface/transformers/blob/main/src/transformers/models/wav2vec2/modeling_wav2vec2.py#L1647).
Some of the papers: [JUST](https://arxiv.org/pdf/2111.08137v1.pdf), [Pushing the Limits of Semi-Supervised Learning for ASR](https://arxiv.org/pdf/2010.10504.pdf)
| 04-06-2022 23:34:09 | 04-06-2022 23:34:09 | Thanks for letting me know - we should definitely add these conformer models asap. I can take care of this next Tuesday :-) <|||||>Also cc @anton-l <|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored. |
transformers | 16,639 | closed | [megatron-bert-uncased-345m] fix conversion | Fixes: https://github.com/huggingface/transformers/issues/16638
The original conversion script made an assumption that all released `megatron-bert-*-345m` checkpoints had the same vocab, but https://huggingface.co/nvidia/megatron-bert-cased-345m/blob/main/vocab.txt and https://huggingface.co/nvidia/megatron-bert-uncased-345m/blob/main/vocab.txt are quite different.
This PR sets `config.vocab_size` to the actual size of the vocab dimension of one of the checkpoint's parameters.
I tested that both checkpoints mentioned above convert and load correctly:
```
python src/transformers/models/megatron_bert/convert_megatron_bert_checkpoint.py megatron-bert-cased-345m/checkpoint.zip
python -c 'from transformers import MegatronBertForMaskedLM; MegatronBertForMaskedLM.from_pretrained("megatron-bert-cased-345m")'
python src/transformers/models/megatron_bert/convert_megatron_bert_checkpoint.py megatron-bert-uncased-345m/checkpoint.zip
python -c 'from transformers import MegatronBertForMaskedLM; MegatronBertForMaskedLM.from_pretrained("megatron-bert-uncased-345m")'
```
both succeed.
Before this PR only the former worked, and the 2nd failed with:
```
RuntimeError: Error(s) in loading state_dict for MegatronBertForMaskedLM:
size mismatch for cls.predictions.bias: copying a param with shape torch.Size([30592]) from checkpoint, the shape in current model is torch.Size([29056])
```
29056 is the vocab size of `megatron-bert-cased-345m`
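For illustration, a toy sketch of reading the vocab size off a parameter's vocabulary dimension instead of hard-coding it (the tensor name and shape here are made up, not read from a real checkpoint):

```python
import torch

state_dict = {"word_embeddings.weight": torch.zeros(30592, 1024)}  # toy stand-in
vocab_size = state_dict["word_embeddings.weight"].shape[0]
print(vocab_size)  # -> 30592
```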
@LysandreJik, @sgugger
| 04-06-2022 23:05:08 | 04-06-2022 23:05:08 | _The documentation is not available anymore as the PR was closed or merged._ |
transformers | 16,638 | closed | MegatronBertForMaskedLM | ## Environment info
<!-- You can run the command `transformers-cli env` and copy-and-paste its output below.
Don't forget to fill out the missing fields in that output! -->
- `transformers` version: 4.12.5
- Platform: Linux
- Python version: 3.6
- PyTorch version (GPU?): 1.10.0+cu102
- Tensorflow version (GPU?):2.6
- Using GPU in script?: No
- Using distributed or parallel set-up in script?: No
### Who can help
<!-- Your issue will be replied to more quickly if you can figure out the right person to tag with @
If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**.
Please tag fewer than 3 people.
Models:
- ALBERT, BERT, XLM, DeBERTa, DeBERTa-v2, ELECTRA, MobileBert, SqueezeBert: @LysandreJik
- T5, Pegasus, EncoderDecoder: @patrickvonplaten
- Blenderbot, MBART, BART, Marian, Pegasus: @patil-suraj
- Reformer, TransfoXL, XLNet, FNet: @patrickvonplaten
- Longformer, BigBird: @ydshieh
- FSMT: @stas00
- Funnel: @sgugger
- GPT-2, GPT: @patil-suraj, @patrickvonplaten, @LysandreJik
- RAG, DPR: @patrickvonplaten, @lhoestq
- TensorFlow: @Rocketknight1
- JAX/Flax: @patil-suraj
- TAPAS, LayoutLM, LayoutLMv2, LUKE, ViT, BEiT, DEiT, DETR, CANINE: @NielsRogge
- GPT-Neo, GPT-J, CLIP: @patil-suraj
- Wav2Vec2, HuBERT, SpeechEncoderDecoder, UniSpeech, UniSpeechSAT, SEW, SEW-D, Speech2Text: @patrickvonplaten, @anton-l
If the model isn't in the list, ping @LysandreJik who will redirect you to the correct contributor.
Library:
- Benchmarks: @patrickvonplaten
- Deepspeed: @stas00
- Ray/raytune: @richardliaw, @amogkam
- Text generation: @patrickvonplaten @narsil
- Tokenizers: @SaulLu
- Trainer: @sgugger
- Pipelines: @Narsil
- Speech: @patrickvonplaten, @anton-l
- Vision: @NielsRogge, @sgugger
Documentation: @sgugger
Model hub:
- for issues with a model, report at https://discuss.huggingface.co/ and tag the model's creator.
HF projects:
- datasets: [different repo](https://github.com/huggingface/datasets)
- rust tokenizers: [different repo](https://github.com/huggingface/tokenizers)
Examples:
- maintained examples (not research project or legacy): @sgugger, @patil-suraj
For research projetcs, please ping the contributor directly. For example, on the following projects:
- research_projects/bert-loses-patience: @JetRunner
- research_projects/distillation: @VictorSanh
-->
@LysandreJik @stas00
## Information
Model I am using: MegatronBERT from https://huggingface.co/nvidia/megatron-bert-uncased-345m. The model loads correctly in the following way:
```
from transformers import BertTokenizer, MegatronBertModel
model = MegatronBertModel.from_pretrained("megatron_model_here")
```
but throws a RuntimeError for size mismatch while using MegatronBertForMaskedLM
## To reproduce
`model = MegatronBertForMaskedLM.from_pretrained("megatron_model_here")`
Error :
```
RuntimeError: Error(s) in loading state_dict for MegatronBertForMaskedLM:
size mismatch for cls.predictions.bias: copying a param with shape torch.Size([30592]) from checkpoint, the shape in current model is torch.Size([29056])
```
<!-- If you have code snippets, error messages, stack traces please provide them here as well.
Important! Use code tags to correctly format your code. See https://help.github.com/en/github/writing-on-github/creating-and-highlighting-code-blocks#syntax-highlighting
Do not use screenshots, as they are hard to read and (more importantly) don't allow others to copy-and-paste your code.-->
## Expected behavior
MaskedLM model loads properly
| 04-06-2022 22:25:06 | 04-06-2022 22:25:06 | I was able to reproduce the issue.
Please try: https://github.com/huggingface/transformers/pull/16639
<|||||>@stas00 Thanks!!! Works like a charm now. |
transformers | 16,637 | closed | `transformers.ViltProcessor` requires torch >= 1.10 | If you follow the instructions in the docs for ViltProcessor:
```python
processor = ViltProcessor.from_pretrained("dandelin/vilt-b32-mlm")
model = ViltModel.from_pretrained("dandelin/vilt-b32-mlm")
inputs = processor(image, text, return_tensors="pt")
outputs = model(**inputs)
last_hidden_states = outputs.last_hidden_state
```
With an older version of torch, you will get:
```
TypeError: meshgrid() got an unexpected keyword argument 'indexing'
```
Maybe there should be a warning when someone tries to import ViltProcessor with an older version of torch? | 04-06-2022 20:41:30 | 04-06-2022 20:41:30 | Hi,
Yes, it would be great to add a warning. Are you interested in opening a PR?<|||||>@NielsRogge I created one here: https://github.com/huggingface/transformers/pull/16756<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored. |
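For illustration, a hedged sketch of the kind of version guard being suggested above (purely illustrative; not the warning that was eventually added in the linked PR):

```python
from packaging import version
import torch

if version.parse(torch.__version__) < version.parse("1.10"):
    raise ImportError("ViltProcessor requires torch >= 1.10 (torch.meshgrid needs the `indexing` kwarg).")
```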
transformers | 16,636 | closed | add vit tf doctest with @add_code_sample_docstrings | # What does this PR do?
Creates a doctest for the TF version of ViT. It is a cleaner version of PR #16462, which had a messed-up history.
Fixes # (issue)
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
@ydshieh | 04-06-2022 19:38:24 | 04-06-2022 19:38:24 | _The documentation is not available anymore as the PR was closed or merged._<|||||>LGTM, thank you @johko !
I ran it locally and the doctests passed!
@sgugger Could you have a final look too? |
transformers | 16,635 | closed | Bump notebook from 6.4.1 to 6.4.10 in /examples/research_projects/visual_bert | Bumps [notebook](http://jupyter.org) from 6.4.1 to 6.4.10.
[](https://docs.github.com/en/github/managing-security-vulnerabilities/about-dependabot-security-updates#about-compatibility-scores)
Dependabot will resolve any conflicts with this PR as long as you don't alter it yourself. You can also trigger a rebase manually by commenting `@dependabot rebase`.
[//]: # (dependabot-automerge-start)
[//]: # (dependabot-automerge-end)
---
<details>
<summary>Dependabot commands and options</summary>
<br />
You can trigger Dependabot actions by commenting on this PR:
- `@dependabot rebase` will rebase this PR
- `@dependabot recreate` will recreate this PR, overwriting any edits that have been made to it
- `@dependabot merge` will merge this PR after your CI passes on it
- `@dependabot squash and merge` will squash and merge this PR after your CI passes on it
- `@dependabot cancel merge` will cancel a previously requested merge and block automerging
- `@dependabot reopen` will reopen this PR if it is closed
- `@dependabot close` will close this PR and stop Dependabot recreating it. You can achieve the same result by closing it manually
- `@dependabot ignore this major version` will close this PR and stop Dependabot creating any more for this major version (unless you reopen the PR or upgrade to it yourself)
- `@dependabot ignore this minor version` will close this PR and stop Dependabot creating any more for this minor version (unless you reopen the PR or upgrade to it yourself)
- `@dependabot ignore this dependency` will close this PR and stop Dependabot creating any more for this dependency (unless you reopen the PR or upgrade to it yourself)
- `@dependabot use these labels` will set the current labels as the default for future PRs for this repo and language
- `@dependabot use these reviewers` will set the current reviewers as the default for future PRs for this repo and language
- `@dependabot use these assignees` will set the current assignees as the default for future PRs for this repo and language
- `@dependabot use this milestone` will set the current milestone as the default for future PRs for this repo and language
You can disable automated security fix PRs for this repo from the [Security Alerts page](https://github.com/huggingface/transformers/network/alerts).
</details> | 04-06-2022 19:31:52 | 04-06-2022 19:31:52 | _The documentation is not available anymore as the PR was closed or merged._ |
transformers | 16,634 | closed | Bump notebook from 6.4.1 to 6.4.10 in /examples/research_projects/lxmert | Bumps [notebook](http://jupyter.org) from 6.4.1 to 6.4.10.
[Dependabot compatibility score](https://docs.github.com/en/github/managing-security-vulnerabilities/about-dependabot-security-updates#about-compatibility-scores)
Dependabot will resolve any conflicts with this PR as long as you don't alter it yourself. You can also trigger a rebase manually by commenting `@dependabot rebase`.
[//]: # (dependabot-automerge-start)
[//]: # (dependabot-automerge-end)
---
<details>
<summary>Dependabot commands and options</summary>
<br />
You can trigger Dependabot actions by commenting on this PR:
- `@dependabot rebase` will rebase this PR
- `@dependabot recreate` will recreate this PR, overwriting any edits that have been made to it
- `@dependabot merge` will merge this PR after your CI passes on it
- `@dependabot squash and merge` will squash and merge this PR after your CI passes on it
- `@dependabot cancel merge` will cancel a previously requested merge and block automerging
- `@dependabot reopen` will reopen this PR if it is closed
- `@dependabot close` will close this PR and stop Dependabot recreating it. You can achieve the same result by closing it manually
- `@dependabot ignore this major version` will close this PR and stop Dependabot creating any more for this major version (unless you reopen the PR or upgrade to it yourself)
- `@dependabot ignore this minor version` will close this PR and stop Dependabot creating any more for this minor version (unless you reopen the PR or upgrade to it yourself)
- `@dependabot ignore this dependency` will close this PR and stop Dependabot creating any more for this dependency (unless you reopen the PR or upgrade to it yourself)
- `@dependabot use these labels` will set the current labels as the default for future PRs for this repo and language
- `@dependabot use these reviewers` will set the current reviewers as the default for future PRs for this repo and language
- `@dependabot use these assignees` will set the current assignees as the default for future PRs for this repo and language
- `@dependabot use this milestone` will set the current milestone as the default for future PRs for this repo and language
You can disable automated security fix PRs for this repo from the [Security Alerts page](https://github.com/huggingface/transformers/network/alerts).
</details> | 04-06-2022 19:22:24 | 04-06-2022 19:22:24 | _The documentation is not available anymore as the PR was closed or merged._ |
transformers | 16,633 | closed | Update audio examples with MInDS-14 | This PR updates the audio examples in the docs with the smaller more lightweight MInDS-14 dataset. | 04-06-2022 19:00:52 | 04-06-2022 19:00:52 | _The documentation is not available anymore as the PR was closed or merged._ |
transformers | 16,632 | closed | [modeling_utils] rearrange text | this PR moves the `_fast_init`-related note close to the argument itself as it got pushed away.
@sgugger | 04-06-2022 15:03:55 | 04-06-2022 15:03:55 | _The documentation is not available anymore as the PR was closed or merged._ |
transformers | 16,631 | closed | Allow the same config several times in the auto mapping | # What does this PR do?
Currently you can't map a model type to a config class that is already used by another model type. For instance, TAPEX would like to map the `tapex` model type to `BartConfig`.
This PR fixes that.
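Concretely (illustration only, not the diff from this PR), the change means a duplicate use of a config class like the following becomes legal in the auto config mapping:
```python
# Illustrative excerpt: two model types sharing one config class in
# src/transformers/models/auto/configuration_auto.py (not the actual diff of this PR).
from collections import OrderedDict

CONFIG_MAPPING_NAMES = OrderedDict(
    [
        # ...
        ("bart", "BartConfig"),
        ("tapex", "BartConfig"),  # same config class mapped to a second model type
        # ...
    ]
)
```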
| 04-06-2022 13:59:34 | 04-06-2022 13:59:34 | _The documentation is not available anymore as the PR was closed or merged._ |
transformers | 16,630 | closed | Add semantic segmentation example script, no trainer | # What does this PR do?
This PR adds an example script regarding fine-tuning any model supported by the `AutoModelForSemanticSegmentation` API on a semantic segmentation dataset, including regularly pushing to the hub during training as well as WandB logging.
I switched to using Accelerate as I had a bug with the Trainer.
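For context, a minimal sketch of the kind of fine-tuning setup such a script wraps is shown below; the checkpoint, number of labels and dummy inputs are placeholders, not values taken from the example script.
```python
# Minimal, self-contained sketch of a semantic-segmentation forward pass with the
# AutoModelForSemanticSegmentation API; checkpoint and label count are assumptions.
import numpy as np
import torch
from PIL import Image

from transformers import AutoFeatureExtractor, AutoModelForSemanticSegmentation

checkpoint = "nvidia/mit-b0"  # placeholder backbone checkpoint
feature_extractor = AutoFeatureExtractor.from_pretrained(checkpoint)
model = AutoModelForSemanticSegmentation.from_pretrained(checkpoint, num_labels=150)

# dummy image and per-pixel label map standing in for a real dataset sample
image = Image.fromarray(np.zeros((512, 512, 3), dtype=np.uint8))
labels = torch.zeros((1, 512, 512), dtype=torch.long)

inputs = feature_extractor(images=image, return_tensors="pt")
outputs = model(pixel_values=inputs.pixel_values, labels=labels)
print(outputs.loss, outputs.logits.shape)
```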
To do:
- [ ] fix Tensorboard logs | 04-06-2022 13:33:34 | 04-06-2022 13:33:34 | The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_16630). All of your documentation changes will be reflected on that endpoint.<|||||>I've added all requested changes. As you can see, I'm updating the repo using commits rather than creating subfolders.<|||||>Updated the script to create subfolders like the other examples (and these aren't pushed to the hub based on the gitignore file).<|||||>Some issue with git, closing this PR in favor of a new one. |
transformers | 16,629 | closed | add a warning in `SpmConverter` for sentencepiece's model using the byte fallback feature | # What does this PR do?
This PR proposes 2 changes:
1. Updating the protobuf model of the sentencepiece models
2. Raise a warning when the sentencepiece tokenizer that is being converted by `SpmConverter` to a fast tokenizer uses the byte fallback feature - which is not yet implemented in HF tokenizers cf [this issue](https://github.com/huggingface/tokenizers/issues/929#issuecomment-1088525287).
For context, this PR is motivated by the PR #15529 for adding `DebertaV2TokenizerFast` because the [deberta-v3 tokenizer uses the byte fallback feature](https://github.com/huggingface/transformers/pull/15529#pullrequestreview-930000241) and we need a way to warn the user that the fast tokenizer will not be equivalent to the slow sentencepiece tokenizer. Personally, I think this warning is beneficial in a more general case than deberta-v3 because the byte fallback feature was introduced a long time ago in the sentencepiece library.
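As a rough sketch of what such a check could look like (this is not the code merged in this PR), one can parse the sentencepiece model proto and inspect the `byte_fallback` trainer option, which is only exposed once the vendored proto has been updated as described in point 1:
```python
# Rough sketch, not the merged implementation: warn when a sentencepiece model relies
# on byte fallback. Assumes the vendored proto (point 1 above) exposes `byte_fallback`.
import warnings

from transformers.utils import sentencepiece_model_pb2 as model_pb2


def warn_if_byte_fallback(spm_model_path: str) -> None:
    proto = model_pb2.ModelProto()
    with open(spm_model_path, "rb") as f:
        proto.ParseFromString(f.read())
    if getattr(proto.trainer_spec, "byte_fallback", False):
        warnings.warn(
            "The sentencepiece tokenizer being converted uses the byte fallback option, "
            "which is not implemented in the fast tokenizers: the converted fast "
            "tokenizer will not be exactly equivalent to the slow one."
        )
```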
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [x] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Would love to have your feedback @LysandreJik , @sgugger , @patrickvonplaten , @patil-suraj and/or @Narsil :hugs:
<!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @
If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**.
Please tag fewer than 3 people.
Models:
- albert, bert, xlm: @LysandreJik
- blenderbot, bart, marian, pegasus, encoderdecoder, t5: @patrickvonplaten, @patil-suraj
- longformer, reformer, transfoxl, xlnet: @patrickvonplaten
- fsmt: @stas00
- funnel: @sgugger
- gpt2: @patrickvonplaten, @LysandreJik
- rag: @patrickvonplaten, @lhoestq
- tensorflow: @LysandreJik
Library:
- benchmarks: @patrickvonplaten
- deepspeed: @stas00
- ray/raytune: @richardliaw, @amogkam
- text generation: @patrickvonplaten
- tokenizers: @n1t0, @LysandreJik
- trainer: @sgugger
- pipelines: @LysandreJik
Documentation: @sgugger
HF projects:
- datasets: [different repo](https://github.com/huggingface/datasets)
- rust tokenizers: [different repo](https://github.com/huggingface/tokenizers)
Examples:
- maintained examples (not research project or legacy): @sgugger, @patil-suraj
- research_projects/bert-loses-patience: @JetRunner
- research_projects/distillation: @VictorSanh
-->
| 04-06-2022 13:05:31 | 04-06-2022 13:05:31 | _The documentation is not available anymore as the PR was closed or merged._ |
transformers | 16,628 | closed | [DocTest] Doctest for Electra Pytorch | # What does this PR do?
Adds Doctest for Electra Pytorch
Issue: #16292
## Before submitting
- [x] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [x] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [x] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [x] Did you write any new necessary tests?
## Who can review?
@ydshieh
| 04-06-2022 11:24:54 | 04-06-2022 11:24:54 | _The documentation is not available anymore as the PR was closed or merged._<|||||>Hi @ydshieh,
The issue we discussed in issue #16292:
For PyTorch it's running fine;
only in the case of TensorFlow is it giving an issue.
I am sharing [notebook](https://colab.research.google.com/drive/1buzYMKgm3b8kfTcyVvnToJCFLqdnZDxM?usp=sharing) for reproducing the issue.<|||||>> Hi @ydshieh, The issue we discussed in Issue:16292 For PyTorch its running fine, only In case of TensorFlow its giving issue, I am sharing [notebook](https://colab.research.google.com/drive/1buzYMKgm3b8kfTcyVvnToJCFLqdnZDxM?usp=sharing) for reproducing the issue.
Hi, I am not able to open it. Could you make it public?<|||||>Hi @ydshieh,
I have updated the link<|||||>@bhadreshpsavani Thanks. It looks like PyTorch somehow made it to run, even though the indices are wrong and out of range.
It gave a strange loss value though.
I opened a PR to address this situation:
https://github.com/huggingface/transformers/pull/16648. Let's wait for it to be reviewed and merged before proceeding.
Thank you for your patience!<|||||>Hi @ydshieh,
When I used this code and rebased, at the end I messed up the commits. Is there any way to remove the earlier commits from this PR?
I think I need to close this PR and create a separate one<|||||>Hi, @bhadreshpsavani
I think closing this PR and creating a new one is the easiest way (considering the change in your PR is not so large).
Thank you for the effort.
BTW, I think you used `git merge` rather than `git rebase`. In general, `git rebase` could avoid this situation :-).
(There might be other ways of doing things, but I am not a real git expert)<|||||>I followed this,
```
git checkout main # or `master`, depends on your local clone
git fetch upstream
git pull upstream main # Hugging Face `transformers` renamed the default branch to `main` recently
git checkout your_working_branch_for_this_sprint
git rebase main # or `master`
```
In the end, there was a merge conflict, so I used something like `git rebase --continue` after solving the conflict and synced the changes. It got messed up!
I need to be more careful!
<|||||>> I followed this,
>
> ```
> git checkout main # or `master`, depends on your local clone
> git fetch upstream
> git pull upstream main # Hugging Face `transformers` renamed the default branch to `main` recently
> git checkout your_working_branch_for_this_sprint
> git rebase main # or `master`
> ```
>
> In the end, there was a merge conflict, so I used something like `git rebase --continue` after solving the conflict and synced the changes. It got messed up!
>
> I need to be more careful!
OK, conflicts are not an easy thing. |
transformers | 16,627 | open | Add missing tokenizer test files [:building_construction: in progress] | # Add missing tokenizer test files
Several tokenizers currently have no associated tests. I think that adding the test file for one of these tokenizers could be a very good way to make a first contribution to transformers.
## Tokenizers concerned
### not yet claimed
_none_
### claimed
- [ ] LED @nnlnr
- [ ] Flaubert @anmolsjoshi
- [ ] Electra @Rajathbharadwaj
- [ ] ConvBert @elusenji
- [ ] RemBert @IMvision12
- [ ] Splinter @ashwinjohn3
### with an ongoing PR
_none_
### with an accepted PR
- [x] Longformer @tgadeliya #17677
- [x] MobileBert @leondz #16896
- [x] RetriBert @mpoemsl #17017
## How to contribute?
1. Claim a tokenizer
a. Choose a tokenizer from the list of "not yet claimed" tokenizers
b. Check that no one in the messages for this issue has indicated that they care about this tokenizer
c. Put a message in the issue that you are handling this tokenizer
2. Create a local development setup (if you have not already done it)
I refer you to section ["start-contributing-pull-requests" of the Contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests) where everything is explained. Don't be afraid of step 5. For this contribution you will only need to test locally the tests you add.
3. Follow the instructions on the readme inside [the `templates/adding_a_missing_tokenization_test` folder](https://github.com/huggingface/transformers/tree/main/templates/adding_a_missing_tokenization_test) to generate the template with cookie cutter for the new test file you will be adding. Don't forget to move the new test file, at the end of the template generation, to the sub-folder named after the model for which you are adding the test file in the `tests` folder. Some details about the questionnaire, assuming that the lowercase name of the model is `brand_new_bert`:
- "has_slow_class": Set true if there is a `tokenization_brand_new_bert.py` file in the folder `src/transformers/models/brand_new_bert`
- "has_fast_class": Set true if there is a `tokenization_brand_new_bert_fast.py` file in the folder `src/transformers/models/brand_new_bert`.
- "slow_tokenizer_use_sentencepiece": Set true if the tokenizer defined in the `tokenization_brand_new_bert.py` file uses sentencepiece. If the tokenizer doesn't have a `tokenization_brand_new_bert.py` file, set False.
4. Complete the `setUp` method in the generated test file; you can take inspiration from how it is done for the other tokenizers (a minimal sketch is given right after this list).
5. Try to run all the added tests. It is possible that some tests will not pass, so it will be important to understand why. Sometimes the common test is not suited for a tokenizer, and sometimes a tokenizer can have a bug. You can also look at what is done in similar tokenizer tests; if there are big problems or you don't know what to do, we can discuss this in the PR (step 7.).
6. (Bonus) Try to get a good understanding of the tokenizer to add custom tests to the tokenizer
7. Open a PR with the new test file added, remember to fill in the PR title and message body (referencing this issue) and request a review from @LysandreJik and @SaulLu.
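As a point of reference for step 4, here is a minimal sketch of what a `setUp` for a wordpiece-based tokenizer (Splinter is used as the example) might look like, adapted from the existing BERT test. The toy vocabulary, the `test_rust_tokenizer` flag and the import paths are illustrative and should be adjusted to the tokenizer you claimed:
```python
# Illustrative sketch only -- adapt the imports, flags and toy vocabulary to your model.
import os
import unittest

from transformers import SplinterTokenizer
from transformers.models.splinter.tokenization_splinter import VOCAB_FILES_NAMES

from ..test_tokenization_common import TokenizerTesterMixin


class SplinterTokenizationTest(TokenizerTesterMixin, unittest.TestCase):
    tokenizer_class = SplinterTokenizer
    test_rust_tokenizer = False  # set True if a fast tokenizer exists for your model

    def setUp(self):
        super().setUp()
        # tiny toy vocabulary written to a temporary folder; note Splinter's extra
        # [QUESTION] special token
        vocab_tokens = [
            "[UNK]", "[CLS]", "[SEP]", "[PAD]", "[MASK]", "[QUESTION]",
            "want", "##want", "##ed", "wa", "un", "runn", "##ing", "low", "##er",
        ]
        self.vocab_file = os.path.join(self.tmpdirname, VOCAB_FILES_NAMES["vocab_file"])
        with open(self.vocab_file, "w", encoding="utf-8") as vocab_writer:
            vocab_writer.write("\n".join(vocab_tokens) + "\n")
```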
## Tips
Do not hesitate to read the questions / answers in this issue :newspaper: | 04-06-2022 10:07:24 | 04-06-2022 10:07:24 | Hi, I would like to add tests for `Longformer` tokenizer<|||||>@SaulLu I would like to add tests for Flaubert<|||||>Hey I would like to contribute for `Electra`,Pointers please!<|||||>Thank you all for offering your help!
@Rajathbharadwaj ,sure! what do you need help with? Do you need more details on any of the steps listed in the main post?<|||||>Hi, first time contributor here-could I add tests for ```Splinter```?<|||||>Is anyone else encountering this error with the cookiecutter command? my dev environment set up seemed to have went all fine...
Also I had run the command inside the ```tests/splinter``` directory

<|||||>@faiazrahman , thank you so much for working on this! Regarding your issue, if you're in the `tests/splinter` folder, can you try to run `cookiecutter ../../templates/adding_a_missing_tokenization_test/` ?
You should have a newly created folder `cookiecutter-template-BrandNewBERT` inside `tests/splinter`. :slightly_smiling_face:
If that's the case, you'll need after to do something like:
```
mv cookiecutter-template-BrandNewBERT/test_tokenization_brand_new_bert.py .
rm -r cookiecutter-template-BrandNewBERT/
```
Keep me posted :smile: <|||||>Thanks so much @SaulLu turns out it was due to not recognizing my installed cookiecutter so i sorted it out there. ๐ <|||||>Hi @anmolsjoshi, @tgadeliya, @Rajathbharadwaj and @farahdian,
Just a quick message to see how things are going for you and if you have any problems. If you do, please share them! :hugs: <|||||>Thanks @SaulLu ! I've been exploring the tokenization test files in the repo just trying to figure out which ones would be a good basis for writing a tokenization test for splinter... if you have any guidance on this it would be super helpful!<|||||>Hey @SaulLu my apologies, been a bit busy. I'll get started ASAP, however, I still didn't understand where exactly I should run the `cookie cutter`
Help on this would be helpful ๐ <|||||>Hi @farahdian ,
Thank you very much for the update! To know where you stand, have you done step 3)? Is it for step 4) that you are looking for a similar tokenizer? :slightly_smiling_face: <|||||>Hi @Rajathbharadwaj ,
Thank you for the update too!
> I still didn't understand where exactly I should run the cookie cutter
You can run the `cookie cutter` command anywhere as long as the command is followed by the path to the [folder `adding_a_missing_tokenization_test`](https://github.com/huggingface/transformers/tree/main/templates/adding_a_missing_tokenization_test) in the transformers repo that you have cloned locally.
When you run the command, it will create a new folder at the location you are. In this folder you will find a base for the python test file that you need to move inside the `tests/electra` folder of the transformers local clone. Once this file is moved, you can delete the folder that was created by the cookie cutter command.
Below is an example of the sequence of bash commands I would personally use:
```bash
(base) username@hostname:~$ cd ~/repos
(base) username@hostname:~/repos$ git clone [email protected]:huggingface/transformers.git
[Install my development setup]
(transformers-dev) username@hostname:~/repos$ cookiecutter transformers/templates/adding_a_missing_tokenization_test/
[Answer the questionnaire]
(transformers-dev) username@hostname:~/repos$ mv cookiecutter-template-Electra/test_tokenization_electra.py transformers/tests/Electra
(transformers-dev) username@hostname:~/repos$ rm -r cookiecutter-template-Electra/
```
Hope that'll help you :smile: <|||||>Appreciate your patience @SaulLu ! Yup I've done step 3 and generated a test tokenization file with cookiecutter. Now onto working on the setUp method ๐ <|||||>@farahdian , this is indeed a very good question: finding the closest tokenizer to draw inspiration from and identifying the important difference with that tokenizer is the most interesting part.
For that there are several ways to start:
1. Identify the high level features of the tokenizer by looking at the contents of the model's "reference" checkpoint files (listed inside the `PRETRAINED_VOCAB_FILES_MAP` global variables in the tokenizer's files) on the hub. A similar model would most likely store the tokenizer vocabulary in the same way (with only a `vocab` file, with both a `vocab` and a `merges` files, with a `sentencepiece` binary file or with only a `tokenizer.json` file).
2. Read the high level explanation of the model in the transformers documentation (e.g. for [Splinter](https://huggingface.co/docs/transformers/model_doc/splinter))
3. Read the paper corresponding to the model
4. Look at the implementation in transformers lib
5. Look at the original implementation of the model (often mentioned in the paper)
6. Look at the discussions on the PR in which the model was added
For the model you're in charge @farahdian:
- Transformers's doc mention that:
> Use [SplinterTokenizer](https://huggingface.co/docs/transformers/v4.18.0/en/model_doc/splinter#transformers.SplinterTokenizer) (rather than [BertTokenizer](https://huggingface.co/docs/transformers/v4.18.0/en/model_doc/bert#transformers.BertTokenizer)), as it already contains this special token. Also, its default behavior is to use this token when two sequences are given (for example, in the run_qa.py script).
- Splinter's paper mention that:
> Splinter-base shares the same architecture (transformer encoder (Vaswani et al., 2017)), **vocabulary (cased wordpieces)**, and number of parameters (110M) with SpanBERT-base (Joshi et al., 2020).
- And SpanBERT's paper mention that:
> We reimplemented BERTโs model and pre-training method in fairseq (Ott et al., 2019). We used the model configuration of BERT large as in Devlin et al. (2019) and also pre-trained all our models on the same corpus: BooksCorpus and English Wikipedia using **cased Wordpiece tokens**.
- and the vocabulary files of `bert-base-cased` ([vocab file](https://huggingface.co/bert-base-cased/raw/main/vocab.txt)) and of `splinter-base` ([vocab file](https://huggingface.co/tau/splinter-base/raw/main/vocab.txt)) look very similar
Given these mentions, it seems that Splinter's tokenizer is very similar to Bert's one. It would be interesting to confirm this impression and to understand all the differences between SplinterTokenizer and BertTokenizer so that it is well reflected in the test :slightly_smiling_face: <|||||>> Hi @Rajathbharadwaj ,
>
> Thank you for the update too!
>
> > I still didn't understand where exactly I should run the cookie cutter
>
> You can run the `cookie cutter` command anywhere as long as the command is followed by the path to the [folder `adding_a_missing_tokenization_test`](https://github.com/huggingface/transformers/tree/main/templates/adding_a_missing_tokenization_test) in the transformers repo that you have cloned locally.
>
> When you run the command, it will create a new folder at the location you are. In this folder you will find a base for the python test file that you need to move inside the `tests/electra` folder of the transformers local clone. Once this file is moved, you can delete the folder that was created by the cookie cutter command.
>
> Below is an example of the sequence of bash commands I would personally use:
>
> ```shell
> (base) username@hostname:~$ cd ~/repos
> (base) username@hostname:~/repos$ git clone [email protected]:huggingface/transformers.git
> [Install my development setup]
> (transformers-dev) username@hostname:~/repos$ cookiecutter transformers/templates/adding_a_missing_tokenization_test/
> [Answer the questionnaire]
> (transformers-dev) username@hostname:~/repos$ mv cookiecutter-template-Electra/test_tokenization_electra.py transformers/tests/Electra
> (transformers-dev) username@hostname:~/repos$ rm -r cookiecutter-template-Electra/
> ```
>
> Hope that'll help you smile
Thank you so much @SaulLu
I understood now, however, I am skeptical about `slow_tokenizer_use_sentencepiece` question, but I set it to True as it had the `tokenization_electra.py` file but I didn't understand
> "Set true if the tokenizer defined in the tokenization_brand_new_bert.py file uses sentencepiece"
So did I select correctly? Or should I set it to False? Apologies for asking so many questions ๐
However now I've started adding tests for `Electra` will keep you posted if I run into something I don't understand.
Thanks for helping once again!<|||||>Hi @SaulLu,
I think my case is the easiest one, because the `Longformer` model actually uses the same tokenizer as `RoBERTa`, with no differences. So I adapted the tests (small refactor and changes) from the `RoBERTa` tokenizer and prepared a branch with the tests. Nevertheless, I really want to dive deeper and study the code of `TokenizerTesterMixin`, and if after that I find some untested behaviour, I will add new tests.
But I have one doubt that you can resolve: are you expecting the `Longformer` tests to have a different toy tokenizer example than the `RoBERTa` tests? Or should I write my own tests from scratch?
<|||||>@Rajathbharadwaj , I'm happy to help! Especially as your questions will surely be useful for other people
> however, I am skeptical about slow_tokenizer_use_sentencepiece question, but I set it to True as it had the tokenization_electra.py file but I didn't understand
> "Set true if the tokenizer defined in the tokenization_brand_new_bert.py file uses sentencepiece"
So did I select correctly? Or should I set it to False? Apologies for asking so many questions smile
Some `XxxTokenizer` (without the Fast at the end, implemented in the `tokenization_xxx.py` file), use a backend based on the [sentencepiece](https://github.com/google/sentencepiece) library. For example `T5Tokenizer` uses a backend based on sentencepiece: you can see this import at the beginning of the `tokenization_t5.py` file:
https://github.com/huggingface/transformers/blob/3dd57b15c561bc26eb0cde8e6c766e7533284b0f/src/transformers/models/t5/tokenization_t5.py#L24
and you can see that the backend is instantiated here:
https://github.com/huggingface/transformers/blob/3dd57b15c561bc26eb0cde8e6c766e7533284b0f/src/transformers/models/t5/tokenization_t5.py#L151-L152
On the contrary, [BertTokenizer](https://github.com/huggingface/transformers/blob/main/src/transformers/models/bert/tokenization_bert.py) for example does not use a sentencepiece backend.
I hope this helped you!<|||||>Hi @tgadeliya ,
Thanks for the update!
> But I think I have one doubt, that you can resolve. Are you anticipating from Longformer tests to have different toy tokenizer example than in RoBERTa tests? Or maybe I should write my own tests from scratch?
In your case, I wouldn't be surprised if Longformer uses the same tokenizer as RoBERTa. In this case, it seems legitimate to use the same toy tokenizer. Maybe the only check you can do to confirm this hypothesis is comparing the vocabularies of the 'main" checkpoints of both models:
```bash
!wget https://huggingface.co/allenai/longformer-base-4096/raw/main/merges.txt
!wget https://huggingface.co/allenai/longformer-base-4096/raw/main/vocab.json
!wget https://huggingface.co/roberta-base/raw/main/merges.txt
!wget https://huggingface.co/roberta-base/raw/main/vocab.json
!diff merges.txt merges.txt.1
!diff vocab.json vocab.json.1
```
Turn out the result confirms it!<|||||>Hi, I'm happy to take `MobileBert`<|||||>I'd like to work on ConvBert.<|||||>> Identify the high level features of the tokenizer by looking at the contents of the model's "reference" checkpoint files (listed inside the PRETRAINED_VOCAB_FILES_MAP global variables in the tokenizer's files) on the hub. A similar model would most likely store the tokenizer vocabulary in the same way (with only a vocab file, with both a vocab and a merges files, with a sentencepiece binary file or with only a tokenizer.json file).
@SaulLu I'm having trouble identifying ConvBert's 'reference' checkpoint files on the hub. Would you kindly provide more guidance on this?<|||||>Hi @elusenji ,
In the `src/transformers/models/convbert/tokenization_convbert.py` file you can find the global variable `PRETRAINED_VOCAB_FILES_MAP`:
https://github.com/huggingface/transformers/blob/6d90d76f5db344e333ec184cc6f414abe2aa6559/src/transformers/models/convbert/tokenization_convbert.py#L24-L30
In particular [YituTech/conv-bert-base](https://huggingface.co/YituTech/conv-bert-base) is a reference checkpoint for ConvBert.
Is this what you were having trouble with? :relaxed: <|||||>Yes, this helps! <|||||>Hi @SaulLu, I am happy to write tests for `RemBert`. Thanks.<|||||>Hi @SaulLu, I would like to work on `RetriBert`.<|||||>Hi @SaulLu, I'd be happy to work on `LED` - Thanks!!<|||||>Thanks @SaulLu, I'm working on this `RemBert` :)<|||||>Hello to all!
The first two PRs have been merged into master! Many thanks to @leondz and @mpoemsl! :confetti_ball:
@nnlnr, @anmolsjoshi, @tgadeliya, @Rajathbharadwaj, @farahdian, @elusenji, and @danhphan, I wanted to take this opportunity to ask you how things are going for you? Are you experiencing any particular difficulties?<|||||>@SaulLu, I've made some progress. Would it be okay to send in a work-in-progress pull request?<|||||>^was wondering if I could do the same, could use a second pair of eyes on it <|||||>Yes sure! <|||||>Hi @SaulLu
Apologies for the delayed response - I've been making some progress with ```LED``` and will be hopefully submitting a WIP-PR in the coming week. Thanks for following up<|||||>Hi @nnlnr, @anmolsjoshi, @Rajathbharadwaj, @elusenji and @danhphan,
I come to the news to know how the integration of the tests is going for you :hugs: <|||||>Hi @SaulLu, Can I work on Splinter if no one is working it? I believe it's not claimed yet<|||||>Hi @ashwinjohn3,
Absolutely! Don't hesitate if you are having difficulties<|||||>@SaulLu Thank you so much. Will do :)<|||||>Hi @SaulLu , sorry for late response and being quite slow. I am still working on RemBert and will try to finish it soon in the coming weeks. Thank you.<|||||>@SaulLu are there any tokenizers left???<|||||>Hi @IMvision12, I am busy on the deadline of a couple of other projects, so can you work on `RemBert`? Thanks!<|||||>Yeah sure @danhphan Thanks. <|||||>Thank you @IMvision12 !<|||||>Seems like a bit late to the party ๐
. Is there any tokenizer not listed here that I can write tests for? Or maybe if some tokenizer becomes available here. Please let me know @SaulLu I would love to contribute ๐<|||||>Unfortunately, I don't have much time left to help with transformers now. But let me ping @ArthurZucker for visibility<|||||>Hey @y3sar thanks for wanting to contribute. I think that the RemBert tests PR was close, you can probably take that over if you want!
Other tests that might be missing:
- [ ] ./tests/models/flaubert
- [ ] ./tests/models/convbert
- [ ] ./tests/models/splinter
- [ ] ./tests/models/gpt_neox
- [ ] ./tests/models/rembert<|||||>@ArthurZucker thanks for your reply. I will start working on RemBert tests. |
transformers | 16,626 | closed | [Docs] Correct quicktour minds14 dataset | # What does this PR do?
<!--
Congratulations! You've made it this far! You're not quite done yet though.
Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution.
Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change.
Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost.
-->
<!-- Remove if not applicable -->
Fixing a typo
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
<!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @
If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**.
Please tag fewer than 3 people.
Models:
- albert, bert, xlm: @LysandreJik
- blenderbot, bart, marian, pegasus, encoderdecoder, t5: @patrickvonplaten, @patil-suraj
- longformer, reformer, transfoxl, xlnet: @patrickvonplaten
- fsmt: @stas00
- funnel: @sgugger
- gpt2: @patrickvonplaten, @LysandreJik
- rag: @patrickvonplaten, @lhoestq
- tensorflow: @LysandreJik
Library:
- benchmarks: @patrickvonplaten
- deepspeed: @stas00
- ray/raytune: @richardliaw, @amogkam
- text generation: @patrickvonplaten
- tokenizers: @n1t0, @LysandreJik
- trainer: @sgugger
- pipelines: @LysandreJik
Documentation: @sgugger
HF projects:
- datasets: [different repo](https://github.com/huggingface/datasets)
- rust tokenizers: [different repo](https://github.com/huggingface/tokenizers)
Examples:
- maintained examples (not research project or legacy): @sgugger, @patil-suraj
- research_projects/bert-loses-patience: @JetRunner
- research_projects/distillation: @VictorSanh
-->
| 04-06-2022 08:16:09 | 04-06-2022 08:16:09 | _The documentation is not available anymore as the PR was closed or merged._ |
transformers | 16,625 | closed | Moved functions to pytorch_utils.py | # What does this PR do?
Moved all the pruning helpers and apply_chunking_to_forward to pytorch_utils.py
Fixes #15543
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [x] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
@patrickvonplaten @sgugger | 04-06-2022 06:49:35 | 04-06-2022 06:49:35 | _The documentation is not available anymore as the PR was closed or merged._<|||||>@patrickvonplaten I'm new to contributing to this repo. What other updates are required?<|||||>Those should be removed entirely honestly (components should be in model files, not modeling_utils), so I care little where they are :-)<|||||>@patrickvonplaten won't moving Conv1D back to modeling_utils.py cause circular imports?<|||||>@patrickvonplaten any thoughts?<|||||>@sgugger any more changes to complete on this PR?<|||||>Good to go for me!<|||||>Waiting for final review from @sgugger <|||||>I'm good on my side, was waiting for you Patrick ;-) |
transformers | 16,624 | closed | fix `rum_clm.py` seeking text column name twice | # What does this PR do?
fix `rum_clm.py` seeking text column name twice
<!--
Congratulations! You've made it this far! You're not quite done yet though.
Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution.
Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change.
Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost.
-->
<!-- Remove if not applicable -->
## Before submitting
- [x] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
<!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @
If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**.
Please tag fewer than 3 people.
Models:
- albert, bert, xlm: @LysandreJik
- blenderbot, bart, marian, pegasus, encoderdecoder, t5: @patrickvonplaten, @patil-suraj
- longformer, reformer, transfoxl, xlnet: @patrickvonplaten
- fsmt: @stas00
- funnel: @sgugger
- gpt2: @patrickvonplaten, @LysandreJik
- rag: @patrickvonplaten, @lhoestq
- tensorflow: @LysandreJik
Library:
- benchmarks: @patrickvonplaten
- deepspeed: @stas00
- ray/raytune: @richardliaw, @amogkam
- text generation: @patrickvonplaten
- tokenizers: @n1t0, @LysandreJik
- trainer: @sgugger
- pipelines: @LysandreJik
Documentation: @sgugger
HF projects:
- datasets: [different repo](https://github.com/huggingface/datasets)
- rust tokenizers: [different repo](https://github.com/huggingface/tokenizers)
Examples:
- maintained examples (not research project or legacy): @sgugger, @patil-suraj
- research_projects/bert-loses-patience: @JetRunner
- research_projects/distillation: @VictorSanh
-->
| 04-06-2022 06:09:06 | 04-06-2022 06:09:06 | _The documentation is not available anymore as the PR was closed or merged._ |
transformers | 16,623 | closed | Moved functions and classes to pytorch_utils.py | # What does this PR do?
Moved all the pruning helpers, apply_chunking_to_forward, get_parameter_device and get_parameter_dtype to pytorch_utils.py
Fixes #15543
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [x] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
@patrickvonplaten @sgugger | 04-06-2022 05:39:46 | 04-06-2022 05:39:46 | |
transformers | 16,622 | closed | BART can only generate a maximum of 20 tokens | ## Environment info
<!-- You can run the command `transformers-cli env` and copy-and-paste its output below.
Don't forget to fill out the missing fields in that output! -->
- `transformers` version: 4.18.0.dev0
- Platform: Linux-5.11.0-1018-gcp-x86_64-with-glibc2.31
- Python version: 3.10.4
- Huggingface_hub version: 0.4.0
- PyTorch version (GPU?): 1.11.0+cu102 (False)
- Tensorflow version (GPU?): not installed (NA)
- Flax version (CPU?/GPU?/TPU?): 0.4.1 (tpu)
- Jax version: 0.3.4
- JaxLib version: 0.3.2
- Using GPU in script?: No
- Using distributed or parallel set-up in script?: No
### Who can help
@patil-suraj @patrickvonplaten
## Information
Model I am using: BART
## To reproduce
Steps to reproduce the behavior:
```python
from transformers import BartTokenizer, BartForConditionalGeneration
tokenizer = BartTokenizer.from_pretrained('facebook/bart-base')
model = BartForConditionalGeneration.from_pretrained('facebook/bart-base')
sentences = ['At the launch of the latest report by the Intergovernmental Panel on Climate Change, on the mitigation of climate change, the UN Secretary-General called for an urgent shift of investments and subsidies from fossil fuels to renewable energy, warning that investing in new fossil fuels infrastructure is moral and economic madness.']
inputs = tokenizer(sentences, return_tensors='pt')
print('Input shape:', inputs.input_ids.shape)
generate_ids = model.generate(inputs.input_ids, num_beams=5, min_length=50)
print('Generated shape:', generate_ids.shape)
print(tokenizer.batch_decode(generate_ids, skip_special_tokens=True, clean_up_tokenization_spaces=False)[0])
```
Output:
```
Input shape: torch.Size([1, 60])
Generated shape: torch.Size([1, 20])
At the launch of the latest report by the Intergovernmental Panel on Climate Change, on
```
## Expected behavior
The output should not be truncated.
## Actual behavior
The output is truncated.
Note that the output is truncated even if `min_length=50` is specified. | 04-06-2022 03:52:01 | 04-06-2022 03:52:01 | Hi @ayaka14732 ๐ That happens because the stopping conditions take precedence over anything else. The default for `max_length` is `20`, so that's why you see 20 generated tokens. In your example, if you rewrite the generate line into `generate_ids = model.generate(inputs.input_ids, num_beams=5, min_length=50, max_length=100)`, you'll get the results you expect.
@patrickvonplaten @patil-suraj should we raise an exception in this case? (`min_length` > `max_length`) <|||||>@gante, yes this would work for me! Let's maybe do this in `generate()` before we jump into the sub-generation methods<|||||>@ayaka14732 if you pull from master (or install `transformers==4.19.0.dev0`), you shall see an informative Exception if you try to run your original script.
Thank you for reporting this issue :D |
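Continuing the reproduction script from the issue above, the workaround described in the thread simply makes both stopping bounds explicit:
```python
# Continuation of the issue's snippet (model, inputs and tokenizer defined above):
# with max_length set explicitly, the default of 20 no longer truncates the output.
generate_ids = model.generate(inputs.input_ids, num_beams=5, min_length=50, max_length=100)
print("Generated shape:", generate_ids.shape)  # now up to 100 tokens instead of 20
print(tokenizer.batch_decode(generate_ids, skip_special_tokens=True, clean_up_tokenization_spaces=False)[0])
```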
transformers | 16,621 | closed | [modeling_utils] typo | a small formatting typo fix
@sgugger | 04-06-2022 03:06:13 | 04-06-2022 03:06:13 | _The documentation is not available anymore as the PR was closed or merged._ |
transformers | 16,620 | closed | state.best_metric does not update in EarlyStoppingCallback | https://github.com/huggingface/transformers/blob/b18dfd95e1f60ae65a959a7b255fc06522170d1b/src/transformers/trainer_callback.py#L534
I print the `state.best_metric`, but it is always None. I wonder whether the `state.best_metric` should be updated if the condition is satisfied, like the following:
```
if state.best_metric is None or (
operator(metric_value, state.best_metric)
and abs(metric_value - state.best_metric) > self.early_stopping_threshold
):
self.early_stopping_patience_counter = 0
state.best_metric = metric_value
``` | 04-06-2022 00:03:02 | 04-06-2022 00:03:02 | No, the best metric is updated by the `Trainer` [here](https://github.com/huggingface/transformers/blob/51fa7191b10a13f655d7ab19c7ea10e10078d668/src/transformers/trainer.py#L1758), it's not this callback responsibility to use it.
As mentioned in the [docstring](https://github.com/huggingface/transformers/blob/b18dfd95e1f60ae65a959a7b255fc06522170d1b/src/transformers/trainer_callback.py#L521), are you sure you set `load_best_model_at_end` to `True`?<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored.<|||||>I have set `load_best_model_at_end` to `True`, but it doesn't work. The following is my code:
```
args_list=[...]
parser = HfArgumentParser((ModelArguments, DataTrainingArguments, Seq2SeqTrainingArguments))
model_args, data_args, training_args = parser.parse_args_into_dataclasses(args_list)
training_args.load_best_model_at_end = True
training_args.metric_for_best_model = 'eval_rouge-l'
training_args.greater_is_better = True
```
However, when I add `control.should_save` in https://github.com/huggingface/transformers/blob/b18dfd95e1f60ae65a959a7b255fc06522170d1b/src/transformers/trainer_callback.py#L531
it works. Just like this:
```
def check_metric_value(self, args, state, control, metric_value):
# best_metric is set by code for load_best_model
operator = np.greater if args.greater_is_better else np.less
if state.best_metric is None or (
operator(metric_value, state.best_metric)
and abs(metric_value - state.best_metric) > self.early_stopping_threshold
):
self.early_stopping_patience_counter = 0
control.should_save = True
else:
self.early_stopping_patience_counter += 1
control.should_save = False
```<|||||>I am facing the same issue (`transformers==4.20.0`), again despite setting `load_best_model_at_end` to `True`. What did the trick for me is also setting the `save_steps` argument in `TrainingArguments` (to a multiplicative of `eval_steps`). This makes sense since [this line in trainer.py](https://github.com/huggingface/transformers/blob/51fa7191b10a13f655d7ab19c7ea10e10078d668/src/transformers/trainer.py#L1635) only gets executed when reaching a step on which the model should be saved.
I wonder though if this is the desired behaviour. Although I agree with @sgugger that the `best_metric` value should be updated in trainer and not in the callback, in the current behaviour it only starts monitoring the early stopping values **after** saving the model for the first time. In my case, it sort of forces me to save model checkpoints just to get the early stopping going.
One solution would be to set an initial `best_metric` value of +/-INF (currently `None`). Wonder what you think/if you agree.
<|||||>I'm happy to look at a PR :-)<|||||>Cool, I'll be working on one!<|||||>me too<|||||>how to set?<|||||>> I have set `load_best_model_at_end` to `True`, but it doesn't work. The following is my code:
>
> ```
> args_list=[...]
> parser = HfArgumentParser((ModelArguments, DataTrainingArguments, Seq2SeqTrainingArguments))
> model_args, data_args, training_args = parser.parse_args_into_dataclasses(args_list)
> training_args.load_best_model_at_end = True
> training_args.metric_for_best_model = 'eval_rouge-l'
> training_args.greater_is_better = True
> ```
>
> However, when I add `control.should_save` in
>
> https://github.com/huggingface/transformers/blob/b18dfd95e1f60ae65a959a7b255fc06522170d1b/src/transformers/trainer_callback.py#L531
>
>
> it works. Just like this:
> ```
> def check_metric_value(self, args, state, control, metric_value):
> # best_metric is set by code for load_best_model
> operator = np.greater if args.greater_is_better else np.less
> if state.best_metric is None or (
> operator(metric_value, state.best_metric)
> and abs(metric_value - state.best_metric) > self.early_stopping_threshold
> ):
> self.early_stopping_patience_counter = 0
> control.should_save = True
> else:
> self.early_stopping_patience_counter += 1
> control.should_save = False
> ```
not work for me |
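To make the "+/- inf default" idea discussed above concrete, one possible shape of the change (a sketch, not the fix that was merged) would be to treat an unset `best_metric` as the worst possible value inside `EarlyStoppingCallback.check_metric_value`:
```python
# Sketch of the +/- inf proposal discussed in this thread; not the merged implementation.
import numpy as np


def check_metric_value(self, args, state, control, metric_value):
    operator = np.greater if args.greater_is_better else np.less
    best = state.best_metric
    if best is None:
        # start monitoring immediately instead of waiting for the Trainer to set best_metric
        best = -np.inf if args.greater_is_better else np.inf
    if operator(metric_value, best) and abs(metric_value - best) > self.early_stopping_threshold:
        self.early_stopping_patience_counter = 0
    else:
        self.early_stopping_patience_counter += 1
```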
transformers | 16,619 | closed | Added Annotations for PyTorch models | # What does this PR do?
Adds annotations as described in https://github.com/huggingface/transformers/issues/16059:
* Updated type annotations for CTRL (PT), RAG(PT)
* added type annotations for FSMT(PT), LED(PT), M2M(PT), MPNet(PT) , Nystromformer(PT), OpenAI(PT)
## Before submitting
- [x] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
@sgugger | 04-05-2022 22:03:34 | 04-05-2022 22:03:34 | _The documentation is not available anymore as the PR was closed or merged._<|||||>@sgugger not sure why the tests are failing. I'm pretty sure they aren't related to added annotations<|||||>Thanks! We just need a last `make style` on your branch for the code quality check :-) |
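For context on the PR above, the general pattern these annotation PRs follow for a model's `forward` signature looks roughly like this (illustrative, not copied from any of the listed models):
```python
# Illustrative pattern for type-annotated forward signatures; argument names are generic.
from typing import Optional, Tuple, Union

import torch

from transformers.modeling_outputs import BaseModelOutput


def forward(
    self,
    input_ids: Optional[torch.LongTensor] = None,
    attention_mask: Optional[torch.FloatTensor] = None,
    inputs_embeds: Optional[torch.FloatTensor] = None,
    output_attentions: Optional[bool] = None,
    output_hidden_states: Optional[bool] = None,
    return_dict: Optional[bool] = None,
) -> Union[Tuple, BaseModelOutput]:
    ...
```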
transformers | 16,618 | closed | Can't load the model for 'bert-base-uncased'. | ## Environment info
<!-- You can run the command `transformers-cli env` and copy-and-paste its output below.
Don't forget to fill out the missing fields in that output! -->
- `transformers` version: 4.17.0
- Platform: Linux
- Python version: 3.9
- PyTorch version (GPU?): 1.9.1 (yes)
- Tensorflow version (GPU?):
- Using GPU in script?: No
- Using distributed or parallel set-up in script?: No
Error:
When I used "BertModel.from_pretrained", it would show "raise EnvironmentError(
OSError: Can't load the model for 'bert-base-uncased'. If you were trying to load it from 'https://huggingface.co/models', make sure you don't have a local directory with the same name. Otherwise, make sure 'bert-base-uncased' is the correct path to a directory containing a file named pytorch_model.bin, tf_model.h5, model.ckpt or flax_model.msgpack." | 04-05-2022 21:06:28 | 04-05-2022 21:06:28 | Hi @xishanhan ๐ In order for us to pinpoint the issue and help you, we need a script that reproduces it. Also, for completeness -- can you confirm that you do **_not_** have a local folder called `bert-base-uncased`?<|||||>> Hi @xishanhan ๐ In order for us to pinpoint the issue and help you, we need a script that reproduces it. Also, for completeness -- can you confirm that you do **_not_** have a local folder called `bert-base-uncased`?
Thanks for your reply! I do not have a local folder called bert-base-uncased.
I tried this https://github.com/alirezazareian/ovr-cnn/blob/master/ipynb/003.ipynb to split COCO datasets, and it used BERT to embed the names of the classes.
Because my transformer version is 4.17.0, I changed "from transformers.tokenization_bert import BasicTokenizer" to "from transformers.models.bert.tokenization_bert import BasicTokenizer" when using BasicTokenizer, otherwise "no module named transformers.tokenization_bert" would appear. I don't know if this change affected something and caused the error.<|||||>Hi @xishanhan -- precision is important for debugging. Without an exact script to reproduce, I'm afraid I won't be able to help :)<|||||>Do you mean this? It's the py file I run and it needs the source code from https://github.com/alirezazareian/ovr-cnn to import.
```
import json
import numpy as np
import torch
from maskrcnn_benchmark.config import cfg
from maskrcnn_benchmark.modeling.language_backbone.transformers import BERT
with open('../datasets/coco/annotations/instances_train2017.json', 'r') as fin:
coco_train_anno_all = json.load(fin)
with open('../datasets/coco/annotations/instances_train2017.json', 'r') as fin:
coco_train_anno_seen = json.load(fin)
with open('../datasets/coco/annotations/instances_train2017.json', 'r') as fin:
coco_train_anno_unseen = json.load(fin)
with open('../datasets/coco/annotations/instances_val2017.json', 'r') as fin:
coco_val_anno_all = json.load(fin)
with open('../datasets/coco/annotations/instances_val2017.json', 'r') as fin:
coco_val_anno_seen = json.load(fin)
with open('../datasets/coco/annotations/instances_val2017.json', 'r') as fin:
coco_val_anno_unseen = json.load(fin)
with open('../ovr-cnn-master/mscoco_seen_classes.json', 'r') as fin:
labels_seen = json.load(fin)
with open('../ovr-cnn-master/mscoco_unseen_classes.json', 'r') as fin:
labels_unseen = json.load(fin)
print(len(labels_seen), len(labels_unseen))
labels_all = [item['name'] for item in coco_val_anno_all['categories']]
set(labels_seen) - set(labels_all)
set(labels_unseen) - set(labels_all)
class_id_to_split = {}
class_name_to_split = {}
for item in coco_val_anno_all['categories']:
if item['name'] in labels_seen:
class_id_to_split[item['id']] = 'seen'
class_name_to_split[item['name']] = 'seen'
elif item['name'] in labels_unseen:
class_id_to_split[item['id']] = 'unseen'
class_name_to_split[item['name']] = 'unseen'
class_name_to_glove = {}
with open('../datasets/glove_6B/glove.6B.300d.txt', 'r') as fin:
for row in fin:
row_tk = row.split()
if row_tk[0] in class_name_to_split:
class_name_to_glove[row_tk[0]] = [float(num) for num in row_tk[1:]]
bert = BERT(cfg)
_ = bert.to('cuda')
class_name_to_bertemb = {}
for c in class_name_to_split:
if c not in bert.tokenizer.vocab:
print(f'{c} not found')
continue
cid = bert.tokenizer.vocab[c]
class_name_to_bertemb[c] = bert.embeddings[cid]
class_list = list(class_name_to_split.keys())
encoded_class_list = bert(class_list)
mask = (1 - encoded_class_list['special_tokens_mask']).to(torch.float32)
mask.sum(-1)
embeddings = (encoded_class_list['input_embeddings'] * mask[:, :, None]).sum(1) / mask.sum(1)[:, None]
embeddings = embeddings.cpu().numpy()
print(embeddings.shape)
class_name_to_bertemb = {}
for c, emb in zip(class_list, embeddings.tolist()):
class_name_to_bertemb[c] = emb
print(len(class_name_to_bertemb), len(class_name_to_glove), len(class_name_to_split))
def filter_annotation(anno_dict, split_name_list):
filtered_categories = []
for item in anno_dict['categories']:
if class_id_to_split.get(item['id']) in split_name_list:
item['embedding'] = {}
item['embedding']['GloVE'] = class_name_to_glove[item['name']]
item['embedding']['BertEmb'] = class_name_to_bertemb[item['name']]
item['split'] = class_id_to_split.get(item['id'])
filtered_categories.append(item)
anno_dict['categories'] = filtered_categories
filtered_images = []
filtered_annotations = []
useful_image_ids = set()
for item in anno_dict['annotations']:
if class_id_to_split.get(item['category_id']) in split_name_list:
filtered_annotations.append(item)
useful_image_ids.add(item['image_id'])
for item in anno_dict['images']:
if item['id'] in useful_image_ids:
filtered_images.append(item)
anno_dict['annotations'] = filtered_annotations
anno_dict['images'] = filtered_images
filter_annotation(coco_train_anno_seen, ['seen'])
filter_annotation(coco_train_anno_unseen, ['unseen'])
filter_annotation(coco_train_anno_all, ['seen', 'unseen'])
filter_annotation(coco_val_anno_seen, ['seen'])
filter_annotation(coco_val_anno_unseen, ['unseen'])
filter_annotation(coco_val_anno_all, ['seen', 'unseen'])
print(len(coco_val_anno_seen['categories']), len(coco_val_anno_unseen['categories']), len(coco_val_anno_all['categories']))
with open('../datasets/coco/zero-shot/instances_train2017_seen_2.json', 'w') as fout:
json.dump(coco_train_anno_seen, fout)
with open('../datasets/coco/zero-shot/instances_train2017_unseen_2.json', 'w') as fout:
json.dump(coco_train_anno_unseen, fout)
with open('../datasets/coco/zero-shot/instances_train2017_all_2.json', 'w') as fout:
json.dump(coco_train_anno_all, fout)
with open('../datasets/coco/zero-shot/instances_val2017_seen_2.json', 'w') as fout:
json.dump(coco_val_anno_seen, fout)
with open('../datasets/coco/zero-shot/instances_val2017_unseen_2.json', 'w') as fout:
json.dump(coco_val_anno_unseen, fout)
with open('../datasets/coco/zero-shot/instances_val2017_all_2.json', 'w') as fout:
json.dump(coco_val_anno_all, fout)
```<|||||>@xishanhan that is better :) But I am probably still missing information. I apologize for being assertive about this, providing support without an exact script is near impossible in a large library like this one (our [issues guide](https://github.com/huggingface/transformers/blob/main/ISSUES.md) provides more detail).
One of two things is happening:
1. You are using exactly that script. In that case, you'll have to open an issue in that repo, as they are using an old version of transformers (`3.0.2`), which likely explains the issue;
2. You are using a modified version of the script above that uses `transformers==4.17.0`. In that case, I'm going to ask you for the part of your script that attempts to load the model and that raises the error you see, so I can reproduce the issue.<|||||>Sorry, I didn't explain clearly before. I am using this script under transformers version 4.17.0 without any modifications.
The part of my script that attempts to load the model is line 47: `bert = BERT(cfg)`.
'BERT' is imported from [class BERT](https://github.com/alirezazareian/ovr-cnn/blob/master/maskrcnn_benchmark/modeling/language_backbone/transformers.py) and in this transformers.py, the part that attempts to load the model is line 14:
`self.bert_model = BertModel.from_pretrained('bert-base-uncased', config=self.bert_config) `
<|||||>Awesome, all explained now ๐ The root cause stems from the `BERT` class within the repository you linked, and you should try to raise an issue with them.
The error _should_ go away if you remove the `config` argument in [this line in that project](https://github.com/alirezazareian/ovr-cnn/blob/master/maskrcnn_benchmark/modeling/language_backbone/transformers.py#L15), but I have no idea whether the project still makes sense if you make that change -- this is why I'm suggesting raising the issue with the author of the code :)
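For illustration, the edit I have in mind in that project's `language_backbone/transformers.py` would look roughly like this (just a sketch; I haven't tested it against that codebase):
```python
# before: an explicitly constructed config is passed in, which is what triggers the error here
# self.bert_model = BertModel.from_pretrained('bert-base-uncased', config=self.bert_config)

# after: let from_pretrained pull the config that matches the checkpoint
self.bert_model = BertModel.from_pretrained('bert-base-uncased')
```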
Since the bug is external to `transformers`, I'm closing this issue (we reserve issues for bugs in the code). Feel free to ask for further help in [our forums](https://discuss.huggingface.co/) or to reopen this issue if you find `transformers`-related bugs 🙌 <|||||>Thanks for your reply! I will try what you suggested.<|||||>In my case, tqdm was missing, and that caused the download to fail |
transformers | 16,617 | closed | Update no_trainer scripts with new Accelerate functionalities | # Update the `no_trainer` scripts to keep aligned with Accelerate capabilities
## What does this add?
Updates all `no_trainer` scripts to use the latest capabilities.
## Why is it needed?
Accelerate had a number of new capabilities added, including better saving/loading, experiment tracking, and support for LR schedulers. As a result, much of the current scripts' hard-coded behavior can be simplified, and these new features can be added.
Modified scripts with potential major changes:
- `language-modeling`
- `multiple-choice`
- `question-answering`
- `summarization`
- `text-classification`
- `token-classification`
- `translation`
The speech fine-tuning will be updated in a later PR
## Basic usage examples:
* Saving checkpoints each epoch or number of steps:
```bash
accelerate launch language-modeling/run_clm_no_trainer.py --checkpointing_steps "epoch"
```
```bash
accelerate launch language-modeling/run_clm_no_trainer.py --checkpointing_steps 100
```
* Resuming training from a saved checkpoint:
```bash
accelerate launch language-modeling/run_clm_no_trainer.py --resume_from_checkpoint "epoch_1"
```
* Use any available trackers that Accelerate can automatically pick up including Weights and Biases, TensorBoard, and CometML
```bash
accelerate launch language-modeling/run_clm_no_trainer.py --with_tracking
```
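Under the hood these flags map onto Accelerate's tracking and checkpointing APIs. Roughly, the scripts follow a pattern like the sketch below (simplified and self-contained with a toy model; `my_project`, the paths, and the checkpoint interval are placeholders rather than the exact code in the scripts):
```python
import torch
from torch.utils.data import DataLoader, TensorDataset
from accelerate import Accelerator

# log_with="all" reports to every tracker Accelerate finds installed
# (Weights and Biases, TensorBoard, CometML); init_trackers names the run.
accelerator = Accelerator(log_with="all")
accelerator.init_trackers("my_project")

model = torch.nn.Linear(4, 2)
optimizer = torch.optim.AdamW(model.parameters(), lr=1e-3)
dataloader = DataLoader(TensorDataset(torch.randn(32, 4), torch.randint(0, 2, (32,))), batch_size=8)
lr_scheduler = torch.optim.lr_scheduler.StepLR(optimizer, step_size=10)

# LR schedulers can now be passed through prepare() alongside the usual objects.
model, optimizer, dataloader, lr_scheduler = accelerator.prepare(model, optimizer, dataloader, lr_scheduler)

for step, (inputs, labels) in enumerate(dataloader):
    loss = torch.nn.functional.cross_entropy(model(inputs), labels)
    accelerator.backward(loss)
    optimizer.step()
    lr_scheduler.step()
    optimizer.zero_grad()
    accelerator.log({"train_loss": loss.item()}, step=step)
    if step % 2 == 0:  # stands in for --checkpointing_steps
        accelerator.save_state(f"checkpoints/step_{step}")  # resume later via accelerator.load_state(...)

accelerator.end_training()
```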
## Anticipated maintenance burden? (What will happen in, say, 3 months if something changes)
As these scripts get more widely used, they might need small updates if we find that end users prefer a different experience when it comes to logging, or as other small bugfixes come up over time. | 04-05-2022 20:58:54 | 04-05-2022 20:58:54 | _The documentation is not available anymore as the PR was closed or merged._ |
transformers | 16,616 | closed | `bigscience/T0` multi-gpu inference exits with return code -9 | ## Environment info
- `transformers` version: 4.17.0.dev0
- Platform: Linux-5.13.0-37-generic-x86_64-with-glibc2.10
- Python version: 3.8.0
- PyTorch version (GPU?): 1.10.1 (True)
- Tensorflow version (GPU?): 2.8.0 (False)
- Flax version (CPU?/GPU?/TPU?): not installed (NA)
- Jax version: not installed
- JaxLib version: not installed
- Using GPU in script?: Yes
- Using distributed or parallel set-up in script?: Yes (deepspeed)
### Who can help
Library:
- Deepspeed: @stas00
- Text generation: @patrickvonplaten @Narsil
## Information
Model I am using: T0
The problem arises when using:
* [ ] the official example scripts: (give details below)
* [x] my own modified scripts: (give details below)
The tasks I am working on is:
* [ ] an official GLUE/SQUaD task: (give the name)
* [x] my own task or dataset: (give details below)
## To reproduce
I want to load T0 across two 24GB GPUs with DeepSpeed in order to run inference. I followed the example code given [here](https://github.com/huggingface/transformers/issues/15399#issuecomment-1025240005) in issue #15399.
When running the code below, after the model says `finished initializing model with 11.14B parameters`, it quits without outputting a model response. It does not give an error or traceback, just a return code of -9:
```
[2022-04-05 16:18:09,845] [WARNING] [runner.py:155:fetch_hostfile] Unable to find hostfile, will proceed with training with local resources only.
[2022-04-05 16:18:09,912] [INFO] [runner.py:438:main] cmd = /home/aadelucia/miniconda3/envs/fda_cersi_tobacco/bin/python -u -m deepspeed.launcher.launch --world_info=eyJsb2NhbGhvc3QiOiBbMCwgMV19 --master_addr=127.0.0.1 --master_port=29500 multi_gpu_T0.py
[2022-04-05 16:18:10,635] [INFO] [launch.py:103:main] WORLD INFO DICT: {'localhost': [0, 1]}
[2022-04-05 16:18:10,635] [INFO] [launch.py:109:main] nnodes=1, num_local_procs=2, node_rank=0
[2022-04-05 16:18:10,635] [INFO] [launch.py:122:main] global_rank_mapping=defaultdict(<class 'list'>, {'localhost': [0, 1]})
[2022-04-05 16:18:10,635] [INFO] [launch.py:123:main] dist_world_size=2
[2022-04-05 16:18:10,635] [INFO] [launch.py:125:main] Setting CUDA_VISIBLE_DEVICES=0,1
[2022-04-05 16:18:11,702] [INFO] [distributed.py:46:init_distributed] Initializing torch distributed with backend: nccl
[2022-04-05 16:18:56,295] [INFO] [partition_parameters.py:456:__exit__] finished initializing model with 11.14B parameters
[2022-04-05 16:19:40,754] [INFO] [launch.py:178:sigkill_handler] Killing subprocess 406939
[2022-04-05 16:19:40,754] [INFO] [launch.py:178:sigkill_handler] Killing subprocess 406940
[2022-04-05 16:19:40,754] [ERROR] [launch.py:184:sigkill_handler] ['/home/aadelucia/miniconda3/envs/fda_cersi_tobacco/bin/python', '-u', 'multi_gpu_T0.py', '--local_rank=1'] exits with return code = -9
```
Here is the code. Run with `deepspeed --num_gpus 2 <script.py>`
```python
"""
Example code to load a PyTorch model across GPUs
Code from https://github.com/huggingface/transformers/issues/15399
"""
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM, AutoConfig
from transformers.deepspeed import HfDeepSpeedConfig
import deepspeed
import torch
import pdb
import os
from tqdm import tqdm
import re
seed = 42
torch.manual_seed(seed)
###
# Deepspeed setup
###
# To avoid warnings about parallelism in tokenizers
os.environ["TOKENIZERS_PARALLELISM"] = "false"
# distributed setup
local_rank = int(os.getenv('LOCAL_RANK', '0')) # TODO use this
world_size = int(os.getenv('WORLD_SIZE', '1'))
torch.cuda.set_device(local_rank)
deepspeed.init_distributed()
model_name = "bigscience/T0"
config = AutoConfig.from_pretrained(model_name)
model_hidden_size = config.d_model
ds_config = {
"fp16": {
"enabled": False,
},
"bf16": {
"enabled": True,
},
"zero_optimization": {
"stage": 3,
"offload_param": {
"device": "cpu",
"pin_memory": True
},
"overlap_comm": True,
"contiguous_gradients": True,
"reduce_bucket_size": model_hidden_size * model_hidden_size,
"stage3_prefetch_bucket_size": 0.9 * model_hidden_size * model_hidden_size,
"stage3_param_persistence_threshold": 10 * model_hidden_size
},
"steps_per_print": 2000,
# batch size has to be divisible by world_size, but can be bigger than world_size
"train_batch_size": 1 * world_size,
"train_micro_batch_size_per_gpu": 1,
"wall_clock_breakdown": False
}
# Initialize model
# must setup HfDeepSpeedConfig before instantiating the model
# ds_config is deepspeed config object or path to the file
dschf = HfDeepSpeedConfig(ds_config) # keep this object alive
tokenizer = AutoTokenizer.from_pretrained(model_name, model_max_length=1024) # should be 1024
model = AutoModelForSeq2SeqLM.from_pretrained(model_name)
# we are ready to initialise deepspeed ZeRO now
ds_engine = deepspeed.initialize(model=model,
config_params=ds_config,
model_parameters=None,
optimizer=None,
lr_scheduler=None)[0]
ds_engine.module.eval() # inference
rank = torch.distributed.get_rank()
text_in = "Is this review positive or negative? Review: this is the best cast iron skillet you will ever buy"
inputs = tokenizer.encode(text_in, return_tensors="pt").to(device=local_rank)
# Generation options
# https://huggingface.co/docs/transformers/v4.16.1/en/main_classes/model#transformers.generation_utils.GenerationMixin.generate
with torch.no_grad():
outputs = ds_engine.module.generate(inputs, max_length=256)
text_out = tokenizer.decode(outputs[0], skip_special_tokens=True)
print(f"rank{rank}:\n in={text_in}\n out={text_out}")
```
## Expected behavior
T0 should load across 2 GPUs, generate an answer, and then quit.
| 04-05-2022 20:39:09 | 04-05-2022 20:39:09 | Please try again with the exact code from https://huggingface.co/docs/transformers/main/main_classes/deepspeed#custom-deepspeed-zero-inference as I cleaned it up a bit more - I have just re-tested with it - it works just fine on 2x rtx 3090 gpus.
But it's using the smaller `bigscience/T0_3B`. But still let's validate that it works for you as a baseline.
`bigscience/T0` is ~4x bigger (11B) - I will try it next once 42GB get downloaded.
I'm using master/main version of transformers/deepspeed and pt-1.11.<|||||>OK, I managed to crash my system with the 11B version with 2 gpus.
Need to figure out cgroup v2 as I moved to Ubuntu 21.10 and my v1 setup no longer works.
Meanwhile I figured out how to run a shell that will not let any processes started from it use more memory than I told it to, and thus not kill the host:
```
systemd-run --user --scope -p MemoryHigh=100G -p MemoryMax=110G -p MemorySwapMax=60G bash
```
but since we have this huge 42GB checkpoint, I don't have enough RAM to load it twice in 2 processes. We have just added sharded checkpoints, so we need to switch T0 over to them.
And meanwhile I'm trying to figure out how to get this to run with nvme offload.
I will update more once I have something running.
<|||||>> Please try again with the exact code from https://huggingface.co/docs/transformers/main/main_classes/deepspeed#custom-deepspeed-zero-inference as I cleaned it up a bit more - I have just re-tested with it - it works just fine on 2x rtx 3090 gpus.
Thanks for your help!
I've tried to run the example at the link, and now I get another error, related to Ninja--full traceback below. This is an error I have seen before when trying to run the script I provided in my initial post. The errors seemed to alternate between the return code -9 and this Ninja error, without changing anything in the code.
If the example works for you, I can't figure out what's going wrong on my end. Ninja is installed in my environment, and `pip install ninja` says that the requirement is already satisfied.
I am going to set up a new environment and see if that has better results.
```
Traceback (most recent call last):
File "deepspeed_example.py", line 116, in <module>
ds_engine = deepspeed.initialize(model=model, config_params=ds_config)[0]
File "/home/username/miniconda3/envs/my_env/lib/python3.8/site-packages/deepspeed/__init__.py", line 119, in initialize
engine = DeepSpeedEngine(args=args,
File "/home/username/miniconda3/envs/my_env/lib/python3.8/site-packages/deepspeed/runtime/engine.py", line 296, in __init__
self.optimizer = self._configure_zero_optimizer(optimizer=None)
File "/home/username/miniconda3/envs/my_env/lib/python3.8/site-packages/deepspeed/runtime/engine.py", line 1394, in _configure_zero_optimizer
optimizer = DeepSpeedZeroOptimizer_Stage3(
File "/home/username/miniconda3/envs/my_env/lib/python3.8/site-packages/deepspeed/runtime/zero/stage3.py", line 608, in __init__
util_ops = UtilsBuilder().load()
File "/home/username/miniconda3/envs/my_env/lib/python3.8/site-packages/deepspeed/ops/op_builder/builder.py", line 403, in load
return self.jit_load(verbose)
File "/home/username/miniconda3/envs/my_env/lib/python3.8/site-packages/deepspeed/ops/op_builder/builder.py", line 435, in jit_load
op_module = load(
File "/home/username/miniconda3/envs/my_env/lib/python3.8/site-packages/torch/utils/cpp_extension.py", line 1124, in load
return _jit_compile(
File "/home/username/miniconda3/envs/my_env/lib/python3.8/site-packages/torch/utils/cpp_extension.py", line 1337, in _jit_compile
_write_ninja_file_and_build_library(
File "/home/username/miniconda3/envs/my_env/lib/python3.8/site-packages/torch/utils/cpp_extension.py", line 1418, in _write_ninja_file_and_build_library
verify_ninja_availability()
File "/home/username/miniconda3/envs/my_env/lib/python3.8/site-packages/torch/utils/cpp_extension.py", line 1474, in verify_ninja_availability
raise RuntimeError("Ninja is required to load C++ extensions")
RuntimeError: Ninja is required to load C++ extensions
Loading extension module utils...
Time to load utils op: 0.10305166244506836 seconds
[2022-04-05 20:50:51,338] [INFO] [launch.py:178:sigkill_handler] Killing subprocess 419852
[2022-04-05 20:50:51,338] [INFO] [launch.py:178:sigkill_handler] Killing subprocess 419853
[2022-04-05 20:50:51,339] [ERROR] [launch.py:184:sigkill_handler] ['/home/username/miniconda3/envs/my_env/bin/python', '-u', 'deepspeed_example.py', '--local_rank=1'] exits with return code = 1
```<|||||>What's the output of this? I included the output on my conda env:
```
$ which ninja
/home/stas/anaconda3/envs/py38-pt111/bin/ninja
```
Perhaps your `PATH` env var is broken. Check that it includes your conda's bin path, `/home/stas/anaconda3/envs/py38-pt111/bin` in my example.
try `echo $PATH`<|||||>one of the deepspeed devs was able to reproduce your original error - seems to be related to the `deepspeed` launcher.
So until they figure it out the quick fix is not to use it ;) Instead use the `torch` launcher
```
python -m torch.distributed.run --nproc_per_node=2 <script.py>
```
cc: @jeffra
<|||||>> What's the output of this? I included the output on my conda env:
>
> ```
> $ which ninja
> /home/stas/anaconda3/envs/py38-pt111/bin/ninja
> ```
I don't see any output when I give this command, and `echo $PATH` does list the environment I'm working in, but it doesn't list a ninja directory. I'll look into adding it to the `PATH`.
Thanks for the `python -m torch.distributed.run --nproc_per_node=2 <script.py>`. I tried it out and got the same Ninja issue. Once I fix the `PATH`, I'll try it again.
I will say that I was able to launch T0 and get it working several times last week and early this week, so I'm not sure why the Ninja error is suddenly appearing.
<|||||>>I don't see any output when I give this command, and echo $PATH does list the environment I'm working in, but it doesn't list a ninja directory
No output means it can't find it in `$PATH`.
There could be 2 issues:
1. ninja is installed but your `$PATH` is incorrect
2. ninja is not fully installed
Let's look at each case:
1. What is your conda environment's path, you can get all the envs with:
```
conda info --envs
```
e.g. in my case:
```
$ conda info --envs | grep py38-pt111
py38-pt111 * /home/stas/anaconda3/envs/py38-pt111
```
So the `bin` path that should be in `$PATH` is `/home/stas/anaconda3/envs/py38-pt111/bin`
Typically conda pushes that path into `$PATH` when you activate your environment.
2. If your `$PATH` is correct, then you can also try forcing the reinstall:
```
pip install ninja --force
```<|||||>I was able to finally get past the Ninja problem by force installing (`pip install ninja --force`) it in my original environment. Thanks for that.
I also made a new environment and installed all the necessary packages. Here's the information for the new environment:
```
- `transformers` version: 4.19.0.dev0
- Platform: Linux-5.13.0-37-generic-x86_64-with-glibc2.17
- Python version: 3.8.13
- Huggingface_hub version: 0.5.1
- PyTorch version (GPU?): 1.11.0 (True)
- Tensorflow version (GPU?): 2.8.0 (False)
- Flax version (CPU?/GPU?/TPU?): not installed (NA)
- Jax version: not installed
- JaxLib version: not installed
- Using GPU in script?: yes
- Using distributed or parallel set-up in script?: yes, DeepSpeed
```
For both my original and new environment, I can get T0_3B to work on the [Custom DeepSpeed ZeRO Inference example](https://huggingface.co/docs/transformers/main/main_classes/deepspeed#custom-deepspeed-zero-inference).
However, the Custom DeepSpeed ZeRO Inference with the T0 model still finishes with exit code -9 and now mentions `ChildFailedError`. I'm running it with `python -m torch.distributed.run --nproc_per_node=2 <script.py>`:
```
[2022-04-07 20:19:38,188] [INFO] [distributed.py:48:init_distributed] Initializing torch distributed with backend: nccl
WARNING:torch.distributed.elastic.multiprocessing.api:Sending process 480668 closing signal SIGTERM
ERROR:torch.distributed.elastic.multiprocessing.api:failed (exitcode: -9) local_rank: 1 (pid: 480669) of binary: /home/gwightman/miniconda3/envs/confidence_estimation/bin/python
Traceback (most recent call last):
File "/home/gwightman/miniconda3/envs/confidence_estimation/lib/python3.8/runpy.py", line 194, in _run_module_as_main
return _run_code(code, main_globals, None,
File "/home/gwightman/miniconda3/envs/confidence_estimation/lib/python3.8/runpy.py", line 87, in _run_code
exec(code, run_globals)
File "/home/gwightman/miniconda3/envs/confidence_estimation/lib/python3.8/site-packages/torch/distributed/run.py", line 728, in <module>
main()
File "/home/gwightman/miniconda3/envs/confidence_estimation/lib/python3.8/site-packages/torch/distributed/elastic/multiprocessing/errors/__init__.py", line 345, in wrapper
return f(*args, **kwargs)
File "/home/gwightman/miniconda3/envs/confidence_estimation/lib/python3.8/site-packages/torch/distributed/run.py", line 724, in main
run(args)
File "/home/gwightman/miniconda3/envs/confidence_estimation/lib/python3.8/site-packages/torch/distributed/run.py", line 715, in run
elastic_launch(
File "/home/gwightman/miniconda3/envs/confidence_estimation/lib/python3.8/site-packages/torch/distributed/launcher/api.py", line 131, in __call__
return launch_agent(self._config, self._entrypoint, list(args))
File "/home/gwightman/miniconda3/envs/confidence_estimation/lib/python3.8/site-packages/torch/distributed/launcher/api.py", line 245, in launch_agent
raise ChildFailedError(
torch.distributed.elastic.multiprocessing.errors.ChildFailedError:
=======================================================
deepspeed_example.py FAILED
-------------------------------------------------------
Failures:
<NO_OTHER_FAILURES>
-------------------------------------------------------
Root Cause (first observed failure):
[0]:
time : 2022-04-07_20:20:20
host : lambda1
rank : 1 (local_rank: 1)
exitcode : -9 (pid: 480669)
error_file: <N/A>
traceback : Signal 9 (SIGKILL) received by PID 480669
=======================================================
```
Something else to note is that I was able to successfully run T0 and get output last week, around March 31st. In that case, I had two processes running at the same time, sending the same example to both processes, and output would be generated. When I sent different examples to each process, it appeared that rank=0 would finish before rank=1, and the input at rank=1 would be hanging. <|||||>glad to hear you figured out `ninja`.
the traceback you pasted is from the launcher, not the actual program. There are 2 independent programs: the launcher starts your actual program, and the traceback just says that it detected your program has failed. Do you have the traceback from your program itself?
> In that case, I had two processes running at the same time, sending the same example to both processes, and output would be generated. When I sent different examples to each process, it appeared that rank=0 would finish before rank=1, and the input at rank=1 would be hanging.
I understand the symptom. It means that the gpus synchronisation code in `generate` didn't work and one gpu finished running while the other is waiting for its data shard from gpu that already finished. When you either send the same input or such input that leads to the output of the same length token-wise it'd work too.
So we need to figure out why the sync didn't kick in. The sync is enabled here:
https://github.com/huggingface/transformers/blob/33cb21150c034aae0f11b9ab6e38752a7c6d1784/src/transformers/trainer_seq2seq.py#L161
which tells me that `is_deepspeed_zero3_enabled()` returned false, which tells me that either:
1. you have used a config file which didn't have `stage: 3` set
2. or you haven't created or kept alive `dschf = HfDeepSpeedConfig(ds_config)` which tells `transformers` that deepspeed is used and its stage.
Could you insert:
```
from transformers.deepspeed import is_deepspeed_zero3_enabled
print(f"Deepspeed 3 is enabled: {is_deepspeed_zero3_enabled()}")
```
before: `ds_engine.module.generate`, so that your code looks like:
```
tokenizer = AutoTokenizer.from_pretrained(model_name)
inputs = tokenizer.encode(text_in, return_tensors="pt").to(device=local_rank)
from transformers.deepspeed import is_deepspeed_zero3_enabled
print(f"Deepspeed 3 is enabled: {is_deepspeed_zero3_enabled()}")
with torch.no_grad():
outputs = ds_engine.module.generate(inputs, synced_gpus=True)
text_out = tokenizer.decode(outputs[0], skip_special_tokens=True)
print(f"rank{rank}:\n in={text_in}\n out={text_out}")
```
and see if it reports: "Deepspeed 3 is enabled: True"
<|||||>Further, for now please switch to this branch of `deepspeed` https://github.com/microsoft/DeepSpeed/pull/1884 as it has essential inference offloading bugs that have been fixed in this branch. As the Deepspeed team is on a Spring break it'll take a few weeks before it's merged into master:
You can install it directly like so:
```
pip install git+https://github.com/microsoft/DeepSpeed@olruwase/zero_inference_type_mismatch
```
Please install this branch and then try again.
Note: this branch is a bit slow at the moment as prefetch is currently not working, but it'll get fixed once the Deepspeed team is back to work. So it'll be faster once it's enabled again.<|||||>Here is the nvme offload version that I tested with. Works great even with 1x or 2x tiny gpu - I didn't see more than 3GB used on each, but it's slow of course.
```
#!/usr/bin/env python
# This script demonstrates how to use Deepspeed ZeRO in an inference mode when one can't fit a model
# into a single GPU
#
# 1. Use 1 GPU with CPU offload
# 2. Or use multiple GPUs instead
#
# First you need to install deepspeed: pip install deepspeed
#
# Here we use a 3B "bigscience/T0_3B" model which needs about 15GB GPU RAM - so 1 largish or 2
# small GPUs can handle it. or 1 small GPU and a lot of CPU memory.
#
# To use a larger model like "bigscience/T0" which needs about 50GB, unless you have an 80GB GPU -
# you will need 2-4 gpus. And then you can adapt the script to handle more gpus if you want to
# process multiple inputs at once.
#
# The provided deepspeed config also activates CPU memory offloading, so chances are that if you
# have a lot of available CPU memory and you don't mind a slowdown you should be able to load a
# model that doesn't normally fit into a single GPU. If you have enough GPU memory the program will
# run faster if you don't want offload to CPU - so disable that section then.
#
# To deploy on 1 gpu:
#
# deepspeed --num_gpus 1 t0.py
# or:
# python -m torch.distributed.run --nproc_per_node=1 t0.py
#
# To deploy on 2 gpus:
#
# deepspeed --num_gpus 2 t0.py
# or:
# python -m torch.distributed.run --nproc_per_node=2 t0.py
from transformers import AutoTokenizer, AutoConfig, AutoModelForSeq2SeqLM
from transformers.deepspeed import HfDeepSpeedConfig
import deepspeed
import os
import torch
os.environ["TOKENIZERS_PARALLELISM"] = "false" # To avoid warnings about parallelism in tokenizers
# distributed setup
local_rank = int(os.getenv("LOCAL_RANK", "0"))
world_size = int(os.getenv("WORLD_SIZE", "1"))
torch.cuda.set_device(local_rank)
deepspeed.init_distributed()
model_name = "bigscience/T0"
#model_name = "bigscience/T0_3B"
config = AutoConfig.from_pretrained(model_name)
model_hidden_size = config.d_model
# batch size has to be divisible by world_size, but can be bigger than world_size
train_batch_size = 1 * world_size
# ds_config notes
#
# - enable bf16 if you use Ampere or higher GPU - this will run in mixed precision and will be
# faster.
#
# - for older GPUs you can enable fp16, but it'll only work for non-bf16 pretrained models - e.g.
# all official t5 models are bf16-pretrained
#
# - set offload_param.device to "none" or completely remove the `offload_param` section if you don't
# - want CPU offload
#
# - if using `offload_param` you can manually finetune stage3_param_persistence_threshold to control
# - which params should remain on gpus - the larger the value the smaller the offload size
#
# For indepth info on Deepspeed config see
# https://huggingface.co/docs/transformers/main/main_classes/deepspeed
# XXX: modified this script to use nvme offload so need to explain the new configs, but the key is
# to change the path to `nvme_path`
# keeping the same format as json for consistency, except it uses lower case for true/false
# fmt: off
ds_config = {
"fp16": {
"enabled": False
},
"bf16": {
"enabled": False
},
"zero_optimization": {
"stage": 3,
"offload_param": {
"device": "nvme",
"nvme_path": "/mnt/nvme0/offload",
"pin_memory": True,
"buffer_count": 6,
"buffer_size": 1e8,
"max_in_cpu": 1e9
},
"aio": {
"block_size": 262144,
"queue_depth": 32,
"thread_count": 1,
"single_submit": False,
"overlap_events": True
},
"overlap_comm": True,
"contiguous_gradients": True,
"reduce_bucket_size": model_hidden_size * model_hidden_size,
"stage3_prefetch_bucket_size": 0.1 * model_hidden_size * model_hidden_size,
"stage3_max_live_parameters": 1e8,
"stage3_max_reuse_distance": 1e8,
"stage3_param_persistence_threshold": 10 * model_hidden_size
},
"steps_per_print": 2000,
"train_batch_size": train_batch_size,
"train_micro_batch_size_per_gpu": 1,
"wall_clock_breakdown": False
}
# fmt: on
# next line instructs transformers to partition the model directly over multiple gpus using
# deepspeed.zero.Init when model's `from_pretrained` method is called.
#
# **it has to be run before loading the model AutoModelForSeq2SeqLM.from_pretrained(model_name)**
#
# otherwise the model will first be loaded normally and only partitioned at forward time which is
# less efficient and when there is little CPU RAM may fail
dschf = HfDeepSpeedConfig(ds_config) # keep this object alive
# now a model can be loaded.
model = AutoModelForSeq2SeqLM.from_pretrained(model_name)#, low_cpu_mem_usage=True)
# initialise Deepspeed ZeRO and store only the engine object
ds_engine = deepspeed.initialize(model=model, config_params=ds_config)[0]
ds_engine.module.eval() # inference
# Deepspeed ZeRO can process unrelated inputs on each GPU. So for 2 gpus you process 2 inputs at once.
# If you use more GPUs adjust for more.
# And of course if you have just one input to process you then need to pass the same string to both gpus
# If you use only one GPU, then you will have only rank 0.
rank = torch.distributed.get_rank()
if rank == 0:
text_in = "Is this review positive or negative? Review: this is the best cast iron skillet you will ever buy"
elif rank == 1:
text_in = "Is this review positive or negative? Review: this is the worst restaurant ever"
tokenizer = AutoTokenizer.from_pretrained(model_name)
inputs = tokenizer.encode(text_in, return_tensors="pt").to(device=local_rank)
#from transformers.deepspeed import is_deepspeed_zero3_enabled
#print(f"Deepspeed 3 is enabled: {is_deepspeed_zero3_enabled()}")
with torch.no_grad():
outputs = ds_engine.module.generate(inputs, synced_gpus=True)
text_out = tokenizer.decode(outputs[0], skip_special_tokens=True)
print(f"rank{rank}:\n in={text_in}\n out={text_out}")
```<|||||>> the traceback you pasted if from the launcher, not the actual program. there are 2 independent programs, the launcher starts your actual program and the traceback is that it detected that your program has failed, but do you have the traceback from your program?
That is the full output when I run the program to use the T0 model. There are a few additional lines above what I posted, but there is no additional traceback info. I'll post the full output here (this is before I executed `pip install git+https://github.com/microsoft/DeepSpeed@olruwase/zero_inference_type_mismatch`):
```
WARNING:__main__:
*****************************************
Setting OMP_NUM_THREADS environment variable for each process to be 1 in default, to avoid your system being overloaded, please further tune the variable for optimal performance in your application as needed.
*****************************************
[2022-04-12 09:16:43,200] [INFO] [distributed.py:48:init_distributed] Initializing torch distributed with backend: nccl
WARNING:torch.distributed.elastic.multiprocessing.api:Sending process 621696 closing signal SIGTERM
ERROR:torch.distributed.elastic.multiprocessing.api:failed (exitcode: -9) local_rank: 0 (pid: 621695) of binary: /home/gwightman/miniconda3/envs/confidence_estimation/bin/python
Traceback (most recent call last):
File "/home/gwightman/miniconda3/envs/confidence_estimation/lib/python3.8/runpy.py", line 194, in _run_module_as_main
return _run_code(code, main_globals, None,
File "/home/gwightman/miniconda3/envs/confidence_estimation/lib/python3.8/runpy.py", line 87, in _run_code
exec(code, run_globals)
File "/home/gwightman/miniconda3/envs/confidence_estimation/lib/python3.8/site-packages/torch/distributed/run.py", line 728, in <module>
main()
File "/home/gwightman/miniconda3/envs/confidence_estimation/lib/python3.8/site-packages/torch/distributed/elastic/multiprocessing/errors/__init__.py", line 345, in wrapper
return f(*args, **kwargs)
File "/home/gwightman/miniconda3/envs/confidence_estimation/lib/python3.8/site-packages/torch/distributed/run.py", line 724, in main
run(args)
File "/home/gwightman/miniconda3/envs/confidence_estimation/lib/python3.8/site-packages/torch/distributed/run.py", line 715, in run
elastic_launch(
File "/home/gwightman/miniconda3/envs/confidence_estimation/lib/python3.8/site-packages/torch/distributed/launcher/api.py", line 131, in __call__
return launch_agent(self._config, self._entrypoint, list(args))
File "/home/gwightman/miniconda3/envs/confidence_estimation/lib/python3.8/site-packages/torch/distributed/launcher/api.py", line 245, in launch_agent
raise ChildFailedError(
torch.distributed.elastic.multiprocessing.errors.ChildFailedError:
=======================================================
deepspeed_example.py FAILED
-------------------------------------------------------
Failures:
<NO_OTHER_FAILURES>
-------------------------------------------------------
Root Cause (first observed failure):
[0]:
time : 2022-04-12_09:17:25
host : lambda1
rank : 0 (local_rank: 0)
exitcode : -9 (pid: 621695)
error_file: <N/A>
traceback : Signal 9 (SIGKILL) received by PID 621695
=======================================================
```
> Further, for now please switch to this branch of `deepspeed` [microsoft/DeepSpeed#1884](https://github.com/microsoft/DeepSpeed/pull/1884) as it has essential inference offloading bugs that have been fixed in this branch.
I've switched over to this branch.
> Could you insert:
>
> ```
> from transformers.deepspeed import is_deepspeed_zero3_enabled
> print(f"Deepspeed 3 is enabled: {is_deepspeed_zero3_enabled()}")
> ```
>
> before: `ds_engine.module.generate`
> ...
> and see if it reports: "Deepspeed 3 is enabled: True"
When running the [zero inference example](https://huggingface.co/docs/transformers/main/main_classes/deepspeed#custom-deepspeed-zero-inference) with T0_3B, the program outputs "Deepspeed 3 is enabled: True" (twice) and successfully returns predictions for the two examples.
When I try to use the same zero inference example with T0, I get the same error as above (still without any extra traceback info). It does not output "Deepspeed 3 is enabled: True", so it must be exiting the program before it reaches that line.
```
WARNING:__main__:
*****************************************
Setting OMP_NUM_THREADS environment variable for each process to be 1 in default, to avoid your system being overloaded, please further tune the variable for optimal performance in your application as needed.
*****************************************
[2022-04-12 09:27:00,049] [INFO] [distributed.py:48:init_distributed] Initializing torch distributed with backend: nccl
WARNING:torch.distributed.elastic.multiprocessing.api:Sending process 622851 closing signal SIGTERM
ERROR:torch.distributed.elastic.multiprocessing.api:failed (exitcode: -9) local_rank: 0 (pid: 622850) of binary: /home/gwightman/miniconda3/envs/confidence_estimation/bin/python
Traceback (most recent call last):
File "/home/gwightman/miniconda3/envs/confidence_estimation/lib/python3.8/runpy.py", line 194, in _run_module_as_main
return _run_code(code, main_globals, None,
File "/home/gwightman/miniconda3/envs/confidence_estimation/lib/python3.8/runpy.py", line 87, in _run_code
exec(code, run_globals)
File "/home/gwightman/miniconda3/envs/confidence_estimation/lib/python3.8/site-packages/torch/distributed/run.py", line 728, in <module>
main()
File "/home/gwightman/miniconda3/envs/confidence_estimation/lib/python3.8/site-packages/torch/distributed/elastic/multiprocessing/errors/__init__.py", line 345, in wrapper
return f(*args, **kwargs)
File "/home/gwightman/miniconda3/envs/confidence_estimation/lib/python3.8/site-packages/torch/distributed/run.py", line 724, in main
run(args)
File "/home/gwightman/miniconda3/envs/confidence_estimation/lib/python3.8/site-packages/torch/distributed/run.py", line 715, in run
elastic_launch(
File "/home/gwightman/miniconda3/envs/confidence_estimation/lib/python3.8/site-packages/torch/distributed/launcher/api.py", line 131, in __call__
return launch_agent(self._config, self._entrypoint, list(args))
File "/home/gwightman/miniconda3/envs/confidence_estimation/lib/python3.8/site-packages/torch/distributed/launcher/api.py", line 245, in launch_agent
raise ChildFailedError(
torch.distributed.elastic.multiprocessing.errors.ChildFailedError:
=======================================================
deepspeed_example.py FAILED
-------------------------------------------------------
Failures:
<NO_OTHER_FAILURES>
-------------------------------------------------------
Root Cause (first observed failure):
[0]:
time : 2022-04-12_09:27:43
host : lambda1
rank : 0 (local_rank: 0)
exitcode : -9 (pid: 622850)
error_file: <N/A>
traceback : Signal 9 (SIGKILL) received by PID 622850
=======================================================
```<|||||>> Here is the nvme offload version that I tested with. Works great even with 1x or 2x tiny gpu - I didn't see more than 3GB used on each, but it's slow of course.
I tried to run this example and got another error when running it as `python -m torch.distributed.run --nproc_per_node=2 <script.py>`:
```
WARNING:__main__:
*****************************************
Setting OMP_NUM_THREADS environment variable for each process to be 1 in default, to avoid your system being overloaded, please further tune the variable for optimal performance in your application as needed.
*****************************************
[2022-04-12 09:35:17,339] [INFO] [distributed.py:48:init_distributed] Initializing torch distributed with backend: nccl
Traceback (most recent call last):
File "nvme_offload_example.py", line 130, in <module>
model = AutoModelForSeq2SeqLM.from_pretrained(model_name)#, low_cpu_mem_usage=True)
File "/home/gwightman/miniconda3/envs/confidence_estimation/lib/python3.8/site-packages/transformers/models/auto/auto_factory.py", line 446, in from_pretrained
return model_class.from_pretrained(pretrained_model_name_or_path, *model_args, config=config, **kwargs)
File "/home/gwightman/miniconda3/envs/confidence_estimation/lib/python3.8/site-packages/transformers/modeling_utils.py", line 1838, in from_pretrained
with deepspeed.zero.Init(config_dict_or_path=deepspeed_config()):
File "/home/gwightman/miniconda3/envs/confidence_estimation/lib/python3.8/site-packages/deepspeed/runtime/zero/partition_parameters.py", line 702, in __init__
self.param_swapper = AsyncPartitionedParameterSwapper(_ds_config, self.dtype)
File "/home/gwightman/miniconda3/envs/confidence_estimation/lib/python3.8/site-packages/deepspeed/runtime/swap_tensor/partitioned_param_swapper.py", line 40, in __init__
aio_op = AsyncIOBuilder().load(verbose=False)
File "/home/gwightman/miniconda3/envs/confidence_estimation/lib/python3.8/site-packages/deepspeed/ops/op_builder/builder.py", line 463, in load
return self.jit_load(verbose)
File "/home/gwightman/miniconda3/envs/confidence_estimation/lib/python3.8/site-packages/deepspeed/ops/op_builder/builder.py", line 467, in jit_load
raise RuntimeError(
RuntimeError: Unable to JIT load the async_io op due to it not being compatible due to hardware/software issue.
Traceback (most recent call last):
File "nvme_offload_example.py", line 130, in <module>
model = AutoModelForSeq2SeqLM.from_pretrained(model_name)#, low_cpu_mem_usage=True)
File "/home/gwightman/miniconda3/envs/confidence_estimation/lib/python3.8/site-packages/transformers/models/auto/auto_factory.py", line 446, in from_pretrained
return model_class.from_pretrained(pretrained_model_name_or_path, *model_args, config=config, **kwargs)
File "/home/gwightman/miniconda3/envs/confidence_estimation/lib/python3.8/site-packages/transformers/modeling_utils.py", line 1838, in from_pretrained
with deepspeed.zero.Init(config_dict_or_path=deepspeed_config()):
File "/home/gwightman/miniconda3/envs/confidence_estimation/lib/python3.8/site-packages/deepspeed/runtime/zero/partition_parameters.py", line 702, in __init__
self.param_swapper = AsyncPartitionedParameterSwapper(_ds_config, self.dtype)
File "/home/gwightman/miniconda3/envs/confidence_estimation/lib/python3.8/site-packages/deepspeed/runtime/swap_tensor/partitioned_param_swapper.py", line 40, in __init__
aio_op = AsyncIOBuilder().load(verbose=False)
File "/home/gwightman/miniconda3/envs/confidence_estimation/lib/python3.8/site-packages/deepspeed/ops/op_builder/builder.py", line 463, in load
return self.jit_load(verbose)
File "/home/gwightman/miniconda3/envs/confidence_estimation/lib/python3.8/site-packages/deepspeed/ops/op_builder/builder.py", line 467, in jit_load
raise RuntimeError(
RuntimeError: Unable to JIT load the async_io op due to it not being compatible due to hardware/software issue.
ERROR:torch.distributed.elastic.multiprocessing.api:failed (exitcode: 1) local_rank: 0 (pid: 623509) of binary: /home/gwightman/miniconda3/envs/confidence_estimation/bin/python
Traceback (most recent call last):
File "/home/gwightman/miniconda3/envs/confidence_estimation/lib/python3.8/runpy.py", line 194, in _run_module_as_main
return _run_code(code, main_globals, None,
File "/home/gwightman/miniconda3/envs/confidence_estimation/lib/python3.8/runpy.py", line 87, in _run_code
exec(code, run_globals)
File "/home/gwightman/miniconda3/envs/confidence_estimation/lib/python3.8/site-packages/torch/distributed/run.py", line 728, in <module>
main()
File "/home/gwightman/miniconda3/envs/confidence_estimation/lib/python3.8/site-packages/torch/distributed/elastic/multiprocessing/errors/__init__.py", line 345, in wrapper
return f(*args, **kwargs)
File "/home/gwightman/miniconda3/envs/confidence_estimation/lib/python3.8/site-packages/torch/distributed/run.py", line 724, in main
run(args)
File "/home/gwightman/miniconda3/envs/confidence_estimation/lib/python3.8/site-packages/torch/distributed/run.py", line 715, in run
elastic_launch(
File "/home/gwightman/miniconda3/envs/confidence_estimation/lib/python3.8/site-packages/torch/distributed/launcher/api.py", line 131, in __call__
return launch_agent(self._config, self._entrypoint, list(args))
File "/home/gwightman/miniconda3/envs/confidence_estimation/lib/python3.8/site-packages/torch/distributed/launcher/api.py", line 245, in launch_agent
raise ChildFailedError(
torch.distributed.elastic.multiprocessing.errors.ChildFailedError:
============================================================
nvme_offload_example.py FAILED
------------------------------------------------------------
Failures:
[1]:
time : 2022-04-12_09:35:49
host : lambda1
rank : 1 (local_rank: 1)
exitcode : 1 (pid: 623510)
error_file: <N/A>
traceback : To enable traceback see: https://pytorch.org/docs/stable/elastic/errors.html
------------------------------------------------------------
Root Cause (first observed failure):
[0]:
time : 2022-04-12_09:35:49
host : lambda1
rank : 0 (local_rank: 0)
exitcode : 1 (pid: 623509)
error_file: <N/A>
traceback : To enable traceback see: https://pytorch.org/docs/stable/elastic/errors.html
============================================================
```
Now that I've switched to a new branch of `deepspeed`, can I once again use the `deepspeed` command to run the program?
If so, here's the output I get when running that example with `deepspeed --num_gpus 2 <script.py>`:
```
[2022-04-12 09:43:19,472] [WARNING] [runner.py:155:fetch_hostfile] Unable to find hostfile, will proceed with training with local resources only.
[2022-04-12 09:43:19,524] [INFO] [runner.py:453:main] cmd = /home/gwightman/miniconda3/envs/confidence_estimation/bin/python -u -m deepspeed.launcher.launch --world_info=eyJsb2NhbGhvc3QiOiBbMCwgMV19 --master_addr=127.0.0.1 --master_port=29500 nvme_offload_example.py
[2022-04-12 09:43:19,983] [INFO] [launch.py:103:main] WORLD INFO DICT: {'localhost': [0, 1]}
[2022-04-12 09:43:19,983] [INFO] [launch.py:109:main] nnodes=1, num_local_procs=2, node_rank=0
[2022-04-12 09:43:19,983] [INFO] [launch.py:122:main] global_rank_mapping=defaultdict(<class 'list'>, {'localhost': [0, 1]})
[2022-04-12 09:43:19,983] [INFO] [launch.py:123:main] dist_world_size=2
[2022-04-12 09:43:19,983] [INFO] [launch.py:125:main] Setting CUDA_VISIBLE_DEVICES=0,1
[2022-04-12 09:43:22,140] [INFO] [distributed.py:48:init_distributed] Initializing torch distributed with backend: nccl
Traceback (most recent call last):
File "nvme_offload_example.py", line 130, in <module>
model = AutoModelForSeq2SeqLM.from_pretrained(model_name)#, low_cpu_mem_usage=True)
File "/home/gwightman/miniconda3/envs/confidence_estimation/lib/python3.8/site-packages/transformers/models/auto/auto_factory.py", line 446, in from_pretrained
return model_class.from_pretrained(pretrained_model_name_or_path, *model_args, config=config, **kwargs)
File "/home/gwightman/miniconda3/envs/confidence_estimation/lib/python3.8/site-packages/transformers/modeling_utils.py", line 1838, in from_pretrained
with deepspeed.zero.Init(config_dict_or_path=deepspeed_config()):
File "/home/gwightman/miniconda3/envs/confidence_estimation/lib/python3.8/site-packages/deepspeed/runtime/zero/partition_parameters.py", line 702, in __init__
self.param_swapper = AsyncPartitionedParameterSwapper(_ds_config, self.dtype)
File "/home/gwightman/miniconda3/envs/confidence_estimation/lib/python3.8/site-packages/deepspeed/runtime/swap_tensor/partitioned_param_swapper.py", line 40, in __init__
aio_op = AsyncIOBuilder().load(verbose=False)
File "/home/gwightman/miniconda3/envs/confidence_estimation/lib/python3.8/site-packages/deepspeed/ops/op_builder/builder.py", line 463, in load
Traceback (most recent call last):
File "nvme_offload_example.py", line 130, in <module>
model = AutoModelForSeq2SeqLM.from_pretrained(model_name)#, low_cpu_mem_usage=True)
File "/home/gwightman/miniconda3/envs/confidence_estimation/lib/python3.8/site-packages/transformers/models/auto/auto_factory.py", line 446, in from_pretrained
return model_class.from_pretrained(pretrained_model_name_or_path, *model_args, config=config, **kwargs)
File "/home/gwightman/miniconda3/envs/confidence_estimation/lib/python3.8/site-packages/transformers/modeling_utils.py", line 1838, in from_pretrained
return self.jit_load(verbose)
File "/home/gwightman/miniconda3/envs/confidence_estimation/lib/python3.8/site-packages/deepspeed/ops/op_builder/builder.py", line 467, in jit_load
raise RuntimeError(
RuntimeError: Unable to JIT load the async_io op due to it not being compatible due to hardware/software issue.
with deepspeed.zero.Init(config_dict_or_path=deepspeed_config()):
File "/home/gwightman/miniconda3/envs/confidence_estimation/lib/python3.8/site-packages/deepspeed/runtime/zero/partition_parameters.py", line 702, in __init__
self.param_swapper = AsyncPartitionedParameterSwapper(_ds_config, self.dtype)
File "/home/gwightman/miniconda3/envs/confidence_estimation/lib/python3.8/site-packages/deepspeed/runtime/swap_tensor/partitioned_param_swapper.py", line 40, in __init__
aio_op = AsyncIOBuilder().load(verbose=False)
File "/home/gwightman/miniconda3/envs/confidence_estimation/lib/python3.8/site-packages/deepspeed/ops/op_builder/builder.py", line 463, in load
return self.jit_load(verbose)
File "/home/gwightman/miniconda3/envs/confidence_estimation/lib/python3.8/site-packages/deepspeed/ops/op_builder/builder.py", line 467, in jit_load
raise RuntimeError(
RuntimeError: Unable to JIT load the async_io op due to it not being compatible due to hardware/software issue.
[2022-04-12 09:44:02,034] [INFO] [launch.py:178:sigkill_handler] Killing subprocess 624172
[2022-04-12 09:44:02,034] [INFO] [launch.py:178:sigkill_handler] Killing subprocess 624173
[2022-04-12 09:44:02,034] [ERROR] [launch.py:184:sigkill_handler] ['/home/gwightman/miniconda3/envs/confidence_estimation/bin/python', '-u', 'nvme_offload_example.py', '--local_rank=1'] exits with return code = 1
```
I saw your post [DeepSpeed #1037](https://github.com/microsoft/DeepSpeed/issues/1037) saying that I might need to do `apt install libaio-dev`, but I see this:
```
E: Could not open lock file /var/lib/dpkg/lock-frontend - open (13: Permission denied)
E: Unable to acquire the dpkg frontend lock (/var/lib/dpkg/lock-frontend), are you root?
```
I'm going to check if this is just a permissions issue--hopefully that will fix it.<|||||>> I saw your post https://github.com/microsoft/DeepSpeed/issues/1037 saying that I might need to do apt install libaio-dev, but I see this:
Yes, you need to `sudo apt install libaio-dev`
If for any reason you have an issue with installing libaio system-wide here is how to install it via conda if you use the latter: https://github.com/microsoft/DeepSpeed/issues/1890
So let's try the nvme solution once you installed `libaio-dev`
------------
wrt to failing to start with T0, I wonder if your kernel kills the program because it tries to use 4x cpu memory (over 3B that works) and on 2 gpus that's a huge amount of additional memory (64GB more). Perhaps something gets logged in `/var/log/syslog`?
How much cpu memory do you have on this host?
Perhaps, try the low_cpu_mem approach:
```
model = AutoModelForSeq2SeqLM.from_pretrained(model_name, low_cpu_mem_usage=True)
```
but the `main` branch currently doesn't work for all models (it fails to use less memory silently), I have this PR that fixes the problem for all models:
https://github.com/huggingface/transformers/pull/16657
Deepspeed should really have a parameter that defines how much CPU memory can be used.
<|||||>> So let's try the nvme solution once you installed `libaio-dev`
I was able to install `libaio-dev` and tried to run the nvme offload solution again.
I'm getting a permission error related to nvme: `PermissionError: [Errno 13] Permission denied: '/mnt/nvme0'`
```
PermissionError: [Errno 13] Permission denied: '/mnt/nvme0'
Traceback (most recent call last):
File "nvme_offload_example.py", line 130, in <module>
model = AutoModelForSeq2SeqLM.from_pretrained(model_name)#, low_cpu_mem_usage=True)
File "/home/gwightman/miniconda3/envs/confidence_estimation/lib/python3.8/site-packages/transformers/models/auto/auto_factory.py", line 446, in from_pretrained
return model_class.from_pretrained(pretrained_model_name_or_path, *model_args, config=config, **kwargs)
File "/home/gwightman/miniconda3/envs/confidence_estimation/lib/python3.8/site-packages/transformers/modeling_utils.py", line 1838, in from_pretrained
with deepspeed.zero.Init(config_dict_or_path=deepspeed_config()):
File "/home/gwightman/DeepSpeed/deepspeed/runtime/zero/partition_parameters.py", line 702, in __init__
self.param_swapper = AsyncPartitionedParameterSwapper(_ds_config)
File "/home/gwightman/DeepSpeed/deepspeed/runtime/swap_tensor/partitioned_param_swapper.py", line 43, in __init__
self._configure_aio(ds_config)
File "/home/gwightman/DeepSpeed/deepspeed/runtime/swap_tensor/partitioned_param_swapper.py", line 90, in _configure_aio
os.makedirs(self.swap_folder, exist_ok=True)
File "/home/gwightman/miniconda3/envs/confidence_estimation/lib/python3.8/os.py", line 213, in makedirs
makedirs(head, exist_ok=exist_ok)
File "/home/gwightman/miniconda3/envs/confidence_estimation/lib/python3.8/os.py", line 213, in makedirs
makedirs(head, exist_ok=exist_ok)
File "/home/gwightman/miniconda3/envs/confidence_estimation/lib/python3.8/os.py", line 213, in makedirs
makedirs(head, exist_ok=exist_ok)
[Previous line repeated 1 more time]
File "/home/gwightman/miniconda3/envs/confidence_estimation/lib/python3.8/os.py", line 223, in makedirs
mkdir(name, mode)
PermissionError: [Errno 13] Permission denied: '/mnt/nvme0'
[2022-04-18 17:18:00,169] [INFO] [launch.py:178:sigkill_handler] Killing subprocess 776505
[2022-04-18 17:18:00,169] [INFO] [launch.py:178:sigkill_handler] Killing subprocess 776506
[2022-04-18 17:18:00,169] [ERROR] [launch.py:184:sigkill_handler] ['/home/gwightman/miniconda3/envs/confidence_estimation/bin/python', '-u', 'nvme_offload_example.py', '--local_rank=1'] exits with return code = 1
```
> How much cpu memory do you have on this host?
```
128663 M total memory
19816 M used memory
41032 M active memory
31464 M inactive memory
54663 M free memory
751 M buffer memory
53431 M swap cache
2047 M total swap
2033 M used swap
14 M free swap
```
Here's the output of `/var/log/syslog`:
```
Apr 18 17:25:16 lambda1 kernel: [1661615.493653] oom-kill:constraint=CONSTRAINT_NONE,nodemask=(null),cpuset=/,mems_allowed=0,global_oom,task_memcg=/user.slice/user-1010.slice/session-744.scope,task=python,pid=777012,uid=1010
Apr 18 17:25:16 lambda1 kernel: [1661615.493753] Out of memory: Killed process 777012 (python) total-vm:73502680kB, anon-rss:46787484kB, file-rss:68712kB, shmem-rss:8063664kB, UID:1010 pgtables:112820kB oom_score_adj:0
Apr 18 17:25:16 lambda1 kernel: [1661615.496313] Cannot map memory with base addr 0x7f9adc000000 and size of 0x8000 pages
Apr 18 17:25:17 lambda1 kernel: [1661616.474204] oom_reaper: reaped process 777012 (python), now anon-rss:0kB, file-rss:68876kB, shmem-rss:8063796kB
```
> Perhaps, try the low_cpu_mem approach:
>
> ```
> model = AutoModelForSeq2SeqLM.from_pretrained(model_name, low_cpu_mem_usage=True)
> ```
>
> but the `main` branch currently doesn't work for all models (it fails to use less memory silently), I have this PR that fixes the problem for all models: #16657
>
> Deepspeed should really have a parameter that defines how much CPU memory can be used.
I tried the low memory approach, and I got a message saying that `low_cpu_mem_usage` is not available with DeepSpeed 3, so I changed it to DeepSpeed 2. I got this error:
```
File "/home/gwightman/miniconda3/envs/confidence_estimation/lib/python3.8/site-packages/torch/nn/modules/module.py", line 905, in convert
return t.to(device, dtype if t.is_floating_point() or t.is_complex() else None, non_blocking)
RuntimeError : return t.to(device, dtype if t.is_floating_point() or t.is_complex() else None, non_blocking)CUDA out of memory. Tried to allocate 64.00 MiB (GPU 1; 23.70 GiB total capacity; 21.99 GiB already allocated; 52.81 MiB free; 21.99 GiB reserved in total by PyTorch) If reserved memory is >> allocated memory try setting max_split_size_mb to avoid fragmentation. See documentation for Memory Management and PYTORCH_CUDA_ALLOC_CONF
RuntimeError
: CUDA out of memory. Tried to allocate 64.00 MiB (GPU 0; 23.69 GiB total capacity; 21.93 GiB already allocated; 48.44 MiB free; 21.93 GiB reserved in total by PyTorch) If reserved memory is >> allocated memory try setting max_split_size_mb to avoid fragmentation. See documentation for Memory Management and PYTORCH_CUDA_ALLOC_CONF
[2022-04-18 17:36:36,198] [INFO] [launch.py:178:sigkill_handler] Killing subprocess 777975
[2022-04-18 17:36:36,198] [INFO] [launch.py:178:sigkill_handler] Killing subprocess 777976
[2022-04-18 17:36:36,198] [ERROR] [launch.py:184:sigkill_handler] ['/home/gwightman/miniconda3/envs/confidence_estimation/bin/python', '-u', 'deepspeed_example.py', '--local_rank=1'] exits with return code = 1
```
<|||||>> > So let's try the nvme solution once you installed `libaio-dev`
>
> I was able to install `libaio-dev` and tried to run the nvme offload solution again.
>
> I'm getting a permission error related to nvme: `PermissionError: [Errno 13] Permission denied: '/mnt/nvme0'`
Apologies if it wasn't obvious you were meant to edit the path to some path on your filesystem. It just happened to be `/mnt/nvme0` on my setup.
> > How much cpu memory do you have on this host?
>
> ```
> 128663 M total memory
So ~128GB of CPU RAM.
When dealing with huge models it always helps to have some swap memory, which extends your effective CPU memory.
> Here's the output of /var/log/syslog:
>
> Apr 18 17:25:16 lambda1 kernel: [1661615.493653] oom-kill:constraint=CONSTRAINT_NONE,nodemask=(null),cpuset=/,mems_allowed=0,global_oom,task_memcg=/user.slice/user-1010.slice/session-744.scope,task=python,pid=777012,uid=1010
> Apr 18 17:25:16 lambda1 kernel: [1661615.493753] Out of memory: Killed process 777012 (python) total-vm:73502680kB, anon-rss:46787484kB, file-rss:68712kB, shmem-rss:8063664kB, UID:1010 pgtables:112820kB oom_score_adj:0
So yes, as expected your system kills the process, as it consumes too much CPU memory.
> I tried the low memory approach, and I got a message saying that `low_cpu_mem_usage` is not available with DeepSpeed 3, so I changed it to DeepSpeed 2. I got this error:
Ah, yes, sorry, that is still a work in progress. I will need to work on having `low_cpu_mem_usage` support Deepspeed stage-3 or we are also discussing other ways of loading directly on gpu and not require 2x model size on cpu memory, which gets further multiplied by the number of gpus. So here it tries to allocate `40 * 2 * 2` 160GB of CPU memory and of course it fails.
Hmm, staggered loading should overcome this issue as well, basically having the 2nd instance of the script insert a delay before `from_pretrained` so that both don't try to load at the same time. It may or may not run into barriers. Have to have a look. But something like:
```
import time
[...]
if local_rank == 1:  # stagger the loading
    time.sleep(120)  # should be long enough for rank 0 to finish `from_pretrained`
model = AutoModelForSeq2SeqLM.from_pretrained(model_name)
```
Actually the staggering most likely won't work, since deepspeed's `zero.Init` will want all gpus to run in sync to shard the weights across GPUs. So don't waste time on trying this one.
<|||||>Here is a recipe to add swap memory, of course, edit the path and the desired amount of GBs
```
### Add a new swap file or extend one ###
# turn off all swap processes
sudo swapoff -a
# add 128GB file (or resize it if it already exists)
sudo dd if=/dev/zero of=/mnt/nvme0/swapfile bs=1G count=128
# prep as swap
sudo chmod 600 /mnt/nvme0/swapfile
sudo chown root.root /mnt/nvme0/swapfile
sudo mkswap /mnt/nvme0/swapfile
# activate the swap file
sudo swapon /mnt/nvme0/swapfile
# check the amount of swap available
grep SwapTotal /proc/meminfo
# to make permanent add to /etc/fstab if it isn't already there
/mnt/nvme0/swapfile none swap sw 0 0
```<|||||>1. OK, so first we want to shard the T0 checkpoint which then allows us to have smaller chunks to keep in memory while loading the model.
```
# shard it to 10GB / shard
python -c "from transformers import AutoModelForSeq2SeqLM; model = AutoModelForSeq2SeqLM.from_pretrained('bigscience/T0'); model.save_pretrained('t0-sharded')"
```
now use "t0-sharded" as a model name
(at some point we will have a sharded version on the hub)
you can shard it into even smaller chunks, say of 5GB:
```
python -c 'from transformers import AutoModelForSeq2SeqLM; \
model = AutoModelForSeq2SeqLM.from_pretrained("bigscience/T0"); \
model.save_pretrained("t0-sharded", max_shard_size="5GB")'
```
I'd say do the latter for this experiment.
and of course `'t0-sharded'` is where it gets saved - so you can play with the path.
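Loading then stays the same - you just point `from_pretrained` at that folder (a minimal sketch; `t0-sharded` is the local path used above):
```python
from transformers import AutoModelForSeq2SeqLM

# with a sharded checkpoint only one 5-10GB shard at a time sits in CPU RAM on top of the
# instantiated model, instead of the whole 42GB state_dict
model = AutoModelForSeq2SeqLM.from_pretrained("t0-sharded")
```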
2. then we want this PR https://github.com/huggingface/transformers/pull/16844, which I hope will get merged shortly and which uses even less cpu memory.
With these 2 fixes we will still need `42+10`GB (or `42+5` if you sharded to 5GB) per process - which may just be enough for what you need - i.e. at `47*2` = 94GB => should fit into 128GB.
Please let me know if this unblocks you.
-------------------
another way:
with Deepspeed nvme + cpu offload 1 gpu should be enough! as you only need to be able to load a single largest layer and if you don't care for the parallel input processing you're not gaining anything from 2 gpus anyway when using nvme offload (I think, I haven't measured, so I can be wrong).
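For reference, a minimal sketch of the parameter-offload section such a config would contain (the nvme path is a placeholder; a fully parametrized version appears in the working example later in this thread):
```python
ds_offload_section = {
    "zero_optimization": {
        "stage": 3,
        "offload_param": {
            "device": "nvme",                      # or "cpu"
            "nvme_path": "/path/to/nvme_offload",  # placeholder - any writable dir on a fast NVMe drive
            "pin_memory": True,
        },
    },
}
```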
------------------
and I still want to try to work out `low_cpu_mem_usage=True` with deepspeed zero-3
<|||||>@stas00, thank you so much for your help! I'm answering for @gportill since we were working on this issue together.
## Summary of what worked:
1. Install transformers and DeepSpeed from GitHub
```bash
pip install git+https://github.com/huggingface/transformers.git#egg=transformers
pip install git+https://github.com/microsoft/DeepSpeed.git#egg=deepspeed
```
3. If using NVME offload, set up Linux-native asynchronous I/O facility:
```bash
sudo apt install libaio-dev
```
6. If using CPU offload, increase swap memory with Stas' directions: https://github.com/huggingface/transformers/issues/16616#issuecomment-1102834737
8. Load sharded model (only some models are available sharded, T0 and T0pp included)
```python
model = AutoModelForSeq2SeqLM.from_pretrained(model_name, revision="sharded")
```
<hr>
## Full working example:
This example was modified from https://github.com/huggingface/transformers/issues/15399#issue-1117950014 and assumes all of the "summary of what worked" steps were taken.
```python
#!/usr/bin/env python
# This script demonstrates how to use Deepspeed ZeRO in an inference mode when one can't fit a model
# into a single GPU
#
# 1. Use 1 GPU with CPU offload
# 2. Or use multiple GPUs instead
#
# First you need to install deepspeed: pip install deepspeed
#
# By default this script loads the full "bigscience/T0" model, whose fp32 weights need about 42GB -
# unless you have an 80GB GPU you will need 2-4 gpus and/or CPU/NVMe offload. You can adapt the
# script to handle more gpus if you want to process multiple inputs at once.
#
# To experiment with the smaller "bigscience/T0_3B" model instead, pass --model-name bigscience/T0_3B;
# it needs about 15GB GPU RAM, so 1 largish or 2 small GPUs can handle it, or 1 small GPU and a lot of CPU memory.
#
# The provided deepspeed config also activates CPU memory offloading, so chances are that if you
# have a lot of available CPU memory and you don't mind a slowdown you should be able to load a
# model that doesn't normally fit into a single GPU. If you have enough GPU memory the program will
# run faster if you don't want offload to CPU - so disable that section then.
#
# To deploy on 1 gpu:
#
# deepspeed --num_gpus 1 t0.py
# or:
# python -m torch.distributed.run --nproc_per_node=1 t0.py
#
# To deploy on 2 gpus:
#
# deepspeed --num_gpus 2 t0.py
# or:
# python -m torch.distributed.run --nproc_per_node=2 t0.py
# Imports
from transformers import AutoTokenizer, AutoConfig, AutoModelForSeq2SeqLM
from transformers.deepspeed import HfDeepSpeedConfig
import deepspeed
import os
# To avoid warnings about parallelism in tokenizers
os.environ["TOKENIZERS_PARALLELISM"] = "false"
import torch
from argparse import ArgumentParser
#################
# DeepSpeed Config
#################
def generate_ds_config(args):
"""
ds_config notes
- enable bf16 if you use Ampere or higher GPU - this will run in mixed precision and will be
faster.
- for older GPUs you can enable fp16, but it'll only work for non-bf16 pretrained models - e.g.
all official t5 models are bf16-pretrained
- set offload_param.device to "none" or completely remove the `offload_param` section if you don't want CPU offload
- if using `offload_param` you can manually finetune stage3_param_persistence_threshold to control which params should remain on gpus - the larger the value the smaller the offload size
For indepth info on Deepspeed config see
https://huggingface.co/docs/transformers/main/main_classes/deepspeed
keeping the same format as json for consistency, except it uses lower case for true/false
fmt: off
"""
model_config = AutoConfig.from_pretrained(args.model_name)
world_size = int(os.getenv("WORLD_SIZE", "1"))
model_hidden_size = model_config.d_model
# batch size has to be divisible by world_size, but can be bigger than world_size
train_batch_size = args.batch_size * world_size
config = {
"fp16": {
"enabled": False
},
"bf16": {
"enabled": False
},
"zero_optimization": {
"stage": 3,
"offload_param": {
"device": args.offload,
"nvme_path": args.nvme_offload_path,
"pin_memory": True,
"buffer_count": 6,
"buffer_size": 1e8,
"max_in_cpu": 1e9
},
"aio": {
"block_size": 262144,
"queue_depth": 32,
"thread_count": 1,
"single_submit": False,
"overlap_events": True
},
"overlap_comm": True,
"contiguous_gradients": True,
"reduce_bucket_size": model_hidden_size * model_hidden_size,
"stage3_prefetch_bucket_size": 0.1 * model_hidden_size * model_hidden_size,
"stage3_max_live_parameters": 1e8,
"stage3_max_reuse_distance": 1e8,
"stage3_param_persistence_threshold": 10 * model_hidden_size
},
"steps_per_print": 2000,
"train_batch_size": train_batch_size,
"train_micro_batch_size_per_gpu": 1,
"wall_clock_breakdown": False
}
return config
#################
# Helper Methods
#################
def parse_args():
"""Parse program options"""
parser = ArgumentParser()
parser.add_argument("--model-name", default="bigscience/T0", help="Name of model to load.")
parser.add_argument("--offload", choices=["nvme", "cpu", "none"], default="none",
help="DeepSpeed optimization offload choices for ZeRO stage 3.")
parser.add_argument("--nvme-offload-path", default="/tmp/nvme-offload",
help="Path for NVME offload. Ensure path exists with correct write permissions.")
parser.add_argument("--batch-size", type=int, default=1, help="Effective batch size is batch-size * # GPUs")
return parser.parse_args()
#################
# Main
#################
# Distributed setup
local_rank = int(os.getenv("LOCAL_RANK", "0"))
torch.cuda.set_device(local_rank)
deepspeed.init_distributed()
args = parse_args()
ds_config = generate_ds_config(args)
# fmt: on
# next line instructs transformers to partition the model directly over multiple gpus using
# deepspeed.zero.Init when model's `from_pretrained` method is called.
#
# **it has to be run before loading the model AutoModelForSeq2SeqLM.from_pretrained(model_name)**
#
# otherwise the model will first be loaded normally and only partitioned at forward time which is
# less efficient and when there is little CPU RAM may fail
dschf = HfDeepSpeedConfig(ds_config) # keep this object alive
# Special version of T0
revision = None
if args.model_name in ["bigscience/T0", "bigscience/T0pp"]:
revision = "sharded"
model = AutoModelForSeq2SeqLM.from_pretrained(args.model_name, revision=revision)
# initialise Deepspeed ZeRO and store only the engine object
ds_engine = deepspeed.initialize(model=model, config_params=ds_config)[0]
ds_engine.module.eval() # inference
# Deepspeed ZeRO can process unrelated inputs on each GPU. So for 2 gpus you process 2 inputs at once.
# If you use more GPUs adjust for more.
# And of course if you have just one input to process you then need to pass the same string to both gpus
# If you use only one GPU, then you will have only rank 0.
rank = torch.distributed.get_rank()
if rank == 0:
text_in = "Is this review positive or negative? Review: this is the best cast iron skillet you will ever buy"
elif rank == 1:
text_in = "Is this review positive or negative? Review: this is the worst restaurant ever"
tokenizer = AutoTokenizer.from_pretrained(args.model_name)
inputs = tokenizer.encode(text_in, return_tensors="pt").to(device=local_rank)
# synced_gpus (bool, optional, defaults to False) - whether to continue running the while loop
# until max_length (needed for ZeRO stage 3).
# model_kwargs - additional model specific keyword arguments will be forwarded to the forward
# function of the model. If model is an encoder-decoder model the kwargs should include encoder_outputs.
with torch.no_grad():
outputs = ds_engine.module.generate(inputs, synced_gpus=True)
text_out = tokenizer.decode(outputs[0], skip_special_tokens=True)
print(f"rank{rank}:\n in={text_in}\n out={text_out}\n")
```
And the following code to run:
```bash
export CUDA_LAUNCH_BLOCKING=0
export OMP_NUM_THREADS=1
python -m torch.distributed.run --nproc_per_node=2 T0_inference.py
```
<|||||>That's a really neat summary and code parametrization, @AADeLucia - great work!
Just to add that with the sharded model it's now possible to infer T0 (42GB) and other similar models in fp32 using just 2x 24GB gpus, w/ deepspeed w/o any offload.
But if you have smaller GPUs, or just one GPU or larger models then the above script allows you to offload to cpu RAM if you have lots of it and if not so much to an NVMe device - each making the performance progressively slower.
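For example, with the flags defined in the script above (paths are placeholders to adapt):
```bash
# 2 gpus, no offload - enough for the sharded T0 in fp32 on 2x 24GB cards
python -m torch.distributed.run --nproc_per_node=2 T0_inference.py --offload none

# 1 gpu, offload parameters to CPU RAM
python -m torch.distributed.run --nproc_per_node=1 T0_inference.py --offload cpu

# 1 gpu, offload parameters to an NVMe drive (requires libaio-dev, see the summary above)
python -m torch.distributed.run --nproc_per_node=1 T0_inference.py --offload nvme --nvme-offload-path /path/to/nvme
```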
And once:
1. `transformers>4.18.0`
2. `deepspeed>0.6.3`
are available you can install the released versions instead of the git versions.
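i.e. something along the lines of:
```bash
pip install "transformers>4.18.0" "deepspeed>0.6.3"
```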
|
transformers | 16,615 | closed | Update Support image on README.md | # What does this PR do?
Updates the Support image linking to our [Expert Acceleration Program Page](https://huggingface.co/support).
The Marketing + Monetization Team felt this image deserved a refresh to better highlight the incredible Machine Learning Experts at Hugging Face. In addition, we hope this helps drive additional EAP awareness among our brilliant community members in an effort to better support their ML work/roadmaps.
| 04-05-2022 20:12:02 | 04-05-2022 20:12:02 | _The documentation is not available anymore as the PR was closed or merged._<|||||>whenever possible you should compress images using https://tinypng.com/
Here you'd be saving 83% for instance:
compressed image:

Other than that, looks good to me! cc'ing @gary149 in case he has feedback on the design<|||||>> Other than that, looks good to me! cc'ing @gary149 in case he has feedback on the design
Looks ok to me, but the logo is distorted and maybe the image should be vertically smaller to take less space on the readme?<|||||>The logo shouldn't be distorted. Have tested this and the original image was verified/improved by our graphic designer, Bibi. @feconroses if you have any other thoughts?
@gary149 could you share additional details/screenshots to help us address any distortions? <|||||>Looks good to me! <|||||>i think what @gary149 means is the logo is not the usual aspect ratio, it's "flatter". (let me invite Bibi to GitHub BTW)<|||||>Hi! Bibi here. I'll have a look at the issue and resolve it<|||||>Ok for me.<|||||>you should tweet about it @BritneyMuller! |
transformers | 16,614 | closed | Improving T5 Docs |
### Who can help
@NielsRogge @patrickvonplaten @sgugger
Documentation: @sgugger
## Information
Model I am using : T5ForConditionalGeneration
The problem arises when using:
* my own modified scripts
The tasks I am working on is:
* my own task or dataset
## To reproduce
[#13240](https://github.com/huggingface/transformers/pull/13240) is a really nice PR that adds a lot of clarity to the documentation. However, in the examples provided we read
```python
tokenizer.pad_token = tokenizer.eos_token # to avoid an error
```
and I feel more information could be given to explain to users why this is necessary. I am currently attempting to do batched decoding with T5 and observing very strange outputs, and therefore I'm keen to understand if this is a problem. Very soon I will test to understand whether the strange behaviour is due to batching or not, but it would be great to enhance the docs to explain the error that would occur and why.
## Expected behavior
One or two extra sentences to explain why we left-pad encoder input sequences with <EOS> when doing batched decoding for T5.
| 04-05-2022 18:13:30 | 04-05-2022 18:13:30 | Hi,
Thanks for your question! To be honest it wasn't clear to me either; I guess it's set because otherwise it might complain that no padding token is set.
I took that snippet from this PR: #7552.
It includes the comment:
```
# when generating, we will use the logits of right-most token to predict the next token
# so the padding should be on the left
```
However, that was for a decoder-only model (GPT-2). Not sure whether the same is required for an encoder-decoder one like T5. Maybe @patrickvonplaten can clarify here.<|||||>Thanks @NielsRogge, this would be very helpful indeed. I did the following checks this morning:
1. Run my original, non-batched decoder
2. Run batched decoding, implemented following the docs
The results are vastly different.
For 1 we get:
```json
{
"1_00000": {
"0": {
"utterance": "hi, could you get me a restaurant booking on the 8th please?",
"Restaurants_2": {
"predicted_str": " [states] 10:the 8th [intents] i1 [req_slots] <EOS>"
}
},
"1": {
"utterance": "could you get me a reservation at p.f. chang's in corte madera at afternoon 12?",
"Restaurants_2": {
"predicted_str": " [states] 0:corte madera 2:the 8th 9:p.f. chang's 10:afternoon 12 [intents] i1 [req_slots] <EOS>"
}
}
```
For 2 we get:
```json
{
"1_00000": {
"0": {
"utterance": "hi, could you get me a restaurant booking on the 8th please?",
"Restaurants_2": {
"predicted_str": " [states] 10:the 8th [intents] i1 [req_slots] i1 [req_slots] i1 [req_slots] i1 [req_slots] i1 [req_slots] i1 [req_slots] [intents] [intents] [intents] [intents] [intents] [intents] [intents] [intents] [intents] [intents] <EOS>"
}
},
"1": {
"utterance": "could you get me a reservation at p.f. chang's in corte madera at afternoon 12?",
"Restaurants_2": {
"predicted_str": " [states] 0:corte madera 2:the 8th 9:p.f. chang's 10:afternoon 12 [intents] i1 [req_slots] i1 [req_slots] i1 [req_slots] i1 [req_slots] i1 [req_slots] [intents] [intents] [intents] [intents] [intents] [intents] [intents] [intents] [intents] [intents] [intents] [intents] [intents] [intents] [intents] [intents] [intents] [intents] [intents] [intents] [intents] [intents] [intents] [intents] [intents] [intents] [intents] [intents] [intents] [intents] [intents] [intents] [intents] [intents] 0 [intents] i1 i1 i1 i1 i1 i1 i1 i1 i1 i1 i1 i1 i1 i1 i1 i1 i1 i1 i1 i1 i1 i1 i1 i <EOS>"
}
},
}
```
The results above are obtained with `batch_size=5` but setting `batch_size=1` gives identical results. Therefore my "batched" implementation is somehow subtly different to the one giving correct results.
I'll try to figure out why this occurs. I should state that I can make the code/checkpoints to replicate the above issues available to you for debugging. I'll keep you posted.
Note: The results in 2. are after manually post-processing to remove many `<EOS>` strings. I can post the "raw" version if it is helpful with debugging!<|||||>Ok, I debugged my code and the preliminary test passed. The change has been to change the `generate` API call from
```python
output_seqs = model.generate(
input_ids=input_ids.to(DEVICE),
attention_mask=attention_mask.to(DEVICE),
max_length=args.decoder_max_seq_len,
use_cache=True,
)
```
to
```python
output_seqs = model.generate(
input_ids=input_ids.to(DEVICE),
attention_mask=attention_mask.to(DEVICE),
max_length=args.decoder_max_seq_len,
bos_token_id=tokenizer.bos_token_id,
eos_token_id=tokenizer.eos_token_id,
pad_token_id=tokenizer.pad_token_id,
use_cache=True,
)
```
I'm not sure if this is an omission in the docs or why this is a fix. An explanation would be appreciated! Maybe something note worthy is that the last id in the `input_ids` is the `<EOS>` token ID when batch size is `1` (this also holds true of my non-batched implementation). The output tensors then undergo the following postprocessing steps:
```python
output_strings = tokenizer.batch_decode(output_seqs)
output_strings = remove_padding(output_strings, tokenizer.pad_token)
```
where `remove_padding` is
```python
def remove_padding(output_strings: list[str], pad_token: str) -> list[str]:
padding_free = []
for s in output_strings:
pad_token_start = s.find(pad_token)
while pad_token_start != -1:
s = f"{s[:pad_token_start]}{s[pad_token_start+len(pad_token):].lstrip()}"
pad_token_start = s.find(pad_token)
padding_free.append(f"{s} {pad_token}")
return padding_free
```
By contrast, the implementation that does not use batching uses the call:
```python
output_seqs = model.generate(
input_ids.to(DEVICE),
max_length=args.decoder_max_seq_len,
do_sample=False,
temperature=1.0,
use_cache=True,
num_beams=1,
bos_token_id=tokenizer.bos_token_id,
eos_token_id=tokenizer.eos_token_id,
pad_token_id=tokenizer.pad_token_id,
early_stopping=True,
)
```
and postprocessing is simply
```python
output_strings = tokenizer.decode(output_seqs[0])
```<|||||>Good point @alexcoca ,
@NielsRogge @alexcoca Yeah this looks like it was a bad copy-paste from GPT2.
Should be corrected here: https://github.com/huggingface/transformers/pull/16646<|||||>Thanks @patrickvonplaten! So from your PR I understand that it is not necessary to set the tokenizer padding to `<EOS>` as we do for GPT-2 and my fix worked because I passed the correct `pad_token_id` to `generate` in my fix. So I'll revert both and expect this to work as well.
On a different note, I ran a large-scale test on batched inference. I get `70.20432%` accuracy when I decode with batch size `16` and `70.19564%` when I run my original code. The difference is too small to matter, but it does show that inference in the two cases functions slightly differently!
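For future readers, a minimal sketch of batched T5 generation without the GPT-2-style eos-as-pad workaround (`t5-small` is just a small stand-in checkpoint, not the model from my experiments):
```python
from transformers import T5ForConditionalGeneration, T5Tokenizer

tokenizer = T5Tokenizer.from_pretrained("t5-small")  # T5 ships with a real <pad> token
model = T5ForConditionalGeneration.from_pretrained("t5-small")

sentences = [
    "translate English to German: The house is wonderful.",
    "translate English to German: I like pizza.",
]
batch = tokenizer(sentences, padding=True, return_tensors="pt")

# no tokenizer.pad_token = tokenizer.eos_token needed - pad_token_id is picked up from the model config
outputs = model.generate(input_ids=batch.input_ids, attention_mask=batch.attention_mask, max_length=40)
print(tokenizer.batch_decode(outputs, skip_special_tokens=True))
```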
<|||||>I could be related to https://github.com/huggingface/transformers/issues/14859#issuecomment-1015389797 actually<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored. |
transformers | 16,613 | closed | Fill-in-the-Blank Text Generation of T5 | Hello, any guys know how to implement the `Fill-in-the-Blank Text Generation` of T5 in this post: https://ai.googleblog.com/2020/02/exploring-transfer-learning-with-t5.html
Is there any demo codes in huggingface transformers to do this ?
Thanks. | 04-05-2022 17:38:15 | 04-05-2022 17:38:15 | I'm wondering the same. I've got pretraining going with the `run_t5_mlm_flax.py` script, which is great(!), but help with fill-in-the-blank generation is my next step.<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored. |
transformers | 16,612 | closed | fix default num_attention_heads in segformer doc | # What does this PR do?
<!--
Congratulations! You've made it this far! You're not quite done yet though.
Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution.
Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change.
Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost.
-->
<!-- Remove if not applicable -->
Fixes # [16605](https://github.com/huggingface/transformers/issues/16605)
## Before submitting
- [X] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
<!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @
If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**.
Please tag fewer than 3 people.
Models:
- albert, bert, xlm: @LysandreJik
- blenderbot, bart, marian, pegasus, encoderdecoder, t5: @patrickvonplaten, @patil-suraj
- longformer, reformer, transfoxl, xlnet: @patrickvonplaten
- fsmt: @stas00
- funnel: @sgugger
- gpt2: @patrickvonplaten, @LysandreJik
- rag: @patrickvonplaten, @lhoestq
- tensorflow: @LysandreJik
Library:
- benchmarks: @patrickvonplaten
- deepspeed: @stas00
- ray/raytune: @richardliaw, @amogkam
- text generation: @patrickvonplaten
- tokenizers: @n1t0, @LysandreJik
- trainer: @sgugger
- pipelines: @LysandreJik
Documentation: @sgugger
HF projects:
- datasets: [different repo](https://github.com/huggingface/datasets)
- rust tokenizers: [different repo](https://github.com/huggingface/tokenizers)
Examples:
- maintained examples (not research project or legacy): @sgugger, @patil-suraj
- research_projects/bert-loses-patience: @JetRunner
- research_projects/distillation: @VictorSanh
-->
| 04-05-2022 16:53:27 | 04-05-2022 16:53:27 | _The documentation is not available anymore as the PR was closed or merged._ |
transformers | 16,611 | closed | [Speech2Text Doc] Fix docs | # What does this PR do?
<!--
Congratulations! You've made it this far! You're not quite done yet though.
Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution.
Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change.
Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost.
-->
<!-- Remove if not applicable -->
Fixes Speech to Text model docs and add tests
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
<!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @
If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**.
Please tag fewer than 3 people.
Models:
- albert, bert, xlm: @LysandreJik
- blenderbot, bart, marian, pegasus, encoderdecoder, t5: @patrickvonplaten, @patil-suraj
- longformer, reformer, transfoxl, xlnet: @patrickvonplaten
- fsmt: @stas00
- funnel: @sgugger
- gpt2: @patrickvonplaten, @LysandreJik
- rag: @patrickvonplaten, @lhoestq
- tensorflow: @LysandreJik
Library:
- benchmarks: @patrickvonplaten
- deepspeed: @stas00
- ray/raytune: @richardliaw, @amogkam
- text generation: @patrickvonplaten
- tokenizers: @n1t0, @LysandreJik
- trainer: @sgugger
- pipelines: @LysandreJik
Documentation: @sgugger
HF projects:
- datasets: [different repo](https://github.com/huggingface/datasets)
- rust tokenizers: [different repo](https://github.com/huggingface/tokenizers)
Examples:
- maintained examples (not research project or legacy): @sgugger, @patil-suraj
- research_projects/bert-loses-patience: @JetRunner
- research_projects/distillation: @VictorSanh
-->
| 04-05-2022 16:27:35 | 04-05-2022 16:27:35 | _The documentation is not available anymore as the PR was closed or merged._<|||||>Great review - thanks @ydshieh |
transformers | 16,610 | closed | 3-dimensional attention_mask in LongformerSelfAttention |
# ❓ Questions & Help
<!-- The GitHub issue tracker is primarly intended for bugs, feature requests,
new models, benchmarks, and migration questions. For all other questions,
we direct you to the Hugging Face forum: https://discuss.huggingface.co/ .
-->
## Details
<!-- Description of your issue -->
<!-- You should first ask your question on the forum, and only if
you didn't get an answer after a few days ask it here on GitHub. -->
Hi everyone, I am learning NLP and I am doing a project that uses ```distilroberta``` as an encoder model. I have read about [K-BERT](https://arxiv.org/pdf/1909.07606), and I want to implement that model with ```distilroberta```. To say it briefly, **K-BERT** has a Knowledge Graph running in parallel with it. Each word in an input sentence is queried from that Knowledge Graph and the retrieved information is injected into the input sentence beside that word. The injected information is only relevant to its owner token, so there is a visible matrix that decides which tokens are related to a given token and acts as an ```attention_mask```. An ```attention_mask``` is usually ```[batch, seq_len]```, but this kind of attention mask is ```[batch, seq_len, seq_len]```. There is also a ```position index``` to indicate which token is related to the others. Therefore, when using distilroberta, I need to pass ```position``` as ```position_ids``` and the ```visible matrix``` as the ```attention_mask``` argument. Thankfully, ```modeling_roberta``` accepts a 3-dimensional ```attention_mask``` and creates an extended attention mask as ```[:, None, :, :]```, so my visible matrix works. However, when I try to convert ```distilroberta``` to ```longformer```, there is a problem in ```LongformerSelfAttention```:
```python
# values to pad for attention probs
remove_from_windowed_attention_mask = (attention_mask != 0)[:, :, None, None]
```
If ```attention_mask``` is ```[batch, seq_len]```, then after transforming like that, ```remove_from_windowed_attention_mask``` will be ```[batch, seq_len, 1, 1]```. However, with my ```visible matrix``` it results in ```[batch, seq_len, 1, 1, seq_len]```, which then leads to an error at:
```python
def _sliding_chunks_query_key_matmul(self, query: torch.Tensor, key: torch.Tensor, window_overlap: int):
"""
Matrix multiplication of query and key tensors using with a sliding window attention pattern. This
implementation splits the input into overlapping chunks of size 2w (e.g. 512 for pretrained Longformer) with an
overlap of size window_overlap
"""
batch_size, seq_len, num_heads, head_dim = query.size()
assert (
seq_len % (window_overlap * 2) == 0
), f"Sequence length should be multiple of {window_overlap * 2}. Given {seq_len}"
assert query.size() == key.size()
ValueError: too many values to unpack (expected 4)
```
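To make the shape clash concrete, here is a tiny sketch (shapes only, no model weights involved):
```python
import torch

mask_2d = torch.ones(2, 512)        # usual attention_mask: [batch, seq_len]
mask_3d = torch.ones(2, 512, 512)   # visible matrix:       [batch, seq_len, seq_len]

print(mask_2d[:, :, None, None].shape)  # torch.Size([2, 512, 1, 1])      - what Longformer expects
print(mask_3d[:, :, None, None].shape)  # torch.Size([2, 512, 1, 1, 512]) - an extra trailing dimension

# that 5-dimensional mask is later fed into _sliding_chunks_query_key_matmul, whose 4-way unpack
#   batch_size, seq_len, num_heads, head_dim = query.size()
# then fails with "too many values to unpack (expected 4)"
```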
I actually do not know how to solve this problem, and I am afraid that if I change that attention mask transformation to suit my ```visible matrix```, it might affect something else unexpectedly. I would really appreciate your help.
**A link to original question on the forum**: https://discuss.huggingface.co/t/3-dimensional-attention-mask-in-longformerselfattention/16496?u=khangnguyen2907
<!-- Your issue will be closed if you don't fill this part. -->
| 04-05-2022 15:03:00 | 04-05-2022 15:03:00 | This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored. |
transformers | 16,609 | closed | Use CLIP model config to set some kwargs for components | # What does this PR do?
In `CLIPModel`, set `output_attentions` and `output_hidden_states` using `CLIPModel.config` if these values are specified in the configuration + not specified in the arguments.
(currently, these operations are done in its vision & text components separately, which causes a WIP CLIP PT/TF equivalence test to fail - #16557)
## Details
Currently, `CLIPModel` uses its 2 components' (`vision_model` and `text_model`) configurations to perform things like
(here self is `CLIPVisionTransformer` or `CLIPTextTransformer`)
```python
output_attentions = output_attentions if output_attentions is not None else self.config.output_attentions
```
If `output_attentions`/`output_hidden_states` are not passed to `CLIPModel.forward` at this line
https://github.com/huggingface/transformers/blob/9fd5e6bbe605941b707b0e1aa223a5c51c183550/src/transformers/models/clip/modeling_clip.py#L966-L967
but `CLIPModel.config` has these values set, `CLIPModel.config.output_attentions` and `CLIPModel.config.output_hidden_states` won't have any effect. This case happens here
https://github.com/huggingface/transformers/blob/9fd5e6bbe605941b707b0e1aa223a5c51c183550/tests/test_modeling_tf_common.py#L544-L547
Therefore, **the CLIP PT/TF equivalence test won't return hidden_states/attentions for the PT model.**
In TF,
https://github.com/huggingface/transformers/blob/b33ab4eb59f3baa0108d494695bc09b4688960a7/src/transformers/modeling_tf_utils.py#L393
will use `config` to set the kwargs at the `CLIPModel` level. These kwargs are passed to the 2 components, and **CLIP PT/TF equivalence test returns hidden_states/attentions for the TF model.**
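To illustrate the PT side, a minimal repro sketch (randomly initialized weights; the token ids are placeholders):
```python
import torch
from transformers import CLIPConfig, CLIPModel

config = CLIPConfig(output_attentions=True, output_hidden_states=True)  # set on the top-level config only
model = CLIPModel(config)

outputs = model(
    input_ids=torch.tensor([[49406, 320, 1125, 49407]]),
    attention_mask=torch.ones(1, 4, dtype=torch.long),
    pixel_values=torch.randn(1, 3, 224, 224),
)
# without this change the top-level flags are ignored, so these come back as None
print(outputs.text_model_output.attentions, outputs.vision_model_output.hidden_states)
```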
| 04-05-2022 14:51:54 | 04-05-2022 14:51:54 | _The documentation is not available anymore as the PR was closed or merged._<|||||>cc @gante (just for information) since he is recently working on `unpack_inputs` & `input_processing` in TF |
transformers | 16,608 | closed | [Community event] HugGAN sprint for training AI art with free compute | # HugGAN sprint :art: :artist: :paintbrush:
Got GANs?
Feel free to join the [HugGAN event](https://github.com/huggingface/community-events/tree/main/huggan), which takes place virtually from April 4 - April 15th (you can join any time in between these dates). ๐ค
The goal is to train and showcase generative AI art* with free compute provided by [Paperspace](https://www.paperspace.com/).
*GANs in particular but if you're into diffusion models, go ahead ;)
## What is it about?
The main components of the HugGAN sprint consist of:
- [ ] sharing image datasets, useful for training generative models, with the community using the brand-new (and awesome) [ImageFolder](https://huggingface.co/docs/datasets/v2.0.0/en/image_process#imagefolder) and [push_to_hub](https://huggingface.co/docs/datasets/upload_dataset#upload-from-python) features. There are already 25+ datasets contributed, including famous ones such as [MetFaces](https://huggingface.co/datasets/huggan/metfaces) and [pokemon](https://huggingface.co/datasets/huggan/pokemon) :rocket:
- [ ] training GANs using [example scripts](https://github.com/huggingface/community-events/tree/main/huggan/pytorch) provided by :hugs:, or any custom model picked by you, using free compute :computer: :money_mouth_face: provided by Paperspace
- [ ] get to know how GANs work, through multiple community talks and discussions by famous AI artists (announced soon!)
- [ ] get to know the :hugs: ecosystem through [Datasets](https://huggingface.co/docs/datasets/index), [Accelerate](https://huggingface.co/docs/accelerate/index), using either PyTorch/Keras
- [ ] showcasing the magic of a GAN as a :hugs: Space, like this one: https://huggingface.co/spaces/hysts/DualStyleGAN
## What do I need to do to participate?
To participate, please follow these steps:
- [ ] fill out [this short google form](https://docs.google.com/forms/d/e/1FAIpQLSd_mpK4dYu1V-ejeTzoiIsTiMSVlZ0kYQCEoBmoa0vH-bNuag/viewform)
- [ ] create a Hugging Face Hub account [here](https://huggingface.co/join) if you havenโt already, and join the [huggan organization ](https://huggingface.co/organizations/huggan/share/bekBYwkjyeJOAlxpcYRKgjLaRcrnIOeuge)
- [ ] join our discord [here](https://discord.gg/H3bUrDPTfS) - when joining the event's discord channel please make sure to click on the :hugs: emoji under the first message to access all relevant information.
| 04-05-2022 14:45:48 | 04-05-2022 14:45:48 | This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored. |
transformers | 16,607 | closed | KeyError: 'loss' while trying to finetune Bart model on custom dataset for summarization. | I am trying to finetune BART on my custom dataset by following a huggingface finetuning tutorial.
<img width="796" alt="Screen Shot 2022-04-05 at 10 29 54 AM" src="https://user-images.githubusercontent.com/78582210/161777173-50af5514-443e-429e-9a99-468c50fe4e03.png">
<img width="846" alt="Screen Shot 2022-04-05 at 10 33 33 AM" src="https://user-images.githubusercontent.com/78582210/161777920-67c25cf2-afe6-435a-8b46-d3d1e639d394.png">
<img width="485" alt="Screen Shot 2022-04-05 at 10 34 05 AM" src="https://user-images.githubusercontent.com/78582210/161778025-2eeafd95-e21d-4385-b265-1c33cdf15d0a.png">
<img width="731" alt="Screen Shot 2022-04-05 at 10 34 24 AM" src="https://user-images.githubusercontent.com/78582210/161778096-c5ab1a96-8bdc-4337-b320-54812993ee0d.png">
At which point I am met with the keyError: 'loss' error. I looked at https://discuss.huggingface.co/t/keyerror-loss-during-fine-tuning-bert-base-italian-cased-for-qa/6638/2 but wasn't able to figure it out for my code. Please help!
@sgugger | 04-05-2022 14:43:01 | 04-05-2022 14:43:01 | Please use the forums to debug your code. You are using the base model class here, that does not accept `labels` and does not return a loss. You should use a model that can be fine-tuned on your task.<|||||>@sgugger Apologies for that and thank you for letting me know. All articles for BART finetuning I came across use the `pretrained_model_name = "facebook/bart-large-cnn"` model. Is there a small version like "t5-small" ? <|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored. |
transformers | 16,606 | closed | Fix seq2seq doc tests | # What does this PR do?
This PR fixes the doc tests for seq2seq models (BART, mBART, BigBirdPegasus, PLBart). | 04-05-2022 14:17:48 | 04-05-2022 14:17:48 | _The documentation is not available anymore as the PR was closed or merged._ |
transformers | 16,605 | closed | change num_attention_heads=[1, 2, 5, 8] to [1,2,4,8] | Hi @NielsRogge ,
Is this a typo in segformer? The default number of attention heads should be `[1,2,4,8]`.
https://github.com/huggingface/transformers/blob/4354005291dacd5c8264f0936a33678df4f4bb71/src/transformers/models/segformer/configuration_segformer.py#L107 | 04-05-2022 13:36:46 | 04-05-2022 13:36:46 | @NielsRogge I'll work on this.<|||||>Hi,
Thanks for pointing out, actually that piece of code is correct, we should just update the docstring.
As seen below (table taken from the [paper](https://arxiv.org/abs/2105.15203)), (N1, N2, N3, N4) refer to the number of heads in each stage:
<img width="832" alt="Screenshot 2022-04-05 at 16 50 13" src="https://user-images.githubusercontent.com/48327001/161781658-53578f3d-462c-470c-846f-9f07ecae4872.png">
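A quick way to double check the shipped default (a one-liner sketch):
```python
from transformers import SegformerConfig

print(SegformerConfig().num_attention_heads)  # [1, 2, 5, 8]
```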
Can you open a PR to fix the docstring?
<|||||>Hi @NielsRogge ,
Thank you very much for your quick reply.
This is my misunderstanding. I'm sorry to waste your time.
Best regards,
Jun<|||||>Can you still open a PR to fix it in the docs? |
transformers | 16,604 | closed | Trouble exporting resolve_conj operation to ONNX | ## Environment info
<!-- You can run the command `transformers-cli env` and copy-and-paste its output below.
Don't forget to fill out the missing fields in that output! -->
- `transformers` version: 4.16.2
- Platform: Linux-5.13.0-39-generic-x86_64-with-glibc2.34
- Python version: 3.9.7
- PyTorch version (GPU?): 1.10.2+cu113 (True)
- Tensorflow version (GPU?): 2.8.0 (False)
- Flax version (CPU?/GPU?/TPU?): not installed (NA)
- Jax version: not installed
- JaxLib version: not installed
- Using GPU in script?: No
- Using distributed or parallel set-up in script?: No
### Who can help
<!-- Your issue will be replied to more quickly if you can figure out the right person to tag with @
If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**.
Please tag fewer than 3 people.
Models:
- ALBERT, BERT, XLM, DeBERTa, DeBERTa-v2, ELECTRA, MobileBert, SqueezeBert: @LysandreJik
- T5, Pegasus, EncoderDecoder: @patrickvonplaten
- Blenderbot, MBART, BART, Marian, Pegasus: @patil-suraj
- Reformer, TransfoXL, XLNet, FNet: @patrickvonplaten
- Longformer, BigBird: @ydshieh
- FSMT: @stas00
- Funnel: @sgugger
- GPT-2, GPT: @patil-suraj, @patrickvonplaten, @LysandreJik
- RAG, DPR: @patrickvonplaten, @lhoestq
- TensorFlow: @Rocketknight1
- JAX/Flax: @patil-suraj
- TAPAS, LayoutLM, LayoutLMv2, LUKE, ViT, BEiT, DEiT, DETR, CANINE: @NielsRogge
- GPT-Neo, GPT-J, CLIP: @patil-suraj
- Wav2Vec2, HuBERT, SpeechEncoderDecoder, UniSpeech, UniSpeechSAT, SEW, SEW-D, Speech2Text: @patrickvonplaten, @anton-l
If the model isn't in the list, ping @LysandreJik who will redirect you to the correct contributor.
Library:
- Benchmarks: @patrickvonplaten
- Deepspeed: @stas00
- Ray/raytune: @richardliaw, @amogkam
- Text generation: @patrickvonplaten @narsil
- Tokenizers: @SaulLu
- Trainer: @sgugger
- Pipelines: @Narsil
- Speech: @patrickvonplaten, @anton-l
- Vision: @NielsRogge, @sgugger
Documentation: @sgugger
Model hub:
- for issues with a model, report at https://discuss.huggingface.co/ and tag the model's creator.
HF projects:
- datasets: [different repo](https://github.com/huggingface/datasets)
- rust tokenizers: [different repo](https://github.com/huggingface/tokenizers)
Examples:
- maintained examples (not research project or legacy): @sgugger, @patil-suraj
For research projetcs, please ping the contributor directly. For example, on the following projects:
- research_projects/bert-loses-patience: @JetRunner
- research_projects/distillation: @VictorSanh
-->
## Information
Model I am using (Bert, XLNet ...): Helsinki-NLP/opus-mt-xx-xx
The problem arises when using:
* [ ] the official example scripts: (give details below)
* [X] my own modified scripts: (give details below)
The tasks I am working on is:
* [ ] an official GLUE/SQUaD task: (give the name)
* [x] my own task or dataset: (give details below)
## To reproduce
Steps to reproduce the behavior:
1. Run below script
```
import torch.nn as nn
from transformers import MarianMTModel
from transformers import AutoTokenizer
import torch
class HelsinkiModel(nn.Module):
def __init__(self, model_id, device):
super(HelsinkiModel, self).__init__()
self.mt_model = MarianMTModel.from_pretrained(model_id).to(device)
self.mt_model.eval()
self.device = device
def forward(self, input_ids, attention_mask):
return self.mt_model.generate(input_ids=input_ids, attention_mask=attention_mask, repetition_penalty=0.7)
model = HelsinkiModel('Helsinki-NLP/opus-mt-fr-en', 'cpu')
tokenizer = AutoTokenizer.from_pretrained('Helsinki-NLP/opus-mt-fr-en')
inputs = tokenizer("Une phrase d'exemple", padding='max_length', max_length=512, truncation=True, return_tensors="pt")
inputs = (inputs["input_ids"].to(device), inputs["attention_mask"].to(device))
torch.onnx.export(model, inputs, path, export_params=True, opset_version=14, do_constant_folding=False,
input_names=["input_ids", "attention_mask"], output_names=["outputs"],
dynamic_axes={'input_ids': {0: 'batch_size'}, 'attention_mask': {0: 'batch_size'}, 'outputs': {0: 'batch_size'}})
```
<!-- If you have code snippets, error messages, stack traces please provide them here as well.
Important! Use code tags to correctly format your code. See https://help.github.com/en/github/writing-on-github/creating-and-highlighting-code-blocks#syntax-highlighting
Do not use screenshots, as they are hard to read and (more importantly) don't allow others to copy-and-paste your code.-->
## Error trace
```
/home/.local/lib/python3.9/site-packages/transformers/models/marian/modeling_marian.py:234: TracerWarning: Converting a tensor to a Python boolean might cause the trace to be incorrect. We can't record the data flow of Python values, so this value will be treated as a constant in the future. This means that the trace might not generalize to other inputs!
if attn_weights.size() != (bsz * self.num_heads, tgt_len, src_len):
/home/.local/lib/python3.9/site-packages/transformers/models/marian/modeling_marian.py:240: TracerWarning: Converting a tensor to a Python boolean might cause the trace to be incorrect. We can't record the data flow of Python values, so this value will be treated as a constant in the future. This means that the trace might not generalize to other inputs!
if attention_mask.size() != (bsz, 1, tgt_len, src_len):
/home/.local/lib/python3.9/site-packages/transformers/models/marian/modeling_marian.py:271: TracerWarning: Converting a tensor to a Python boolean might cause the trace to be incorrect. We can't record the data flow of Python values, so this value will be treated as a constant in the future. This means that the trace might not generalize to other inputs!
if attn_output.size() != (bsz * self.num_heads, tgt_len, self.head_dim):
/home/.local/lib/python3.9/site-packages/transformers/generation_utils.py:1119: TracerWarning: Converting a tensor to a Python boolean might cause the trace to be incorrect. We can't record the data flow of Python values, so this value will be treated as a constant in the future. This means that the trace might not generalize to other inputs!
if input_ids.shape[-1] >= max_length:
/home/.local/lib/python3.9/site-packages/transformers/generation_beam_search.py:180: TracerWarning: torch.tensor results are registered as constants in the trace. You can safely ignore this warning if you use this function to create tensors out of constant variables that would be the same every time you call this function. In any other case, this might cause the trace to be incorrect.
self._done = torch.tensor([False for _ in range(batch_size)], dtype=torch.bool, device=self.device)
/home/.local/lib/python3.9/site-packages/transformers/generation_utils.py:1934: TracerWarning: Converting a tensor to a Python boolean might cause the trace to be incorrect. We can't record the data flow of Python values, so this value will be treated as a constant in the future. This means that the trace might not generalize to other inputs!
if num_beams * batch_size != batch_beam_size:
/home/.local/lib/python3.9/site-packages/transformers/models/marian/modeling_marian.py:850: TracerWarning: Converting a tensor to a Python boolean might cause the trace to be incorrect. We can't record the data flow of Python values, so this value will be treated as a constant in the future. This means that the trace might not generalize to other inputs!
if input_shape[-1] > 1:
/home/.local/lib/python3.9/site-packages/transformers/generation_logits_process.py:389: TracerWarning: Converting a tensor to a Python list might cause the trace to be incorrect. We can't record the data flow of Python values, so this value will be treated as a constant in the future. This means that the trace might not generalize to other inputs!
dynamic_banned_tokens = self._calc_banned_bad_words_ids(input_ids.tolist())
/home/.local/lib/python3.9/site-packages/transformers/generation_logits_process.py:120: TracerWarning: Converting a tensor to a Python boolean might cause the trace to be incorrect. We can't record the data flow of Python values, so this value will be treated as a constant in the future. This means that the trace might not generalize to other inputs!
if cur_len < self.min_length:
/home/.local/lib/python3.9/site-packages/transformers/generation_logits_process.py:595: TracerWarning: Converting a tensor to a Python boolean might cause the trace to be incorrect. We can't record the data flow of Python values, so this value will be treated as a constant in the future. This means that the trace might not generalize to other inputs!
if cur_len == self.max_length - 1:
/home/.local/lib/python3.9/site-packages/transformers/generation_beam_search.py:215: UserWarning: __floordiv__ is deprecated, and its behavior will change in a future version of pytorch. It currently rounds toward 0 (like the 'trunc' function NOT 'floor'). This results in incorrect rounding for negative values. To keep the current behavior, use torch.div(a, b, rounding_mode='trunc'), or for actual floor division, use torch.div(a, b, rounding_mode='floor').
if not (batch_size == (input_ids.shape[0] // self.group_size)):
/home/.local/lib/python3.9/site-packages/transformers/generation_beam_search.py:215: TracerWarning: Converting a tensor to a Python boolean might cause the trace to be incorrect. We can't record the data flow of Python values, so this value will be treated as a constant in the future. This means that the trace might not generalize to other inputs!
if not (batch_size == (input_ids.shape[0] // self.group_size)):
/home/.local/lib/python3.9/site-packages/transformers/generation_beam_search.py:233: TracerWarning: Converting a tensor to a Python boolean might cause the trace to be incorrect. We can't record the data flow of Python values, so this value will be treated as a constant in the future. This means that the trace might not generalize to other inputs!
if self._done[batch_idx]:
/home/.local/lib/python3.9/site-packages/transformers/generation_beam_search.py:247: TracerWarning: Iterating over a tensor might cause the trace to be incorrect. Passing a tensor of different shape won't change the number of iterations executed (and might lead to errors or silently give incorrect results).
zip(next_tokens[batch_idx], next_scores[batch_idx], next_indices[batch_idx])
/home/.local/lib/python3.9/site-packages/transformers/generation_beam_search.py:251: TracerWarning: Converting a tensor to a Python number might cause the trace to be incorrect. We can't record the data flow of Python values, so this value will be treated as a constant in the future. This means that the trace might not generalize to other inputs!
if (eos_token_id is not None) and (next_token.item() == eos_token_id):
/home/.local/lib/python3.9/site-packages/transformers/generation_beam_search.py:277: TracerWarning: Converting a tensor to a Python boolean might cause the trace to be incorrect. We can't record the data flow of Python values, so this value will be treated as a constant in the future. This means that the trace might not generalize to other inputs!
self._done[batch_idx] = self._done[batch_idx] or beam_hyp.is_done(
/home/.local/lib/python3.9/site-packages/transformers/generation_beam_search.py:278: TracerWarning: Converting a tensor to a Python number might cause the trace to be incorrect. We can't record the data flow of Python values, so this value will be treated as a constant in the future. This means that the trace might not generalize to other inputs!
next_scores[batch_idx].max().item(), cur_len
/home/.local/lib/python3.9/site-packages/transformers/generation_utils.py:2053: TracerWarning: Converting a tensor to a Python boolean might cause the trace to be incorrect. We can't record the data flow of Python values, so this value will be treated as a constant in the future. This means that the trace might not generalize to other inputs!
if beam_scorer.is_done or stopping_criteria(input_ids, scores):
/home/.local/lib/python3.9/site-packages/transformers/generation_stopping_criteria.py:113: TracerWarning: Converting a tensor to a Python boolean might cause the trace to be incorrect. We can't record the data flow of Python values, so this value will be treated as a constant in the future. This means that the trace might not generalize to other inputs!
return any(criteria(input_ids, scores) for criteria in self)
/home/.local/lib/python3.9/site-packages/transformers/generation_beam_search.py:258: TracerWarning: Converting a tensor to a Python number might cause the trace to be incorrect. We can't record the data flow of Python values, so this value will be treated as a constant in the future. This means that the trace might not generalize to other inputs!
next_score.item(),
/home/.local/lib/python3.9/site-packages/transformers/generation_beam_search.py:382: TracerWarning: Converting a tensor to a Python boolean might cause the trace to be incorrect. We can't record the data flow of Python values, so this value will be treated as a constant in the future. This means that the trace might not generalize to other inputs!
self.worst_score = min(score, self.worst_score)
/home/.local/lib/python3.9/site-packages/transformers/generation_beam_search.py:375: TracerWarning: Converting a tensor to a Python boolean might cause the trace to be incorrect. We can't record the data flow of Python values, so this value will be treated as a constant in the future. This means that the trace might not generalize to other inputs!
if len(self) < self.num_beams or score > self.worst_score:
/home/.local/lib/python3.9/site-packages/transformers/generation_beam_search.py:378: TracerWarning: Converting a tensor to a Python boolean might cause the trace to be incorrect. We can't record the data flow of Python values, so this value will be treated as a constant in the future. This means that the trace might not generalize to other inputs!
sorted_next_scores = sorted([(s, idx) for idx, (s, _) in enumerate(self.beams)])
/home/.local/lib/python3.9/site-packages/transformers/generation_beam_search.py:303: TracerWarning: Converting a tensor to a Python boolean might cause the trace to be incorrect. We can't record the data flow of Python values, so this value will be treated as a constant in the future. This means that the trace might not generalize to other inputs!
if self._done[batch_idx]:
/home/.local/lib/python3.9/site-packages/transformers/generation_beam_search.py:321: TracerWarning: Converting a tensor to a Python boolean might cause the trace to be incorrect. We can't record the data flow of Python values, so this value will be treated as a constant in the future. This means that the trace might not generalize to other inputs!
sorted_hyps = sorted(beam_hyp.beams, key=lambda x: x[0])
/home/.local/lib/python3.9/site-packages/transformers/generation_beam_search.py:326: TracerWarning: Using len to get tensor shape might cause the trace to be incorrect. Recommended usage would be tensor.shape[0]. Passing a tensor of different shape might lead to errors or silently give incorrect results.
sent_lengths[self.num_beam_hyps_to_keep * i + j] = len(best_hyp)
/home/.local/lib/python3.9/site-packages/transformers/generation_beam_search.py:333: TracerWarning: Converting a tensor to a Python number might cause the trace to be incorrect. We can't record the data flow of Python values, so this value will be treated as a constant in the future. This means that the trace might not generalize to other inputs!
sent_max_len = min(sent_lengths.max().item() + 1, max_length)
/home/.local/lib/python3.9/site-packages/transformers/generation_beam_search.py:336: TracerWarning: Converting a tensor to a Python number might cause the trace to be incorrect. We can't record the data flow of Python values, so this value will be treated as a constant in the future. This means that the trace might not generalize to other inputs!
if sent_lengths.min().item() != sent_lengths.max().item():
/home/.local/lib/python3.9/site-packages/transformers/generation_beam_search.py:343: TracerWarning: Converting a tensor to a Python boolean might cause the trace to be incorrect. We can't record the data flow of Python values, so this value will be treated as a constant in the future. This means that the trace might not generalize to other inputs!
if sent_lengths[i] < max_length:
/home/.local/lib/python3.9/site-packages/torch/onnx/symbolic_helper.py:716: UserWarning: allowzero=0 by default. In order to honor zero value in shape use allowzero=1
warnings.warn("allowzero=0 by default. In order to honor zero value in shape use allowzero=1")
Traceback (most recent call last):
File "/home/repos/translation-service/scripts/onnx_main.py", line 24, in <module>
export_to_onnx(model, 'example.onnx', inputs, device)
File "/home/repos/translation-service/models/model_helper.py", line 14, in export_to_onnx
torch.onnx.export(model, inputs, path, export_params=True, opset_version=14, do_constant_folding=False,
File "/home/.local/lib/python3.9/site-packages/torch/onnx/__init__.py", line 316, in export
return utils.export(model, args, f, export_params, verbose, training,
File "/home/.local/lib/python3.9/site-packages/torch/onnx/utils.py", line 107, in export
_export(model, args, f, export_params, verbose, training, input_names, output_names,
File "/home/.local/lib/python3.9/site-packages/torch/onnx/utils.py", line 724, in _export
_model_to_graph(model, args, verbose, input_names,
File "/home/.local/lib/python3.9/site-packages/torch/onnx/utils.py", line 497, in _model_to_graph
graph = _optimize_graph(graph, operator_export_type,
File "/home/.local/lib/python3.9/site-packages/torch/onnx/utils.py", line 216, in _optimize_graph
graph = torch._C._jit_pass_onnx(graph, operator_export_type)
File "/home/.local/lib/python3.9/site-packages/torch/onnx/__init__.py", line 373, in _run_symbolic_function
return utils._run_symbolic_function(*args, **kwargs)
File "/home/.local/lib/python3.9/site-packages/torch/onnx/utils.py", line 1028, in _run_symbolic_function
symbolic_fn = _find_symbolic_in_registry(domain, op_name, opset_version, operator_export_type)
File "/home/.local/lib/python3.9/site-packages/torch/onnx/utils.py", line 982, in _find_symbolic_in_registry
return sym_registry.get_registered_op(op_name, domain, opset_version)
File "/home/.local/lib/python3.9/site-packages/torch/onnx/symbolic_registry.py", line 125, in get_registered_op
raise RuntimeError(msg)
RuntimeError: Exporting the operator resolve_conj to ONNX opset version 14 is not supported. Please feel free to request support or submit a pull request on PyTorch GitHub.
```
## Expected behavior
A successful ONNX export.
There are numerous warnings. I cannot test if these are problematic, because the model as a whole does not export. I want to resolve the final RuntimeError, which refers to the resolve_conj pytorch operation. I've no idea where in the model this operation is used.
If anyone has a clear cut solution to exporting these models, I'd be happy to hear it. But for starters, I would very much like to see where in the source code of these models this function is, since I don't seem to be able to find it.
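For reference, a minimal retry with a lower opset (as suggested in the reply that follows) would look roughly like this; `model` and `example_inputs` stand in for the objects built earlier in the script and are placeholders here:
```python
import torch

# model / example_inputs correspond to the traced model and inputs from the script above
torch.onnx.export(
    model,
    example_inputs,
    "example.onnx",
    export_params=True,
    opset_version=13,  # lower opset, as suggested in the reply below
    do_constant_folding=False,
)
```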
| 04-05-2022 11:56:26 | 04-05-2022 11:56:26 | Why are you using `opset=14` ?
I think that 12 or 13 would be enough and would work.<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored. |
transformers | 16,603 | closed | Convert PyTorch Model to Hugging Face model | I have looked at many resources but I still seem to have issues.
I have a "model.pt" file which I got from finetuning a BERT model (with additional custom layers added). I want to upload this to the Hugging Face Hub. How can I convert the `.pt` model to files/model that can be used on the Hugging Face Hub? | 04-05-2022 11:06:56 | 04-05-2022 11:06:56 | Hi, we have a guide which explains how to share custom PT models, which should help with this: https://huggingface.co/docs/transformers/custom_models<|||||>Thanks for your response. However, I do not seem to grasp the resource. It looks like it requires implementing the code from the start. In my own case, I already have a `.pt` file.
I am not sure what part of the docs handles this.
I will appreciate any further help.<|||||>Hi @Freemanlabs , it would be best to use the [forum](https://discuss.huggingface.co/) for this question. We use issues for bug reports and feature requests. Thanks!<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored. |
transformers | 16,602 | closed | don't load state_dict twice when using low_cpu_mem_usage in from_pretrained | # What does this PR do?
In `from_pretrained` the `state_dict` is loaded twice when `low_cpu_mem_usage` is `True`, which is not required since the `state_dict` is already loaded before as we can here.
https://github.com/huggingface/transformers/blob/21decb7731e998d3d208ec33e5b249b0a84c0a02/src/transformers/modeling_utils.py#L1795-L1797
So, it's fine to remove it there because when is_sharded=False, the state_dict is loaded at line 1797 | 04-05-2022 10:36:58 | 04-05-2022 10:36:58 | _The documentation is not available anymore as the PR was closed or merged._ |
transformers | 16,601 | closed | version 4.2.0 vs newer version | I have this problem with version 4.2.0+ but it is OK for lower versions. Can anyone help me, because other parts of my project use version 4.10+?

 | 04-05-2022 10:22:35 | 04-05-2022 10:22:35 | Hi @PNMinh286, the information you provided is not enough for us to help you, as we don't know exactly what you're trying to do :) Ideally, a script with which we can reproduce the issue should be provided. Have a look at our [issue guidelines](https://github.com/huggingface/transformers/blob/main/ISSUES.md).<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored. |
transformers | 16,600 | closed | Specifying num_labels in from_pretrained without id2label leads to unexpected error | This is a "soft" bug report. I _think_ it is the intended usage, but it is not very intuitive in terms of "which arguments go where" in a from_pretrained config call.
To use hyperparameter search, I wrote the following `model_init`. Our pipeline is dynamic, and allows for both classification and regression (num_labels=1) problems. In the latter case, `label2id` and `id2label` are None, which seems natural but which also is the root of the current issue.
```python
from os import PathLike
from typing import Dict, Optional, Union
from transformers import BertConfig, AutoModelForSequenceClassification
def model_init(model_name_or_path: Union[str, PathLike],
num_labels: int,
label2id: Optional[Dict[str, int]] = None,
id2label: Optional[Dict[int, str]] = None):
config = BertConfig.from_pretrained(model_name_or_path,
num_labels=num_labels,
label2id=label2id,
id2label=id2label)
# If you do not include this, you'll get an error
# config.num_labels = num_labels
return AutoModelForSequenceClassification.from_pretrained(model_name_or_path, config=config)
if __name__ == "__main__":
model_init("bert-base-cased", 1) # also errors with any other integers - due to lack of label mappings
```
I found that when using regression with its respective arguments (as in the example), the code would error out on me. Although I later found that the issue is not num_labels=1 but the lack of id2label maps. Trace:
```
Traceback (most recent call last):
File "C:\Users\bramv\AppData\Roaming\JetBrains\PyCharm2021.3\scratches\scratch_18.py", line 21, in <module>
model_init("bert-base-cased", 1)
File "C:\Users\bramv\AppData\Roaming\JetBrains\PyCharm2021.3\scratches\scratch_18.py", line 17, in model_init
return AutoModelForSequenceClassification.from_pretrained(model_name_or_path, config=config)
File "C:\Users\bramv\.virtualenvs\cefr-de-L5vhpNeN\lib\site-packages\transformers\models\auto\auto_factory.py", line 447, in from_pretrained
return model_class.from_pretrained(pretrained_model_name_or_path, *model_args, config=config, **kwargs)
File "C:\Users\bramv\.virtualenvs\cefr-de-L5vhpNeN\lib\site-packages\transformers\modeling_utils.py", line 1493, in from_pretrained
model = cls(config, *model_args, **model_kwargs)
File "C:\Users\bramv\.virtualenvs\cefr-de-L5vhpNeN\lib\site-packages\transformers\models\bert\modeling_bert.py", line 1504, in __init__
self.num_labels = config.num_labels
File "C:\Users\bramv\.virtualenvs\cefr-de-L5vhpNeN\lib\site-packages\transformers\configuration_utils.py", line 252, in __getattribute__
return super().__getattribute__(key)
File "C:\Users\bramv\.virtualenvs\cefr-de-L5vhpNeN\lib\site-packages\transformers\configuration_utils.py", line 391, in num_labels
return len(self.id2label)
TypeError: object of type 'NoneType' has no len()
```
To investigate what is happening, I cloned the library and printed out 1. whether num_labels is available in the kwargs passed to init; 2. when the num_labels.setter is called with which value. I print a newline at the end of the __init__ call.
```
num_labels in kwargs False
num_labels setter called 2
num_labels setter called 1
num_labels in kwargs False
num_labels setter called 2
num_labels in kwargs False
num_labels setter called 2
```
What is going on here? Why is init called three times? In the second one, it seems that for a brief moment the setter is called with the correct value (1), but then it gets overwritten with the default of 2 again (?)
What confuses me the most is that despite that the setter is clearly being called, which in turn should also set the id2label map, I still get the `'NoneType' has no len()` error on the final config.
Setting `config.num_labels = num_labels` solves the issue, which explicitly calls the setter and creates the `id2label` map. But that does not feel very intuitive because I set num_labels already in the `from_pretrained` call, where I _can_ also set the label mappings. So I don't run into problems when I specify the label2id/id2label maps (as expected).
**tl;dr** you can specify id2label maps in from_pretrained() but only specifying `num_labels` won't work. You have to explicitly call the num_labels setter to ensure that the label maps are created. It would be more user-friendly if this could be done on-the-fly without the extra manual `config.num_labels = num_labels` step.
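For reference, a minimal sketch of the two workarounds discussed here (no new API, this just restates the snippet above):
```python
from transformers import BertConfig

# Workaround 1: set num_labels after loading, so the setter regenerates id2label/label2id
config = BertConfig.from_pretrained("bert-base-cased")
config.num_labels = 1

# Workaround 2: pass only num_labels and omit id2label/label2id entirely
config = BertConfig.from_pretrained("bert-base-cased", num_labels=1)
```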
## Environment info
- `transformers` version: 4.17.0
- Platform: Windows-10-10.0.19041-SP0
- Python version: 3.9.2
- PyTorch version (GPU?): 1.11.0+cu113 (True)
@sgugger @LysandreJik
Sorry for the long post, and for tagging. Wasn't sure who to tag for this so feel free to pass it on to someone else.
| 04-05-2022 10:18:22 | 04-05-2022 10:18:22 | This is a bit tricky and I'm not entirely sure it's a bug. The crux of the issue is that you are technically doing this:
```
from transformers import BertConfig
config = BertConfig.from_pretrained(
"bert-base-cased",
num_labels=1,
label2id=None,
id2label=None,
)
```
So a `BertConfig` is instantiated from the checkpoint. Then the `num_labels` are changed to be set at 1, then the `label2id` is changed to be set as `None` and likewise for the `id2label`.
When then requesting `num_labels` (which is not a direct attribute if you look in the code), the lib tries to fetch `len(id2label)`, which is `None` since you forced that by passing the kwarg.
We could ignore `None` values in kwargs passed along to `BertConfig.from_pretrained`, but the idea (documented) is that what you pass erases the value of the checkpoint, so the behavior is not entirely wrong.<|||||>Thanks for the quick reply!
Isn't the order of setting the arguments reversed? First the label maps are set (None as a default), and then num_labels is set. When the maps are None, num_labels is taken from kwargs.
https://github.com/huggingface/transformers/blob/d55fcbcc50a27f4f5ad8a5c83786905e55212fa0/src/transformers/configuration_utils.py#L304-L311
This in turn calls the setter, which generates the label maps if they do not exist:
https://github.com/huggingface/transformers/blob/d55fcbcc50a27f4f5ad8a5c83786905e55212fa0/src/transformers/configuration_utils.py#L388-L399
So this is what I had expected to happen with my example:
1. label maps are None
2. -> num_labels is taken from kwargs and `set`
3. -> num_labels.setter is called
4. -> because the given label maps are None, they are re-generated for num_labels==1
But this doesn't seem to be happening and I don't quite follow why not. Is it because the label maps are populated with the values from the pretrained model, overwriting my own kwargs instead of the other way around? So these won't actually be None, but the values from the pretrained config?
https://github.com/huggingface/transformers/blob/d55fcbcc50a27f4f5ad8a5c83786905e55212fa0/src/transformers/configuration_utils.py#L304-L305<|||||>You are reading the wrong code. When using `from_pretrained` with kwargs as you do:
- the config is fully instantiated from the checkpoint
- then each kwarg in order is set in this instantiated config<|||||>You're absolutely right.
Is it an idea to allow _either_ the id2label map _or_ num_labels? (in init or at least in the pretrained call) It seems to me that id2label is automatically generated anyway if not given, based on num_labels, and conversely num_labels is actually the number of labels in the id2label mapping. So the init should only ever require one of those two if I'm not mistaken.
That may take away a lot of confusion.<|||||>We could add defensive checks, that's a good idea (at least that the two args are consistent when passed together).<|||||>Did that in the PR mentioned above if you want to have a look.<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored. |
transformers | 16,599 | closed | [Doctests] Correct filenaming | # What does this PR do?
Removes duplicated lines in doc tests file and adds leading language identifier.
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
| 04-05-2022 07:09:38 | 04-05-2022 07:09:38 | This was probably forgotten in https://github.com/huggingface/transformers/pull/16518 cc @sgugger <|||||>_The documentation is not available anymore as the PR was closed or merged._ |
transformers | 16,598 | closed | Only get generated text from Bart Generate | I know BART can be used to fill in multiple masked tokens with only one [mask], but how can I get only the filled-in/generated text, not the whole sentence? If it is a single token I can figure it out, but how do I get the span of text for multiple masked tokens?
Thank you!
 | 04-05-2022 06:28:44 | 04-05-2022 06:28:44 | Hi @musitafa0032 ! The best place to ask this question would be the [forum](https://discuss.huggingface.co/). We use issues for bug reports and feature requests. Thanks!<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored. |
transformers | 16,597 | closed | [deepspeed] fix typo, adjust config name | Fixes https://github.com/huggingface/transformers/issues/16596 - use the correct config name for bfloat16 config section.
@sgugger | 04-05-2022 05:44:38 | 04-05-2022 05:44:38 | _The documentation is not available anymore as the PR was closed or merged._ |
transformers | 16,596 | closed | [BUG] bf16 incorrectly configured in src/transformers/deepspeed.py | ## Environment info
- `transformers` version: 4.17.0
- Platform: Ubuntu
- Python version: 3.8
- PyTorch version (GPU?): 8x A10
- Tensorflow version (GPU?):
- Using GPU in script?: Yes
- Using distributed or parallel set-up in script?:
### Who can help
@stas00
Models: T5
## Information
I am trying to fine-tune T5 using the Hugging Face Trainer in bf16 using its built-in DeepSpeed integration. While I added `bf16=True` and `"bf16": {"enabled": true}` to the TrainingArguments and DeepSpeed config respectively, this flag makes no difference in GPU memory usage or training speed. Hence, I found a typo in src/transformers/deepspeed.py, at line 253:
```python
if self.is_true("bfoat16.enabled"):
    self._dtype = torch.bfloat16
```
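The next sentence points out that both the spelling and the section name are off; a corrected check would presumably look something like this rough sketch (the actual fix is in the PR linked in the replies, and `is_true` is simply the helper already used above, whose exact behaviour I am assuming):
```python
# accept both the current "bf16" config section and the legacy "bfloat16" spelling
if self.is_true("bf16.enabled") or self.is_true("bfloat16.enabled"):
    self._dtype = torch.bfloat16
```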
Instead of `bfoat16.enabled`, it should be `bfloat16.enabled`. Yet that in itself is also outdated, as the latest DeepSpeed docs say that `bf16` not `bfloat16` should be in the DeepSpeed config. | 04-05-2022 05:27:16 | 04-05-2022 05:27:16 | Oh, my bad, thank you for noticing the misspelling, Michael.
And it indeed was renamed at the very last moment to `bf16` to match `fp16`, but the old `bfloat16` config should work too.
Please try with this fix https://github.com/huggingface/transformers/pull/16597 |
transformers | 16,595 | closed | GPT Neo/J Padding Side Results in Different Generation Outputs | ## Environment info
- `transformers` version: 4.17.0
- Platform: Linux-5.4.144+-x86_64-with-Ubuntu-18.04-bionic
- Python version: 3.7.13
- PyTorch version (GPU?): 1.10.0+cu111 (False)
- Tensorflow version (GPU?): 2.8.0 (False)
- Flax version (CPU?/GPU?/TPU?): not installed (NA)
- Jax version: not installed
- JaxLib version: not installed
- Using GPU in script?: No
- Using distributed or parallel set-up in script?: None
### Who can help
@patil-suraj @patrickvonplaten
<!-- Your issue will be replied to more quickly if you can figure out the right person to tag with @
If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**.
Please tag fewer than 3 people.
Models:
- ALBERT, BERT, XLM, DeBERTa, DeBERTa-v2, ELECTRA, MobileBert, SqueezeBert: @LysandreJik
- T5, Pegasus, EncoderDecoder: @patrickvonplaten
- Blenderbot, MBART, BART, Marian, Pegasus: @patil-suraj
- Reformer, TransfoXL, XLNet, FNet: @patrickvonplaten
- Longformer, BigBird: @ydshieh
- FSMT: @stas00
- Funnel: @sgugger
- GPT-2, GPT: @patil-suraj, @patrickvonplaten, @LysandreJik
- RAG, DPR: @patrickvonplaten, @lhoestq
- TensorFlow: @Rocketknight1
- JAX/Flax: @patil-suraj
- TAPAS, LayoutLM, LayoutLMv2, LUKE, ViT, BEiT, DEiT, DETR, CANINE: @NielsRogge
- GPT-Neo, GPT-J, CLIP: @patil-suraj
- Wav2Vec2, HuBERT, SpeechEncoderDecoder, UniSpeech, UniSpeechSAT, SEW, SEW-D, Speech2Text: @patrickvonplaten, @anton-l
If the model isn't in the list, ping @LysandreJik who will redirect you to the correct contributor.
Library:
- Benchmarks: @patrickvonplaten
- Deepspeed: @stas00
- Ray/raytune: @richardliaw, @amogkam
- Text generation: @patrickvonplaten @narsil
- Tokenizers: @SaulLu
- Trainer: @sgugger
- Pipelines: @Narsil
- Speech: @patrickvonplaten, @anton-l
- Vision: @NielsRogge, @sgugger
Documentation: @sgugger
Model hub:
- for issues with a model, report at https://discuss.huggingface.co/ and tag the model's creator.
HF projects:
- datasets: [different repo](https://github.com/huggingface/datasets)
- rust tokenizers: [different repo](https://github.com/huggingface/tokenizers)
Examples:
- maintained examples (not research project or legacy): @sgugger, @patil-suraj
For research projetcs, please ping the contributor directly. For example, on the following projects:
- research_projects/bert-loses-patience: @JetRunner
- research_projects/distillation: @VictorSanh
-->
## Information
I find that for GPT Neo and GPT J the padding side has a great effect on the quality of the generation. More specifically it seems that while padding on the left results in decent generation output, padding on the right causes the model to generate empty spaces or repeating characters (ex. "AAAAAAA"). I am wondering why padding has this effect on generation.
Model I am using (Bert, XLNet ...): GPT Neo 125M (although I also observed the same issue for larger variants of Neo and GPT-J)
The problem arises when using:
* [ ] the official example scripts: (give details below)
* [x] my own modified scripts: (give details below)
The tasks I am working on is:
* [ ] an official GLUE/SQUaD task: (give the name)
* [x] my own task or dataset: (give details below)
## To reproduce
Steps to reproduce the behavior:
Run this [Colab](https://colab.research.google.com/drive/1LdJ4kt_egJprxmpWZJSwkbvx71kN5RFv?usp=sharing)
or
1. Load any of the GPT Neo or GPT J models
2. Compare the results of padding inputs on the left vs right on the generation outcome (a minimal sketch of such a comparison follows below)
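A minimal sketch of such a comparison (this assumes the 125M GPT-Neo checkpoint and greedy decoding; it is not the original Colab):
```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model = AutoModelForCausalLM.from_pretrained("EleutherAI/gpt-neo-125M")
tokenizer = AutoTokenizer.from_pretrained("EleutherAI/gpt-neo-125M")
tokenizer.pad_token = tokenizer.eos_token  # GPT-Neo has no pad token by default

prompts = ["The capital of France is", "My favourite food is"]

for side in ["left", "right"]:
    tokenizer.padding_side = side
    inputs = tokenizer(prompts, return_tensors="pt", padding=True)
    outputs = model.generate(**inputs, max_new_tokens=20)
    print(side, tokenizer.batch_decode(outputs, skip_special_tokens=True))
```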
## Expected behavior
I expect that it shouldn't matter whether we pad the input from the left or right, but it seems like that has an effect on generation results.
| 04-05-2022 00:17:45 | 04-05-2022 00:17:45 | Hi @anas-awadalla !
This is because these are auto-regressive models, which generate next tokens based on previous tokens. So if the inputs are padded to the right the model will receive padded tokens as context, which is not useful during generation.
Because of this for generation with any auto-regressive models, the input is padded to the left.<|||||>Thanks for your reply! Did you mean the input should be padded to the left? In both cases the input is passed in as part of the context so can you clarify why one padding strategy is better? Also in the generate function we pass input ids and an attention mask, shouldnโt the attention mask indicate what parts of the input is padding tokens and the model should not consider those as part of the context?<|||||>> Thanks for your reply! Did you mean the input should be padded to the left? In both cases the input is passed in as part of the context so can you clarify why one padding strategy is better? Also in the generate function we pass input ids and an attention mask, shouldnโt the attention mask indicate what parts of the input is padding tokens and the model should not consider those as part of the context?
Bumping this :)<|||||>I meet the same problem... Even padding to the left still results in worse performance than not padding.<|||||>Could you post a code snippet so we could take a look ?<|||||>@patil-suraj Could you clarify your response above? I closed the issue previously because padding on the left works well for me but would still like to understand why that's the case. |
transformers | 16,594 | closed | Why is wandb being logged in and how to turn it off? | I do not have a wandb profile. I have not logged into anything and I don't want to use wandb. I am simply following this notebook to finetune a T5 model for summarization https://colab.research.google.com/github/huggingface/notebooks/blob/master/examples/summarization.ipynb#scrollTo=X4cRE8IbIrIV (I am not running the huggingface login and the git cells)
The notebook was working fine till a day before and I was storing checkpoints but now when I try to run either from the checkpoint or by loading t5-small, I get asked for the wandb API key on running the trainer. I don't even have a profile.
<img width="1013" alt="Screen Shot 2022-04-04 at 7 19 45 PM" src="https://user-images.githubusercontent.com/78582210/161647987-eeb3066e-72d9-4cad-9a7e-e2810073984f.png">
When I tried wandb off, it again asked me for the API key.
I tried os.environ["WANDB_DISABLED"] = "true" but it doesn't work.
trainer.train() runs for 501 iterations and then gives the following error :
<img width="754" alt="Screen Shot 2022-04-04 at 7 04 49 PM" src="https://user-images.githubusercontent.com/78582210/161646555-26741d97-5c0e-44c5-9461-8dd93a35ab99.png">
I don't understand why it suddenly started asking me for this and how to resolve it.
Please help
@sgugger | 04-04-2022 23:07:17 | 04-04-2022 23:07:17 | hey @Nikita-Salkar I work at Weights & Biases, you can turn off all external logger logging, including wandb logging by passing `report_to="none"` in your `Seq2SeqTrainingArguments`.
You might have noticed the following warning when setting up your TrainingArguments.
```
The default value for the training argument `--report_to` will change in v5 (from all installed integrations to none). In v5, you will need to use `--report_to all` to get the same behavior as now. You should start updating your code and make this info disappear :-)
```
Right now the default is to run all loggers that you have installed, so maybe you installed wandb on your machine since the last time you ran the script?
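For reference, a minimal illustration of that setting (all other argument values here are placeholders):
```python
from transformers import Seq2SeqTrainingArguments

training_args = Seq2SeqTrainingArguments(
    output_dir="out",      # placeholder
    report_to="none",      # disables wandb and all other logging integrations
)
```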
If you would like to log with wandb, best practice would already be to start setting `report_to="wandb"`<|||||>Hello @morganmcg1 !! Thank you so very much! That worked! My first error says "Automatic weights & biases logging enabled..." so I guess it automatically logs me in because I did not install wandb. Grateful for your help! Thank you!<|||||>@Nikita-Salkar glad it worked :) We can probably close this issue right?<|||||>@morganmcg1 Yes yes. Thank you! |
transformers | 16,593 | closed | added type hints to CTRL pytorch | # What does this PR do?
I added type annotations for CTRL (PT) as described in #16059
Fixes # (issue)
## Before submitting
- [x] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
@sgugger | 04-04-2022 22:51:36 | 04-04-2022 22:51:36 | _The documentation is not available anymore as the PR was closed or merged._ |
transformers | 16,592 | closed | Qdqbert example add benchmark script with ORT-TRT | # What does this PR do?
add a benchmark script using ORT-TRT
Fixes # (issue)
## Before submitting
- [x] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
| 04-04-2022 21:21:53 | 04-04-2022 21:21:53 | _The documentation is not available anymore as the PR was closed or merged._<|||||>Let me know if this is ready to merge for you and I'll happily merge it.<|||||>> Let me know if this is ready to merge for you and I'll happily merge it.
It's ready to merge now. Please feel free to go ahead. Thank you very much!! |
transformers | 16,591 | closed | Fix CI: test_inference_for_pretraining in ViTMAEModelTest | # What does this PR do?
Fix CI (scheduled): `test_inference_for_pretraining` in `ViTMAEModelTest`: incorrect device
https://github.com/huggingface/transformers/runs/5809883645?check_suite_focus=true | 04-04-2022 19:12:55 | 04-04-2022 19:12:55 | _The documentation is not available anymore as the PR was closed or merged._ |
transformers | 16,590 | closed | Fix TFTransfoXLLMHeadModel outputs | # What does this PR do?
Fix the outputs of `TFTransfoXLLMHeadModel` (in the case without `labels`) - current TF returns `softmax_output` while PT returns `prediction_scores`:
- Current PT
https://github.com/huggingface/transformers/blob/6f9d8dc1567889b073b91059c9b19e9a6813abfa/src/transformers/models/transfo_xl/modeling_transfo_xl.py#L1119
and
https://github.com/huggingface/transformers/blob/6f9d8dc1567889b073b91059c9b19e9a6813abfa/src/transformers/models/transfo_xl/modeling_transfo_xl.py#L1138-L1140
- Current TF
https://github.com/huggingface/transformers/blob/6f9d8dc1567889b073b91059c9b19e9a6813abfa/src/transformers/models/transfo_xl/modeling_tf_transfo_xl.py#L1005-L1006
## Remarks:
- The case with `labels` is much more complicated - to be addressed in the future.
- The current PT/TF equivalence test has a bit flaw and doesn't detect this issue. A WIP PR #16557 is on its way (with other enhancements)! | 04-04-2022 16:21:40 | 04-04-2022 16:21:40 | _The documentation is not available anymore as the PR was closed or merged._ |
transformers | 16,589 | closed | [FlaxSpeechEncoderDecoderModel] More Rigorous PT-Flax Equivalence Tests | Adds more rigorous PyTorch-Flax equivalence tests for the FlaxSpeechEncoderDecoderModel. Namely, this PR adds equivalence tests for the intermediate hidden-states.
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [x] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed.
| 04-04-2022 16:11:22 | 04-04-2022 16:11:22 | _The documentation is not available anymore as the PR was closed or merged._ |
transformers | 16,588 | closed | RFC: `torch==1.12` will toggle `torch.backends.matmul.allow_tf32` to `False` - what should we do? | Ampere GPUs added a new mode called [TF32](https://developer.nvidia.com/blog/accelerating-ai-training-with-tf32-tensor-cores/). Pytorch created a new flag to support the TF32 mode enabling using `torch.backends.matmul.allow_tf32` which has been `True` by default in pytorch since it was added.
Having this mode on means that matrix multiplications when inputs were in FP32 were actually done in TF32, which made the math significantly faster, albeit less precise (TF32 has the dynamic range of BF16, and the precision of FP16).
The NVIDIA engineers have done many experiments and have found that Deep Learning training accuracy doesn't get impacted for worse by using TF32 instead of FP32 (and often is better), but it provides a significant speed up. It's easy to see from the [A100 spec](https://www.nvidia.com/en-us/data-center/a100/) why:
```
FP32 | 19.5 TFLOPS
TF32 | 156 TFLOPS
```
(numbers with no sparsity)
And the accuracy tables are:
 from [Accelerating AI Training with NVIDIA TF32 Tensor Cores](https://developer.nvidia.com/blog/accelerating-ai-training-with-tf32-tensor-cores/)
However, the lost precision for some non-DL applications is a problem. Therefore starting from pytorch 1.12 (already in nightly shortly) the default for `torch.backends.matmul.allow_tf32` will be `False`, which won't make the training accuracy worse, but it'll make fp32 training significantly slower. So if you believe we should remain consistent/back compatible - most likely we should turn it back on for pt>1.11:
```
if version.parse(torch.__version__) > version.parse("1.11"):
torch.backends.matmul.allow_tf32 = True
```
at a single point which always gets executed for pytorch users.
The question is whether this should be done:
1. Not at all - let the user sort it out
2. Transformers-wide
3. Only in HF Trainer (and Accelerate) and if not done add a new flag to let the user control the behavior
Additionally other use-modes should be made in sync:
1. PyTorch/XLA (some other flag?)
Currently tf32 and how to flip it on/off is documented here: https://huggingface.co/docs/transformers/performance#tf32
A detailed discussion with multiple links to other related resources is here: https://dev-discuss.pytorch.org/t/pytorch-and-tensorfloat32/504
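For completeness, flipping it by hand currently looks roughly like this; note that in released PyTorch the switches live under `torch.backends.cuda` and `torch.backends.cudnn`:
```python
import torch

# matmuls on Ampere GPUs: True = use TF32 (faster), False = full FP32 precision
torch.backends.cuda.matmul.allow_tf32 = True
# convolutions go through cuDNN and have their own switch
torch.backends.cudnn.allow_tf32 = True
```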
@LysandreJik, @sgugger, @patrickvonplaten, @patil-suraj | 04-04-2022 16:03:29 | 04-04-2022 16:03:29 | You don't have to condition the `torch.backends.matmul.allow_tf32 = True` on torch version, on previous pytorch version it'll just be a no-op.<|||||>The main reason for the conditional suggestion was to be self-documenting, but w/o the conditional this code will fail in older pytorch, for example:
```
$ python -c "import torch; print(torch.__version__); torch.backends.matmul.allow_tf32 = True"
1.8.1+cu102
Traceback (most recent call last):
File "<string>", line 1, in <module>
AttributeError: module 'torch.backends' has no attribute 'matmul'
```
<|||||>@mruberry shared on slack, that jax has a similar flag https://github.com/google/jax/pull/6143 should you want to make this behavior consistent across all 3 frameworks and/or to make it configurable. And they too have a default that not appreciated by all who expect fp32 to be fp32: https://github.com/google/jax/issues/7010
<|||||>Just to understand:
PyTorch added tf32, set it to True by default (from which version to which version?) and now reverted the default to False?
I think I'm in favor of **not** overwriting ``torch.backends.matmul.allow_tf32 = True``, but instead add good documentation to let the user decide what to do here. Also happy to add a `tf32` flag to the Trainer which I would also set to False though. Think overwriting `torch.backends.matmul.allow_tf32 = True` gets us out of sync with PyTorch and might lead to unexpected behavior no?
E.g. if a user does:
```python
import torch
torch.backends.matmul.allow_tf32 = False
import transformers
....
```
Also I think it's a good rule of thumb that in PyTorch by default, always the highest precision, lowest speed is enabled.
Think we don't have to or shouldn't care about JAX here really as the default precision / device behavior is already very different (e.g. JAX uses lowest precision on TPU by default, uses GPU/TPU by default in contrast to PyTorch)<|||||>Tensorflow has it active by default and has a flag to control it ([docs](https://www.tensorflow.org/api_docs/python/tf/config/experimental/enable_tensor_float_32_execution)). I'd say we don't need to touch it in TF, but happy to go with a solution that minimizes PT-TF interface differences.<|||||>This is a very complicated as on the one hand, we don't want to change the PyTorch default and surprise the user, but on the other hand we don't want most of our beginner users to experience degraded performance in training on most GPUs without them knowing why (as this change will be hidden in PyTorch release notes).
I'm also in favor of not touching PyTorch's default (the same way we don't turn on things link `torch.backends.cudnn.benchmark` or `torch.backends.cudnn.deterministic`) and leave it to the user, but we do need proper documentation. Also in favor of having a `TrainingArguments` flag to make it easier for the user to turn on in our examples.<|||||>> Just to understand: PyTorch added tf32, set it to True by default (from which version to which version?) and now reverted the default to False?
Small point of clarification: we have not changed the default to False at this time, but expect to do so in the future.
> Also I think it's a good rule of thumb that in PyTorch by default, always the highest precision, lowest speed is enabled.
Agreed! This is the principal that motivated this change.
We will also have user-facing documentation beyond the release notes when this change is part of PyTorch release, because we agree this change has the potential to be surprising and disruptive to current Ampere users. We'll also provide a recommendation for developers when making this change in nightlies.
<|||||>> Just to understand: PyTorch added tf32, set it to True by default (from which version to which version?) and now reverted the default to False?
I think it was added in pt-1.9, since 1.8 doesn't have this flag. see https://github.com/huggingface/transformers/issues/16588#issuecomment-1087813641
and the plan is to revert to `False` in pt-1.12, but, of course, this will happen sooner in pt-nightly.
So it has been set to `True` in pt: 1.9, 1.10, 1.11<|||||>> Also in favor of having a TrainingArguments flag to make it easier for the user to turn on in our examples.
I forgot that I added it already when we added bf16 support:
https://github.com/huggingface/transformers/blob/d57da992371c1c8258dc683275b4711dee949d20/src/transformers/training_args.py#L249-L251
Except it has no default setting, I guess we keep it that way w/o default?<|||||>> I'm also in favor of not touching PyTorch's default (the same way we don't turn on things link torch.backends.cudnn.benchmark or torch.backends.cudnn.deterministic) and leave it to the user, but we do need proper documentation.
Please review the current doc and suggest if anything needs to be changed:
https://huggingface.co/docs/transformers/performance#tf32
Thank you!<|||||>Yes, that doc is great. We should also expand a bit the documentation of the flag in `TrainingArguments` (and link to this doc) since this where users might get to TF32 the first time. That flag should indeed be left without default (and leave it to the current PyTorch version default)<|||||>Thank you for reviewing and the feedback, Sylvain.
Here is a PR: https://github.com/huggingface/transformers/pull/16674<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored.<|||||>FYI https://github.com/pytorch/pytorch/pull/76509 has landed, and while it may not be perfect we think it achieves the goal of giving users device agnostic control over fp32 matmul precision. Please don't hesitate to reach out if you have additional questions, I'll also be producing additional documentation on this change ahead of the PyTorch 1.12 release.<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored. |
transformers | 16,587 | closed | debert TypeError: _softmax_backward_data(): argument 'input_dtype' (position 4) must be torch.dtype, not Tensor | ```python
import os

from transformers import (
    AutoTokenizer,
    DataCollatorForLanguageModeling,
    DebertaConfig,
    DebertaForMaskedLM,
    LineByLineTextDataset,
    Trainer,
    TrainingArguments,
)


def get_training_corpus():
    # `data` is a DataFrame with a "msg" text column, defined elsewhere in my code
    dataset = list(data["msg"])
    for start_idx in range(0, len(dataset), 1000):
        samples = dataset[start_idx: start_idx + 1000]
        yield samples


DOWNLOADED_MODEL_PATH = 'model'
MODEL_NAME = 'microsoft/deberta-base'

if DOWNLOADED_MODEL_PATH == 'model':
    os.makedirs('model', exist_ok=True)
    # train a new tokenizer on the corpus and save it together with the config
    old_tokenizer = AutoTokenizer.from_pretrained(MODEL_NAME)
    training_corpus = get_training_corpus()
    BertTokenizer_tokenizer = old_tokenizer.train_new_from_iterator(training_corpus, 10000)
    BertTokenizer_tokenizer.save_pretrained('model')
    BertTokenizer_tokenizer = AutoTokenizer.from_pretrained('model')
    config = DebertaConfig.from_pretrained(MODEL_NAME)
    config.save_pretrained('model')
    # config = BertConfig.from_pretrained('aaa_model')
else:
    BertTokenizer_tokenizer = AutoTokenizer.from_pretrained('model')
    config = DebertaConfig.from_pretrained('model')

dataset = LineByLineTextDataset(
    tokenizer=BertTokenizer_tokenizer,
    file_path="aa.txt",
    block_size=512,
)
data_collator = DataCollatorForLanguageModeling(tokenizer=BertTokenizer_tokenizer, mlm=True, mlm_probability=0.15)
persian_model = DebertaForMaskedLM(config=config)

batch_size = 15
training_args = TrainingArguments(
    output_dir='aaa_debert_model',
    overwrite_output_dir=True,
    num_train_epochs=3,
    learning_rate=5e-05,
    per_device_train_batch_size=batch_size,
    save_steps=int(len(dataset) / batch_size),  # save_steps must be an integer
    save_total_limit=2,
    prediction_loss_only=True
)
trainer = Trainer(
    model=persian_model,
    args=training_args,
    data_collator=data_collator,
    train_dataset=dataset
)
trainer.train()
trainer.save_model('aaa_debert_model/pretrain')
BertTokenizer_tokenizer.save_pretrained('aaa_debert_model/aaa_model')
```
Error message:
```
trainer.train()
  File "C:\Users\HP\.conda\envs\tensorflow26-py37\lib\site-packages\transformers\trainer.py", line 1332, in train
    tr_loss_step = self.training_step(model, inputs)
  File "C:\Users\HP\.conda\envs\tensorflow26-py37\lib\site-packages\transformers\trainer.py", line 1909, in training_step
    loss.backward()
  File "C:\Users\HP\.conda\envs\tensorflow26-py37\lib\site-packages\torch\_tensor.py", line 363, in backward
    torch.autograd.backward(self, gradient, retain_graph, create_graph, inputs=inputs)
  File "C:\Users\HP\.conda\envs\tensorflow26-py37\lib\site-packages\torch\autograd\__init__.py", line 175, in backward
    allow_unreachable=True, accumulate_grad=True)  # Calls into the C++ engine to run the backward pass
  File "C:\Users\HP\.conda\envs\tensorflow26-py37\lib\site-packages\torch\autograd\function.py", line 253, in apply
    return user_fn(self, *args)
  File "C:\Users\HP\.conda\envs\tensorflow26-py37\lib\site-packages\transformers\models\deberta\modeling_deberta.py", line 114, in backward
    inputGrad = _softmax_backward_data(grad_output, output, self.dim, output)
TypeError: _softmax_backward_data(): argument 'input_dtype' (position 4) must be torch.dtype, not Tensor
```
BERT and RoBERTa work without this problem. Am I doing something wrong? Python 3.9, transformers 4.15.0, Windows 10, torch 1.11.0 | 04-04-2022 15:41:27 | 04-04-2022 15:41:27 | Yes I have the same issue... I am using transformers 4.17.0<|||||>Same issue. Is it a version issue?<|||||>
If anyone solved this problem, hope to give some advice, thanks๏ผ<|||||>> Yes I have the same issue... I am using transformers 4.17.0
I also tried 4.17.0, the same problem<|||||>same issue<|||||>Should be fixed by #16043 in 4.18.0.<|||||>Downgrade your pytorch to 1.10 and you won't have this problem. I'll make a pull request to get this fixed soon.<|||||>> Downgrade your pytorch to 1.10 and you won't have this problem. I'll make a pull request to get this fixed soon.
Got it, thanks.<|||||>Worked by downgrading pytorch to 1.10. Thanks!<|||||>Hi I am using transformers `4.14.1` and pytorch `1.12.0` and the issue persists<|||||>transformers should be > 4.19 |
transformers | 16,586 | closed | [SpeechEncoderDecoderModel] Correct Encoder Last Hidden State Output | Corrects the `Seq2SeqLMOutput` for the PyTorch SpeechEncoderDecoderModel to return the encoder hidden-state **after** the (optional) encoder-decoder projection layer `enc_to_dec_proj`. This brings the model in alignment with the change made in #16581 to the equivalent Flax model.
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [x] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. | 04-04-2022 15:03:17 | 04-04-2022 15:03:17 | _The documentation is not available anymore as the PR was closed or merged._ |
transformers | 16,585 | closed | Improve image classification example | # What does this PR do?
This PR improves the image classification example script, by:
- replacing the `nateraw/imagefolder` hack with [`ImageFolder`](https://huggingface.co/docs/datasets/v2.0.0/en/image_process#imagefolder)
- improving the README, explaining how to easily train on custom data. | 04-04-2022 14:48:38 | 04-04-2022 14:48:38 | _The documentation is not available anymore as the PR was closed or merged._ |
transformers | 16,584 | closed | ByT5 parallelization | ## ByT5 parallelization
Regarding ByT5, we observed the following and it looks like a potential bug:
ByT5 models are handled by HF through the same set of classes that handle T5 models, for instance T5ForConditionalGeneration [link](https://github.com/huggingface/transformers/blob/ad0cba08ea295cd7450484468dd34b53816c85fb/src/transformers/models/t5/modeling_t5.py#L1456)
One issue this creates is the way parallelization is performed by calling the parallelize method [link](https://github.com/huggingface/transformers/blob/ad0cba08ea295cd7450484468dd34b53816c85fb/src/transformers/models/t5/modeling_t5.py#L1494)
The method allows using a single device map to distribute attention blocks across devices. The device map is either inferred from the number of layers of the encoder or can be passed as an argument.
The bug stems from the fact that while T5 model have the same number of layers for encoder and decoder, this is not the case for ByT5 models which have a 3:1 structure (see for instance [here](https://github.com/google-research/byt5/blob/master/byt5/gin/models/byt5.xxl.gin#L4-L5) or Table 1 in the [arXiv paper](https://arxiv.org/pdf/2105.13626.pdf)).
We worked around this by reimplementing parallelize to have different encoder and decoder configurations:
```
encoder_device_map = (
get_device_map(len(model.encoder.block), range(torch.cuda.device_count()))
if encoder_device_map is None
else encoder_device_map
)
decoder_device_map = (
get_device_map(len(model.decoder.block), range(torch.cuda.device_count()))
if decoder_device_map is None
else decoder_device_map
)
```
Gently pinging @stas00 here. Do we still use this method to parallelize models such as T5? How do we currently parallelize models like T5? | 04-04-2022 13:17:57 | 04-04-2022 13:17:57 | Until we implement a proper PP (pipeline parallelism) (if we ever do) we only have this naive approach for PP and yes, it assumes a balanced number of layers between encoder and decoder.
There is absolutely no reason to use the naive PP (pipeline parallelism) as Deepspeed ZeRO solves this issue far more efficiently. but we left these in T5 and GPT2 models for now in case someone still uses those.
The next best alternative should someone not want to use Deepspeed ZeRO is TP (tensor parallelism), which soon will come via OSLO and Deepspeed-Inference projects.
For now you can probably just assert if someone tries to use the naive PP with other than T5, since it was designed for T5 in mind. Or you can fix it to handle unbalanced number of layers but it might be a waste of time if nobody uses it.<|||||>Thanks a lot @stas00 !<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored. |
transformers | 16,583 | closed | Why does BART generate additional EOS token at the beginning? | ## Environment info
- `transformers` version: 4.18.0.dev0
- Platform: Linux-5.11.0-1018-gcp-x86_64-with-glibc2.31
- Python version: 3.10.4
- Huggingface_hub version: 0.4.0
- PyTorch version (GPU?): 1.11.0+cu102 (False)
- Tensorflow version (GPU?): not installed (NA)
- Flax version (CPU?/GPU?/TPU?): 0.4.1 (cpu)
- Jax version: 0.3.4
- JaxLib version: 0.3.2
- Using GPU in script?: No
- Using distributed or parallel set-up in script?: No
### Who can help
@patil-suraj @patrickvonplaten @sgugger
## Information
Model I am using: BART
## To reproduce
Steps to reproduce the behavior:
```python
from transformers import BartTokenizer, BartForConditionalGeneration
model_name = 'facebook/bart-base'
tokenizer = BartTokenizer.from_pretrained(model_name)
model = BartForConditionalGeneration.from_pretrained(model_name)
sentences = ('A flower.', 'Some good sentences.')
inputs = tokenizer(sentences, return_tensors='pt', max_length=8, padding='max_length', truncation=True)
output = model.generate(inputs.input_ids)
print('Input:', tokenizer.batch_decode(inputs.input_ids))
print('Output:', tokenizer.batch_decode(output))
print('Input IDs:', inputs.input_ids.tolist())
print('Output IDs:', output.tolist())
```
Output:
```
Input: ['<s>A flower.</s><pad><pad><pad>', '<s>Some good sentences.</s><pad><pad>']
Output: ['</s><s>A flower.</s><pad>', '</s><s>Some good sentences.</s>']
Input IDs: [[0, 250, 14214, 4, 2, 1, 1, 1], [0, 6323, 205, 11305, 4, 2, 1, 1]]
Output IDs: [[2, 0, 250, 14214, 4, 2, 1], [2, 0, 6323, 205, 11305, 4, 2]]
```
## Expected behavior
From my understanding, the model should be generating in this way:

```
250, 14214, 4, 2
โ โ โ โ
0, 250, 14214, 4
```
So the first item of the output should be:
```
[250, 14214, 4, 2, 1, 1, 1, ...]
```
But actually it is:
```
[2, 0, 250, 14214, 4, 2, 1]
```
Therefore, I would like to know where the 2 (i.e. the EOS token) comes from? | 04-04-2022 13:11:02 | 04-04-2022 13:11:02 | This is because of the way the model is trained. In BART the `eos` (`</s>`) token is used as the `decoder_start_token_id`. So the format for the `decoder_input_ids` during training is `</s> <s> tokens ...`. So the model always needs to generate `</s><s>` as the first two tokens.<|||||>@patil-suraj Thank you! I am still a little confused. If the `decoder_input_ids` during training is `</s> <s> a b c ...`, according to my understanding, the model output should be `<s> a b c ...` (`</s> -> <s>`, `<s> -> a`, `a -> b`, `b -> c`, ...). But the model output is actually `</s> <s> a b c ...`.<|||||>I think I understand it now. We always add the decoder output to the input, and in the end we take the input as the final result. |
transformers | 16,582 | closed | Problem generating pytorch_model.bin for wav2vec2 large model using convert_wav2vec2_original_pytorch_checkpoint_to_pytorch.py | ## Environment info
- `transformers` version: 4.17.0
- Platform: Linux-5.14.0-1031-oem-x86_64-with-glibc2.29
- Python version: 3.8.10
- PyTorch version (GPU?): 1.11.0+cu102 (False)
- Tensorflow version (GPU?): not installed (NA)
- Flax version (CPU?/GPU?/TPU?): not installed (NA)
- Jax version: not installed
- JaxLib version: not installed
- Using GPU in script?: No
- Using distributed or parallel set-up in script?: No
### Who can help
@patrickvonplaten, @anton-l
## Information
I'm uploading new wav2vec2 models to the LeBenchmark (https://huggingface.co/LeBenchmark).
We already have several models for which the integration with HuggingFace works great.
Example: https://huggingface.co/LeBenchmark/wav2vec2-FR-7K-large
**The issue:** I am not able to generate the _pytorch_model.bin_ files for my new *large* models using the same _config.json_ files that worked fine a few months ago. The script (transformers/transformers/src/transformers/models/wav2vec2/convert_wav2vec2_original_pytorch_checkpoint_to_pytorch.py) is giving me a size mismatch error for the _quantizer.vars_. I have no problem generating the same .bin (with a different _config.json_ file) for my new *base* models.
It seems to me that this script is incorrectly retrieving the shape of some layers inside the wav2vec2, and I don't know how to fix it.
## To reproduce
For generating the pytorch_model.bin model for my new large models, I'm running the following command on my local machine:
```python3 transformers/transformers/src/transformers/models/wav2vec2/convert_wav2vec2_original_pytorch_checkpoint_to_pytorch.py --pytorch_dump_folder_path . --checkpoint_path <checkpoint_best.pt> --not_finetuned --config_path <config.json>```
An example of checkpoint_best.pt and its corresponding config.json can be found here: https://huggingface.co/LeBenchmark/wav2vec-FR-1K-Female-large/tree/main
My error is the following:
```
Traceback (most recent call last):
File "/home/mzanonboito/IS_2022/transformers/src/transformers/models/wav2vec2/convert_wav2vec2_original_pytorch_checkpoint_to_pytorch.py", line 263, in <module>
convert_wav2vec2_checkpoint(
File "/home/mzanonboito/.local/lib/python3.8/site-packages/torch/autograd/grad_mode.py", line 27, in decorate_context
return func(*args, **kwargs)
File "/home/mzanonboito/IS_2022/transformers/src/transformers/models/wav2vec2/convert_wav2vec2_original_pytorch_checkpoint_to_pytorch.py", line 248, in convert_wav2vec2_checkpoint
recursively_load_weights(model, hf_wav2vec, not is_finetuned)
File "/home/mzanonboito/IS_2022/transformers/src/transformers/models/wav2vec2/convert_wav2vec2_original_pytorch_checkpoint_to_pytorch.py", line 133, in recursively_load_weights
set_recursively(hf_model, mapped_key, value, name, weight_type)
File "/home/mzanonboito/IS_2022/transformers/src/transformers/models/wav2vec2/convert_wav2vec2_original_pytorch_checkpoint_to_pytorch.py", line 79, in set_recursively
raise ValueError(
ValueError: Shape of hf is torch.Size([1, 640, 192]), but should be torch.Size([1, 640, 384]) for quantizer.vars
```
In _config.json_, line 9 gives the expected dimension for this variable:
```
"codevector_dim": 384,
```
Finally, loading the checkpoint directly with torch, we can verify that the tensor indeed has this shape.
```
>>> import torch
>>> model = torch.load("checkpoint_best.pt")
>>> model['model']['quantizer.vars'].size()
torch.Size([1, 640, 384])
```
I even tried changing codevector_dim in config.json to 192 to see if the script would then work. The result is a new shape mismatch error:
```
File "/home/mzanonboito/IS_2022/transformers/src/transformers/models/wav2vec2/convert_wav2vec2_original_pytorch_checkpoint_to_pytorch.py", line 79, in set_recursively
raise ValueError(
ValueError: Shape of hf is torch.Size([1, 640, 96]), but should be torch.Size([1, 640, 384]) for quantizer.vars
```
I thus don't understand where this issue is coming from, and I would appreciate your help. Thanks!
## Expected behavior
For base models, this script call works fine, and the output files can be found here:
https://huggingface.co/LeBenchmark/wav2vec-FR-1K-Female-base/tree/main
| 04-04-2022 12:02:43 | 04-04-2022 12:02:43 | Hey @mzboito,
Thanks for the very detailed issue. I think the problem is the following:
Since Wav2Vec2 using product quantization (quite often with `num_groups=2`), the last dimension of the tensor should be ``codevector_dim // 2 (=num_groups)`` see here: https://github.com/huggingface/transformers/blob/c65633156b29482256619ee6515ba3d6c24578aa/src/transformers/models/wav2vec2/modeling_wav2vec2.py#L974
This should be equivalent to fairseq here: https://github.com/pytorch/fairseq/blob/8fce12ddd4a0414de5a726af6193cee4893f0a13/fairseq/modules/gumbel_vector_quantizer.py#L52
This means in order for the conversion to function, we should set `codevector_dim = 768`.
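A quick sanity check of this relation (my own illustration, assuming the default two codevector groups with 320 entries per group):
```python
from transformers import Wav2Vec2Config, Wav2Vec2ForPreTraining

# quantizer.codevectors has shape (1, num_groups * num_vars, codevector_dim // num_groups),
# so the 384 seen in the fairseq checkpoint corresponds to codevector_dim = 768 on the HF side.
config = Wav2Vec2Config(codevector_dim=768, num_codevector_groups=2, num_codevectors_per_group=320)
model = Wav2Vec2ForPreTraining(config)
print(model.quantizer.codevectors.shape)  # torch.Size([1, 640, 384])
```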
I think there was one other problem in the config - I took the liberty to correct this in your repo here: https://huggingface.co/LeBenchmark/wav2vec-FR-1K-Female-large/commit/fadfcc59c210ef559b08d8e16b6289278acedc1b and added the converted HF checkpoint. Hope that's fine for you :-)
<|||||>Thanks a lot for the help and quick answer!!!! :-) |
transformers | 16,581 | closed | [FlaxSpeechEncoderDecoder] Fix dtype bug | # What does this PR do?
Fixes the dtype of the input of the model.
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
| 04-04-2022 11:28:31 | 04-04-2022 11:28:31 | _The documentation is not available anymore as the PR was closed or merged._ |
transformers | 16,580 | closed | handle torch_dtype in low cpu mem usage | # What does this PR do?
The `torch_dtype` argument in `from_pretrained` is not respected when loading the model with `low_cpu_mem_usage=True`. This is because `_load_pretrained_model_low_mem` creates and assigns new `Parameter`s rather than loading directly into the model's `state_dict`, so the `torch_dtype` is ignored, as can be seen [here](https://github.com/huggingface/transformers/issues/16378#issuecomment-1086502209).
https://github.com/huggingface/transformers/blob/013a7dbe3d8af8f16f2b6cb60b6e21258a9e1399/src/transformers/modeling_utils.py#L2168
This PR casts each tensor in the `state_dict` by retrieving the `data_type` from the model's meta parameters.
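A minimal snippet to check the behaviour (the `gpt2` checkpoint below is just an arbitrary example):
```python
import torch
from transformers import AutoModel

# Before this fix, the parameters came back as float32 even though
# torch_dtype=torch.float16 was requested together with low_cpu_mem_usage=True.
model = AutoModel.from_pretrained("gpt2", torch_dtype=torch.float16, low_cpu_mem_usage=True)
print(next(model.parameters()).dtype)  # expected: torch.float16
```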
Should be merged after #16548 | 04-04-2022 10:58:37 | 04-04-2022 10:58:37 | _The documentation is not available anymore as the PR was closed or merged._<|||||>The first chunk is definitely the right fix, unrelated to dtype
The second chunk is different. What you want is to move this code:
https://github.com/huggingface/transformers/blob/be9474bd3551d6a89fd788a0063895a9b316e7e0/src/transformers/modeling_utils.py#L1846-L1849
to before this code:
https://github.com/huggingface/transformers/blob/be9474bd3551d6a89fd788a0063895a9b316e7e0/src/transformers/modeling_utils.py#L1893-L1894
which would restore the scope of non-default dtype. You can merge the two `if from_pt` blocks.
Once this is done, PyTorch will automatically operate with `torch_dtype` for tensor allocation.
Please let me know if what I suggested is easy to follow. The first snippet I quoted is the closing of the special scope.
And the obvious question: do we need to add a new test?
<|||||>@stas00 I already tried the approach that you suggested. But it didn't work. It seems `torch.set_default_dtype` doesn't affect `torch.load`. So all params in loaded `state_dict` will have the `dtype` with which they were saved, irrespective of `torch.set_default_dtype`.
> The first chunk is definitely the right fix, unrelated to dtype
This is actually fixed in https://github.com/huggingface/transformers/pull/16548, I added it in this PR just to be able to test the changes.
> And the obvious question - should we need to add a new test?
We should, IMO.<|||||>I ran some experiments and you're correct, Suraj, on all counts.
It's a bummer: someone who is low on CPU memory may have enough to start training the model in lower precision, but then be unable to finetune from the single-precision checkpoint on the hub while targeting the same lower precision, since `torch.load` forces a full fp32 copy of the model, normally requiring 3x the memory of the half-precision model and 2x with the `_load_pretrained_model_low_mem` hack.
I'm pretty sure we will have to abandon `torch.load` altogether as the models get bigger unless it becomes more flexible. There is no reason whatsoever to load all of the model's weights into memory at once. And more so when different GPUs should load different chunks of the model. It should be possible to load them in segments. But, of course, this is not the right forum to discuss that.
I made a feature request: https://github.com/pytorch/pytorch/issues/75242 |
transformers | 16,579 | closed | Fill-Mask Pipeline BERT on a sentence ending with/without a dot | ## Environment info
- `transformers` version: 4.8.1
- Platform: Windows 10
- Python version: 3.7.10
- PyTorch version: 1.9.0
## Problem
Why do the following sentences give different predicted words?
Sentence 1: [MASK] is the capital of France
Sentence 2: [MASK] is the capital of France.
where the second sentence ends with a period.
## Program
```python
from transformers import pipeline

fillmask_bert = pipeline('fill-mask', model="bert-base-uncased")

pred_1 = fillmask_bert("[MASK] is the capital of France")
for item in pred_1:
    print(item)
print()

pred_2 = fillmask_bert("[MASK] is the capital of France.")
for item in pred_2:
    print(item)
print()
```
## Fill-Mask Results
"[MASK] is the capital of France"
{'sequence': 'it is the capital of france', 'score': 0.33803579211235046, 'token': 2009, 'token_str': 'it'}
{'sequence': 'paris is the capital of france', 'score': 0.30488064885139465, 'token': 3000, 'token_str': 'paris'}
{'sequence': 'toulouse is the capital of france', 'score': 0.04690197855234146, 'token': 17209, 'token_str': 'toulouse'}
{'sequence': 'lyon is the capital of france', 'score': 0.02363533526659012, 'token': 10241, 'token_str': 'lyon'}
{'sequence': 'marseille is the capital of france', 'score': 0.022472301498055458, 'token': 16766, 'token_str': 'marseille'}
"[MASK] is the capital of France."
{'sequence': 'paris is the capital of france.', 'score': 0.6210566163063049, 'token': 3000, 'token_str': 'paris'}
{'sequence': 'it is the capital of france.', 'score': 0.07383635640144348, 'token': 2009, 'token_str': 'it'}
{'sequence': 'toulouse is the capital of france.', 'score': 0.036227378994226456, 'token': 17209, 'token_str': 'toulouse'}
{'sequence': 'marseille is the capital of france.', 'score': 0.031809303909540176, 'token': 16766, 'token_str': 'marseille'}
{'sequence': 'lyon is the capital of france.', 'score': 0.03100072219967842, 'token': 10241, 'token_str': 'lyon'}
| 04-04-2022 09:46:59 | 04-04-2022 09:46:59 | This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored. |
transformers | 16,578 | closed | Fix Pyright static type checking by replacing if-else imports with try-except | # What does this PR do?
Fixes https://github.com/huggingface/transformers/issues/11642.
The Transformers library has a system for importing optional dependencies which replaces objects from missing libraries with "dummy" versions; this logic is implemented by a simple `if-else` statement in `transformers/__init__.py`. Unfortunately Pyright/Pylance, the type-hinting software behind VSCode and widely used in other editors, always assumes that the dummy objects are the ones being imported. This breaks diverse useful functionalities such as "Go to reference" (you are always taken to the dummy implementation), documentation (empty dummy docstrings are shown) and type hinting (the dummy's properties are assumed to hold).
This PR replaces the `if-else` import logic with slightly more verbose `try-except(-else)` - the form supported by Pyright/Pylance for this precise use case https://github.com/microsoft/pylance-release/issues/2402.
## WIP Status
`utils/check_inits.py` currently doesn't pass, but I've been able to modify it (locally, not in PR yet) to account for changes to the main `__init__.py`. Unfortunately this surfaced the fact that there are hundreds of other small changes that need to be made across the entire library. I can do this work, but it'll be a longer process. In the meantime I'm **asking for guidance** on the form of import blocks. I can see two options. The difference is in the location of the "true" (i.e. non-dummy) imports:
A
```python
try:
    if not is_dependency_available():
        raise OptionalDependencyNotAvailableError()
    _import_structure["models.mymodel"].append("MyModel")
except OptionalDependencyNotAvailableError:
    _import_structure["utils.dummy_dependency_objects"] = ...
```
B
```python
try:
    if not is_dependency_available():
        raise OptionalDependencyNotAvailableError()
except OptionalDependencyNotAvailableError:
    _import_structure["utils.dummy_dependency_objects"] = ...
else:
    _import_structure["models.mymodel"].append("MyModel")
```
B is slightly more verbose but seems to be preferred by official Python guidelines: https://docs.python.org/3.9/tutorial/errors.html#handling-exceptions. I'm currently leaning towards B but perhaps I've missed a more elegant solution.
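For completeness, a sketch (my own illustration, not yet part of this PR) of how the matching `TYPE_CHECKING` side would mirror option B; the `is_dependency_available`/dummy-module names just mirror the hypothetical ones above:
```python
if TYPE_CHECKING:
    try:
        if not is_dependency_available():
            raise OptionalDependencyNotAvailableError()
    except OptionalDependencyNotAvailableError:
        from .utils.dummy_dependency_objects import *
    else:
        from .models.mymodel import MyModel
```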
## Before submitting
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [x] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
Possibly @sgugger?
| 04-04-2022 09:23:04 | 04-04-2022 09:23:04 | _The documentation is not available anymore as the PR was closed or merged._<|||||>There is another way to fix the "Go to Reference" in VsCode that has been shown [here](https://github.com/huggingface/transformers/issues/16394). I would like to avoid changing something as critical at the main init.<|||||>> There is another way to fix the "Go to Reference" in VsCode that has been shown [here](https://github.com/huggingface/transformers/issues/16394). I would like to avoid changing something as critical at the main init.
Unfortunately that switches out the entire language server (Pyright/Pylance for Jedi) and is in effect equivalent to saying that Huggingface doesn't support Pyright. Many people prefer Pyright as it's significantly snappier than Jedi-based LSPs and it's often recommended over Jedi in IDEs that support both (notably Vim and Emacs).
Pinging @LysandreJik and @patrickvonplaten for your advice, but I'd personally leave things as is and say that we don't support Pyright.<|||||>I fully understand the hesitation to take a scalpel to something as important as the init system. (Which is, by the way, extremely pleasant to use from a user ergonomics perspective. I have learnt a lot from HF's design decisions - thank you!) If Pyright were a niche tool I'd drop it, but it's genuinely one of the best and most popular ways to interact with Python code inside VSCode, Vim and Emacs.
Pyright has good reason for its current behaviour: as a static checker it has no way of knowing whether a given `if-else` condition will evaluate true or false at runtime. The authors made a decision at some point early in development to assume that the last import declaration is the one that counts - even if it's inside two branches of the same `if-else` block:
```python
if cond:
from a import X # Pyright ignores this one...
else:
from b import X # ...and selects this one
```
They could've chosen otherwise, but there's no obvious reason why they should have. The problem with `if-else` is that there's no sense of which branch corresponds to *correct* or *desirable* behaviour, whereas `try-except-else` includes a kind of value judgment: `try-else` is the correct path. That makes `try-except-else` much easier to parse with a static type checker.
There may be an alternative resolution that fixes Pyright and leaves your init system intact, by the way. Many libraries (including Pandas and Numpy) now produce "type stubs", which are essentially what `.h` files are to `.c[pp]` files in C or C++. Pyright and other type checkers can then follow these type stubs as sources of type-hinting truth. I've never generated them and can't guarantee they'd fix "Go to definition" etc, but perhaps it's worth a shot.<|||||>I've given it a bit more thought and I agree that this would be a welcome change. My main issues are around the necessary workarounds in the scripts we use to check the repo consistency.
I also think that syntax B is better in your suggestion.
Would we need to do this for each of the model subfolders init as well?<|||||>Yeah it needs to be done for model imports too. Itโs probably best to automate it using something like awk or batch processing in vim. I had a stab at manual correction and thereโs just too much.
I agree that checking for consistency is tricky. Does someone on your side have capacity to work on this change, perhaps to guide modifications to `check_init`, or should I finish this PR?<|||||>I don't think anyone has the bandwidth to work on this presently, so if you have time to finish the PR, that would be best!<|||||>That's fine, I'll finish the PR myself. :)
So far I've made two non-trivial decisions that I'd like to check with you.
1. In your inits, `TYPE_CHECKING` blocks mirror blocks with `_import_structure` statements. This entire issue's caused by static file checkers, so it's in principle possible to fix it by touching only the code inside `TYPE_CHECKING` blocks. (The fix would consist of replacing `if/else` statements with `try/except/else`, as discussed above.) However, I suggest changing the `_import_structure` blocks, in the exact same way, to preserve their symmetry. It's strictly speaking not necessary - and touching that part of the code brings its risks - but I believe that keeping the two structures in sync improves readability and makes it easier to automatically compare them using a simpler `check_inits.py` CI/CD script.
2. I am only touching code inside `__init__.py` files under the hypothesis that this will be enough to satisfy type checkers and minimise the amount of work necessary.
Are both fine with you?<|||||>Completely agree with both point! Thanks a lot for tackling this!<|||||>Alright, I think the bulk of the work is done, but there are a few failing tests.
1. `run_examples_flax` and `run_tests_flax` are, I believe, not my fault! https://github.com/google/jax/commit/1af5c888bd823423e18007b5326a374524bbffa5
2. "Add model like runner" is interesting: https://github.com/huggingface/transformers/runs/6049632592?check_suite_focus=true#step:5:15 is triggered because `is_tokenizers_available` is somehow, strangely, erased from the new model `__init__.py` file. I did add an `from ...utils import OptionalDependencyNotAvailable`, but I don't see from `commands/add_new_model.py` why that would be a problem. Would appreciate a second pair of eyes on this.
3. `run_tests_templates` I can look into, though I'm not at all familiar with that part of the codebase. If you know someone who can at least vaguely gesture at what's going on, that'd be super helpful. :)<|||||>Many thanks for the review and apologies for no update yet - this is very much on my mind, I've just been distracted by other priorities. Hope to look at it over one of the coming evenings.<|||||>Fixed imports - sorry, that was an artefact of my automated init fixer.
I haven't been able to do much about [`to_replace`](https://github.com/huggingface/transformers/blob/main/templates/adding_a_new_model/cookiecutter-template-%7B%7Bcookiecutter.modelname%7D%7D/to_replace_%7B%7Bcookiecutter.lowercase_modelname%7D%7D.py) because it's not a simple replacement. In the original version it's enough to paste these lines right under `if is_xxx_available()`, whereas now there's a (variable) number of lines between `if not is_xxx_available()` and the corresponding `else` block, where these blocks should be placed.
Will tackle "add new model like" command next.<|||||>For the changes in `to_replace.py`, if I'm not mistaken, all those `if not is_xxx_available()` prompts are for the main init. We can add a new comment (like the `" # PyTorch models structure"`) we already have to flag where we want those new models.<|||||>Alright, I think we're done here! :) All tests pass and Pyright picks up the correct classes.<|||||>Thanks both! :) Happy to contribute back! |
transformers | 16,577 | closed | Running Error: Trying to backward through the graph a second time | When I define a net, and pass its output to BART model's decoder, which uses for cross attention. The bart decoder has 12 attention layers. And I will get an Error: Trying to backward through the graph a second time.
The same [question](https://github.com/pyg-team/pytorch_geometric/issues/4259).
Appreciate your help!!!
The net is as below:
```python
class Net(torch.nn.Module):
def __init__(self, in_channels, out_channels):
super().__init__()
self.linear1 = nn.Linear(in_channels, out_channels)
def forward(self, x):
x = self.linear1(x)
return x
```
I add a cross attention module of bart_decoder and Net outputs, which is modified from Huggingface's library transformers, (transformers/src/models/bart/modeling_bart.py). I feed the Net outputs into bart_decoder, and it will be passed into 12 bart_decoder layers. The Net-decoder cross attention module is after encoder-decoder cross attention and before FFN. I found a [paper](https://arxiv.org/pdf/2003.08612.pdf), the figure in it is similar to my code. I think maybe one Net output is feed to 12 decoder layers, when backpropagation, it will backward through the graph a second time. But encoder-decoder cross attention do the same thing without an error. I am really confused.
This is some of my codes:
```python
self.gat = Net(config.d_model, config.d_model)
......
gat_outputs = self.gat(x)
gat_outputs = gat_outputs.repeat(encoder_outputs[0].size(0), 1, 1)
gat_attention_mask = torch.ones((gat_outputs.size(0), gat_outputs.size(1))).to(device)
decoder_outputs = self.decoder(
input_ids=decoder_input_ids,
attention_mask=decoder_attention_mask,
encoder_hidden_states=encoder_outputs[0],
encoder_attention_mask=attention_mask,
head_mask=decoder_head_mask,
cross_attn_head_mask=cross_attn_head_mask,
past_key_values=past_key_values,
inputs_embeds=decoder_inputs_embeds,
use_cache=use_cache,
output_attentions=output_attentions,
output_hidden_states=output_hidden_states,
return_dict=return_dict,
gat_outputs=gat_outputs,
gat_attention_mask=gat_attention_mask
)
......
# decoder contains 12 decoder layers
layer_outputs = decoder_layer(
hidden_states,
attention_mask=attention_mask,
encoder_hidden_states=encoder_hidden_states,
encoder_attention_mask=encoder_attention_mask,
layer_head_mask=(head_mask[idx] if head_mask is not None else None),
cross_attn_layer_head_mask=(
cross_attn_head_mask[idx] if cross_attn_head_mask is not None else None
),
past_key_value=past_key_value,
output_attentions=output_attentions,
use_cache=use_cache,
gat_outputs=gat_outputs,
gat_attention_mask=gat_attention_mask
)
......
self.gat_attn = BartAttention(
self.embed_dim,
num_attn_heads,
dropout=config.attention_dropout,
is_decoder=True,
)
residual = hidden_states
# cross_attn cached key/values tuple is at positions 3,4 of present_key_value tuple
gat_attn_past_key_value = past_key_value[-2:] if past_key_value is not None else None
hidden_states, gat_attn_weights, gat_attn_present_key_value = self.gat_attn(
hidden_states=hidden_states,
key_value_states=gat_outputs,
attention_mask=gat_attention_mask,
layer_head_mask=cross_attn_layer_head_mask,
past_key_value=gat_attn_past_key_value,
output_attentions=output_attentions,
)
hidden_states = nn.functional.dropout(hidden_states, p=self.dropout, training=self.training)
hidden_states = residual + hidden_states
hidden_states = self.gat_attn_layer_norm(hidden_states)
# add cross-attn to positions 3,4 of present_key_value tuple
present_key_value = present_key_value + gat_attn_present_key_value
``` | 04-04-2022 09:07:59 | 04-04-2022 09:07:59 | This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored. |
transformers | 16,576 | closed | How to fine-tune the models on huggingface hub with run_mlm.py | Say I have a corpus and I want to use it to fine-tune a model that is on the Hugging Face Hub. How should I go about it?
Thanks. | 04-04-2022 08:22:41 | 04-04-2022 08:22:41 | Hi @ralgond ๐ Our documentation probably has the information you need, check [this page](https://huggingface.co/docs/transformers/main/en/training#finetune-a-pretrained-model) (for instance). The documentation is searchable, try looking for the keywords related to your problem. Alternatively, in the `run_mlm.py` script, try running with `--help` to have a description of the arguments. I hope this helps! :D
We also reserve these GitHub issues for bugs in the repository and/or feature requests. For any other requests, we'd like to invite you to use our [forum](https://discuss.huggingface.co/) ๐ค I'm closing this issue, but feel free to reopen with queries that fit the criteria I described. |
transformers | 16,575 | closed | CvT: Introducing Convolutions to Vision Transformers | # What does this PR do?
Add CvT Model for Vision Classification
Fixes #13158
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [x] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [x] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
@NielsRogge @FrancescoSaverioZuppichini
| 04-04-2022 06:52:11 | 04-04-2022 06:52:11 | Well I will need to update all weights again. Since they were in contracted form. Okay will take time.<|||||>@FrancescoSaverioZuppichini I have made most of the changes. The changes left are:
1. Configuration: For num_stages, it would be better for understanding, inferring from depth might make it little difficult to understand IMO. But if you feel the need to remove then I can do that
2. Configuration: I will update the projection after reading paper once more.
3. Tuple one: Tell me how to update that part
4. CVTLayer, Intermediate and Stage part: I followed the convention used in ViT and Segfomer as @NielsRogge said. He can give the input.
<b>I especially need clarification on point 4, the changes will be heavy if I need to add Stage layer as well. Then renaming all the weights again and upload it.<b><|||||>@NielsRogge waiting for update!<|||||>@NielsRogge I will be free on weekend, I will make the suggested changes then
1. I think itโs okay, more in line with repo and paper. More clear I guess.
2. I talked with Francesco; he said there's no need for that since it's similar to ConvNextFeatureExtractor - just use AutoFeatureExtractor.
I was hoping to have a meeting were we can quickly finish this. As you suggested changes are minor. Maybe an hour or two it can be done. If your free any day this week we can do that then too.
<|||||>> I was hoping to have a meeting were we can quickly finish this. As you suggested changes are minor. Maybe an hour or two it can be done. If your free any day this week we can do that then too.
Is it possible for you to apply the suggestions from the review?<|||||>@NielsRogge I made the suggested changes. The shifting of models to Microsoft is left. You can review it. <|||||>Some remarks:
* PR is in a good state, but please don't close comments that aren't resolved yet. I would like the `BaseModelOutputWithCLSToken` to be removed, and just return the CLS token by default in the last hidden states.
* can you also add CvT to the doc tests? This will make sure all code examples in the docstrings are tested as well. All details found here: https://github.com/huggingface/transformers/tree/main/docs#testing-documentation-examples<|||||>@NielsRogge Can you work on the doc part. Every time something or the other mistake takes place. If the doc part is done.
Then I can make changes in `BaseModelOutputCLSToken` and inference of stages using depth and length check on patch sizes etc.<|||||>The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_16575). All of your documentation changes will be reflected on that endpoint.<|||||>@NielsRogge can you review it and suggest any further changes.
I'm not sure how you want to modify `cls_token` part. Since modifying in `CvtStage` section (stopping the split) will change shape of hidden states (stored in all hidden states and passing of 4D shape for CNN in next layer). I leave that part to you on how you want to change it to pass it over different classes further.
I have run the `make fix-copies` and done docstrings part. I think everything is done apart from change you wanted to make for `cls_token`.
P.S. Can we sit for an hour and so to finish this PR. I can then finally move on from it. I know you're busy but I hope you can find some time.<|||||>Hi,
It seems that there was an issue with rebasing as the "files changed" tab now shows that 800+ files have changed.
Could you open a new, clean PR?<|||||>@NielsRogge How should I do that. Create a new branch and copy all the codes?
|
transformers | 16,574 | closed | about attention value | ## Environment info
Hi, I'm a beginner at ML programming.
I'm studying the Transformer, but there are a couple of things I don't understand.
The first is about the attention values. I can see that self-attention scores the correlation between each pair of words and that softmax turns those scores into weights that multiply the values. But I don't understand why the weighted sum of all those value vectors contains information about the context.
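To make the question concrete, here is a toy version of what I mean (my own example):
```python
import torch

# The output for one query token is a softmax-weighted average of the value
# vectors of *all* tokens, so it mixes in information from the whole context.
scores = torch.tensor([2.0, 0.5, -1.0])                      # query . key_i for 3 tokens
weights = torch.softmax(scores, dim=-1)                      # weights sum to 1
values = torch.tensor([[1.0, 0.0], [0.0, 1.0], [1.0, 1.0]])  # one value vector per token
context = weights @ values                                   # weighted sum = context vector
print(weights, context)
```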
Secondly, I don't know why Position-Wise Feed-Foward Networks are used. | 04-04-2022 06:34:05 | 04-04-2022 06:34:05 | Hi @wonjunchoi-arc ๐ We have some information about the questions you asked in our course -- see [here](https://huggingface.co/course/chapter2/5?fw=pt#attention-masks), for instance. The course and other documentation are searchable, and they are a great source of information.
We also reserve these issues for bugs in the repository and/or feature requests. For any other requests, we'd like to invite you to use our [forum](https://discuss.huggingface.co/) ๐ค I'm closing this issue, but feel free to reopen with queries that fit the criteria I described.<|||||>> Hi @wonjunchoi-arc ๐ We have some information about the questions you asked in our course -- see [here](https://huggingface.co/course/chapter2/5?fw=pt#attention-masks), for instance. The course and other documentation is searchable, and it is a great source of information.
>
> We also reserve these issues for bugs in the repository and/or feature requests. For any other requests, we'd like to invite you to use our [forum](https://discuss.huggingface.co/) ๐ค I'm closing this issue, but feel free to reopen with queries that fit the criteria I described.
thank you!!
I'll study harder!!
|
transformers | 16,573 | closed | Fix and improve CTRL doctests | - Improve CTRL doctests and fix test assertions, where appropriate
# What does this PR do?
This PR addresses the CTRL doc test failures and replaces the example text with one that is more appropriate for CTRL specifically (i.e., by prefacing it with a control code)
Motivated as part of the doctest sprint: https://github.com/huggingface/transformers/issues/16292
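For reference, a small sketch of the usage this PR documents (illustrative only; it mirrors the snippet worked out in the comments below):
```python
from transformers import CTRLTokenizer

tokenizer = CTRLTokenizer.from_pretrained("ctrl")

# CTRL was trained with a control code (e.g. "Opinion", "Links") as the first token.
inputs = tokenizer("Opinion my dog is cute", return_tensors="pt")
assert inputs["input_ids"][0, 0].item() in tokenizer.control_codes.values()
```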
## Before submitting
- [x] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [x] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [x] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [x] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
@patrickvonplaten @ydshieh @patil-suraj
| 04-04-2022 04:49:52 | 04-04-2022 04:49:52 | _The documentation is not available anymore as the PR was closed or merged._<|||||>@patrickvonplaten @sgugger
Could you also take a look when you have some time?
In particular, I don't find any existing documentation mentioning the usage of `Opinion ...` (put the control code as the first word), although I **feel** this is the way to go.
(still don't feel very comfortable without seeing this usage)<|||||>Let's ping @LysandreJik as he might know more on CTRL ;-)<|||||>Thanks for the review! I'll address these comments asap.
As for the control code coming first, there's actually an example right here:
https://github.com/huggingface/transformers/blob/77321481247787c97568c3b9f64b19e22351bab8/examples/pytorch/text-generation/run_generation.py#L88<|||||>> Thanks for the review! I'll address these comments asap.
>
> As for the control code coming first, there's actually an example right here:
>
> https://github.com/huggingface/transformers/blob/77321481247787c97568c3b9f64b19e22351bab8/examples/pytorch/text-generation/run_generation.py#L88
Thank you for this info. @jeremyadamsfisher <|||||>Thanks again for the feedback.
I've added assertions on lines 383 and 562-565 and changed the model from `sshleifer/tiny-ctrl` to `ctrl`<|||||>Sure thing, those are easy changes.
To clarify the control code coming first, would it make sense to add something like this?
```python
>>> # CTRL was trained with control codes as the first token
>>> inputs = tokenizer("Opinion my dog is cute", return_tensors="pt")
>>> assert inputs[0] in tokenizer.control_codes.values()
```<|||||>> To clarify the control code coming first, would it make sense to add something like this?
>
> ```python
> >>> # CTRL was trained with control codes as the first token
> >>> inputs = tokenizer("Opinion my dog is cute", return_tensors="pt")
> >>> assert inputs[0] in tokenizer.control_codes.values()
> ```
That doesn't seem to work, will tinker with this a bit more:
```
UNEXPECTED EXCEPTION: KeyError('Indexing with integers (to access backend Encoding for a given batch index) is not available when using Python based tokenizers')
Traceback (most recent call last):
File "/Users/jeremyfisher/.pyenv/versions/3.8.12/lib/python3.8/doctest.py", line 1336, in __run
exec(compile(example.source, filename, "single",
File "<doctest transformers.models.ctrl.modeling_ctrl.CTRLModel.forward[5]>", line 1, in <module>
File "/Users/jeremyfisher/Documents/transformers/src/transformers/tokenization_utils_base.py", line 239, in __getitem__
raise KeyError(
KeyError: 'Indexing with integers (to access backend Encoding for a given batch index) is not available when using Python based tokenizers'
```<|||||>Aha! This works:
```
>>> # CTRL was trained with control codes as the first token
>>> inputs = tokenizer("Opinion my dog is cute", return_tensors="pt")
>>> assert inputs["input_ids"][0,0].item() in tokenizer.control_codes.values()
```
Added this to the doctest wherever there was a ` inputs = tokenizer(...)`<|||||>@ydshieh heads up -- I've addressed your second set of comments and the checks have passed :)
Still waiting on @LysandreJik would love to hear your thoughts<|||||>Hi, @jeremyadamsfisher Could you try to resolve the conflicts in
```
src/transformers/models/ctrl/modeling_ctrl.py
```
Then we are ready to merge :-) Thanks!
(I can help on this if you need, just let me know)<|||||>Hi, @jeremyadamsfisher Just to let you know: I resolved the conflict lines, and pushed to this PR branch.
I think we are ready to merge (once the CI tests are green).
If you need to make some more changes (if any), don't forget to `git pull` first. Thanks.<|||||>Merged! Thank you again, @jeremyadamsfisher !<|||||>> Merged! Thank you again, @jeremyadamsfisher !
Thank you @ydshieh! Apologies I wasn't able to fix the merge conflicts myself, but it is much appreciated! |
transformers | 16,572 | closed | Fine-tuning longformer for Question Answering | I wish to fine-tune longformer for Question Answering. I've tried both pretrained triviaQA and squad huggingface longformer models:
- https://huggingface.co/allenai/longformer-large-4096-finetuned-triviaqa
- https://huggingface.co/valhalla/longformer-base-4096-finetuned-squadv1
with kaggle data on tensorflow in this public notebook https://www.kaggle.com/code/sumeetsandhu/longformer-qa-train.
I get this error when training:
`ValueError: Shape mismatch: The shape of labels (received (1,)) should equal the shape of logits except for the last dimension (received (1024, 1)).`
According to documentation for both models:
- https://huggingface.co/docs/transformers/model_doc/longformer#transformers.TFLongformerForQuestionAnswering
- https://huggingface.co/docs/transformers/main/en/model_doc/longformer#transformers.LongformerForQuestionAnswering
the start_positions and end_positions input labels should be index numbers (not one-hot vectors).
What am I missing? | 04-03-2022 16:30:54 | 04-03-2022 16:30:54 | This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored. |
transformers | 16,571 | closed | can i use the transformers pretraining script of T5 as mT5 ? | @patrickvonplaten @lewtun, @NielsRogge I am planing to pretrain multilingual T5 small and/or medium from scratch using the huggingface T5 pre-training script , i came across this post [https://github.com/huggingface/transformers/issues/5079](https://github.com/huggingface/transformers/issues/5079) and the hugginface implementation for T5, **my question is can i use the same pretraining script from T5 , by replace the T5Config with mT5Config ? WOULD THIS WORK ?**
Also **how should the dataset be arranged for multilingual languages pretraining ? should all the langages be arranged in a sequential order where a sequence of one lang followed by another eg: [French, German, Italian] or should all the languages be randomly shuffled ?**
for the record i am planning to pretrain mT5 on indian languages on the oscar corpus and some additionally sourced text corpus. | 04-03-2022 15:48:26 | 04-03-2022 15:48:26 | check this https://github.com/Shivanandroy/simpleT5
it uses transformers and it based on transformers and PyTorch lightening and it supports mt5 training.
with Pytorch lightning you can train mt5 on TPU with xla support. but i think you need to edit the code .
However, t5 code with flax (jax) seems your best option now since flax much faster than XLA torch<|||||>Cool!, @salrowili thanks a ton. btw how should the dataset be arranged for multilingual language pretraining ? should all the langages be arranged in a sequential order where a sequence of one lang followed by another eg: [French, German, Italian] or should all the languages be randomly shuffled ?<|||||>Hey @StephennFernandes,
Yes it should work just fine! Note that mt5 used more or less the same pretraining logic that was used for t5v1_1, which is stated here: https://huggingface.co/docs/transformers/model_doc/t5v1.1<|||||>@patrickvonplaten what about sampling the text corpus. i have a text corpus that has been arranged sequentially based on the languages. but the pretraining script, randomly shuffles the data based on the max_seq_len. to yeild batches that have similar seq_len to train efficiently.
this however works fine for T5, but when coming to multi-lingual training, the training script would let the model train randomly where one sample sequence was "french" the other was "spanish" etc. is it okay for mt5 to randomly shuffle multi-lingual data and pretrain ? <|||||>Hey @StephennFernandes,
Could you maybe ask such a question on the forum: https://discuss.huggingface.co/ ? :-) We try to keep Transformers issues for questions related just the modeling code and bugs. Thanks!<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored.<|||||>Weโve released [nanoT5 1](https://github.com/PiotrNawrot/nanoT5) which is a minimal codebase that reproduces T5-model (similar to BART) pre-training in PyTorch (not Flax), using Huggingface.
You can take a look, it should be easy to modify it so that it works with multilingual data |
transformers | 16,570 | closed | Force Alignment with Wav2Vec2 models | # 🚀 Feature request
Most TTS acoustic models like FastSpeech and FastPitch require the duration of each phoneme or character during training. Forced-alignment models and aligners are available for some languages, but most languages don't have one. Wav2Vec2 models can be leveraged for forced alignment, as shown in [Forced Alignment with Wav2Vec2](https://pytorch.org/audio/main/tutorials/forced_alignment_tutorial.html). ASR models from the [Robust Speech Challenge](https://huggingface.co/models?other=robust-speech-event) can be used for this.
## Motivation
Making it easy to force align for low resource languages
## Your contribution
I can contribute by writing a method to force align using wav2vec2 models following the above mentioned pytorch tutorial.
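As a rough starting point, here is a sketch of the first step of that tutorial using a `transformers` CTC checkpoint (the model name below is only an example); the trellis construction and backtracking could then follow the tutorial as-is:
```python
import torch
from transformers import Wav2Vec2ForCTC, Wav2Vec2Processor

# Any fine-tuned CTC checkpoint could be plugged in here.
processor = Wav2Vec2Processor.from_pretrained("facebook/wav2vec2-base-960h")
model = Wav2Vec2ForCTC.from_pretrained("facebook/wav2vec2-base-960h")

def get_emissions(waveform, sampling_rate=16000):
    """Return frame-level log-probabilities over the vocabulary, shape (time, vocab)."""
    inputs = processor(waveform, sampling_rate=sampling_rate, return_tensors="pt")
    with torch.no_grad():
        logits = model(inputs.input_values).logits
    return torch.log_softmax(logits, dim=-1)[0]
```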
| 04-03-2022 12:16:04 | 04-03-2022 12:16:04 | I fully agree and I think it would be great if we could add a code snippet such as the one you mention to `transformers`. I think the place for now should be here: https://github.com/huggingface/transformers/tree/main/examples/research_projects/wav2vec2 - maybe we could add a `alignment.py` file there?
More than happy to help if you want to give it a try :-)<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored. |
transformers | 16,569 | closed | KeyError: 'logits' | Hi,
I am pretraining a model from scratch; when testing it, I encountered this error.
test file:
==============================
from transformers import pipeline, AutoModel, AutoTokenizer
model = AutoModel.from_pretrained('./wentian/output/checkpoint-200000/')
tokenizer = AutoTokenizer.from_pretrained('./wentian/output/checkpoint-200000/')
pipe = pipeline(task='fill-mask', model=model, tokenizer=tokenizer)
mask = pipe.tokenizer.mask_token
print(pipe.tokenizer.encode("้ซๅบๆด่ฝฆๆฐดๆช้ซๅ่ชๆฅๆฐด[MASK]ๆๅคๅ่ฝๅผบๅๅผ่ฝฆ่ถ
ๅผบๅขๅๆฐดๆขๅฎถ็จๅทๅคด"))
print (mask) # is [MASK]
pipe("้ซๅบๆด่ฝฆๆฐดๆช้ซๅ่ชๆฅๆฐด[MASK]ๆๅคๅ่ฝๅผบๅๅผ่ฝฆ่ถ
ๅผบๅขๅๆฐดๆขๅฎถ็จๅทๅคด", top_k=10)
==============================
output:
==============================
[2, 987, 1, 555, 875, 539, 502, 987, 201, 781, 494, 539, 4, 404, 278, 175, 768, 377, 173, 1, 875, 867, 377, 267, 201, 539, 1, 317, 634, 241, 284, 3]
[MASK]
Traceback (most recent call last):
File "wentian_test.py", line 17, in <module>
pipe("้ซๅบๆด่ฝฆๆฐดๆช้ซๅ่ชๆฅๆฐด[MASK]ๆๅคๅ่ฝๅผบๅๅผ่ฝฆ่ถ
ๅผบๅขๅๆฐดๆขๅฎถ็จๅทๅคด", top_k=10)
File "/mnt/d/ml/tianchi/wentian/wentian2/lib/python3.8/site-packages/transformers-4.18.0.dev0-py3.8.egg/transformers/pipelines/fill_mask.py", line 225, in __call__
outputs = super().__call__(inputs, **kwargs)
File "/mnt/d/ml/tianchi/wentian/wentian2/lib/python3.8/site-packages/transformers-4.18.0.dev0-py3.8.egg/transformers/pipelines/base.py", line 1026, in __call__
return self.run_single(inputs, preprocess_params, forward_params, postprocess_params)
File "/mnt/d/ml/tianchi/wentian/wentian2/lib/python3.8/site-packages/transformers-4.18.0.dev0-py3.8.egg/transformers/pipelines/base.py", line 1034, in run_single
outputs = self.postprocess(model_outputs, **postprocess_params)
File "/mnt/d/ml/tianchi/wentian/wentian2/lib/python3.8/site-packages/transformers-4.18.0.dev0-py3.8.egg/transformers/pipelines/fill_mask.py", line 96, in postprocess
outputs = model_outputs["logits"]
File "/mnt/d/ml/tianchi/wentian/wentian2/lib/python3.8/site-packages/transformers-4.18.0.dev0-py3.8.egg/transformers/utils/generic.py", line 218, in __getitem__
return inner_dict[k]
KeyError: 'logits'
============================== | 04-03-2022 08:44:25 | 04-03-2022 08:44:25 | the environment:
transformers ==4.18.0.dev0 or 4.17.0
tokenizers==0.12.0<|||||>Hey @ralgond, what is the checkpoint you're using? Can you reproduce with a checkpoint on the hub?<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored.<|||||>Hi, I received the same error here, using the code from this link: https://huggingface.co/neuralmind/bert-base-portuguese-cased
```
from transformers import pipeline
pipe = pipeline('fill-mask', model=model, tokenizer=tokenizer)
pipe('Tinha uma [MASK] no meio do caminho.')
```
output:
```
---------------------------------------------------------------------------
KeyError Traceback (most recent call last)
[<ipython-input-140-f83d9db26e10>](https://localhost:8080/#) in <module>()
1 pipe = pipeline('fill-mask', model=model, tokenizer=tokenizer)
2
----> 3 res = pipe('Tinha uma [MASK] no meio do caminho.')
4 res
4 frames
[/usr/local/lib/python3.7/dist-packages/transformers/utils/generic.py](https://localhost:8080/#) in __getitem__(self, k)
218 if isinstance(k, str):
219 inner_dict = {k: v for (k, v) in self.items()}
--> 220 return inner_dict[k]
221 else:
222 return self.to_tuple()[k]
KeyError: 'logits'
```<|||||>I received the same problem when I pre-trained bert with MLM.<|||||>cc @narsil, have you seen this error before?<|||||>@camila-cg @Sette ,
This seems to work
```python
from transformers import pipeline
pipe = pipeline("fill-mask", model="neuralmind/bert-base-portuguese-cased")
print(pipe("Tinha uma [MASK] no meio do caminho."))
```
In the example you shared you're not showing what is `model`. My guess is that you loaded with `AutoModel.from_pretrained(..)` instead of using the correct `AutoModelForMaskedLM`.
If I do this:
```python
from transformers import pipeline, AutoModel, AutoTokenizer
pipe = pipeline(
"fill-mask",
model=AutoModel.from_pretrained("neuralmind/bert-base-portuguese-cased"),
tokenizer=AutoTokenizer.from_pretrained("neuralmind/bert-base-portuguese-cased"),
)
print(pipe("Tinha uma [MASK] no meio do caminho."))
```
then I get that error `KeyError: 'logits'`. That's because the model is missing the correct head and is not sending the `logits` so the pipeline crashes.
Is that it ?<|||||>In my case, I was passing the model object bert as an argument, the right thing would be to pass the model directory. It worked for me.<|||||>The code below works fine for me. The only difference is that use AutoModelForMaskedLM instead of AutoModel
```
from transformers import pipeline, AutoModelForMaskedLM, AutoTokenizer
model_path = '/roberta-base/'
model = AutoModelForMaskedLM.from_pretrained(model_path)
tokenizer = AutoTokenizer.from_pretrained(model_path)
fill_mask = pipeline('fill-mask', model=model, tokenizer=tokenizer)
res = fill_mask("Spend time online and odds are you'll have to type a username and password to check your e-mail, access your bank <mask> or read a newspaper story. Enter Microsoft Corp...")
```<|||||>> @camila-cg @Sette ,
>
> This seems to work
>
> ```python
> from transformers import pipeline
>
> pipe = pipeline("fill-mask", model="neuralmind/bert-base-portuguese-cased")
>
> print(pipe("Tinha uma [MASK] no meio do caminho."))
> ```
>
> In the example you shared you're not showing what is `model`. My guess is that you loaded with `AutoModel.from_pretrained(..)` instead of using the correct `AutoModelForMaskedLM`.
>
> If I do this:
>
> ```python
> from transformers import pipeline, AutoModel, AutoTokenizer
>
> pipe = pipeline(
> "fill-mask",
> model=AutoModel.from_pretrained("neuralmind/bert-base-portuguese-cased"),
> tokenizer=AutoTokenizer.from_pretrained("neuralmind/bert-base-portuguese-cased"),
> )
>
> print(pipe("Tinha uma [MASK] no meio do caminho."))
> ```
>
> then I get that error `KeyError: 'logits'`. That's because the model is missing the correct head and is not sending the `logits` so the pipeline crashes.
>
> Is that it ?
I think it is the bert implementation causes this error. @zzk0 's roberta works fine |
transformers | 16,568 | closed | Fix two typos | There are two typos in the translation part: "translation" was written as "summarization", and "translated" was written as "summarized". I guess this was caused by copying the same wording from the summarization section without checking.
# What does this PR do?
Fix two typos (translation was mistakenly written as summarization)
## Before submitting
- [x] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
| 04-03-2022 07:47:54 | 04-03-2022 07:47:54 | The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_16568). All of your documentation changes will be reflected on that endpoint.<|||||>Ah, it seems conflicts must be resolved offline beforehand. Let me know if you need help in doing so!<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored. |
transformers | 16,567 | closed | How to create vocab.txt for run_mlm.py | Hi, I am training a model from scratch (I set `--train_file ./train_uncased.txt`).
I run the script run_mlm.py with the argument `--tokenizer_name bert-base-chinese`.
However, I cannot find some words (which are in train_uncased.txt) in the file vocab.txt.
Could someone tell me how to create a vocab.txt?
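Is something like the sketch below the right direction? It is only a rough idea (using the `tokenizers` library; the vocab size, `lowercase` setting and output directory are placeholders I made up):
```python
import os

from tokenizers import BertWordPieceTokenizer

# train a new WordPiece vocabulary directly on the raw training text
tokenizer = BertWordPieceTokenizer(lowercase=True)
tokenizer.train(files=["train_uncased.txt"], vocab_size=30_000, min_frequency=2)

os.makedirs("my_tokenizer", exist_ok=True)
tokenizer.save_model("my_tokenizer")  # writes my_tokenizer/vocab.txt
```
I assume I could then point `--tokenizer_name` at that directory instead of `bert-base-chinese`, but I am not sure.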
Thanks | 04-03-2022 05:31:25 | 04-03-2022 05:31:25 | |
transformers | 16,566 | closed | Question about Bigbird Random Attention Mechanism/possible bug | I was looking through the code for random attention in Big Bird and I found something that confused me and might be a bug.
In the bigbird_block_sparse_attention function in the BigBirdSelfAttention class, the selected blocks only seem to be from 1-9. You can see this if you look at the max value of the "rand_attn" variable after initializing it with _bigbird_block_rand_mask_with_head. To me this seems to imply that only blocks 1-9 are being selected for random attention, when any blocks besides those in the global or windowed attention should be eligible. This behavior seems to happen because of the following code block, taken from _bigbird_block_rand_mask_with_head:
```
# Total number of blocks in the mmask
num_blocks = from_seq_length // from_block_size
# Number of blocks per plan
plan_block_length = np.array(plan_from_length) // from_block_size
# till when to follow plan
max_plan_idx = plan_from_length.index(from_seq_length)
# Random Attention adjacency list
rand_attn = [
np.zeros((num_blocks, np.sum(plan_num_rand_blocks[: max_plan_idx + 1])), dtype=np.int32)
for i in range(num_heads)
]
```
In cases where the input size is > 704, plan_block_length should always be [11, num_blocks] and max_plan_idx will be equal to 1. plan_num_rand_blocks, which is an input to this function, will be [3, 0]. You can find the reason for this in _get_rand_attn_plan.
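To make those numbers concrete, here is a standalone toy calculation; it simply mirrors the plan described above for an assumed from_seq_length=4096, from_block_size=64 and num_rand_blocks=3 (it does not call the library code):
```python
from_seq_length, from_block_size, num_rand_blocks = 4096, 64, 3

# the single-plan threshold works out to 11 blocks * 64 = 704
plan_from_length = [(2 * num_rand_blocks + 5) * from_block_size, from_seq_length]  # [704, 4096]
plan_num_rand_blocks = [num_rand_blocks, 0]                                        # [3, 0]

plan_block_length = [length // from_block_size for length in plan_from_length]     # [11, 64]
max_plan_idx = plan_from_length.index(from_seq_length)                             # 1
print(plan_block_length, max_plan_idx, plan_num_rand_blocks)
```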
This brings us to the for loop in _bigbird_block_rand_mask_with_head. Since `max_plan_idx = 1`, this loop will cover `range(0, 2)`. On the first pass `plan_idx = 0`, so it does not enter the first if statement `if plan_idx > 0`. `plan_num_rand_blocks[0] = 3`, so it will not continue. `plan_idx = 0`, so `from_start_block_id = 1`, `to_start_block_id = 0` and `curr_r_cnt = 3`. This makes blk_rw_idx loop over range(0, 11). This is then passed into _get_single_block_row_attention, which is where my misunderstandings start. In this function, to_end_block_id - global_block_right is always used for global attention; however, if this is only covering blocks from 0-11, there should be no right global attention, since that would be around num_blocks, which could be significantly higher than 11. This is also why all the tokens are 1-9: they are all chosen from this range.
This brings us to the second pass through the plan_idx loop, with a value of 1. This time we enter the first if statement. We then skip `if plan_num_rand_blocks[plan_idx] > 0` since `plan_num_rand_blocks[1] = 0`. This makes us enter `for pl_id in range(1):`, which should only loop once with a value of 0. `if plan_num_rand_blocks[pl_id] == 0` is not entered since `plan_num_rand_blocks[0] = 3`. This means we loop blk_rw_idx over range(11, num_blocks). Since `pl_id = 0`, `to_start_block_id = 0` and we pass 11 to to_end_block_id in `to_end_block_id=plan_block_length[pl_id]`. This means that this section is also selecting tokens from 1-9, as was shown in the previous section.
```
for plan_idx in range(max_plan_idx + 1):
rnd_r_cnt = 0
if plan_idx > 0:
# set the row for all from_blocks starting from 0 to
# plan_block_length[plan_idx-1]
# column indx start fromm plan_block_length[plan_idx-1] and ends at
# plan_block_length[plan_idx]
if plan_num_rand_blocks[plan_idx] > 0:
rnd_r_cnt = int(np.sum(plan_num_rand_blocks[:plan_idx]))
curr_r_cnt = int(np.sum(plan_num_rand_blocks[: plan_idx + 1]))
for blk_rw_idx in range(global_block_top, plan_block_length[plan_idx - 1]):
for h in range(num_heads):
rand_attn[h][blk_rw_idx, rnd_r_cnt:curr_r_cnt] = self._get_single_block_row_attention(
block_id=blk_rw_idx,
to_start_block_id=plan_block_length[plan_idx - 1],
to_end_block_id=plan_block_length[plan_idx],
num_rand_blocks=plan_num_rand_blocks[plan_idx],
window_block_left=window_block_left,
window_block_right=window_block_right,
global_block_left=global_block_left,
global_block_right=global_block_right,
)
for pl_id in range(plan_idx):
if plan_num_rand_blocks[pl_id] == 0:
continue
for blk_rw_idx in range(plan_block_length[plan_idx - 1], plan_block_length[plan_idx]):
rnd_r_cnt = 0
to_start_block_id = 0
if pl_id > 0:
rnd_r_cnt = int(np.sum(plan_num_rand_blocks[:pl_id]))
to_start_block_id = plan_block_length[pl_id - 1]
curr_r_cnt = int(np.sum(plan_num_rand_blocks[: pl_id + 1]))
for h in range(num_heads):
rand_attn[h][blk_rw_idx, rnd_r_cnt:curr_r_cnt] = self._get_single_block_row_attention(
block_id=blk_rw_idx,
to_start_block_id=to_start_block_id,
to_end_block_id=plan_block_length[pl_id],
num_rand_blocks=plan_num_rand_blocks[pl_id],
window_block_left=window_block_left,
window_block_right=window_block_right,
global_block_left=global_block_left,
global_block_right=global_block_right,
)
if plan_num_rand_blocks[plan_idx] == 0:
continue
curr_r_cnt = int(np.sum(plan_num_rand_blocks[: plan_idx + 1]))
from_start_block_id = global_block_top
to_start_block_id = 0
if plan_idx > 0:
rnd_r_cnt = int(np.sum(plan_num_rand_blocks[:plan_idx]))
from_start_block_id = plan_block_length[plan_idx - 1]
to_start_block_id = plan_block_length[plan_idx - 1]
for blk_rw_idx in range(from_start_block_id, plan_block_length[plan_idx]):
for h in range(num_heads):
rand_attn[h][blk_rw_idx, rnd_r_cnt:curr_r_cnt] = self._get_single_block_row_attention(
block_id=blk_rw_idx,
to_start_block_id=to_start_block_id,
to_end_block_id=plan_block_length[plan_idx],
num_rand_blocks=plan_num_rand_blocks[plan_idx],
window_block_left=window_block_left,
window_block_right=window_block_right,
global_block_left=global_block_left,
global_block_right=global_block_right,
)
```
To me this seems like it would be a bug, since random attention should be selecting from all blocks that are not in the windowed attention or the global attention. However, I double-checked and this implementation seems to line up with the implementation that was published here https://github.com/google-research/bigbird. Could someone please explain what is happening here? Apologies for the long post. Thank you
TLDR; Random attention doesn't seem to select from all attention blocks and I am not sure why.
| 04-03-2022 03:15:38 | 04-03-2022 03:15:38 | @vasudevgupta7, who has worked on the model, might have more information!<|||||>Just a follow up on this post: I have since spoken to the first author of the paper; they limit random attention to the first 1024 tokens based on better empirical results from cross validation. Something to keep in mind if you are ever using this model, since you might want to change this behavior depending on the task. |
transformers | 16,565 | closed | Add Doc Tests for Reformer PyTorch | # What does this PR do?
https://github.com/huggingface/transformers/issues/16292
Fixing doc tests in modeling_reformer.py.
- ReformerModelWithLMHead.forward
- ReformerForMaskedLM.forward
- ReformerForQuestionAnswering.forward
- ReformerForSequenceClassification.forward
## Before submitting
- [x] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [x] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
@patrickvonplaten
@ydshieh
@patil-suraj | 04-03-2022 02:18:54 | 04-03-2022 02:18:54 | _The documentation is not available anymore as the PR was closed or merged._<|||||>I found two errors in the ReformerForMaskedLM.forward example.
1. config.is_decoder = True by default.
```python
# before
model = ReformerForMaskedLM.from_pretrained("google/reformer-crime-and-punishment")
# AssertionError: If you want to use `ReformerForMaskedLM` make sure `config.is_decoder=False` for bi-directional self-attention.
# after
model = ReformerForMaskedLM.from_pretrained("google/reformer-crime-and-punishment", is_decoder=False)
# It's OK.
```
2. tokenizer.mask_token_id is None.
```python
mask_token_index = (inputs.input_ids == tokenizer.mask_token_id)[0].nonzero(as_tuple=True)[0]
# TypeError: 'bool' object is not subscriptable
# Because tokenizer.mask_token_id is None.
```<|||||>This issue reports the same problems:
https://github.com/huggingface/transformers/issues/10813<|||||>Hi, @hiromu166 , could you remind me of the reasons why you need to overwrite the code sample in the model file, instead of just using `add_code_sample_docstrings` and provide the expected outputs and checkpoints?
(sorry if you already mentioned before!)<|||||>Hi @ydshieh, I commented about the reason for using `replace_return_docstrings`!
Please check them!<|||||>Hi, @hiromu166
I think we can change back to
```
_CHECKPOINT_FOR_DOC = "google/reformer-crime-and-punishment"
```
and use this (`_CHECKPOINT_FOR_DOC`) for `ReformerModelWithLMHead`. If we do so, it should be possible to use `add_code_sample_docstrings`. Let me know if you still have problems doing so.
For `ReformerForMaskedLM`, we use `https://huggingface.co/hf-internal-testing/tiny-random-reformer`, and we can use `replace_return_docstrings`, since, as you pointed out, there is a `mask token` issue.
Let's try to make these 2 work first :-)<|||||>Hi @ydshieh, thank you for the suggestion.
OK, I'll change `_CHECKPOINT_FOR_DOC` like below:
- `"google/reformer-crime-and-punishment"` for `ReformerModel`, `ReformerModelWithLMHead`.
- `"hf-internal-testing/tiny-random-reformer"` for `ReformerForMaskedLM`, `ReformerForSequenceClassification`, `ReformerForQuestionAnswering` to solve randomness and other problems.<|||||>Hi, @hiromu166 Could you try to resolve the conflicts `utils/documentation_tests.txt` ๐(you can also pull the latest main branch, rebase this PR on main, fix the conflict, then force push).
(This is not necessary for me to review, but is required to fix the conflict before merging)
I will review this PR tomorrow :-)<|||||>Hi @ydshieh, I tried to resolve the conflict like below. Is this OK?
```bash
git fetch upstream
git rebase upstream/main
-- fix conflict --
git rebase --continue
git push -f origin add_doctest_reformer_pt
```<|||||>> Hi @ydshieh, I tried to resolve the conflict like below. Is this OK?
>
> ```shell
> git fetch upstream
> git rebase upstream/main
> -- fix conflict --
> git rebase --continue
> git push -f origin add_doctest_reformer_pt
> ```
Looks good! I will review this PR now<|||||>PR Merged. Thank you again, @hiromu166 โค๏ธ !<|||||>I'm glad it was merged smoothly. Thank you for your cooperation!!<|||||>Thanks a mille for your contribution @hiromu166 ! |
transformers | 16,564 | closed | wav2vec2 : Speech to text conversion fails when using large file | I'm trying to convert an English speech recording (WAV file: 1.2 GB) to text, but I'm seeing an error due to the vector size. The same code works when the file size is small.
## Environment info
<!-- You can run the command `transformers-cli env` and copy-and-paste its output below.
Don't forget to fill out the missing fields in that output! -->
- `transformers` version: 4.17.0
- Platform: macOS-10.16-x86_64-i386-64bit
- Python version: 3.8.12
- PyTorch version (GPU?): 1.11.0 (False)
- Tensorflow version (GPU?): not installed (NA)
- Flax version (CPU?/GPU?/TPU?): not installed (NA)
- Jax version: not installed
- JaxLib version: not installed
- Using GPU in script?: no
- Using distributed or parallel set-up in script?: no
### Who can help
Models: - Wav2Vec2: @patrickvonplaten, @anton-l
<!-- Your issue will be replied to more quickly if you can figure out the right person to tag with @
If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**.
Please tag fewer than 3 people.
Models - Wav2Vec2: @patrickvonplaten, @anton-l
- ALBERT, BERT, XLM, DeBERTa, DeBERTa-v2, ELECTRA, MobileBert, SqueezeBert: @LysandreJik
- T5, Pegasus, EncoderDecoder: @patrickvonplaten
- Blenderbot, MBART, BART, Marian, Pegasus: @patil-suraj
- Reformer, TransfoXL, XLNet, FNet: @patrickvonplaten
- Longformer, BigBird: @ydshieh
- FSMT: @stas00
- Funnel: @sgugger
- GPT-2, GPT: @patil-suraj, @patrickvonplaten, @LysandreJik
- RAG, DPR: @patrickvonplaten, @lhoestq
- TensorFlow: @Rocketknight1
- JAX/Flax: @patil-suraj
- TAPAS, LayoutLM, LayoutLMv2, LUKE, ViT, BEiT, DEiT, DETR, CANINE: @NielsRogge
- GPT-Neo, GPT-J, CLIP: @patil-suraj
- Wav2Vec2, HuBERT, SpeechEncoderDecoder, UniSpeech, UniSpeechSAT, SEW, SEW-D, Speech2Text: @patrickvonplaten, @anton-l
If the model isn't in the list, ping @LysandreJik who will redirect you to the correct contributor.
Library:
- Benchmarks: @patrickvonplaten
- Deepspeed: @stas00
- Ray/raytune: @richardliaw, @amogkam
- Text generation: @patrickvonplaten @narsil
- Tokenizers: @SaulLu
- Trainer: @sgugger
- Pipelines: @Narsil
- Speech: @patrickvonplaten, @anton-l
- Vision: @NielsRogge, @sgugger
Documentation: @sgugger
Model hub:
- for issues with a model, report at https://discuss.huggingface.co/ and tag the model's creator.
HF projects:
- datasets: [different repo](https://github.com/huggingface/datasets)
- rust tokenizers: [different repo](https://github.com/huggingface/tokenizers)
Examples:
- maintained examples (not research project or legacy): @sgugger, @patil-suraj
For research projects, please ping the contributor directly. For example, on the following projects:
- research_projects/bert-loses-patience: @JetRunner
- research_projects/distillation: @VictorSanh
-->
## Information
Model I am using (Bert, XLNet ...): [Wav2Vec2](https://huggingface.co/docs/transformers/model_doc/wav2vec2)
The problem arises when using:
* [ ] the official example scripts: (give details below)
* [x] my own modified scripts: (give details below)
The tasks I am working on is:
* [ ] an official GLUE/SQUaD task: (give the name)
* [x] my own task or dataset: (give details below)
## To reproduce
Steps to reproduce the behavior:
1. Download a english speech from youtube as mp3
2. Convert the mp3 to wav (almost 1.2 GB file size)
3. Use wav2vec2 to provide speech to text translation
4. But the model fails due to the file length or the frame rate. Is there any limitation on these vectors?
5. Further details of the `.wav` file
Metadata of the `.wav` file for which the model works :
```
Framerate : 8000
Channel info : 1
Bytes/sample : 2
Maximum amplitude : 32767
Length of audio : 51028
```
Metadata of the `.wav` file for which the model doesn't work :
```
Framerate : 44100
Channel info : 2
Bytes/sample : 2
Maximum amplitude : 32768
Length of audio : 5901073
```
<!-- If you have code snippets, error messages, stack traces please provide them here as well.
Important! Use code tags to correctly format your code. See https://help.github.com/en/github/writing-on-github/creating-and-highlighting-code-blocks#syntax-highlighting
Do not use screenshots, as they are hard to read and (more importantly) don't allow others to copy-and-paste your code.-->
### Stack Trace :
```
Traceback (most recent call last):
File "main.py", line 59, in <module>
logits = model(input_values)["logits"]
File "/Applications/anaconda3/envs/speechml/lib/python3.8/site-packages/torch/nn/modules/module.py", line 1110, in _call_impl
return forward_call(*input, **kwargs)
File "/Applications/anaconda3/envs/speechml/lib/python3.8/site-packages/transformers/models/wav2vec2/modeling_wav2vec2.py", line 1751, in forward
outputs = self.wav2vec2(
File "/Applications/anaconda3/envs/speechml/lib/python3.8/site-packages/torch/nn/modules/module.py", line 1110, in _call_impl
return forward_call(*input, **kwargs)
File "/Applications/anaconda3/envs/speechml/lib/python3.8/site-packages/transformers/models/wav2vec2/modeling_wav2vec2.py", line 1347, in forward
extract_features = self.feature_extractor(input_values)
File "/Applications/anaconda3/envs/speechml/lib/python3.8/site-packages/torch/nn/modules/module.py", line 1110, in _call_impl
return forward_call(*input, **kwargs)
File "/Applications/anaconda3/envs/speechml/lib/python3.8/site-packages/transformers/models/wav2vec2/modeling_wav2vec2.py", line 515, in forward
hidden_states = conv_layer(hidden_states)
File "/Applications/anaconda3/envs/speechml/lib/python3.8/site-packages/torch/nn/modules/module.py", line 1110, in _call_impl
return forward_call(*input, **kwargs)
File "/Applications/anaconda3/envs/speechml/lib/python3.8/site-packages/transformers/models/wav2vec2/modeling_wav2vec2.py", line 415, in forward
hidden_states = self.conv(hidden_states)
File "/Applications/anaconda3/envs/speechml/lib/python3.8/site-packages/torch/nn/modules/module.py", line 1110, in _call_impl
return forward_call(*input, **kwargs)
File "/Applications/anaconda3/envs/speechml/lib/python3.8/site-packages/torch/nn/modules/conv.py", line 302, in forward
return self._conv_forward(input, self.weight, self.bias)
File "/Applications/anaconda3/envs/speechml/lib/python3.8/site-packages/torch/nn/modules/conv.py", line 298, in _conv_forward
return F.conv1d(input, weight, bias, self.stride,
RuntimeError: Expected 2D (unbatched) or 3D (batched) input to conv1d, but got input of size: [1, 1, 2, 94417166]
```
## Expected behavior
<!-- A clear and concise description of what you would expect to happen. -->
To print the text that is converted from the audio
| 04-03-2022 01:34:44 | 04-03-2022 01:34:44 | Hello @iamshreeram!
The error actually refers to the shape of the input: it should be `(batch_size, 1, sequence_length)`, meaning that the inputs have to be single-channel arrays (mono audio) while your second file has 2 channels (stereo audio).
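For reference, a rough sketch of getting a single-channel array at the model's 16 kHz sampling rate with librosa (the file name is just an example):
```python
import librosa

# mono=True downmixes the stereo file, sr=16_000 resamples to what wav2vec2 expects
speech, _ = librosa.load("stevejobs-speech.wav", sr=16_000, mono=True)
print(speech.shape)  # (num_samples,) -> a 1-D, single-channel array
```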
But anyways, the model itself won't be able to run on the whole 1.2GB file since it's too long for a single batch (you'll see a memory error). For such long files we have an ASR pipeline with chunked inference, which you can learn about in this tutorial: https://huggingface.co/blog/asr-chunking
The pipeline will handle stereo-to-mono conversion too, so you'll just have to specify an input filename.
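A rough sketch of what that looks like (the chunk and stride values are just examples):
```python
from transformers import pipeline

pipe = pipeline("automatic-speech-recognition", model="facebook/wav2vec2-base-960h")
# chunked inference: the long file is split into overlapping windows under the hood
output = pipe("stevejobs-speech.wav", chunk_length_s=30, stride_length_s=(5, 5))
print(output["text"])
```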
Let me know if it works for you :) <|||||>Hi, I have converted from stereo to mono (400MB) but inference still fails with below error.
```
fish: Job 1, 'python main.py' terminated by signal SIGKILL (Forced quit)
```
I believe the OS is terminating the process due to heavy memory usage. What is the maximum file size for which this inference will work?
<|||||>Hi! Could you please post a code snippet that shows the `pipeline` parameters that you've used for inference here? :)<|||||>Also see docs here: https://huggingface.co/docs/transformers/main/en/main_classes/pipelines#transformers.AutomaticSpeechRecognitionPipeline.chunk_length_s<|||||>Thanks for quick reply. The memory error was while performing inference with logits
```
logits = model(input_values)["logits"]
logits.shape
```
While using `pipeline`, it took almost an hour to process, but the output was empty.
Below is the snippet -
```
from transformers import pipeline
import os
import logging
logging.basicConfig(
format='%(asctime)s %(levelname)-8s %(message)s',
level=logging.INFO,
datefmt='%Y-%m-%d %H:%M:%S')
logging.info("Decoding the audio files..")
pipe = pipeline(model="facebook/wav2vec2-base-960h")
audio_url = "youtube/stevejobs-speech-mono.wav"
logging.info("Starting pipeline..")
output = pipe(audio_url, chunk_length_s=10, stride_length_s=(4, 2))
logging.info("Converted text :", output)
```
Output -
```
2022-04-25 20:36:25 INFO Decoding the audio files..
Downloading: 100%|██████████| 2.05k/2.05k [00:00<00:00, 688kB/s]
Some weights of Wav2Vec2ForCTC were not initialized from the model checkpoint at facebook/wav2vec2-base-960h and are newly initialized: ['wav2vec2.masked_spec_embed']
You should probably TRAIN this model on a down-stream task to be able to use it for predictions and inference.
2022-04-25 20:36:40 INFO Starting pipeline..
2022-04-25 21:25:26 INFO Converted text :
```<|||||>Hey @iamshreeram,
Can you maybe upload the audio of the video to somewhere it can be downloaded?<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored. |
transformers | 16,563 | closed | Error when running "Quick Tour" code snippets | ## Environment info
- `transformers` version: 4.9.2
- Platform: Linux-5.13.0-39-generic-x86_64-with-glibc2.17
- Python version: 3.8.11
- PyTorch version (GPU?): 1.9.1 (True)
- Tensorflow version (GPU?): 2.6.0 (True)
- Flax version (CPU?/GPU?/TPU?): not installed (NA)
- Jax version: not installed
- JaxLib version: not installed
- Using GPU in script?: Yes
- Using distributed or parallel set-up in script?: Parallel
@sgugger @patrickvonplaten @anton-l @Narsil
## Information
Model I am using: wav2vec2
The problem arises when using:
* [x] the official example scripts: (give details below)
* [ ] my own modified scripts: (give details below)
The tasks I am working on is:
* [ ] an official GLUE/SQUaD task: (give the name)
* [x] my own task or dataset: (give details below)
Hey, I'm new to Transformers so pardon me if this issue has an obvious fix I can't think of. I was trying to go through the Quick Tour (https://huggingface.co/docs/transformers/quicktour), and I encountered an error when running the code snippets mentioned there.
## To reproduce
Steps to reproduce the behavior:
```
from transformers import pipeline
import datasets
speech_recognizer = pipeline ("automatic-speech-recognition", model = "facebook/wav2vec2-base-960h" ,device = 0)
dataset = datasets.load_dataset("superb", name ="asr", split = "test")
files = dataset["file"]
speech_recognizer(files[:4])
```
Here's the Stack Trace:
```
---------------------------------------------------------------------------
AssertionError Traceback (most recent call last)
/tmp/ipykernel_16600/2678924457.py in <module>
----> 1 speech_recognizer(files[:4])
~/miniconda3/envs/mytextattack/lib/python3.8/site-packages/transformers/pipelines/automatic_speech_recognition.py in __call__(self, inputs, **kwargs)
131 inputs = ffmpeg_read(inputs, self.feature_extractor.sampling_rate)
132
--> 133 assert isinstance(inputs, np.ndarray), "We expect a numpy ndarray as input"
134 assert len(inputs.shape) == 1, "We expect a single channel audio input for AutomaticSpeechRecognitionPipeline"
135
AssertionError: We expect a numpy ndarray as input
```
I tried mitigating this error by converting the list of filenames to a numpy array, but I seem to get another error that I don't know how to deal with:
```
from transformers import pipeline
import datasets
import numpy as np
speech_recognizer = pipeline ("automatic-speech-recognition", model = "facebook/wav2vec2-base-960h" ,device = 0)
dataset = datasets.load_dataset("superb", name ="asr", split = "test")
files = dataset["file"]
speech_recognizer(np.array(files[:4]))
```
Stack Trace:
```
---------------------------------------------------------------------------
TypeError Traceback (most recent call last)
/tmp/ipykernel_16600/437131926.py in <module>
1 import numpy as np
2
----> 3 speech_recognizer(np.array(files[:4]))
~/miniconda3/envs/mytextattack/lib/python3.8/site-packages/transformers/pipelines/automatic_speech_recognition.py in __call__(self, inputs, **kwargs)
134 assert len(inputs.shape) == 1, "We expect a single channel audio input for AutomaticSpeechRecognitionPipeline"
135
--> 136 processed = self.feature_extractor(
137 inputs, sampling_rate=self.feature_extractor.sampling_rate, return_tensors="pt"
138 )
~/miniconda3/envs/mytextattack/lib/python3.8/site-packages/transformers/models/wav2vec2/feature_extraction_wav2vec2.py in __call__(self, raw_speech, padding, max_length, pad_to_multiple_of, return_attention_mask, return_tensors, sampling_rate, **kwargs)
179 # zero-mean and unit-variance normalization
180 if self.do_normalize:
--> 181 raw_speech = self.zero_mean_unit_var_norm(raw_speech)
182
183 # convert into correct format for padding
~/miniconda3/envs/mytextattack/lib/python3.8/site-packages/transformers/models/wav2vec2/feature_extraction_wav2vec2.py in zero_mean_unit_var_norm(input_values)
84 Every array in the list is normalized to have zero mean and unit variance
85 """
---> 86 return [(x - np.mean(x)) / np.sqrt(np.var(x) + 1e-5) for x in input_values]
87
88 def __call__(
~/miniconda3/envs/mytextattack/lib/python3.8/site-packages/transformers/models/wav2vec2/feature_extraction_wav2vec2.py in <listcomp>(.0)
84 Every array in the list is normalized to have zero mean and unit variance
85 """
---> 86 return [(x - np.mean(x)) / np.sqrt(np.var(x) + 1e-5) for x in input_values]
87
88 def __call__(
<__array_function__ internals> in mean(*args, **kwargs)
~/miniconda3/envs/mytextattack/lib/python3.8/site-packages/numpy/core/fromnumeric.py in mean(a, axis, dtype, out, keepdims, where)
3417 return mean(axis=axis, dtype=dtype, out=out, **kwargs)
3418
-> 3419 return _methods._mean(a, axis=axis, dtype=dtype,
3420 out=out, **kwargs)
3421
~/miniconda3/envs/mytextattack/lib/python3.8/site-packages/numpy/core/_methods.py in _mean(a, axis, dtype, out, keepdims, where)
176 is_float16_result = True
177
--> 178 ret = umr_sum(arr, axis, dtype, out, keepdims, where=where)
179 if isinstance(ret, mu.ndarray):
180 ret = um.true_divide(
TypeError: cannot perform reduce with flexible type
```
I was wondering if someone could provide some insight on how to fix this?
| 04-02-2022 19:23:20 | 04-02-2022 19:23:20 | @srujanjoshi Did you try to upgrade your `transformers` version ? It works as advertised on latest release.
There were some improvements made to the pipeline since `4.9`, including easier integration with datasets.
I would like to add that even if you pass 4 items, it's not batched; you need to use `batch_size` if you want batching (and I don't recommend blindly batching whole files, as padding can become extremely big :)). I mention it because the number `4` seemed to have been intended as a sort of batching method :)
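For instance, a quick sketch of explicit batching (the batch size of 2 is just an example):
```python
# pad and run the 4 files in batches of 2 instead of one forward pass per file
results = speech_recognizer(files[:4], batch_size=2)
```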
Cheers.<|||||>We also just updated the quicktour example @srujanjoshi - could you try out the new example? :-)<|||||>@patrickvonplaten @Narsil
Okay, I updated my transformers version to 4.18. When I run the updated code snippet (listed below):
```python
from transformers import pipeline
import datasets
speech_recognizer = pipeline ("automatic-speech-recognition", model = "facebook/wav2vec2-base-960h" ,device = 0)
dataset = datasets.load_dataset("PolyAI/minds14", name="en-US", split="train")
files = dataset["path"]
speech_recognizer(files[:4])
```
I get the following stack trace:
```
---------------------------------------------------------------------------
FileNotFoundError Traceback (most recent call last)
File ~/miniconda3/envs/transformersenv/lib/python3.8/site-packages/transformers/pipelines/audio_utils.py:32, in ffmpeg_read(bpayload, sampling_rate)
31 try:
---> 32 with subprocess.Popen(ffmpeg_command, stdin=subprocess.PIPE, stdout=subprocess.PIPE) as ffmpeg_process:
33 output_stream = ffmpeg_process.communicate(bpayload)
File ~/miniconda3/envs/transformersenv/lib/python3.8/subprocess.py:858, in Popen.__init__(self, args, bufsize, executable, stdin, stdout, stderr, preexec_fn, close_fds, shell, cwd, env, universal_newlines, startupinfo, creationflags, restore_signals, start_new_session, pass_fds, encoding, errors, text)
855 self.stderr = io.TextIOWrapper(self.stderr,
856 encoding=encoding, errors=errors)
--> 858 self._execute_child(args, executable, preexec_fn, close_fds,
859 pass_fds, cwd, env,
860 startupinfo, creationflags, shell,
861 p2cread, p2cwrite,
862 c2pread, c2pwrite,
863 errread, errwrite,
864 restore_signals, start_new_session)
865 except:
866 # Cleanup if the child failed starting.
File ~/miniconda3/envs/transformersenv/lib/python3.8/subprocess.py:1704, in Popen._execute_child(self, args, executable, preexec_fn, close_fds, pass_fds, cwd, env, startupinfo, creationflags, shell, p2cread, p2cwrite, c2pread, c2pwrite, errread, errwrite, restore_signals, start_new_session)
1703 err_msg = os.strerror(errno_num)
-> 1704 raise child_exception_type(errno_num, err_msg, err_filename)
1705 raise child_exception_type(err_msg)
FileNotFoundError: [Errno 2] No such file or directory: 'ffmpeg'
The above exception was the direct cause of the following exception:
ValueError Traceback (most recent call last)
Input In [37], in <cell line: 1>()
----> 1 speech_recognizer(files[:4])
File ~/miniconda3/envs/transformersenv/lib/python3.8/site-packages/transformers/pipelines/automatic_speech_recognition.py:167, in AutomaticSpeechRecognitionPipeline.__call__(self, inputs, **kwargs)
126 def __call__(
127 self,
128 inputs: Union[np.ndarray, bytes, str],
129 **kwargs,
130 ):
131 """
132 Classify the sequence(s) given as inputs. See the [`AutomaticSpeechRecognitionPipeline`] documentation for more
133 information.
(...)
165 `"".join(chunk["text"] for chunk in output["chunks"])`.
166 """
--> 167 return super().__call__(inputs, **kwargs)
File ~/miniconda3/envs/transformersenv/lib/python3.8/site-packages/transformers/pipelines/base.py:1015, in Pipeline.__call__(self, inputs, num_workers, batch_size, *args, **kwargs)
1011 if can_use_iterator:
1012 final_iterator = self.get_iterator(
1013 inputs, num_workers, batch_size, preprocess_params, forward_params, postprocess_params
1014 )
-> 1015 outputs = [output for output in final_iterator]
1016 return outputs
1017 else:
File ~/miniconda3/envs/transformersenv/lib/python3.8/site-packages/transformers/pipelines/base.py:1015, in <listcomp>(.0)
1011 if can_use_iterator:
1012 final_iterator = self.get_iterator(
1013 inputs, num_workers, batch_size, preprocess_params, forward_params, postprocess_params
1014 )
-> 1015 outputs = [output for output in final_iterator]
1016 return outputs
1017 else:
File ~/miniconda3/envs/transformersenv/lib/python3.8/site-packages/transformers/pipelines/pt_utils.py:111, in PipelineIterator.__next__(self)
108 return self.loader_batch_item()
110 # We're out of items within a batch
--> 111 item = next(self.iterator)
112 processed = self.infer(item, **self.params)
113 # We now have a batch of "inferred things".
File ~/miniconda3/envs/transformersenv/lib/python3.8/site-packages/transformers/pipelines/pt_utils.py:253, in PipelinePackIterator.__next__(self)
250 return accumulator
252 while not is_last:
--> 253 processed = self.infer(next(self.iterator), **self.params)
254 if self.loader_batch_size is not None:
255 if isinstance(processed, torch.Tensor):
File ~/miniconda3/envs/transformersenv/lib/python3.8/site-packages/torch/utils/data/dataloader.py:530, in _BaseDataLoaderIter.__next__(self)
528 if self._sampler_iter is None:
529 self._reset()
--> 530 data = self._next_data()
531 self._num_yielded += 1
532 if self._dataset_kind == _DatasetKind.Iterable and \
533 self._IterableDataset_len_called is not None and \
534 self._num_yielded > self._IterableDataset_len_called:
File ~/miniconda3/envs/transformersenv/lib/python3.8/site-packages/torch/utils/data/dataloader.py:570, in _SingleProcessDataLoaderIter._next_data(self)
568 def _next_data(self):
569 index = self._next_index() # may raise StopIteration
--> 570 data = self._dataset_fetcher.fetch(index) # may raise StopIteration
571 if self._pin_memory:
572 data = _utils.pin_memory.pin_memory(data)
File ~/miniconda3/envs/transformersenv/lib/python3.8/site-packages/torch/utils/data/_utils/fetch.py:32, in _IterableDatasetFetcher.fetch(self, possibly_batched_index)
30 for _ in possibly_batched_index:
31 try:
---> 32 data.append(next(self.dataset_iter))
33 except StopIteration:
34 self.ended = True
File ~/miniconda3/envs/transformersenv/lib/python3.8/site-packages/transformers/pipelines/pt_utils.py:170, in PipelineChunkIterator.__next__(self)
167 self.subiterator = self.infer(next(self.iterator), **self.params)
168 try:
169 # Try to return next item
--> 170 processed = next(self.subiterator)
171 except StopIteration:
172 # When a preprocess iterator ends, we can start lookig at the next item
173 # ChunkIterator will keep feeding until ALL elements of iterator
(...)
176 # Another way to look at it, is we're basically flattening lists of lists
177 # into a single list, but with generators
178 self.subiterator = self.infer(next(self.iterator), **self.params)
File ~/miniconda3/envs/transformersenv/lib/python3.8/site-packages/transformers/pipelines/automatic_speech_recognition.py:191, in AutomaticSpeechRecognitionPipeline.preprocess(self, inputs, chunk_length_s, stride_length_s)
188 inputs = f.read()
190 if isinstance(inputs, bytes):
--> 191 inputs = ffmpeg_read(inputs, self.feature_extractor.sampling_rate)
193 stride = None
194 extra = {}
File ~/miniconda3/envs/transformersenv/lib/python3.8/site-packages/transformers/pipelines/audio_utils.py:35, in ffmpeg_read(bpayload, sampling_rate)
33 output_stream = ffmpeg_process.communicate(bpayload)
34 except FileNotFoundError as error:
---> 35 raise ValueError("ffmpeg was not found but is required to load audio files from filename") from error
36 out_bytes = output_stream[0]
37 audio = np.frombuffer(out_bytes, np.float32)
ValueError: ffmpeg was not found but is required to load audio files from filename
```
Just to clarify (since I updated Transformers) this is the output of running 'transformers-cli env' command:
```
- `transformers` version: 4.18.0
- Platform: Linux-5.13.0-39-generic-x86_64-with-glibc2.17
- Python version: 3.8.13
- Huggingface_hub version: 0.5.1
- PyTorch version (GPU?): 1.11.0+cu113 (True)
- Tensorflow version (GPU?): 2.8.0 (True)
- Flax version (CPU?/GPU?/TPU?): not installed (NA)
- Jax version: not installed
- JaxLib version: not installed
```
<|||||>As the error suggests:
```
ValueError: ffmpeg was not found but is required to load audio files from filename
```
You need to load the files from filename to get a wave representation of sound. `ffmpeg` is leveraged in `transformers` since it covers a super large array of different files.
If you can't install `ffmpeg` for whatever reason, you need to find a way to get those soundfiles into a 1d array at the expected sampling rate of the model (usually 16k Hz). (Everything is taken care of for you if you have `ffmpeg` installed)
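For example, a rough sketch without `ffmpeg` (assuming `librosa` is installed and `speech_recognizer` is the pipeline from the snippets above; the file name is a placeholder):
```python
import librosa

# librosa decodes and resamples to 16 kHz, returning a 1-D float array
speech, _ = librosa.load("audio.wav", sr=16_000, mono=True)
# the ASR pipeline also accepts a dict carrying the raw array and its sampling rate
print(speech_recognizer({"raw": speech, "sampling_rate": 16_000}))
```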
@patrickvonplaten @sgugger should we change that example to avoid relying on `ffmpeg` ? Or should we make it explicit ?
I like doing ASR very simply in this example, also casually dropping to CUDA with a single `device=0` but maybe for a quicktour we want something simpler ? (The first example is a classifier so requires basically nothing).
(For audio without `ffmpeg` we would still rely on `librosa` or `soundfile` both of which also require a library to function (like `libsndfile` , librosa can also use ffmpeg if present)). `libsndfile` is not necessarily always present on all systems<|||||>On another note.
- `superb` is about 6 GB, which might be a little too much for a Quicktour, wdyt ? `datasets` is using `common_voice`, `tr`, which is about 600 MB: https://huggingface.co/docs/datasets/audio_process, maybe better ? (Well, `common_voice`, `en` is super large too, but maybe since we now have a lot of non-English models it could be OK to promote non-English.)
- `datasets` can also do the preprocessing (without the resampling) using `librosa`. So we can write `pip install datasets[audio]`, the `pipeline` can do the resampling using `torchaudio` which is included in the docs for audio in `datasets`. It is not optimal since it does require another resampling (while `ffmpeg` resamples as it decompresses and reads the file).
In the Quicktour, making things as simple as possible trumps the "no resampling" concern, but it's also good if we can avoid promoting slightly incorrect usages.
So to put it more simply:
```python
!sudo apt-get install libsndfile
pip install datasets[audio]
pip install torchaudio librosa
from transformers import pipeline
import datasets
speech_recognizer = pipeline ("automatic-speech-recognition", model = "facebook/wav2vec2-base-960h" ,device = 0)
dataset = datasets.load_dataset("common_voice", name="tr", split="train")
files = dataset["audio"]
for f in files[:4]:
# Pipeline expects the `raw` key for its array while `datasets` uses `array`, we can make a change
# to accept both in `pipeline` since dicts are exclusively used for raw arrays and `sampling_rate` is already the correct
# name in both.
f["raw"] = f.pop("array")
print(speech_recognizer(files[:4]))
```
Pros:
- Easier on windows (where installing sndfile is not necessary I think)
Cons:
- Resampling under the hood
```python
!sudo apt-get install ffmpeg
pip install datasets
from transformers import pipeline
import datasets
speech_recognizer = pipeline ("automatic-speech-recognition", model = "facebook/wav2vec2-base-960h" ,device = 0)
dataset = datasets.load_dataset("common_voice", name="tr", split="train")
files = dataset["path"]
print(speech_recognizer(files[:4]))
```
Pros:
- No resampling
Cons:
- Installing ffmpeg is not simple necessarily on Windows (especially since we need it in the PATH to use it)
<|||||>Hey @Narsil,
I changed the quicktour example from `superb` to `minds14` which is only some hundred MB (PR is already merged).
https://huggingface.co/datasets/PolyAI/minds14 also has the `.wav` format which means we wouldn't need ffmpeg here.
Will open a PR to change the dependency on `ffmpeg` - fully agree with you @Narsil !<|||||>PR to avoid `ffmpeg` and to make clear that resampling is needed in 90% of the cases: https://github.com/huggingface/transformers/pull/16723<|||||>Thank you guys for the detailed feedback on my issue!
Really appreciate it, especially as someone who was new to the Transformers library. |
transformers | 16,562 | closed | Add Luke to Onnx | # What does this PR do?
I added lines to make Luke models available for Onnx conversion.
## Who can review?
- albert, bert, xlm: @LysandreJik
adding @NielsRogge since he worked on luke | 04-02-2022 07:06:53 | 04-02-2022 07:06:53 | The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_16562). All of your documentation changes will be reflected on that endpoint.<|||||>
> ```
> RUN_SLOW=1 pytest tests/onnx/test_onnx_v2.py -k "luke"
> ```
I have all requirements installed and yet, I get the following output when I run the slow test:
```
RUN_SLOW=1 pytest tests/onnx/test_onnx_v2.py -k "luke"
================= test session starts==============================
platform darwin -- Python 3.9.12, pytest-7.1.1, pluggy-1.0.0
rootdir: /Users/aakash/work/transformers, configfile: setup.cfg
collected 141 items / 141 deselected / 0 selected
```
What am I missing?
<|||||>You need to add the luke architecture to be tested in `tests/onnx/test_onnx_v2.py`.
Currently you are running those tests and filtering for the ones containing "luke" in their names, but none is found.<|||||>the slow tests resulted in the following errors:
[luke_onnx_test.log](https://github.com/huggingface/transformers/files/8608221/luke_onnx_test.log)
<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored.<|||||>@lewtun could you revive this PR? Seems like it was already in a good state.<|||||>Hey @aakashb95 sorry for the slow reply!
The reason your tests are failing is because:
* LUKE can't be loaded with `AutoModelForMaskedLM`. To solve this, please rebase your branch on `main` to include the recent changes in https://github.com/huggingface/transformers/pull/17499
* You need to import the `LukeForXxx` PyTorch classes in `features.py` [here](https://github.com/huggingface/transformers/pull/16562/files#diff-f4bad33d844c5d91b09dd0af27395ade640e907920b357daf125dd19458bdd60L38)
* You need to specify the outputs for the new features in `config.py` in the `_tasks_to_common_outputs` dictionary, e.g. add this:
```python
"entity-classification": OrderedDict({"logits": {0: "batch"}}),
"entity-pair-classification": OrderedDict({"logits": {0: "batch"}}),
"entity-span-classification": OrderedDict({"logits": {0: "batch"}}),
```
After that, I think this PR should be in a good shape and the tests will pass :)<|||||>Thanks for reviving this PR!
I have incorporated the changes you mentioned and run the slow tests.
The tests fail with errors similar to the one below:
```log
self = <tests.onnx.test_onnx_v2.OnnxExportTestCaseV2 testMethod=test_pytorch_export_071_luke_entity_pair_classification>, test_name = 'luke_entity-pair-classification'
name = 'luke', model_name = 'studio-ousia/luke-base', feature = 'entity-pair-classification'
onnx_config_class_constructor = functools.partial(<bound method OnnxConfig.from_model_config of <class 'transformers.models.luke.configuration_luke.LukeOnnxConfig'>>, task='entity-pair-classification')
device = 'cpu'
def _onnx_export(self, test_name, name, model_name, feature, onnx_config_class_constructor, device="cpu"):
from transformers.onnx import export
model_class = FeaturesManager.get_model_class_for_feature(feature)
config = AutoConfig.from_pretrained(model_name)
> model = model_class.from_config(config)
E AttributeError: type object 'LukeForEntityPairClassification' has no attribute 'from_config'
tests/onnx/test_onnx_v2.py:266: AttributeError
```
If my understanding is correct, the `AutoModelFor*` classes have `from_config` implemented and since `luke` is unique, a `from_config` would have to be implemented for corresponding `LukeFor*` classes, right?
Attaching logs for reference:
[luke_onnx_slow_test.log](https://github.com/huggingface/transformers/files/8884467/slow_test.log)
<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored.<|||||>cc'ing @lewtun here<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored. |
transformers | 16,561 | closed | t5-large model OOM with FP16, but runs well without FP16 | ## Environment info
<!-- You can run the command `transformers-cli env` and copy-and-paste its output below.
Don't forget to fill out the missing fields in that output! -->
Copy-and-paste the text below in your GitHub issue and FILL OUT the two last points.
- `transformers` version: 4.17.0
- Platform: Linux-5.4.0-91-generic-x86_64-with-glibc2.10
- Python version: 3.8.12
- PyTorch version (GPU?): 1.11.0a0+17540c5 (True)
- Tensorflow version (GPU?): not installed (NA)
- Flax version (CPU?/GPU?/TPU?): not installed (NA)
- Jax version: not installed
- JaxLib version: not installed
- Using GPU in script?: Yes, V100(16GB)
- Using distributed or parallel set-up in script?: Yes, 2 GPUs
### Who can help
@stas00
## Information
Model I am using (Bert, XLNet ...): T5-large
The problem arises when using:
* [ ] the official example scripts: (give details below)
* [x] my own modified scripts: (give details below)
The tasks I am working on is:
* [x] an official GLUE/SQUaD task: (give the name)
* [ ] my own task or dataset: (give details below)
## To reproduce
I refered to https://github.com/huggingface/transformers/issues/8771#issuecomment-759248400
| Method | max BS |
| --- | --- |
| baseline | 2 (4 will OOM) |
| fp16 | 1 will OOM |
Steps to reproduce the behavior:
```
git clone https://github.com/huggingface/transformers
cd transformers/examples/legacy/seq2seq
wget https://cdn-datasets.huggingface.co/translation/wmt_en_ro.tar.gz
tar -xzvf wmt_en_ro.tar.gz
```
Baseline script:
```
export BS=2; rm -r output_dir; \
PYTHONPATH=../../src USE_TF=0 python -m torch.distributed.launch \
--nproc_per_node=2 ./finetune_trainer.py --model_name_or_path t5-large \
--output_dir output_dir --adam_eps 1e-06 --data_dir wmt_en_ro --do_eval \
--do_train --evaluation_strategy=steps --freeze_embeds --label_smoothing 0.1 \
--learning_rate 3e-5 --logging_first_step --logging_steps 1000 \
--max_source_length 128 --max_target_length 128 --num_train_epochs 1 \
--overwrite_output_dir --per_device_eval_batch_size $BS \
--per_device_train_batch_size $BS --predict_with_generate --eval_steps 25000 \
--sortish_sampler --task translation_en_to_ro --test_max_target_length 128 \
--val_max_target_length 128 --warmup_steps 500 --n_train 2000 --n_val 500
```
FP16 script:
```
export BS=1; rm -r output_dir; \
PYTHONPATH=../../src USE_TF=0 python -m torch.distributed.launch \
--nproc_per_node=2 ./finetune_trainer.py --model_name_or_path t5-large \
--output_dir output_dir --adam_eps 1e-06 --data_dir wmt_en_ro --do_eval \
--do_train --evaluation_strategy=steps --freeze_embeds --label_smoothing 0.1 \
--learning_rate 3e-5 --logging_first_step --logging_steps 1000 \
--max_source_length 128 --max_target_length 128 --num_train_epochs 1 \
--overwrite_output_dir --per_device_eval_batch_size $BS \
--per_device_train_batch_size $BS --predict_with_generate --eval_steps 25000 \
--sortish_sampler --task translation_en_to_ro --test_max_target_length 128 \
--val_max_target_length 128 --warmup_steps 500 --n_train 2000 --n_val 500 \
--fp16
```
<!-- If you have code snippets, error messages, stack traces please provide them here as well.
Important! Use code tags to correctly format your code. See https://help.github.com/en/github/writing-on-github/creating-and-highlighting-code-blocks#syntax-highlighting
Do not use screenshots, as they are hard to read and (more importantly) don't allow others to copy-and-paste your code.-->
```
Traceback (most recent call last):
File "./finetune_trainer.py", line 367, in <module>
main()
File "./finetune_trainer.py", line 304, in main
train_result = trainer.train(
File "/opt/conda/lib/python3.8/site-packages/transformers/trainer.py", line 1400, in train
tr_loss_step = self.training_step(model, inputs)
File "/opt/conda/lib/python3.8/site-packages/transformers/trainer.py", line 1984, in training_step
loss = self.compute_loss(model, inputs)
File "/workspace/huggingface/transformers/examples/legacy/seq2seq/seq2seq_trainer.py", line 180, in compute_loss
loss, _ = self._compute_loss(model, inputs, labels)
File "/workspace/huggingface/transformers/examples/legacy/seq2seq/seq2seq_trainer.py", line 173, in _compute_loss
logits = model(**inputs, use_cache=False)[0]
File "/opt/conda/lib/python3.8/site-packages/torch/nn/modules/module.py", line 1110, in _call_impl
return forward_call(*input, **kwargs)
File "/opt/conda/lib/python3.8/site-packages/torch/nn/parallel/distributed.py", line 930, in forward
output = self.module(*inputs[0], **kwargs[0])
File "/opt/conda/lib/python3.8/site-packages/torch/nn/modules/module.py", line 1110, in _call_impl
return forward_call(*input, **kwargs)
File "/opt/conda/lib/python3.8/site-packages/transformers/models/t5/modeling_t5.py", line 1635, in forward
decoder_outputs = self.decoder(
File "/opt/conda/lib/python3.8/site-packages/torch/nn/modules/module.py", line 1110, in _call_impl
return forward_call(*input, **kwargs)
File "/opt/conda/lib/python3.8/site-packages/transformers/models/t5/modeling_t5.py", line 1030, in forward
layer_outputs = layer_module(
File "/opt/conda/lib/python3.8/site-packages/torch/nn/modules/module.py", line 1110, in _call_impl
return forward_call(*input, **kwargs)
File "/opt/conda/lib/python3.8/site-packages/transformers/models/t5/modeling_t5.py", line 717, in forward
hidden_states = self.layer[-1](hidden_states)
File "/opt/conda/lib/python3.8/site-packages/torch/nn/modules/module.py", line 1110, in _call_impl
return forward_call(*input, **kwargs)
File "/opt/conda/lib/python3.8/site-packages/transformers/models/t5/modeling_t5.py", line 327, in forward
forwarded_states = self.DenseReluDense(forwarded_states)
File "/opt/conda/lib/python3.8/site-packages/torch/nn/modules/module.py", line 1110, in _call_impl
return forward_call(*input, **kwargs)
File "/opt/conda/lib/python3.8/site-packages/transformers/models/t5/modeling_t5.py", line 285, in forward
hidden_states = self.wi(hidden_states)
File "/opt/conda/lib/python3.8/site-packages/torch/nn/modules/module.py", line 1110, in _call_impl
return forward_call(*input, **kwargs)
File "/opt/conda/lib/python3.8/site-packages/torch/nn/modules/linear.py", line 103, in forward
return F.linear(input, self.weight, self.bias)
File "/opt/conda/lib/python3.8/site-packages/torch/nn/functional.py", line 1971, in linear
return torch._C._nn.linear(input, weight, bias)
RuntimeError: CUDA out of memory. Tried to allocate 20.00 MiB (GPU 1; 15.78 GiB total capacity; 14.58 GiB already allocated; 7.75 MiB free; 14.69 GiB reserved in total by PyTorch) If reserved memory is >> allocated memory try setting max_split_size_mb to avoid fragmentation. See documentation for Memory Management and PYTORCH_CUDA_ALLOC_CONF
9%|▉         | 89/1000 [00:31<05:20, 2.84it/s]
ERROR:torch.distributed.elastic.multiprocessing.api:failed (exitcode: 1) local_rank: 0 (pid: 2260) of binary: /opt/conda/bin/python
```
(edited by @stas00 to wrap long cmd lines to make it easier to read the Issue) | 04-02-2022 04:49:46 | 04-02-2022 04:49:46 | 1. Before we can proceed, any chance you could port your code to the modern version of finetune_trainer - this code is very old and is no longer being tested or supported and is available only for those who still have to use it.
I remember there were a few stages to the transition, I tried to keep track of the changes here: https://github.com/huggingface/transformers/issues/10036 if it helps.
2. In general fp16 mixed precision doesn't always save memory over straight fp32, since you actually need to have even more memory to allocate both fp16 and fp32 weights, and then some operations use less memory under amp than fp32. You will find the full explanation and breakdown here: https://huggingface.co/docs/transformers/performance#fp16
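To make that concrete, here is a back-of-the-envelope sketch for t5-large (roughly 0.74B parameters) with AdamW; the per-parameter costs are rough approximations and ignore activations, buffers and fragmentation:
```python
n_params = 737e6  # approximate t5-large parameter count

# weights + gradients + AdamW states (fp32 everywhere)
fp32_training = n_params * (4 + 4 + 8)
# fp32 master weights + fp16 copy + gradients + AdamW states
mixed_precision = n_params * (4 + 2 + 4 + 8)

print(f"fp32 states: ~{fp32_training / 2**30:.1f} GiB")
print(f"amp states:  ~{mixed_precision / 2**30:.1f} GiB")
```
So before any activations, mixed precision can actually need a bit more fixed memory than fp32; the savings mostly come from activations and faster math.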
If you want to use a bigger batch size, use deepspeed with cpu memory offload. You can see the stats in the comment you linked to.<|||||>Thanks @stas00, I will try the new code.<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored. |
transformers | 16,560 | closed | add progress bar to eval loop | # What does this PR do?
<!--
Congratulations! You've made it this far! You're not quite done yet though.
Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution.
Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change.
Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost.
-->
<!-- Remove if not applicable -->
Fixes # (issue)
## Before submitting
- [x] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
<!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @
If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**.
Please tag fewer than 3 people.
Models:
- albert, bert, xlm: @LysandreJik
- blenderbot, bart, marian, pegasus, encoderdecoder, t5: @patrickvonplaten, @patil-suraj
- longformer, reformer, transfoxl, xlnet: @patrickvonplaten
- fsmt: @stas00
- funnel: @sgugger
- gpt2: @patrickvonplaten, @LysandreJik
- rag: @patrickvonplaten, @lhoestq
- tensorflow: @LysandreJik
Library:
- benchmarks: @patrickvonplaten
- deepspeed: @stas00
- ray/raytune: @richardliaw, @amogkam
- text generation: @patrickvonplaten
- tokenizers: @n1t0, @LysandreJik
- trainer: @sgugger
- pipelines: @LysandreJik
Documentation: @sgugger
HF projects:
- datasets: [different repo](https://github.com/huggingface/datasets)
- rust tokenizers: [different repo](https://github.com/huggingface/tokenizers)
Examples:
- maintained examples (not research project or legacy): @sgugger, @patil-suraj
- research_projects/bert-loses-patience: @JetRunner
- research_projects/distillation: @VictorSanh
-->
| 04-02-2022 04:14:13 | 04-02-2022 04:14:13 | The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_16560). All of your documentation changes will be reflected on that endpoint.<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored. |
transformers | 16,559 | closed | can't empty cache if torch not imported | # What does this PR do?
<!--
Congratulations! You've made it this far! You're not quite done yet though.
Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution.
Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change.
Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost.
-->
<!-- Remove if not applicable -->
Fixes # (issue)
## Before submitting
- [x] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
<!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @
If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**.
Please tag fewer than 3 people.
Models:
- albert, bert, xlm: @LysandreJik
- blenderbot, bart, marian, pegasus, encoderdecoder, t5: @patrickvonplaten, @patil-suraj
- longformer, reformer, transfoxl, xlnet: @patrickvonplaten
- fsmt: @stas00
- funnel: @sgugger
- gpt2: @patrickvonplaten, @LysandreJik
- rag: @patrickvonplaten, @lhoestq
- tensorflow: @LysandreJik
Library:
- benchmarks: @patrickvonplaten
- deepspeed: @stas00
- ray/raytune: @richardliaw, @amogkam
- text generation: @patrickvonplaten
- tokenizers: @n1t0, @LysandreJik
- trainer: @sgugger
- pipelines: @LysandreJik
Documentation: @sgugger
HF projects:
- datasets: [different repo](https://github.com/huggingface/datasets)
- rust tokenizers: [different repo](https://github.com/huggingface/tokenizers)
Examples:
- maintained examples (not research project or legacy): @sgugger, @patil-suraj
- research_projects/bert-loses-patience: @JetRunner
- research_projects/distillation: @VictorSanh
-->
| 04-02-2022 04:03:55 | 04-02-2022 04:03:55 | The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_16559). All of your documentation changes will be reflected on that endpoint.<|||||>Sorry for taking so long to review;
Could you just run the code quality tools to ensure that the code quality checks pass? You can install them with the following, from the root of your clone:
```
pip install -e ".[quality]"
```
And then run them with:
```
make fixup
```<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored. |
transformers | 16,558 | closed | no such thing as pytorch_model | # What does this PR do?
<!--
Congratulations! You've made it this far! You're not quite done yet though.
Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution.
Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change.
Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost.
-->
<!-- Remove if not applicable -->
Fixes # (issue)
## Before submitting
- [x] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
<!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @
If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**.
Please tag fewer than 3 people.
Models:
- albert, bert, xlm: @LysandreJik
- blenderbot, bart, marian, pegasus, encoderdecoder, t5: @patrickvonplaten, @patil-suraj
- longformer, reformer, transfoxl, xlnet: @patrickvonplaten
- fsmt: @stas00
- funnel: @sgugger
- gpt2: @patrickvonplaten, @LysandreJik
- rag: @patrickvonplaten, @lhoestq
- tensorflow: @LysandreJik
Library:
- benchmarks: @patrickvonplaten
- deepspeed: @stas00
- ray/raytune: @richardliaw, @amogkam
- text generation: @patrickvonplaten
- tokenizers: @n1t0, @LysandreJik
- trainer: @sgugger
- pipelines: @LysandreJik
Documentation: @sgugger
HF projects:
- datasets: [different repo](https://github.com/huggingface/datasets)
- rust tokenizers: [different repo](https://github.com/huggingface/tokenizers)
Examples:
- maintained examples (not research project or legacy): @sgugger, @patil-suraj
- research_projects/bert-loses-patience: @JetRunner
- research_projects/distillation: @VictorSanh
-->
| 04-02-2022 03:36:12 | 04-02-2022 03:36:12 | The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_16558). All of your documentation changes will be reflected on that endpoint.<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored. |
transformers | 16,557 | closed | Improve PT/TF equivalence test | # What does this PR do?
Improve PT/TF equivalence test.
To make the review a bit easier for you, I made some comments. Here is a summary of the changes:
- The `test_pt_tf_model_equivalence` overrides in TensorFlow `LED` and `CLIP` are removed: the common test can handle them.
- The `test_pt_tf_model_equivalence` overrides in TensorFlow `LXMERT` and `ViTMAE` are removed: we only need to overwrite
- `prepare_pt_inputs_from_tf_inputs` for `LXMERT`
- `check_pt_tf_models` for `ViTMAE`
- Main changes in `TFModelTesterMixin.test_pt_tf_model_equivalence`
- restructure the code into components, so they could be overwritten separately instead of the whole big block
- move some ugly (temporary) logic blocks outside:
- `_make_attention_mask_non_null`
- `_postprocessing_to_ignore_test_cases`
- About `check_pt_tf_outputs`:
- it can now handle instances of `ModelOutput` (needed for the CLIP model)
- better failure message: print the tensor name where the large diff between PT/TF occurs, like `output.hidden_states` or `output.text_model_output.attentions_1`
- A better way to handle cases where the PT/TF outputs have different keys: we test the output values for the keys common to both outputs (a rough sketch of this kind of recursive comparison is shown below).
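For illustration only — this is not the actual `check_pt_tf_outputs` code in the PR, and the helper name, tolerance, and structure are placeholders — such a recursive comparison over nested outputs could look roughly like this:
```python
# Rough sketch: recursively walk PT/TF outputs, comparing only common keys and
# reporting the name of the tensor where a large difference occurs.
import numpy as np


def check_outputs(tf_output, pt_output, name="output", tol=1e-5):
    # Dict-like outputs (e.g. ModelOutput): compare only keys present on both sides.
    if hasattr(tf_output, "keys") and hasattr(pt_output, "keys"):
        common_keys = [key for key in tf_output.keys() if key in pt_output.keys()]
        for key in common_keys:
            check_outputs(tf_output[key], pt_output[key], name=f"{name}.{key}", tol=tol)
    # Tuples/lists of tensors (e.g. hidden_states, attentions): compare element-wise.
    elif isinstance(tf_output, (tuple, list)):
        for idx, (tf_item, pt_item) in enumerate(zip(tf_output, pt_output)):
            check_outputs(tf_item, pt_item, name=f"{name}_{idx}", tol=tol)
    # Leaf case: a pair of tensors, compared via their max absolute difference.
    else:
        tf_array = tf_output.numpy()
        pt_array = pt_output.detach().cpu().numpy()
        max_diff = np.amax(np.abs(tf_array - pt_array))
        assert max_diff <= tol, f"{name}: max difference {max_diff} exceeds tolerance {tol}"
```
The `name` argument is what produces failure messages such as `output.hidden_states` or `output.text_model_output.attentions_1`.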
Once this PR is approved/merged:
- To work on the same PT/TF equivalence test on PT side (should be very quick)
- To apply the same logic to PT/Flax equivalence test, both on Flax and PT sides. | 04-01-2022 21:31:56 | 04-01-2022 21:31:56 | _The documentation is not available anymore as the PR was closed or merged._<|||||>(just rebase on main - no real change since your last review)<|||||>Merge now. Don't hesitate to leave comments in any :-) |
transformers | 16,556 | closed | Fix flax import in `__init__.py`: `modeling_xglm -> modeling_flax_xglm` | # What does this PR do?
This PR fixes an import statement in the `src/transformers/models/xglm/__init__.py` file: `modeling_xglm -> modeling_flax_xglm`.
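For context, the corrected lazy-import mapping would look roughly like the simplified sketch below; the class list and surrounding structure are abridged and only illustrative of the real file, which uses relative imports:
```python
# Simplified, illustrative sketch of the lazy-import mapping in
# src/transformers/models/xglm/__init__.py (abridged; not the full file).
from transformers.utils import is_flax_available

_import_structure = {"configuration_xglm": ["XGLMConfig"]}

if is_flax_available():
    # The fix: the key must name the Flax module "modeling_flax_xglm",
    # not the PyTorch module "modeling_xglm".
    _import_structure["modeling_flax_xglm"] = [
        "FlaxXGLMModel",
        "FlaxXGLMForCausalLM",
        "FlaxXGLMPreTrainedModel",
    ]
```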
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
<!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @
If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**.
Please tag fewer than 3 people.
Models:
- albert, bert, xlm: @LysandreJik
- blenderbot, bart, marian, pegasus, encoderdecoder, t5: @patrickvonplaten, @patil-suraj
- longformer, reformer, transfoxl, xlnet: @patrickvonplaten
- fsmt: @stas00
- funnel: @sgugger
- gpt2: @patrickvonplaten, @LysandreJik
- rag: @patrickvonplaten, @lhoestq
- tensorflow: @LysandreJik
Library:
- benchmarks: @patrickvonplaten
- deepspeed: @stas00
- ray/raytune: @richardliaw, @amogkam
- text generation: @patrickvonplaten
- tokenizers: @n1t0, @LysandreJik
- trainer: @sgugger
- pipelines: @LysandreJik
Documentation: @sgugger
HF projects:
- datasets: [different repo](https://github.com/huggingface/datasets)
- rust tokenizers: [different repo](https://github.com/huggingface/tokenizers)
Examples:
- maintained examples (not research project or legacy): @sgugger, @patil-suraj
- research_projects/bert-loses-patience: @JetRunner
- research_projects/distillation: @VictorSanh
-->
@patil-suraj | 04-01-2022 17:50:23 | 04-01-2022 17:50:23 | _The documentation is not available anymore as the PR was closed or merged._ |
transformers | 16,555 | closed | Adding missing type hints for BigBird model | Added the missing type hints to the PyTorch implementation of the BigBird model
## What does this PR do?
Added type hints for BigBird PyTorch as described in https://github.com/huggingface/transformers/issues/16059
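As a generic illustration of this kind of change (not the exact BigBird signatures), annotating a model's `forward` typically looks like the abridged sketch below; the toy class, argument list, and placeholder computation are hypothetical:
```python
# Generic, abridged illustration of adding type hints to a forward signature;
# not the actual BigBird code.
from typing import Optional, Tuple, Union

import torch
from torch import nn
from transformers.modeling_outputs import BaseModelOutput


class ToyEncoder(nn.Module):
    def forward(
        self,
        input_ids: Optional[torch.LongTensor] = None,
        attention_mask: Optional[torch.FloatTensor] = None,
        output_attentions: Optional[bool] = None,
        return_dict: Optional[bool] = None,
    ) -> Union[Tuple, BaseModelOutput]:
        # Placeholder computation, only to keep the example self-contained.
        hidden_states = torch.zeros(1, 1, 8)
        if return_dict is False:
            return (hidden_states,)
        return BaseModelOutput(last_hidden_state=hidden_states)
```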
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
@Rocketknight1
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
| 04-01-2022 17:17:25 | 04-01-2022 17:17:25 | _The documentation is not available anymore as the PR was closed or merged._<|||||>This looks great, thank you! And sorry for the delay in reviewing it. |
transformers | 16,554 | closed | Added new Spanish translation of autoclass_tutorial | # Translation of autoclass_tutorial.mdx into Spanish
I made the translation of autoclass_tutorial.mdx into Spanish (fixes #15947). The document is located in the transformers/docs/source_es folder.
This PR is linked to #16348, which originally had some merge conflicts.
FYI @omarespejel
## Before submitting
- [X] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [X] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
| 04-01-2022 17:13:29 | 04-01-2022 17:13:29 | _The documentation is not available anymore as the PR was closed or merged._<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored.<|||||>Hi @Duedme, thank you for your new commits. Could you please add `autoclass_tutorial` to `../transformers/docs/source/es/_toctree.yml`, according to the [new Translation](https://github.com/huggingface/transformers/blob/main/docs/TRANSLATING.md) guide? This would allow the tests to pass.
|
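An entry of the kind requested in the review comment above might look roughly like this in `docs/source/es/_toctree.yml`; the neighbouring entries and the Spanish title wording below are placeholders, not the real file contents:
```yaml
# Illustrative sketch only: surrounding entries and titles are placeholders.
- sections:
  - local: quicktour
    title: Tour rápido
  - local: autoclass_tutorial
    title: Carga instancias preentrenadas con un AutoClass
  title: Tutoriales
```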