repo | number | state | title | body | created_at | closed_at | comments
---|---|---|---|---|---|---|---|
transformers | 15,446 | closed | [generate] fix synced_gpus default | The default for `synced_gpus` should be `False` - it's already correctly documented as such.
@sgugger | 01-31-2022 19:15:47 | 01-31-2022 19:15:47 | _The documentation is not available anymore as the PR was closed or merged._ |
transformers | 15,445 | closed | skip large generations pipeline test for XGLM | # What does this PR do?
Skip large generations pipeline test for XGLM since it uses sinusoidal positional embeddings which are resized on the fly. | 01-31-2022 19:09:33 | 01-31-2022 19:09:33 | _The documentation is not available anymore as the PR was closed or merged._<|||||>Can we just add some TODO to fix better later in the comment in that case?<|||||>Merging to get the CI green. |
transformers | 15,444 | closed | [M2M100, XGLM] fix positional emb resize | # What does this PR do?
The sinusoidal position embeddings in M2M100 and XGLM are not correctly resized as the `max_pos` is computed incorrectly. This PR takes the `past_key_values_length` into account when computing `max_pos`. | 01-31-2022 18:56:55 | 01-31-2022 18:56:55 | _The documentation is not available anymore as the PR was closed or merged._<|||||>Looks good to me - a bit surprising to me that no one saw this earlier |
transformers | 15,443 | closed | Fix FlaxDataCollatorForT5MLM for texts with "<pad>" | If the input text contains `"<pad>"` (true for C4/en dataset) T5 tokenizer assigns it index 0.
Collator code (line 371) assumes that only tokens with `id > 0` are the text tokens and the rest are sentinel ids (`-1`), which turns out not to be true for this particular case.
This causes the line to throw an error similar to
```
input_ids = input_ids_full[input_ids_full > 0].reshape((batch_size, -1))
*** ValueError: cannot reshape array of size 1040383 into shape (8192,newaxis)
```
This modification should also allow the use of padding in T5 pre-training.
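For illustration, here is a minimal reconstruction of the failing assumption (the array values and names are made up for the example; this is not the collator's exact code):
```python
import numpy as np

batch_size = 2
# -1 marks sentinel positions; the collator assumes every real text token has id > 0.
input_ids_full = np.array([[5, 9, -1, 3, 7, -1],
                           [0, 4, -1, 8, 2, -1]])  # 0 = "<pad>" appearing in the raw text

kept = input_ids_full[input_ids_full > 0]
print(kept.shape)  # (7,) instead of the 8 elements that would reshape cleanly into (2, -1)
# kept.reshape((batch_size, -1))  # ValueError: cannot reshape array of size 7 into shape (2,newaxis)
```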
@patil-suraj (note edited by @stas00 to tag the correct maintainer for flax) | 01-31-2022 18:53:50 | 01-31-2022 18:53:50 | The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_15443). All of your documentation changes will be reflected on that endpoint.<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored.<|||||>@patil-suraj, could you please review this PR? It's very short<|||||>Oh we have a similar PR here from @patrickvonplaten
https://github.com/huggingface/transformers/pull/15835<|||||>Great! |
transformers | 15,442 | closed | Misfiring tf warnings | One last misfiring TF warning switched to `tf.print`. | 01-31-2022 18:38:57 | 01-31-2022 18:38:57 | _The documentation is not available anymore as the PR was closed or merged._<|||||>I think this is the last misfiring warning (this one is usually triggered in Seq2Seq models). I'm a little worried that there's some use-case where this will actually cause console spam instead, but I think it's safe.<|||||>Aaa, wait! In some examples this is actually much worse. I believe the warning isn't even correct/necessary anymore, though - I'm doing some testing to see if it even makes sense with modern TF.<|||||>After testing, I don't think this warning even makes sense anymore - I had no problems changing the values of these arguments at call-time instead of model initialization. I presume this dates from a time when TensorFlow had much more trouble with function retracing. I removed the warning entirely. |
transformers | 15,441 | closed | [XGLMTokenizer] correct positional emb size | # What does this PR do?
Correct the `PRETRAINED_POSITIONAL_EMBEDDINGS_SIZES`, 1024 => 2048 | 01-31-2022 18:14:39 | 01-31-2022 18:14:39 | _The documentation is not available anymore as the PR was closed or merged._ |
transformers | 15,440 | closed | tokenizer truncation_side is not set up with from_pretrained call | ## Environment info
- `transformers` version: 4.16.1
### Who can help
Library:
- Tokenizers: @SaulLu
- Pipelines: @Narsil
## Information
Model I am using (Bert, XLNet ...): **GPT2Tokenizer**
The problem arises when using:
* [ ] the official example scripts: (give details below)
* [x] my own modified scripts: (give details below)
The tasks I am working on is:
* [ ] an official GLUE/SQUaD task: (give the name)
* [x] my own task or dataset: (give details below)
## To reproduce
```
tokenizer = GPT2Tokenizer.from_pretrained("gpt2", truncation_side="left")
print(tokenizer.truncation_side)
```
```
right
```
## Expected behavior
```
left
```
## Possible solution
I believe the problem is in the missing part at `tokenization_utils_base.py` (just like the one for the padding side at https://github.com/huggingface/transformers/blob/master/src/transformers/tokenization_utils_base.py#L1457-L1462)
```
# Truncation side is right by default and overridden in subclasses. If specified in the kwargs, it is changed.
self.truncation_side = kwargs.pop("truncation_side", self.truncation_side)
assert self.truncation_side in [
"right",
"left",
], f"Truncation side should be selected between 'right' and 'left', current value: {self.truncation_side}"
```
Feel free to add this solution if it is good enough without writing tests, as I don't have transformers dev environment ready for PR right now.
| 01-31-2022 18:12:03 | 01-31-2022 18:12:03 | Thanks for the detailed issue @LSinev ! Indeed, this seems to be a desirable change!
@Narsil, it looks like we doubled the work, we should choose one of the 2 PRs. :relaxed: <|||||>Haha no worries @SaulLu .
Whichever you prefer as a PR, I don't have a real preference for the test choice. (Yours has much more coverage, mine is much simpler)<|||||>Thank you! Personally, I think it would be good to test the behavior on as many tokenizer classes as possible, so we'll start with mine?
I will of course put both of you as co-authors.<|||||>Go right ahead, I'll close my PR. |
transformers | 15,439 | closed | Change REALM checkpoint to new ones | # What does this PR do?
This moves all REALM checkpoints to the Google organization as they have been moved here. | 01-31-2022 17:28:48 | 01-31-2022 17:28:48 | _The documentation is not available anymore as the PR was closed or merged._ |
transformers | 15,438 | closed | Harder check for IndexErrors in QA scripts | # What does this PR do?
This adds a fallback: when we get an `IndexError` for some obscure reason while trying to get the offsets of a given feature, the example is skipped instead of being added.
Fixes #15401 | 01-31-2022 17:16:37 | 01-31-2022 17:16:37 | _The documentation is not available anymore as the PR was closed or merged._ |
transformers | 15,437 | closed | Error when group_by_length is used with an IterableDataset | # What does this PR do?
Since `group_by_length` is not supported for `IterableDataset`s, we should raise an error at the trainer init when we detect that arg is set and the training set is an `IterableDataset`.
Fixes #15418 | 01-31-2022 16:57:23 | 01-31-2022 16:57:23 | _The documentation is not available anymore as the PR was closed or merged._ |
transformers | 15,436 | closed | use mean instead of elementwise_mean in XLMPredLayer | # What does this PR do?
Super tiny change:
`XLMPredLayer` currently uses
```
loss = nn.functional.cross_entropy(
scores.view(-1, self.n_words), y.view(-1), reduction="elementwise_mean"
)
```
I believe `elementwise_mean` is deprecated:
https://github.com/pytorch/pytorch/pull/13419
https://pytorch.org/docs/stable/generated/torch.nn.functional.cross_entropy.html
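A standalone illustration of the non-deprecated equivalent (just a sketch with stand-in tensors; the actual change is a one-word edit in `XLMPredLayer`):
```python
import torch
from torch import nn

scores = torch.randn(4, 10)       # stand-in for scores.view(-1, self.n_words)
y = torch.randint(0, 10, (4,))    # stand-in for y.view(-1)
loss = nn.functional.cross_entropy(scores, y, reduction="mean")
print(loss)
```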
| 01-31-2022 16:53:46 | 01-31-2022 16:53:46 | _The documentation is not available anymore as the PR was closed or merged._ |
transformers | 15,435 | closed | Fix spurious warning in TF TokenClassification models | This small fix switches `warnings.warn` for `tf.print` in the loss function for TF `TokenClassification` models. When TF compiles a model which has a conditional that depends on values inside a Tensor, it traces both branches. The result of this is that when one of the branches contains a `warnings.warn` statement, that warning is fired during tracing, before the model has seen any data.
Using `tf.print` instead of `warnings.warn` has a few downsides - in particular, it will possibly fire more than once. Still, this is better than firing the warning every time the model is used even if the data is correct, which is what happens now. | 01-31-2022 16:19:52 | 01-31-2022 16:19:52 | _The documentation is not available anymore as the PR was closed or merged._<|||||>I think if we use `FutureWarning` they are only printed once, no? Could you check in this instance @Rocketknight1 ?
<|||||>@sgugger Either `FutureWarning` or `warnings.warn` are correctly only printed once - the problem is that they are **always** printed once, even if there are no -1s in the labels! This is because TF compiles the branch of the graph where the labels tensor contains `-1` and fires the warning during tracing. Only a TF graph op (like `tf.print()`) is correctly fired if and only if the tensor contains `-1`.<|||||>Ah thanks for explaining! I had completely misunderstood :-)<|||||>Don't worry, even the TF engineers don't understand TF |
transformers | 15,434 | closed | [Swin] Add missing header | # What does this PR do?
Add missing header to Swin docs. | 01-31-2022 16:12:27 | 01-31-2022 16:12:27 | _The documentation is not available anymore as the PR was closed or merged._ |
transformers | 15,433 | closed | Add doc for add-new-model-like command | # What does this PR do?
This PR adds documentation for the new command `add-new-model-like`. | 01-31-2022 15:48:07 | 01-31-2022 15:48:07 | _The documentation is not available anymore as the PR was closed or merged._ |
transformers | 15,432 | closed | add t5 ner finetuning | # What does this PR do?
This PR adds a Colab notebook for fine-tuning Text-2-Text T5 for Named Entity Recognition
@patil-suraj @patrickvonplaten | 01-31-2022 15:19:57 | 01-31-2022 15:19:57 | _The documentation is not available anymore as the PR was closed or merged._<|||||>@patil-suraj , thanks
I have reverted the commit |
transformers | 15,431 | closed | Can't push a model to hub | ## Environment info
- `transformers` version: 4.16.1
- Platform: Linux-5.4.0-96-generic-x86_64-with-glibc2.29
- Python version: 3.8.10
- PyTorch version (GPU?): 1.10.0+cu113 (True)
- Tensorflow version (GPU?): not installed (NA)
- Flax version (CPU?/GPU?/TPU?): not installed (NA)
- Jax version: not installed
- JaxLib version: not installed
- Using GPU in script?: <fill in>
- Using distributed or parallel set-up in script?: <fill in>
### Who can help
@patrickvonplaten, @anton-l
## Information
Model I am using: facebook/wav2vec2-xls-r-300m
The problem arises when using:
* [ ] my own modified scripts: (give details below)
The tasks I am working on is:
* [ ] an official GLUE/SQUaD task: speech-recognition-community-v2
## To reproduce
Steps to reproduce the behavior:
1. I use a modified Jupyter notebook from this site: https://huggingface.co/blog/fine-tune-wav2vec2-english
2. First, training was run with TrainingArguments(push_to_hub=True). It worked; you can see the results here: https://huggingface.co/AigizK/wav2vec2-large-xls-r-300m-bashkir-cv7_opt/commits/main up until the commit "Training in progress, step 94900"
3. Then my notebook has crashed
4. I ran all cells until trainer.train()
5. I sent the current state to the repository without thinking (commits 293563110d31ba9610e4ad7bffd8b377beaf4cf5 and 3f6e999d7d2fdb4640fd1523bfca744681fcc2fc)
6. Then I ran training from the last checkpoint
7. When I try to send the result to the hub, I get an error
8. [image not preserved]
9. I can only resolve this issue by doing the following:
- repo = Repository(local_dir="xls-r-300m-ba_v2", clone_from="AigizK/wav2vec2-large-xls-r-300m-bashkir-cv7_opt")
- copy last model.bin to this folder
- repo.push_to_hub(commit_message="50 epochs")
When I asked @patrickvonplaten on Discord how I can resolve this, he wrote:
`Could you open an issue on Transformers for this? This looks like a library mismatch between transformers and huggingface_hub`
https://discord.com/channels/879548962464493619/930507803339137024/937665851878952990
| 01-31-2022 14:54:50 | 01-31-2022 14:54:50 | Gently pinging @sgugger here. Any idea what could be the problem here? I cannot really reproduce this error<|||||>Are you sure that in your second run you still had `push_to_hub=True` in your `TrainingArguments`? Because the `repo` arg not defined can only come from there I think.<|||||>@sgugger From Step3 i used only `push_to_hub=False`<|||||>So that's why you get an error. We will make it clearer, but you can't use `Trainer.push_to_hub` if `push_to_hub=False`.<|||||>Thank you. I thought that `push_to_hub` only affects during training. |
transformers | 15,430 | closed | Update README.md | fix typo
# What does this PR do?
Fixes # (issue)
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/master/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/master/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
@sgugger @LysandreJik | 01-31-2022 14:46:29 | 01-31-2022 14:46:29 | _The documentation is not available anymore as the PR was closed or merged._<|||||>Thanks for fixing! |
transformers | 15,429 | closed | [RobertaTokenizer] remove inheritance on GPT2Tokenizer | # What does this PR do?
Continue making tokenizers independent. This PR refactors the `RobertaTokenizer` to remove dependency on `GPT2Tokenizer` | 01-31-2022 14:42:17 | 01-31-2022 14:42:17 | _The documentation is not available anymore as the PR was closed or merged._<|||||>The failures are unrelated, merging. |
transformers | 15,428 | closed | [Robust Speech Challenge] Add missing LR parameter | Some participants of the Robust Speech event complained about learning rate issues when training the model with the run_speech_recognition_ctc_bnb.py script. This problem is due to a missing LR parameter in the 8-bit optimizer initialization. This PR fixes that.
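For reference, the gist of the fix looks something like the following (an illustrative sketch assuming `bitsandbytes`' 8-bit Adam, not the script's exact code):
```python
import torch
import bitsandbytes as bnb  # needs a CUDA-enabled installation

model = torch.nn.Linear(10, 10)
learning_rate = 3e-4  # in the real script this comes from the parsed training arguments

# Pass the learning rate explicitly instead of silently falling back to the optimizer default.
optimizer = bnb.optim.Adam8bit(model.parameters(), lr=learning_rate)
```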
@patrickvonplaten | 01-31-2022 14:10:12 | 01-31-2022 14:10:12 | _The documentation is not available anymore as the PR was closed or merged._ |
transformers | 15,427 | closed | [XGLM] fix gradient checkpointing | # What does this PR do?
Adds the missing import statement for `torch.utils.checkpoint`
Fixes #15422 | 01-31-2022 14:00:15 | 01-31-2022 14:00:15 | _The documentation is not available anymore as the PR was closed or merged._ |
transformers | 15,426 | closed | Higher memory allocation for GPT-2 forward pass when not using past_key_values. | I have the following problem:
I have a context text as input and then GPT-2 generates a text based on the input.
Currently I do this token by token in the following way:
```
model_outputs = model(input_ids=last_token, past_key_values=past)
logits = model_outputs['logits']
past = model_outputs['past_key_values']
# do stuff with the logits to determine the next token...
```
This does work and is pretty fast, but the problem is the high GPU memory consumption from always saving the past_key_values.
AMP is enabled too!
To avoid this I had a trade-off in mind: only save and use the past_key_values from the context text and pass in the generated text so far as input_ids, instead of just the last token!
```
model_outputs = model(input_ids=generated_tokens, past_key_values=past, use_cache=False)
logits = model_outputs['logits']
# do stuff with the logits to determine the next token...
```
But now the memory allocation per generated token is even higher, even though I didn't change anything else in the code.
Can anyone help me with this, or explain what exactly happens here?
## Environment info
- `transformers` version: 4.15.0
- Platform: Windows-10-10.0.19041-SP0 (also tested on Ubuntu)
- Python version: Python 3.7.3
- PyTorch version (GPU?): 1.10.1+cu102 (True)
- Tensorflow version (GPU?): not installed
- Using GPU in script?: Yes
- Using distributed or parallel set-up in script?: No
### Who can help
@patrickvonplaten, @narsil
| 01-31-2022 13:27:12 | 01-31-2022 13:27:12 | Hi @LFruth ,
I think `past_key_values` should never incur more memory usage than going without it (since it's really a cache of matrices that have to be calculated anyway during inference).
What could be at play here is that you are using custom code that keeps references to old past_key_values, which means an ever-growing amount of memory. Can you provide your full script?
Do you mind trying to use `model.generate(....)` and seeing if it also has ever-growing memory usage (more than expected)? And if you want to play with the logits, you can add a custom `LogitsProcessor` in order to use some rule which isn't already integrated into `transformers`.
I can help providing an example on how to do so if that's the case.<|||||>Hi @Narsil,
Thank you for the quick answer.
What you say about past_key_values makes sense to me!
I don't keep any references to past_key_values anywhere afaik, but I copy my code here later!
I already have the same thing implemented with model.generate! Which is quite interesting, as it is a little slower but has almost no extra memory allocation. I tried to play with past_key_values and use_cache here too, but didn't notice a difference in speed and or memory allocation at all.<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored.<|||||>I encountered this same issue, and in case it helps anyone, my mistake is that I forgot to add the `torch.no_grad()`. |
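To illustrate that last point, here is a minimal sketch of a token-by-token loop wrapped in `torch.no_grad()` so no autograd graph is kept across steps (the model choice and greedy decoding are just for the example):
```python
import torch
from transformers import GPT2LMHeadModel, GPT2Tokenizer

tokenizer = GPT2Tokenizer.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2").eval()

input_ids = tokenizer("The baby is crying because", return_tensors="pt").input_ids
last_token, past = input_ids, None

with torch.no_grad():  # without this, every forward pass keeps its autograd graph alive
    for _ in range(20):
        outputs = model(input_ids=last_token, past_key_values=past, use_cache=True)
        past = outputs.past_key_values
        last_token = outputs.logits[:, -1:].argmax(dim=-1)  # greedy choice of the next token
        input_ids = torch.cat([input_ids, last_token], dim=-1)

print(tokenizer.decode(input_ids[0]))
```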
transformers | 15,425 | closed | Partial Torch Memory Efficient Attention | # What does this PR do?
This begins the implementation of a central `attention()` function in modeling_utils.py that calls out to https://github.com/AminRezaei0x443/memory-efficient-attention if configuration parameters are set, to allocate configurably down to O(n) memory rather than O(n^2) memory at the expense of parallel execution.
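For intuition, here is a much simpler query-chunking sketch of the general idea; it is not the package's algorithm (which, as I understand it, also chunks the keys/values with a streaming softmax), just an illustration of trading parallelism for memory:
```python
import torch

def chunked_attention(q, k, v, chunk_size=64):
    # Process queries in chunks so the full (n x n) score matrix is never
    # materialized at once; peak memory for the scores is O(chunk_size * n).
    scale = q.shape[-1] ** 0.5
    outputs = []
    for start in range(0, q.shape[-2], chunk_size):
        q_chunk = q[..., start:start + chunk_size, :]
        scores = q_chunk @ k.transpose(-1, -2) / scale
        outputs.append(torch.softmax(scores, dim=-1) @ v)
    return torch.cat(outputs, dim=-2)

q = k = v = torch.randn(2, 8, 512, 64)  # (batch, heads, seq_len, head_dim)
print(chunked_attention(q, k, v).shape)  # torch.Size([2, 8, 512, 64])
```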
I'm afraid the new memory-efficient-attention still needs to be added as a dependency and some development cruft removed from the source.
I believe it is important to reuse existing projects, so that people's work can be more effective and valued, but I also believe memory-efficient-attention code is MIT licensed if copying it is preferred.
The GPTJ and Perceiver models are altered to call out to the new attention function.
Working on this has been very hard for me, so I am contributing what I have now. If others have better work on this, feel free to accept them before mine.
- [ ] I have commented out the rest of the PR form here, to return to as I find capacity.
<!--
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/master/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/master/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
-->
| 01-31-2022 13:25:37 | 01-31-2022 13:25:37 | The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_15425). All of your documentation changes will be reflected on that endpoint.<|||||>Hey @xloem, thanks a lot for your hard work on this! It is cool to support the attention mechanism as visible in https://github.com/AminRezaei0x443/memory-efficient-attention. However, the `transformers` library does not really work with central components to be shared among many models, so we do not design layers in the `modeling_utils.py` file.
This comes from the following two pieces of the "Why should I/shouldn't I use Transformers" from the README:
> 4. Easily customize a model or an example to your needs:
> - We provide examples for each architecture to reproduce the results published by its original authors.
> - **Model internals are exposed as consistently as possible.**
> - **Model files can be used independently of the library for quick experiments.**
and
> This library is not a modular toolbox of building blocks for neural nets. The code in the model files is not refactored with additional abstractions on purpose so that researchers can quickly iterate on each of the models without diving into additional abstractions/files.
You'll see that we have other implementations of efficient attention mechanisms spread across the codebase, and each of them are linked to a single model. Recent examples of this are YOSO and Nyströmformer, which were released in the v4.16 released last week.
cc @sgugger @patrickvonplaten for knowledge<|||||>@xloem, would you be interested in adding this new attention mechanism simply as a new model - linked to its official paper: https://arxiv.org/abs/2112.05682v2 (maybe called something like `O1Transformer` ?) <|||||>Thanks so much for your replies and advice.
I hadn't noticed the readme line. Do you know of any similar libraries to transformers, that do indeed collect building blocks? The transformers library is far smaller than the torch and flax libraries that models already depend on, but I imagine you know the experience of research work better than I do to make that call.
I've been finding a little more energy to work on this. The owner of the dependency repo found there's still an outstanding issue that masks and biases are built O(n^2), so more work is needed.<|||||>Apologies for submitting this PR in such a poor state.
Unlike YOSO and Nystromformer, this implementation is exact. The output is theoretically idempotent with that generated by use of the softmax function.
Although the research could be implemented as a standalone model[edit:typo], I'm really hoping to actually help users on low-memory systems use the code to fine tune current pretrained models. @patrickvonplaten how does that hope settle with you?
After rereading the concerns, my plan is to move the code into the model files to provide for the statements in the readme. @LysandreJik, does this sound reasonable?
I also try to stick to just one sourcefile when developing and learning. My usual approach is to open a python interpreter and do `print(inspect.getsource(modulefunc))` to quickly see abstraction implementations, which is indeed quite unideal. Abstraction can _really_ empower a domain: research and development accelerates when abstraction is made a norm.<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored. |
transformers | 15,424 | closed | error when running official run_speech_recognition_ctc.py (ValueError: 'a' cannot be empty unless no samples are taken) | ## Environment info
`transformers` version: 4.17.0.dev0
- Platform:Ubuntu 20.04.3 LTS
- Python version: 3.8.10
- PyTorch version (GPU?):1.10.2+cu113
- Tensorflow version (GPU?): not installed
- Using GPU in script?: Yes
- Using distributed or parallel set-up in script?: No
### Who can help
@patrickvonplaten @anton-l
Models:
- Wav2Vec2
Library:
- Speech
## Information
Model I am using : facebook/wav2vec2-xls-r-300m
The problem arises when using:
* [ ] the official example script: https://github.com/huggingface/transformers/blob/master/examples/pytorch/speech-recognition/run_speech_recognition_ctc.py
The tasks I am working on is:
* [ ] dataset: mozilla-foundation/common_voice_8_0
## To reproduce
Steps to reproduce the behavior:
I have been running the official script `run_speech_recognition_ctc.py` as provided in
[https://github.com/huggingface/transformers/tree/master/examples/pytorch/speech-recognition](https://github.com/huggingface/transformers/tree/master/examples/pytorch/speech-recognition)
1. `python run_speech_recognition_ctc.py \
--dataset_name="common_voice" \
--model_name_or_path="facebook/wav2vec2-large-xlsr-53" \
--dataset_config_name="ta" \
--output_dir="./wav2vec2-common_voice-tamil" \
--overwrite_output_dir \
--num_train_epochs="15" \
--per_device_train_batch_size="16" \
--gradient_accumulation_steps="2" \
--learning_rate="3e-4" \
--warmup_steps="500" \
--evaluation_strategy="steps" \
--text_column_name="sentence" \
--length_column_name="input_length" \
--save_steps="400" \
--eval_steps="100" \
--layerdrop="0.0" \
--save_total_limit="3" \
--freeze_feature_encoder \
--gradient_checkpointing \
--chars_to_ignore , ? . ! - \; \: \" “ % ‘ ” � \
--fp16 \
--group_by_length \
--push_to_hub \
--do_train --do_eval `
## Error Message
`train_result = trainer.train(resume_from_checkpoint=checkpoint)
File "/home/stephen/.local/lib/python3.8/site-packages/transformers/trainer.py", line 1373, in train
tr_loss_step = self.training_step(model, inputs)
File "/home/stephen/.local/lib/python3.8/site-packages/transformers/trainer.py", line 1948, in training_step
loss = self.compute_loss(model, inputs)
File "/home/stephen/.local/lib/python3.8/site-packages/transformers/trainer.py", line 1980, in compute_loss
outputs = model(**inputs)
File "/home/stephen/.local/lib/python3.8/site-packages/torch/nn/modules/module.py", line 1102, in _call_impl
return forward_call(*input, **kwargs)
File "/home/stephen/.local/lib/python3.8/site-packages/transformers/models/wav2vec2/modeling_wav2vec2.py", line 1742, in forward
outputs = self.wav2vec2(
File "/home/stephen/.local/lib/python3.8/site-packages/torch/nn/modules/module.py", line 1102, in _call_impl
return forward_call(*input, **kwargs)
File "/home/stephen/.local/lib/python3.8/site-packages/transformers/models/wav2vec2/modeling_wav2vec2.py", line 1342, in forward
hidden_states = self._mask_hidden_states(
File "/home/stephen/.local/lib/python3.8/site-packages/transformers/models/wav2vec2/modeling_wav2vec2.py", line 1284, in _mask_hidden_states
mask_time_indices = _compute_mask_indices(
File "/home/stephen/.local/lib/python3.8/site-packages/transformers/models/wav2vec2/modeling_wav2vec2.py", line 263, in _compute_mask_indices
spec_aug_mask_idx = np.random.choice(
File "mtrand.pyx", line 909, in numpy.random.mtrand.RandomState.choice
ValueError: 'a' cannot be empty unless no samples are taken`
## Expected behavior
The same script runs fine without any errors when **mask_time_prob** is set to 0 instead of the default value of 0.05 mentioned in the script. | 01-31-2022 13:09:21 | 01-31-2022 13:09:21 | Hey @StephennFernandes,
Thanks a mille for opening the issue - this PR seems to be related: https://github.com/huggingface/transformers/pull/15423<|||||>Hey @StephennFernandes,
Can you try whether the error goes away with the newest `transformers` version?<|||||>hey @patrickvonplaten i installed tranfromers from source `pip install git+https://github.com/huggingface/transformers` and the issue still persists<|||||>Hmm, that's a bit odd. Rerunning your training command now<|||||>Hey @StephennFernandes,
I ran the command on master and trained for 3 epochs without error. You can see the run here: https://huggingface.co/patrickvonplaten/wav2vec2-common_voice-tamil<|||||>Could you verify that you are using current master of `transformers`?<|||||>hey patrick i uninstalled and reinstalled transformers and it looks like everything is working fine.
One strange thing I noticed is that for a couple of epochs the WER went as low as 0.63, but then it went back up and stayed at 1.0 for the rest of the training, so the final model ended up with a WER of 1.0.
This isn't the first time I am facing such issues on an updated version of transformers.
The only version of transformers that works fine and converges the WER for me was 4.8.0. Is there perhaps some issue with my CUDA drivers? Which CUDA driver and PyTorch versions does Hugging Face recommend?
I have an Nvidia A6000 GPU <|||||>Uff I really don't know sadly :-/ Could you maybe open a new issue since this one does not seem to be related anymore or a forum discussion: https://discuss.huggingface.co/? :-)<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored. |
transformers | 15,423 | closed | Update modeling_wav2vec2.py | @patrickvonplaten
With very tiny sound files (less than 0.1 seconds) the num_masked_span can be too large. The issue is described in issue 15366 and was discussed with @patrickvonplaten. This can be a problem with custom datasets that contain sentences with short words like "OK", "Yes", etc.
| 01-31-2022 12:51:12 | 01-31-2022 12:51:12 | _The documentation is not available anymore as the PR was closed or merged._ |
transformers | 15,422 | closed | XGLM Bug: break: module 'torch.utils' has no attribute 'checkpoint' | ## Environment info
- `transformers` version: 4.17.0.dev0
- Platform: Linux-4.9.253-tegra-aarch64-with-glibc2.29
- Python version: 3.8.10
- PyTorch version (GPU?): 1.10.2 (True)
- Tensorflow version (GPU?): not installed (NA)
- Flax version (CPU?/GPU?/TPU?): not installed (NA)
- Jax version: not installed
- JaxLib version: not installed
- Using GPU in script?: Yes
- Using distributed or parallel set-up in script?: No
### Who can help
@patil-suraj
## Information
Model I am using (Bert, XLNet ...): XGLMForCausalLM
The problem arises when using:
* [X] the official example scripts: (give details below)
* [ ] my own modified scripts: (give details below)
The tasks I am working on is:
* [ ] an official GLUE/SQUaD task: (give the name)
* [X] my own task or dataset: (give details below)
## To reproduce
Steps to reproduce the behavior:
1. I am trying to start training on a dataset using a converted model.
2. As soon as I start training using the script `run_clm.py` (found in the examples folder) I get the following error back:
```
break: module 'torch.utils' has no attribute 'checkpoint'
```
The cause might be related to this: https://discuss.pytorch.org/t/attributeerror-module-torch-utils-has-no-attribute-checkpoint/101543/6
When the following line is added to modeling_xglm.py, the error does not appear and training starts:
```
from torch.utils.checkpoint import checkpoint
```
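For context, a small standalone illustration of why the explicit import helps (behaviour can vary between torch versions and import orders):
```python
import torch

# Submodules are not guaranteed to be reachable as attributes after a plain
# `import torch`, so `torch.utils.checkpoint.checkpoint(...)` can raise
# AttributeError: module 'torch.utils' has no attribute 'checkpoint'.

import torch.utils.checkpoint  # importing the submodule explicitly makes it available
print(torch.utils.checkpoint.checkpoint)
```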
## Expected behavior
Training would start without any issues. | 01-31-2022 12:30:24 | 01-31-2022 12:30:24 | Hi,
Could you please share a code snippet or colab so I could reproduce this issue? Thanks <|||||>I can't share my dataset, but I used this one as the basis for the trainer:
https://github.com/huggingface/transformers/blob/master/examples/pytorch/language-modeling/run_clm.py
EDIT: Just train any XGLM-compatible model with a default dataset (wikitext for example)<|||||>Thank you for reporting this. Fixed in #15427 |
transformers | 15,421 | closed | [Trainer] suppress warning for length-related columns | # What does this PR do?
During the robust speech event multiple participants reported that they were confused by the (harmless) warning:
```
"The following columns in the training set don't have a corresponding argument in `Wav2Vec2ForCTC.forward` and have been ignored: input_length."
```
The reason this warning shows up is that we pass a dataset column `"input_length"` to the Trainer so that `group_by_length` is significantly sped up, since the input lengths are then passed to the trainer automatically.
Fixes #https://github.com/huggingface/transformers/issues/15366#issuecomment-1024166475
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/master/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/master/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
| 01-31-2022 12:03:03 | 01-31-2022 12:03:03 | _The documentation is not available anymore as the PR was closed or merged._<|||||>I disagree with this. The column is in the dataset (to speed up the preprocessing) but it is then ignored by the Trainer, which warrants something. The logging level is not set at warning, but at info, so the user knows there is nothing wrong with it.
I'm fine with rephrasing the message if you think it helps, but we shouldn't silently discard a column of the training dataset.
In most of our examples, we don't clean up the datasets to include only the columns the model wants, and we never had any user complain about this info.<|||||>Fair point! Made the message a tiny bit more explicit<|||||>Failing tests are due to REALM. |
transformers | 15,420 | closed | Bug in T5 Tokenizer - Adding space after special tokens | ## Environment info
- `transformers` version: 4.16.1
- Platform: Linux-4.15.0-65-generic-x86_64-with-debian-buster-sid
- Python version: 3.7.11
- PyTorch version (GPU?): 1.10.2+cu102 (True)
- Tensorflow version (GPU?): not installed (NA)
- Flax version (CPU?/GPU?/TPU?): not installed (NA)
- Jax version: not installed
- JaxLib version: not installed
### Who can help
@SaulLu @patrickvonplaten @PhilipMay
## Information
When decoding with the T5 tokenizer it inserts redundant spaces after special tokens; this leads to incorrect decoding. Here is an example:
```bash
>> tokenizer.convert_ids_to_tokens(self.encode("``America''"))
['▁', '<unk>', 'America', "'", "'", '</s>']
>>self.convert_ids_to_tokens(self.encode(self.decode(self.encode("``America''"))))
['<unk>', '▁America', "'", "'", '</s>']
```
This happens in line 291 of file `tokenization_t5.py`:
```python
out_string += self.sp_model.decode_pieces(current_sub_tokens) + token + " "
```
What is the purpose of the last space (" ")?
## Expected behavior
IMO the decoding in this example should be exactly the same as the original text.
Thank you! :)
| 01-31-2022 12:00:12 | 01-31-2022 12:00:12 | That's a good question! @SaulLu @LysandreJik @n1t0 - do you have an idea by any chance?<|||||>This seems to be about special tokens:
```python
# make sure that special tokens are not decoded using sentencepiece model
if token in self.all_special_tokens:
    out_string += self.sp_model.decode_pieces(current_sub_tokens) + token + " "
    current_sub_tokens = []
```
https://github.com/huggingface/transformers/blob/2ca6268394c1464eaa813ccb398c3812904e2283/src/transformers/models/t5/tokenization_t5.py#L289-L292<|||||>@PhilipMay will removing the additional `" "` lead to any negative effect?
If not, can we please remove the additional `" "`?
Sometimes we need to decode encoded inputs, and we will prefer the outcome to be as similar as possible to the original text.<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored. |
transformers | 15,419 | closed | Add option to resize like torchvision's Resize | # What does this PR do?
A lot of research papers (like ConvNeXT, PoolFormer, etc.) use [`torchvision.transforms.Resize`](https://pytorch.org/vision/stable/transforms.html#torchvision.transforms.Resize) when preparing images for the model. This class supports providing an integer size, which will only resize the smaller edge of the image.
The `resize` method defined in `image_utils.py` currently resizes to (size, size) in case size is an integer. This PR adds the possibility to resize images similar to torchvision's resize by adding an argument `torch_resize` (which is set to `False` by default). One can then also provide an optional `max_size` argument.
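For reference, resizing only the smaller edge looks roughly like this (an illustrative sketch, not the PR's implementation):
```python
from PIL import Image

def resize_smaller_edge(image: Image.Image, size: int) -> Image.Image:
    # Make the smaller edge equal to `size` while keeping the aspect ratio,
    # mirroring what torchvision.transforms.Resize does for an integer size.
    w, h = image.size
    if w <= h:
        new_w, new_h = size, int(size * h / w)
    else:
        new_w, new_h = int(size * w / h), size
    return image.resize((new_w, new_h), resample=Image.BILINEAR)

img = Image.new("RGB", (640, 480))
print(resize_smaller_edge(img, 256).size)  # (341, 256)
```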
This is replicated using Pillow. | 01-31-2022 10:28:57 | 01-31-2022 10:28:57 | _The documentation is not available anymore as the PR was closed or merged._<|||||>Applied all suggestions from your review, I've renamed `torch_resize` (which defaults to `False`) to `default_to_square` (which defaults to `True`).
I guess we cannot default it to a rectangle, to be backwards compatible? Or can we, if we change the existing ones that expect a square to `default_to_square=True`)? Based on [this](https://github.com/huggingface/transformers/search?q=self.resize), it seems like we currently have 10 models using the resize method that assume providing an integer will resize to a square.<|||||>> I guess we cannot default it to a rectangle, to be backwards compatible? Or can we, if we change the existing ones that expect a square to default_to_square=True)? Based on this, it seems like we currently have 10 models using the resize method that assume providing an integer will resize to a square.
We can switch the other way, but only if we change the default for all the subclasses that expected the resize to a square, so not sure it's worth the hassle. |
transformers | 15,418 | closed | Notify user if group_by_length won't be used | @sgugger @stas00 (as suggested in [15196](https://github.com/huggingface/transformers/issues/15196#issuecomment-1024143208))
It is just a minor issue but it could prevent some head-scratching:
Setting `group_by_length` to `True` during training has, as far as I can see, no effect if the provided dataset is a `torch.utils.data.IterableDataset`, as can be seen in this line:
https://github.com/huggingface/transformers/blob/c4d1fd77fa52d72b66207c6002d3ec13cc36dca8/src/transformers/trainer.py#L646
Entering this branch will run the training without bucketing / grouping.
Imo it would be best to raise an exception or, at the very least, print a warning message for the user. Grouping only happens in [`_get_train_sampler()`](https://github.com/huggingface/transformers/blob/c4d1fd77fa52d72b66207c6002d3ec13cc36dca8/src/transformers/trainer.py#L566:L628) which [is never reached](https://github.com/huggingface/transformers/blob/c4d1fd77fa52d72b66207c6002d3ec13cc36dca8/src/transformers/trainer.py#L646:L664) if said condition is true.
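Something along these lines would do it (a sketch of the suggested guard, not a proposal for the exact implementation):
```python
from torch.utils.data import IterableDataset

def check_group_by_length(train_dataset, group_by_length: bool):
    # Fail fast instead of silently ignoring the option.
    if group_by_length and isinstance(train_dataset, IterableDataset):
        raise ValueError(
            "`group_by_length` has no effect on an `IterableDataset`; "
            "it needs a sized `Dataset` with known lengths."
        )

class Stream(IterableDataset):
    def __iter__(self):
        yield from range(3)

check_group_by_length(Stream(), group_by_length=True)  # raises ValueError
```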
| 01-31-2022 08:33:37 | 01-31-2022 08:33:37 | Yes, that's a great idea! Will add something for that today. |
transformers | 15,417 | closed | Support for larger XGLM models | According to the official XGLM repo, https://github.com/pytorch/fairseq/tree/main/examples/xglm, there are five versions of XGLM out there. The architecture of the larger XGLM versions is the same as the base XGLM; why doesn't HF provide support for the larger XGLM versions? It seems that currently HF provides support for the smallest one (546M parameters) only. | 01-31-2022 08:27:34 | 01-31-2022 08:27:34 | Hi,
There are already 4 variants available: https://huggingface.co/models?other=xglm.
Only the 4.5B variant seems to be missing as of now, but pretty sure @patil-suraj is going to add that one as well.<|||||>As Niels pointed out, all the xglm checkpoints are available on the hub except the 4.5B. I'm having some issues with that checkpoint so will upload that once it's resolved.<|||||>Just pushed the 4.5B checkpoint https://huggingface.co/facebook/xglm-4.5B
All XGLM checkpoints are now available on the hub. Closing this issue.<|||||>Hi @patil-suraj and @NielsRogge ! Thanks for your quick reply! I went through the code of XGLM model. It seems that for the model and tokenizer part, we can only use the smallest one. Because I noticed a snippet like this:
XGLM_PRETRAINED_MODEL_ARCHIVE_LIST = [
"facebook/xglm-564M",
# See all XGLM models at https://huggingface.co/models?filter=xglm
]
Would you like to provide any advice to make use of the larger models like 1.7B, 2.5B and 4.5B? For example, just add the model names in the list? Thanks in advance!<|||||>Hi @WadeYin9712
That list is only used for internal purposes. You can use all the xglm checkpoints available on the hub https://huggingface.co/models?other=xglm.
To use it, just pass the repo id to `from_pretrained`. For ex
```python3
model = XGLMForCausalLM.from_pretrained("facebook/xglm-1.7B")
``` |
transformers | 15,416 | closed | Constrained Beam Search [without disjunctive decoding] | # ~~Disjunctive~~ Positive Constraint Decoding
@patrickvonplaten @Narsil
Fixes #14081,
But without the "disjunctive" component. I think it's a good idea to first integrate this `Constraint` framework before adding more advanced techniques like Disjunctive Decoding, which can arguably be added with ease afterwards.
Currently this PR is able to force the generation of tokens and/or an ordered sequence of tokens in the beam search with `constrained_beam_search` in `GenerationMixin.generate`. It finds the best possible, highest probable list of beam outputs in fulfilling a list of `Constraint` objects, yielding results more sensible than just blindly forcing the tokens in the output sequence.
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [x] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case. #14081
- [x] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/master/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/master/docs#writing-source-documentation).
- [x] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
Tagging @patrickvonplaten because my main back-and-forth was with you on this PR.
## Example Use
Here is an example of how one could use this functionality:
```python
from transformers import GPT2Tokenizer, GPT2LMHeadModel
from transformers.generation_beam_constraints import PhrasalConstraint
device = "cuda"
model = GPT2LMHeadModel.from_pretrained("gpt2").to(device)
tokenizer = GPT2Tokenizer.from_pretrained("gpt2")
force_text = " big monsters"
force_text_2 = " crazy"
force_tokens = tokenizer.encode(force_text, return_tensors="pt").to(device)[0]
force_tokens_2 = tokenizer.encode(force_text_2, return_tensors="pt").to(device)[0]
constraints = [
PhrasalConstraint(force_tokens),
PhrasalConstraint(force_tokens_2)
]
input_text = ["The baby is crying because"]
model_inputs = tokenizer(input_text, return_tensors="pt")
for key, value in model_inputs.items():
    model_inputs[key] = value.to(device)
k = model.generate(
**model_inputs,
constraints=constraints,
num_beams=7,
num_return_sequences=7,
no_repeat_ngram_size=2
)
for out in k:
    print(tokenizer.decode(out))
```
For some example outputs:
```
The baby is crying because she's been told crazy big monsters are going to come and kill her.
The baby is crying because she's been told crazy big monsters are coming for her.
The baby is crying because she's been told crazy big monsters are going to come after her.
```
## The Constraint Framework
Users can define their own constraints by inheriting from the `Constraint` abstract class, and this framework is ensured to work as desired because the `Constraint` class is quite strictly defined. If an implementation passes the `self.test()` function of this interface then it necessarily works as desired. An incorrect implementation will lead to an error. But probably most users would be fine just using the `TokenConstraint` and `PhrasalConstraint` classes.
## Testing
My approach to testing this new functionality was to make sure it passes every test related to the existing `beam_search` function, with the **additional testing that requires that the generated output indeed fulfills the constraints.**
One such example:
```python
# tests/test_generation_utils.py
def test_constrained_beam_search_generate(self):
for model_class in self.all_generative_model_classes:
....
# check `generate()` and `constrained_beam_search()` are equal for `num_return_sequences`
# Sample constraints
force_tokens = torch.randint(min_id, max_id, (1, 2)).type(torch.LongTensor)[0]
constraints = [
PhrasalConstraint(force_tokens),
]
num_return_sequences = 2
max_length = 10
beam_kwargs, beam_scorer = self._get_constrained_beam_scorer_and_kwargs(
input_ids.shape[0], max_length, constraints, num_return_sequences=num_return_sequences
)
output_generate, output_beam_search = self._constrained_beam_search_generate(
model=model,
input_ids=input_ids,
attention_mask=attention_mask,
max_length=max_length,
constrained_beam_scorer=beam_scorer,
constraints=constraints,
beam_kwargs=beam_kwargs,
logits_processor=logits_processor,
logits_process_kwargs=logits_process_kwargs,
)
self.assertListEqual(output_generate.tolist(), output_beam_search.tolist())
for generation_output in output_generate:
self._check_sequence_inside_sequence(force_tokens, generation_output)
...
```
| 01-31-2022 07:36:12 | 01-31-2022 07:36:12 | _The documentation is not available anymore as the PR was closed or merged._<|||||>Very cool taking a look now<|||||>Hey @cwkeam,
This already looks like a super clean and impactful addition to the `transformers` library! Great job. I really like how you implement this new generation method in a extendable way without affecting existing generate methods. I'm sure we can merge it soon.
At the moment, it seems like your example above doesn't work. When I try to run:
```python
from transformers import GPT2Tokenizer, GPT2LMHeadModel
from transformers.generation_beam_constraints import PhrasalConstraint
device = "cuda"
model = GPT2LMHeadModel.from_pretrained("gpt2").to(device)
tokenizer = GPT2Tokenizer.from_pretrained("gpt2")
force_text = " big monsters"
force_text_2 = " crazy"
force_tokens = tokenizer.encode(force_text, return_tensors="pt").to(device)[0]
force_tokens_2 = tokenizer.encode(force_text_2, return_tensors="pt").to(device)[0]
constraints = [
PhrasalConstraint(force_tokens),
PhrasalConstraint(force_tokens_2)
]
input_text = ["The baby is crying because"]
model_inputs = tokenizer(input_text, return_tensors="pt")
for key, value in model_inputs.items():
    model_inputs[key] = value.to(device)
k = model.generate(
**model_inputs,
constraints=constraints,
num_beams=7,
num_return_sequences=7,
no_repeat_ngram_size=2
)
for out in k:
    print(tokenizer.decode(out))
```
on my computer, I'm getting the following error:
```
ValueError: `token_ids` has to be list of positive integers or a `torch.LongTensor` but is tensor([7165], device='cuda:0')For single token constraints, refer to `TokenConstraint`.
```
I think the example is really nice and exactly what we are looking for. So maybe you could also add it as a `@slow` test under `tests/test_generation_utils.py` once it works :-)<|||||>Let me know if you need further help!<|||||>Added 1. clear examples (**and tests for them**, 2. more specific input parameter tests, and 3. **bug fixes in other beam search code**.
## 1. Examples (text completion & translation)
The latter example really emphasizes the usefulness of this feature.
Fixed up the first example as requested previously, showing its ability that it can
- [x] handle multiple constraints
- [x] handle constraints that have more than one words (sequence of tokens)
```python
def test_constrained_beam_search(self):
model = GPT2LMHeadModel.from_pretrained("gpt2").to(torch_device)
tokenizer = GPT2Tokenizer.from_pretrained("gpt2")
force_tokens = tokenizer.encode(" scared", return_tensors="pt").to(torch_device)[0]
force_tokens_2 = tokenizer.encode(" big weapons", return_tensors="pt").to(torch_device)[0]
constraints = [
PhrasalConstraint(force_tokens),
PhrasalConstraint(force_tokens_2),
]
starting_text = ["The soldiers were not prepared and"]
input_ids = tokenizer(starting_text, return_tensors="pt").input_ids.to(torch_device)
outputs = model.generate(
input_ids,
constraints=constraints,
num_beams=10,
num_return_sequences=1,
no_repeat_ngram_size=1,
max_length=30,
remove_invalid_values=True,
)
generated_text = tokenizer.batch_decode(outputs, skip_special_tokens=True)
self.assertListEqual(
generated_text,
[
"The soldiers were not prepared and didn't know how big the big weapons would be, so they scared them off. They had no idea what to do",
],
)
```
and here's a translation example:
```python
def test_constrained_beam_search_example_integration(self):
tokenizer = AutoTokenizer.from_pretrained("t5-base")
model = AutoModelForSeq2SeqLM.from_pretrained("t5-base")
encoder_input_str = "translate English to German: How old are you?"
encoder_input_ids = tokenizer(encoder_input_str, return_tensors="pt").input_ids
# lets run beam search using 5 beams
num_beams = 5
# define decoder start token ids
input_ids = torch.ones((num_beams, 1), device=model.device, dtype=torch.long)
input_ids = input_ids * model.config.decoder_start_token_id
# add encoder_outputs to model keyword arguments
model_kwargs = {
"encoder_outputs": model.get_encoder()(
encoder_input_ids.repeat_interleave(num_beams, dim=0), return_dict=True
)
}
constraint_str = "sind"
constraint_token_ids = tokenizer.encode(constraint_str)[:-1] # remove eos token
constraints = [PhrasalConstraint(token_ids=constraint_token_ids)]
# instantiate beam scorer
beam_scorer = ConstrainedBeamSearchScorer(
batch_size=1, num_beams=num_beams, device=model.device, constraints=constraints
)
# instantiate logits processors
logits_processor = LogitsProcessorList(
[
MinLengthLogitsProcessor(5, eos_token_id=model.config.eos_token_id),
]
)
outputs = model.constrained_beam_search(
input_ids, beam_scorer, constraints=constraints, logits_processor=logits_processor, **model_kwargs
)
outputs = tokenizer.batch_decode(outputs, skip_special_tokens=True)
self.assertListEqual(outputs, ["Wie alter sind Sie?"])
```
This is a really useful example, because the following is the test for **ordinary beam search**
```python
def test_beam_search_example_integration(self):
# ... the exact same code as above minus constraints
outputs = model.beam_search(input_ids, beam_scorer, logits_processor=logits_processor, **model_kwargs)
outputs = tokenizer.batch_decode(outputs, skip_special_tokens=True)
self.assertListEqual(outputs, ["Wie alt bist du?"])
```
I've never studied German before, but upon looking it up, it seemed like there were two equally realistic ways to translate "How old are you" in German. While the ordinary beam search output may be fine for most uses, maybe the user specifically wants the other way of translating it. In this case, they can do as I have and force the generation to include the token that changes the direction of the translation as desired.
## 2. Input Parameters tests
Added @patrickvonplaten 's recommendation as well as dealing with `num_groups`:
```python
if num_beams <= 1:
        raise ValueError("`num_beams` needs to be greater than 1 for constrained generation.")
if do_sample:
raise ValueError("`do_sample` needs to be false for constrained generation.")
if num_beam_groups is not None and num_beam_groups > 1:
raise ValueError("`num_beam_groups` not supported yet for constrained generation.")
```
I'm not sure whether the last part is a good idea, or whether I should just make the implementation work for `num_beam_groups > 1`, but the fact is that it can't handle that case yet (it will throw an error).
## 3. Bug Fixes in **beam_search** with `max_length`
As I was writing the test code for the few examples above, I realized that the example in the [huggingface documentation](https://huggingface.co/docs/transformers/v4.16.2/en/main_classes/model#transformers.generation_utils.GenerationMixin.beam_search) for **ordinary beam search** was actually failing, because of the **recent change with `max_length`**.
The above testing code `test_beam_search_example_integration(self)` is taken directly from the Hugging Face documentation for `GenerationMixin.beam_search`, and it fails because `max_length` defaults to `None` unless otherwise specified, which leads to an error saying that you can't compare a `Tensor` to a `NoneType`.
[From `BeamSearchScorer.finalize()`](https://github.com/huggingface/transformers/blob/master/src/transformers/generation_beam_search.py#L333),
```python
sent_max_len = min(sent_lengths.max().item() + 1, max_length) # ERROR 1
decoded: torch.LongTensor = input_ids.new(batch_size * self.num_beam_hyps_to_keep, sent_max_len)
if sent_lengths.min().item() != sent_lengths.max().item():
assert pad_token_id is not None, "`pad_token_id` has to be defined"
decoded.fill_(pad_token_id)
for i, hypo in enumerate(best):
decoded[i, : sent_lengths[i]] = hypo
if sent_lengths[i] < max_length: # ERROR 2
decoded[i, sent_lengths[i]] = eos_token_id
```
So I changed that to:
```python
sent_lengths_max = sent_lengths.max().item() + 1
sent_max_len = min(sent_lengths_max, max_length) if max_length is not None else sent_lengths_max
decoded: torch.LongTensor = input_ids.new(batch_size * self.num_beam_hyps_to_keep, sent_max_len)
# shorter batches are padded if needed
if sent_lengths.min().item() != sent_lengths.max().item():
assert pad_token_id is not None, "`pad_token_id` has to be defined"
decoded.fill_(pad_token_id)
# fill with hypotheses and eos_token_id if the latter fits in
for i, hypo in enumerate(best):
decoded[i, : sent_lengths[i]] = hypo
if sent_lengths[i] < sent_max_len:
decoded[i, sent_lengths[i]] = eos_token_id
```
<|||||>Hey @patrickvonplaten
I also wanted to ask your opinion on making this feature integrate more easily into the `model.generate()` function. In the same way that `bad_word_ids` is converted into a `LogitsProcessor` internally, maybe there's a way to add a similarly simple API like `force_token_ids` that internally converts the input into `Constraint` objects.
I'm thinking of something like:
```python
starting_text = ["The soldiers were not prepared and"]
force_tokens = ["scared", "big weapons"]
input_ids = tokenizer(starting_text, return_tensors="pt").input_ids.to(torch_device)
force_token_ids = tokenizer(force_tokens).input_ids
outputs = model.generate(
input_ids,
force_token_ids=force_token_ids, # [[28349], [23981, 3721]] something that looks like this (made the numbers up)
...
)
```
But the problem arises when we start talking about **Disjunctive Positive Constraint Decoding** which was the main subject of the original #14081 issue. Although the disjunctive constraint generation implementation isn't even made yet, I think the best approach is something like this:
```python
from transformers.generation_beam_constraints import DisjunctiveConstraint
variations_1 = ["rain", "raining", "rained"]
variations_2 = ["smile", "smiling", "smiled"]
force_rain = tokenizer.encode(variations_1, return_tensors="pt")
force_smile = tokenizer.encode(variations_2, return_tensors="pt")
constraints = [
DisjunctiveConstraint(force_rain),
DisjunctiveConstraint(force_smile)
]
outputs = model.generate(
input_ids,
constraints=constraints,
...
)
```
Here, `DisjunctiveConstraint` is a constraint that is fulfilled by completing just one of the sequences in the list.
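To make that concrete, here is a small self-contained sketch (the helper name and the naive suffix-matching are my own assumptions, not the library API) of how "advance any one of several candidate sequences" could be computed:
```python
from typing import List, Set

def disjunctive_next_tokens(candidate_seqs: List[List[int]], generated: List[int]) -> Set[int]:
    # For each candidate sequence, find the longest suffix of `generated` that is a prefix
    # of the candidate, then allow the token that would extend that match.
    allowed = set()
    for seq in candidate_seqs:
        for k in range(min(len(seq) - 1, len(generated)), -1, -1):
            if k == 0 or generated[-k:] == seq[:k]:
                allowed.add(seq[k])
                break
    return allowed

# e.g. disjunctive_next_tokens([[5, 7], [9]], generated=[5]) -> {7, 9}
```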
However, this feature would be something that's quite awkward to make into the `force_token_ids` API.
I think one way of approaching this is realizing that a `PhrasalConstraint` is basically a subset of a `DisjunctiveConstraint`:
```python
variations_1 = ["rain"]
force_rain = tokenizer.encode(variations_1, return_tensors="pt")
constraints = [
DisjunctiveConstraint(force_rain),
]
```
The above has the same effect as just doing `PhrasalConstraint(tokenizer.encode("rain"))`, since the disjunctive code would be choosing one option from a list with a single item.
So maybe the API above should be changed to:
```python
starting_text = ["The soldiers were not prepared and"]
force_tokens = [
["scared", "another"],
["big weapons"]
]
input_ids = tokenizer(starting_text, return_tensors="pt").input_ids.to(torch_device)
force_token_ids = tokenizer(force_tokens).input_ids
outputs = model.generate(
input_ids,
force_token_ids=force_token_ids, # [
[ [28349], [456849] ], # DisjunctiveConstraint, choose one between two
[ [23981, 3721] ] # same ordinary PhrasalConstraint for a sequence, yet still a Disjunctive object.
]
...
)
```
Then the input type for `force_token_ids` would be `List[List[List[int]]]`, which sounds quite messy.
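Something like the naive normalisation below could at least hide the nesting from the rest of the code (the helper name and the tuple format are my own assumptions):
```python
from typing import List, Tuple

def normalize_force_token_ids(force_token_ids: List[List[List[int]]]) -> List[Tuple[str, List[List[int]]]]:
    # Each outer entry is one constraint: a single inner option means "force this exact phrase",
    # several options mean "force any one of these phrases" (disjunctive).
    constraints = []
    for options in force_token_ids:
        kind = "phrasal" if len(options) == 1 else "disjunctive"
        constraints.append((kind, options))
    return constraints

# normalize_force_token_ids([[[28349], [456849]], [[23981, 3721]]])
# -> [("disjunctive", [[28349], [456849]]), ("phrasal", [[23981, 3721]])]
```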
Let me know your thoughts on this!<|||||>Hey @patrickvonplaten,
That's a good point. I think that the `DisjunctiveConstraint` feature should also come as a separate PR for maintenance purposes, as you said. I'll open a new PR soon.
The checks seem to be all passing now! Will you do a final review?
Thanks!
<|||||>Incredible job at implementing this @cwkeam - it's very clean and enables some really cool use cases.
If you'd like, we could now do the disjunctive decoding in a follow-up PR, make it a bit easier for the user to enable it via some keywords for `generate()`, and then post about this method - I'm sure the community will be very excited about this addition. If you'd like to promote the integration already at this point, we could also do it right away and delay the disjunctive decoding PR a bit - as you wish :-)<|||||>All slow tests run correctly and the fast tests are not slowed down by this PR. One final change @cwkeam and I think then we are good to go<|||||>@patrickvonplaten @sgugger Thanks for the review! I've adjusted all the code accordingly.
As for promoting these changes, I think I'll save that for when the `DisjunctiveConstraint` and the simple integration into the `generate()` function are complete. I'll open a new PR about it soon. Once those other changes are done, what would be a nice way to promote this change? Could it be possible to write a blog post in huggingface blogs as well as posting on TowardsDataScience about it?
Thanks so much for everyone's help!<|||||>> @patrickvonplaten @sgugger Thanks for the review! I've adjusted all the code accordingly.
>
> As for promoting these changes, I think I'll save it for the `DisjunctiveConstraint` and simple integration to the `generate()` function is complete. I'll open a new PR about it soon. In terms of after those other changes are done, what would be a nice way to promote this change? Could it be possible to write a blog post in huggingface blogs as well as posting on TowardsDataScience about it?
>
> Thanks so much for everyone's help!
A blog post would be a great idea indeed! This is a really nice feature to write a blog post about I think. Happy to help you <|||||>@patrickvonplaten
I've made all the requested adjustments on the comments.
The test is failing right now at just one script:
```
tests/test_modeling_speech_encoder_decoder.py::Wav2Vec2Speech2Text2::test_encoder_decoder_model_generate
```
which actually runs fine on my machine. Not sure why it's throwing this error after pushing a new commit with adjustments to nothing but comments. This has happened before but simply worked out after fetching from master. How does one generally deal with these problems?
EDIT: upon pushing again with some docstring adjustments, presumably just by triggering a re-run, it's passing now<|||||>Incredible work here @cwkeam - this was one of the cleanest and highest quality PRs that I've seen in a long time! Merging - let's try to tackle disjunctive decoding next :rocket: |
transformers | 15,415 | closed | Performance of T5 | I'm testing the performance of T5, and want to reproduce the result on WMT English to German newstest2014 in the table 14. But the results I got were always 1 or 2 points lower. Are there any scripts that can reproduce the result with pytorch? | 01-31-2022 02:38:56 | 01-31-2022 02:38:56 | I have found that<|||||>> I have found that
Hi, could you please share how you managed to reproduce the results?<|||||>It's impossible, because the model hasn't been fine-tuned on translation. The details are at https://github.com/huggingface/transformers/tree/v2.9.0/examples/translation/t5<|||||>@zzasdf Thanks a lot!
transformers | 15,414 | closed | [Hotfix] Fix Swin model outputs | # What does this PR do?
This PR:
- fixes some critical outputs of `SwinModel` so that they exactly match `ViTModel`, `BeitModel` and `DeitModel`. More specifically, `last_hidden_state` previously returned a pooled output, which shouldn't be the case. Instead, the output should be a tensor of shape (batch_size, seq_len, hidden_size), and the pooler output a tensor of shape (batch_size, hidden_size) (see the sketch after this list).
- adds `problem_type` support to `SwinForImageClassification`
- fixes the integration test
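For reference, a minimal usage sketch of the intended behaviour after this fix (the checkpoint name is an assumption for illustration; the expected shapes are for a tiny Swin variant at 224x224 resolution):
```python
import torch
from transformers import SwinModel

# checkpoint name is an assumption for illustration purposes
model = SwinModel.from_pretrained("microsoft/swin-tiny-patch4-window7-224")
pixel_values = torch.zeros(1, 3, 224, 224)  # dummy image batch

with torch.no_grad():
    outputs = model(pixel_values)

print(outputs.last_hidden_state.shape)  # (batch_size, seq_len, hidden_size), e.g. torch.Size([1, 49, 768])
print(outputs.pooler_output.shape)      # (batch_size, hidden_size), e.g. torch.Size([1, 768])
```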
To do:
- [x] more of a nit, but the pooler in Swin is called `self.pool` rather than `self.pooler` (as in models like BERT, ViT). We could rename it to `self.pooler` for consistency. | 01-30-2022 21:05:55 | 01-30-2022 21:05:55 | _The documentation is not available anymore as the PR was closed or merged._ |
transformers | 15,413 | closed | add scores to Wav2Vec2WithLMOutput | # What does this PR do?
- Adds `logit_score` and `lm_score` produced by `BeamSearchDecoderCTC` to `Wav2Vec2DecoderWithLMOutput`. Useful for results analysis and pseudo-labeling for self-training.
- Removes padding from batched logits (as returned by `Trainer.predict`) at `batch_decode`. This prevents producing nonsensical tokens at the end of decoded texts.
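A quick usage sketch of how these scores could be read back (the checkpoint name is an assumption and the random logits only keep the snippet self-contained; the attribute names follow this PR):
```python
import numpy as np
from transformers import Wav2Vec2ProcessorWithLM

# checkpoint name is an assumption for illustration
processor = Wav2Vec2ProcessorWithLM.from_pretrained("patrickvonplaten/wav2vec2-base-100h-with-lm")

logits = np.random.randn(2, 200, 32).astype(np.float32)  # dummy (batch, time, vocab) logits
output = processor.batch_decode(logits)

print(output.text)         # decoded transcriptions
print(output.logit_score)  # per-sample acoustic (logit) beam scores added by this PR
print(output.lm_score)     # per-sample LM-weighted beam scores added by this PR
```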
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/master/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/master/docs#writing-source-documentation).
- [x] Did you write any new necessary tests?
## Who can review?
The PR is related to Robust ASR week.
@patrickvonplaten @anton-l
| 01-30-2022 18:58:45 | 01-30-2022 18:58:45 | _The documentation is not available anymore as the PR was closed or merged._ |
transformers | 15,412 | closed | How to use multi-node multi-gpu cluster for large batch beam-search decoding | I have a self-implemented-from-scratch torch.nn.Module GPT2 model (self-implemented for custom tweaks to the model) and now I have a 4-node-32-GPU cluster which I like to use to beam-search generate continuation for millions of prefixes. Is there a way to use hugginface as a wrapper to do this in a multi-node-multi-gpu fashion? I know it can be done by manually splitting the file into small chunks and individually beam-search generate,but it is messy. | 01-30-2022 18:14:22 | 01-30-2022 18:14:22 | Hello, thanks for opening an issue! We try to keep the github issues for bugs/feature requests.
Could you ask your question on the [forum](https://discuss.huggingface.co) instead?
Thanks!<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored. |
transformers | 15,411 | closed | [JAX/FLAX]: Language modeling example throws RESOURCE_EXHAUSTED error for large datasets | Hi,
I haven't found any similar issue that is related to my `RESOURCE_EXHAUSTED` error, so I'm creating a new issue here.
Problem is: I can use a dataset that is <= 50GB with a certain batch size for BERT or T5 pre-training. It is working fine.
However, if I want to use a larger dataset > 50GB with the **same** batch size, I always get the following error message:

Which is happening here:
https://github.com/huggingface/transformers/blob/0f69b924fbda6a442d721b10ece38ccfc6b67275/examples/flax/language-modeling/run_t5_mlm_flax.py#L442-L450
Under some circumstances, I can train a ~50GB dataset for one epoch, but before entering the second epoch, training also crashes with that `RESOURCE_EXHAUSTED` error.
My current workaround for e.g. a 150GB corpus is: split the corpus into 50GB chunks, and pre-train the model on those chunks.
What would be an effective method to pre-train models on larger datasets with the JAX/FLAX language modeling implementation :thinking:
Many thanks :heart: | 01-30-2022 17:20:28 | 01-30-2022 17:20:28 | I've seen the streaming PR approach, but it was closed due to strange pre-training behaviour: https://github.com/huggingface/transformers/pull/12607<|||||>The `generate_batch_splits` function is used here later:
https://github.com/huggingface/transformers/blob/0f69b924fbda6a442d721b10ece38ccfc6b67275/examples/flax/language-modeling/run_t5_mlm_flax.py#L812-L815
As far as I understand it, the list of training sample ids, e.g. `[2,3,4,5,6,10,1,0,50,16,12,51]`, will be split into sublists according to the batch size - let's say 4 - so `train_batch_idx` looks like: `[ [2,3,4,5], [6,10,1,0], [50,16,12,51] ]`.
My question is now: do we really need to do this, or can we simply iterate over the train idx array in batch-size steps, e.g. with:
```python
l = [2,3,4,5,6,10,1,0,50,16,12,51]
batch_size = 4
for i in range(0, len(l), batch_size):
print(l[i : i + batch_size])
```
in the batch loop:
https://github.com/huggingface/transformers/blob/0f69b924fbda6a442d721b10ece38ccfc6b67275/examples/flax/language-modeling/run_t5_mlm_flax.py#L815
:thinking: <|||||>Will take a look into this, also cc @patrickvonplaten
If possible could you share the command to reproduce this and also post the full stack trace? Thanks<|||||>Wow this looks like a difficult issue! Thanks for posting it here @stefan-it. Maybe we should also include some Flax, JAX guys here in case they've seen this before<|||||>Quick update on it: I was able to use the full 149GB dataset by avoiding the `np.split` logic that splits the list of sample ids into batch-size length sublists:
```diff
+def get_samples_idx(samples_idx: jnp.ndarray, batch_size: int) -> jnp.ndarray:
+ num_samples = len(samples_idx)
+ samples_to_remove = num_samples % batch_size
+
+ if samples_to_remove != 0:
+ samples_idx = samples_idx[:-samples_to_remove]
+ return samples_idx
+
+
def write_train_metric(summary_writer, train_metrics, train_time, step):
summary_writer.scalar("train_time", train_time, step)
@@ -809,10 +821,11 @@ def main():
# Generate an epoch by shuffling sampling indices from the train dataset
num_train_samples = len(tokenized_datasets["train"])
train_samples_idx = jax.random.permutation(input_rng, jnp.arange(num_train_samples))
- train_batch_idx = generate_batch_splits(train_samples_idx, train_batch_size)
+ samples_idx = get_samples_idx(train_samples_idx, train_batch_size)
# Gather the indexes for creating the batch and do a training step
- for step, batch_idx in enumerate(tqdm(train_batch_idx, desc="Training...", position=1)):
+ for step, batch_idx in enumerate(tqdm(range(0, len(samples_idx), train_batch_size), desc="Training...", position=1)):
+ batch_idx = train_samples_idx[step : step + train_batch_size]
samples = [tokenized_datasets["train"][int(idx)] for idx in batch_idx]
model_inputs = data_collator(samples)
```
This requires minimal changes in the existing code and training is running now with 1,082,358 samples per epoch :)<|||||>Interesting - think we could use this to update the official script no? Seems like you found a much better solution<|||||>Update: I've used this workaround and the OOM error after one epoch also disappears!
So I can prepare a PR for the different example scripts (`run_{clm,mlm,t5_mlm}_flax.py`) :)<|||||>I just experienced the same issue. thanks @stefan-it <|||||>It's also working with the `run_mlm_flax.py` script - I'm currently road-testing that fix to make sure, that it is working.<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored. |
transformers | 15,410 | closed | Add SegformerFeatureExtractor to Auto API | # What does this PR do?
This PR adds `SegformerFeatureExtractor` to the AutoFeatureExtractor API. | 01-30-2022 08:26:51 | 01-30-2022 08:26:51 | _The documentation is not available anymore as the PR was closed or merged._ |
transformers | 15,409 | closed | Wav2Vec2 models must either throw or deal with add_adapter | Using `Wav2Vec2Model` models with `config.add_adapter=True` as `Wav2Vec2ForCTC` (for instance) doesn't throw on creation, but eventually fails due to a mismatch between the output dimension (`config.output_hidden_size`) and the `lm_head` layer put on top of it (`config.output_hidden`).
# What does this PR do?
I propose:
1. to fix this for two sub-models that only rely on the last output (by using `output_hidden_size` when `add_adapter` is set), and
2. to raise an exception for the other types of models (that try to average multiple layers). Raising an explicit error is better than failing during the forward pass.
Another option is:
3. to always throw if add_adapter is set.
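To illustrate option 1, here is a minimal sketch (not the actual patch; the helper is hypothetical) of sizing the head from `output_hidden_size` whenever an adapter is appended:
```python
import torch
from transformers import Wav2Vec2Config

def build_lm_head(config: Wav2Vec2Config) -> torch.nn.Linear:
    # use `output_hidden_size` when an adapter is appended, otherwise the usual `hidden_size`
    output_hidden_size = (
        config.output_hidden_size if getattr(config, "add_adapter", False) else config.hidden_size
    )
    return torch.nn.Linear(output_hidden_size, config.vocab_size)

# e.g. build_lm_head(Wav2Vec2Config(add_adapter=True, output_hidden_size=256))
```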
## Before submitting
/!\ This is a draft pull request for review of the general proposal. I did not test this code or write tests yet. I edited the file online directly on github.
## Who can review?
Cc @patrickvonplaten since this is related to the robust-speech-event (community event) | 01-29-2022 22:42:40 | 01-29-2022 22:42:40 | _The documentation is not available anymore as the PR was closed or merged._<|||||>Based on the current CI results:
1. This needs reformatting if we want to merge.
2. All the Wav2Vec2-like models should mirror this change.
3. Ideally, a test would be written to ensure this doesn't regress.
But I still want to have feedback on the preferred approach before considering to go through all the hoops.<|||||>Hey @FremyCompany,
Thanks for the PR. I think it's a welcome change. Could you run:
```make fix-copies``` so that this change is automatically copied to the other variants of Wav2Vec2?<|||||>There is an issue with the changes propagated by the fix-copies command.
```
AttributeError: 'UniSpeechSatConfig' object has no attribute 'add_adapter'
```
I'm not sure how that should be fixed. Any idea?<|||||>I went for an `hasattr` check.
(Assumption: if the key is absent, it defaults to False). |
transformers | 15,408 | closed | Fix additional DataTrainingArguments documentation | (This is an editorial change only)
## Before submitting
- [x] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
## Who can review?
- maintained examples (not research project or legacy): @sgugger, @patil-suraj | 01-29-2022 22:12:35 | 01-29-2022 22:12:35 | _The documentation is not available anymore as the PR was closed or merged._ |
transformers | 15,407 | closed | adds decoding scores to Wav2Vec2DecoderWithLMOutput | # What does this PR do?
- Adds `logit_score` and `lm_score` produced by `BeamSearchDecoderCTC` to `Wav2Vec2DecoderWithLMOutput`. Useful for results analysis and pseudo-labeling for self-training.
- Removes padding from batched logits (as returned by `Trainer.predict`) at `batch_decode`. This prevents producing nonsensical tokens at the end of decoded texts.
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/master/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/master/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
The PR is related to Robust ASR week.
@patrickvonplaten @anton-l
| 01-29-2022 21:29:38 | 01-29-2022 21:29:38 | _The documentation is not available anymore as the PR was closed or merged._<|||||>I'm fine with adding this actually. Is there a reason why you closed the PR? :-)<|||||>I see a new one was opened: https://github.com/huggingface/transformers/pull/15413/files . never mind |
transformers | 15,406 | closed | [XGLMTokenizer] fix init and add in AutoTokenizer | # What does this PR do?
This PR fixes the init of `XGLMTokenizer` and adds it in `AutoTokenizer` mapping.
Fixes #15405
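A quick sketch of the behaviour this enables (the model id is taken from the linked issue):
```python
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("facebook/xglm-1.7B")
print(type(tokenizer).__name__)  # XGLMTokenizer (or its fast variant)
```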
| 01-29-2022 20:32:19 | 01-29-2022 20:32:19 | _The documentation is not available anymore as the PR was closed or merged._ |
transformers | 15,405 | closed | AutoTokenizer does not support XGLMTokenizer | ## Environment info
- `transformers` version: 4.17.0.dev (f380bf2b612e6030ef8bc8904b287d274f035e29)
- Platform: Linux
- Python version: 3.8
- PyTorch version (GPU?): 1.9.0
- Tensorflow version (GPU?): n/a
- Using GPU in script?: Yes
- Using distributed or parallel set-up in script?:
### Who can help
@patil-suraj
## Information
Model I am using (Bert, XLNet ...): XGLM
The problem arises when using:
* [ ] the official example scripts: (give details below)
* [X] my own modified scripts: (give details below)
The tasks I am working on is:
* [ ] an official GLUE/SQUaD task: (give the name)
* [X] my own task or dataset: (give details below)
## To reproduce
Steps to reproduce the behavior:
Just execute this piece of code:
```
from transformers import AutoTokenizer
tokenizer = AutoTokenizer.from_pretrained("facebook/xglm-1.7B")
```
## Expected behavior
tokenizer to be of class "XGLMTokenizer" | 01-29-2022 16:40:47 | 01-29-2022 16:40:47 | Thank you for reporting this issue! The fix is here #15406 |
transformers | 15,404 | closed | what is the equivalent manner for those lines? | null | 01-29-2022 16:03:12 | 01-29-2022 16:03:12 | Hello, thanks for opening an issue! We try to keep the github issues for bugs/feature requests.
Could you ask your question on the [forum](https://discuss.huggingface.co) instead?
Thanks! |
transformers | 15,403 | closed | Correct eos_token_id settings in generate |
In the generation process, if `eos_token_id` and `config.eos_token_id` are both `None`, use `config.decoder.eos_token_id` to set `eos_token_id`.
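A hedged sketch of the described fallback (the helper name is an assumption; the actual change lives inside `generate()`):
```python
def resolve_eos_token_id(eos_token_id, config):
    # prefer the explicit argument, then config.eos_token_id, then the decoder sub-config
    if eos_token_id is not None:
        return eos_token_id
    if getattr(config, "eos_token_id", None) is not None:
        return config.eos_token_id
    decoder_config = getattr(config, "decoder", None)
    return getattr(decoder_config, "eos_token_id", None) if decoder_config is not None else None
```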
Fixes https://github.com/huggingface/transformers/issues/14905
| 01-29-2022 12:36:36 | 01-29-2022 12:36:36 | _The documentation is not available anymore as the PR was closed or merged._<|||||>Good for me as well!<|||||>@thinksoso, could you try to adapt the failing test by setting `config.decoder.eos_token_id = None` before running it? :-)<|||||>> @thinksoso, could you try to adapt the failing test by setting `config.decoder.eos_token_id = None` before running it? :-)
I made it!<|||||>> I think it's fine! What do you think? @patrickvonplaten
Would you review again? Thanks! |
transformers | 15,402 | closed | XLNet config num_labels and _num_labels | ## Environment info
- `transformers` version:
- Platform:
- Python version:
- PyTorch version (GPU?):
- Tensorflow version (GPU?):
- Using GPU in script?:
- Using distributed or parallel set-up in script?:
### Who can help
## Information
Model I am using (Bert, XLNet ...):
The problem arises when using:
* [ ] the official example scripts: (give details below)
* [ ] my own modified scripts: (give details below)
The tasks I am working on is:
* [ ] an official GLUE/SQUaD task: (give the name)
* [ ] my own task or dataset: (give details below)
## To reproduce
Steps to reproduce the behavior:
1. Create XLNetModel
2. Perform Sequence classification with more than 2 labels
3. Print the config before training; you can see `"_num_labels": 2`.
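A hedged sketch of those steps (the checkpoint name is an assumption; with the stock `xlnet-base-cased` config the `_num_labels` attribute is simply absent - the `2` only shows up with the specific pretrained model used here):
```python
from transformers import XLNetForSequenceClassification

# checkpoint name is an assumption for illustration
model = XLNetForSequenceClassification.from_pretrained("xlnet-base-cased", num_labels=4)
print(model.config.num_labels)                         # 4
print(getattr(model.config, "_num_labels", "absent"))  # "absent" for the stock config
```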
## Expected behavior
The XLNet config is declaring `"_num_labels": 2`, while `config.num_labels` is what the model actually uses. I don't know why that is.
| 01-29-2022 06:13:33 | 01-29-2022 06:13:33 | Hello! Could you provide a reproducible code example? I don't manage to reproduce:
```py
>>> from transformers import XLNetConfig
>>> config = XLNetConfig(num_labels=4)
>>> config
XLNetConfig {
"attn_type": "bi",
"bi_data": false,
"bos_token_id": 1,
"clamp_len": -1,
"d_head": 64,
"d_inner": 4096,
"d_model": 1024,
"dropout": 0.1,
"end_n_top": 5,
"eos_token_id": 2,
"ff_activation": "gelu",
"id2label": {
"0": "LABEL_0",
"1": "LABEL_1",
"2": "LABEL_2",
"3": "LABEL_3"
},
"initializer_range": 0.02,
"label2id": {
"LABEL_0": 0,
"LABEL_1": 1,
"LABEL_2": 2,
"LABEL_3": 3
},
"layer_norm_eps": 1e-12,
"mem_len": 512,
"model_type": "xlnet",
"n_head": 16,
"n_layer": 24,
"pad_token_id": 5,
"reuse_len": null,
"same_length": false,
"start_n_top": 5,
"summary_activation": "tanh",
"summary_last_dropout": 0.1,
"summary_type": "last",
"summary_use_proj": true,
"transformers_version": "4.16.2",
"untie_r": true,
"use_mems_eval": true,
"use_mems_train": false,
"vocab_size": 32000
}
>>> config.num_labels
4
>>> config._num_labels
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "/home/lysandre/Workspaces/Python/transformers/src/transformers/configuration_utils.py", line 250, in __getattribute__
return super().__getattribute__(key)
AttributeError: 'XLNetConfig' object has no attribute '_num_labels'
```<|||||>I am using different pertained XLNet Model, I guess, it's specific to that. Sorry for raising the issue. |
transformers | 15,401 | closed | Running SQuAD 1.0 sample command raises `IndexError` | ## Environment info
- `transformers` version: 4.17.0.dev0
- Platform: Linux-5.4.0-91-generic-x86_64-with-glibc2.31
- Python version: 3.9.5
- PyTorch version (GPU?): 1.10.1 (True)
- Tensorflow version (GPU?): not installed (NA)
- Flax version (CPU?/GPU?/TPU?): not installed (NA)
- Jax version: not installed
- JaxLib version: not installed
- Using GPU in script?: Yes (automatically through accelerate)
- Using distributed or parallel set-up in script?: No (only one GPU)
### Who can help
@sgugger, @patil-suraj
## Information
Model I am using (Bert, XLNet ...): `bert-base-uncased`
The problem arises when using:
* [x] the official example scripts: (give details below)
* [ ] my own modified scripts: (give details below)
The tasks I am working on is:
* [x] an official GLUE/SQUaD task: (give the name)
* [ ] my own task or dataset: (give details below)
Running the example BERT fine-tuning command for SQuAD as-is raises an `IndexError` during the post-processing stage after evaluation. I also tried the same task with `run_qa_no_trainer.py`, and the same error occurs.
## To reproduce
Steps to reproduce the behavior:
1. `conda install -c pytorch pytorch==1.10.1 torchvision==0.11.2 cudatoolkit==11.3.1`
2. `git clone https://github.com/huggingface/transformers` (commit hash: 16d4acbfdb547cb922361ba07a13de12e1503fb8)
3. `cd transformers`
4. `pip install .`
5. `pip install -r examples/pytorch/question-answering/requirements.txt`
6. `cd examples/pytorch/question-answering `
7. Copy-pasted the BERT SQuAD 1.0 fine-tuning command.
Continued in console output:
```console
57add2499a61# cd examples/pytorch/question-answering
57add2499a61# python run_qa.py \
--model_name_or_path bert-base-uncased \
--dataset_name squad \
--do_train \
--do_eval \
--per_device_train_batch_size 12 \
--learning_rate 3e-5 \
--num_train_epochs 2 \
--max_seq_length 384 \
--doc_stride 128 \
--output_dir /tmp/debug_squad/
01/28/2022 23:44:37 - WARNING - __main__ - Process rank: -1, device: cuda:0, n_gpu: 1distributed training: False, 16-bits training: False
01/28/2022 23:44:37 - INFO - __main__ - Training/evaluation parameters TrainingArguments(
_n_gpu=1,
adafactor=False,
adam_beta1=0.9,
adam_beta2=0.999,
adam_epsilon=1e-08,
bf16=False,
bf16_full_eval=False,
dataloader_drop_last=False,
dataloader_num_workers=0,
dataloader_pin_memory=True,
ddp_bucket_cap_mb=None,
ddp_find_unused_parameters=None,
debug=[],
deepspeed=None,
disable_tqdm=False,
do_eval=True,
do_predict=False,
do_train=True,
eval_accumulation_steps=None,
eval_steps=None,
evaluation_strategy=IntervalStrategy.NO,
fp16=False,
fp16_backend=auto,
fp16_full_eval=False,
fp16_opt_level=O1,
gradient_accumulation_steps=1,
gradient_checkpointing=False,
greater_is_better=None,
group_by_length=False,
half_precision_backend=auto,
hub_model_id=None,
hub_strategy=HubStrategy.EVERY_SAVE,
hub_token=<HUB_TOKEN>,
ignore_data_skip=False,
label_names=None,
label_smoothing_factor=0.0,
learning_rate=3e-05,
length_column_name=length,
load_best_model_at_end=False,
local_rank=-1,
log_level=-1,
log_level_replica=-1,
log_on_each_node=True,
logging_dir=/tmp/debug_squad/runs/Jan28_23-44-36_57add2499a61,
logging_first_step=False,
logging_nan_inf_filter=True,
logging_steps=500,
logging_strategy=IntervalStrategy.STEPS,
lr_scheduler_type=SchedulerType.LINEAR,
max_grad_norm=1.0,
max_steps=-1,
metric_for_best_model=None,
mp_parameters=,
no_cuda=False,
num_train_epochs=2.0,
optim=OptimizerNames.ADAMW_HF,
output_dir=/tmp/debug_squad/,
overwrite_output_dir=False,
past_index=-1,
per_device_eval_batch_size=8,
per_device_train_batch_size=12,
prediction_loss_only=False,
push_to_hub=False,
push_to_hub_model_id=None,
push_to_hub_organization=None,
push_to_hub_token=<PUSH_TO_HUB_TOKEN>,
remove_unused_columns=True,
report_to=[],
resume_from_checkpoint=None,
run_name=/tmp/debug_squad/,
save_on_each_node=False,
save_steps=500,
save_strategy=IntervalStrategy.STEPS,
save_total_limit=None,
seed=42,
sharded_ddp=[],
skip_memory_metrics=True,
tf32=None,
tpu_metrics_debug=False,
tpu_num_cores=None,
use_legacy_prediction_loop=False,
warmup_ratio=0.0,
warmup_steps=0,
weight_decay=0.0,
xpu_backend=None,
)
01/28/2022 23:44:37 - INFO - datasets.utils.file_utils - https://raw.githubusercontent.com/huggingface/datasets/1.18.2/datasets/squad/squad.py not found in cache or force_download set to True, downloading to /root/.cache/huggingface/datasets/downloads/tmp13outgcl
Downloading: 5.27kB [00:00, 5.91MB/s]
01/28/2022 23:44:37 - INFO - datasets.utils.file_utils - storing https://raw.githubusercontent.com/huggingface/datasets/1.18.2/datasets/squad/squad.py in cache at /root/.cache/huggingface/datasets/downloads/757bbb86e029fd1bb99d893e0eed445a1d920648c31be7fa62f6911432d7d04f.88910a81ad509b864eb2728ed18e25076f86eaa3cd11c5587ab5ceea8903a4bc.py
01/28/2022 23:44:37 - INFO - datasets.utils.file_utils - creating metadata file for /root/.cache/huggingface/datasets/downloads/757bbb86e029fd1bb99d893e0eed445a1d920648c31be7fa62f6911432d7d04f.88910a81ad509b864eb2728ed18e25076f86eaa3cd11c5587ab5ceea8903a4bc.py
01/28/2022 23:44:38 - INFO - datasets.utils.file_utils - https://raw.githubusercontent.com/huggingface/datasets/1.18.2/datasets/squad/dataset_infos.json not found in cache or force_download set to True, downloading to /root/.cache/huggingface/datasets/downloads/tmp_ol5ulua
Downloading: 2.36kB [00:00, 3.52MB/s]
01/28/2022 23:44:38 - INFO - datasets.utils.file_utils - storing https://raw.githubusercontent.com/huggingface/datasets/1.18.2/datasets/squad/dataset_infos.json in cache at /root/.cache/huggingface/datasets/downloads/1966b23d025b3d578359815bf6c1e28cdc39773d03785fbc131ada8108c985d9.36bd0df82ceb24eeafc05394b25c534952fd7b2eaacf2b1f49933a8330f5800b
01/28/2022 23:44:38 - INFO - datasets.utils.file_utils - creating metadata file for /root/.cache/huggingface/datasets/downloads/1966b23d025b3d578359815bf6c1e28cdc39773d03785fbc131ada8108c985d9.36bd0df82ceb24eeafc05394b25c534952fd7b2eaacf2b1f49933a8330f5800b
01/28/2022 23:44:38 - INFO - datasets.builder - No config specified, defaulting to first: squad/plain_text
01/28/2022 23:44:38 - INFO - datasets.info - Loading Dataset Infos from /root/.cache/huggingface/modules/datasets_modules/datasets/squad/d6ec3ceb99ca480ce37cdd35555d6cb2511d223b9150cce08a837ef62ffea453
01/28/2022 23:44:38 - INFO - datasets.builder - Generating dataset squad (/root/.cache/huggingface/datasets/squad/plain_text/1.0.0/d6ec3ceb99ca480ce37cdd35555d6cb2511d223b9150cce08a837ef62ffea453)
Downloading and preparing dataset squad/plain_text (download: 33.51 MiB, generated: 85.63 MiB, post-processed: Unknown size, total: 119.14 MiB) to /root/.cache/huggingface/datasets/squad/plain_text/1.0.0/d6ec3ceb99ca480ce37cdd35555d6cb2511d223b9150cce08a837ef62ffea453...
01/28/2022 23:44:38 - INFO - datasets.builder - Dataset not on Hf google storage. Downloading and preparing it from source
0%| | 0/2 [00:00<?, ?it/s]01/28/2022 23:44:39 - INFO - datasets.utils.file_utils - https://rajpurkar.github.io/SQuAD-explorer/dataset/train-v1.1.json not found in cache or force_download set to True, downloading to /root/.cache/huggingface/datasets/downloads/tmpeh7ajej8
Downloading: 30.3MB [00:00, 103MB/s]
01/28/2022 23:44:39 - INFO - datasets.utils.file_utils - storing https://rajpurkar.github.io/SQuAD-explorer/dataset/train-v1.1.json in cache at /root/.cache/huggingface/datasets/downloads/b8bb19735e1bb591510a01cc032f4c9f969bc0eeb081ae1b328cd306f3b24008
01/28/2022 23:44:39 - INFO - datasets.utils.file_utils - creating metadata file for /root/.cache/huggingface/datasets/downloads/b8bb19735e1bb591510a01cc032f4c9f969bc0eeb081ae1b328cd306f3b24008
50%|███████████████████████████████████████████████████████████████████████████████████ | 1/2 [00:01<00:01, 1.08s/it]01/28/2022 23:44:39 - INFO - datasets.utils.file_utils - https://rajpurkar.github.io/SQuAD-explorer/dataset/dev-v1.1.json not found in cache or force_download set to True, downloading to /root/.cache/huggingface/datasets/downloads/tmpfyt6869a
Downloading: 4.85MB [00:00, 92.3MB/s]
01/28/2022 23:44:39 - INFO - datasets.utils.file_utils - storing https://rajpurkar.github.io/SQuAD-explorer/dataset/dev-v1.1.json in cache at /root/.cache/huggingface/datasets/downloads/9d5462987ef5f814fe15a369c1724f6ec39a2018b3b6271a9d7d2598686ca2ff
01/28/2022 23:44:39 - INFO - datasets.utils.file_utils - creating metadata file for /root/.cache/huggingface/datasets/downloads/9d5462987ef5f814fe15a369c1724f6ec39a2018b3b6271a9d7d2598686ca2ff
100%|██████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 2/2 [00:01<00:00, 1.52it/s]
01/28/2022 23:44:39 - INFO - datasets.utils.download_manager - Downloading took 0.0 min
01/28/2022 23:44:39 - INFO - datasets.utils.download_manager - Checksum Computation took 0.0 min
100%|████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 2/2 [00:00<00:00, 2341.88it/s]
01/28/2022 23:44:39 - INFO - datasets.utils.info_utils - All the checksums matched successfully for dataset source files
01/28/2022 23:44:39 - INFO - datasets.builder - Generating split train
01/28/2022 23:44:45 - INFO - datasets.builder - Generating split validation
01/28/2022 23:44:46 - INFO - datasets.utils.info_utils - All the splits matched successfully.
Dataset squad downloaded and prepared to /root/.cache/huggingface/datasets/squad/plain_text/1.0.0/d6ec3ceb99ca480ce37cdd35555d6cb2511d223b9150cce08a837ef62ffea453. Subsequent calls will reuse this data.
100%|█████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 2/2 [00:00<00:00, 728.49it/s]
[INFO|file_utils.py:2140] 2022-01-28 23:44:46,444 >> https://huggingface.co/bert-base-uncased/resolve/main/config.json not found in cache or force_download set to True, downloading to /root/.cache/huggingface/transformers/tmpzb4s_2o4
Downloading: 100%|██████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 570/570 [00:00<00:00, 1.25MB/s]
[INFO|file_utils.py:2144] 2022-01-28 23:44:46,529 >> storing https://huggingface.co/bert-base-uncased/resolve/main/config.json in cache at /root/.cache/huggingface/transformers/3c61d016573b14f7f008c02c4e51a366c67ab274726fe2910691e2a761acf43e.37395cee442ab11005bcd270f3c34464dc1704b715b5d7d52b1a461abe3b9e4e
[INFO|file_utils.py:2152] 2022-01-28 23:44:46,529 >> creating metadata file for /root/.cache/huggingface/transformers/3c61d016573b14f7f008c02c4e51a366c67ab274726fe2910691e2a761acf43e.37395cee442ab11005bcd270f3c34464dc1704b715b5d7d52b1a461abe3b9e4e
[INFO|configuration_utils.py:644] 2022-01-28 23:44:46,529 >> loading configuration file https://huggingface.co/bert-base-uncased/resolve/main/config.json from cache at /root/.cache/huggingface/transformers/3c61d016573b14f7f008c02c4e51a366c67ab274726fe2910691e2a761acf43e.37395cee442ab11005bcd270f3c34464dc1704b715b5d7d52b1a461abe3b9e4e
[INFO|configuration_utils.py:680] 2022-01-28 23:44:46,530 >> Model config BertConfig {
"_name_or_path": "bert-base-uncased",
"architectures": [
"BertForMaskedLM"
],
"attention_probs_dropout_prob": 0.1,
"classifier_dropout": null,
"gradient_checkpointing": false,
"hidden_act": "gelu",
"hidden_dropout_prob": 0.1,
"hidden_size": 768,
"initializer_range": 0.02,
"intermediate_size": 3072,
"layer_norm_eps": 1e-12,
"max_position_embeddings": 512,
"model_type": "bert",
"num_attention_heads": 12,
"num_hidden_layers": 12,
"pad_token_id": 0,
"position_embedding_type": "absolute",
"transformers_version": "4.17.0.dev0",
"type_vocab_size": 2,
"use_cache": true,
"vocab_size": 30522
}
[INFO|file_utils.py:2140] 2022-01-28 23:44:46,615 >> https://huggingface.co/bert-base-uncased/resolve/main/tokenizer_config.json not found in cache or force_download set to True, downloading to /root/.cache/huggingface/transformers/tmp7u1nz0yk
Downloading: 100%|████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 28.0/28.0 [00:00<00:00, 63.9kB/s]
[INFO|file_utils.py:2144] 2022-01-28 23:44:46,700 >> storing https://huggingface.co/bert-base-uncased/resolve/main/tokenizer_config.json in cache at /root/.cache/huggingface/transformers/c1d7f0a763fb63861cc08553866f1fc3e5a6f4f07621be277452d26d71303b7e.20430bd8e10ef77a7d2977accefe796051e01bc2fc4aa146bc862997a1a15e79
[INFO|file_utils.py:2152] 2022-01-28 23:44:46,700 >> creating metadata file for /root/.cache/huggingface/transformers/c1d7f0a763fb63861cc08553866f1fc3e5a6f4f07621be277452d26d71303b7e.20430bd8e10ef77a7d2977accefe796051e01bc2fc4aa146bc862997a1a15e79
[INFO|configuration_utils.py:644] 2022-01-28 23:44:46,786 >> loading configuration file https://huggingface.co/bert-base-uncased/resolve/main/config.json from cache at /root/.cache/huggingface/transformers/3c61d016573b14f7f008c02c4e51a366c67ab274726fe2910691e2a761acf43e.37395cee442ab11005bcd270f3c34464dc1704b715b5d7d52b1a461abe3b9e4e
[INFO|configuration_utils.py:680] 2022-01-28 23:44:46,786 >> Model config BertConfig {
"_name_or_path": "bert-base-uncased",
"architectures": [
"BertForMaskedLM"
],
"attention_probs_dropout_prob": 0.1,
"classifier_dropout": null,
"gradient_checkpointing": false,
"hidden_act": "gelu",
"hidden_dropout_prob": 0.1,
"hidden_size": 768,
"initializer_range": 0.02,
"intermediate_size": 3072,
"layer_norm_eps": 1e-12,
"max_position_embeddings": 512,
"model_type": "bert",
"num_attention_heads": 12,
"num_hidden_layers": 12,
"pad_token_id": 0,
"position_embedding_type": "absolute",
"transformers_version": "4.17.0.dev0",
"type_vocab_size": 2,
"use_cache": true,
"vocab_size": 30522
}
[INFO|file_utils.py:2140] 2022-01-28 23:44:46,966 >> https://huggingface.co/bert-base-uncased/resolve/main/vocab.txt not found in cache or force_download set to True, downloading to /root/.cache/huggingface/transformers/tmp_ua9i1wf
Downloading: 100%|████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 226k/226k [00:00<00:00, 3.43MB/s]
[INFO|file_utils.py:2144] 2022-01-28 23:44:47,123 >> storing https://huggingface.co/bert-base-uncased/resolve/main/vocab.txt in cache at /root/.cache/huggingface/transformers/45c3f7a79a80e1cf0a489e5c62b43f173c15db47864303a55d623bb3c96f72a5.d789d64ebfe299b0e416afc4a169632f903f693095b4629a7ea271d5a0cf2c99
[INFO|file_utils.py:2152] 2022-01-28 23:44:47,123 >> creating metadata file for /root/.cache/huggingface/transformers/45c3f7a79a80e1cf0a489e5c62b43f173c15db47864303a55d623bb3c96f72a5.d789d64ebfe299b0e416afc4a169632f903f693095b4629a7ea271d5a0cf2c99
[INFO|file_utils.py:2140] 2022-01-28 23:44:47,210 >> https://huggingface.co/bert-base-uncased/resolve/main/tokenizer.json not found in cache or force_download set to True, downloading to /root/.cache/huggingface/transformers/tmp9xjs8r7s
Downloading: 100%|████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 455k/455k [00:00<00:00, 5.41MB/s]
[INFO|file_utils.py:2144] 2022-01-28 23:44:47,386 >> storing https://huggingface.co/bert-base-uncased/resolve/main/tokenizer.json in cache at /root/.cache/huggingface/transformers/534479488c54aeaf9c3406f647aa2ec13648c06771ffe269edabebd4c412da1d.7f2721073f19841be16f41b0a70b600ca6b880c8f3df6f3535cbc704371bdfa4
[INFO|file_utils.py:2152] 2022-01-28 23:44:47,386 >> creating metadata file for /root/.cache/huggingface/transformers/534479488c54aeaf9c3406f647aa2ec13648c06771ffe269edabebd4c412da1d.7f2721073f19841be16f41b0a70b600ca6b880c8f3df6f3535cbc704371bdfa4
[INFO|tokenization_utils_base.py:1771] 2022-01-28 23:44:47,648 >> loading file https://huggingface.co/bert-base-uncased/resolve/main/vocab.txt from cache at /root/.cache/huggingface/transformers/45c3f7a79a80e1cf0a489e5c62b43f173c15db47864303a55d623bb3c96f72a5.d789d64ebfe299b0e416afc4a169632f903f693095b4629a7ea271d5a0cf2c99
[INFO|tokenization_utils_base.py:1771] 2022-01-28 23:44:47,648 >> loading file https://huggingface.co/bert-base-uncased/resolve/main/tokenizer.json from cache at /root/.cache/huggingface/transformers/534479488c54aeaf9c3406f647aa2ec13648c06771ffe269edabebd4c412da1d.7f2721073f19841be16f41b0a70b600ca6b880c8f3df6f3535cbc704371bdfa4
[INFO|tokenization_utils_base.py:1771] 2022-01-28 23:44:47,648 >> loading file https://huggingface.co/bert-base-uncased/resolve/main/added_tokens.json from cache at None
[INFO|tokenization_utils_base.py:1771] 2022-01-28 23:44:47,648 >> loading file https://huggingface.co/bert-base-uncased/resolve/main/special_tokens_map.json from cache at None
[INFO|tokenization_utils_base.py:1771] 2022-01-28 23:44:47,648 >> loading file https://huggingface.co/bert-base-uncased/resolve/main/tokenizer_config.json from cache at /root/.cache/huggingface/transformers/c1d7f0a763fb63861cc08553866f1fc3e5a6f4f07621be277452d26d71303b7e.20430bd8e10ef77a7d2977accefe796051e01bc2fc4aa146bc862997a1a15e79
[INFO|configuration_utils.py:644] 2022-01-28 23:44:47,735 >> loading configuration file https://huggingface.co/bert-base-uncased/resolve/main/config.json from cache at /root/.cache/huggingface/transformers/3c61d016573b14f7f008c02c4e51a366c67ab274726fe2910691e2a761acf43e.37395cee442ab11005bcd270f3c34464dc1704b715b5d7d52b1a461abe3b9e4e
[INFO|configuration_utils.py:680] 2022-01-28 23:44:47,736 >> Model config BertConfig {
"_name_or_path": "bert-base-uncased",
"architectures": [
"BertForMaskedLM"
],
"attention_probs_dropout_prob": 0.1,
"classifier_dropout": null,
"gradient_checkpointing": false,
"hidden_act": "gelu",
"hidden_dropout_prob": 0.1,
"hidden_size": 768,
"initializer_range": 0.02,
"intermediate_size": 3072,
"layer_norm_eps": 1e-12,
"max_position_embeddings": 512,
"model_type": "bert",
"num_attention_heads": 12,
"num_hidden_layers": 12,
"pad_token_id": 0,
"position_embedding_type": "absolute",
"transformers_version": "4.17.0.dev0",
"type_vocab_size": 2,
"use_cache": true,
"vocab_size": 30522
}
[INFO|file_utils.py:2140] 2022-01-28 23:44:47,914 >> https://huggingface.co/bert-base-uncased/resolve/main/pytorch_model.bin not found in cache or force_download set to True, downloading to /root/.cache/huggingface/transformers/tmpc8j01jc4
Downloading: 100%|████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 420M/420M [00:27<00:00, 16.2MB/s]
[INFO|file_utils.py:2144] 2022-01-28 23:45:15,556 >> storing https://huggingface.co/bert-base-uncased/resolve/main/pytorch_model.bin in cache at /root/.cache/huggingface/transformers/a8041bf617d7f94ea26d15e218abd04afc2004805632abc0ed2066aa16d50d04.faf6ea826ae9c5867d12b22257f9877e6b8367890837bd60f7c54a29633f7f2f
[INFO|file_utils.py:2152] 2022-01-28 23:45:15,556 >> creating metadata file for /root/.cache/huggingface/transformers/a8041bf617d7f94ea26d15e218abd04afc2004805632abc0ed2066aa16d50d04.faf6ea826ae9c5867d12b22257f9877e6b8367890837bd60f7c54a29633f7f2f
[INFO|modeling_utils.py:1427] 2022-01-28 23:45:15,557 >> loading weights file https://huggingface.co/bert-base-uncased/resolve/main/pytorch_model.bin from cache at /root/.cache/huggingface/transformers/a8041bf617d7f94ea26d15e218abd04afc2004805632abc0ed2066aa16d50d04.faf6ea826ae9c5867d12b22257f9877e6b8367890837bd60f7c54a29633f7f2f
[WARNING|modeling_utils.py:1685] 2022-01-28 23:45:16,572 >> Some weights of the model checkpoint at bert-base-uncased were not used when initializing BertForQuestionAnswering: ['cls.predictions.bias', 'cls.predictions.decoder.weight', 'cls.seq_relationship.weight', 'cls.seq_relationship.bias', 'cls.predictions.transform.LayerNorm.bias', 'cls.predictions.transform.dense.weight', 'cls.predictions.transform.dense.bias', 'cls.predictions.transform.LayerNorm.weight']
- This IS expected if you are initializing BertForQuestionAnswering from the checkpoint of a model trained on another task or with another architecture (e.g. initializing a BertForSequenceClassification model from a BertForPreTraining model).
- This IS NOT expected if you are initializing BertForQuestionAnswering from the checkpoint of a model that you expect to be exactly identical (initializing a BertForSequenceClassification model from a BertForSequenceClassification model).
[WARNING|modeling_utils.py:1696] 2022-01-28 23:45:16,572 >> Some weights of BertForQuestionAnswering were not initialized from the model checkpoint at bert-base-uncased and are newly initialized: ['qa_outputs.bias', 'qa_outputs.weight']
You should probably TRAIN this model on a down-stream task to be able to use it for predictions and inference.
Running tokenizer on train dataset: 0%| | 0/88 [00:00<?, ?ba/s]01/28/2022 23:45:16 - INFO - datasets.arrow_dataset - Caching processed dataset at /root/.cache/huggingface/datasets/squad/plain_text/1.0.0/d6ec3ceb99ca480ce37cdd35555d6cb2511d223b9150cce08a837ef62ffea453/cache-2d1c8b33ff64daaf.arrow
Running tokenizer on train dataset: 100%|████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 88/88 [00:27<00:00, 3.24ba/s]
Running tokenizer on validation dataset: 0%| | 0/11 [00:00<?, ?ba/s]01/28/2022 23:45:44 - INFO - datasets.arrow_dataset - Caching processed dataset at /root/.cache/huggingface/datasets/squad/plain_text/1.0.0/d6ec3ceb99ca480ce37cdd35555d6cb2511d223b9150cce08a837ef62ffea453/cache-401990ded98120b9.arrow
Running tokenizer on validation dataset: 100%|███████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 11/11 [00:27<00:00, 2.46s/ba]
01/28/2022 23:46:11 - INFO - datasets.utils.file_utils - https://raw.githubusercontent.com/huggingface/datasets/1.18.2/metrics/squad/squad.py not found in cache or force_download set to True, downloading to /root/.cache/huggingface/datasets/downloads/tmphozvyc0y
Downloading: 4.51kB [00:00, 6.12MB/s]
01/28/2022 23:46:11 - INFO - datasets.utils.file_utils - storing https://raw.githubusercontent.com/huggingface/datasets/1.18.2/metrics/squad/squad.py in cache at /root/.cache/huggingface/datasets/downloads/045393e52f825a7b706c615959251e42aec541d1b37158fb7be61cbbbc20d719.ab3a5db6a587c35cfd241275240e52547dd1e093c74b3ee4f7798d9f6c6304ec.py
01/28/2022 23:46:11 - INFO - datasets.utils.file_utils - creating metadata file for /root/.cache/huggingface/datasets/downloads/045393e52f825a7b706c615959251e42aec541d1b37158fb7be61cbbbc20d719.ab3a5db6a587c35cfd241275240e52547dd1e093c74b3ee4f7798d9f6c6304ec.py
01/28/2022 23:46:11 - INFO - datasets.utils.file_utils - https://raw.githubusercontent.com/huggingface/datasets/1.18.2/metrics/squad/evaluate.py not found in cache or force_download set to True, downloading to /root/.cache/huggingface/datasets/downloads/tmp6fnoaik5
Downloading: 3.31kB [00:00, 5.13MB/s]
01/28/2022 23:46:12 - INFO - datasets.utils.file_utils - storing https://raw.githubusercontent.com/huggingface/datasets/1.18.2/metrics/squad/evaluate.py in cache at /root/.cache/huggingface/datasets/downloads/0d7b02f7f3a5192b3530b14825397b3f988ff4b4062efb68456732908a35a909.6f69c3ff9e10aa1cbdc6e91d27e158ea86a785f54a36a9e964ef8b3b78cf3cd6.py
01/28/2022 23:46:12 - INFO - datasets.utils.file_utils - creating metadata file for /root/.cache/huggingface/datasets/downloads/0d7b02f7f3a5192b3530b14825397b3f988ff4b4062efb68456732908a35a909.6f69c3ff9e10aa1cbdc6e91d27e158ea86a785f54a36a9e964ef8b3b78cf3cd6.py
/root/.local/miniconda3/lib/python3.9/site-packages/transformers/optimization.py:306: FutureWarning: This implementation of AdamW is deprecated and will be removed in a future version. Use the PyTorch implementation torch.optim.AdamW instead, or set `no_deprecation_warning=True` to disable this warning
warnings.warn(
[INFO|trainer.py:1252] 2022-01-28 23:46:13,989 >> ***** Running training *****
[INFO|trainer.py:1253] 2022-01-28 23:46:13,989 >> Num examples = 88524
[INFO|trainer.py:1254] 2022-01-28 23:46:13,989 >> Num Epochs = 2
[INFO|trainer.py:1255] 2022-01-28 23:46:13,989 >> Instantaneous batch size per device = 12
[INFO|trainer.py:1256] 2022-01-28 23:46:13,989 >> Total train batch size (w. parallel, distributed & accumulation) = 12
[INFO|trainer.py:1257] 2022-01-28 23:46:13,989 >> Gradient Accumulation steps = 1
[INFO|trainer.py:1258] 2022-01-28 23:46:13,989 >> Total optimization steps = 14754
{'loss': 2.483, 'learning_rate': 2.8983326555510372e-05, 'epoch': 0.07}
3%|█████▎ | 500/14754 [02:21<1:07:16, 3.53it/s][INFO|trainer.py:2103] 2022-01-28 23:48:35,144 >> Saving model checkpoint to /tmp/debug_squad/checkpoint-500
[INFO|configuration_utils.py:430] 2022-01-28 23:48:35,144 >> Configuration saved in /tmp/debug_squad/checkpoint-500/config.json
[INFO|modeling_utils.py:1074] 2022-01-28 23:48:35,626 >> Model weights saved in /tmp/debug_squad/checkpoint-500/pytorch_model.bin
[INFO|tokenization_utils_base.py:2074] 2022-01-28 23:48:35,627 >> tokenizer config file saved in /tmp/debug_squad/checkpoint-500/tokenizer_config.json
[INFO|tokenization_utils_base.py:2080] 2022-01-28 23:48:35,627 >> Special tokens file saved in /tmp/debug_squad/checkpoint-500/special_tokens_map.json
{'loss': 1.498, 'learning_rate': 2.796665311102074e-05, 'epoch': 0.14}
7%|██████████▋ | 1000/14754 [04:44<1:04:54, 3.53it/s][INFO|trainer.py:2103] 2022-01-28 23:50:58,276 >> Saving model checkpoint to /tmp/debug_squad/checkpoint-1000
[INFO|configuration_utils.py:430] 2022-01-28 23:50:58,277 >> Configuration saved in /tmp/debug_squad/checkpoint-1000/config.json
[INFO|modeling_utils.py:1074] 2022-01-28 23:50:58,757 >> Model weights saved in /tmp/debug_squad/checkpoint-1000/pytorch_model.bin
[INFO|tokenization_utils_base.py:2074] 2022-01-28 23:50:58,757 >> tokenizer config file saved in /tmp/debug_squad/checkpoint-1000/tokenizer_config.json
[INFO|tokenization_utils_base.py:2080] 2022-01-28 23:50:58,757 >> Special tokens file saved in /tmp/debug_squad/checkpoint-1000/special_tokens_map.json
{'loss': 1.3703, 'learning_rate': 2.694997966653111e-05, 'epoch': 0.2}
10%|███████████████▉ | 1500/14754 [07:07<1:02:41, 3.52it/s][INFO|trainer.py:2103] 2022-01-28 23:53:21,517 >> Saving model checkpoint to /tmp/debug_squad/checkpoint-1500
[INFO|configuration_utils.py:430] 2022-01-28 23:53:21,518 >> Configuration saved in /tmp/debug_squad/checkpoint-1500/config.json
[INFO|modeling_utils.py:1074] 2022-01-28 23:53:22,005 >> Model weights saved in /tmp/debug_squad/checkpoint-1500/pytorch_model.bin
[INFO|tokenization_utils_base.py:2074] 2022-01-28 23:53:22,006 >> tokenizer config file saved in /tmp/debug_squad/checkpoint-1500/tokenizer_config.json
[INFO|tokenization_utils_base.py:2080] 2022-01-28 23:53:22,006 >> Special tokens file saved in /tmp/debug_squad/checkpoint-1500/special_tokens_map.json
{'loss': 1.3288, 'learning_rate': 2.593330622204148e-05, 'epoch': 0.27}
14%|█████████████████████▎ | 2000/14754 [09:30<1:00:17, 3.53it/s][INFO|trainer.py:2103] 2022-01-28 23:55:44,792 >> Saving model checkpoint to /tmp/debug_squad/checkpoint-2000
[INFO|configuration_utils.py:430] 2022-01-28 23:55:44,793 >> Configuration saved in /tmp/debug_squad/checkpoint-2000/config.json
[INFO|modeling_utils.py:1074] 2022-01-28 23:55:45,272 >> Model weights saved in /tmp/debug_squad/checkpoint-2000/pytorch_model.bin
[INFO|tokenization_utils_base.py:2074] 2022-01-28 23:55:45,272 >> tokenizer config file saved in /tmp/debug_squad/checkpoint-2000/tokenizer_config.json
[INFO|tokenization_utils_base.py:2080] 2022-01-28 23:55:45,273 >> Special tokens file saved in /tmp/debug_squad/checkpoint-2000/special_tokens_map.json
{'loss': 1.2396, 'learning_rate': 2.491663277755185e-05, 'epoch': 0.34}
17%|██████████████████████████▉ | 2500/14754 [11:54<58:01, 3.52it/s][INFO|trainer.py:2103] 2022-01-28 23:58:08,029 >> Saving model checkpoint to /tmp/debug_squad/checkpoint-2500
[INFO|configuration_utils.py:430] 2022-01-28 23:58:08,030 >> Configuration saved in /tmp/debug_squad/checkpoint-2500/config.json
[INFO|modeling_utils.py:1074] 2022-01-28 23:58:08,509 >> Model weights saved in /tmp/debug_squad/checkpoint-2500/pytorch_model.bin
[INFO|tokenization_utils_base.py:2074] 2022-01-28 23:58:08,509 >> tokenizer config file saved in /tmp/debug_squad/checkpoint-2500/tokenizer_config.json
[INFO|tokenization_utils_base.py:2080] 2022-01-28 23:58:08,509 >> Special tokens file saved in /tmp/debug_squad/checkpoint-2500/special_tokens_map.json
{'loss': 1.1863, 'learning_rate': 2.389995933306222e-05, 'epoch': 0.41}
20%|████████████████████████████████▎ | 3000/14754 [14:17<55:36, 3.52it/s][INFO|trainer.py:2103] 2022-01-29 00:00:31,193 >> Saving model checkpoint to /tmp/debug_squad/checkpoint-3000
[INFO|configuration_utils.py:430] 2022-01-29 00:00:31,194 >> Configuration saved in /tmp/debug_squad/checkpoint-3000/config.json
[INFO|modeling_utils.py:1074] 2022-01-29 00:00:31,673 >> Model weights saved in /tmp/debug_squad/checkpoint-3000/pytorch_model.bin
[INFO|tokenization_utils_base.py:2074] 2022-01-29 00:00:31,673 >> tokenizer config file saved in /tmp/debug_squad/checkpoint-3000/tokenizer_config.json
[INFO|tokenization_utils_base.py:2080] 2022-01-29 00:00:31,673 >> Special tokens file saved in /tmp/debug_squad/checkpoint-3000/special_tokens_map.json
{'loss': 1.1847, 'learning_rate': 2.288328588857259e-05, 'epoch': 0.47}
24%|█████████████████████████████████████▋ | 3500/14754 [16:40<53:18, 3.52it/s][INFO|trainer.py:2103] 2022-01-29 00:02:54,398 >> Saving model checkpoint to /tmp/debug_squad/checkpoint-3500
[INFO|configuration_utils.py:430] 2022-01-29 00:02:54,400 >> Configuration saved in /tmp/debug_squad/checkpoint-3500/config.json
[INFO|modeling_utils.py:1074] 2022-01-29 00:02:54,877 >> Model weights saved in /tmp/debug_squad/checkpoint-3500/pytorch_model.bin
[INFO|tokenization_utils_base.py:2074] 2022-01-29 00:02:54,878 >> tokenizer config file saved in /tmp/debug_squad/checkpoint-3500/tokenizer_config.json
[INFO|tokenization_utils_base.py:2080] 2022-01-29 00:02:54,878 >> Special tokens file saved in /tmp/debug_squad/checkpoint-3500/special_tokens_map.json
{'loss': 1.1146, 'learning_rate': 2.1866612444082963e-05, 'epoch': 0.54}
27%|███████████████████████████████████████████ | 4000/14754 [19:03<50:55, 3.52it/s][INFO|trainer.py:2103] 2022-01-29 00:05:17,606 >> Saving model checkpoint to /tmp/debug_squad/checkpoint-4000
[INFO|configuration_utils.py:430] 2022-01-29 00:05:17,607 >> Configuration saved in /tmp/debug_squad/checkpoint-4000/config.json
[INFO|modeling_utils.py:1074] 2022-01-29 00:05:18,087 >> Model weights saved in /tmp/debug_squad/checkpoint-4000/pytorch_model.bin
[INFO|tokenization_utils_base.py:2074] 2022-01-29 00:05:18,087 >> tokenizer config file saved in /tmp/debug_squad/checkpoint-4000/tokenizer_config.json
[INFO|tokenization_utils_base.py:2080] 2022-01-29 00:05:18,087 >> Special tokens file saved in /tmp/debug_squad/checkpoint-4000/special_tokens_map.json
{'loss': 1.0815, 'learning_rate': 2.084993899959333e-05, 'epoch': 0.61}
31%|████████████████████████████████████████████████▍ | 4500/14754 [21:26<48:32, 3.52it/s][INFO|trainer.py:2103] 2022-01-29 00:07:40,784 >> Saving model checkpoint to /tmp/debug_squad/checkpoint-4500
[INFO|configuration_utils.py:430] 2022-01-29 00:07:40,785 >> Configuration saved in /tmp/debug_squad/checkpoint-4500/config.json
[INFO|modeling_utils.py:1074] 2022-01-29 00:07:41,262 >> Model weights saved in /tmp/debug_squad/checkpoint-4500/pytorch_model.bin
[INFO|tokenization_utils_base.py:2074] 2022-01-29 00:07:41,263 >> tokenizer config file saved in /tmp/debug_squad/checkpoint-4500/tokenizer_config.json
[INFO|tokenization_utils_base.py:2080] 2022-01-29 00:07:41,263 >> Special tokens file saved in /tmp/debug_squad/checkpoint-4500/special_tokens_map.json
{'loss': 1.0743, 'learning_rate': 1.9833265555103702e-05, 'epoch': 0.68}
34%|█████████████████████████████████████████████████████▉ | 5000/14754 [23:49<46:05, 3.53it/s][INFO|trainer.py:2103] 2022-01-29 00:10:03,934 >> Saving model checkpoint to /tmp/debug_squad/checkpoint-5000
[INFO|configuration_utils.py:430] 2022-01-29 00:10:03,935 >> Configuration saved in /tmp/debug_squad/checkpoint-5000/config.json
[INFO|modeling_utils.py:1074] 2022-01-29 00:10:04,412 >> Model weights saved in /tmp/debug_squad/checkpoint-5000/pytorch_model.bin
[INFO|tokenization_utils_base.py:2074] 2022-01-29 00:10:04,413 >> tokenizer config file saved in /tmp/debug_squad/checkpoint-5000/tokenizer_config.json
[INFO|tokenization_utils_base.py:2080] 2022-01-29 00:10:04,413 >> Special tokens file saved in /tmp/debug_squad/checkpoint-5000/special_tokens_map.json
{'loss': 1.0973, 'learning_rate': 1.8816592110614073e-05, 'epoch': 0.75}
37%|███████████████████████████████████████████████████████████▎ | 5500/14754 [26:13<43:42, 3.53it/s][INFO|trainer.py:2103] 2022-01-29 00:12:27,110 >> Saving model checkpoint to /tmp/debug_squad/checkpoint-5500
[INFO|configuration_utils.py:430] 2022-01-29 00:12:27,111 >> Configuration saved in /tmp/debug_squad/checkpoint-5500/config.json
[INFO|modeling_utils.py:1074] 2022-01-29 00:12:27,588 >> Model weights saved in /tmp/debug_squad/checkpoint-5500/pytorch_model.bin
[INFO|tokenization_utils_base.py:2074] 2022-01-29 00:12:27,589 >> tokenizer config file saved in /tmp/debug_squad/checkpoint-5500/tokenizer_config.json
[INFO|tokenization_utils_base.py:2080] 2022-01-29 00:12:27,589 >> Special tokens file saved in /tmp/debug_squad/checkpoint-5500/special_tokens_map.json
{'loss': 1.0606, 'learning_rate': 1.779991866612444e-05, 'epoch': 0.81}
41%|████████████████████████████████████████████████████████████████▋ | 6000/14754 [28:36<41:31, 3.51it/s][INFO|trainer.py:2103] 2022-01-29 00:14:50,289 >> Saving model checkpoint to /tmp/debug_squad/checkpoint-6000
[INFO|configuration_utils.py:430] 2022-01-29 00:14:50,290 >> Configuration saved in /tmp/debug_squad/checkpoint-6000/config.json
[INFO|modeling_utils.py:1074] 2022-01-29 00:14:50,768 >> Model weights saved in /tmp/debug_squad/checkpoint-6000/pytorch_model.bin
[INFO|tokenization_utils_base.py:2074] 2022-01-29 00:14:50,769 >> tokenizer config file saved in /tmp/debug_squad/checkpoint-6000/tokenizer_config.json
[INFO|tokenization_utils_base.py:2080] 2022-01-29 00:14:50,769 >> Special tokens file saved in /tmp/debug_squad/checkpoint-6000/special_tokens_map.json
{'loss': 1.0263, 'learning_rate': 1.6783245221634812e-05, 'epoch': 0.88}
44%|██████████████████████████████████████████████████████████████████████ | 6500/14754 [30:59<39:02, 3.52it/s][INFO|trainer.py:2103] 2022-01-29 00:17:13,439 >> Saving model checkpoint to /tmp/debug_squad/checkpoint-6500
[INFO|configuration_utils.py:430] 2022-01-29 00:17:13,440 >> Configuration saved in /tmp/debug_squad/checkpoint-6500/config.json
[INFO|modeling_utils.py:1074] 2022-01-29 00:17:13,919 >> Model weights saved in /tmp/debug_squad/checkpoint-6500/pytorch_model.bin
[INFO|tokenization_utils_base.py:2074] 2022-01-29 00:17:13,919 >> tokenizer config file saved in /tmp/debug_squad/checkpoint-6500/tokenizer_config.json
[INFO|tokenization_utils_base.py:2080] 2022-01-29 00:17:13,919 >> Special tokens file saved in /tmp/debug_squad/checkpoint-6500/special_tokens_map.json
{'loss': 1.0332, 'learning_rate': 1.576657177714518e-05, 'epoch': 0.95}
47%|███████████████████████████████████████████████████████████████████████████▍ | 7000/14754 [33:22<36:36, 3.53it/s][INFO|trainer.py:2103] 2022-01-29 00:19:36,630 >> Saving model checkpoint to /tmp/debug_squad/checkpoint-7000
[INFO|configuration_utils.py:430] 2022-01-29 00:19:36,631 >> Configuration saved in /tmp/debug_squad/checkpoint-7000/config.json
[INFO|modeling_utils.py:1074] 2022-01-29 00:19:37,108 >> Model weights saved in /tmp/debug_squad/checkpoint-7000/pytorch_model.bin
[INFO|tokenization_utils_base.py:2074] 2022-01-29 00:19:37,108 >> tokenizer config file saved in /tmp/debug_squad/checkpoint-7000/tokenizer_config.json
[INFO|tokenization_utils_base.py:2080] 2022-01-29 00:19:37,108 >> Special tokens file saved in /tmp/debug_squad/checkpoint-7000/special_tokens_map.json
{'loss': 0.9457, 'learning_rate': 1.4749898332655551e-05, 'epoch': 1.02}
51%|████████████████████████████████████████████████████████████████████████████████▊ | 7500/14754 [35:45<34:15, 3.53it/s][INFO|trainer.py:2103] 2022-01-29 00:21:59,797 >> Saving model checkpoint to /tmp/debug_squad/checkpoint-7500
[INFO|configuration_utils.py:430] 2022-01-29 00:21:59,798 >> Configuration saved in /tmp/debug_squad/checkpoint-7500/config.json
[INFO|modeling_utils.py:1074] 2022-01-29 00:22:00,278 >> Model weights saved in /tmp/debug_squad/checkpoint-7500/pytorch_model.bin
[INFO|tokenization_utils_base.py:2074] 2022-01-29 00:22:00,279 >> tokenizer config file saved in /tmp/debug_squad/checkpoint-7500/tokenizer_config.json
[INFO|tokenization_utils_base.py:2080] 2022-01-29 00:22:00,279 >> Special tokens file saved in /tmp/debug_squad/checkpoint-7500/special_tokens_map.json
{'loss': 0.7377, 'learning_rate': 1.373322488816592e-05, 'epoch': 1.08}
54%|██████████████████████████████████████████████████████████████████████████████████████▏ | 8000/14754 [38:08<31:47, 3.54it/s][INFO|trainer.py:2103] 2022-01-29 00:24:22,740 >> Saving model checkpoint to /tmp/debug_squad/checkpoint-8000
[INFO|configuration_utils.py:430] 2022-01-29 00:24:22,741 >> Configuration saved in /tmp/debug_squad/checkpoint-8000/config.json
[INFO|modeling_utils.py:1074] 2022-01-29 00:24:23,220 >> Model weights saved in /tmp/debug_squad/checkpoint-8000/pytorch_model.bin
[INFO|tokenization_utils_base.py:2074] 2022-01-29 00:24:23,221 >> tokenizer config file saved in /tmp/debug_squad/checkpoint-8000/tokenizer_config.json
[INFO|tokenization_utils_base.py:2080] 2022-01-29 00:24:23,221 >> Special tokens file saved in /tmp/debug_squad/checkpoint-8000/special_tokens_map.json
{'loss': 0.7174, 'learning_rate': 1.271655144367629e-05, 'epoch': 1.15}
58%|███████████████████████████████████████████████████████████████████████████████████████████▌ | 8500/14754 [40:31<29:29, 3.53it/s][INFO|trainer.py:2103] 2022-01-29 00:26:45,683 >> Saving model checkpoint to /tmp/debug_squad/checkpoint-8500
[INFO|configuration_utils.py:430] 2022-01-29 00:26:45,683 >> Configuration saved in /tmp/debug_squad/checkpoint-8500/config.json
[INFO|modeling_utils.py:1074] 2022-01-29 00:26:46,162 >> Model weights saved in /tmp/debug_squad/checkpoint-8500/pytorch_model.bin
[INFO|tokenization_utils_base.py:2074] 2022-01-29 00:26:46,162 >> tokenizer config file saved in /tmp/debug_squad/checkpoint-8500/tokenizer_config.json
[INFO|tokenization_utils_base.py:2080] 2022-01-29 00:26:46,162 >> Special tokens file saved in /tmp/debug_squad/checkpoint-8500/special_tokens_map.json
{'loss': 0.7188, 'learning_rate': 1.1699877999186661e-05, 'epoch': 1.22}
61%|████████████████████████████████████████████████████████████████████████████████████████████████▉ | 9000/14754 [42:54<27:08, 3.53it/s][INFO|trainer.py:2103] 2022-01-29 00:29:08,648 >> Saving model checkpoint to /tmp/debug_squad/checkpoint-9000
[INFO|configuration_utils.py:430] 2022-01-29 00:29:08,649 >> Configuration saved in /tmp/debug_squad/checkpoint-9000/config.json
[INFO|modeling_utils.py:1074] 2022-01-29 00:29:09,127 >> Model weights saved in /tmp/debug_squad/checkpoint-9000/pytorch_model.bin
[INFO|tokenization_utils_base.py:2074] 2022-01-29 00:29:09,128 >> tokenizer config file saved in /tmp/debug_squad/checkpoint-9000/tokenizer_config.json
[INFO|tokenization_utils_base.py:2080] 2022-01-29 00:29:09,128 >> Special tokens file saved in /tmp/debug_squad/checkpoint-9000/special_tokens_map.json
{'loss': 0.7347, 'learning_rate': 1.0683204554697033e-05, 'epoch': 1.29}
64%|██████████████████████████████████████████████████████████████████████████████████████████████████████▍ | 9500/14754 [45:17<24:48, 3.53it/s][INFO|trainer.py:2103] 2022-01-29 00:31:31,545 >> Saving model checkpoint to /tmp/debug_squad/checkpoint-9500
[INFO|configuration_utils.py:430] 2022-01-29 00:31:31,546 >> Configuration saved in /tmp/debug_squad/checkpoint-9500/config.json
[INFO|modeling_utils.py:1074] 2022-01-29 00:31:32,024 >> Model weights saved in /tmp/debug_squad/checkpoint-9500/pytorch_model.bin
[INFO|tokenization_utils_base.py:2074] 2022-01-29 00:31:32,024 >> tokenizer config file saved in /tmp/debug_squad/checkpoint-9500/tokenizer_config.json
[INFO|tokenization_utils_base.py:2080] 2022-01-29 00:31:32,025 >> Special tokens file saved in /tmp/debug_squad/checkpoint-9500/special_tokens_map.json
{'loss': 0.7144, 'learning_rate': 9.666531110207402e-06, 'epoch': 1.36}
68%|███████████████████████████████████████████████████████████████████████████████████████████████████████████ | 10000/14754 [47:40<22:27, 3.53it/s][INFO|trainer.py:2103] 2022-01-29 00:33:54,470 >> Saving model checkpoint to /tmp/debug_squad/checkpoint-10000
[INFO|configuration_utils.py:430] 2022-01-29 00:33:54,471 >> Configuration saved in /tmp/debug_squad/checkpoint-10000/config.json
[INFO|modeling_utils.py:1074] 2022-01-29 00:33:54,949 >> Model weights saved in /tmp/debug_squad/checkpoint-10000/pytorch_model.bin
[INFO|tokenization_utils_base.py:2074] 2022-01-29 00:33:54,950 >> tokenizer config file saved in /tmp/debug_squad/checkpoint-10000/tokenizer_config.json
[INFO|tokenization_utils_base.py:2080] 2022-01-29 00:33:54,950 >> Special tokens file saved in /tmp/debug_squad/checkpoint-10000/special_tokens_map.json
{'loss': 0.7472, 'learning_rate': 8.649857665717772e-06, 'epoch': 1.42}
71%|████████████████████████████████████████████████████████████████████████████████████████████████████████████████▍ | 10500/14754 [50:03<20:04, 3.53it/s][INFO|trainer.py:2103] 2022-01-29 00:36:17,422 >> Saving model checkpoint to /tmp/debug_squad/checkpoint-10500
[INFO|configuration_utils.py:430] 2022-01-29 00:36:17,423 >> Configuration saved in /tmp/debug_squad/checkpoint-10500/config.json
[INFO|modeling_utils.py:1074] 2022-01-29 00:36:17,901 >> Model weights saved in /tmp/debug_squad/checkpoint-10500/pytorch_model.bin
[INFO|tokenization_utils_base.py:2074] 2022-01-29 00:36:17,902 >> tokenizer config file saved in /tmp/debug_squad/checkpoint-10500/tokenizer_config.json
[INFO|tokenization_utils_base.py:2080] 2022-01-29 00:36:17,902 >> Special tokens file saved in /tmp/debug_squad/checkpoint-10500/special_tokens_map.json
{'loss': 0.6929, 'learning_rate': 7.633184221228141e-06, 'epoch': 1.49}
75%|█████████████████████████████████████████████████████████████████████████████████████████████████████████████████████▊ | 11000/14754 [52:26<17:40, 3.54it/s][INFO|trainer.py:2103] 2022-01-29 00:38:40,348 >> Saving model checkpoint to /tmp/debug_squad/checkpoint-11000
[INFO|configuration_utils.py:430] 2022-01-29 00:38:40,349 >> Configuration saved in /tmp/debug_squad/checkpoint-11000/config.json
[INFO|modeling_utils.py:1074] 2022-01-29 00:38:40,828 >> Model weights saved in /tmp/debug_squad/checkpoint-11000/pytorch_model.bin
[INFO|tokenization_utils_base.py:2074] 2022-01-29 00:38:40,829 >> tokenizer config file saved in /tmp/debug_squad/checkpoint-11000/tokenizer_config.json
[INFO|tokenization_utils_base.py:2080] 2022-01-29 00:38:40,829 >> Special tokens file saved in /tmp/debug_squad/checkpoint-11000/special_tokens_map.json
{'loss': 0.7103, 'learning_rate': 6.616510776738511e-06, 'epoch': 1.56}
78%|███████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████▏ | 11500/14754 [54:49<15:22, 3.53it/s][INFO|trainer.py:2103] 2022-01-29 00:41:03,359 >> Saving model checkpoint to /tmp/debug_squad/checkpoint-11500
[INFO|configuration_utils.py:430] 2022-01-29 00:41:03,360 >> Configuration saved in /tmp/debug_squad/checkpoint-11500/config.json
[INFO|modeling_utils.py:1074] 2022-01-29 00:41:03,839 >> Model weights saved in /tmp/debug_squad/checkpoint-11500/pytorch_model.bin
[INFO|tokenization_utils_base.py:2074] 2022-01-29 00:41:03,840 >> tokenizer config file saved in /tmp/debug_squad/checkpoint-11500/tokenizer_config.json
[INFO|tokenization_utils_base.py:2080] 2022-01-29 00:41:03,840 >> Special tokens file saved in /tmp/debug_squad/checkpoint-11500/special_tokens_map.json
{'loss': 0.7036, 'learning_rate': 5.5998373322488825e-06, 'epoch': 1.63}
81%|████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████▌ | 12000/14754 [57:12<12:59, 3.53it/s][INFO|trainer.py:2103] 2022-01-29 00:43:26,420 >> Saving model checkpoint to /tmp/debug_squad/checkpoint-12000
[INFO|configuration_utils.py:430] 2022-01-29 00:43:26,421 >> Configuration saved in /tmp/debug_squad/checkpoint-12000/config.json
[INFO|modeling_utils.py:1074] 2022-01-29 00:43:26,902 >> Model weights saved in /tmp/debug_squad/checkpoint-12000/pytorch_model.bin
[INFO|tokenization_utils_base.py:2074] 2022-01-29 00:43:26,902 >> tokenizer config file saved in /tmp/debug_squad/checkpoint-12000/tokenizer_config.json
[INFO|tokenization_utils_base.py:2080] 2022-01-29 00:43:26,902 >> Special tokens file saved in /tmp/debug_squad/checkpoint-12000/special_tokens_map.json
{'loss': 0.6791, 'learning_rate': 4.583163887759252e-06, 'epoch': 1.69}
85%|█████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████▊ | 12500/14754 [59:35<10:38, 3.53it/s][INFO|trainer.py:2103] 2022-01-29 00:45:49,585 >> Saving model checkpoint to /tmp/debug_squad/checkpoint-12500
[INFO|configuration_utils.py:430] 2022-01-29 00:45:49,586 >> Configuration saved in /tmp/debug_squad/checkpoint-12500/config.json
[INFO|modeling_utils.py:1074] 2022-01-29 00:45:50,065 >> Model weights saved in /tmp/debug_squad/checkpoint-12500/pytorch_model.bin
[INFO|tokenization_utils_base.py:2074] 2022-01-29 00:45:50,065 >> tokenizer config file saved in /tmp/debug_squad/checkpoint-12500/tokenizer_config.json
[INFO|tokenization_utils_base.py:2080] 2022-01-29 00:45:50,065 >> Special tokens file saved in /tmp/debug_squad/checkpoint-12500/special_tokens_map.json
{'loss': 0.6995, 'learning_rate': 3.566490443269622e-06, 'epoch': 1.76}
88%|█████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████▍ | 13000/14754 [1:01:58<08:15, 3.54it/s][INFO|trainer.py:2103] 2022-01-29 00:48:12,721 >> Saving model checkpoint to /tmp/debug_squad/checkpoint-13000
[INFO|configuration_utils.py:430] 2022-01-29 00:48:12,722 >> Configuration saved in /tmp/debug_squad/checkpoint-13000/config.json
[INFO|modeling_utils.py:1074] 2022-01-29 00:48:13,201 >> Model weights saved in /tmp/debug_squad/checkpoint-13000/pytorch_model.bin
[INFO|tokenization_utils_base.py:2074] 2022-01-29 00:48:13,202 >> tokenizer config file saved in /tmp/debug_squad/checkpoint-13000/tokenizer_config.json
[INFO|tokenization_utils_base.py:2080] 2022-01-29 00:48:13,202 >> Special tokens file saved in /tmp/debug_squad/checkpoint-13000/special_tokens_map.json
{'loss': 0.698, 'learning_rate': 2.549816998779992e-06, 'epoch': 1.83}
92%|██████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████▋ | 13500/14754 [1:04:21<05:56, 3.52it/s][INFO|trainer.py:2103] 2022-01-29 00:50:35,826 >> Saving model checkpoint to /tmp/debug_squad/checkpoint-13500
[INFO|configuration_utils.py:430] 2022-01-29 00:50:35,827 >> Configuration saved in /tmp/debug_squad/checkpoint-13500/config.json
[INFO|modeling_utils.py:1074] 2022-01-29 00:50:36,305 >> Model weights saved in /tmp/debug_squad/checkpoint-13500/pytorch_model.bin
[INFO|tokenization_utils_base.py:2074] 2022-01-29 00:50:36,306 >> tokenizer config file saved in /tmp/debug_squad/checkpoint-13500/tokenizer_config.json
[INFO|tokenization_utils_base.py:2080] 2022-01-29 00:50:36,306 >> Special tokens file saved in /tmp/debug_squad/checkpoint-13500/special_tokens_map.json
{'loss': 0.6899, 'learning_rate': 1.533143554290362e-06, 'epoch': 1.9}
95%|████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████ | 14000/14754 [1:06:44<03:33, 3.53it/s][INFO|trainer.py:2103] 2022-01-29 00:52:58,855 >> Saving model checkpoint to /tmp/debug_squad/checkpoint-14000
[INFO|configuration_utils.py:430] 2022-01-29 00:52:58,856 >> Configuration saved in /tmp/debug_squad/checkpoint-14000/config.json
[INFO|modeling_utils.py:1074] 2022-01-29 00:52:59,337 >> Model weights saved in /tmp/debug_squad/checkpoint-14000/pytorch_model.bin
[INFO|tokenization_utils_base.py:2074] 2022-01-29 00:52:59,337 >> tokenizer config file saved in /tmp/debug_squad/checkpoint-14000/tokenizer_config.json
[INFO|tokenization_utils_base.py:2080] 2022-01-29 00:52:59,337 >> Special tokens file saved in /tmp/debug_squad/checkpoint-14000/special_tokens_map.json
{'loss': 0.6963, 'learning_rate': 5.164701098007319e-07, 'epoch': 1.97}
98%|█████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████▎ | 14500/14754 [1:09:07<01:11, 3.53it/s][INFO|trainer.py:2103] 2022-01-29 00:55:21,815 >> Saving model checkpoint to /tmp/debug_squad/checkpoint-14500
[INFO|configuration_utils.py:430] 2022-01-29 00:55:21,816 >> Configuration saved in /tmp/debug_squad/checkpoint-14500/config.json
[INFO|modeling_utils.py:1074] 2022-01-29 00:55:22,295 >> Model weights saved in /tmp/debug_squad/checkpoint-14500/pytorch_model.bin
[INFO|tokenization_utils_base.py:2074] 2022-01-29 00:55:22,295 >> tokenizer config file saved in /tmp/debug_squad/checkpoint-14500/tokenizer_config.json
[INFO|tokenization_utils_base.py:2080] 2022-01-29 00:55:22,295 >> Special tokens file saved in /tmp/debug_squad/checkpoint-14500/special_tokens_map.json
100%|████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 14754/14754 [1:10:21<00:00, 3.53it/s][INFO|trainer.py:1481] 2022-01-29 00:56:35,107 >>
Training completed. Do not forget to share your model on huggingface.co/models =)
{'train_runtime': 4221.1187, 'train_samples_per_second': 41.943, 'train_steps_per_second': 3.495, 'train_loss': 0.9828958572112595, 'epoch': 2.0}
100%|████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 14754/14754 [1:10:21<00:00, 3.50it/s]
[INFO|trainer.py:2103] 2022-01-29 00:56:35,108 >> Saving model checkpoint to /tmp/debug_squad/
[INFO|configuration_utils.py:430] 2022-01-29 00:56:35,109 >> Configuration saved in /tmp/debug_squad/config.json
[INFO|modeling_utils.py:1074] 2022-01-29 00:56:35,588 >> Model weights saved in /tmp/debug_squad/pytorch_model.bin
[INFO|tokenization_utils_base.py:2074] 2022-01-29 00:56:35,588 >> tokenizer config file saved in /tmp/debug_squad/tokenizer_config.json
[INFO|tokenization_utils_base.py:2080] 2022-01-29 00:56:35,589 >> Special tokens file saved in /tmp/debug_squad/special_tokens_map.json
***** train metrics *****
epoch = 2.0
train_loss = 0.9829
train_runtime = 1:10:21.11
train_samples = 88524
train_samples_per_second = 41.943
train_steps_per_second = 3.495
01/29/2022 00:56:35 - INFO - __main__ - *** Evaluate ***
[INFO|trainer.py:553] 2022-01-29 00:56:35,623 >> The following columns in the evaluation set don't have a corresponding argument in `BertForQuestionAnswering.forward` and have been ignored: offset_mapping, example_id.
[INFO|trainer.py:2353] 2022-01-29 00:56:35,625 >> ***** Running Evaluation *****
[INFO|trainer.py:2355] 2022-01-29 00:56:35,625 >> Num examples = 10784
[INFO|trainer.py:2358] 2022-01-29 00:56:35,625 >> Batch size = 8
100%|████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 1348/1348 [01:38<00:00, 15.27it/s]01/29/2022 00:58:16 - INFO - utils_qa - Post-processing 10570 example predictions split into 10784 features.
10%|███████████████ | 1005/10570 [00:02<00:22, 431.61it/s]
Traceback (most recent call last): | 977/10570 [00:02<00:22, 424.83it/s]
File "/workspace/transformers/examples/pytorch/question-answering/run_qa.py", line 647, in <module>
main()
File "/workspace/transformers/examples/pytorch/question-answering/run_qa.py", line 604, in main
metrics = trainer.evaluate()
File "/workspace/transformers/examples/pytorch/question-answering/trainer_qa.py", line 56, in evaluate
eval_preds = self.post_process_function(eval_examples, eval_dataset, output.predictions)
File "/workspace/transformers/examples/pytorch/question-answering/run_qa.py", line 540, in post_processing_function
predictions = postprocess_qa_predictions(
File "/workspace/transformers/examples/pytorch/question-answering/utils_qa.py", line 152, in postprocess_qa_predictions
"offsets": (offset_mapping[start_index][0], offset_mapping[end_index][1]),
IndexError: list index out of range
100%|████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 1348/1348 [01:43<00:00, 13.04it/s]
```
## Expected behavior
Evaluation ends without an `IndexError`.
 | 01-29-2022 06:11:18 | 01-29-2022 06:11:18 | I was able to reproduce the same issue on another machine with Python 3.9.7 and a different type of GPU.<|||||>I am not able to reproduce the error, so I have made a blanket fix in the PR linked above. If you have a way to debug a little bit more and print the values of:
- `len(offset_mapping)`
- `start_index`
- `end_index`
- `offset_mapping[start_index]`
- `offset_mapping[end_index]`
That would help us find the potential cause of the error.<|||||>Thanks for looking into this issue! I added this to `utils_qa.py` and re-ran the script. This time I only ran one epoch to save time. I suppose that won't make a difference.
```python
# We shouldn't get index errors now because of all the checks above, but for some reason, some
# are still there, see https://github.com/huggingface/transformers/issues/15401#issuecomment-1024845936
try:
    example_offsets = (offset_mapping[start_index][0], offset_mapping[end_index][1])
except IndexError:
    indexerror_count += 1
    print("\n===========================")
    print(f"{indexerror_count=}")
    print(f"{len(offset_mapping)=}")
    print(f"{start_index=}")
    print(f"{end_index=}")
    print(f"{offset_mapping[start_index]=}")
    print(f"{offset_mapping[end_index]=}")
    print("===========================")
    continue
```
The generated log file is very long so I've attached it as a file: [indexerror.log](https://github.com/huggingface/transformers/files/7973386/indexerror.log)
<|||||>BTW this is the dockerfile for the container I use:
```Dockerfile
# PyTorch 1.10.0 uses cudatoolkit 11.3.1
FROM nvidia/cuda:11.3.1-cudnn8-devel-ubuntu20.04
# Basic installs
ARG DEBIAN_FRONTEND=noninteractive
ENV TZ='America/Detroit'
RUN apt-get update -qq \
&& apt-get -y --no-install-recommends install \
build-essential software-properties-common \
zsh wget curl git tar rsync ssh \
&& apt-get clean all \
&& rm -r /var/lib/apt/lists/*
# Install latest cmake
RUN wget https://github.com/Kitware/CMake/releases/download/v3.22.0/cmake-3.22.0-linux-x86_64.tar.gz \
&& tar xzf cmake-* \
&& rsync -a cmake-*/bin /usr/local \
&& rsync -a cmake-*/share /usr/local \
&& rm -r cmake-*
# Install python
ENV PATH="/root/.local/miniconda3/bin:$PATH"
RUN mkdir -p /root/.local \
&& wget https://repo.anaconda.com/miniconda/Miniconda3-latest-Linux-x86_64.sh \
&& mkdir /root/.conda \
&& bash Miniconda3-latest-Linux-x86_64.sh -b -p /root/.local/miniconda3 \
&& rm -f Miniconda3-latest-Linux-x86_64.sh \
&& ln -sf /root/.local/miniconda3/etc/profile.d/conda.sh /etc/profile.d/conda.sh
# Install frequently used python packages and pytorch build dependencies
RUN pip install matplotlib seaborn pandas pyyaml numpy typing_extensions
WORKDIR /workspace
RUN chsh -s /usr/bin/zsh
ENTRYPOINT /usr/bin/zsh
```<|||||>Thanks, that's very helpful. Could you give me your version of the `datasets` library as a last step? It looks like you have `[]`s where you should have `None`s<|||||>I'm using `datasets` 1.18.2. More info:
```console
$ pip freeze
accelerate==0.5.1
aiohttp==3.8.1
aiosignal==1.2.0
async-timeout==4.0.2
attrs==21.4.0
brotlipy==0.7.0
certifi==2021.10.8
cffi @ file:///tmp/build/80754af9/cffi_1625814692085/work
chardet @ file:///tmp/build/80754af9/chardet_1607706775000/work
charset-normalizer==2.0.11
click==8.0.3
conda==4.11.0
conda-package-handling @ file:///tmp/build/80754af9/conda-package-handling_1618262147379/work
cryptography @ file:///tmp/build/80754af9/cryptography_1616767007030/work
cycler==0.11.0
datasets==1.18.2
dill==0.3.4
filelock==3.4.2
fonttools==4.29.0
frozenlist==1.3.0
fsspec==2022.1.0
huggingface-hub==0.4.0
idna @ file:///home/linux1/recipes/ci/idna_1610986105248/work
joblib==1.1.0
kiwisolver==1.3.2
matplotlib==3.5.1
mkl-fft==1.3.1
mkl-random @ file:///tmp/build/80754af9/mkl_random_1626186066731/work
mkl-service==2.4.0
multidict==6.0.2
multiprocess==0.70.12.2
numpy==1.22.1
olefile @ file:///Users/ktietz/demo/mc3/conda-bld/olefile_1629805411829/work
packaging==21.3
pandas==1.4.0
Pillow==9.0.0
pyarrow==6.0.1
pycosat==0.6.3
pycparser @ file:///tmp/build/80754af9/pycparser_1594388511720/work
pyOpenSSL @ file:///tmp/build/80754af9/pyopenssl_1608057966937/work
pyparsing==3.0.7
PySocks @ file:///tmp/build/80754af9/pysocks_1605305812635/work
python-dateutil==2.8.2
pytz==2021.3
PyYAML==6.0
regex==2022.1.18
requests @ file:///tmp/build/80754af9/requests_1608241421344/work
ruamel-yaml-conda @ file:///tmp/build/80754af9/ruamel_yaml_1616016711199/work
sacremoses==0.0.47
scipy==1.7.3
seaborn==0.11.2
six @ file:///tmp/build/80754af9/six_1623709665295/work
tokenizers==0.11.4
torch==1.10.1
torchvision==0.11.2
tqdm==4.62.3
transformers @ file:///workspace/transformers-fix
typing-extensions==4.0.1
urllib3 @ file:///tmp/build/80754af9/urllib3_1625084269274/work
xxhash==2.0.2
yarl==1.7.2
```
`transformers @ file:///workspace/transformers-fix`: `git checkout fix_qa_utils` and then the logging modifications mentioned above.<|||||>The PR linked above should fix the error. There is something going wrong with Datasets here, but I can't fully reproduce, so made a better check in the function that caused the error :-)<|||||>Hi @sgugger this error now re-appears in `postprocess_qa_predictions_with_beam_search`. What I can see is that `postprocess_qa_predictions` is more defensive in checking if the answer is in scope, but `postprocess_qa_predictions_with_beam_search` lacks **some** of the checks.
I am preparing a PR for a cleaned-up version of examples. It will likely fix this too (so don't rush to fix **anything right now**). I guess xlnet is much longer to run compared to bert-base, so not many people test this use case (and don't know it will fail).
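For anyone who hits this before that lands, the missing check is roughly of this shape (a hypothetical sketch with a made-up helper name, not the actual PR code - `offset_mapping`, `start_index` and `end_index` are the same variables as in `utils_qa.py`):
```python
def offsets_in_scope(offset_mapping, start_index, end_index):
    """Return True only if both indices point at usable (start, end) offset pairs."""
    return (
        start_index < len(offset_mapping)
        and end_index < len(offset_mapping)
        and offset_mapping[start_index] is not None
        and len(offset_mapping[start_index]) == 2
        and offset_mapping[end_index] is not None
        and len(offset_mapping[end_index]) == 2
    )


# inside the candidate loop of `postprocess_qa_predictions_with_beam_search`:
# if not offsets_in_scope(offset_mapping, start_index, end_index):
#     continue
```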
<|||||>PyArrow 10.0.0 fixed the bug that introduces `[]` instead of keeping the `None`. A PR is open in `datasets` to integrate the fix: https://github.com/huggingface/datasets/pull/5174 |
transformers | 15,400 | closed | [deepspeed doc] fix import, extra notes | As flagged by https://github.com/huggingface/transformers/issues/15399 the doc had a wrong import instruction so this PR is fixing it. Additionally adding some extra explanatory notes about non-HF Trainer DeepSpeed use.
@sgugger | 01-29-2022 01:30:51 | 01-29-2022 01:30:51 | _The documentation is not available anymore as the PR was closed or merged._ |
transformers | 15,399 | closed | [deepspeed] `bigscience/T0*` multi-gpu inference with ZeRO | ## Environment info
- `transformers` version: 4.17.0.dev0
- Platform: Linux-5.13.0-27-generic-x86_64-with-glibc2.10
- Python version: 3.8.0
- PyTorch version (GPU?): 1.10.1 (True)
- Tensorflow version (GPU?): not installed (NA)
- Flax version (CPU?/GPU?/TPU?): not installed (NA)
- Jax version: not installed
- JaxLib version: not installed
- Using GPU in script?: yes
- Using distributed or parallel set-up in script?: yes (deepspeed)
- Note: I installed DeepSpeed from source
### Who can help
Models:
(I'm actually trying to use T0pp but T5 is close enough)
- T5, BART, Marian, Pegasus, EncoderDecoder: @patrickvonplaten
Library:
- Deepspeed: @stas00
- Text generation: @patrickvonplaten @narsil
## Information
Model I am using (Bert, XLNet ...): T0pp / T0_3B
The problem arises when using:
* [ ] the official example scripts: (give details below)
* [X] my own modified scripts: (give details below)
The tasks I am working on is:
* [ ] an official GLUE/SQUaD task: (give the name)
* [X] my own task or dataset: (give details below)
## To reproduce
I want to load T0pp across 2 24GB GPUs and only run inference. I know DeepSpeed with ZeRO stage 3 is the way to go for this from reading documentation. I am following the HuggingFace example [here](https://huggingface.co/docs/transformers/v4.16.1/en/main_classes/deepspeed#deepspeed-non-trainer-integration) to use Deepspeed without a `Trainer` object.
The error I get is
```
[2022-01-28 18:36:41,193] [INFO] [partition_parameters.py:456:__exit__] finished initializing model with 2.85B parameters
Traceback (most recent call last):
File "multi_gpu_T0pp.py", line 26, in <module>
engine = deepspeed.initialize(model=model, config_params=ds_config)
AttributeError: module 'transformers.deepspeed' has no attribute 'initialize'
```
My code:
Run with `CUDA_VISIBLE_DEVICES="0,1" deepspeed <script.py>`
```python
"""
Example code to load a PyTorch model across GPUs
"""
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM
from transformers.deepspeed import HfDeepSpeedConfig
from transformers import deepspeed
import pandas as pd
import torch
import pdb
import os
seed = 42
torch.manual_seed(seed)
ds_config = {
    "fp16": {
        "enabled": "auto",
        "loss_scale": 0,
        "loss_scale_window": 1000,
        "initial_scale_power": 16,
        "hysteresis": 2,
        "min_loss_scale": 1
    },
    "zero_optimization": {
        "stage": 3,
        "overlap_comm": True,
        "contiguous_gradients": True,
        "sub_group_size": 1e9,
        "reduce_bucket_size": "auto",
        "stage3_prefetch_bucket_size": "auto",
        "stage3_param_persistence_threshold": "auto",
        "stage3_max_live_parameters": 1e9,
        "stage3_max_reuse_distance": 1e9,
        "stage3_gather_fp16_weights_on_model_save": True
    },
    "gradient_accumulation_steps": 1,
    "gradient_clipping": 0,
    "steps_per_print": 2000,
    "train_batch_size": 2,
    "train_micro_batch_size_per_gpu": 1,
    "wall_clock_breakdown": False
}
if __name__ == "__main__":
    # must run before instantiating the model
    # ds_config is deepspeed config object or path to the file
    dschf = HfDeepSpeedConfig(ds_config) # keep this object alive
    model_name = "bigscience/T0_3B"
    tokenizer = AutoTokenizer.from_pretrained(model_name)
    model = AutoModelForSeq2SeqLM.from_pretrained(model_name)
    engine = deepspeed.initialize(model=model, config_params=ds_config)
    inputs = tokenizer.encode(
        "Is this review positive or negative? Review: this is the best cast iron skillet you will ever buy",
        return_tensors="pt")
    outputs = model.generate(inputs)
    print(tokenizer.decode(outputs[0]))
```
## Expected behavior
T0pp (or T0_3B) to load across 2 GPUs, generate an answer, and then quit.
| 01-28-2022 23:54:16 | 01-28-2022 23:54:16 | My apologies, it looks like I wrote wrong instructions for the non-HF Trainer case here: https://huggingface.co/docs/transformers/master/main_classes/deepspeed#nontrainer-deepspeed-integration - is that where you found this code or in another place. I'm asking so that we ensure it's fixed everywhere.
It should be just `import deepspeed` instead of `from transformers import deepspeed` - but let me double check that it all works.
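i.e. only the import lines change, the rest of the snippet stays the same:
```python
from transformers.deepspeed import HfDeepSpeedConfig  # unchanged
import deepspeed  # instead of `from transformers import deepspeed`
```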
<|||||>ok, so indeed the import was wrong. I will fix the doc at https://huggingface.co/docs/transformers/master/main_classes/deepspeed#nontrainer-deepspeed-integration => https://github.com/huggingface/transformers/pull/15400
But where did you take the rest of the code from? it can't possibly work.
You may want to look into using / adapting https://github.com/huggingface/transformers/blob/master/examples/pytorch/text-generation/run_generation.py - I see it doesn't currently support t5 models.
@VictorSanh, you were working on a multi-gpu generation with t0 models, what's the latest incarnation of the code that you were using if you don't mind sharing. I think it was with Deepspeed-Inference, right? Thanks.
Perhaps `examples/pytorch/text-generation/run_generation.py` could be updated to include support for T0 models?
<|||||>> It should be just import deepspeed instead of from transformers import deepspeed - but let me double check that it all works.
That works! Now running into a different issue, figuring out the default config arguments to change.
> My apologies, it looks like I wrote wrong instructions for the non-HF Trainer case here: https://huggingface.co/docs/transformers/master/main_classes/deepspeed#nontrainer-deepspeed-integration - is that where you found this code or in another place. I'm asking so that we ensure it's fixed everywhere.
That was the only place I found that line.
> But where did you take the rest of the code from? it can't possibly work.
Which part of the code can't work? The T0pp/T0_3B is just from the model card: https://huggingface.co/bigscience/T0pp
Updated code:
```python
"""
Example code to load a PyTorch model across GPUs
"""
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM
from transformers.deepspeed import HfDeepSpeedConfig
import deepspeed
import pandas as pd
import torch
import pdb
import os
seed = 42
torch.manual_seed(seed)
if __name__ == "__main__":
    # must run before instantiating the model
    # ds_config is deepspeed config object or path to the file
    ds_config = "ds_config_zero3_gpu.json"
    dschf = HfDeepSpeedConfig(ds_config) # keep this object alive
    model_name = "bigscience/T0_3B"
    tokenizer = AutoTokenizer.from_pretrained(model_name)
    model = AutoModelForSeq2SeqLM.from_pretrained(model_name)
    engine = deepspeed.initialize(model=model, config_params=ds_config)
    inputs = tokenizer.encode(
        "Is this review positive or negative? Review: this is the best cast iron skillet you will ever buy",
        return_tensors="pt")
    outputs = model.generate(inputs)
    print(tokenizer.decode(outputs[0]))
```
I moved the config file outside because I was getting weird errors:
```json
{
"fp16": {
"enabled": true,
"loss_scale": 0,
"loss_scale_window": 1000,
"initial_scale_power": 16,
"hysteresis": 2,
"min_loss_scale": 1
},
"optimizer": {
"type": "AdamW",
"params": {
"lr": "auto",
"betas": "auto",
"eps": "auto",
"weight_decay": "auto"
}
},
"scheduler": {
"type": "WarmupLR",
"params": {
"warmup_min_lr": "auto",
"warmup_max_lr": "auto",
"warmup_num_steps": "auto"
}
},
"zero_optimization": {
"stage": 3,
"overlap_comm": true,
"contiguous_gradients": true,
"sub_group_size": 1e9,
"reduce_bucket_size": 1,
"stage3_prefetch_bucket_size": 1,
"stage3_param_persistence_threshold": 1,
"stage3_max_live_parameters": 1e9,
"stage3_max_reuse_distance": 1e9,
"stage3_gather_fp16_weights_on_model_save": true
},
"gradient_accumulation_steps": 1,
"gradient_clipping": 0,
"steps_per_print": 2000,
"train_batch_size": 1,
"train_micro_batch_size_per_gpu": 1,
"wall_clock_breakdown": false
}
```
New error:
```bash
self._configure_train_batch_size()
File "/home/aadelucia/miniconda3/envs/fda_cersi_tobacco/lib/python3.8/site-packages/deepspeed/runtime/config.py", line 1050, in _configure_train_batch_size
self._batch_assertion()
File "/home/aadelucia/miniconda3/envs/fda_cersi_tobacco/lib/python3.8/site-packages/deepspeed/runtime/config.py", line 997, in _batch_assertion
assert train_batch == micro_batch * grad_acc * self.world_size, (
AssertionError: Check batch related parameters. train_batch_size is not equal to micro_batch_per_gpu * gradient_acc_step * world_size1 != 1 * 1 * 2
[2022-01-28 22:06:24,208] [INFO] [launch.py:178:sigkill_handler] Killing subprocess 643653
```
Now just playing with the arguments... I'm not even training, I just want to run inference.
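(For reference, the assertion is only checking that `train_batch_size == train_micro_batch_size_per_gpu * gradient_accumulation_steps * world_size`, so with 2 GPUs and a micro batch size of 1 it expects 2. A minimal sketch of keeping the three values consistent - assuming `ds_config` is still a Python dict like in my first script - is to derive the total from the launcher's environment instead of hard-coding it:)
```python
import os

world_size = int(os.getenv("WORLD_SIZE", "1"))  # set by the `deepspeed` launcher
micro_batch_per_gpu = 1
grad_acc_steps = 1

ds_config["train_micro_batch_size_per_gpu"] = micro_batch_per_gpu
ds_config["gradient_accumulation_steps"] = grad_acc_steps
ds_config["train_batch_size"] = micro_batch_per_gpu * grad_acc_steps * world_size
```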
<|||||>Looking in the `HfTrainerDeepSpeedConfig` for some argument clues: https://github.com/huggingface/transformers/blob/v4.16.1/src/transformers/deepspeed.py#L206
Every time I change `train_batch_size`, it's still broken. I change it to 2, it says it's supposed to be 1, I change it to 1, it's supposed to be 2!
```bash
self._batch_assertion()
File "/home/aadelucia/miniconda3/envs/fda_cersi_tobacco/lib/python3.8/site-packages/deepspeed/runtime/config.py", line 997, in _batch_assertion
assert train_batch == micro_batch * grad_acc * self.world_size, (
AssertionError: Check batch related parameters. train_batch_size is not equal to micro_batch_per_gpu * gradient_acc_step * world_size2 != 1 * 1 * 1
```<|||||>Inference is a relatively new thing, I think until now most work was done on training, so please bear with us. Lots of tech is being developed as we speak and it's being polished to be super easy and fast.
Until then, let's focus on something that works now.
So this works with gpt2
```
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM, GPT2LMHeadModel
from transformers.deepspeed import HfDeepSpeedConfig
import deepspeed
import os
import torch
local_rank = int(os.getenv('LOCAL_RANK', '0'))
world_size = int(os.getenv('WORLD_SIZE', '1'))
# model_name = "bigscience/T0_3B"
model_name = "gpt2"
tokenizer = AutoTokenizer.from_pretrained(model_name)
#model = AutoModelForSeq2SeqLM.from_pretrained(model_name)
model = GPT2LMHeadModel.from_pretrained(model_name)
ds_config = {
"fp16": {
"enabled": True,
"loss_scale": 0,
"loss_scale_window": 1000,
"initial_scale_power": 16,
"hysteresis": 2,
"min_loss_scale": 1
},
"zero_optimization": {
"stage": 3,
"overlap_comm": True,
"contiguous_gradients": True,
"sub_group_size": 1e9,
"stage3_max_live_parameters": 1e9,
"stage3_max_reuse_distance": 1e9,
"stage3_gather_fp16_weights_on_model_save": True
},
"gradient_accumulation_steps": 1,
"gradient_clipping": 0,
"steps_per_print": 2000,
"train_batch_size": world_size,
"train_micro_batch_size_per_gpu": 1,
"wall_clock_breakdown": False
}
dschf = HfDeepSpeedConfig(ds_config) # keep this object alive
engine = deepspeed.initialize(model=model, config_params=ds_config)
text = "Is this review "
inputs = tokenizer.encode(text, return_tensors="pt").to(device=local_rank)
outputs = model.generate(inputs)
if not torch.distributed.is_initialized() or torch.distributed.get_rank() == 0:
    print(tokenizer.decode(outputs[0]))
```
run:
```
$ deepspeed --num_gpus 2 gpt2.py
[...]
Is this review of the book?
I'm not sure if I'm going to read
```
With a small adjustment it works with t5:
```
from transformers import AutoTokenizer, T5ForConditionalGeneration
from transformers.deepspeed import HfDeepSpeedConfig
import deepspeed
import os
import torch
local_rank = int(os.getenv('LOCAL_RANK', '0'))
world_size = int(os.getenv('WORLD_SIZE', '1'))
model_name = "t5-base"
#model_name = "bigscience/T0_3B"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = T5ForConditionalGeneration.from_pretrained(model_name)
ds_config = {
"fp16": {
"enabled": True,
"loss_scale": 0,
"loss_scale_window": 1000,
"initial_scale_power": 16,
"hysteresis": 2,
"min_loss_scale": 1
},
"zero_optimization": {
"stage": 3,
"overlap_comm": True,
"contiguous_gradients": True,
"sub_group_size": 1e9,
"stage3_max_live_parameters": 1e9,
"stage3_max_reuse_distance": 1e9,
"stage3_gather_fp16_weights_on_model_save": True
},
"gradient_accumulation_steps": 1,
"gradient_clipping": 0,
"steps_per_print": 2000,
"train_batch_size": world_size,
"train_micro_batch_size_per_gpu": 1,
"wall_clock_breakdown": False
}
dschf = HfDeepSpeedConfig(ds_config) # keep this object alive
engine = deepspeed.initialize(model=model, config_params=ds_config)
text = "Is this review positive or negative? Review: this is the best cast iron skillet you will ever buy"
inputs = tokenizer.encode(text, return_tensors="pt").to(device=local_rank)
outputs = model.generate(inputs)
if not torch.distributed.is_initialized() or torch.distributed.get_rank() == 0:
    print(tokenizer.decode(outputs[0]))
```
run:
```
$ deepspeed --num_gpus 2 t0.py
[...]
<pad> True</s>
```
but if I switch the t5 model in the example above to "bigscience/T0_3B" it generates gibberish under Deepspeed but works fine w/o Deepspeed.
```
<pad> d is is is is is is is is is is is is is is is is is
```
This is very puzzling
<|||||>OK, I figured out the culprit - the model breaks when run under fp16! like many other bf16-pretrained models - most t5 models have this issue.
Here are 2 possible solutions:
1. So if you're on Ampere GPU (A100, RTX-30*) you can use `bf16` like so:
```
"bf16": {
"enabled": True,
},
```
(and as of this moment deepspeed@master is needed to use bf16 - they will make a new release any day now)
2. or you have to turn off fp16, like so:
```
"fp16": {
"enabled": False,
},
```
Now you're running in fp32 (more memory).
So this works with either of the 2 fixes from above:
```
from transformers import AutoTokenizer, T5ForConditionalGeneration
from transformers.deepspeed import HfDeepSpeedConfig
import deepspeed
import os
import torch
local_rank = int(os.getenv('LOCAL_RANK', '0'))
world_size = int(os.getenv('WORLD_SIZE', '1'))
model_name = "bigscience/T0_3B"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = T5ForConditionalGeneration.from_pretrained(model_name)
ds_config = {
"fp16": {
"enabled": False,
},
"zero_optimization": {
"stage": 3,
"offload_param": {
"device": "cpu",
"pin_memory": True
},
"overlap_comm": True,
"contiguous_gradients": True,
},
"steps_per_print": 2000,
"train_batch_size": world_size,
"train_micro_batch_size_per_gpu": 1,
"wall_clock_breakdown": False
}
dschf = HfDeepSpeedConfig(ds_config) # keep this object alive
engine = deepspeed.initialize(model=model, config_params=ds_config, optimizer=None, lr_scheduler=None)
text = "Is this review positive or negative? Review: this is the best cast iron skillet you will ever buy"
inputs = tokenizer.encode(text, return_tensors="pt").to(device=local_rank)
outputs = model.generate(inputs)
if not torch.distributed.is_initialized() or torch.distributed.get_rank() == 0:
    print(tokenizer.decode(outputs[0]))
```
run:
```
$ deepspeed --num_gpus 2 t0.py
[...]
<pad> Positive</s>
```
Note that in the code above I pass `optimizer=None, lr_scheduler=None` which saves a huge amount of GPU memory as you don't need those for inference. And also in the ds config I enabled cpu offloading which nicely uses some of your CPU memory to aid with shortages of GPU memory.
The ds config needs more work to become efficient if you plan to use this in production or something where you care for speed. Since you're not using the HF Integration you will have to put the right numbers together, hint:
https://github.com/huggingface/transformers/blob/16d4acbfdb547cb922361ba07a13de12e1503fb8/src/transformers/deepspeed.py#L261-L264
If you don't care to shave a few %s off, then leave the above as is and it'll use the Deepspeed untuned defaults.
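If you do want to plug those numbers in by hand, a rough sketch (reusing `ds_config` and `model_name` from the script above; the formulas mirror what the linked lines do) is:
```python
from transformers import AutoConfig

hidden_size = AutoConfig.from_pretrained(model_name).d_model  # t5-style configs call hidden_size `d_model`

ds_config["zero_optimization"].update(
    {
        "reduce_bucket_size": hidden_size * hidden_size,
        "stage3_prefetch_bucket_size": 0.9 * hidden_size * hidden_size,
        "stage3_param_persistence_threshold": 10 * hidden_size,
    }
)
```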
Please let me know if you're able to see it working for yourself.
If you have any questions please ask.
If all is satisfactory you may close this Issue.
As I said in the previous comment the whole inference experience should get much much better really soon now.<|||||>It's working! Thank you SO MUCH!!! I did have to use all the space-saving tips (bf16 and changing the defaults) because I want the entire model on GPU without off-loading parameters to CPU.
Here is the completed code:
```python
"""
Example code to load a PyTorch model across GPUs
"""
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM
from transformers.deepspeed import HfDeepSpeedConfig
import deepspeed
import torch
import pdb
import os
seed = 42
torch.manual_seed(seed)
local_rank = int(os.getenv('LOCAL_RANK', '0'))
world_size = int(os.getenv('WORLD_SIZE', '1'))
model_hidden_size = 4096 # this is hard-coded to T0pp
ds_config = {
"fp16": {
"enabled": False,
},
"bf16": {
"enabled": True,
},
"zero_optimization": {
"stage": 3,
"overlap_comm": True,
"contiguous_gradients": True,
"reduce_bucket_size": model_hidden_size * model_hidden_size,
"stage3_prefetch_bucket_size": 0.9 * model_hidden_size * model_hidden_size,
"stage3_param_persistence_threshold": 10 * model_hidden_size
},
"steps_per_print": 2000,
"train_batch_size": world_size,
"train_micro_batch_size_per_gpu": 1,
"wall_clock_breakdown": False
}
if __name__ == "__main__":
# must run before instantiating the model
# ds_config is deepspeed config object or path to the file
dschf = HfDeepSpeedConfig(ds_config) # keep this object alive
model_name = "bigscience/T0pp"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForSeq2SeqLM.from_pretrained(model_name)
model.eval()
engine = deepspeed.initialize(model=model, config_params=ds_config, optimizer=None, lr_scheduler=None)
text = "Is this review positive or negative? Review: this is the best cast iron skillet you will ever buy"
inputs = tokenizer.encode(text, return_tensors="pt").to(device=local_rank)
outputs = model.generate(inputs)
if not torch.distributed.is_initialized() or torch.distributed.get_rank() == 0:
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```<|||||>So glad it worked for you, Alexandra. But you can do better.
Currently, with this code each gpu processes the same input, i.e. duplicated effort, but you can do parallel processing at 0 extra cost.
You just need to change the end to something like:
```
rank = torch.distributed.get_rank()
if rank == 0:
text = "Is this review positive or negative? Review: this is the best cast iron skillet you will ever buy"
elif rank == 1:
text = "Is this review positive or negative? Review: this is the worst restaurant ever"
inputs = tokenizer.encode(text, return_tensors="pt").to(device=local_rank)
outputs = model.generate(inputs)
print(f"rank{rank}", tokenizer.decode(outputs[0], skip_special_tokens=True))
```
(the code is untested)
You can of course do that for more than 2 gpus, and each gpu will handle its own unique input.
And of course, you can do batches too if you have enough memory left.
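For example, something along these lines (untested sketch - `texts` is just a made-up stand-in for your own inputs): each rank takes its own slice of the list and runs it as one batch:
```
texts = ["first input ...", "second input ...", "third input ...", "fourth input ..."]
my_texts = texts[rank::world_size]  # rank 0 gets items 0, 2, ... rank 1 gets items 1, 3, ...
model_inputs = tokenizer(my_texts, return_tensors="pt", padding=True).to(device=local_rank)
outputs = model.generate(**model_inputs)
for text_in, out in zip(my_texts, outputs):
    print(f"rank{rank}", text_in, "=>", tokenizer.decode(out, skip_special_tokens=True))
```
Try to keep the number of inputs divisible by the number of gpus so every rank gets the same amount of work.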
Beware the multiple-processes prints tend to interleave - so you can use the following hack to overcome this issue:
https://github.com/stas00/toolbox/blob/master/pytorch/multi-gpu-non-interleaved-print.py
This Issue would be a good example for the DIY Deepspeed integration inference docs.
<|||||>Also please add the distributed init, so that the logging knows to not repeat the same logs for more than 1 gpu:
```
deepspeed.init_distributed()
engine = deepspeed.initialize(model=model, config_params=ds_config, optimizer=None, lr_scheduler=None)
```<|||||>OK, here is a much improved program which also integrates some enhancements from @VictorSanh's [work](https://github.com/bigscience-workshop/t-zero/blob/master/inference/model_offload.py).
This script can now handle both cpu offload and/or multiple gpu.
e.g. I can process "bigscience/T0_3B" on an 8GB GPU no problem.
I added a bunch of notes before and through the code - please let me know if anything is unclear or missing:
```
#!/usr/bin/env python
# This script demonstrates how to use Deepspeed ZeRO in an inference mode when one can't fit a model
# into a single GPU
#
# 1. Use 1 GPU with CPU offload
# 2. Or use multiple GPUs instead
#
# First you need to install deepspeed: pip install deepspeed
#
# Here we use a 3B "bigscience/T0_3B" model which needs about 15GB GPU RAM - so 1 largish or 2
# small GPUs can handle it. or 1 small GPU and a lot of CPU memory.
#
# To use a larger model like "bigscience/T0" which needs about 50GB, unless you have an 80GB GPU -
# you will need 2-4 gpus. And then you can adapt the script to handle more gpus if you want to
# process multiple inputs at once.
#
# The provided deepspeed config also activates CPU memory offloading, so chances are that if you
# have a lot of available CPU memory and you don't mind a slowdown you should be able to load a
# model that doesn't normally fit into a single GPU. If you have enough GPU memory the program will
# run faster if you don't want offload to CPU - so disable that section then.
#
# To deploy on 1 gpu:
#
# deepspeed --num_gpus 1 t0.py
# or:
# python -m torch.distributed.run --nproc_per_node=1 t0.py
#
# To deploy on 2 gpus:
#
# deepspeed --num_gpus 2 t0.py
# or:
# python -m torch.distributed.run --nproc_per_node=2 t0.py
from transformers import AutoTokenizer, AutoConfig, AutoModelForSeq2SeqLM
from transformers.deepspeed import HfDeepSpeedConfig
import deepspeed
import os
import torch
os.environ["TOKENIZERS_PARALLELISM"] = "false" # To avoid warnings about parallelism in tokenizers
# distributed setup
local_rank = int(os.getenv('LOCAL_RANK', '0'))
world_size = int(os.getenv('WORLD_SIZE', '1'))
torch.cuda.set_device(local_rank)
deepspeed.init_distributed()
model_name = "bigscience/T0_3B"
config = AutoConfig.from_pretrained(model_name)
model_hidden_size = config.d_model
# batch size has to be divisible by world_size, but can be bigger than world_size
train_batch_size = 1 * world_size
# ds_config notes
#
# - enable bf16 if you use Ampere or higher GPU - this will run in mixed precision and will be
# faster.
#
# - for older GPUs you can enable fp16, but it'll only work for non-bf16 pretrained models - e.g.
# all official t5 models are bf16-pretrained
#
# - set offload_param.device to "none" or completely remove the `offload_param` section if you don't
# - want CPU offload
#
# - if using `offload_param` you can manually finetune stage3_param_persistence_threshold to control
# - which params should remain on gpus - the larger the value the smaller the offload size
#
# For indepth info on Deepspeed config see
# https://huggingface.co/docs/transformers/master/main_classes/deepspeed
ds_config = {
"fp16": {
"enabled": False,
},
"bf16": {
"enabled": False,
},
"zero_optimization": {
"stage": 3,
"offload_param": {
"device": "cpu",
"pin_memory": True
},
"overlap_comm": True,
"contiguous_gradients": True,
"reduce_bucket_size": model_hidden_size * model_hidden_size,
"stage3_prefetch_bucket_size": 0.9 * model_hidden_size * model_hidden_size,
"stage3_param_persistence_threshold": 10 * model_hidden_size
},
"steps_per_print": 2000,
"train_batch_size": train_batch_size,
"train_micro_batch_size_per_gpu": 1,
"wall_clock_breakdown": False
}
# next line instructs transformers to partition the model directly over multiple gpus using
# deepspeed.zero.Init when model's `from_pretrained` method is called.
#
# **it has to be run before loading the model AutoModelForSeq2SeqLM.from_pretrained(model_name)**
#
# otherwise the model will first be loaded normally and only partitioned at forward time which is
# less efficient and when there is little CPU RAM may fail
dschf = HfDeepSpeedConfig(ds_config) # keep this object alive
# now a model can be loaded.
model = AutoModelForSeq2SeqLM.from_pretrained(model_name)
# we are ready to initialise deepspeed ZeRO now
ds_engine = deepspeed.initialize(model=model,
config_params=ds_config,
model_parameters=None,
optimizer=None,
lr_scheduler=None)[0]
ds_engine.module.eval() # inference
# Deepspeed ZeRO can process unrelated inputs on each GPU. So for 2 gpus you process 2 inputs at once.
# If you use more GPUs adjust for more.
# And of course if you have just one input to process you then need to pass the same string to both gpus
# If you use only one GPU, then you will have only rank 0.
rank = torch.distributed.get_rank()
if rank == 0:
text_in = "Is this review positive or negative? Review: this is the best cast iron skillet you will ever buy"
elif rank == 1:
text_in = "Is this review positive or negative? Review: this is the worst restaurant ever"
tokenizer = AutoTokenizer.from_pretrained(model_name)
inputs = tokenizer.encode(text_in, return_tensors="pt").to(device=local_rank)
with torch.no_grad():
outputs = ds_engine.module.generate(inputs, synced_gpus=True)
text_out = tokenizer.decode(outputs[0], skip_special_tokens=True)
print(f"rank{rank}:\n in={text_in}\n out={text_out}")
```
Let's take it for a run:
```
$ deepspeed --num_gpus 2 t0.py
rank0:
in=Is this review positive or negative? Review: this is the best cast iron skillet you will ever buy
out=Positive
rank1:
in=Is this review positive or negative? Review: this is the worst restaurant ever
out=negative
```
<|||||>I tested the provided script with `bigscience/T0_3B` and it worked as expected. However when I run it with `bigscience/T0pp` I get the following output:
```bash
[2022-01-30 20:09:26,784] [INFO] [launch.py:178:sigkill_handler] Killing subprocess 861020
[2022-01-30 20:09:26,784] [INFO] [launch.py:178:sigkill_handler] Killing subprocess 861021
[2022-01-30 20:09:26,784] [ERROR] [launch.py:184:sigkill_handler] ['/home/aadelucia/miniconda3/envs/fda_cersi_tobacco/bin/python', '-u', 'hf_zero_example.py', '--local_rank=1'] exits with return code = -9
```
I tried with only using `rank 0` (just not doing anything with `rank 1`) and the same thing happened.
And when I change `bf16` to `True`, only `rank 0` runs and then the program hangs. I see it's still on GPU 1 but exits out of GPU 0. Rank 1 never finishes and the process just hangs.
```bash
rank0:
in=Is this review positive or negative? Review: this is the best cast iron skillet you will ever buy
out=Positive
[2022-01-30 18:42:22,779] [INFO] [launch.py:210:main] Process 775166 exits successfully.
```
```bash
Sun Jan 30 19:34:16 2022
+-----------------------------------------------------------------------------+
| NVIDIA-SMI 470.86 Driver Version: 470.86 CUDA Version: 11.4 |
|-------------------------------+----------------------+----------------------+
| GPU Name Persistence-M| Bus-Id Disp.A | Volatile Uncorr. ECC |
| Fan Temp Perf Pwr:Usage/Cap| Memory-Usage | GPU-Util Compute M. |
| | | MIG M. |
|===============================+======================+======================|
| 0 NVIDIA GeForce ... Off | 00000000:01:00.0 Off | N/A |
| 0% 56C P8 24W / 350W | 70MiB / 24260MiB | 0% Default |
| | | N/A |
+-------------------------------+----------------------+----------------------+
| 1 NVIDIA GeForce ... Off | 00000000:21:00.0 Off | N/A |
| 49% 65C P2 155W / 350W | 14083MiB / 24268MiB | 100% Default |
| | | N/A |
+-------------------------------+----------------------+----------------------+
+-----------------------------------------------------------------------------+
| Processes: |
| GPU GI CI PID Type Process name GPU Memory |
| ID ID Usage |
|=============================================================================|
| 0 N/A N/A 1491 G /usr/lib/xorg/Xorg 56MiB |
| 0 N/A N/A 1696 G /usr/bin/gnome-shell 9MiB |
| 1 N/A N/A 1491 G /usr/lib/xorg/Xorg 4MiB |
| 1 N/A N/A 775167 C ..._cersi_tobacco/bin/python 14033MiB |
+-----------------------------------------------------------------------------+
```
And then with `bf16=True` and passing the same input to both GPUs, it works.
```python
rank = torch.distributed.get_rank()
text_in = "Is this review positive or negative? Review: this is the best cast iron skillet you will ever buy"
tokenizer = AutoTokenizer.from_pretrained(model_name)
inputs = tokenizer.encode(text_in, return_tensors="pt").to(device=local_rank)
with torch.no_grad():
outputs = ds_engine.module.generate(inputs)
text_out = tokenizer.decode(outputs[0], skip_special_tokens=True)
if rank == 0:
print(f"rank{rank}:\n in={text_in}\n out={text_out}")
```
```bash
rank0:
in=Is this review positive or negative? Review: this is the best cast iron skillet you will ever buy
out=Positive
[2022-01-30 20:19:47,097] [INFO] [launch.py:210:main] Process 861553 exits successfully.
[2022-01-30 20:19:48,099] [INFO] [launch.py:210:main] Process 861552 exits successfully.
```
Also,
> To use a larger model like "bigscience/T0" which needs about 50GB
Why does T0 need 50GB? The model plus the vocab/config files is ~42GB according to the [card](https://huggingface.co/bigscience/T0pp/tree/main). I have two 24GB GPUs (48GB total) and I was hoping to put the entire model on both GPUs without offloading to CPU, but I seem to run out of space, even with the ZeRO optimizations you suggested. Is there really ~8GB of extras loaded onto the GPU? (I only expect to fit batch size of 1)
---
Summary:
* The parallelism doesn't seem to work with T0
* What is taking up space on the GPUs?
<|||||>Ah right! I keep forgetting this is not the HF Trainer integration, so everything has to be done manually. In the HF Trainer integration it's all done already and you don't need to think about any of this.
Please change the code to:
```
outputs = ds_engine.module.generate(inputs, synced_gpus=True)
```
https://github.com/huggingface/transformers/blob/6915174e68f9a3951b6e00d260812c71f099c602/src/transformers/generation_utils.py#L838
I fixed the example above.
All gpus have to work in sync even if one gpu's output is shorter than the others', which is what may happen when the inputs are different. Without the sync, if one gpu finishes early the whole ensemble hangs, because each gpu holds a shard of the model and the other gpus depend on it - they gather the missing shards in the pre-forward call. So if one gpu stops, the rest can't continue.
when you use the same input, it automatically syncs the gpus because all gpus finish at the same time.
---
> What is taking up space on the GPUs?
We are now talking Inference only:
1. cuda kernels - 1-2GB per gpu
2. and then it also depends on mixed precision vs fp32. In fp32 you have params * 4 bytes = 44GB. For mixed-precision inference I'd actually need to measure, because mixed precision in training costs more memory than fp32 in some parts - 4+2 bytes per param - but then saves memory in other parts of the program path, at the point of the matrix multiply (rough weight-only numbers below).
3. memory taken by activations and temps - these depend primarily on seqlen and batch size hence I gave very rough estimations.
4. additionally the setting for `generate` can make an additional memory demand - depending on the size of the beam search for example
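To put very rough numbers on item 2, for the weights alone (activations, temps and cuda kernels not included) - assuming ~11B params for T0pp:
```
params = 11e9  # ~11B parameters
print(f"fp32 weights: ~{params * 4 / 2**30:.0f} GiB")      # ~41 GiB
print(f"bf16/fp16 weights: ~{params * 2 / 2**30:.0f} GiB")  # ~20 GiB
```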
For training please see:
https://huggingface.co/docs/transformers/performance#anatomy-of-models-memory
<|||||>Thank you, it works with T0pp now!
And thanks for the memory analysis. My confusion was because I kept getting this cryptic error when I ran the provided script without offloading to CPU:
```bash
RuntimeError: CUDA error: CUBLAS_STATUS_NOT_INITIALIZED when calling `cublasCreate(handle)`
[2022-01-31 17:33:28,620] [INFO] [launch.py:178:sigkill_handler] Killing subprocess 883030
[2022-01-31 17:33:28,621] [INFO] [launch.py:178:sigkill_handler] Killing subprocess 883031
[2022-01-31 17:33:28,621] [ERROR] [launch.py:184:sigkill_handler] ['/home/aadelucia/miniconda3/envs/fda_cersi_tobacco/bin/python', '-u', 'hf_zero_example.py', '--local_rank=1'] exits with return code = 1
```
My guess is it is a weird manifestation of an out of memory error, since the program works just fine when I run the program with `CUDA_LAUNCH_BLOCKING=1`. This essentially undoes the parallelism of the process and makes it run serially, correct?
More reading on `CUBLAS_STATUS_NOT_INITIALIZED` which all point to different culprits:
* https://stackoverflow.com/questions/61473330/cuda-error-cublas-status-alloc-failed-when-calling-cublascreatehandle
* https://github.com/huggingface/transformers/issues/6263
* https://github.com/chaitanya100100/TailorNet/issues/27
<|||||>Glad to hear it's not hanging any longer.
`CUDA_LAUNCH_BLOCKING=1` is a useful debugging tool which tells cuda not to run any async operations, because normally when an async operation fails it often lacks context it was called from and thus it's very difficult to debug. So that env var forces blocking operations which slows everything down, but when the same code fails it now tells exactly what happened including full context.
Enabling it however shouldn't impact the overall memory usage, so if everything works when you enable `CUDA_LAUNCH_BLOCKING=1` then something is broken.
How do I reproduce this issue?
<|||||>I adjusted the title of this Issue and re-opened this Issue since we are clearly still working through it.<|||||>I have been trying this @stas00 . I am using 1 A100 GPU, but it's pretty slow, I tried batch size 10, but don't see much difference . Any idea how to improve inference size? it's using only 4285MiB / 40536MiB<|||||>Please give me more context, @tuhinjubcse. Are you trying the script I shared above?
So 40GB A100 - got that part.
for T0_3B disable cpu offloading (instructions in the script) and it will be much faster.
the bigger T0-ones I don't think will fit into a single GPU w/o offload, unless you use full bf16, instead of mixed precision, so basically cast your model and input to `.to(dtype=torch.bfloat16)` and don't use deepspeed at all. e.g. you can do a simple experiment with the examples, so you'd just pass `--bf16_full_eval` - of course it may affect the results somewhat but as it was training in amp/bf16 it should be pretty close. You will also want the latest pytorch, as some earlier versions weren't doing the right thing in full half-precision (no amp) mode.
at bf16 you will only need 22GB for the weights and then some for activations. Let me know how it went.
I don't think Deepspeed can do non-mixed precision. Let me ask if this can be done somehow<|||||>I am using T0pp and yes trying the script shared above. Happy to use multiple GPUs but do you have any idea if that will make it faster?
I didn't get this part. what is the logic of assigning one prompt to rank 0 and another to rank 1
```
if rank == 0:
text_in = "Is this review positive or negative? Review: this is the best cast iron skillet you will ever buy"
elif rank == 1:
text_in = "Is this review positive or negative? Review: this is the worst restaurant ever"
```<|||||>> Happy to use multiple GPUs but do you have any idea if that will make it faster?
disable cpu offload.
> what is the logic of assigning one prompt to rank 0 and another to rank 1
If you get a chance please read https://huggingface.co/docs/transformers/parallelism#zero-data-parallelism - you will hopefully see that each gpu will assemble a full layer and run the inputs as if it had the full model all along.
So when you use 2 gpus, you can process 2 unrelated batches at once.
With 4 gpus, 4, etc.
If you're using a single batch and replicate it to all gpus, they will each calculate an identical output. so your efficiency is 1/n_gpus.
You're asking how to make things run faster. If you have 4 gpus give each gpu a different input and you have 4x speed up! whoah!
The more gpus you use the bigger batch size you can use. So that will give you further speedup. the example above is a demo, so you want to switch to batches and not single inputs.
<|||||>> How do I reproduce this issue?
* enable `bf16`
* do not offload to CPU
* 2 24GB NVIDIA GeForce RTX 3090
* pytorch=1.10.1
* cudatoolkit =11.3.1
Then run the script you provided with T0pp<|||||>I'm trying to think where to find a similar setup, as I only have 1x RTX 3090, I've been trying to buy a 2nd one for a long time and it's just not available to buy :( <|||||>Is there anything I can try in the meantime?<|||||>I tried to reproduce on 2x A100 40GB and there is no problem there. The program completes w/o hanging or errors.<|||||>Not sure what the issue was, but it works now. Thanks @stas00 for checking it out!<|||||>Is there any example for DeepSpeed ZeRO Train without trainer?
I have written a T5 pretraining experiment project following this issue, but an error occurred: https://github.com/microsoft/DeepSpeed/issues/1837
Could anyone help me fix it or give an example?
@stas00 <|||||>If you're not using the integration, then please head to https://github.com/microsoft/DeepSpeed where they have lots of examples, most of which are at https://github.com/microsoft/DeepSpeedExamples
update: oh, I see you have already opened an issue there. Sorry, no idea what that error you run into. This looks like a recently added new feature - I hope someone from the Deepspeed team will address your issue.
<|||||>> If you're not using the integration, then please head to https://github.com/microsoft/DeepSpeed where they have lots of examples, most of which are at https://github.com/microsoft/DeepSpeedExamples
>
> update: oh, I see you have already opened an issue there. Sorry, no idea what that error you run into. This looks like a recently added new feature - I hope someone from the Deepspeed team will address your issue.
Thanks for your reply! To be honest, I think there should be examples for deepspeed model training with transformers but no trainer. Because trainer is not flexible enough for academic innovation (some research partners also explained about it), and using transformers with no trainer integration is different with general deepspeed (for example ,dschf and deepspeed.initialize in transformers' doc is different with deepspeed's doc) but the doc for non-trainer is very primitive,diffcult to build a project by reading it.<|||||>Please let me try to give you a map and perhaps you will have a different view of the primitive documentation.
1. The only time you need [HfDeepSpeedConfig](https://huggingface.co/docs/transformers/master/main_classes/deepspeed#transformers.deepspeed.HfDeepSpeedConfig) is when you're using zero 3 and you want to preload the models during `from_pretrained` directly to gpus or handle various other things that the modeling code does to work with zero3. I will update the docs to make this more clear.
2. Then if you don't want the HF Trainer the rest of HF Transformers is a single line of:
```
model = AutoModel.from_transformers(mname)
```
and then you do what you want with the model with the methods supported by the model. And of course there are other APIs, but I'm talking about the central pieces.
So if you use ZeRO<3, it's one line of code, if you use ZeRO==3 you have at most 2 lines of code.
Everything else is your custom code that has nothing to do with HF Transformers directly if you don't want the HF Trainer.
BTW, there is also https://github.com/huggingface/accelerate that perhaps fits your needs better. And there are many other trainer-loop frameworks designed specifically to cater to very custom needs. There are probably at least 10 of those. Some integrate deepspeed some probably don't.
> and deepspeed.initialize in transformers' doc is different with deepspeed's doc
it's the same `deepspeed.initialize` - when you use HF Trainer the config file is customized to make things simpler for the user, when you're not using the HF Trainer it's just the default `deepspeed.initialize` - it's exactly how it is documented on the deepspeed website.<|||||>> Please let me try to give you a map and perhaps you will have a different view of the primitive documentation.
>
> 1. The only time you need [HfDeepSpeedConfig](https://huggingface.co/docs/transformers/master/main_classes/deepspeed#transformers.deepspeed.HfDeepSpeedConfig) is when you're using zero 3 and you want to preload the models during `from_pretrained` directly to gpus or handle various other things that the modeling code does to work with zero3. I will update the docs to make this more clear.
> 2. Then if you don't want the HF Trainer the rest of HF Transformers is a single line of:
>
> ```
> model = AutoModel.from_transformers(mname)
> ```
>
> and then you do what you want with the model with the methods supported by the model. And of course there are other APIs, but I'm talking about the central pieces.
>
> So if you use ZeRO<3, it's one line of code, if you use ZeRO==3 you have at most 2 lines of code.
>
> Everything else is your custom code that has nothing to do with HF Transformers directly if you don't want the HF Trainer.
>
> BTW, there is also https://github.com/huggingface/accelerate that perhaps fits your needs better. And there are many other trainer-loop frameworks designed specifically to cater to very custom needs. There are probably at least 10 of those. Some integrate deepspeed some probably don't.
>
> > and deepspeed.initialize in transformers' doc is different with deepspeed's doc
>
> it's the same `deepspeed.initialize` - when you use HF Trainer the config file is customized to make things simpler for the user, when you're not using the HF Trainer it's just the default `deepspeed.initialize` - it's exactly how it is documented on the deepspeed website.
I have basically understood. Thanks for your reply! <|||||>Thanks @stas00 for the great example. I'm trying to run T0_3B inference on a single A10 GPU, so I don't need ZeRO here or multi-GPU inference. Using your suggestion to run bf16 inference without deepspeed, I'm casting both the model and inputs to bfloat16, but PyTorch returns `RuntimeError: Expected tensor for argument #1 'indices' to have one of the following scalar types: Long, Int; but got CUDABFloat16Type instead (while checking arguments for embedding)`.
The rough flow is as follows:
```
t0_tokenizer = AutoTokenizer.from_pretrained('bigscience/T0_3B')
t0_model = AutoModelForSeq2SeqLM.from_pretrained('bigscience/T0_3B').to(device='cuda:0', dtype=torch.bfloat16)
_ = t0_model.eval()
inputs = t0_tokenizer.encode(text_input, return_tensors="pt").to(device='cuda:0', dtype=torch.bfloat16)
outputs = t0_model.generate(inputs, min_length=64,
max_length=256,
do_sample=False,
num_beams=2,
early_stopping=True,
no_repeat_ngram_size=3)
answer = t0_tokenizer.decode(outputs[0], skip_special_tokens=True)
```
Your help would be greatly appreciated. I've tried it with casting only the model to bfloat16 but not the inputs -- it runs, but there's no speedup over FP32. GPU utilization is also a lot lower with BF16 model weights vs. FP32.
Hardware: Nvidia A10, 24GB VRAM
Software: Python 3.8, torch==1.11.0, transformers==4.17.0, deepspeed=0.6.1<|||||>@michaelroyzen,
1. As you discovered you shouldn't cast inputs away from its `torch.int64` (`int`). The model will automatically convert the activations to the correct float `dtype` at the moment of embedding lookup. So only the model needs to be cast.
2. By default when you run in fp32, you actually run under tf32 which on [A10](https://www.nvidia.com/en-us/data-center/products/a10-gpu/) is 2x faster than the former. If you want to compare bf16 to fp32 you have to explicitly turn tf32 off. See [this](https://huggingface.co/docs/transformers/performance#tf32).
3. you can load the model directly in bf16 if you're short on memory and want it to load faster - instead of casting it from fp32:
```
AutoModelForSeq2SeqLM.from_pretrained('bigscience/T0_3B', torch_dtype=torch.bfloat16)
```
4. on a small batch usually it's very difficult to notice any speed improvements by tweaking `dtype` as the overhead of all other components dominates the back-to-back path. You can study these reports to get a feeling to when one starts benefiting from mixed or half precision: https://github.com/huggingface/transformers/issues/15026 and this is with training which has about 3x math to do than inference. The inference only has a forward path and thus is likely to require an even bigger batch to have a noticeable impact over a batch size of 1.
You could for example experiment with a smaller model and a larger batch size to compare the different dtypes as they speed things up with a growing batch size.
5. in the future for new discussions it's best to start a new Issue.
<|||||>Thank you for your response @stas00. Yeah, by casting I meant the approach you described in part 3 of your answer -- `AutoModelForSeq2SeqLM.from_pretrained('bigscience/T0_3B', torch_dtype=torch.bfloat16)` is what I ended up using.
And my apologies, next time I'll start a new issue.<|||||>Hello,
I was able to run the code stas00 mentioned above.
Though my task is along the same lines, it is a little more demanding. And I struggle to make the adjustments.
I would appreciate any help.
The description of the task:
1) I have a list of csv documents.
list = [doc1,doc2,doc3,...,docN]
2) Each document contains a dataframe with two columns: dataframe['question'] and dataframe['context']. There are around 25 rows in each dataframe.
3) Without parallelization, I generate the text by using:
```
for element in list:
dataframe = pd.read_csv(element)
for index, row in dataframe.iterrows():
query_and_docs = "question: {} context: {}".format(row['question'], row['context'])
model_input = tokenizer(query_and_docs, padding=True, return_tensors="pt")
generated_answers_encoded = model.generate(input_ids=model_input["input_ids"].to(device),
attention_mask=model_input["attention_mask"].to(device),
min_length=200,
max_length=400,
do_sample=False,
early_stopping=True,
num_beams=8,
temperature=1.0,
top_k=None,
top_p=None,
eos_token_id=tokenizer.eos_token_id,
no_repeat_ngram_size=3,
num_return_sequences=1)
Answer = tokenizer.batch_decode(generated_answers_encoded, skip_special_tokens=True,clean_up_tokenization_spaces=True)[0]
row['answer'] = Answer
```
_____________________________________________________________________
Question:
1) How can I adjust this code for multi-gpu inference?
2) Would the combination of GPUs and CPUs (let's say 40 GPU and 40 CPU) be more efficient than just GPUs (40)?
<|||||>> * How can I adjust this code for multi-gpu inference?
You just get each rank to generate its unique sequence. See the simple example for 2 gpus here:
https://huggingface.co/docs/transformers/main/main_classes/deepspeed#custom-deepspeed-zero-inference
Specifically this part:
```
rank = torch.distributed.get_rank()
if rank == 0:
text_in = "Is this review positive or negative? Review: this is the best cast iron skillet you will ever buy"
elif rank == 1:
text_in = "Is this review positive or negative? Review: this is the worst restaurant ever"
tokenizer = AutoTokenizer.from_pretrained(model_name)
inputs = tokenizer.encode(text_in, return_tensors="pt").to(device=local_rank)
with torch.no_grad():
outputs = ds_engine.module.generate(inputs, synced_gpus=True)
text_out = tokenizer.decode(outputs[0], skip_special_tokens=True)
print(f"rank{rank}:\n in={text_in}\n out={text_out}")
```
If you run that example verbatim, you will see that each rank (gpu) will print its own generated answer.
I hope you can see that your code remains exactly the same. You just make the input different for each rank.
> * Would the combination of GPUs and CPUs (let's say 40 GPU and 40 CPU) be more efficient than just GPUs (40)?
How do you imagine to use both together?
You can use CPU offload if the gpus are too few, but that would make the speed slower. Please see:
https://huggingface.co/docs/transformers/main/main_classes/deepspeed#deployment-with-one-gpu
But overall, no, it won't be more efficient.<|||||>Additionally, Deepspeed has recently released a new product called Deepspeed-Inference which speeds up inference by splitting the processing over multiple GPUs using Tensor Parallelism.
And it can even handle quantized int8 input and thus requiring half the resources at a cost of course of slower execution.
See https://github.com/bigscience-workshop/Megatron-DeepSpeed/tree/main/scripts/bloom-inference-scripts#deepspeed-inference though it's a temporary home - most likely the scripts will move to another location soon.
The demo scripts are written for BLOOM but can be adapted to other models. If you run into problems please ask directly at Deepspeed Issues as this is not my code (just the bloom demos are mine ;).<|||||>>
Thank you, Stas. This is exactly my question.
Let's say I have 40 GPUs but just 24 inputs in one document. Would that mean that I can take advantage only of 24 GPUs at a time? It is particularly an issue since the number of inputs in each document varies. If I assign each input for each GPU using the example you mentioned, I will run into the issue that while processing some documents, I will have many idle GPUs (if I have 40 GPUs and 24 inputs in the particular document).
1) Is it possible to assign not the inputs, but the documents to different GPUs while the model (T0) itself is deployed among all the GPUs?
Or maybe there is a smarter way to iterate the GPUs over the many inputs in many documents.
Thank you for your help with this issue.
<|||||>you need to understand how ZeRO works - all gpus must always work in sync, so you never have idling gpus. I suggest to perhaps read the main paper: https://arxiv.org/abs/1910.02054
you can do a single stream and then all other gpus will process it too, you can send unique streams - so you can have 24 unique streams and the rest will get whatever input and you can ignore the results. but again all gpus have to work in sync, because each gpu carries a unique shard of weights, and other gpus can't continue w/o it.
I think most likely you will want to research Deepspeed-Inference which is faster than ZeRO-Inference and there you don't need to bother with multiple streams, as it always has just a single stream - you just feed it a large batch-size instead and of course you can change its size on every `generate` call.
Deepspeed-Inference also uses custom fused kernels, which makes it super-fast. If some model isn't supported you can ask the Deepspeed team to add the support - it should be pretty quick.
You can see the benchmark results here (albeit for a much larger model - bloom-176b)
https://github.com/bigscience-workshop/Megatron-DeepSpeed/tree/main/scripts/bloom-inference-scripts#bloom-inference-solutions
If you're planning to build a server solution, there are several WIP solutions as well, one is:
https://github.com/bigscience-workshop/Megatron-DeepSpeed/tree/main/scripts/bloom-inference-server<|||||>Thank you, Stas. I will follow your advice and do more reading.<|||||>Hello Stas,
It's me again.
I encountered the following problem. According to my limited understanding deepspeed has to initialize the model first. For this initialization phase, I need to have the CPU memory proportional to the number of GPUs I use for the inference. The faster I want my inference (more GPUs), the more CPU memory I have to have for the initialization of the model.
This is very unfortunate. Since this bottleneck deprives of so many benefits of parallelization. For example, if I have 4 GPUs with 32GB RAM each, I still can't run T0pp if I have just 50 GB of CPU RAM total.
Are there any ways around this issue?<|||||>Yes, there is. You can pre-shard the model weights into small shards.
You're in luck since I already did that for T0pp: https://huggingface.co/bigscience/T0pp/tree/sharded
so all you need to do is `from_pretrained(..., revision="sharded")`
please note that the revision is named `sharded` just because I called it so, there is no standard.
All new models starting from a few months ago are added as sharded into 5-10GB shards by default, the old ones - I sharded many - you can see the status here: https://github.com/huggingface/transformers/issues/16884
And you can further reshard those models into even smaller chunks, now you can have little CPU RAM and concurrently load into many gpus no problem. e.g. to 5GB
```
python -c 'from transformers import AutoModelForSeq2SeqLM; \
model = AutoModelForSeq2SeqLM.from_pretrained("bigscience/T0pp"); \
model.save_pretrained("t0pp-sharded", max_shard_size="5GB")'
```
<|||||>Wonderful news! Thank you very much.
Is there any document/readme which can educate me on how to choose the right parameter choice (5GB, 10GB) and how this would affect the speed of the inference?
Do I understand correctly that according to your calculations in the above-mentioned issue, to run the inference with T0pp would take (5GB (max_shard_size) * #-of-GPUs) CPU RAM?<|||||>The size of the shard is only important at the loading time for many concurrent processes.
The formula would be roughly this:
```
model loading RAM required = n_gpus * shard_size * 2
```
so say 4 gpus and 5gb shard:
```
4 * 5 * 2 = 40
```
so 40GB of additional CPU memory will be needed to load the model.
the 2x is because at some point you have the `state_dict` containing the shard and the model itself, though in the case of deepspeed/zero3 it'll get offloaded to the gpus right away. So in this case it'll be 1x shard + largest submodule weights size as it needs to gather those from all the gpus to update with the loaded weights.
But also look into the very recent solutions which will be even faster than deepspeed zero: https://huggingface.co/blog/bloom-inference-pytorch-scripts - the article is written for BLOOM, but there is no reason why it shouldn't work for t0 models. Definitely for Accelerate as it's model agnostic. For Deepspeed-Inference which uses custom CUDA kernels - I haven't tried - if it doesn't work with the latter please ask at Deepspeed Issue - but these will be much faster solutions - unless you infer different streams for each gpu with deepspeed-zero - please read the article and you should be able to see what would work the best for you.
If you try Deepspeed-Inference w/ T0 please report back success/failure I'd like to know. Thank you!
<|||||>> OK, here is a much improved program which also integrates some enhancements from @VictorSanh's [work](https://github.com/bigscience-workshop/t-zero/blob/master/inference/model_offload.py).
>
> [...]
> ```
@stas00 Hello Stas, I have been experimenting with this code you posted, and I got strange results. I was wondering if you have any thoughts/suggestions for me to improve the code, especially speed-wise.
I ran my code with deepspeed using 4 and 1 V100 32 GB GPUs, respectively. To generate answers for the same exact questions, 1 GPU finishes the job in 70 minutes; using four GPUs takes 112 minutes!!!
I use your code except for the last couple of lines, which I change to:
```
number_cores = 4 #Or 1 depending on number of GPUs I am using
for i in range(int(len(questions_list)/number_cores)):
if rank == 0:
number = number_cores*i
query_and_docs = f"question: {questions_list[number]} context: {context}"
elif rank == 1:
number = number_cores*i+1
query_and_docs = f"question: {questions_list[number]} context: {context}"
elif rank == 2:
number = number_cores*i+2
query_and_docs = f"question: {questions_list[number]} context: {context}"
elif rank == 3:
number = number_cores*i+3
query_and_docs = f"question: {questions_list[number]} context: {context}"
device = 'cuda' if torch.cuda.is_available() else 'cpu'
tokenizer = AutoTokenizer.from_pretrained(model_name)
model_input = tokenizer(query_and_docs, max_length=512, padding='max_length', return_tensors="pt")
with torch.no_grad():
generated_answers_encoded = ds_engine.module.generate(input_ids=model_input["input_ids"].to(device),
attention_mask=model_input["attention_mask"].to(device),
synced_gpus=True,
min_length=20,
max_length=400,
do_sample=False,
early_stopping=True,
num_beams=8,
temperature=1.0,)
Answer = tokenizer.batch_decode(generated_answers_encoded, skip_special_tokens=True, clean_up_tokenization_spaces=True)[0]
answers.append(Answer)
```<|||||>It's not strange at all. When using ZeRO 1 gpu will always be faster if you can fit the model in - this is because of the overhead of comms with multiple gpus which a 1-gpu setup doesn't have.
This is about using the right tool for the right job. ZeRO was written for models that can't be fit into a single GPU. If you can use a single GPU use it ;)
Also the following nuance is very important: a 4-gpu set up in your case generates 4 different outputs and 1 gpu only one, so the effective speed of 4 gpus is 1/4th of 112 minutes, so 28 minutes per gpu. Does it make sense?
And there is a much faster Deepspeed-Inference solution that was released just recently https://huggingface.co/blog/bloom-inference-pytorch-scripts that will indeed make your 4 gpus speed up the inference by much and faster than 1 gpu. The Accelerate solution is also likely to be faster or on par with ZeRO when you feed the latter unique streams. I haven't tried with this particular model to tell for sure.
<|||||>Oh, I see. Although I knew the purpose of Zero is to fit big models, I assumed it would also help with the speed!!
I do not understand the nuance you mentioned, though; I generated the same number of answers in both the 4-GPU and 1-GPU setups. To make it more specific, to answer 60 questions, 4-GPU set up took 112 minutes while 1-GPU set up only took 70 minutes.
I will try to implement the model using the document you mentioned. That's very helpful.<|||||>> I do not understand the nuance you mentioned, though; I generated the same number of answers in both the 4-GPU and 1-GPU setups. To make it more specific, to answer 60 questions, 4-GPU set up took 112 minutes while 1-GPU set up only took 70 minutes.
I wasn't able to derive that this was the case - it's possible I missed that.
If it is as you say it is then the overhead of comms is really big then and 4 gpus are indeed slower even with unique streams.
When you have fast intranode connectivity like NVLink as compared to PCIe usually the comms overhead is lower and then compute dominates and gpus excel at what they do - fast results. when comms are slow then the gpus idle a lot - slow results.
same goes for multiple nodes - one node is fast, more than one node is usually much slower since inter-node networks are usually slower than intra-node - but it's not always the case (e.g. NVSwitch can connect many nodes at almost the same speed as NVLink on one node).
You can also watch your GPU utilization in nvidia-smi while processing in 1 vs 4 gpus - if it's always close to 100% then comms are super fast - if it's jumping between 0 and 100, then the overhead of getting the data around is large. nvidia-smi doesn't show the exact compute util, but it's a good enough indication.
If for example you try NVMe offload gpu will be heavily under-utilized since disc IO will be slow.<|||||>> The size of the shard is only important at the loading time for many concurrent processes.
>
> The formula would be roughly this:
>
> ```
> model loading RAM required = n_gpus * shard_size * 2
> ```
>
> so say 4 gpus and 5gb shard:
>
> ```
> 4 * 5 * 2 = 40
> ```
>
> so 40GB of additional CPU memory will be needed to load the model.
>
> the 2x is because at some point you have the `state_dict` containing the shard and the model itself, though in the case of deepspeed/zero3 it'll get offloaded to the gpus right away. So in this case it'll be 1x shard + largest submodule weights size as it needs to gather those from all the gpus to update with the loaded weights.
>
> But also look into the very recent solutions which will be even faster than deepspeed zero: https://huggingface.co/blog/bloom-inference-pytorch-scripts - the article is written for BLOOM, but there is no reason why it shouldn't work for t0 models. Definitely for Accelerate as it's model agnostic. For Deepspeed-Inference which uses custom CUDA kernels - I haven't tried - if it doesn't work with the latter please ask at Deepspeed Issue - but these will be much faster solutions - unless you infer different streams for each gpu with deepspeed-zero - please read the article and you should be able to see what would work the best for you.
>
> If you try Deepspeed-Inference w/ T0 please report back success/failure I'd like to know. Thank you!
Hello Stas,
Here is my report.
I have been able to run the DeepSpeed inference with T0_3B. I used the code from End-to-End GPT NEO 2.7B Inference tutorial:
https://www.deepspeed.ai/tutorials/inference-tutorial/
The tricky part is that I had to downgrade my version of "transformers" to version 4.21.3. Otherwise, it doesn't work.
I provide code here for convenience:
```import os
import deepspeed
import torch
from transformers import pipeline
local_rank = int(os.getenv('LOCAL_RANK', '0'))
world_size = int(os.getenv('WORLD_SIZE', '1'))
generator = pipeline('text-generation', model='bigscience/T0_3B',
device=local_rank)
generator.model = deepspeed.init_inference(generator.model,
mp_size=world_size,
dtype=torch.float,
replace_method='auto',
replace_with_kernel_inject=True)
string = generator("DeepSpeed is", do_sample=True, min_length=20)
if not torch.distributed.is_initialized() or torch.distributed.get_rank() == 0:
print(string)
```
However, I struggle with integrating "sharded revision" into this code. I assume a pipeline can't be used for it, right?
<|||||>Thank you for sharing the outcome, @archieCanada
I'd recommend opening an Issue at https://github.com/microsoft/DeepSpeed/issues so that it can be fixed on their side.
Besides your repro code please include the traceback of the failure you had with the latest transformers.
I wonder if they need to add tests to ensure.
> However, I struggle with integrating "sharded revision" into this code. I assume a pipeline can't be used for it, right?
I'm not quite sure, I use explicit code rather than pipelines so that I have full control over all parts.
I think you could open a feature request to have `pipeline` support a `revision` argument if it doesn't already - this makes total sense.
To hack around it you could download the revision you want and load it locally using the path to the clone rather than the model name.
<|||||>@stas00 I see that you have mentioned `synced_gpus=True` when using `model.module.generate` but in my case, I am generating the logits using forward method `model.module(input_ids=input_ids, attention_mask=attn_mask, labels=decoder_ids).logits`. This doesn't support `synced_gpus=True`. Currently the process hangs after showing ` Process 36678 exits successfully.` The rank 1 GPU becomes empty while the rank 0 is still running but since the rank 1 GPU no longer has the shard with it that rank 0 needs, could you please tell me if there is any way I could make this work?<|||||>That's the thing about ZeRO - all participating in sharding gpus must always work in sync. If one gpu finished early for some reason it must continue running `forward` until all gpus finish. If it doesn't the other gpus will block waiting for all gpus to send their shards to each other, which leads to hanging.
You can see how I implemented it in `generate` - follow `synced_gpus` code branches - but of course any other way would work as well.
You can also use `py-spy` to see where the gpus hang, but it'll be some `forward` call.<|||||>Hello Stas and community,
I try to implement the sharded version of the T0pp model, but I fail to do so for reasons I don't understand.
Here is my code:
```
import torch
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM
import os
import deepspeed
model_name = "bigscience/T0pp"
device = torch.device('cuda' if torch.cuda.is_available() else 'cpu')
local_rank = int(os.getenv('LOCAL_RANK', '0'))
world_size = int(os.getenv('WORLD_SIZE', '1'))
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForSeq2SeqLM.from_pretrained(model_name)
model.save_pretrained("t0pp-sharded", max_shard_size="5GB")
```
I have the following resources:
2 GPUs (Tesla V100-SXM2-32GB)
60 GB CPU RAM per GPU (120 GB CPU RAM total)
According to my understanding these resources should be enough. But the system crashes already on the line:
`model = AutoModelForSeq2SeqLM.from_pretrained(model_name)`
with "not-enough-memory" Error.
<|||||>most likely you have too little cpu memory, try with `AutoModelForSeq2SeqLM.from_pretrained(model_name, low_cpu_mem_usage=True)`
normally with unsharded model you need 2x model size of CPU memory to load it.
<|||||>Hello Stas,
I probably don't understand everything, because I am missing your point.
Here is my situation:
I **want to use the sharded model** you mentioned earlier to avoid the bottleneck with many GPUs and limited CPU RAM. But I don't understand the algorithm of how to do that.
What I have tried:
1)
`model_name = "bigscience/t0pp-sharded"`
`model = AutoModelForSeq2SeqLM.from_pretrained(model_name)`
2)
`model_name = "t0pp-sharded"`
`model = AutoModelForSeq2SeqLM.from_pretrained(model_name)`
In both cases, I get an Error:
t0pp-sharded is not a local folder and is not a valid model identifier listed on 'https://huggingface.co/models'
Probably this is not the right way of loading the sharded model.
Should I then myself save the model as sharded using the following algorithm?:
1) download the full model using 2 GPUs and 200 GB CPU RAM
2) save the model as sharded using:
`model.save_pretrained("t0pp-sharded", max_shard_size="5GB")`
3) Start a new session with more than 2 GPUs but less CPU RAM (because now I can use the sharded model)
4) Download the sharded model using:
`model_name = "t0pp-sharded"`
`model = AutoModelForSeq2SeqLM.from_pretrained(model_name)`
Please, correct me where I am wrong.
Thank you.
<|||||>I was replying to your comment of getting your program killed.
I think now I perhaps understand what you are struggling with.
there is no such model as ` bigscience/t0pp-sharded`
Here is what you can do:
1. Use the 10GB-shard presharded model:
```
python -c 'from transformers import AutoModelForSeq2SeqLM; \
model = AutoModelForSeq2SeqLM.from_pretrained("bigscience/T0pp", revision="sharded");'
```
if 10GB is too big then try next to make your own sharded model:
2. say 5-GB shards
```
python -c 'from transformers import AutoModelForSeq2SeqLM; \
model = AutoModelForSeq2SeqLM.from_pretrained("bigscience/T0pp"); \
model.save_pretrained("/path/to/t0pp-sharded", max_shard_size="5GB")'
```
this step will require 2x model-size cpu memory and then a bit more. So 100GB of CPU memory should be enough
and then use the resulting model like so:
```
python -c 'from transformers import AutoModelForSeq2SeqLM; \
model = AutoModelForSeq2SeqLM.from_pretrained("/path/to/t0pp-sharded");'
```
3. you can also take the output in `/path/to/t0pp-sharded` and upload it to the hub and then use that model:
python -c 'from transformers import AutoModelForSeq2SeqLM; \
model = AutoModelForSeq2SeqLM.from_pretrained("MYUSERNAME/MYMODELNAME");'
you will of course have to adapt the uppercase name to your situation
None of these needs a GPU.
Please let me know if any of the 3 worked for you<|||||>Hello Stas,
Thank you for your kind help.
I tried method 2) you described above.
Here is my code:
```
#Inputs
questionSet = ['question0', 'question1', 'question2']
context = 'Some context'
#installation of the necessary packages
import torch
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM
import os
import deepspeed
#download the model
model_name_token = "bigscience/T0pp"
model_name = "/path/to/t0pp-sharded"
device = torch.device('cuda' if torch.cuda.is_available() else 'cpu')
local_rank = int(os.getenv('LOCAL_RANK', '0'))
world_size = int(os.getenv('WORLD_SIZE', '1'))
tokenizer = AutoTokenizer.from_pretrained(model_name_token)
model = AutoModelForSeq2SeqLM.from_pretrained(model_name)
model = model.to(device=local_rank)
model = deepspeed.init_inference(model,
mp_size=world_size,
dtype=torch.float,
replace_method='auto',
replace_with_kernel_inject=True)
#generate answers
answerSet = []
for query in questionSet:
query_and_docs = "question: {} context: {}".format(query, context)
model_input = tokenizer(query_and_docs, padding=True, return_tensors="pt")
generated_answers_encoded = model.generate(input_ids=model_input["input_ids"].to(device=local_rank),
attention_mask=model_input["attention_mask"].to(device=local_rank),
min_length=200,
max_length=400,
do_sample=False,
early_stopping=True,
num_beams=8,
temperature=1.0,
top_k=None,
top_p=None,
eos_token_id=tokenizer.eos_token_id,
no_repeat_ngram_size=3,
num_return_sequences=1)
answer = tokenizer.batch_decode(generated_answers_encoded, skip_special_tokens=True,clean_up_tokenization_spaces=True)[0]
answerSet.append(answer)
for i in range(0, len(answerSet)):
    print(answerSet[i])
```
My resources:
I tried two different options:
1) 2 GPUs (Tesla V100-SXM2-32GB), 60 GB CPU RAM per GPU (120 GB CPU RAM total)
2) 3 GPUs (Tesla V100-SXM2-32GB), 60 GB CPU RAM per GPU (180 GB CPU RAM total)
Both times I received the following error:
```
RuntimeError: CUDA out of memory. Tried to allocate 160.00 MiB (GPU 0; 31.75 GiB total capacity; 30.99 GiB already allocated; 118.19 MiB free; 30.99 GiB reserved in total by PyTorch) If reserved memory is >> allocated memory try setting max_split_size_mb to avoid fragmentation. See documentation for Memory Management and PYTORCH_CUDA_ALLOC_CONF
```<|||||>Does it work if you follow this recipe?
https://github.com/huggingface/transformers-bloom-inference/blob/bd8af123275e3f622300ea897a712ff274f5762a/bloom-inference-scripts/bloom-ds-inference.py#L139-L141
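If it helps, the general idea there (fp16 weights and letting `init_inference` handle device placement, rather than calling `model.to(local_rank)` yourself) looks roughly like this. This is only a sketch with placeholder paths, not a verbatim copy of that script:
```python
import os
import torch
import deepspeed
from transformers import AutoModelForSeq2SeqLM

world_size = int(os.getenv("WORLD_SIZE", "1"))

# load the weights directly in fp16, which halves the per-GPU footprint vs torch.float
model = AutoModelForSeq2SeqLM.from_pretrained("/path/to/t0pp-sharded", torch_dtype=torch.float16)
model = deepspeed.init_inference(
    model,
    mp_size=world_size,
    dtype=torch.half,
    replace_with_kernel_inject=True,
)
```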
<|||||>but really as I suggested originally you should post an Issue at https://github.com/microsoft/DeepSpeed/issues and tag RezaYazdaniAminabadi - DeepSpeed-Inference isn't something that is integrated into `transformers` like Deepspeed-ZeRO is. So all problems related to the former should be reported to the Deepspeed project and not here. This thread is about Deepspeed-ZeRO.
Deepspeed is a project that has multiple related frameworks - we have only the ZeRO project integrated into HF Trainer and Accelerate. DeepSpeed-Inference requires no integration. |
transformers | 15,398 | closed | Error with run_seq2seq_qa.py official script (pyarrow.lib.ArrowInvalid: Column 4 named labels expected length 1007 but got length 1000) | ## Environment info
- `transformers` version: 4.17.0.dev0
- Platform: Linux-5.4.144+-x86_64-with-Ubuntu-18.04-bionic
- Python version: 3.7.12
- PyTorch version (GPU?): 1.10.0+cu111 (True)
- Tensorflow version (GPU?): 2.7.0 (True)
- Flax version (CPU?/GPU?/TPU?): not installed (NA)
- Jax version: not installed
- JaxLib version: not installed
- Using GPU in script?: Yes
- Using distributed or parallel set-up in script?: No
### Who can help
pinging @sgugger and @patil-suraj
## Information
Model I am using (Bert, XLNet ...): T5-base
The problem arises when using:
* [x] the official example scripts: (give details below)
* [ ] my own modified scripts: (give details below)
The tasks I am working on is:
* [x] an official GLUE/SQUaD task: SQUaD
* [ ] my own task or dataset: (give details below)
## To reproduce
Steps to reproduce the behavior:
Here is a notebook to reproduce the issue: REMOVED
I have been able to reproduce this issue on Google Colab as well as my local machine.
1. Clone the transformers repo and run the run_seq2seq_qa.py script with the following command:
python examples/pytorch/question-answering/run_seq2seq_qa.py \
--model_name_or_path t5-small \
--dataset_name squad_v2 \
--context_column context \
--question_column question \
--answer_column answers \
--do_train \
--do_eval \
--per_device_train_batch_size 12 \
--learning_rate 3e-5 \
--num_train_epochs 2 \
--max_seq_length 384 \
--doc_stride 128 \
--output_dir /tmp/debug_seq2seq_squad/
The issue arises during the preprocessing step on the training set. Particularly I get the following error when trying to cache the preprocessed dataset:
```
Running tokenizer on train dataset: 0% 0/131 [00:00<?, ?ba/s]01/28/2022 18:26:06 - INFO - datasets.arrow_dataset - Caching processed dataset at /root/.cache/huggingface/datasets/squad_v2/squad_v2/2.0.0/09187c73c1b837c95d9a249cd97c2c3f1cebada06efe667b4427714b27639b1d/cache-61cf0f14d28995e4.arrow
Running tokenizer on train dataset: 100% 131/131 [01:12<00:00, 1.81ba/s]
Running tokenizer on validation dataset: 0% 0/12 [00:00<?, ?ba/s]01/28/2022 18:27:18 - INFO - datasets.arrow_dataset - Caching processed dataset at /root/.cache/huggingface/datasets/squad_v2/squad_v2/2.0.0/09187c73c1b837c95d9a249cd97c2c3f1cebada06efe667b4427714b27639b1d/cache-d54ee58948263e43.arrow
Running tokenizer on validation dataset: 0% 0/12 [00:08<?, ?ba/s]
Traceback (most recent call last):
File "examples/pytorch/question-answering/run_seq2seq_qa.py", line 678, in <module>
main()
File "examples/pytorch/question-answering/run_seq2seq_qa.py", line 522, in main
desc="Running tokenizer on validation dataset",
File "/usr/local/lib/python3.7/dist-packages/datasets/arrow_dataset.py", line 2125, in map
desc=desc,
File "/usr/local/lib/python3.7/dist-packages/datasets/arrow_dataset.py", line 519, in wrapper
out: Union["Dataset", "DatasetDict"] = func(self, *args, **kwargs)
File "/usr/local/lib/python3.7/dist-packages/datasets/arrow_dataset.py", line 486, in wrapper
out: Union["Dataset", "DatasetDict"] = func(self, *args, **kwargs)
File "/usr/local/lib/python3.7/dist-packages/datasets/fingerprint.py", line 413, in wrapper
out = func(self, *args, **kwargs)
File "/usr/local/lib/python3.7/dist-packages/datasets/arrow_dataset.py", line 2503, in _map_single
writer.write_batch(batch)
File "/usr/local/lib/python3.7/dist-packages/datasets/arrow_writer.py", line 500, in write_batch
pa_table = pa.Table.from_arrays(arrays, schema=schema)
File "pyarrow/table.pxi", line 1532, in pyarrow.lib.Table.from_arrays
File "pyarrow/table.pxi", line 1181, in pyarrow.lib.Table.validate
File "pyarrow/error.pxi", line 84, in pyarrow.lib.check_status
pyarrow.lib.ArrowInvalid: Column 4 named labels expected length 1007 but got length 1000
```
## Expected behavior
The preprocessing step uses the `map` function, which runs successfully through the entire dataset, but the issue arises during the caching of the preprocessed dataset. I found that the `map` function preprocesses in batches of 1000 samples, so I am anticipating that one of the batches has different dimensions, which leads to this error.
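If it helps, here is a stripped-down illustration of what I think is going on (this is not the actual preprocessing code, just a hypothetical batched map whose returned columns have different lengths):
```python
from datasets import Dataset

ds = Dataset.from_dict({"text": ["a", "b", "c"]})

def bad_batch(batch):
    # one "example" overflows into an extra feature, but labels is not expanded to match
    return {
        "input_ids": [[1], [2], [3], [4]],  # 4 rows
        "labels": [[5], [6], [7]],          # 3 rows
    }

# should fail with the same kind of pyarrow length-mismatch error while writing the batch
ds.map(bad_batch, batched=True, remove_columns=["text"])
```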
| 01-28-2022 19:22:23 | 01-28-2022 19:22:23 | I have faced the same issue and I hope patil-suraj could fix the problem, because seq2seq models have many applications in QA, especially for open-domain tasks. This would also standardize the evaluation script for both BART and T5, which is helpful for comparing their performance on QA. Just to add one thing: I noticed that this happens only for the validation dataset, not the training dataset, so if you use do_train without do_eval or do_predict the code runs smoothly, which means the problem is in the `preprocess_validation_function(examples)` function. Also, validation datasets tend to have multiple answers for one question, so maybe the problem is related to this.<|||||>> I have faced the same issue and I hope patil-suraj could fix the problem, because seq2seq models have many applications in QA, especially for open-domain tasks. This would also standardize the evaluation script for both BART and T5, which is helpful for comparing their performance on QA. Just to add one thing: I noticed that this happens only for the validation dataset, not the training dataset, so if you use do_train without do_eval or do_predict the code runs smoothly, which means the problem is in the `preprocess_validation_function(examples)` function. Also, validation datasets tend to have multiple answers for one question, so maybe the problem is related to this.
Interesting. Yes I observed similar behavior with the validation set. <|||||>Thank you for reporting the issue, I can reproduce it. Looking into it.<|||||>Hi,
I am facing the same issue. Is there any update?<|||||>Thank you for taking a look @patil-suraj. Also curious if you found a fix for this.<|||||>This is a temporary solution till @patil-suraj fixes the issue. This solution is taken from the T5 squad colab by @patil-suraj https://colab.research.google.com/github/patil-suraj/exploring-T5/blob/master/T5_on_TPU.ipynb#scrollTo=H8AbD1B7TR0k . You can use this notebook also to train seq2seq model with TPU and pytorch xla. I use it to fine-tunein BART on SQuAD and it woks like charm.
However, if you want to use the training script from the Hugging Face Transformers library, first train your model using
run_seq2seq_qa.py with the do_train flag only and save the model in the "out" dir. Then create a file called eval.py and copy this code:
```
from __future__ import print_function
from collections import Counter
import string
import re
import argparse
import json
import sys
import torch
from tqdm import tqdm
import torch
import torch
import datasets
from transformers import AutoTokenizer
tokenizer = AutoTokenizer.from_pretrained("out")
# process the examples in input and target text format and the eos token at the end
def add_eos_to_examples(example):
example['input_text'] = 'question: %s context: %s </s>' % (example['question'], example['context'])
example['target_text'] = '%s </s>' % example['answers']['text'][0]
return example
# tokenize the examples
def convert_to_features(example_batch):
input_encodings = tokenizer.batch_encode_plus(example_batch['input_text'], pad_to_max_length=True, max_length=512)
target_encodings = tokenizer.batch_encode_plus(example_batch['target_text'], pad_to_max_length=True, max_length=16)
encodings = {
'input_ids': input_encodings['input_ids'],
'attention_mask': input_encodings['attention_mask'],
'target_ids': target_encodings['input_ids'],
'target_attention_mask': target_encodings['attention_mask']
}
return encodings
from datasets import load_dataset
# load train and validation split of squad
train_dataset = load_dataset("squad", split="train")
valid_dataset = load_dataset("squad", split="validation")
# map add_eos_to_examples function to the dataset example wise
train_dataset = train_dataset.map(add_eos_to_examples)
# map convert_to_features batch wise
train_dataset = train_dataset.map(convert_to_features, batched=True)
valid_dataset = valid_dataset.map(add_eos_to_examples, load_from_cache_file=False)
valid_dataset = valid_dataset.map(convert_to_features, batched=True, load_from_cache_file=False)
# set the tensor type and the columns which the dataset should return
columns = ['input_ids', 'target_ids', 'attention_mask', 'target_attention_mask']
train_dataset.set_format(type='torch', columns=columns)
valid_dataset.set_format(type='torch', columns=columns)
torch.save(train_dataset, 'train_data.pt')
torch.save(valid_dataset, 'valid_data.pt')
valid_dataset = torch.load('valid_data.pt')
dataloader = torch.utils.data.DataLoader(valid_dataset, batch_size=32)
from transformers import AutoModelForSeq2SeqLM,AutoTokenizer
model = AutoModelForSeq2SeqLM.from_pretrained("out").to('cuda')
tokenizer=AutoTokenizer.from_pretrained("out")
def normalize_answer(s):
"""Lower text and remove punctuation, articles and extra whitespace."""
def remove_articles(text):
return re.sub(r'\b(a|an|the)\b', ' ', text)
def white_space_fix(text):
return ' '.join(text.split())
def remove_punc(text):
exclude = set(string.punctuation)
return ''.join(ch for ch in text if ch not in exclude)
def lower(text):
return text.lower()
return white_space_fix(remove_articles(remove_punc(lower(s))))
def f1_score(prediction, ground_truth):
prediction_tokens = normalize_answer(prediction).split()
ground_truth_tokens = normalize_answer(ground_truth).split()
common = Counter(prediction_tokens) & Counter(ground_truth_tokens)
num_same = sum(common.values())
if num_same == 0:
return 0
precision = 1.0 * num_same / len(prediction_tokens)
recall = 1.0 * num_same / len(ground_truth_tokens)
f1 = (2 * precision * recall) / (precision + recall)
return f1
def exact_match_score(prediction, ground_truth):
return (normalize_answer(prediction) == normalize_answer(ground_truth))
def metric_max_over_ground_truths(metric_fn, prediction, ground_truths):
scores_for_ground_truths = []
for ground_truth in ground_truths:
score = metric_fn(prediction, ground_truth)
scores_for_ground_truths.append(score)
return max(scores_for_ground_truths)
def evaluate(gold_answers, predictions):
f1 = exact_match = total = 0
for ground_truths, prediction in zip(gold_answers, predictions):
total += 1
exact_match += metric_max_over_ground_truths(
exact_match_score, prediction, ground_truths)
f1 += metric_max_over_ground_truths(
f1_score, prediction, ground_truths)
exact_match = 100.0 * exact_match / total
f1 = 100.0 * f1 / total
return {'exact_match': exact_match, 'f1': f1}
answers = []
for batch in tqdm(dataloader):
outs = model.generate(input_ids=batch['input_ids'].to('cuda'),
attention_mask=batch['attention_mask'].to('cuda'),
max_length=32,
early_stopping=True)
outs = [tokenizer.decode(ids,skip_special_tokens=True) for ids in outs]
answers.extend(outs)
predictions = []
references = []
for ref, pred in zip(valid_dataset["answers"], answers):
predictions.append(pred)
references.append(ref['text'])
print(evaluate(references, predictions))
```
<|||||>Thanks @salrowili this seems like a good temporary fix! I would love to see if the script can be fixed still as it provides a much smoother development experience.<|||||>Hi, I am facing the same issue. The temporary fix seems to work, but I'm just wondering if there is any update?<|||||>This error occurs because some columns do not have the same number of examples as the other columns.<|||||>Hi, that happens because tokenizer will return multiple instances if question+context > max_seq_length. Commenting out return_overflowing_tokens and related lines will help to solve this problem.
```
# Validation preprocessing
def preprocess_validation_function(examples):
inputs, targets = preprocess_squad_batch(examples, question_column, context_column, answer_column)
model_inputs = tokenizer(
inputs,
max_length=max_seq_length,
padding=padding,
truncation=True,
return_offsets_mapping=True,
# return_overflowing_tokens=True,
)
# Setup the tokenizer for targets
with tokenizer.as_target_tokenizer():
labels = tokenizer(targets, max_length=max_answer_length, padding=padding, truncation=True)
# Since one example might give us several features if it has a long context, we need a map from a feature to
# its corresponding example. This key gives us just that.
# sample_mapping = model_inputs.pop("overflow_to_sample_mapping")
sample_mapping = list(range(len(model_inputs["input_ids"])))
# For evaluation, we will need to convert our predictions to substrings of the context, so we keep the
# corresponding example_id and we will store the offset mappings.
model_inputs["example_id"] = []
for i in range(len(model_inputs["input_ids"])):
# One example can give several spans, this is the index of the example containing this span of text.
sample_index = sample_mapping[i]
model_inputs["example_id"].append(examples["id"][sample_index])
# If we are padding here, replace all tokenizer.pad_token_id in the labels by -100 when we want to ignore
# padding in the loss.
if padding == "max_length" and data_args.ignore_pad_token_for_loss:
labels["input_ids"] = [
[(l if l != tokenizer.pad_token_id else -100) for l in label] for label in labels["input_ids"]
]
model_inputs["labels"] = labels["input_ids"]
return model_inputs
```
You may also need to adjust the listed command with the following arguments:
```
--answer_column answers \
--eval_accumulation_steps 1 \
--predict_with_generate \
--version_2_with_negative \
```<|||||>Sure this works as in it gets the code to run but then the evaluation results might be lower than the true performance of the model because you may be cutting out parts of the context that do contain the answer and some samples may become unanswerable.<|||||>Hi @patil-suraj , Thanks for looking into the issue. Is there any update?<|||||>I think an update to fix this issue is crucial for seq2seq qa task because I observe that seq2seq models are more likely to overfit so evaluating the model during the training steps will help us definitely find the best checkpoint which may reside in the middle.<|||||>@patil-suraj @LysandreJik Hi, thank you for digging into that issue, is there, by any chance, any update on your end about this bug?<|||||>Sorry to come back to this only now, will push a fix soon (either today or tomorrow)<|||||>@patil-suraj Thank you for this amazing effort to take care of seq2seq models and their applications. it would be great if this issue got fixed.<|||||>Hi @patil-suraj, Thanks for looking into this. Is there any update?<|||||>Hi @sgugger @patil-suraj @LysandreJik ! is there any update on that issue? Thanks in advance :) <|||||>+1 @sgugger @patil-suraj @LysandreJik Would be great if this got fixed! Thanks :)<|||||>> Sure this works as in it gets the code to run but then the evaluation results might be lower than the true performance of the model because you may be cutting out parts of the context that do contain the answer and some samples may become unanswerable.
@anas-awadalla Does the current script handle this problem? Isn't it just mapping each example to its last associated feature? https://github.com/huggingface/transformers/blob/db377a0b3715e1adc492114ec5af58f72bd2172a/examples/pytorch/question-answering/run_seq2seq_qa.py#L580 If I understand anything incorrectly, it would be appreciated if you could point it out.<|||||>@YianZhang Thanks for pointing that out! Yes I think you are right here. It seems like they take the last feature. In that case @chaojiang06's fix is a pretty good starting spot? I would argue that this is still not the ideal way to do this as in the case of squad v1 all questions should be answerable. Do you have an intuition why they chose to do this?
Also @patil-suraj this has become a fairly popular issue! Would love if you can give us an update or ways we can help. I am happy to implement a fix if you can provide some guidence.<|||||>Hi @anas-awadalla , if you want to fix the problem, you can refer to the [preprocessing](https://github.com/huggingface/transformers/blob/a59eb349c5616c1b48ae9225028fb41ec1feb6aa/examples/pytorch/question-answering/run_qa.py#L451) function in the original BERT-QA code. Basically, if a sample is too long, it will be split into multiple instances, and finally, need to do a merge. <|||||>Thanks for the reference @chaojiang06 do you mind explaining what you mean by merge? I get that samples should be split into smaller instances when they are too long but what I am confused about is handling samples where the answer is not found in the truncated context (in the case of extractive qa). Maybe we need to add something like [this](https://github.com/huggingface/transformers/blob/db377a0b3715e1adc492114ec5af58f72bd2172a/examples/pytorch/question-answering/run_qa_beam_search.py#L420)<|||||>Hi, for example, a test example is split into 3 instances, and the model makes predictions for these 3 instances. You will need to merge the predictions from these 3 instances and rank them by the probability. You can refer to the [code](https://github.com/huggingface/transformers/blob/db377a0b3715e1adc492114ec5af58f72bd2172a/examples/pytorch/question-answering/utils_qa.py#L108) here. <|||||>I totally agree with others that we need to have this code fixed because I notice that seq2seq models (e.g. BART and T5) are easy to be overfitted with more fine-tuning training unlike other Transformers models (e.g. ELECTRA, BERT, and ALBERT). That means the best epoch or checkpoint may reside in the middle and to catch that checkpoint when need an evaluation code within this fine-tuning code to evaluate the model at the end of each epoch or x steps.
I have also noticed that this code works very well on TPU XLA but i am wondering what "--per_device_train_batch_size" represents in this case? is it the total batch after on all cores on TPUv3-8 or in the single-core? with other pytorch codes it represent per core so i am assuming its the same?<|||||>@chaojiang06 Hi Chaojiang, I tried the solution you posted to comment out return_overflowing_tokens and related lines. It did get rid of the preprocessing error, but during evaluation a `CUDA out of memory` error was raised. Did you encounter this? Do you have a fix for this? Thanks!<|||||>> @chaojiang06 Hi Chaojiang, I tried the solution you posted to comment out return_overflowing_tokens and related lines. It did get rid of the preprocessing error, but during evaluation a `CUDA out of memory` error was raised. Did you encounter this? Do you have a fix for this? Thanks!
This probably is not related to the preprocessing? I would think it depends on your specific gpu memory, model size, and batch size. You can try to play around with a smaller model (T5-small?) or lowering your batch size.<|||||>@anas-awadalla I used a V100 GPU and the exact command provided by huggingface: ```
python run_seq2seq_qa.py \
--model_name_or_path t5-small \
--dataset_name squad_v2 \
--context_column context \
--question_column question \
--answer_column answer \
--do_train \
--do_eval \
--per_device_train_batch_size 12 \
--learning_rate 3e-5 \
--num_train_epochs 2 \
--max_seq_length 384 \
--doc_stride 128 \
--output_dir /tmp/debug_seq2seq_squad/```
The only changes I made were those proposed by @chaojiang06. I also tried smaller batch_sizes and it didn't help, so I think this is less of a gpu memory/batch size issue. Someone else also encountered this issue in [#17089](https://github.com/huggingface/transformers/issues/17089). @anas-awadalla So you don't have this issue?<|||||>@YianZhang , you may consider these arguments
--answer_column answers \
--eval_accumulation_steps 1 \
--predict_with_generate \
--version_2_with_negative \
--eval_accumulation_steps will solve your problem.<|||||>> @YianZhang , you may consider these arguments
>
> --answer_column answers --eval_accumulation_steps 1 --predict_with_generate --version_2_with_negative \
>
> --eval_accumulation_steps will solve your problem.
@chaojiang06 That works. Big thanks!<|||||>@anas-awadalla Assuming GPU memory is not a bottleneck, another solution is to set a large enough max_seq_length (like 900) and replace `max_seq_length = min(data_args.max_seq_length, tokenizer.model_max_length)` with `max_seq_length = data_args.max_seq_length`. Compared with merging, this is simpler to implement. Compared with @chaojiang06's solution, this solution allows the model to see the complete input. <|||||>@YianZhang isn’t the max sequence length T5 can handle 512? I believe something like BART can handle longer sequences but for my purposes I need to use T5.<|||||>@anas-awadalla T5 does not have a hard limit on sequence length. As long as you are not restricted by GPU memory or compute speed, you can feed in sequences longer than 512 as input.<|||||>Are there any updates on fixing the evaluation script? The problem with both SQuAD and TyDiQA is that it has an evaluation dataset only which allows us to train our models and evaluate at each specific step and report our numbers in the paper. This mean with a working evaluation script number could get better since the best checkpoint may reside in the middle.<|||||>> Hi, that happens because tokenizer will return multiple instances if question+context > max_seq_length. Commenting out return_overflowing_tokens and related lines will help to solve this problem.
>
> ```
> # Validation preprocessing
> def preprocess_validation_function(examples):
> inputs, targets = preprocess_squad_batch(examples, question_column, context_column, answer_column)
>
> model_inputs = tokenizer(
> inputs,
> max_length=max_seq_length,
> padding=padding,
> truncation=True,
> return_offsets_mapping=True,
> # return_overflowing_tokens=True,
> )
>
> # Setup the tokenizer for targets
> with tokenizer.as_target_tokenizer():
> labels = tokenizer(targets, max_length=max_answer_length, padding=padding, truncation=True)
>
> # Since one example might give us several features if it has a long context, we need a map from a feature to
> # its corresponding example. This key gives us just that.
> # sample_mapping = model_inputs.pop("overflow_to_sample_mapping")
> sample_mapping = list(range(len(model_inputs["input_ids"])))
>
> # For evaluation, we will need to convert our predictions to substrings of the context, so we keep the
> # corresponding example_id and we will store the offset mappings.
> model_inputs["example_id"] = []
>
> for i in range(len(model_inputs["input_ids"])):
> # One example can give several spans, this is the index of the example containing this span of text.
> sample_index = sample_mapping[i]
> model_inputs["example_id"].append(examples["id"][sample_index])
>
> # If we are padding here, replace all tokenizer.pad_token_id in the labels by -100 when we want to ignore
> # padding in the loss.
> if padding == "max_length" and data_args.ignore_pad_token_for_loss:
> labels["input_ids"] = [
> [(l if l != tokenizer.pad_token_id else -100) for l in label] for label in labels["input_ids"]
> ]
>
> model_inputs["labels"] = labels["input_ids"]
>
> return model_inputs
> ```
>
> You may also need to optionally adjust the listed command w/ about the following arguments:
>
> ```
> --answer_column answers \
> --eval_accumulation_steps 1 \
> --predict_with_generate \
> --version_2_with_negative \
> ```
I would stress that we have to do the following updates:
```
model_inputs = tokenizer(
inputs,
max_length=max_seq_length,
padding=padding,
truncation=True,
return_offsets_mapping=True,
#### HERE - comment out this line
# return_overflowing_tokens=True,
####
)
```
And:
```
#### HERE replace the first line with the second
# sample_mapping = model_inputs.pop("overflow_to_sample_mapping")
sample_mapping = list(range(len(model_inputs["input_ids"])))
####
```
I have provided only the second update and received an error.<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored.<|||||>Any update?<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored.<|||||>What I realised, mentioned above as well, is that the instances that go beyond the max_seq_length cause the error. This happens only in the validation and test setting since the function (`preprocess_validation_function`) tries to predict the label for the truncated part and the remaining part. In the training setting, there is no error cause the `preprocess_function` only uses the truncated part and ignores the remaining part. FYI there are different ways in literature to tackle this, apart from using the longformer. I remember one study said that is better to remove the mid bit (usually less informative), anyhow I am digressing.
I found the error in the `preprocess_validation_function`, which is quite straightforward. I just adjusted the labels to not only include the truncated part but also the labels of the remaining instances, that's all. Here is the code:
# Validation preprocessing
def preprocess_validation_function(examples):
inputs, targets = preprocess_squad_batch(examples, question_column, context_column, answer_column)
model_inputs = tokenizer(
inputs,
max_length=max_seq_length,
padding=padding,
truncation=True,
return_overflowing_tokens=True,
return_offsets_mapping=True,
)
# Tokenize targets with the `text_target` keyword argument
labels = tokenizer(text_target=targets, max_length=max_answer_length, padding=padding, truncation=True)
# If we are padding here, replace all tokenizer.pad_token_id in the labels by -100 when we want to ignore
# padding in the loss.
if padding == "max_length" and data_args.ignore_pad_token_for_loss:
labels["input_ids"] = [
[(l if l != tokenizer.pad_token_id else -100) for l in label] for label in labels["input_ids"]
]
# Since one example might give us several features if it has a long context, we need a map from a feature to
# its corresponding example. This key gives us just that.
sample_mapping = model_inputs.pop("overflow_to_sample_mapping")
# For evaluation, we will need to convert our predictions to substrings of the context, so we keep the
# corresponding example_id and we will store the offset mappings.
model_inputs["example_id"] = []
# Augment the overflowing tokens to the labels
labels_out = []
for i in range(len(model_inputs["input_ids"])):
# One example can give several spans, this is the index of the example containing this span of text.
sample_index = sample_mapping[i]
model_inputs["example_id"].append(examples["id"][sample_index])
labels_out.append(labels["input_ids"][sample_index])
# model_inputs["labels"] = labels["input_ids"]
model_inputs["labels"] = labels_out
return model_inputs<|||||>@FilipposVentirozos Would you like to suggest a PR with your fix? |
transformers | 15,397 | open | [activations] pytorch-1.11+ Tanh Gelu Approximation | # 🚀 Feature request
As kindly flagged by @vadimkantorov pt-1.11 will have a fast Tanh Gelu Approximation as implemented here https://github.com/pytorch/pytorch/pull/61439 so we could replace our manual implementation with the fast one when pt>=1.11 is detected.
for additional context please see this thread: https://github.com/pytorch/pytorch/issues/39853 | 01-28-2022 17:57:58 | 01-28-2022 17:57:58 | Hi @stas00, may I take a look into this? Below is a list of to-do's I can think of, but of course, there could be more.
1. Compare output diffs: Using the `torch` version should not affect model output.
2. Review affected models/code: The upstream PR seems to be a drop-in replacement for HF transformers' `gelu_new` (replaced with `F.gelu(x, approximate="tanh")`), but more investigation might be needed to see if other `gelu` occurrences in this repo can be replaced.
3. Performance benchmarks: Evaluate the speedup gains from using the new implementation.
What do you think?<|||||>By all means, @jaketae - thank you!
The most important thing is numerical backward compatibility. Since `transformers` is used for research, we must not change the outputs even by a little bit, and any new changes that do change the outcomes need to be explicitly enabled. At least that's my understanding of the "policy" - I don't know, though, how to quantify how much of a change is ok. When I proposed to add torch jit to our activation functions, which makes them more correct since they are then accumulated in fp32, I was told to create new functions instead. Hope that gives you enough forewarning of what will be accepted and what not.
<|||||>I definitely agree numerical BC is key here. I think we can have extensive tests using (1) random tensors as input, and (2) within the context of model forward and backward. I assume we'll also have to check for device and dtypes.
Should this issue be tabled until PyTorch 1.11 is released? IIRC the current stable is 1.10.2. Alternatively, I could use the nightly build to get started.<|||||>It's your call, you can wait till pt-1.11 is released - probably in a month or so, or you can start with nightly get everything ready and merge it once it's released.<|||||>PyTorch 1.11 is out! I could maybe get started on this if you haven't already @stas00?<|||||>go for it, Jaesung!<|||||>Upon more investigation, I realized 1.11 didn't ship with hyperbolic tangent GELU. I'll use the nightly instead. <|||||>Apologies for the delay @stas00. Here is a quick update of where I am at the moment.
**1. Understanding PyTorch's GELU Approximation**
The implementation can be found [here](https://github.com/pytorch/pytorch/blob/4917f8ce0ae086b92d2385f1d696b93e03fa6221/aten/src/ATen/native/cpu/Activation.cpp#L190-L206):
```cpp
if (approximate == GeluType::Tanh) {
AT_DISPATCH_FLOATING_TYPES_AND(
ScalarType::BFloat16, it.dtype(), "GeluKernelImpl", [&]() {
using Vec = vec::Vectorized<scalar_t>;
const Vec kBetaVec(scalar_t(M_SQRT2 * M_2_SQRTPI * 0.5));
const Vec kKappaVec(scalar_t(0.044715));
const Vec kOneVec(scalar_t(1));
const Vec kPointFiveVec(scalar_t(0.5));
cpu_kernel_vec(
it,
[](scalar_t x) {
const scalar_t kBeta = M_SQRT2 * M_2_SQRTPI * 0.5;
const scalar_t kKappa = 0.044715;
auto x_cube = x * x * x;
auto inner = kBeta * (x + kKappa * x_cube);
return scalar_t(0.5) * x * (scalar_t(1) + std::tanh(inner));
},
```
As noted in the [docs](https://pytorch.org/docs/master/generated/torch.nn.functional.gelu.html#torch.nn.functional.gelu), this boils down to
```
\text{GELU}(x) = 0.5 * x * (1 + \text{Tanh}(\sqrt{2 / \pi} * (x + 0.044715 * x^3)))
```
HF transformers has a number of GELU implementations, but the one which corresponds to this specific variant appears to be `ACT2FN["gelu_new"]`, which is implemented as
https://github.com/huggingface/transformers/blob/df32b5d89ba2f86d9e3650dde84686a96d46fe65/src/transformers/activations.py#L34
Hence, I investigated whether the output of `gelu_new(x)` equals that of `F.gelu(x, approximate="tanh")`.
**2. Preliminary Experiments**
A simple first-level check might be to generate a random tensor and compare the output of the two functions. Here is the experiment, with link to the [Colab notebook](https://colab.research.google.com/drive/1IbzJmaxiolpte7HnCcdeeYtY_kaZ7tFB?usp=sharing).
```python
import math
import torch
import torch.nn.functional as F

# HF's pure-python tanh approximation ("gelu_new"), inlined so the snippet is self-contained
def gelu_new(x):
    return 0.5 * x * (1.0 + torch.tanh(math.sqrt(2.0 / math.pi) * (x + 0.044715 * torch.pow(x, 3.0))))

def average(values):
    return sum(values) / len(values)

NUM_TRIALS = 1000
cpu_equal = []
cpu_allclose = []
for _ in range(NUM_TRIALS):
x_cpu = torch.randn(3, 3)
torch_cpu = F.gelu(x_cpu, approximate="tanh")
hf_cpu = gelu_new(x_cpu)
cpu_equal.append(torch.equal(torch_cpu, hf_cpu))
cpu_allclose.append(torch.allclose(torch_cpu, hf_cpu, rtol=1e-6))
print(average(cpu_equal))
print(average(cpu_allclose))
```
The same experiment was conducted with GPU by replacing `x_cpu = torch.randn(3, 3)` with `x_gpu = torch.randn(3, 3, device="cuda")`. There is no particular reason behind the choice of tensor size `(3, 3)`; it seemed reasonably small enough to manually check tensor values.
Given an `rtol` of `1e-6` and 1000 iterations per device, here are the results:
```
cpu equal: 0.213
cpu allclose: 0.991
gpu equal: 0.952
gpu allclose: 0.997
```
Computations seem to be more robust on the GPU. In particular, `torch.equal` passes only in about 1 out of 5 runs on CPU.
**3. Next Steps**
Here is a non-exhaustive list of tasks.
- Conduct more experiments with different tensor sizes, `rtol` parameters, and `dtype`s
- Run basic experiments (e.g. training and inference) with some models with replaced activation functions to see if results are reproducible
Generally, my intuition is that replacing `gelu_fast` with the new PyTorch GELU is a risky move given that `torch.equal` does not pass in all cases. As you noted previously,
> we must not change the outputs even by a little bit
Unless there is a performance overhead we are concerned with, I do not see a compelling reason to make what could be a dangerous transition.<|||||>That's a great report, Jaesung! Thank you!
Well, it can't be equal since it's an approximation, so it's really about deciding on the `atol/rtol` values and whether the results are acceptable.
and of course you want to experiment with much larger tensors than 3x3 to come up with conclusions.
If I try with more realistic sizes I get 0 matches with close or equal:
https://colab.research.google.com/drive/1RrG_R_gdOI3c9ut4Emh-ausg7vwsStKy?usp=sharing<|||||>@vadimkantorov, please have a look - the results are very different between the nightly fast version and the slow python approximation function.<|||||>I guess we need to tag @rdspring1 who authored https://github.com/pytorch/pytorch/pull/61439...<|||||>Hey @rdspring1, could you kindly look into this? In particular, we've observed that the number of `torch.allclose` fail cases increases as the size of the input tensor gets larger.
In the meantime, @stas00, do you think there's something actionable on our end? Perhaps I could load a model and replace HF GELUs with the PyTorch approximation to see if model outputs differ (they most surely will). What I'm not sure about is how to quantify/analyze this difference, as it's difficult to say with certainty that an X amount of `atol/rtol` difference is "fine" (or not).<|||||>Probably the creator of this feature would know the best expected tolerance - we will probably need to wait for their reply before we can proceed. Unless of course you'd like to dig into the source code and try to understand it yourself.<|||||>In my personal tests, I used `torch.allclose(torch_cpu, hf_cpu, rtol=1e-6, atol=1e-6)`.
For pytorch internal testing, they use double, long, and complex128 for their numpy reference check.
https://github.com/pytorch/pytorch/blob/master/test/test_ops.py#L207-L215
Here is the numpy tanh gelu reference implementation:
https://github.com/pytorch/pytorch/blob/master/torch/testing/_internal/common_methods_invocations.py#L8238-L8250
Here are the tolerances used in pytorch.
https://github.com/pytorch/pytorch/blob/master/torch/testing/_comparison.py#L1182-L1213
For `torch.float32`, the default tolerances are `rtol=1.3e-6` and `atol=1e-5`
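A quick way to apply those tolerances to the comparison discussed above (a sketch; it assumes a PyTorch build where `F.gelu` accepts `approximate="tanh"`):
```python
import math
import torch
import torch.nn.functional as F

def gelu_new(x):
    # HF's pure-python tanh approximation
    return 0.5 * x * (1.0 + torch.tanh(math.sqrt(2.0 / math.pi) * (x + 0.044715 * torch.pow(x, 3.0))))

x = torch.randn(1024, 1024)
print(torch.allclose(F.gelu(x, approximate="tanh"), gelu_new(x), rtol=1.3e-6, atol=1e-5))

# double precision as a reference, mirroring the numpy-based checks linked above
x64 = x.double()
print((F.gelu(x64, approximate="tanh") - gelu_new(x64)).abs().max())
```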
|
transformers | 15,396 | closed | ImportError: cannot import name 'ImageGPTFeatureExtractor' from 'transformers' (unknown location) | Hi all,
I am running the script in Kaggle and am facing the error mentioned below:
```
----> 1 from transformers import ImageGPTFeatureExtractor, ImageGPTModel
ImportError: cannot import name 'ImageGPTFeatureExtractor' from 'transformers' (unknown location)
```
How could I fix this error?
Thank you! | 01-28-2022 17:29:45 | 01-28-2022 17:29:45 | Hi,
What's the version of Transformers you have?<|||||>Hi,
4.5.1.
Do I need to upgrade ?<|||||>ImageGPT was released in v4.13 I believe<|||||>Yes, you are right.
Thank you very much! |
transformers | 15,395 | closed | Make links explicit | This small change makes the links in the examples `README.md` explicit. Right now the text assumes you're reading it on GitHub, and that you can see the directory tree above. However, the text is copied to https://huggingface.co/docs/transformers/examples where you can't see the directory tree and so can't access any of the examples! | 01-28-2022 17:06:20 | 01-28-2022 17:06:20 | _The documentation is not available anymore as the PR was closed or merged._ |
transformers | 15,394 | closed | Use argument for preprocessing workers in run_summarization | # What does this PR do?
The argument `preprocessing_num_workers` is defined but not used, this PR fixes that.
Fixes #15338 | 01-28-2022 16:10:46 | 01-28-2022 16:10:46 | _The documentation is not available anymore as the PR was closed or merged._ |
transformers | 15,393 | closed | snapshot_download() got an unexpected keyword argument 'allow_regex' | ## Environment info
- `transformers` version: 4.16.0
- Platform: Linux-3.10.0-1160.24.1.el7.ug.x86_64-x86_64-with-glibc2.2.5
- Python version: 3.8.6
- PyTorch version (GPU?): 1.10.1+cu102 (False)
- Tensorflow version (GPU?): 2.7.0 (False)
- Flax version (CPU?/GPU?/TPU?): not installed (NA)
- Jax version: not installed
- JaxLib version: not installed
- Using GPU in script?: not yet (testing)
- Using distributed or parallel set-up in script?: no (but on HPC)
### Who can help
@patrickvonplaten (loading ASR model with kenlm and pyctcdecode)
## Information
Model I am using (Bert, XLNet ...):
```
>>> processor = AutoProcessor.from_pretrained("flozi00/wav2vec2-large-xlsr-53-german-with-lm")
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "/kyukon/scratch/gent/427/vsc42730/hftransformers/transformers_env/lib/python3.8/site-packages/transformers/models/auto/processing_auto.py", line 161, in from_pretrained
return processor_class.from_pretrained(pretrained_model_name_or_path, **kwargs)
File "/kyukon/scratch/gent/427/vsc42730/hftransformers/transformers_env/lib/python3.8/site-packages/transformers/models/wav2vec2_with_lm/processing_wav2vec2_with_lm.py", line 174, in from_pretrained
decoder = BeamSearchDecoderCTC.load_from_hf_hub(
File "/kyukon/scratch/gent/427/vsc42730/hftransformers/transformers_env/lib/python3.8/site-packages/pyctcdecode/decoder.py", line 757, in load_from_hf_hub
cached_directory = snapshot_download(model_id, cache_dir=cache_dir, **kwargs)
TypeError: snapshot_download() got an unexpected keyword argument 'allow_regex'
```
The problem arises when using:
* [x] the official example scripts: (give details below)
* [ ] my own modified scripts: (give details below)
```
from transformers import AutoModelForCTC, AutoProcessor
model = AutoModelForCTC.from_pretrained(model_id)
processor = AutoProcessor.from_pretrained(model_id)
```
## To reproduce
Steps to reproduce the behavior:
1. pip install transformers
2. pip install pyctcdecode
3. pip install https://github.com/kpu/kenlm/archive/master.zip
Possibly related to recent bugfix https://github.com/huggingface/transformers/pull/14881#discussion_r773823212
Similar result with
```
processor = Wav2Vec2ProcessorWithLM.from_pretrained
```
May also result in the following error when trying Wav2Vec2CTCTokenizer as intermediate step
```
TypeError: from_pretrained() takes 2 positional arguments but 3 were given
```
## Expected behavior
load processor
The current behaviour seems inherited from
```
from pyctcdecode import BeamSearchDecoderCTC
```
and in accordance with https://github.com/huggingface/transformers/blob/master/src/transformers/models/wav2vec2_with_lm/processing_wav2vec2_with_lm.py
I hope this is the right place to ask after the integration, thanks in advance and keep up the good work
| 01-28-2022 16:09:01 | 01-28-2022 16:09:01 | Hey @gnmarten,
Could you copy-paste this command and state your output here:
```bash
python -c "import huggingface_hub; print(huggingface_hub.__version__)"
```
If your version is lower than 0.4.0 it would be nice if you could upgrade `huggingface_hub`<|||||>yes, it was <0.4.0, and after upgrade the download works (with remark "Loading the LM will be faster if you build a binary file.")
Thanks a lot for the fast reaction<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored.<|||||>Hi everyone,
My huggingface_hub is 0.11.1, and the error
```
TypeError: snapshot_download() got an unexpected keyword argument 'local_dir_use_symlinks'
snapshot_download() got an unexpected keyword argument 'local_dir_use_symlinks'
snapshot_download() got an unexpected keyword argument 'local_dir'
```
which version should I update to?<|||||>Could you upgrade `huggingface_hub` to the newest version? |
transformers | 15,392 | closed | Logits size does not match vocabulary size when using pyctcdecode for a fine-tuned wav2vec 2.0 model | ## Environment info
- `transformers` version: 4.17.0.dev0
- Platform: Linux-5.4.0-1063-azure-x86_64-with-glibc2.29
- Python version: 3.8.10
- PyTorch version (GPU?): 1.10.2+cu102 (True)
- Tensorflow version (GPU?): 2.6.0 (True)
- Flax version (CPU?/GPU?/TPU?): not installed (NA)
- Jax version: not installed
- JaxLib version: not installed
- Using GPU in script?: Yes (device = 'cuda')
- Using distributed or parallel set-up in script?: No
## Who can help
@patrickvonplaten, @anton-l
## Dataset used
https://huggingface.co/datasets/mozilla-foundation/common_voice_8_0
## Information
Model I am using: https://huggingface.co/Iskaj/xlsr300m_cv_7.0_nl_lm (dataset: https://huggingface.co/datasets/mozilla-foundation/common_voice_8_0). The model is a fine-tuned version of https://huggingface.co/facebook/wav2vec2-xls-r-300m
The problem arises when using:
```python
import torch
from datasets import load_dataset
from transformers import AutoModelForCTC, AutoProcessor
import torchaudio.functional as F
model_id = "Iskaj/xlsr300m_cv_7.0_nl_lm"
sample_iter = iter(load_dataset("mozilla-foundation/common_voice_8_0", "nl", split="test", streaming=True, use_auth_token=True))
sample = next(sample_iter)
resampled_audio = F.resample(torch.tensor(sample["audio"]["array"]), 48_000, 16_000).numpy()
model = AutoModelForCTC.from_pretrained(model_id)
processor = AutoProcessor.from_pretrained(model_id)
input_values = processor(resampled_audio, return_tensors="pt").input_values
with torch.no_grad():
logits = model(input_values).logits
transcription = processor.batch_decode(logits.numpy()).text
```
## To reproduce
Steps to reproduce the behavior:
1. install packages using: `!pip install https://github.com/kpu/kenlm/archive/master.zip pyctcdecode`
`!pip install git+https://github.com/huggingface/transformers.git`
`!pip install git+https://github.com/huggingface/datasets.git`
`!pip install torchaudio soundfile librosa Levenshtein telwoord wandb jiwer`
2. run:
```
import torch
from datasets import load_dataset
from transformers import AutoModelForCTC, AutoProcessor
import torchaudio.functional as F
model_id = "Iskaj/xlsr300m_cv_7.0_nl_lm"
sample_iter = iter(load_dataset("mozilla-foundation/common_voice_8_0", "nl", split="test", streaming=True, use_auth_token=True))
sample = next(sample_iter)
resampled_audio = F.resample(torch.tensor(sample["audio"]["array"]), 48_000, 16_000).numpy()
model = AutoModelForCTC.from_pretrained(model_id)
processor = AutoProcessor.from_pretrained(model_id)
input_values = processor(resampled_audio, return_tensors="pt").input_values
with torch.no_grad():
logits = model(input_values).logits
transcription = processor.batch_decode(logits.numpy()).text
```
3. Observe the error: `ValueError: Input logits of size 48, but vocabulary is size 50`
## Expected behavior
I would expect pyctc-decode to work correctly and give me a transcription.
I suspect it has something to do with `<s>` and `</s>`. I've been struggling a bit with the length of the logits not matching up with the length of the vocabulary, when using pyctcdecode. For example in this repo that uses the LM: https://huggingface.co/patrickvonplaten/wav2vec2-base-100h-with-lm/blob/main/vocab.json the vocab.json includes <s> and </s>, but in this repo it doesn't: https://huggingface.co/hf-test/xls-r-300m-sv/blob/main/vocab.json Maybe that helps | 01-28-2022 16:05:21 | 01-28-2022 16:05:21 | Hey @iskaj,
The problem is that there are 48 tokens in the official vocabulary: https://huggingface.co/Iskaj/xlsr300m_cv_7.0_nl_lm/blob/e0290ad21fd43cf69d9f4de754067c02f9d6641e/vocab.json , but two more in the alphabet of the language model: https://huggingface.co/Iskaj/xlsr300m_cv_7.0_nl_lm/blob/e0290ad21fd43cf69d9f4de754067c02f9d6641e/alphabet.json
This leads to an error as the two vocabularies have to match.
To fix the problem above we need to do the following change: https://huggingface.co/Iskaj/xlsr300m_cv_7.0_nl_lm/commit/714875260cb8e156ca4e4059bd7d07b863e03aad<|||||>Having done the change, executing your code currently leads to the following error:
```bash
~/python_bin/transformers/models/wav2vec2_with_lm/processing_wav2vec2_with_lm.py in from_pretrained(cls, pretrained_model_name_or_path, **kwargs)
186 missing_decoder_tokens = cls.get_missing_alphabet_tokens(decoder, tokenizer)
187 if len(missing_decoder_tokens) > 0:
--> 188 raise ValueError(
189 f"The tokens {missing_decoder_tokens} are defined in the tokenizer's "
190 "vocabulary, but not in the decoder's alphabet. "
ValueError: The tokens {'</s>', '<s>'} are defined in the tokenizer's vocabulary, but not in the decoder's alphabet. Make sure to include {'</s>', '<s>'} in the decoder's alphabet.
```
which is a bug in Transformers that I'll try to fix tomorrow (should be relatively easy) :-)
I'll keep you posted here<|||||>I indeed tried removing the 2 tokens and encountered the error. I thought, because of the error message, that it was a fault on my end in terms of modelling. Thanks for the quick response!<|||||>I just took a more detailed look into it and it's actually not really a bug in Wav2Vec2, but somewhat of an edge-case scenario coupled with unintuitive design in Transformers.
Let's explain. Both your vocabulary file and your alphabet file have been correctly constructed:
- Vocab: https://huggingface.co/Iskaj/xlsr300m_cv_7.0_nl_lm/blob/main/vocab.json
- Alphabet: https://huggingface.co/Iskaj/xlsr300m_cv_7.0_nl_lm/blob/main/alphabet.json
=> They match in vocabulary which is exactly what we want!
**However**, in Transformers tokenizers can also have additional tokens that when not added to the vocabulary don't necessarily have to have a corresponding output character / logit id. This was the case here. The EOS and BOS token were added as "additional tokens" even though they have no corresponding logit id - see: https://huggingface.co/Iskaj/xlsr300m_cv_7.0_nl_lm/blob/e0290ad21fd43cf69d9f4de754067c02f9d6641e/added_tokens.json .
This means they belong to the vocabulary of the tokenizer, which in turn would force them to also be part of the alphabet. That doesn't make sense, though, since the alphabet has to correspond 1-to-1 to the logit ids, and EOS and BOS have no logit id.
=> So to make your model work, we actually need to remove those special tokens which I have done in the last three commits:
- https://huggingface.co/Iskaj/xlsr300m_cv_7.0_nl_lm/commit/fa01c8fc60031ec64e9c0cb965ebd2d629e2c4fb
- https://huggingface.co/Iskaj/xlsr300m_cv_7.0_nl_lm/commit/4c37833bd9ac1c717a4dedaaeb328f46160677a1
- https://huggingface.co/Iskaj/xlsr300m_cv_7.0_nl_lm/commit/17f4f68b9c4bd778e087244d882a47eb9c5cdc9a
So the above command now works as expected.
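As a quick sanity check that everything lines up after a fix like this, something along these lines can help (a sketch; `_alphabet` is a pyctcdecode-internal attribute and may change between versions):
```python
from transformers import AutoModelForCTC, AutoProcessor

model_id = "Iskaj/xlsr300m_cv_7.0_nl_lm"
model = AutoModelForCTC.from_pretrained(model_id)
processor = AutoProcessor.from_pretrained(model_id)

print(model.config.vocab_size)                  # number of logits the model produces per frame
print(len(processor.tokenizer.get_vocab()))     # tokenizer vocabulary size
print(len(processor.decoder._alphabet.labels))  # pyctcdecode alphabet size - all three should match
```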
Finally there is one more difficulty to consider. By default Wav2Vec2CTCTokenizer sets the `eos_token` and the `bos_token` to a value, being `<eos>` and `<bos>` - see: https://huggingface.co/docs/transformers/v4.16.1/en/model_doc/wav2vec2#transformers.Wav2Vec2CTCTokenizer
=> this means we actually need to overwrite this with `None` (or `null` in json). So to remove EOS and BOS one would have to do:
```python
from transformers import Wav2Vec2CTCTokenizer
tok = Wav2Vec2CTCTokenizer.from_pretrained("<path/to/vocab/file/without/eos/and/bos>", eos_token=None, bos_token=None)
```
The original Wav2Vec2 Tokenizers all had EOS and BOS defined in the vocab and a corresponding logit id, which is why it is the default. However, this doesn't always have to be the case.<|||||>Hope this helps to understand the problem for future use-cases as well :-)<|||||>> Hope this helps to understand the problem for future use-cases as well :-)
Patrick, thanks for your help!
Are you planning to fix this issue in Transformers?
Currently this issue makes it impossible to run your code example from this repo:
[Transformers Wav2Vec2 + PyCTCDecode](https://github.com/patrickvonplaten/Wav2Vec2_PyCTCDecode)
If the vocabulary doesn't contain the `<s>, </s>` chars, it will give the error described in this thread:
> ValueError: Input logits of size X, but vocabulary is size X + 2<|||||>Hey @BakuDev,
It's not really an issue in Transformers IMO - one needs to make sure that the tokenizer configs are correct<|||||>> Hey @BakuDev,
>
> It's not really an issue in Transformers IMO - one needs to make sure that the tokenizer configs are correct
I think I am running into a similar problem. When I remove/replace `'<s>'` and `'</s>'` as described above, I am getting the following error when loading the processor:
```
from transformers import Wav2Vec2ProcessorWithLM, AutoModelForCTC
processor_with_lm = Wav2Vec2ProcessorWithLM.from_pretrained("pgilles/wav2vec2-large-xls-r-LUXEMBOURGISH-with-LM")
```
```
---------------------------------------------------------------------------
ValueError Traceback (most recent call last)
<ipython-input-42-3afdbbbb10fb> in <module>
1 from transformers import Wav2Vec2ProcessorWithLM, AutoModelForCTC
2
----> 3 processor_with_lm = Wav2Vec2ProcessorWithLM.from_pretrained("/Users/peter.gilles/Documents/_Daten/Finetuning - Traningsdaten/wav2vec2-large-xls-r-LUXEMBOURGISH2")
/usr/local/lib/python3.8/site-packages/transformers/models/wav2vec2_with_lm/processing_wav2vec2_with_lm.py in from_pretrained(cls, pretrained_model_name_or_path, **kwargs)
186 missing_decoder_tokens = cls.get_missing_alphabet_tokens(decoder, tokenizer)
187 if len(missing_decoder_tokens) > 0:
--> 188 raise ValueError(
189 f"The tokens {missing_decoder_tokens} are defined in the tokenizer's "
190 "vocabulary, but not in the decoder's alphabet. "
```
`ValueError: The tokens {'</s>', '<s>'} are defined in the tokenizer's vocabulary, but not in the decoder's alphabet. Make sure to include {'</s>', '<s>'} in the decoder's alphabet.`
When I leave all the `<s>, </s>`, I am getting this error after trying to run an interference:
```
import torch
import torchaudio
import soundfile
waveform, sampling_rate = soundfile.read( audio_sample )
features = processor_with_lm(waveform, sampling_rate=16000, return_tensors="pt")
with torch.no_grad():
logits = model(**features).logits
transcription = processor_with_lm.batch_decode(logits.numpy()).text
transcription[0].lower()
```
`ValueError: Input logits of size 39, but vocabulary is size 41`
It seems at the moment I am stuck with the integration of a language model.
<|||||>@PeterGilles,
I'm pretty sure the problem is exactly the same as solved in https://github.com/huggingface/transformers/issues/15392#issuecomment-1024905216 . Could you try to solve it following the comments?<|||||>@patrickvonplaten , thanks. I guess my problem is exactly what is described in the comment above. However, I am not succeeding. Maybe I am not getting the last step mentioned in the comment about the Wav2Vec2CTCTokenizer.
My setup at the moment is the following:
```
from pyctcdecode import build_ctcdecoder
decoder = build_ctcdecoder(
labels=list(sorted_vocab_dict.keys()),
kenlm_model_path="5gram.arpa",
)
from transformers import Wav2Vec2CTCTokenizer
tokenizer = Wav2Vec2CTCTokenizer.from_pretrained("wav2vec2-large-xls-r-LUXEMBOURGISH-with-LM", eos_token=None, bos_token=None)
from transformers import Wav2Vec2ProcessorWithLM, AutoModelForCTC
processor_with_lm = Wav2Vec2ProcessorWithLM(
feature_extractor=processor.feature_extractor,
tokenizer=tokenizer,
decoder=decoder
)
processor_with_lm.save_pretrained("wav2vec2-large-xls-r-LUXEMBOURGISH-with-LM")
from transformers import Wav2Vec2ProcessorWithLM, AutoModelForCTC
processor_with_lm = Wav2Vec2ProcessorWithLM.from_pretrained("wav2vec2-large-xls-r-LUXEMBOURGISH-with-LM")
model = AutoModelForCTC.from_pretrained("wav2vec2-large-xls-r-LUXEMBOURGISH-with-LM")
import IPython.display as ipd
audio_sample = "pg_8_124_4dfst.wav"
ipd.Audio(data=audio_sample, autoplay=True, rate=audio_sample)
import torch
import torchaudio
import soundfile
waveform, sampling_rate = soundfile.read( audio_sample )
features = processor_with_lm(waveform, sampling_rate=16000, return_tensors="pt")
#input_values = features.input_values
#attention_mask = features.attention_mask
with torch.no_grad():
logits = model(**features).logits
transcription = processor_with_lm.batch_decode(logits.numpy()).text
transcription[0].lower()
```
`ValueError: Input logits of size 39, but vocabulary is size 41`
The model on [pgilles/wav2vec2-large-xls-r-LUXEMBOURGISH-with-LM](https://huggingface.co/pgilles/wav2vec2-large-xls-r-LUXEMBOURGISH-with-LM) is up-to-date. I think the error comes in when I include the modified tokenizer.
Thanks.<|||||>Hey @PeterGilles,
I updated your config here: https://huggingface.co/pgilles/wav2vec2-large-xls-r-LUXEMBOURGISH-with-LM/commit/dbfa456977153ef0e9d37f449854de9fddd34595 . The tokenizer should be correct now - can you check your example again? Sorry the LM is too huge for me to check it locally<|||||>Dear @patrickvonplaten , indeed this little change made the difference and it seems to work. Merci!
When running some inferences, the transcription of audio files which are also part of the training data works fine. However, when trying some other audio clips (2-3 seconds), the output is weird and nonsensical, i.e. either one or a few letters or simply nothing. I guess this is now a general problem with the language model.<|||||>Hi @patrickvonplaten I got the same error as you mentioned above. I did what you said but I still get an error like below. Can you please help?
ValueError: The tokens {'null'} are defined in the tokenizer's vocabulary, but not in the decoder's alphabet. Make sure to include {'null'} in the decoder's alphabet.
My alphabet.json:
{"labels": ["", "<s>", "</s>", "\u2047", " ", "'", "a", "b", "c", "d", "e", "f", "g", "h", "i", "j", "k", "l", "m", "n", "o", "p", "q", "r", "s", "t", "u", "v", "w", "x", "y", "z", "\u00e7", "\u00f6", "\u00fc", "\u011f", "\u0131", "\u015f"], "is_bpe": false}
my vocab.json:
{"[PAD]": 0, "<s>": 1, "</s>": 2, "[UNK]": 3, "|": 4, "'": 5, "a": 6, "b": 7, "c": 8, "d": 9, "e": 10, "f": 11, "g": 12, "h": 13, "i": 14, "j": 15, "k": 16, "l": 17, "m": 18, "n": 19, "o": 20, "p": 21, "q": 22, "r": 23, "s": 24, "t": 25, "u": 26, "v": 27, "w": 28, "x": 29, "y": 30, "z": 31, "ç": 32, "ö": 33, "ü": 34, "ğ": 35, "ı": 36, "ş": 37}
my added_tokens.json:
{}
my special_tokens_map.json:
{"bos_token": "null", "eos_token": "null", "unk_token": "[UNK]", "pad_token": "[PAD]"}
my tokenizer_config.json:
{"unk_token": "[UNK]", "bos_token": "null", "eos_token": "null", "pad_token": "[PAD]", "do_lower_case": false, "word_delimiter_token": "|", "replace_word_delimiter_char": " ", "special_tokens_map_file": null, "name_or_path": "model/checkpoint-6000", "tokenizer_class": "Wav2Vec2CTCTokenizer", "processor_class": "Wav2Vec2ProcessorWithLM"}
<|||||>I think there should be no quotes around `null`, i.e. `"bos_token": null, "eos_token": null`.<|||||>@patrickvonplaten I have set the eos and bos tokens to null, still I am getting the same error.
Here is my model
https://huggingface.co/Talha/URDU-ASR<|||||>Hey @talhaanwarch, can you please provide a code snippet showcasing your error?<|||||>@patrickvonplaten I am looking for the best way to fine-tune Wav2Vec2 with the Hugging Face Trainer and a language model. Can you help?<|||||>The blog post: https://huggingface.co/blog/fine-tune-xlsr-wav2vec2 should be the most helpful one<|||||>Where does one specify the alpha and beta values for decoding? Or is that not applicable here? |
transformers | 15,391 | closed | [Fix doc example] FlaxMarianPreTrainedModel | # What does this PR do?
Tiny fix:
change
```
>>> tokenizer = MarianTokenizer.from_pretrained("facebook/marian-large-cnn")
```
to
```
>>> tokenizer = MarianTokenizer.from_pretrained("Helsinki-NLP/opus-mt-en-de")
```
BTW, `Helsinki-NLP/opus-mt-en-de` has no Flax model checkpoint. The next line
```
>>> model = FlaxMarianMTModel.from_pretrained("Helsinki-NLP/opus-mt-en-de")
```
will fail I think.
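(Side note: even if a repo really lacked a Flax checkpoint, one could usually still load it by converting the PyTorch weights on the fly - a sketch, assuming the PyTorch weights exist:)
```python
>>> from transformers import FlaxMarianMTModel
>>> # from_pt=True converts the PyTorch checkpoint to Flax at load time
>>> model = FlaxMarianMTModel.from_pretrained("Helsinki-NLP/opus-mt-en-de", from_pt=True)
```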
| 01-28-2022 15:18:51 | 01-28-2022 15:18:51 | _The documentation is not available anymore as the PR was closed or merged._<|||||>@ydshieh `Helsinki-NLP/opus-mt-en-de` already has a flax checkpoint. cf
https://huggingface.co/Helsinki-NLP/opus-mt-en-de/blob/main/flax_model.msgpack<|||||>> @ydshieh `Helsinki-NLP/opus-mt-en-de` already has a flax checkpoint. cf https://huggingface.co/Helsinki-NLP/opus-mt-en-de/blob/main/flax_model.msgpack
Great. Sorry, Google search led me to the wrong page (`de-en` instead ...) |
transformers | 15,390 | closed | can't use with pytorch dataloader | RuntimeError: einsum(): the number of subscripts in the equation (3) does not match the number of dimensions (4) for operand 0 and no ellipsis was given | 01-28-2022 14:44:50 | 01-28-2022 14:44:50 | Please paste the whole code and describe your task in detail<|||||>@sipie800 Seems like a dimension error while resizing since you are using einsum. If you could share your code I maybe able to help.<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored. |
transformers | 15,389 | closed | Fine Tuned GPT2 model performs very poorly on token classification task | ## Environment info
- `transformers` version: 4.17.0.dev0
- Platform: Linux-5.4.144+-x86_64-with-Ubuntu-18.04-bionic
- Python version: 3.7.12
- PyTorch version (GPU?): 1.10.0+cu111 (True)
- Tensorflow version (GPU?): 2.7.0 (True)
- Flax version (CPU?/GPU?/TPU?): not installed (NA)
- Jax version: not installed
- JaxLib version: not installed
- Using GPU in script?: <fill in>
- Using distributed or parallel set-up in script?: <fill in>
### Models:
- GPT-2: @patrickvonplaten, @LysandreJik
## Information
Model I am using gpt2:
The problem arises when using:
* [x] the official example scripts: `run_ner.py`
The tasks I am working on is:
Token classification on ncbi_disease dataset
## To reproduce
python run_ner.py \
--model_name_or_path gpt2 \
--dataset_name ncbi_disease \
--output_dir /tmp/test-ner \
--do_train \
--do_eval
The f1-score after fine-tuning for 3 epoch is only 0.6114 whereas roberta-base provides f1 score of 0.8358 on the evaluation dataset. | 01-28-2022 12:42:22 | 01-28-2022 12:42:22 | Hey @Dhanachandra,
As this is not an obvious bug, could you please try to make use of the forum: https://discuss.huggingface.co/ here instead?<|||||>Sorry, @patrickvonplaten,
Out of curiosity to know the reason, I was posting the query here. <|||||>That's totally fine! I hope you'll find some help on the forum :-)<|||||>Just commenting on this since I was trying to do a similar thing. I unfortunately had some false confidence since the code specifically checks for the `gpt2` config. Hopefully this will help people looking at this in the future!
There is a subtle oversight in the code for decoder-only models. The `run_ner.py` code puts the label on the first of tokenized inputs. So for example:
```
Inputs: "He is on the ketogenic diet"
Label: "ketogenic diet" -> treatment
Tokenized-inputs: ["He", "is", "on", "the", "ke", "to", "genic", "diet"]
Old Model Labels: [0, 0, 0, 0, 1, -100, -100, 1]
```
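For context, a simplified sketch of the usual alignment step that produces the labels above (not the exact `run_ner.py` code) - only the first sub-token of each word keeps the label:
```python
def align_labels(word_ids, word_labels):
    previous_word_id = None
    aligned = []
    for word_id in word_ids:
        if word_id is None:
            aligned.append(-100)                  # special tokens
        elif word_id != previous_word_id:
            aligned.append(word_labels[word_id])  # first sub-token of each word gets the label
        else:
            aligned.append(-100)                  # remaining sub-tokens are masked out
        previous_word_id = word_id
    return aligned

# word_ids as returned by tokenizer(..., is_split_into_words=True).word_ids()
print(align_labels([0, 1, 2, 3, 4, 4, 4, 5], [0, 0, 0, 0, 1, 1]))
# -> [0, 0, 0, 0, 1, -100, -100, 1], i.e. the "Old Model Labels" above
```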
So it will only see "ke" before needing to make a prediction. This should be changed to the following:
```
Tokenized-inputs: ["He", "is", "on", "the", "ke", "to", "genic", "diet"]
New Model Labels: [0, 0, 0, 0, -100, -100, 1, 1]
``` |
transformers | 15,388 | closed | Prepare deprecated ONNX exporter for torch v1.11 | # What does this PR do?
This PR extends #15270 to ensure the deprecated `transformers.convert_graph_to_onnx` package can also run on `torch` v1.11.
I also took the liberty of adding a deprecation warning to this package.
| 01-28-2022 11:57:23 | 01-28-2022 11:57:23 | _The documentation is not available anymore as the PR was closed or merged._ |
transformers | 15,387 | open | Evidentiality-guided Generator - Retrieval model | # 🌟 New model addition
## Model description
In this paper, we introduce Evidentiality-guided Generator, which incorporates evidentiality of passages---whether a passage contains correct evidence to support the output---into training the generator via multi-task learning of answer generation and evidentiality prediction for retrieval-augmented generation. Experimental results show large improvements across three knowledge-intensive tasks: open question answering, fact verification and knowledge-enhanced dialogue.
## Open source status
* [x] the model implementation is available: https://github.com/AkariAsai/evidentiality_qa/tree/main/evi_gen
* [x] the model weights are available: https://github.com/AkariAsai/evidentiality_qa#fine-tuned-models
* [x] who are the authors: @AkariAsai
Happy to guide anyone who is interested through adding this model! Seems like it gives some nice improvements over RAG!
@qqaatw - maybe interesting for you ;-)
| 01-28-2022 11:46:12 | 01-28-2022 11:46:12 | cc @patil-suraj <|||||>I think we could firstly add Fusion-in-Decoder (FiD), [paper](https://arxiv.org/pdf/2007.01282.pdf) and [code](https://github.com/facebookresearch/FiD), since Evidentiality-guided Generator is based on it.<|||||>Hi everyone, Can I work on adding Fusion-in-Decoder (FiD) to Transformers?<|||||>Yeah that makes sense - @patil-suraj actually mentioned the same!<|||||>@bhavitvyamalik - definitely! I think @qqaatw and I would be happy to guide you. Do you want to open a PR for it?<|||||>I was just going through the paper and will open a PR for it! Since this will be my first model contribution to Transformers, I wanted to know if there any guidlines that I should follow for the same? <|||||>@patrickvonplaten - absolutely! After Fusion-in-Decoder (FiD) is merged, I can open a follow-up addition for Evidentiality-guided Generator.
@bhavitvyamalik - I think having a look at [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) would be helpful in the first step :-)<|||||>Update: Preparing the original train/val/test data took some time as they were loading the full wikipedia dump (~13GB) in memory which was not possible from my side. I tried forward pass with available pretrained weights after removing functions that were not required and it worked well! I had a doubt related to `FiD`:
Integrating the retrieval in `FiD` is part of their future work, which means the users will have to input the context by themselves. So in what format should I get input from user? Current implementation is taking input like this:
```
[
{
"question":"who got the first nobel prize in physics",
"answers":[
"Wilhelm Conrad Röntgen"
],
"ctxs":[
{
"title":"Nobel Prize in Physics",
"text":"Nobel Prize in Physics The Nobel Prize in Physics () is a yearly award given by the Royal Swedish Academy of Sciences for those who have made the most outstanding contributions for mankind in the field of physics. It is one of the five Nobel Prizes established by the will of Alfred Nobel in 1895 and awarded since 1901; the others being the Nobel Prize in Chemistry, Nobel Prize in Literature, Nobel Peace Prize, and Nobel Prize in Physiology or Medicine. The first Nobel Prize in Physics was awarded to physicist Wilhelm Röntgen in recognition of the extraordinary services he"
},
{
"title":"Nobel Prize",
"text":"His son, George Paget Thomson, received the same prize in 1937 for showing that they also have the properties of waves. William Henry Bragg and his son, William Lawrence Bragg, shared the Physics Prize in 1915 for inventing the X-ray spectrometer. Niels Bohr was awarded the Physics prize in 1922, as was his son, Aage Bohr, in 1975. Manne Siegbahn, who received the Physics Prize in 1924, was the father of Kai Siegbahn, who received the Physics Prize in 1981. Hans von Euler-Chelpin, who received the Chemistry Prize in 1929, was the father of Ulf von Euler, who was awarded"
},
{
"title":"Nobel Prize in Physics",
"text":"receive a diploma, a medal and a document confirming the prize amount. Nobel Prize in Physics The Nobel Prize in Physics () is a yearly award given by the Royal Swedish Academy of Sciences for those who have made the most outstanding contributions for mankind in the field of physics. It is one of the five Nobel Prizes established by the will of Alfred Nobel in 1895 and awarded since 1901; the others being the Nobel Prize in Chemistry, Nobel Prize in Literature, Nobel Peace Prize, and Nobel Prize in Physiology or Medicine. The first Nobel Prize in Physics was"
}
]
},
]
```<|||||>Great question @bhavitvyamalik ! We've discussed this a bit with @patil-suraj as well. @patil-suraj - would you be interested in guiding @bhavitvyamalik here a bit regarding the design? <|||||>Happy to help otherwise as well if you're too busy :-) |
transformers | 15,386 | closed | Adding a test for hashing tokenizer state. | # What does this PR do?
Related to https://github.com/huggingface/transformers/issues/14931 and https://github.com/huggingface/datasets/issues/3638
<!--
Congratulations! You've made it this far! You're not quite done yet though.
Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution.
Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change.
Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost.
-->
<!-- Remove if not applicable -->
Fixes # (issue)
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/master/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/master/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
<!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @
If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**.
Please tag fewer than 3 people.
Models:
- albert, bert, xlm: @LysandreJik
- blenderbot, bart, marian, pegasus, encoderdecoder, t5: @patrickvonplaten, @patil-suraj
- longformer, reformer, transfoxl, xlnet: @patrickvonplaten
- fsmt: @stas00
- funnel: @sgugger
- gpt2: @patrickvonplaten, @LysandreJik
- rag: @patrickvonplaten, @lhoestq
- tensorflow: @LysandreJik
Library:
- benchmarks: @patrickvonplaten
- deepspeed: @stas00
- ray/raytune: @richardliaw, @amogkam
- text generation: @patrickvonplaten
- tokenizers: @n1t0, @LysandreJik
- trainer: @sgugger
- pipelines: @LysandreJik
Documentation: @sgugger
HF projects:
- datasets: [different repo](https://github.com/huggingface/datasets)
- rust tokenizers: [different repo](https://github.com/huggingface/tokenizers)
Examples:
- maintained examples (not research project or legacy): @sgugger, @patil-suraj
- research_projects/bert-loses-patience: @JetRunner
- research_projects/distillation: @VictorSanh
--> | 01-28-2022 10:37:24 | 01-28-2022 10:37:24 | The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_15386). All of your documentation changes will be reflected on that endpoint.<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored. |
transformers | 15,385 | closed | Force use_cache to be False in PyTorch | # What does this PR do?
In `TFxxxForConditionalGeneration` models, there is
```
if inputs["labels"] is not None:
...
inputs["use_cache"] = False
```
while there is no such change in `xxxForConditionalGeneration` models.
This would fail a more complete pt/tf equivalence test.
I understand why the TF models do this (it might be beneficial for efficiency).
We can do it in another way: add `use_cache=False` for PyTorch models if `labels` is passed. Let me know which way HF prefers.
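For illustration, a rough sketch of what such a PyTorch-side change could look like inside a model's `forward` (simplified; `labels`, `use_cache` and `logger` are the method's locals, and the exact warning text is an assumption):
```python
# sketch: inside e.g. BartForConditionalGeneration.forward
if labels is not None:
    if use_cache:
        logger.warning("`use_cache` is changed to `False` since `labels` is provided.")
    # training does a single forward pass, so the cache would never be consumed anyway
    use_cache = False
```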
| 01-28-2022 09:08:06 | 01-28-2022 09:08:06 | _The documentation is not available anymore as the PR was closed or merged._<|||||>Hmm, I think we can leave those statements @ydshieh no? The don't do any harm, but reduce the memory footprint slightly by not creating the past tensor. The reduced memory footprint might however be neglectible - in this case, ok for me to remove it.
For more context, I think the reason we've added `use_cache=False` here is because:
- `use_cache` can be used to sped up generation by not re-computing the past key value states for previous generation steps. This however does not make sense during training because there is only one forward pass through the decoder.
- `use_cache` is set to `True` by default, which means that the model outputs a tuple of past key value states that can (but don't have to) be used for sped-up generation if they are passed to the forward function in the next step. Since we are never passing `past_key_values` to the forward function during training, it's not really an error to leave `use_cache` to True.
- The reason why we however do set it to `False` is because this way some memory can be saved since the `past_key_values` are never actually returned.
<|||||>Keen to hear @Rocketknight1 and @gante opinion here<|||||>I totally agree. My purpose it to make the PT/TF returns the same results, not the specific way to use.
Currently, PT returns `past_key_values`, and TF doesn't in this case. This makes a more thorough version of `test_pt_tf_model_equivalence` almost impossible (or very difficult).
I can do it in another way: keep TF code as it is, but add `use_cache=False` for PT models if `labels` is passed (so PT/TF works the same way).
I believe a thorough test will be beneficial to HF's transformers - I have used it to identify several other issues that require real fixes (unlike this one).<|||||>Given the mismatches that @ydshieh has found so far, I'm leaning towards approving the PR so we can do more thorough testing.
Can we get some numbers (memory utilization and runtime, before and after the change), so we can make an informed decision? If the difference turns out to be small, then I believe it is a no-brainer :)<|||||>> Given the mismatches that @ydshieh has found so far, I'm leaning towards approving the PR so we can do more thorough testing.
>
> Can we get some numbers (memory utilization and runtime, before and after the change), so we can make an informed decision? If the difference turns out to be small, then I believe it is a no-brainer :)
I could try to measure it. But I prefer to go another way: add `use_cache = False` to PT models if `labels` is passed.
(no matter if there is gain or not). It's just a few files, and if cache is never used when `labels` is there, no reason to create it I think. Would it be OK for you guys?<|||||>Hi,
Use a setting `batch_size = 16, seq_len = 128 (both encoder/decoder)` for `BART large`: the difference (on the screen) of with/without cache is about `512 MB`.
I calculated the number of float32: `12 (layers) * 4 (2 for decoder attn, 2 for cross attn) * 16 (batch) * 16 (heads) * 128 (seq len) * 64 (dim per head) * 4 Byte (float32) = 384 MB`. (not very sure why this is different from the above observed difference.).
I tried with other settings, and it is linear. So for `batch_size = 256, seq_len = 512`, the difference would be like `32 GB`.
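As a quick back-of-the-envelope check, the estimate above can be recomputed directly (a small illustrative snippet):
```python
# BART-large decoder cache: 12 layers x (2 self-attention + 2 cross-attention key/value tensors)
layers, kv_tensors, batch, heads, seq_len, head_dim, bytes_fp32 = 12, 4, 16, 16, 128, 64, 4
cache_bytes = layers * kv_tensors * batch * heads * seq_len * head_dim * bytes_fp32
print(cache_bytes / 2**20)  # 384.0 MiB, matching the estimate above
```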
I am using PT's BART in the experimentation.<|||||>The savings could be helpful, especially on larger models. I'm in favour of setting `use_cache` to `False` when `labels` is passed, since we can assume we are in a training regime -- but with a `warnings.warn` if `use_cache` is `True`, so we know we are overwriting an input option.<|||||>I reverted the changes done in TF models, and add `use_cache = False` for related PT models if `labels` is passed. <|||||>I think we should give a warning only if `use_cache` as `argument` is `True` (when `labels` is provided).
If `self.config.use_cache=True` but `use_cache` (as arg) is not specified, we can safely change `use_cache=False` without warning, right?
(This way, we can avoid showing repeated warning due to the default `self.config.use_cache=True`).
Or, we shouldn't set `use_cache=False` if a user provides `use_cache=True` explicitly?
Since the changes are in PT models now, cc @LysandreJik @patrickvonplaten @sgugger
https://github.com/huggingface/transformers/blob/aee4e7aa1b418c1c6c8fca28403571fbfbfcc8ed/src/transformers/models/bart/modeling_bart.py#L987<|||||>> I think we should give a warning only if `use_cache` as `argument` is `True` (when `labels` is provided). If `self.config.use_cache=True` but `use_cache` (as arg) is not specified, we can safely change `use_cache=False` without warning, right? (This way, we can avoid showing repeated warning due to the default `self.config.use_cache=True`).
>
> Or, we shouldn't set `use_cache=False` if a user provides `use_cache=True` explicitly?
>
> Since the changes are in PT models now, cc @LysandreJik @patrickvonplaten @sgugger
>
> https://github.com/huggingface/transformers/blob/aee4e7aa1b418c1c6c8fca28403571fbfbfcc8ed/src/transformers/models/bart/modeling_bart.py#L987
Yes I very much agree that's a nice insight! If `use_cache` is passed as `True` I think we should **still** change `use_cache=False` but this time add a warning. If it's not passed no need to throw a warning to not get repeated warnings. Think then this PR is good for merge. Could you also change the title to something like "Force use_cache to be False in PyTorch"?<|||||>> > I think we should give a warning only if `use_cache` as `argument` is `True` (when `labels` is provided). If `self.config.use_cache=True` but `use_cache` (as arg) is not specified, we can safely change `use_cache=False` without warning, right? (This way, we can avoid showing repeated warning due to the default `self.config.use_cache=True`).
> > Or, we shouldn't set `use_cache=False` if a user provides `use_cache=True` explicitly?
> > Since the changes are in PT models now, cc @LysandreJik @patrickvonplaten @sgugger
> > https://github.com/huggingface/transformers/blob/aee4e7aa1b418c1c6c8fca28403571fbfbfcc8ed/src/transformers/models/bart/modeling_bart.py#L987
>
> Yes I very much agree that's a nice insight! If `use_cache` is passed as `True` I think we should **still** change `use_cache=False` but this time add a warning. If it's not passed no need to throw a warning to not get repeated warnings. Think then this PR is good for merge. Could you also change the title to something like "Force use_cache to be False in PyTorch"?
Hi, @patrickvonplaten
I will change the title, but I haven't done anything for the warning (if `use_cache=True`). Should I do it in this PR, or leave it to another one?<|||||>> > > I think we should give a warning only if `use_cache` as `argument` is `True` (when `labels` is provided). If `self.config.use_cache=True` but `use_cache` (as arg) is not specified, we can safely change `use_cache=False` without warning, right? (This way, we can avoid showing repeated warning due to the default `self.config.use_cache=True`).
> > > Or, we shouldn't set `use_cache=False` if a user provides `use_cache=True` explicitly?
> > > Since the changes are in PT models now, cc @LysandreJik @patrickvonplaten @sgugger
> > > https://github.com/huggingface/transformers/blob/aee4e7aa1b418c1c6c8fca28403571fbfbfcc8ed/src/transformers/models/bart/modeling_bart.py#L987
> >
> >
> > Yes I very much agree that's a nice insight! If `use_cache` is passed as `True` I think we should **still** change `use_cache=False` but this time add a warning. If it's not passed no need to throw a warning to not get repeated warnings. Think then this PR is good for merge. Could you also change the title to something like "Force use_cache to be False in PyTorch"?
>
> Hi, @patrickvonplaten
>
> I will change the title, but I haven't done anything for the warning (if `use_cache=True`). Should I do it in this PR, or leave it to another one?
Think we can do it in this PR<|||||>Added warning. (We can add the same warning for TF models in a follow-up PR)<|||||>Great PR @ydshieh - thanks a lot! |
transformers | 15,384 | closed | Unsupported output type of N11onnxruntime22SequenceTensorTypeBaseE. Can't constant fold SequenceEmpty node 'SequenceEmpty_5330' | ## Environment
- `transformers` version:4.16.0.dev0
- Platform:centos7
- Python version:3.6.5
- PyTorch version (GPU?):1.10.1
- CUDA:10.2
- Cudnn:7.65
Models:
- T5, BART, Marian, Pegasus, EncoderDecoder: @patrickvonplaten
Library:
- Text generation: @patrickvonplaten @narsil
## Information
The problem arises when using:
* [ ] the official example scripts: [examples/onnx/pytorch/summarization](https://github.com/huggingface/transformers/tree/v4.15.0/examples/onnx/pytorch/summarization)
## Add
Thank you very much for your open-source code! I encountered the following problem when using the official code to convert BART into an ONNX model.
Debug info:
2022-01-28 15:48:27.150774771 [W:onnxruntime:, graph.cc:3526 CleanUnusedInitializersAndNodeArgs] Removing initializer '1751'. It is not used by any node and should be removed from the model.
2022-01-28 15:48:27.309347842 [W:onnxruntime:, graph.cc:3526 CleanUnusedInitializersAndNodeArgs] Removing initializer '5836'. It is not used by any node and should be removed from the model.
2022-01-28 15:48:27.309405007 [W:onnxruntime:, graph.cc:3526 CleanUnusedInitializersAndNodeArgs] Removing initializer '5882'. It is not used by any node and should be removed from the model.
2022-01-28 15:48:27.309424807 [W:onnxruntime:, graph.cc:3526 CleanUnusedInitializersAndNodeArgs] Removing initializer '5790'. It is not used by any node and should be removed from the model.
2022-01-28 15:48:27.322407481 [W:onnxruntime:, constant_folding.cc:202 ApplyImpl] Unsupported output type of N11onnxruntime22SequenceTensorTypeBaseE. Can't constant fold SequenceEmpty node 'SequenceEmpty_5330'
2022-01-28 15:48:27.322776093 [W:onnxruntime:, constant_folding.cc:202 ApplyImpl] Unsupported output type of N11onnxruntime22SequenceTensorTypeBaseE. Can't constant fold SequenceEmpty node 'SequenceEmpty_808'
2022-01-28 15:48:27.322825883 [W:onnxruntime:, constant_folding.cc:202 ApplyImpl] Unsupported output type of N11onnxruntime22SequenceTensorTypeBaseE. Can't constant fold SequenceEmpty node 'SequenceEmpty_1'
2022-01-28 15:48:27.326471594 [W:onnxruntime:, constant_folding.cc:202 ApplyImpl] Unsupported output type of N11onnxruntime22SequenceTensorTypeBaseE. Can't constant fold SequenceEmpty node 'SequenceEmpty_5128'
2022-01-28 15:48:27.362769188 [W:onnxruntime:, constant_folding.cc:202 ApplyImpl] Unsupported output type of N11onnxruntime22SequenceTensorTypeBaseE. Can't constant fold SequenceEmpty node 'SequenceEmpty_5330'
2022-01-28 15:48:27.363078495 [W:onnxruntime:, constant_folding.cc:202 ApplyImpl] Unsupported output type of N11onnxruntime22SequenceTensorTypeBaseE. Can't constant fold SequenceEmpty node 'SequenceEmpty_808'
2022-01-28 15:48:27.363121543 [W:onnxruntime:, constant_folding.cc:202 ApplyImpl] Unsupported output type of N11onnxruntime22SequenceTensorTypeBaseE. Can't constant fold SequenceEmpty node 'SequenceEmpty_1'
2022-01-28 15:48:27.366445273 [W:onnxruntime:, constant_folding.cc:202 ApplyImpl] Unsupported output type of N11onnxruntime22SequenceTensorTypeBaseE. Can't constant fold SequenceEmpty node 'SequenceEmpty_5128'
2022-01-28 15:48:27.429176464 [W:onnxruntime:, constant_folding.cc:202 ApplyImpl] Unsupported output type of N11onnxruntime22SequenceTensorTypeBaseE. Can't constant fold SequenceEmpty node 'SequenceEmpty_5330'
2022-01-28 15:48:27.429415152 [W:onnxruntime:, constant_folding.cc:202 ApplyImpl] Unsupported output type of N11onnxruntime22SequenceTensorTypeBaseE. Can't constant fold SequenceEmpty node 'SequenceEmpty_808'
2022-01-28 15:48:27.429447650 [W:onnxruntime:, constant_folding.cc:202 ApplyImpl] Unsupported output type of N11onnxruntime22SequenceTensorTypeBaseE. Can't constant fold SequenceEmpty node 'SequenceEmpty_1'
2022-01-28 15:48:27.431914509 [W:onnxruntime:, constant_folding.cc:202 ApplyImpl] Unsupported output type of N11onnxruntime22SequenceTensorTypeBaseE. Can't constant fold SequenceEmpty node 'SequenceEmpty_5128'
2022-01-28 15:48:27.570257513 [W:onnxruntime:, graph.cc:3526 CleanUnusedInitializersAndNodeArgs] Removing initializer '5152'. It is not used by any node and should be removed from the model.
2022-01-28 15:48:27.570293380 [W:onnxruntime:, graph.cc:3526 CleanUnusedInitializersAndNodeArgs] Removing initializer '5149'. It is not used by any node and should be removed from the model.
| 01-28-2022 07:56:59 | 01-28-2022 07:56:59 | Pinging @mfuntowicz for the onnx export.<|||||>Add
The export runs fine under onnxruntime, but when deployed to TensorRT, it will not parse the ONNX model<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored. |
transformers | 15,383 | closed | Validate models are loadable | # 🚀 Feature request
<!-- A clear and concise description of the feature proposal.
Please provide a link to the paper and code in case they exist. -->
Validating models uploaded to the hub in some way
## Motivation
<!-- Please outline the motivation for the proposal. Is your feature request
related to a problem? e.g., I'm always frustrated when [...]. If this is related
to another GitHub issue, please link here too. -->
So far I have only loaded t5-small models en masse (or generally, models that are not the original pretrained version). I have found that a third of the models are unloadable!
Followed this discussion: https://discuss.huggingface.co/t/cant-download-some-models-although-they-are-in-the-hub/13944
## Your contribution
<!-- Is there any way that you could help, e.g. by submitting a PR?
Make sure to read the CONTRIBUTING.MD readme:
https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md -->
I am not sure - I don't know how the Hub code itself works, but I think the fix is not too complicated for others who do.
I suggest that a loading check be added to the upload-to-the-hub flow (defaulting to `AutoModel.from_pretrained()`, or even not adding anything and always assuming that). Then, if the user's own loading code fails when they try to upload, the upload is rejected as unloadable. That way there won't be models that have a description but are actually missing some of the files needed to load them.
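A rough sketch of the kind of check being proposed (hypothetical - this is not an existing Hub feature, and the function name is made up):
```python
from transformers import AutoConfig, AutoModel, AutoTokenizer

def check_loadable(repo_id: str) -> bool:
    """Reject the upload if the repo cannot be loaded with the default Auto classes."""
    try:
        AutoConfig.from_pretrained(repo_id)
        AutoTokenizer.from_pretrained(repo_id)
        AutoModel.from_pretrained(repo_id)
        return True
    except Exception as err:
        print(f"Upload failed, model is unloadable: {err}")
        return False
```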
Moreover, this would make for an easy way for people to start, loading the model would be known (again, may be just skipped if assumed the default from_pretrained is always this way). | 01-28-2022 05:50:49 | 01-28-2022 05:50:49 | This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored. |
transformers | 15,382 | closed | Update modeling_blenderbot.py | Made the `BlenderbotEncoderLayer` have similar optional arguments to the `BlenderbotDecoderLayer` and other classes.
Fixes the `forward` call of the `BlenderbotEncoderLayer`.
<!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @
If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**.
Please tag fewer than 3 people.
Models:
- albert, bert, xlm: @LysandreJik
- blenderbot, bart, marian, pegasus, encoderdecoder, t5: @patrickvonplaten, @patil-suraj
- longformer, reformer, transfoxl, xlnet: @patrickvonplaten
- fsmt: @stas00
- funnel: @sgugger
- gpt2: @patrickvonplaten, @LysandreJik
- rag: @patrickvonplaten, @lhoestq
- tensorflow: @LysandreJik
Library:
- benchmarks: @patrickvonplaten
- deepspeed: @stas00
- ray/raytune: @richardliaw, @amogkam
- text generation: @patrickvonplaten
- tokenizers: @n1t0, @LysandreJik
- trainer: @sgugger
- pipelines: @LysandreJik
Documentation: @sgugger
HF projects:
- datasets: [different repo](https://github.com/huggingface/datasets)
- rust tokenizers: [different repo](https://github.com/huggingface/tokenizers)
Examples:
- maintained examples (not research project or legacy): @sgugger, @patil-suraj
- research_projects/bert-loses-patience: @JetRunner
- research_projects/distillation: @VictorSanh
-->
| 01-28-2022 05:30:01 | 01-28-2022 05:30:01 | The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_15382). All of your documentation changes will be reflected on that endpoint.<|||||>Could you also update this in `modeling_mbart.py` as well, because the encoder and decoder modules are in blenderbot are copied from mbart so we need to fix both of those. So
- update the mbart file
- and run `make fix-copies`, which will then fix this for all other models which have a similar copy.
Let me know if you need help with it :) <|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored. |
transformers | 15,381 | closed | Does huggingface allow to perform n-fold cross validation and custom loss function using Hugging face Trainer and save best model? | I am solving binary classification problem using `Roberta-Toxic` model. My classes are highly skewed. ( `2% positive sample` )
I thought to perform `n-fold cross-validation`. The first thing that came to my mind is to use the Trainer in a loop.
`I initialise the model every time in the loop.`
Here is the simple code:
```
kf = KFold(n_splits=10, shuffle=True, random_state=0)
def compute_metrics(eval_pred):
logits, labels = eval_pred
predictions = np.argmax(logits, axis=-1)
acc = np.sum(predictions == labels) / predictions.shape[0]
return {"accuracy" : acc}
training_args = tr.TrainingArguments(
#report_to = 'wandb',
output_dir='/home/pc/results_new1', # output directory
overwrite_output_dir = True,
num_train_epochs=2, # total number of training epochs
per_device_train_batch_size=32, # batch size per device during training
per_device_eval_batch_size=32, # batch size for evaluation
learning_rate=2e-5,
warmup_steps=1000, # number of warmup steps for learning rate scheduler
weight_decay=0.01, # strength of weight decay
logging_dir='./logs_new1', # directory for storing logs
logging_steps=220,
evaluation_strategy="epoch"
,save_strategy="epoch"
,load_best_model_at_end=True
)
counter = 0
results_lst = []
train_text=train["text"]
train_label=train["label"]
for train_idx, val_idx in kf.split(train_text):
print("Starting fold", counter)
# split data
train_texts_base = train_text.iloc[train_idx].tolist()
train_labels_base = train_label.iloc[train_idx].tolist()
val_texts = train_text.iloc[val_idx].tolist()
val_labels = train_label.iloc[val_idx].tolist()
# do tokenization
train_encodings = tokenizer(train_texts_base, truncation=True, padding=True, max_length=512, return_tensors="pt")
val_encodings = tokenizer(val_texts, truncation=True, padding=True, max_length=512, return_tensors="pt")
# make datasets
train_data = SEDataset(train_encodings, train_labels_base)
val_data = SEDataset(val_encodings, val_labels)
# train
model = tr.RobertaForSequenceClassification.from_pretrained("/home/unbiased_toxic_roberta",problem_type="single_label_classification", num_labels=2, ignore_mismatched_sizes=True)
model.to(device)
trainer = tr.Trainer(
model=model, # the instantiated 🤗 Transformers model to be trained
args=training_args, # training arguments, defined above
train_dataset=train_data, # training dataset
eval_dataset=val_data, # evaluation dataset
compute_metrics=compute_metrics
)
trainer.train()
# eval
preds = trainer.predict(val_data)
result_df = pd.DataFrame({
"text" :val_texts,
"score" : softmax(preds[0], axis=1)[:,1]
})
results_lst.append(result_df)
counter+=1
trainer.save_model("/home/pc/result_toxic_model_new1")
```
`The possible drawback I see in this approach is that the model saved will be the last model in the loop, which might not be the most robust model.`
`Is there any capability which saves the average model or the best model? Or does the Trainer provide n-fold cross-validation automatically?`
Also, apart from `accuracy`, which `compute_metrics` can I add for highly skewed data?
```
def compute_metrics(eval_pred):
logits, labels = eval_pred
predictions = np.argmax(logits, axis=-1)
acc = np.sum(predictions == labels) / predictions.shape[0]
return {"accuracy" : acc}
```
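For instance, a hedged variant that also reports precision, recall and F1 - usually more informative than accuracy for a 2% positive class - could look like this (illustrative only; assumes `numpy` and `scikit-learn` are installed):
```python
import numpy as np
from sklearn.metrics import precision_recall_fscore_support

def compute_metrics(eval_pred):
    logits, labels = eval_pred
    predictions = np.argmax(logits, axis=-1)
    precision, recall, f1, _ = precision_recall_fscore_support(
        labels, predictions, average="binary", zero_division=0
    )
    acc = np.sum(predictions == labels) / predictions.shape[0]
    return {"accuracy": acc, "precision": precision, "recall": recall, "f1": f1}
```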
**Does the above function only capture accuracy on the evaluation data?** How can I capture the same for the training loss?
What is the loss function used here?
I get this output:
```
{'train_loss': 0.12655824422836304, 'train_accuracy': 0.9573275368950921, 'train_runtime': 49.3499, 'train_samples_per_second': 177.123, 'train_steps_per_second': 2.776, 'epoch': 3.0}
{'eval_loss': 0.07398280501365662, 'eval_accuracy': 0.9754901960784313, 'eval_runtime': 0.9742, 'eval_samples_per_second': 418.785, 'eval_steps_per_second': 7.185, 'epoch': 3.0}
```
So the `train and eval accuracy` is coming from my `compute_metrics` function?
Where are `train_loss` and `eval_loss` coming from?
| 01-28-2022 04:21:19 | 01-28-2022 04:21:19 | Hello, thanks for opening an issue! We try to keep the github issues for bugs/feature requests.
Could you ask your question on the [forum](https://discuss.huggingface.co) instead?
cc @sgugger
Thanks!<|||||>Hello @pratikchhapolika ,
I am using your simple code for a 10-fold cross-validation on a BERT model using Hugging Face. If the code is public, can you share the repository? I am having a hard time with the **SEDataset** function, converting tensors to a dataframe and then training.
Thanks!
<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored. |
transformers | 15,380 | closed | [docs] fix wrong file name in `pr_check` | # What does this PR do?
This corrects the file name `tests_fetcher.py` in the document of PR check.
@sgugger @LysandreJik | 01-28-2022 02:22:31 | 01-28-2022 02:22:31 | _The documentation is not available anymore as the PR was closed or merged._ |
transformers | 15,379 | closed | Save code of registered custom models | # What does this PR do?
This PR changes the way `save_pretrained` works for user custom objects when they are registered to an auto API via the the new command `register_for_auto_class`. Here is an example:
```py
# Custom config
from test_model.configuration import NewModelConfig
# Custom model
from test_model.modeling import NewModel
# No need to specify AutoConfig here, that's the default and the only auto class that makes sense for a config.
NewModelConfig.register_for_auto_class()
# AutoModel is also the default but we show it here as you could use an other auto model class.
NewModel.register_for_auto_class("AutoModel")
# Just an example of model definition
model = NewModel.from_pretrained("hf-internal-testing/tiny-bert")
model.save_pretrained("test-dynamic")
# Model weights, config and the files with the code of NewModel and NewModelConfig are copied in this folder
```
If the user pushes the folder to the Hub, anyone will then be able to use the Auto api to load this model and config (as long as they pass `trust_remote_code=True`. | 01-27-2022 21:05:05 | 01-27-2022 21:05:05 | _The documentation is not available anymore as the PR was closed or merged._ |
transformers | 15,378 | closed | Add init to BORT | Adds an `__init__.py` file to the `src/transformers/models/bort` folder so that it can be imported. | 01-27-2022 19:41:36 | 01-27-2022 19:41:36 | _The documentation is not available anymore as the PR was closed or merged._<|||||>Will release a patch with this by tomorrow. |
transformers | 15,377 | closed | No module named 'transformers.models.bort' | ## Environment info
<!-- You can run the command `transformers-cli env` and copy-and-paste its output below.
Don't forget to fill out the missing fields in that output! -->
- `transformers` version: 4.16.0
- Platform: Linux-5.4.144+-x86_64-with-Ubuntu-18.04-bionic
- Python version: 3.7.12
- PyTorch version (GPU?): 1.10.0+cu111 (True)
- Tensorflow version (GPU?): 2.7.0 (True)
- Flax version (CPU?/GPU?/TPU?): not installed (NA)
- Jax version: not installed
- JaxLib version: not installed
- Using GPU in script?: <fill in>
- Using distributed or parallel set-up in script?: <fill in>
### Who can help
@sgugger @Narsil
## Error
```
Traceback (most recent call last):
File "/usr/local/lib/python3.7/dist-packages/transformers/file_utils.py", line 2704, in _get_module
return importlib.import_module("." + module_name, self.__name__)
File "/usr/lib/python3.7/importlib/__init__.py", line 127, in import_module
return _bootstrap._gcd_import(name[level:], package, level)
File "<frozen importlib._bootstrap>", line 1006, in _gcd_import
File "<frozen importlib._bootstrap>", line 983, in _find_and_load
File "<frozen importlib._bootstrap>", line 965, in _find_and_load_unlocked
ModuleNotFoundError: No module named 'transformers.models.bort'
The above exception was the direct cause of the following exception:
Traceback (most recent call last):
File "/usr/lib/python3.7/runpy.py", line 193, in _run_module_as_main
"__main__", mod_spec)
File "/usr/lib/python3.7/runpy.py", line 85, in _run_code
exec(code, run_globals)
File "/content/infoADAPET/src/test.py", line 5, in <module>
from transformers import *
File "<frozen importlib._bootstrap>", line 1031, in _handle_fromlist
File "<frozen importlib._bootstrap>", line 1032, in _handle_fromlist
File "/usr/local/lib/python3.7/dist-packages/transformers/file_utils.py", line 2692, in __getattr__
value = self._get_module(name)
File "/usr/local/lib/python3.7/dist-packages/transformers/file_utils.py", line 2708, in _get_module
) from e
RuntimeError: Failed to import transformers.models.bort because of the following error (look up to see its traceback):
No module named 'transformers.models.bort'
```
| 01-27-2022 18:42:49 | 01-27-2022 18:42:49 | v4.16.0 has this issue changing to v4.15.0 fixed it<|||||>What command did you run exactly?<|||||>from transformers import *<|||||>Thanks for flagging, a fix is on its way :-)<|||||>Fixing on https://github.com/huggingface/transformers/pull/15378 |
transformers | 15,376 | closed | Fix tests_fetcher | # What does this PR do?
This fixes the `test_fetcher` module which was broken by the "add-new-model-like" PR. The reason is that its test file has some relative imports in example modules that are picked by the test fetcher. | 01-27-2022 16:27:03 | 01-27-2022 16:27:03 | _The documentation is not available anymore as the PR was closed or merged._ |
transformers | 15,375 | closed | Example script for PushToHubCallback | null | 01-27-2022 15:56:23 | 01-27-2022 15:56:23 | _The documentation is not available anymore as the PR was closed or merged._ |
transformers | 15,374 | closed | Add proper documentation for Keras callbacks | # What does this PR do?
Fixes the Keras Callback doc page. | 01-27-2022 15:41:05 | 01-27-2022 15:41:05 | _The documentation is not available anymore as the PR was closed or merged._ |
transformers | 15,373 | closed | Super-small fix stops us confusing Keras console logging by modifying… | Super-small PR - some small cosmetic issues were caused by me editing the Keras `logs` dict without realizing that it would get used by Keras again afterwards. | 01-27-2022 15:40:20 | 01-27-2022 15:40:20 | _The documentation is not available anymore as the PR was closed or merged._ |
transformers | 15,372 | closed | Run LayoutXLM without image input not possible | ## Environment info
- `transformers` version: 4.15.0
- Platform:
- Python version: 3.6.9
- PyTorch version (GPU?): 1.8
- Tensorflow version (GPU?): 1.14.0
- Using GPU in script?: yes
- Using distributed or parallel set-up in script?: no
### Who can help
@NielsRogge
## Information
Model I am using (Bert, XLNet ...): LayoutXLM
The problem arises when using:
* [ ] the official example scripts: (give details below)
* [x] my own modified scripts: (give details below)
The tasks I am working on is:
* [ ] an official GLUE/SQUaD task: (give the name)
* [x] my own task or dataset: (give details below)
## To reproduce
import_tokennizer='microsoft/layoutxlm-base'
tokenizer = LayoutXLMTokenizer.from_pretrained(import_tokennizer)
model = LayoutLMv2Model.from_pretrained(import_tokennizer, output_attentions = True)
model.to(device)
encoding = tokenizer(['Image Problem'],boxes=[[56,78,102,89]], padding='max_length', add_special_tokens = True, truncation=True, return_tensors="pt", max_length=512, return_attention_mask = True, return_token_type_ids=True)
outputs = model(input_ids=encoding.input_ids.to(device),
bbox=encoding.bbox.to(device),
attention_mask=encoding.attention_mask.to(device),
token_type_ids=encoding.token_type_ids.to(device))
## Error messages

## Expected behavior
I would expect to be able to use LayoutXLM without the image input, but I can't make it work. I think it is related to the fact that there is no "if image is None" check, as there is for the other inputs in the forward function.
And in that way, when I do not pass image as input and the execution gets here:

in the forward function, it throws an error. I believe this makes it impossible to use the model without the image input. Am I right? And if I am, weren't we supposed to be able to?
Thank you.
| 01-27-2022 14:40:03 | 01-27-2022 14:40:03 | Hi,
LayoutXLM, like LayoutLMv2, expects the `image` as input. Internally, LayoutLMv2 (and LayoutXLM) will forward the `image` through their visual backbone, which will serve as additional features in order to make predictions.
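For illustration, a minimal sketch of feeding a blank dummy image, reusing the variables from the snippet above (the `(1, 3, 224, 224)` shape and the pixel value of 255 are assumptions about what the visual backbone expects):
```python
import torch

# blank "page" stand-in for the document image
dummy_image = torch.full((1, 3, 224, 224), 255.0)

outputs = model(
    input_ids=encoding.input_ids.to(device),
    bbox=encoding.bbox.to(device),
    attention_mask=encoding.attention_mask.to(device),
    token_type_ids=encoding.token_type_ids.to(device),
    image=dummy_image.to(device),
)
```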
The `image` is a required input.<|||||>Hi, thank you for the reply. Should I close the issue ?<|||||>Yes. Wondering why you want to use the model without `image` input?<|||||>I wanted to try a simpler approach first. I am working with PDF files, and already have words and their coordinates, but to have the images it would imply for me to create one img for each pdf page and that would lead to a great increase in space.
I had tried something similar, but I think it was with LayoutLM where I used no image, and I think I got confused. My first thought was to test the impact of the relative positions, without considering the image yet.<|||||>Note that in any case, you can fork the library and tweak the code to your liking (i.e. tweak `modeling_layoutlmv2.py`).
You can also leverage [LayoutLM](https://huggingface.co/docs/transformers/model_doc/layoutlm) (v1) which does not take the image as input. Note that providing the image as input will result in a performance increase.<|||||>Thank you for the advice. Will do. <|||||>just subclass the model, change forward pass to encode a 3x3 white pixel grid and it will work ;) |
transformers | 15,371 | closed | Add a device argument to the eval script | This should help speed up the evaluation and resolve some RAM OOM errors. | 01-27-2022 14:36:56 | 01-27-2022 14:36:56 | _The documentation is not available anymore as the PR was closed or merged._ |
transformers | 15,370 | closed | Implement fixes for TrainingArguments doc | Co-authored-by: osanseviero <[email protected]>
This implements the suggestions I didn't add in #15327 because I forgot to click on a button :-/ | 01-27-2022 14:18:57 | 01-27-2022 14:18:57 | _The documentation is not available anymore as the PR was closed or merged._ |
transformers | 15,369 | closed | Bump numpy from 1.19.2 to 1.21.0 in /examples/research_projects/lxmert | Bumps [numpy](https://github.com/numpy/numpy) from 1.19.2 to 1.21.0.
<details>
<summary>Release notes</summary>
<p><em>Sourced from <a href="https://github.com/numpy/numpy/releases">numpy's releases</a>.</em></p>
<blockquote>
<h2>v1.21.0</h2>
<h1>NumPy 1.21.0 Release Notes</h1>
<p>The NumPy 1.21.0 release highlights are</p>
<ul>
<li>continued SIMD work covering more functions and platforms,</li>
<li>initial work on the new dtype infrastructure and casting,</li>
<li>universal2 wheels for Python 3.8 and Python 3.9 on Mac,</li>
<li>improved documentation,</li>
<li>improved annotations,</li>
<li>new <code>PCG64DXSM</code> bitgenerator for random numbers.</li>
</ul>
<p>In addition there are the usual large number of bug fixes and other
improvements.</p>
<p>The Python versions supported for this release are 3.7-3.9. Official
support for Python 3.10 will be added when it is released.</p>
<p>:warning: Warning: there are unresolved problems compiling NumPy 1.21.0 with gcc-11.1 .</p>
<ul>
<li>Optimization level <code>-O3</code> results in many wrong warnings when running the tests.</li>
<li>On some hardware NumPy will hang in an infinite loop.</li>
</ul>
<h2>New functions</h2>
<h3>Add PCG64DXSM BitGenerator</h3>
<p>Uses of the PCG64 BitGenerator in a massively-parallel context have
been shown to have statistical weaknesses that were not apparent at the
first release in numpy 1.17. Most users will never observe this weakness
and are safe to continue to use PCG64. We have introduced a new
PCG64DXSM BitGenerator that will eventually become the new default
BitGenerator implementation used by <code>default_rng</code> in future releases.
PCG64DXSM solves the statistical weakness while preserving the
performance and the features of PCG64.</p>
<p>See <code>upgrading-pcg64</code> for more details.</p>
<p>(<a href="https://github-redirect.dependabot.com/numpy/numpy/pull/18906">gh-18906</a>)</p>
<h2>Expired deprecations</h2>
<ul>
<li>The <code>shape</code> argument <code>numpy.unravel_index</code> cannot be
passed as <code>dims</code> keyword argument anymore. (Was deprecated in NumPy
1.16.)</li>
</ul>
<!-- raw HTML omitted -->
</blockquote>
<p>... (truncated)</p>
</details>
<details>
<summary>Commits</summary>
<ul>
<li><a href="https://github.com/numpy/numpy/commit/b235f9e701e14ed6f6f6dcba885f7986a833743f"><code>b235f9e</code></a> Merge pull request <a href="https://github-redirect.dependabot.com/numpy/numpy/issues/19283">#19283</a> from charris/prepare-1.21.0-release</li>
<li><a href="https://github.com/numpy/numpy/commit/34aebc2824cf8c2bdbe19040b82f98f18557c8ba"><code>34aebc2</code></a> MAINT: Update 1.21.0-notes.rst</li>
<li><a href="https://github.com/numpy/numpy/commit/493b64bfe9c5396498325b87e5e80e1917555c41"><code>493b64b</code></a> MAINT: Update 1.21.0-changelog.rst</li>
<li><a href="https://github.com/numpy/numpy/commit/07d7e72ab6880c05b5fdd98482cf88982e778393"><code>07d7e72</code></a> MAINT: Remove accidentally created directory.</li>
<li><a href="https://github.com/numpy/numpy/commit/032fca5e2e9749b152ec56153f476e05efdff287"><code>032fca5</code></a> Merge pull request <a href="https://github-redirect.dependabot.com/numpy/numpy/issues/19280">#19280</a> from charris/backport-19277</li>
<li><a href="https://github.com/numpy/numpy/commit/7d25b81025a50cc0368f5727c65e875ca769469a"><code>7d25b81</code></a> BUG: Fix refcount leak in ResultType</li>
<li><a href="https://github.com/numpy/numpy/commit/fa5754e8c159a37fcd9345df261cf82821088ea0"><code>fa5754e</code></a> BUG: Add missing DECREF in new path</li>
<li><a href="https://github.com/numpy/numpy/commit/61127bb4d46d523b699da1b63abaa5035670da27"><code>61127bb</code></a> Merge pull request <a href="https://github-redirect.dependabot.com/numpy/numpy/issues/19268">#19268</a> from charris/backport-19264</li>
<li><a href="https://github.com/numpy/numpy/commit/143d45fff3ed9e051bdeef7bdb4df38025ea7d1c"><code>143d45f</code></a> Merge pull request <a href="https://github-redirect.dependabot.com/numpy/numpy/issues/19269">#19269</a> from charris/backport-19228</li>
<li><a href="https://github.com/numpy/numpy/commit/d80e4738f781a1d206bbc04a2e863299e5f2e104"><code>d80e473</code></a> BUG: Removed typing for == and != in dtypes</li>
<li>Additional commits viewable in <a href="https://github.com/numpy/numpy/compare/v1.19.2...v1.21.0">compare view</a></li>
</ul>
</details>
<br />
[](https://docs.github.com/en/github/managing-security-vulnerabilities/about-dependabot-security-updates#about-compatibility-scores)
Dependabot will resolve any conflicts with this PR as long as you don't alter it yourself. You can also trigger a rebase manually by commenting `@dependabot rebase`.
[//]: # (dependabot-automerge-start)
[//]: # (dependabot-automerge-end)
---
<details>
<summary>Dependabot commands and options</summary>
<br />
You can trigger Dependabot actions by commenting on this PR:
- `@dependabot rebase` will rebase this PR
- `@dependabot recreate` will recreate this PR, overwriting any edits that have been made to it
- `@dependabot merge` will merge this PR after your CI passes on it
- `@dependabot squash and merge` will squash and merge this PR after your CI passes on it
- `@dependabot cancel merge` will cancel a previously requested merge and block automerging
- `@dependabot reopen` will reopen this PR if it is closed
- `@dependabot close` will close this PR and stop Dependabot recreating it. You can achieve the same result by closing it manually
- `@dependabot ignore this major version` will close this PR and stop Dependabot creating any more for this major version (unless you reopen the PR or upgrade to it yourself)
- `@dependabot ignore this minor version` will close this PR and stop Dependabot creating any more for this minor version (unless you reopen the PR or upgrade to it yourself)
- `@dependabot ignore this dependency` will close this PR and stop Dependabot creating any more for this dependency (unless you reopen the PR or upgrade to it yourself)
- `@dependabot use these labels` will set the current labels as the default for future PRs for this repo and language
- `@dependabot use these reviewers` will set the current reviewers as the default for future PRs for this repo and language
- `@dependabot use these assignees` will set the current assignees as the default for future PRs for this repo and language
- `@dependabot use this milestone` will set the current milestone as the default for future PRs for this repo and language
You can disable automated security fix PRs for this repo from the [Security Alerts page](https://github.com/huggingface/transformers/network/alerts).
</details> | 01-27-2022 13:30:34 | 01-27-2022 13:30:34 | _The documentation is not available anymore as the PR was closed or merged._ |
transformers | 15,368 | closed | Bump notebook from 6.1.5 to 6.4.1 in /examples/research_projects/visual_bert | Bumps [notebook](http://jupyter.org) from 6.1.5 to 6.4.1.
[](https://docs.github.com/en/github/managing-security-vulnerabilities/about-dependabot-security-updates#about-compatibility-scores)
Dependabot will resolve any conflicts with this PR as long as you don't alter it yourself. You can also trigger a rebase manually by commenting `@dependabot rebase`.
[//]: # (dependabot-automerge-start)
[//]: # (dependabot-automerge-end)
---
<details>
<summary>Dependabot commands and options</summary>
<br />
You can trigger Dependabot actions by commenting on this PR:
- `@dependabot rebase` will rebase this PR
- `@dependabot recreate` will recreate this PR, overwriting any edits that have been made to it
- `@dependabot merge` will merge this PR after your CI passes on it
- `@dependabot squash and merge` will squash and merge this PR after your CI passes on it
- `@dependabot cancel merge` will cancel a previously requested merge and block automerging
- `@dependabot reopen` will reopen this PR if it is closed
- `@dependabot close` will close this PR and stop Dependabot recreating it. You can achieve the same result by closing it manually
- `@dependabot ignore this major version` will close this PR and stop Dependabot creating any more for this major version (unless you reopen the PR or upgrade to it yourself)
- `@dependabot ignore this minor version` will close this PR and stop Dependabot creating any more for this minor version (unless you reopen the PR or upgrade to it yourself)
- `@dependabot ignore this dependency` will close this PR and stop Dependabot creating any more for this dependency (unless you reopen the PR or upgrade to it yourself)
- `@dependabot use these labels` will set the current labels as the default for future PRs for this repo and language
- `@dependabot use these reviewers` will set the current reviewers as the default for future PRs for this repo and language
- `@dependabot use these assignees` will set the current assignees as the default for future PRs for this repo and language
- `@dependabot use this milestone` will set the current milestone as the default for future PRs for this repo and language
You can disable automated security fix PRs for this repo from the [Security Alerts page](https://github.com/huggingface/transformers/network/alerts).
</details> | 01-27-2022 13:30:22 | 01-27-2022 13:30:22 | _The documentation is not available anymore as the PR was closed or merged._ |
transformers | 15,367 | closed | Bump numpy from 1.19.2 to 1.21.0 in /examples/research_projects/visual_bert | Bumps [numpy](https://github.com/numpy/numpy) from 1.19.2 to 1.21.0.
<details>
<summary>Release notes</summary>
<p><em>Sourced from <a href="https://github.com/numpy/numpy/releases">numpy's releases</a>.</em></p>
<blockquote>
<h2>v1.21.0</h2>
<h1>NumPy 1.21.0 Release Notes</h1>
<p>The NumPy 1.21.0 release highlights are</p>
<ul>
<li>continued SIMD work covering more functions and platforms,</li>
<li>initial work on the new dtype infrastructure and casting,</li>
<li>universal2 wheels for Python 3.8 and Python 3.9 on Mac,</li>
<li>improved documentation,</li>
<li>improved annotations,</li>
<li>new <code>PCG64DXSM</code> bitgenerator for random numbers.</li>
</ul>
<p>In addition there are the usual large number of bug fixes and other
improvements.</p>
<p>The Python versions supported for this release are 3.7-3.9. Official
support for Python 3.10 will be added when it is released.</p>
<p>:warning: Warning: there are unresolved problems compiling NumPy 1.21.0 with gcc-11.1 .</p>
<ul>
<li>Optimization level <code>-O3</code> results in many wrong warnings when running the tests.</li>
<li>On some hardware NumPy will hang in an infinite loop.</li>
</ul>
<h2>New functions</h2>
<h3>Add PCG64DXSM BitGenerator</h3>
<p>Uses of the PCG64 BitGenerator in a massively-parallel context have
been shown to have statistical weaknesses that were not apparent at the
first release in numpy 1.17. Most users will never observe this weakness
and are safe to continue to use PCG64. We have introduced a new
PCG64DXSM BitGenerator that will eventually become the new default
BitGenerator implementation used by <code>default_rng</code> in future releases.
PCG64DXSM solves the statistical weakness while preserving the
performance and the features of PCG64.</p>
<p>See <code>upgrading-pcg64</code> for more details.</p>
<p>(<a href="https://github-redirect.dependabot.com/numpy/numpy/pull/18906">gh-18906</a>)</p>
<h2>Expired deprecations</h2>
<ul>
<li>The <code>shape</code> argument <code>numpy.unravel_index</code> cannot be
passed as <code>dims</code> keyword argument anymore. (Was deprecated in NumPy
1.16.)</li>
</ul>
<!-- raw HTML omitted -->
</blockquote>
<p>... (truncated)</p>
</details>
<details>
<summary>Commits</summary>
<ul>
<li><a href="https://github.com/numpy/numpy/commit/b235f9e701e14ed6f6f6dcba885f7986a833743f"><code>b235f9e</code></a> Merge pull request <a href="https://github-redirect.dependabot.com/numpy/numpy/issues/19283">#19283</a> from charris/prepare-1.21.0-release</li>
<li><a href="https://github.com/numpy/numpy/commit/34aebc2824cf8c2bdbe19040b82f98f18557c8ba"><code>34aebc2</code></a> MAINT: Update 1.21.0-notes.rst</li>
<li><a href="https://github.com/numpy/numpy/commit/493b64bfe9c5396498325b87e5e80e1917555c41"><code>493b64b</code></a> MAINT: Update 1.21.0-changelog.rst</li>
<li><a href="https://github.com/numpy/numpy/commit/07d7e72ab6880c05b5fdd98482cf88982e778393"><code>07d7e72</code></a> MAINT: Remove accidentally created directory.</li>
<li><a href="https://github.com/numpy/numpy/commit/032fca5e2e9749b152ec56153f476e05efdff287"><code>032fca5</code></a> Merge pull request <a href="https://github-redirect.dependabot.com/numpy/numpy/issues/19280">#19280</a> from charris/backport-19277</li>
<li><a href="https://github.com/numpy/numpy/commit/7d25b81025a50cc0368f5727c65e875ca769469a"><code>7d25b81</code></a> BUG: Fix refcount leak in ResultType</li>
<li><a href="https://github.com/numpy/numpy/commit/fa5754e8c159a37fcd9345df261cf82821088ea0"><code>fa5754e</code></a> BUG: Add missing DECREF in new path</li>
<li><a href="https://github.com/numpy/numpy/commit/61127bb4d46d523b699da1b63abaa5035670da27"><code>61127bb</code></a> Merge pull request <a href="https://github-redirect.dependabot.com/numpy/numpy/issues/19268">#19268</a> from charris/backport-19264</li>
<li><a href="https://github.com/numpy/numpy/commit/143d45fff3ed9e051bdeef7bdb4df38025ea7d1c"><code>143d45f</code></a> Merge pull request <a href="https://github-redirect.dependabot.com/numpy/numpy/issues/19269">#19269</a> from charris/backport-19228</li>
<li><a href="https://github.com/numpy/numpy/commit/d80e4738f781a1d206bbc04a2e863299e5f2e104"><code>d80e473</code></a> BUG: Removed typing for == and != in dtypes</li>
<li>Additional commits viewable in <a href="https://github.com/numpy/numpy/compare/v1.19.2...v1.21.0">compare view</a></li>
</ul>
</details>
<br />
[](https://docs.github.com/en/github/managing-security-vulnerabilities/about-dependabot-security-updates#about-compatibility-scores)
Dependabot will resolve any conflicts with this PR as long as you don't alter it yourself. You can also trigger a rebase manually by commenting `@dependabot rebase`.
[//]: # (dependabot-automerge-start)
[//]: # (dependabot-automerge-end)
---
<details>
<summary>Dependabot commands and options</summary>
<br />
You can trigger Dependabot actions by commenting on this PR:
- `@dependabot rebase` will rebase this PR
- `@dependabot recreate` will recreate this PR, overwriting any edits that have been made to it
- `@dependabot merge` will merge this PR after your CI passes on it
- `@dependabot squash and merge` will squash and merge this PR after your CI passes on it
- `@dependabot cancel merge` will cancel a previously requested merge and block automerging
- `@dependabot reopen` will reopen this PR if it is closed
- `@dependabot close` will close this PR and stop Dependabot recreating it. You can achieve the same result by closing it manually
- `@dependabot ignore this major version` will close this PR and stop Dependabot creating any more for this major version (unless you reopen the PR or upgrade to it yourself)
- `@dependabot ignore this minor version` will close this PR and stop Dependabot creating any more for this minor version (unless you reopen the PR or upgrade to it yourself)
- `@dependabot ignore this dependency` will close this PR and stop Dependabot creating any more for this dependency (unless you reopen the PR or upgrade to it yourself)
- `@dependabot use these labels` will set the current labels as the default for future PRs for this repo and language
- `@dependabot use these reviewers` will set the current reviewers as the default for future PRs for this repo and language
- `@dependabot use these assignees` will set the current assignees as the default for future PRs for this repo and language
- `@dependabot use this milestone` will set the current milestone as the default for future PRs for this repo and language
You can disable automated security fix PRs for this repo from the [Security Alerts page](https://github.com/huggingface/transformers/network/alerts).
</details> | 01-27-2022 13:30:15 | 01-27-2022 13:30:15 | _The documentation is not available anymore as the PR was closed or merged._ |
transformers | 15,366 | closed | Issues with custom dataset in Wav2Vec2 | @patrickvonplaten @anton-l
We are training Wav2Vec2 using the run_speech_recognition_ctc_bnb.py script.
This is working fine with Common Voice datasets; however, when using our custom dataset and data loader at [NbAiLab/NPSC](https://huggingface.co/datasets/NbAiLab/NPSC), it crashes after roughly 1 epoch with the following stack trace:

We are able to work around the issue, for instance by adding this check in line#222 in transformers/models/wav2vec2/modeling_wav2vec2.py:
```python
if input_length - (mask_length - 1) < num_masked_span:
num_masked_span = input_length - (mask_length - 1)
```
Interestingly, these are the variable values before the adjustment:
```
input_length=10
mask_length=10
num_masked_span=2
```
After adjusting num_masked_span to 1, the training script runs. The issue is also fixed by setting “replace=True” in the same function.
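For illustration, here is a minimal standalone sketch of why sampling fails with these values (it mirrors the sampling step, but is not the actual library code):
```python
import numpy as np

# hypothetical standalone sketch using the values above
input_length, mask_length, num_masked_span = 10, 10, 2

# only input_length - (mask_length - 1) = 1 valid start position exists for a span of length 10
candidate_starts = np.arange(input_length - (mask_length - 1))

# asking for 2 spans without replacement from 1 candidate raises
# "ValueError: Cannot take a larger sample than population when 'replace=False'"
np.random.choice(candidate_starts, num_masked_span, replace=False)
```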
Do you have any idea what is causing this, and how to fix this error permanently? If you do not think this is a Datasets issue, feel free to move the issue.
| 01-27-2022 12:32:24 | 01-27-2022 12:32:24 | Hey @peregilk,
I'm happy to look into it! Could you give me a minimal reproducible code example (if the training data is not too huge a simple bash command is fine as well)<|||||>Hi @patrickvonplaten.
We are not exactly sure where the error is, but running any of the default training scripts with these parameters reproduces the error. Note that the error happens after roughly one epoch:
```bash
python run_speech_recognition_ctc_bnb.py \
--dataset_name="NbAiLab/NPSC" \
--dataset_config_name="16K_mp3" \
--text_column_name="text" \
--model_name_or_path="facebook/wav2vec2-xls-r-300m" \
...
# I would assume this will give you a smaller train set (but have not tested it)
--train_split_name="test" \
```
<|||||><s>Hey @peregilk,
I'm sadly getting a couple of errors when running your command, *e.g.*:
```bash
Traceback (most recent call last):
File "run_speech_recognition_ctc_bnb.py", line 760, in <module>
main()
File "run_speech_recognition_ctc_bnb.py", line 490, in main
vocab_dict = create_vocabulary_from_data(
File "run_speech_recognition_ctc_bnb.py", line 322, in create_vocabulary_from_data
remove_columns=datasets["train"].column_names,
File "/home/patrick/python_bin/datasets/dataset_dict.py", line 41, in __getitem__
return super().__getitem__(k)
KeyError: 'train'
```
Could you try to make the command runnable? </s>
See next comment<|||||>Actually now having read through your comment more carefully, I think we don't even need to reproduce it - the problem is (as you probably found) that there is an example of very short input length. An input_length of 10 is very short; it corresponds to less than 0.1 seconds.
The fix that you proposed actually looks like a very good one! Would you be interested in opening a PR? I'd be happy to help with adding a nice test and making the function a bit more robust. Otherwise happy to open a PR myself :-)<|||||>Thanks @patrickvonplaten. I can submit a PR on this but it has to wait until after the weekend.
I verified that there are quite a few small sentences in the corpus. Most are valid though, like "OK", "Thanks", "Yes" etc. Many are between 0.1 and 0.2 seconds. We should probably keep these in the corpus, and fix the error here.
A slightly related question. During initialisation of the model, I keep getting this warning:
```"The following columns in the training set don't have a corresponding argument in `Wav2Vec2ForCTC.forward` and have been ignored: input_length."```
I see this also refers to the same ```input_length``` variable. Is this in any way related, and can it affect this?<|||||>Hey @peregilk,
The warning is totally normal and you can discard this!<|||||>I created a pull request for the num_masked_span issue: "Update modeling_wav2vec2.py #15423"<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored.<|||||>This can be closed I think no? |
transformers | 15,365 | closed | Fix initialization of mutable types | # What does this PR do?
This PR fixes the initialization of mutable types for Swin Transformer, SegFormer and BEiT (as pointed out by @FrancescoSaverioZuppichini in #15277).
It also fixes a small docs issue for Swin Transformer. | 01-27-2022 11:04:33 | 01-27-2022 11:04:33 | The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_15365). All of your documentation changes will be reflected on that endpoint.<|||||>This is present in many more configurations, so if we decide to switch it should be for all of them and not only for three configs.
Having a default for a mutable parameter is dangerous when the function mutates the parameter, which is not the case for our config inits, it's merely stored. I don't think we need to switch to `None`, which would result in the signatures in the doc being less complete.<|||||>I'm also not convinced it's super important as these are lookup values, but if you feel strongly a simpler way of doing this is to switch everything tu tuples, which are immutable. It keeps the arguments in the docstrings as well and can be converted to lists quite easily.<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored. |
transformers | 15,364 | closed | Implementation of activations as subclasses of the torch.nn.Module | # 🚀 Feature request
<!-- A clear and concise description of the feature proposal.
Please provide a link to the paper and code in case they exist. -->
I guess it would be best to ask this first: are there some specific reasons why activation functions in https://github.com/huggingface/transformers/blob/master/src/transformers/activations.py are not subclasses of the `torch.nn.Module` ?
If there are, then we can probably ignore everything else below :) .
If there aren't, then it might be interesting to consider implementing them that way (I would be happy to work on a PR for it).
A few advantages (that I'm aware of) with activations as subclasses of the `torch.nn.Module`:
1. it's easy to check which activations are used in the model by just running: `print(my_bert)`. Currently one has to check the config file for it, which is also not that bad but this just makes it a bit more convenient. Just like printing the torchvision models `print(resnet50)`, one can immediately see which activations are being used in the model.
2. composing layers with for example `nn.Sequential` would be possible (I'm not sure if this is possible when activations are implemented as python functions)
3. attaching pytorch hooks to activation modules would be possible (I think this is the most important advantage)
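To make the advantages above concrete, here is a minimal sketch of what such a module could look like (names are illustrative, not an actual `transformers` API):
```python
import torch
from torch import nn

# Hypothetical sketch: an activation wrapped in an nn.Module so it shows up in
# print(model), composes with nn.Sequential, and accepts forward hooks.
class GELUActivation(nn.Module):
    def forward(self, hidden_states: torch.Tensor) -> torch.Tensor:
        return nn.functional.gelu(hidden_states)

mlp = nn.Sequential(nn.Linear(768, 3072), GELUActivation(), nn.Linear(3072, 768))
print(mlp)  # the activation is now listed explicitly (advantage 1)
mlp[1].register_forward_hook(lambda module, inputs, output: print(output.shape))  # advantage 3
mlp(torch.randn(2, 768))
```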
| 01-27-2022 10:55:20 | 01-27-2022 10:55:20 | Pinging @sgugger @patrickvonplaten @patil-suraj for consideration
Also pinging @mfuntowicz as I believe this may affect the graph of the models and would like to hear feedback on the request<|||||>No strong opinion on my side, apart from the fact we would need to be careful not to break anything in any existing model.<|||||>Ok for me to change if it doesn't break anything. I agree that seeing the activations when printing out the model is a considerably improvement!<|||||>okay for me as well as long as it doesn't break anything!<|||||>Also cc @stas00, @NielsRogge would love your input.
If everyone is on board, let's open a good first issue and open this to community contributions.<|||||>> * it's easy to check which activations are used in the model by just running: `print(my_bert)`. Currently one has to check the config file for it, which is also not that bad but this just makes it a bit more convenient. Just like printing the torchvision models `print(resnet50)`, one can immediately see which activations are being used in the model.
check, but this is trivial to do by modifying `__repr__` to print out the activation function.
> * composing layers with for example `nn.Sequential` would be possible (I'm not sure if this is possible when activations are implemented as python functions)
this is not a problem with functions, unless of course you need to have a super-fine "granularity". But you will have to change a ton of code for this to work. And making `nn.Sequential` Transformers models is super-complicated , especially for some models - think conditional control flow.
> * attaching pytorch hooks to activation modules would be possible (I think this is the most important advantage)
check, this is indeed helpful when debugging training instabilities.
--------------
One issue is backward compatibility, if the models start using `nn.Module` activations - I don't think we could remove the function versions as others surely use those in their modified models.
-------------
the other issue is that we are likely to start having more complex activation functions soon which require `torch.autograd.Function` - basically if you want to manually tweak the `backward` function. Here is an example:
https://github.com/bigscience-workshop/Megatron-DeepSpeed/blob/dfb5d6873a0825ddbfecda009b2ac1f493f09a97/megatron/model/fused_bias_gelu.py#L47
and it's likely we will already need to use something similar for the Megatron-Deepspeed models BigScience has been training.
I tried to search how to make `torch.autograd.Function` into `nn.Module` - but so far the only answers I found is that one has to use the former if they want a custom `backward` function. But surely there should be a way to then again fold it into `nn.Module` - I'd ask you to research this by asking on pytorch forums or slack.
update: I think this is the answer as someone already asked this https://discuss.pytorch.org/t/defining-backward-function-in-nn-module/5047/2 so you call a `torch.autograd.Function` from `nn.Module`.
So I think it should be fine. It's just that I haven't tried doing it myself yet.
`torch.autograd.Function` docs:
- https://pytorch.org/tutorials/beginner/examples_autograd/two_layer_net_custom_function.html
- https://pytorch.org/docs/stable/notes/extending.html#extending-torch-autograd
|
transformers | 15,363 | closed | fix bug in run_clm.py | Sorry, I don't have time to follow the contributor guidelines.
The group_texts function in transformers/examples/pytorch/language-modeling/run_clm.py can lead to a training failure.
This is because it does not consider the case where the total length of a batch is too small.
I hope this can help some people.
| 01-27-2022 09:26:13 | 01-27-2022 09:26:13 | The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_15363). All of your documentation changes will be reflected on that endpoint.<|||||>May cause errors "ValueError: expected sequence of length 1024 at dim 1 (got 79)". This pull request can solve this problem<|||||>This is not a fix that will actually work since most models trained on causal language modeling objective do not have a padding token. 0 thus does not work as an index, and it shouldn't be the index picked for the labels either since PyTorch only ignore the index -100 in the loss.
It's also seem like a very edge case since it implies that you get the total length of 1000 texts in your dataset be shorter than the block size.<|||||>@sgugger Thank you for your reply~
> This is not a fix that will actually work since most models trained on causal language modeling objective do not have a padding token. 0 thus does not work as an index, and it shouldn't be the index picked for the labels either since PyTorch only ignore the index -100 in the loss.
Agree. Padding should be set more appropriately, or this part should be discarded directly.
> It's also seem like a very edge case since it implies that you get the total length of 1000 texts in your dataset be shorter than the block size.
Disagree. This often happens in the case of short texts (I tested on 7.4 million short texts with an average length of 15). The group_texts function ("map processes 1,000 texts together") probably processes data incorrectly in the last "1,000 texts", because it can actually be any number between 0 and 1000. This causes total_length to be less than block_size, which leads to a dataloader error.
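To illustrate with hypothetical numbers:
```python
# hypothetical numbers for the edge case described above
block_size = 1024
last_batch_text_lengths = [15, 12, 20]       # a short final batch handed to group_texts
total_length = sum(last_batch_text_lengths)  # 47
print(total_length < block_size)             # True -> this case has to be handled explicitly
```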
I think this should be fixed. Do I need to resubmit my pull request?<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored. |
transformers | 15,362 | closed | Unexpected shape of "past_key_values" of ProphetNetDecoder.forward() | ## Environment info
<!-- You can run the command `transformers-cli env` and copy-and-paste its output below.
Don't forget to fill out the missing fields in that output! -->
- `transformers` version: 4.15.0
- Platform: colab
- Python version: 3.7.12
- PyTorch version (GPU?): 1.10.0+cu111
- Tensorflow version (GPU?): 2.7.0
- Using GPU in script?: No
- Using distributed or parallel set-up in script?: No
### Who can help
@patrickvonplaten, @LysandreJik
<!-- Your issue will be replied to more quickly if you can figure out the right person to tag with @
If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**.
Please tag fewer than 3 people.
Models:
- ALBERT, BERT, XLM, DeBERTa, DeBERTa-v2, ELECTRA, MobileBert, SqueezeBert: @LysandreJik
- T5, BART, Marian, Pegasus, EncoderDecoder: @patrickvonplaten
- Blenderbot, MBART: @patil-suraj
- Longformer, Reformer, TransfoXL, XLNet, FNet, BigBird: @patrickvonplaten
- FSMT: @stas00
- Funnel: @sgugger
- GPT-2, GPT: @patrickvonplaten, @LysandreJik
- RAG, DPR: @patrickvonplaten, @lhoestq
- TensorFlow: @Rocketknight1
- JAX/Flax: @patil-suraj
- TAPAS, LayoutLM, LayoutLMv2, LUKE, ViT, BEiT, DEiT, DETR, CANINE: @NielsRogge
- GPT-Neo, GPT-J, CLIP: @patil-suraj
- Wav2Vec2, HuBERT, SpeechEncoderDecoder, UniSpeech, UniSpeechSAT, SEW, SEW-D, Speech2Text: @patrickvonplaten, @anton-l
If the model isn't in the list, ping @LysandreJik who will redirect you to the correct contributor.
Library:
- Benchmarks: @patrickvonplaten
- Deepspeed: @stas00
- Ray/raytune: @richardliaw, @amogkam
- Text generation: @patrickvonplaten @narsil
- Tokenizers: @SaulLu
- Trainer: @sgugger
- Pipelines: @Narsil
- Speech: @patrickvonplaten, @anton-l
- Vision: @NielsRogge, @sgugger
Documentation: @sgugger
Model hub:
- for issues with a model, report at https://discuss.huggingface.co/ and tag the model's creator.
HF projects:
- datasets: [different repo](https://github.com/huggingface/datasets)
- rust tokenizers: [different repo](https://github.com/huggingface/tokenizers)
Examples:
- maintained examples (not research project or legacy): @sgugger, @patil-suraj
For research projetcs, please ping the contributor directly. For example, on the following projects:
- research_projects/bert-loses-patience: @JetRunner
- research_projects/distillation: @VictorSanh
-->
## Information
Model I am using (Bert, XLNet ...): ProphetNet
The problem arises when using:
* [ ] the official example scripts: (give details below)
* [x] my own modified scripts: (give details below)
When I try to convert ProphetNetModel to ONNX, I found that the "past_key_values" of the decoder output does not have the same shape as described in the official documentation.
The description of `ProphetNetDecoderModelOutput` says that:
> past_key_values (List[torch.FloatTensor], optional, returned when use_cache=True is passed or when config.use_cache=True) — List of torch.FloatTensor of length config.n_layers, with each tensor of shape (2, batch_size, num_attn_heads, decoder_sequence_length, embed_size_per_head)).
However, I get past_key_values just like `BaseModelOutputWithPastAndCrossAttentions`.
> past_key_values (tuple(tuple(torch.FloatTensor)), optional, returned when use_cache=True is passed or when config.use_cache=True) — Tuple of tuple(torch.FloatTensor) of length config.n_layers, with each tuple having 2 tensors of shape (batch_size, num_heads, sequence_length, embed_size_per_head)) and optionally if config.is_encoder_decoder=True 2 additional tensors of shape (batch_size, num_heads, encoder_sequence_length, embed_size_per_head).
The tasks I am working on is:
* [ ] an official GLUE/SQUaD task: (give the name)
* [x] my own task or dataset: (give details below)
## To reproduce
<!-- If you have code snippets, error messages, stack traces please provide them here as well.
Important! Use code tags to correctly format your code. See https://help.github.com/en/github/writing-on-github/creating-and-highlighting-code-blocks#syntax-highlighting
Do not use screenshots, as they are hard to read and (more importantly) don't allow others to copy-and-paste your code.-->
```
from transformers import ProphetNetTokenizer, ProphetNetEncoder, ProphetNetDecoder
tokenizer = ProphetNetTokenizer.from_pretrained('microsoft/prophetnet-large-uncased')
encoder = ProphetNetEncoder.from_pretrained('microsoft/prophetnet-large-uncased')
decoder = ProphetNetDecoder.from_pretrained('microsoft/prophetnet-large-uncased')
# assert model.config.is_decoder, f"{model.__class__} has to be configured as a decoder."
enc_inputs = tokenizer("Hello, my dog is cute", return_tensors="pt")
dec_inputs = tokenizer("<s>", return_tensors="pt")
encoder_outputs = encoder(
input_ids=enc_inputs["input_ids"],
attention_mask=enc_inputs["attention_mask"],
return_dict=True,
)
decoder_outputs = decoder(
input_ids=dec_inputs["input_ids"],
encoder_hidden_states=encoder_outputs["last_hidden_state"],
encoder_attention_mask=enc_inputs["attention_mask"],
past_key_values=None,
return_dict=True,
)
print(decoder_outputs.keys())
# odict_keys(['last_hidden_state', 'last_hidden_state_ngram', 'past_key_values'])
print(len(decoder_outputs["past_key_values"]))
# 12
print(len(decoder_outputs["past_key_values"][0]))
# 4
print(decoder_outputs["past_key_values"][0][0].shape)
# torch.Size([1, 16, 4, 64])
```
## Expected behavior
`len(decoder_outputs["past_key_values"]) == 12` and `decoder_outputs["past_key_values"][0].shape = (2, batch_size, num_attn_heads, decoder_sequence_length, embed_size_per_head))`
<!-- A clear and concise description of what you would expect to happen. --> | 01-27-2022 05:15:05 | 01-27-2022 05:15:05 | For ONNX related issues, gently pinging @lewtun here<|||||>@patrickvonplaten, thanks.
I think that this issue is not only about using ONNX, because it is a contradiction between the documentation and the actual outputs. <|||||>Ok, I see so the problem is the documentation here? There might very well be a bug in the documentation...
The following looks correct to me:
```
print(len(decoder_outputs["past_key_values"]))
# 12
print(len(decoder_outputs["past_key_values"][0]))
# 4
print(decoder_outputs["past_key_values"][0][0].shape)
# torch.Size([1, 16, 4, 64])
```
We have 12 layers. ProphetNet is an encoder-decoder model so it caches 4 tensors (decoder value, decoder key as well as projected encoder key and projected encoder value matrices for the cross attention layer).
Would you like to open a PR maybe to fix the documentation is you've found the bug it seems?<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored. |
transformers | 15,361 | closed | Set syncfree AdamW as the default optimizer for xla:gpu device in amp mode | This PR sets the syncfree AdamW ([#3294](https://github.com/pytorch/xla/pull/3294)) as the default optimizer for xla:gpu device in amp mode. The syncfree AdamW optimizer shows 20%+ throughput speedup in the run_mlm test case.
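For context, a hedged sketch of what selecting the sync-free optimizer looks like in user code (the `torch_xla.amp.syncfree` import path is an assumption based on the linked torch_xla PR, not part of this PR's diff):
```python
import torch
import torch_xla.core.xla_model as xm
from torch_xla.amp import syncfree  # assumed import path per the torch_xla PR above

device = xm.xla_device()
model = torch.nn.Linear(4, 4).to(device)
optimizer = syncfree.AdamW(model.parameters(), lr=5e-5)  # drop-in for torch.optim.AdamW
```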
Tested with the following script:
```sh
GPU_NUM_DEVICES=1 python run_mlm.py \
--model_name_or_path bert-base-uncased \
--dataset_name wikitext \
--dataset_config_name wikitext-2-raw-v1 \
--overwrite_output_dir true \
--output_dir /tmp/test-mlm \
--per_device_train_batch_size 12 \
--do_eval \
--fp16 true \
--do_train \
--num_train_epochs 10
```
With normal AdamW:
```sh
***** train metrics *****
epoch = 10.0
train_loss = 1.6235
train_runtime = 0:19:13.85
train_samples = 4627
train_samples_per_second = 40.1
train_steps_per_second = 3.345
```
With syncfree AdamW:
```sh
***** train metrics *****
epoch = 10.0
train_loss = 1.6166
train_runtime = 0:16:14.05
train_samples = 4627
train_samples_per_second = 47.502
train_steps_per_second = 3.963
```
cc @sgugger | 01-27-2022 02:59:08 | 01-27-2022 02:59:08 | _The documentation is not available anymore as the PR was closed or merged._<|||||>Thanks a lot for your PR! I think it would be better to be less magical and offer the option of using this optimizer through a new choice (for instance named adamw_syncfree) in the `--optim` parameter instead of changing the default.
We did not even dare change the default from our implementation to the PyTorch ones for fear of breaking changes :-)<|||||>@sgugger I added `adamw_torch_xla` as an optimizer option. |
transformers | 15,360 | closed | Add tensor parallelism related mappings | Discussed in https://github.com/huggingface/transformers/issues/13690
In addition, I also created some helper functions.
cc @RezaYazdaniAminabadi @lucasleesw
## Reviewers
@stas00 @jaketae @siddk | 01-26-2022 23:42:19 | 01-26-2022 23:42:19 | _The documentation is not available anymore as the PR was closed or merged._<|||||>In my opinion, this mapping + module replacement is the easiest and the most extensible way to implement tensor parallelism now. That's why we need this mapping. However, I've found there were three models (ProphetNet, BigBird, SqueezeBert) that could not be applied in this way. see: https://github.com/tunib-ai/parallelformers/blob/main/FAQ.md#q-why-are-some-models-not-supported
The documentation also points out EncoderDecoderModel and RAG, but this is because it does not match the implementation of parallelformers. It has nothing to do with these mappings.<|||||>I think it's perfectly fine to design a way to automate most of the derivations if it works in most cases and leave the special cases to be that - special cases. Which can be taken care of later.
If you have ever used Perl, its motto: "make the easy things easy, and the hard things possible" and it lived up to that lofty goal. So if we can follow a similar principal here, then it'd be great!
that's is let's not replicate data that doesn't need to be replicated if it can be derived automatically, and make it possible to overcome exceptions by providing additional data when it can't be auto-derived.
Does it make sense?<|||||>@stas00 Yes. That will be fine. Let's proceed by adding mappings for exception cases. For example, most models match well, but if a few models have exceptions, it is easiest to solve the problem by constructing an additional dictionary for only those exception cases.
The `REVERSED PARAMS` and `FUSED ATTENTION` maps I wrote are examples. These cases are not difficult to define because there are only two of them across the transformers. @lucasleesw So, let's design it to support as many models as possible with as little effort as possible. That would be the best. <|||||>```python
ValueError: The following files have docstrings written in rst:
- src/transformers/utils/tensor_parallel_utils.py
To fix this run `doc-builder convert path_to_py_file` after installing `doc-builder`
(`pip install git+https://github.com/huggingface/doc-builder`)
make: *** [Makefile:40: repo-consistency] Error 1
```
@stas00 @jaketae @siddk Can you help me? I think it's about documentation.
```python
hyunwoongko ~/Github/transformers master ± doc-builder convert path_to_py_file
Traceback (most recent call last):
File "/Library/Frameworks/Python.framework/Versions/3.7/bin/doc-builder", line 8, in <module>
sys.exit(main())
File "/Library/Frameworks/Python.framework/Versions/3.7/lib/python3.7/site-packages/doc_builder/commands/doc_builder_cli.py", line 39, in main
args.func(args)
File "/Library/Frameworks/Python.framework/Versions/3.7/lib/python3.7/site-packages/doc_builder/commands/convert_doc_file.py", line 114, in convert_command
raise ValueError(f"This script only converts rst files. Got {source_file}.")
ValueError: This script only converts rst files. Got /Users/hyunwoongko/Github/transformers/path_to_py_file.
```
Do I have to create a new rst file?<|||||>> ```python
> ValueError: The following files have docstrings written in rst:
> - src/transformers/utils/tensor_parallel_utils.py
> To fix this run `doc-builder convert path_to_py_file` after installing `doc-builder`
> (`pip install git+https://github.com/huggingface/doc-builder`)
> make: *** [Makefile:40: repo-consistency] Error 1
> ```
>
> @stas00 @jaketae @siddk Can you help me? I think it's about documentation.
>
> ```python
> hyunwoongko ~/Github/transformers master ± doc-builder convert path_to_py_file
> Traceback (most recent call last):
> File "/Library/Frameworks/Python.framework/Versions/3.7/bin/doc-builder", line 8, in <module>
> sys.exit(main())
> File "/Library/Frameworks/Python.framework/Versions/3.7/lib/python3.7/site-packages/doc_builder/commands/doc_builder_cli.py", line 39, in main
> args.func(args)
> File "/Library/Frameworks/Python.framework/Versions/3.7/lib/python3.7/site-packages/doc_builder/commands/convert_doc_file.py", line 114, in convert_command
> raise ValueError(f"This script only converts rst files. Got {source_file}.")
> ValueError: This script only converts rst files. Got /Users/hyunwoongko/Github/transformers/path_to_py_file.
> ```
>
> Do I have to create a new rst file?
we switched all docs from .rst and .md to .mdx some weeks back. Please See the updated docs tree in master. and README:
https://github.com/huggingface/transformers/tree/master/docs#readme<|||||>@stas00 @lucasleesw
I modified tensor parallel mapping. I am sure this is the best structure.
In conclusion, the list of string is not a suitable structure to represent this mapping because various parameter states exist, such as fusion and reverse, and they have many combinations. (fusion means the fused QKV parameters like linear(3 * dim, dim), not the kernel fusion)
“col_no_fuse”, “col_no_fuse_no_reverse”, “col_no_replace_no_fuse”, … I think this is not we want.
For example, it may or may not be reversed while being fused. Therefore, it is better to represent them as objects, and if we set the default values of attributes to the most common value, we can reduce the time required to create a mapping.
And I added many utility functions to search these states like:
- `mapping.get_fusion_degree(model, parameter)`
- `mapping.is_reversed(model, parameter)`
- `mapping.is_fused(model, parameter)`
- `mapping.is_column_parallel(model, parameter)`
- `mapping.is_row_parallel(model, parameter)`
- `mapping.update_attributes(model)`
- ...<|||||>@hyunwoongko Apologies for the delayed review. I only have a few minor comments, and I can work on them if you'd like. Overall, I agree with the design choices you made. Thanks again for the PR!<|||||>Hello @stas00, this PR is ready for review. Could you kindly take a look when time permits?
For context, this PR provides a mapping between model names and parameters that will enable tensor parallelism. While this PR was introduced as part of efforts to integrate Oslo-enabled 3D parallelism to transformers, the map was constructed in a way that is framework agnostic, i.e., it could be used by Oslo, DeepSpeed, and the likes.
cc @LysandreJik @sgugger <|||||>@l-yohai If it works well after adding it to OSLO and testing it, let's continue to PR and add models here. I hope to support almost all models as soon as possible.<|||||>@sgugger Thanks for fixing code. I'll apply these suggestions to code.
@jaketae I'll apply some suggestions, Could you help me adding docstrings?<|||||>I think that any such new mappings should have an application and tests with it, so I don't think this is a good addition to transformers the way it's presented at the moment as it could lead to dead code.
Typically any new features always come with application, tests and documentation. This PR has none of that.<|||||>Then, I think I will have to add this when I add OSLO.<|||||>closing the PR.<|||||>do you not want to first integrate the suggestions by Sylvain so that when added in the new PR it won't need to be done second time?<|||||>Hi all, apologies for causing confusion. In future PRs, I'll make sure to communicate with Stas before requesting a final round of review.
On a separate note, I'll continue coordinating with Kevin to complete the remaining steps (i.e. docstrings, better documentation, reflecting feedback already received) to open a new PR.<|||||>@stas00
> do you not want to first integrate the suggestions by Sylvain so that when added in the new PR it won't need to be done second time?
I will. |
transformers | 15,359 | closed | Error when running the T5 pre-training script | ## Environment info
<!-- You can run the command `transformers-cli env` and copy-and-paste its output below.
Don't forget to fill out the missing fields in that output! -->
- `transformers` version: 4.15.0
- Platform: Linux-5.4.0-1043-gcp-x86_64-with-glibc2.29
- Python version: 3.8.10
- PyTorch version (GPU?): not installed (NA)
- Tensorflow version (GPU?): 2.7.0 (False)
- Flax version (CPU?/GPU?/TPU?): 0.3.6 (tpu)
- Jax version: 0.2.26
- JaxLib version: 0.1.75
- Using GPU in script?: No
- Using distributed or parallel set-up in script?: No
### Who can help
@patrickvonplaten @patil-suraj @sgugger
## Information
I'm running the `run_t5_mlm_flax.py` example script on a custom version of the mC4 dataset (English only), but I'm facing this exception during training after a few steps (changing the seed changes the step at which it occurs, as expected).
```
Traceback (most recent call last):
File "/home/hugo/run_t5_mlm_flax.py", line 925, in <module>
main()
File "/home/hugo/run_t5_mlm_flax.py", line 839, in main
model_inputs = data_collator(samples)
File "/home/hugo/run_t5_mlm_flax.py", line 339, in __call__
batch["input_ids"] = self.filter_input_ids(input_ids, input_ids_sentinel)
File "/home/hugo/run_t5_mlm_flax.py", line 392, in filter_input_ids
input_ids = input_ids_full[input_ids_full > 0].reshape((batch_size, -1))
ValueError: cannot reshape array of size 65471 into shape (64,newaxis)
```
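For what it's worth, the arithmetic behind the error can be reproduced standalone (hypothetical sketch, not the collator code):
```python
import numpy as np

# the number of tokens kept by the `input_ids_full > 0` filter (65471) is not divisible
# by the batch size (64), so the (batch_size, -1) reshape cannot form equal-length rows
kept_tokens = np.zeros(65471, dtype=np.int32)
batch_size = 64
print(kept_tokens.size % batch_size)     # 63, i.e. not divisible
kept_tokens.reshape((batch_size, -1))    # raises the ValueError shown in the traceback
```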
The problem arises when using:
* [x] the official example scripts: (give details below)
* [ ] my own modified scripts: (give details below)
The tasks I am working on is:
* [ ] an official GLUE/SQUaD task: (give the name)
* [x] my own task or dataset: (give details below)
## To reproduce
Steps to reproduce the behavior:
I'm running the script with the following parameters:
```
python $HOME/run_t5_mlm_flax.py \
--output_dir="./models/${MODEL_DIR}" \
--overwrite_output_dir \
--model_type="t5" \
--config_name="./models/${MODEL_DIR}" \
--tokenizer_name="google/byt5-small" \
--train_file="/mnt/datasets/train/en.json" \
--max_seq_length="1024" \
--num_train_epochs="1" \
--per_device_train_batch_size="8" \
--per_device_eval_batch_size="8" \
--adafactor \
--learning_rate="0.001" \
--weight_decay="0.001" \
--warmup_steps="2000" \
--save_steps="50000" \
--eval_steps="50000" \
--push_to_hub \
--logging_steps="1000" \
--preprocessing_num_workers="96" \
--gradient_accumulation_steps="128"
```
## Expected behavior
Not throwing this exception. | 01-26-2022 20:33:01 | 01-26-2022 20:33:01 | Hey @hugoabonizio , could you please have a look at:
https://github.com/huggingface/transformers/issues/12755
I've seen the same issue, my current workaround is to try-catch `data_collator` calls:
```diff
@@ -813,13 +818,17 @@ def main():
# Gather the indexes for creating the batch and do a training step
for step, batch_idx in enumerate(tqdm(train_batch_idx, desc="Training...", position=1)):
- samples = [tokenized_datasets["train"][int(idx)] for idx in batch_idx]
- model_inputs = data_collator(samples)
-
- # Model forward
- model_inputs = shard(model_inputs.data)
- state, train_metric, dropout_rngs = p_train_step(state, model_inputs, dropout_rngs)
- train_metrics.append(train_metric)
+ try:
+ samples = [tokenized_datasets["train"][int(idx)] for idx in batch_idx]
+ model_inputs = data_collator(samples)
+
+ # Model forward
+ model_inputs = shard(model_inputs.data)
+ state, train_metric, dropout_rngs = p_train_step(state, model_inputs, dropout_rngs)
+ train_metrics.append(train_metric)
+ except:
+ cur_step = epoch * (num_train_samples // train_batch_size) + step
+ continue
```<|||||>Hi @stefan-it, sorry, this issue sliped under my radar :sweat_smile:
Thank you for the code snippet, I was thinking on a similar workaround but will use yours.
<|||||>Sorry haven't found the time to take a closer look. Will try to do it tomorrow!<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored.<|||||>@hugoabonizio - could you check whether this PR solves the problem: https://github.com/huggingface/transformers/pull/15835 ? :-) Sorry for being so late here<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored.<|||||>@patrickvonplaten sorry for the late reply. It looks like it's fixed now, thank you! |
transformers | 15,358 | closed | Ignored unknown kwarg option direction, while running run_mlm.py (pytorch) | transformers_version == '4.16.0.dev0'
using script ```run_mlm.py``` (pytorch)
with the following parameters
```
!python run_mlm.py \
--output_dir "./test-roberta-base" \
--model_type "roberta" \
--config_name "./test-roberta-base" \
--tokenizer_name "./test-roberta-base" \
--train_file "./training_data/test.txt" \
--max_seq_length 128 \
--line_by_line \
--do_train \
--weight_decay 0.01 \
--per_device_train_batch_size 256 \
--per_device_eval_batch_size 256 \
--learning_rate 3e-4 \
--warmup_steps 1000 \
--overwrite_output_dir \
--num_train_epochs 18 \
--adam_beta1 0.9 \
--adam_beta2 0.98 \
--logging_steps 500 \
--save_steps 2500 \
--eval_steps 2500
```
when I pass in the parameter ```--line_by_line```, this happens
```
Running tokenizer on dataset line_by_line: 0%| | 66/1536000 [00:01<6:13:50, 68Ignored unknown kwarg option direction
Ignored unknown kwarg option direction
Ignored unknown kwarg option direction
Ignored unknown kwarg option direction
Ignored unknown kwarg option direction
Ignored unknown kwarg option direction
Ignored unknown kwarg option direction
Ignored unknown kwarg option direction
Ignored unknown kwarg option direction
Ignored unknown kwarg option direction
Running tokenizer on dataset line_by_line: 0%| | 76/1536000 [00:01<5:35:58, 76Ignored unknown kwarg option direction
Ignored unknown kwarg option direction
Ignored unknown kwarg option direction
Ignored unknown kwarg option direction
Ignored unknown kwarg option direction
Ignored unknown kwarg option direction
Ignored unknown kwarg option direction
Ignored unknown kwarg option direction
Ignored unknown kwarg option direction
Running tokenizer on dataset line_by_line: 0%| | 85/1536000 [00:01<6:44:46, 63Ignored unknown kwarg option direction
Ignored unknown kwarg option direction
Ignored unknown kwarg option direction
Ignored unknown kwarg option direction
Ignored unknown kwarg option direction
Ignored unknown kwarg option direction
Ignored unknown kwarg option direction
Ignored unknown kwarg option direction
Ignored unknown kwarg option direction
Ignored unknown kwarg option direction
Running tokenizer on dataset line_by_line: 0%| | 95/1536000 [00:01<6:01:37, 70Ignored unknown kwarg option direction
```
my train_file is in the form
```
sentence1
sentence2
sentence3
and so on...
```
is this expected?
PS:
using custom tokenizer made via
```
tokenizer = ByteLevelBPETokenizer(lowercase=True)
tokenizer.train(
files=paths,
vocab_size=50265,
min_frequency=10,
special_tokens=["<s>", "<pad>", "</s>", "<unk>", "<mask>"]
)
tokenizer.save("./test-roberta-base/tokenizer.json")
``` | 01-26-2022 19:37:18 | 01-26-2022 19:37:18 | cc @sgugger <|||||>Looks like a problem in the tokenizers. What's your version of the Tokenizers library?<|||||>@sgugger
tokenizers version = '0.9.4'<|||||>I think you might need to upgrade it.<|||||>Okay, upgraded tokenizers to 0.11.0.
i can confirm that running run_mlm.py script doesn't throw that error anymore.
Thanks.<|||||>In my case 0.11.0 still threw error but 0.11.5 worked<|||||>Same problem
```
# Name Version Build Channel
tokenizers 0.10.3 py38hb317417_1
```<|||||>Same issue even with tokenizers version 0.11.6 (using [keyBERT](https://maartengr.github.io/KeyBERT/guides/quickstart.html))
However, I see this warning while I run this in a notebook, but not in the shell.<|||||>I had to downgrade to transformers 4.15.0 (and left tokenizers at 0.10.3)
Using conda, you can't install tokenizers version 0.11.x with transformers 4.x. I tried forcing it, but got errors regarding openssl v3 when trying to import tokenizers.
```
conda search transformers --info -c conda-forge
...
transformers 4.16.2 pyhd8ed1ab_0
...
dependencies:
- tokenizers >=0.10.1,<0.11
```
and
```
conda search transformers --info -c huggingface
...
transformers 4.14.1 pyhd3eb1b0_0
...
dependencies:
- tokenizers >=0.10.1,<0.11
```
<|||||>Downgrading to transformers 4.15.0 fixed the issue for me! Thanks!<|||||>Downgrading 4.15.0 helped. Thank you<|||||>Downgrading transformers`conda install -c huggingface -c conda-forge transformers==4.15.0` didn't work for me, tokenizers however<|||||>Having the same issue, is this just an annoying superfluous warning or does it affect the results? Because in the former case I will just ignore it. |
transformers | 15,357 | closed | WIP Create test_speech_recognition_deepspeed.py | # What does this PR do?
<!--
Congratulations! You've made it this far! You're not quite done yet though.
Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution.
Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change.
Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost.
-->
<!-- Remove if not applicable -->
Fixes # (issue)
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/master/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/master/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
<!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @
If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**.
Please tag fewer than 3 people.
Models:
- albert, bert, xlm: @LysandreJik
- blenderbot, bart, marian, pegasus, encoderdecoder, t5: @patrickvonplaten, @patil-suraj
- longformer, reformer, transfoxl, xlnet: @patrickvonplaten
- fsmt: @stas00
- funnel: @sgugger
- gpt2: @patrickvonplaten, @LysandreJik
- rag: @patrickvonplaten, @lhoestq
- tensorflow: @LysandreJik
Library:
- benchmarks: @patrickvonplaten
- deepspeed: @stas00
- ray/raytune: @richardliaw, @amogkam
- text generation: @patrickvonplaten
- tokenizers: @n1t0, @LysandreJik
- trainer: @sgugger
- pipelines: @LysandreJik
Documentation: @sgugger
HF projects:
- datasets: [different repo](https://github.com/huggingface/datasets)
- rust tokenizers: [different repo](https://github.com/huggingface/tokenizers)
Examples:
- maintained examples (not research project or legacy): @sgugger, @patil-suraj
- research_projects/bert-loses-patience: @JetRunner
- research_projects/distillation: @VictorSanh
-->
| 01-26-2022 19:03:55 | 01-26-2022 19:03:55 | The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_15357). All of your documentation changes will be reflected on that endpoint.<|||||>regarding to https://github.com/huggingface/transformers/issues/15330#issuecomment-1021930267<|||||>locally the tests are just getting skipped, any tipps ?<|||||>Are the tests passing for you locally @flozi00 ?<|||||>Are the tests passing for you locally @flozi00 ? Happy to hook them to our github actions runner so that they are run at least once per day<|||||>The tests are skipping locally and I don't know why exactly <|||||>Thank you for working on adding the tests, @flozi00
You first need to copy the config files:
```
cp examples/research_projects/wav2vec2/ds_config_wav2vec2_zero* examples/pytorch/speech-recognition
```
and make them part of the PR
And then the run command is:
```
RUN_SLOW=1 pytest examples/pytorch/speech-recognition/test_speech_recognition_deepspeed.py -sv
```
you were probably missing `RUN_SLOW=1`
<|||||>RUN_SLOW=1 did it and raises one error after the next one
i'm sorry but for health reasons i can't work on it in depth because i'm in the hospital with only a tablet<|||||>Very sorry to hear that you're in the hospital @flozi00 :-/ Will try to allocate time for this (early next week) |
transformers | 15,356 | closed | Fix TFLEDModel | # What does this PR do?
The computation of `diagonal_mask` in `TFLEDEncoderSelfAttention` and of `inputs["attention_mask"]` in `TFLEDEncoder` differs from `LEDEncoderSelfAttention` and `LEDEncoder`.
This causes the `logits` and `decoder hidden states` to differ in the range `1e-5 ~ 5e-5`, and the `encoder hidden states` to differ in the range `1e-2 ~ 1e-1`.
This PR fixes these inconsistencies, which reduces the difference to the range `1e-7 ~ 1e-6`. I can provide an example if you prefer to see one.
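For reference, here is a hedged sketch of the kind of comparison used to measure those differences (the checkpoint name and inputs are illustrative, and this is not the exact test code):
```python
import numpy as np
import torch
from transformers import LEDTokenizer, LEDModel, TFLEDModel

name = "allenai/led-base-16384"  # illustrative checkpoint
tokenizer = LEDTokenizer.from_pretrained(name)
pt_model = LEDModel.from_pretrained(name)
tf_model = TFLEDModel.from_pretrained(name, from_pt=True)

pt_inputs = tokenizer("Hello, my dog is cute", return_tensors="pt")
tf_inputs = tokenizer("Hello, my dog is cute", return_tensors="tf")

with torch.no_grad():
    pt_hidden = pt_model(**pt_inputs).last_hidden_state.numpy()
tf_hidden = tf_model(**tf_inputs).last_hidden_state.numpy()

print(np.max(np.abs(pt_hidden - tf_hidden)))  # max absolute difference between frameworks
```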
(#15256) | 01-26-2022 18:54:54 | 01-26-2022 18:54:54 | _The documentation is not available anymore as the PR was closed or merged._<|||||>@patrickvonplaten was very involved in those two models, but @Rocketknight1 and @gante may also take a look<|||||>> It looks good to me, as it recovers a pattern lost in the transition and lowers the error. I'm assuming this PR is also a dependency of #15256, correct?
>
Yes!
<|||||>Looks clean - thanks a mille for diving into this complicated code @ydshieh! Could we maybe add a test between PT and TF for LED that would have failed previously without this change? Also Longformer should be affected by this problem as well no? <|||||>> Looks clean - thanks a mille for diving into this complicated code @ydshieh! Could we maybe add a test between PT and TF for LED that would have failed previously without this change? Also Longformer should be affected by this problem as well no?
For Longformer,
- about `merge "global_attention_mask" and "attention_mask"`: TF version works correctly in `_merge_to_attention_mask`.
- about the `diagonal_mask`, it turns out `TFLongformerMainLayer` use something different `extended_attention_mask = tf.cast(tf.math.abs(1 - extended_attention_mask), tf.dtypes.float32) * -10000.0` (note the `tf.math.abs` here), which makes all of masked places negative (including global attn), while in `TFLED`, it uses `_expand_mask`, which make global attn `positive`.
This difference makes `TFLongformer` gives the same results as PT models in the test. (I feel there is something strange of using abs here though!)
<|||||>For the test, I have a more general `test_pt_tf_model_equivalence` in mind (already in another PR), which will make the previous version fail. However, it will also fail (quite a lot) other models/tests. Maybe I can add it specifically to LED model in this PR, and remove it once the more general test could be put into `test_modeling_tf_common.py` (after fixing other tf/pt inconsistency)
|
transformers | 15,355 | closed | [docs] post-PR merge fix | Reverts incorrect changes merged in https://github.com/huggingface/transformers/pull/15346 | 01-26-2022 18:49:04 | 01-26-2022 18:49:04 | _The documentation is not available anymore as the PR was closed or merged._<|||||>And thanks again for the proposed improvements, @ngoquanghuy99! |
transformers | 15,354 | open | GeneratorExp aren't supported by torch.jit.script when I try to export a previously trained model 'google/vit-base-patch16-224-in21k'. | ## Environment info
- `transformers` version: 4.15.0
- Platform: Linux-5.4.144+-x86_64-with-Ubuntu-18.04-bionic
- Python version: 3.7.12
- PyTorch version (GPU?): 1.10.0+cu111 (True)
- Tensorflow version (GPU?): 2.7.0 (True)
- Flax version (CPU?/GPU?/TPU?): not installed (NA)
- Jax version: not installed
- JaxLib version: not installed
- Using GPU in script?: Yes
- Using distributed or parallel set-up in script?: No
### Who can help
Models:
ViTModel
If the model isn't in the list, ping @LysandreJik who will redirect you to the correct contributor.
Library:
- Vision: @NielsRogge, @sgugger
Documentation: @sgugger
Model hub:
- for issues with a model, report at https://discuss.huggingface.co/ and tag the model's creator.
## Information
GeneratorExp aren't supported by torch.jit.script when I try to export a previously trained model 'google/vit-base-patch16-224-in21k'.
Model I am using (ViTModel):
The problem arises when using:
* [X] my own modified scripts: (give details below)
```python
model_x = ViTForImageClassification.from_pretrained(
    'google/vit-base-patch16-224-in21k',
    num_labels=len(label2id),
    label2id=label2id,
    id2label=id2label
)
model_scripted = torch.jit.script(model_x)  # Export to TorchScript
```
```
---------------------------------------------------------------------------
UnsupportedNodeError Traceback (most recent call last)
<ipython-input-12-bc467d8ea1c0> in <module>()
6 id2label=id2label
7 )
----> 8 model_scripted = torch.jit.script(model_x) # Export to TorchScript
9 model_scripted.save('model_scripted.pt') # Save
14 frames
/usr/local/lib/python3.7/dist-packages/torch/jit/frontend.py in __call__(self, ctx, node)
284 method = getattr(self, 'build_' + node.__class__.__name__, None)
285 if method is None:
--> 286 raise UnsupportedNodeError(ctx, node)
287 return method(ctx, node)
288
UnsupportedNodeError: GeneratorExp aren't supported:
File "/usr/local/lib/python3.7/dist-packages/transformers/modeling_utils.py", line 987
activations".
"""
return any(hasattr(m, "gradient_checkpointing") and m.gradient_checkpointing for m in self.modules())
~ <--- HERE
```
## To reproduce
Steps to reproduce the behavior:
1. from transformers import ViTForImageClassification
2. Instantiate a previously created model, 'google/vit-base-patch16-224-in21k', using the ViTForImageClassification.from_pretrained() API.
3. Try invoking torch.jit.script(model_x) and you will see the error.
## Expected behavior
| 01-26-2022 18:47:55 | 01-26-2022 18:47:55 | Hi,
The Vision Transformer isn't currently supported with TorchScript (the test is disabled [here](https://github.com/huggingface/transformers/blob/c4d1fd77fa52d72b66207c6002d3ec13cc36dca8/tests/test_modeling_vit.py#L157)).
cc'ing @lewtun who can perhaps assist here to add support for it.<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored.<|||||>Hi @ssriram1978 it seems the problem is that the Python `any()` function does not belong to the [list of TorchScript supported functions](https://pytorch.org/docs/stable/jit_builtin_functions.html#python-built-in-functions). Here's a simple example that reproduces the error:
```python
import torch
import torch.nn as nn
class MyModel(nn.Module):
    def __init__(self):
        super().__init__()

    def forward(self, x):
        a = any(i for i in range(10))
        return x
model = MyModel()
x = torch.randn(1, 1)
out = model(x)
scripted = torch.jit.script(model) # Throws UnsupportedNodeError
```
Is tracing an option for your application? If yes, here's a snippet that does the job:
```python
import numpy as np
import requests
import torch
from PIL import Image
from transformers import AutoFeatureExtractor, AutoModelForImageClassification
# Load feature extractor and checkpoint
model_ckpt = "google/vit-base-patch16-224-in21k"
feature_extractor = AutoFeatureExtractor.from_pretrained(model_ckpt)
model = AutoModelForImageClassification.from_pretrained(model_ckpt, torchscript=True)
# Download sample image and feed to original model
url = "http://images.cocodataset.org/val2017/000000039769.jpg"
image = Image.open(requests.get(url, stream=True).raw)
inputs = feature_extractor(images=image, return_tensors="pt")
original_outputs = model(**inputs)
# Trace model and save
traced_model = torch.jit.trace(model, [inputs["pixel_values"]])
torch.jit.save(traced_model, "traced_vit.pt")
# Reload traced model and compare outputs to original one
loaded_model = torch.jit.load("traced_vit.pt")
loaded_model.eval()
traced_outputs = loaded_model(**inputs)
assert np.allclose(
original_outputs[0].detach().numpy(), traced_outputs[0].detach().numpy()
)
```
You can find more details on tracing in [our guide](https://huggingface.co/docs/transformers/serialization#torchscript).
An alternative to tracing would be to export the model to ONNX - once `torch` v1.11 is released you'll be able to do this via #15658
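Once that lands, the export should look roughly like this (a sketch — whether and how ViT is supported by the ONNX exporter depends on that PR, so double-check once it's merged):
```bash
pip install "transformers[onnx]"
python -m transformers.onnx --model=google/vit-base-patch16-224-in21k onnx/
```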
|
transformers | 15,353 | closed | Fix YosoConfig doc | # What does this PR do?
Fixes to the Yoso config documentation. | 01-26-2022 18:33:48 | 01-26-2022 18:33:48 | _The documentation is not available anymore as the PR was closed or merged._ |
transformers | 15,352 | closed | Allow relative imports in dynamic code | # What does this PR do?
This PR ensures one can use relative imports when defining a module.py (or other kinds of file) living on the Hub. Without this, writing a custom configuration and wanting to use it in a custom model fails, like this [repo](https://huggingface.co/hf-internal-testing/test_dynamic_model).
To solve this, the way the module files are copied into the transformers module is changed slightly, as they need to keep their original name (and not the e-tag). To make sure we don't copy cached files needlessly, but still benefit from updates in the main repo, the commit hash of the repo is used as a submodule name, so the modeling files are stored in `transformers_module/repo_id/hash/`. When the hash changes, a new copy of the modeling files is made from the cache, but those files are only downloaded if there was an update to them (since they are still cached under their e-tag name).
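For context, the user-facing call this is meant to enable looks roughly like this (a sketch, using the test repo linked above):
```python
from transformers import AutoModel

# trust_remote_code=True is required because the config and model classes are defined by
# code files living in the Hub repo — files which can now import each other relatively.
model = AutoModel.from_pretrained("hf-internal-testing/test_dynamic_model", trust_remote_code=True)
```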
Several tests are added to make sure everything works well. | 01-26-2022 17:58:32 | 01-26-2022 17:58:32 | _The documentation is not available anymore as the PR was closed or merged._ |
transformers | 15,351 | closed | Fix KerasMetricCallback prediction with generate() and inference of column names | A number of fixes to the Keras metric callback based on testing with the notebooks. I think I've caught all the implementation differences, and in most cases we can now pass exactly the same `compute_metrics` function to this callback as to `Trainer`.
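A usage sketch (the parameter names here are my best recollection — double-check them against the callback's signature — and `model`, `tf_train_dataset`, `tf_validation_dataset` are assumed to come from the usual notebook setup):
```python
from transformers.keras_callbacks import KerasMetricCallback

def compute_metrics(eval_predictions):
    # Same (predictions, labels) contract as the function you would hand to Trainer;
    # real metric computation (e.g. ROUGE on decoded text) goes here.
    predictions, labels = eval_predictions
    return {"dummy_metric": 0.0}

metric_callback = KerasMetricCallback(
    metric_fn=compute_metrics,
    eval_dataset=tf_validation_dataset,   # assumed tf.data.Dataset of features + labels
    predict_with_generate=True,           # route predictions through model.generate()
)
model.fit(tf_train_dataset, epochs=3, callbacks=[metric_callback])
```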
I'm still in the process of overhauling the notebooks, so I might need to make a few more tweaks to this PR if I encounter any other problems! | 01-26-2022 17:41:15 | 01-26-2022 17:41:15 | _The documentation is not available anymore as the PR was closed or merged._ |
transformers | 15,350 | closed | Update doc writing guide | # What does this PR do?
This updates the guide for writing documentation to the latest changes. | 01-26-2022 17:37:49 | 01-26-2022 17:37:49 | _The documentation is not available anymore as the PR was closed or merged._ |
transformers | 15,349 | closed | - | null | 01-26-2022 15:52:38 | 01-26-2022 15:52:38 | |
transformers | 15,348 | closed | Fix 'eval_split_name' described as defaulting to 'train' | # What does this PR do?
The default is correct (`test`) but the description is not.
## Before submitting
- [X] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
Cc: @sgugger, @patil-suraj | 01-26-2022 14:28:05 | 01-26-2022 14:28:05 | _The documentation is not available anymore as the PR was closed or merged._ |
transformers | 15,347 | closed | A few minor questions regarding run_summarization.py in examples | Hi, thank you for the amazing repo.
I'm currently using the script `run_summarization.py` from examples (I'm using ver 4.15.0).
When I run the code below, it seems that my trained model is not using beam search during generation.
Am I correct?
```
python examples/pytorch/summarization/run_summarization.py \
--model_name_or_path t5-small \
--do_train \
--do_eval \
--dataset_name cnn_dailymail \
--dataset_config "3.0.0" \
--source_prefix "summarize: " \
--output_dir /tmp/tst-summarization \
--per_device_train_batch_size=4 \
--per_device_eval_batch_size=4 \
--overwrite_output_dir \
--predict_with_generate
```
Furthermore, when I set `--evaluation_strategy` to either `epochs` or `steps`, the performance on the validation set during training and the final performance on the validation set measured at the end of training (`do_eval`) seem quite different.
Is this behavior expected, or am I missing something (such as beam search only being used for the final measurement)?
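For reference, here is a minimal sketch of my current understanding (please correct me if the script behaves differently): `generate()` only uses beam search when `num_beams` is supplied explicitly or set on `model.config`, and t5-small's top-level config leaves it unset.
```python
from transformers import T5ForConditionalGeneration, T5Tokenizer

tokenizer = T5Tokenizer.from_pretrained("t5-small")
model = T5ForConditionalGeneration.from_pretrained("t5-small")

inputs = tokenizer("summarize: " + "Some long article text ...", return_tensors="pt")

# No explicit generation arguments: generate() falls back to model.config, i.e. greedy search here.
greedy_ids = model.generate(**inputs, max_length=60)

# Beam search only happens when num_beams is passed (or set on model.config before evaluation).
beam_ids = model.generate(**inputs, max_length=60, num_beams=4, early_stopping=True)
print(tokenizer.decode(beam_ids[0], skip_special_tokens=True))
```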
Thank you!
 | 01-26-2022 12:56:45 | 01-26-2022 12:56:45 | Also, what does `task_specific_params` do when running the script `run_summarization.py`?
When I am initializing the T5-model, the model configuration includes such parameters.
```
"task_specific_params": {
"summarization": {
"early_stopping": true,
"length_penalty": 2.0,
"max_length": 200,
"min_length": 30,
"no_repeat_ngram_size": 3,
"num_beams": 4,
"prefix": "summarize: "
}
```
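My current guess (please correct me if I'm wrong) is that these entries are only picked up by the pipelines, not automatically by `run_summarization.py`; applying them by hand would look roughly like this:
```python
from transformers import T5ForConditionalGeneration

model = T5ForConditionalGeneration.from_pretrained("t5-small")

# Copy the summarization entry onto the top-level config so that generate() uses
# num_beams=4, max_length=200, etc. as its defaults.
task_params = (model.config.task_specific_params or {}).get("summarization", {})
model.config.update(task_params)
```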
Thank you!<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored. |