repo | number | state | title | body | created_at | closed_at | comments
---|---|---|---|---|---|---|---
transformers | 14,639 | closed | [Hub] 403 when trying to download models | Cross-posting from https://github.com/UKPLab/sentence-transformers/issues/1297 as I'm not sure if this is an issue with the `SentenceTransformers` library or with the model hub directly.
Similar to the author of the linked post, we are also running into this both locally and from CI, so it doesn't seem to be one particular blocked or rate-limited IP. Interestingly enough, I am not running into this issue when using `huggingface/transformers` directly, but only when it's wrapped through `SentenceTransformers`. | 12-06-2021 12:20:27 | 12-06-2021 12:20:27 | Update: The original author found that the [same error also occurs on non SBERT-related activity](https://github.com/UKPLab/sentence-transformers/issues/1297#issuecomment-986724370), suggesting that it is indeed an issue on the model hub directly.
Thanks in advance for timely investigation and fix, as I assume this is blocking a lot of users at the moment :) <|||||>We're also affected, our `all-MiniLM-L6-v2` model is not working in production. Are there any workarounds? How can we download them manually and load them locally instead?<|||||>In case this helps with the investigation, I can also reproduce this in the browser:
- This page works fine: https://huggingface.co/sentence-transformers/clip-ViT-B-32/tree/afe976e3f4ea04b633edb0089a7e5088ddbe9212
- When I now click on `.gitattributes`, I am directed to a page that returns `403`: https://huggingface.co/sentence-transformers/clip-ViT-B-32/blob/afe976e3f4ea04b633edb0089a7e5088ddbe9212/.gitattributes (Update: It's not really specific to `.gitattributes`, it's any files, the file with the leading dot just happens to be the first that is checked)<|||||>@hfjallemark
You can download all models from here:
https://public.ukp.informatik.tu-darmstadt.de/reimers/sentence-transformers/v0.2/
Unzip them to a path and load from disc.
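For anyone who needs it, a rough sketch of that manual route (the path below is a placeholder for wherever you unzipped one of the archives):
```python
from sentence_transformers import SentenceTransformer

# Placeholder path: point this at the folder you unzipped from the link above
model = SentenceTransformer("/path/to/unzipped/all-MiniLM-L6-v2")
embeddings = model.encode(["Hello world"])
```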
We currently look into this<|||||>Thanks @nreimers -- managed to do that and load locally.<|||||>> We currently look into this
Thanks @nreimers for the quick response. If there is any way we can help, do let us know!<|||||>Same here with `stsb-distilbert-base`
Digging into the code, I found that the server says the model should contain the files [".gitattributes", "README.md"], which return 403.
Ugly workaround - just skip them by adding something like the following at `site-packages/sentence_transformers/util.py`@434 (the `snapshot_download` func, in the loop over `.siblings`):
```python
if model_file.rfilename in [".gitattributes", "README.md"]:
    print("[adhoc hack] Skip ", model_file.rfilename)
    continue  # actually skip these files instead of trying to download them
```
Hope you will fix that <|||||>The download from the hub should be working again.
Let me know if you still have issues.<|||||>I can confirm that the download works again. Thanks to you and everyone on the team for the quick action! <|||||>Post-mortem: AWS WAF (Web Application Firewall) had a new ruleset merged today (https://docs.aws.amazon.com/waf/latest/developerguide/aws-managed-rule-groups-list.html) that didn't like our URL structure |
transformers | 14,638 | closed | Gradient accumulation causing different training curves | For a pretraining experiment using the built-in `Trainer`, setting batch size 32 x 16 accumulation steps seems to yield a different training loss curve from 64 x 8. Even though they converge to approximately the same value, shouldn't the curves be exactly the same? What would cause the difference?

I've also seen similar things with using 1 vs. multiple GPUs (under DataParallel), which may or may not be relevant. | 12-06-2021 07:22:22 | 12-06-2021 07:22:22 | Theoretically speaking, it is expected because each configuration "differently" feeds the gradients. Though it is also expected that they converge to a common point.<|||||>What do you mean by "they differently feed the gradients"? Calculating the loss based on 32 examples and then another 32 examples and then calculating the gradients should be exactly the same as calculating the loss based on 64 examples and then calculating the gradients, no?<|||||>They are almost exactly the same. Though, they are being repeatedly calculated and might lead to some numerical difference that explains the divergence at some points. The amount of updates that are performed when propagating the gradients, along with precision, might also be causing some influence.
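(Purely as an illustration of that point, not from the original thread: the two groupings are mathematically equivalent, but the floating-point summation order differs, e.g.:)
```python
import torch

torch.manual_seed(0)
w = torch.randn(8, requires_grad=True)
x = torch.randn(4, 8)
y = torch.randn(4)

# One big batch of 4
loss = ((x @ w - y) ** 2).mean()
loss.backward()
g_full = w.grad.clone()

# Two accumulated micro-batches of 2 (each loss scaled by 1/2)
w.grad = None
for chunk_x, chunk_y in zip(x.split(2), y.split(2)):
    loss = ((chunk_x @ w - chunk_y) ** 2).mean() / 2
    loss.backward()

print((g_full - w.grad).abs().max())  # tiny, usually non-zero, difference
```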
If you exponentiate the loss, the differences at the end might be easier to spot.<|||||>cc @sgugger <|||||>I think @gugarosa's analysis is correct in the sense that it could be explained by precision differences when computing the gradients vs accumulating partial gradients.<|||||>Do you think that is sufficient to explain the not-too-minor difference in the plot?<|||||>It's hard to say, given you shared basically nothing about the training script :-)<|||||>Ok, I think I'll take this explanation. Thanks! I'm not doing anything extraordinary in my training script and posting it in its entirety will probably bore everyone :)<|||||>@sgugger I found one source of difference: the grouping by length done in `LengthGroupedSampler` depends on the batch size, so the data ordering would be different. I argue that ideally in all `(batch size, gradient accumulation)` settings, as long as the effective batch size is the same, they should have an identical training process, except for numerical issues.<|||||>Good point, will try to see if there is a quick way to fix this! |
transformers | 14,637 | closed | add flax example tests in CI workflow | # What does this PR do?
This PR adds flax examples tests in CI workflow. | 12-06-2021 06:16:27 | 12-06-2021 06:16:27 | |
transformers | 14,636 | closed | [Flax examples] remove dependency on PyTorch training args | # What does this PR do?
Remove dependency on PyTorch `TrainingArguments` class in flax examples.
Fixes #13721 | 12-06-2021 06:11:09 | 12-06-2021 06:11:09 | @patrickvonplaten I've run the flax examples tests and made sure that they pass with this PR. Merging this now.
Feel free to leave comments if you see something strange :) <|||||>Awesome - thanks @patil-suraj |
transformers | 14,635 | closed | fix typo | # What does this PR do?
Fixes # (issue)
## Before submitting
- [x] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/master/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/master/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
| 12-06-2021 05:21:57 | 12-06-2021 05:21:57 | |
transformers | 14,634 | closed | [WIP] Add Nystromformer | # What does this PR do?
This PR adds the Nystromformer transformer model to the repository.
Paper: [https://arxiv.org/abs/2102.03902](https://arxiv.org/abs/2102.03902)
Code: [https://github.com/mlpen/Nystromformer](https://github.com/mlpen/Nystromformer)
Checkpoint: [Nystromformer sequence length 512](https://www.dropbox.com/s/8uv4f6q52oaqwkh/Nystromformer.model?dl=0)
## Who can review?
@NielsRogge
| 12-06-2021 01:39:22 | 12-06-2021 01:39:22 | |
transformers | 14,633 | closed | Dynamic Inputs for fx traced GPTNeoLM | # π Feature request
I've been looking a little into using torch.fx + HF lately, and run into a problem that the traced GPTNeoLM can't take dynamic inputs - would love to know what work would be needed to enable that, and what the current blockers are - it would be great to be able to use the traced module for inference.
I'm assuming the problem has something to do with caching of keys/values?
## Your contribution
Given guidance, I could put some work into adding this feature, yes :) | 12-05-2021 22:48:42 | 12-05-2021 22:48:42 | cc @michaelbenayoun <|||||>Hi @sdtblck ,
The issue with dynamic inputs is not related to the `past_key_values` but to dynamic control flow in the model's forward pass implementation. PR #14321, which I plan to merge at some point (hopefully before the end of the year), solves this kind of issue for the supported models.<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored. |
transformers | 14,632 | closed | to use FX - your torch version must be *exactly* 1.9, even though fx also works in later versions | [this line](https://github.com/huggingface/transformers/blob/3977b58437b8ce1ea1da6e31747d888efec2419b/src/transformers/file_utils.py#L375) checks that the torch version matches 1.9 exactly, even though fx is available in 1.9+ (i.e torch 1.10).
To use it in torch 1.10, I had to do the following, and the symbolic trace worked just fine, so I'm not sure that this strict versioning is necessary:
```python
from transformers import GPTNeoForCausalLM
import torch
from transformers.models.gpt_neo.modeling_gpt_neo import GPTNeoSelfAttention
from transformers.utils.fx import symbolic_trace
import transformers
transformers.utils.fx.is_torch_fx_available = lambda: True
model = GPTNeoForCausalLM.from_pretrained("EleutherAI/gpt-neo-125M")
g = symbolic_trace(model)
print(g)
``` | 12-05-2021 21:24:30 | 12-05-2021 21:24:30 | This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored.<|||||>cc @michaelbenayoun, what's the status of our support of torch fx?<|||||>Currently working on a PR (#14321), that should be merged soon, it should allow to use symbolic trace for torch 1.10 and above.<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored. |
transformers | 14,631 | closed | Add Sliding Window to TokenClassificationPipeline | # π Feature request
The current [implementation of TokenClassificationPipeline](https://github.com/huggingface/transformers/blob/3977b58437b8ce1ea1da6e31747d888efec2419b/src/transformers/pipelines/token_classification.py#L82) cannot do anything with text input beyond the length (measured in tokens) allowed by the model's number of positional embeddings. I think it would be useful to many users to have a setting in the pipeline that allows for a "sliding window" approach that can take documents longer than the `.config.max_position_embeddings` of the model in use.
The modified version of `TokenClassificationPipeline` could be instantiated with the optional new params `window_size` (defaults to `model.max_position_embeddings - 2` to accommodate the `[CLS]` and `[SEP]` special tokens) and `stride` (defaults to `window_size / 2`; the user can disable the sliding window by setting `stride = 0`) like so:
```python
model = ...  # some token-classification model
tokenizer = ...  # its tokenizer
pipeline = TokenClassificationPipeline(model=model, tokenizer=tokenizer, aggregation_strategy='first', stride=254)
```
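For illustration, a minimal sketch of the kind of windowing such a pipeline could apply to the token ids (the helper below is hypothetical and not part of `transformers`; here `stride` is taken as the step between window starts):
```python
def sliding_windows(token_ids, window_size=510, stride=255):
    """Yield overlapping chunks of token ids covering the whole sequence."""
    start = 0
    while start < len(token_ids):
        yield token_ids[start : start + window_size]
        if start + window_size >= len(token_ids):
            break
        start += stride
```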
## Motivation
I run tasks that involve doing named entity recognition on longer documents. I have to use my own modified subclass of `TokenClassificationPipeline` in order to do so, since the existing class does not support processing texts longer than the maximum number of position embeddings for the underlying models.
## Your contribution
I will submit a PR adding sliding window functionality to `TokenClassificationPipeline`. I have previously created a sliding window implementation of Transformers' NER Pipeline [here](https://github.com/nlpsandbox/phi-annotator-huggingface/blob/634e1736837b50a190f42c2fc741b262e88bf3ac/server/openapi_server/huggingface.py#L15), and will re-work this to work with TensorFlow and meet the code standards of HuggingFace/Transformers.
| 12-05-2021 21:13:59 | 12-05-2021 21:13:59 | Hi @cascadianblue,
This could be an interesting option to add.
IMHO:
- This should be opt-in, since at the boundaries you might still get issues if an entity spans your split. FYI, for question answering we use a stride argument to have an overlap between the chunks and give the model a chance to see each token with some context, even at the boundaries. Should we do that here, we should be prepared to handle the resolution logic. I can see the stride but not a resolution mechanism in your code.
- This would be slightly tricky to do as a simple pipeline (because of the `batch_size`), maybe something like this will be necessary: https://github.com/huggingface/transformers/pull/14225
- I can see hardcoded CLS and SEP tokens in your code; this unfortunately cannot be allowed since it is very model specific. `self.tokenizer("some text")` should always be used to add those automatically
- A PR should obviously add tests for that option, and importantly at least one test showcasing the boundary handling. For instance, "Welcome to New York city" with a model max_length of 3 should still be able to yield "New York" (if it was able to on a simple test ofc)
Feel free to start a PR and publish it even if not refined to get help/support or if unsure about anything.<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored.<|||||>Is there any continued work on this (@cascadianblue / @Narsil)?<|||||>@cgpeltier I haven't worked on this since making the feature request. You're welcome to copy my code with attribution, if it works for your case.<|||||>Are there any plans to add this feature? <|||||>@boyleconnor and I have implemented a new class called [`SlidingWindowTokenClassificationPipeline`](https://github.com/connor-qingxia/transformers/blob/71d6f938485c45fc6d494cebf501065f04242c0c/src/transformers/pipelines/token_classification.py#L496) which inherits from `TokenClassificationPipeline`.
The class essentially operates as @boyleconnor has described in this issue. First, the input text is separated into lists of tokens using the sliding window method with a given window length and stride. On the output side, we aggregate the duplicated tokens by averaging their output logits and pick the label with the highest score as the chosen NER tag.
However, `TokenClassficationPipeline` just updated a new feature while we are working on the implementation of this new class. The updated version of `TokenClassificationPipeline` allows another argument `stride` so that when the input text is longer than the `model_max_length`, this would allow the pipeline to separate input text into several chunks using the sliding window method. When it comes to output, the current code would first generate the NER output and then use the aggregate function defined [here](https://github.com/huggingface/transformers/blob/d04ec99bec8a0b432fc03ed60cea9a1a20ebaf3c/src/transformers/pipelines/token_classification.py#L331) to compare the output scores of the duplicated token and select the NER with higher score.
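For reference, a rough sketch of the averaging idea described above (illustrative only, with simplified shapes and hypothetical names):
```python
import numpy as np

def merge_window_logits(window_logits, window_offsets, seq_len, num_labels):
    """Average per-token logits across overlapping windows, then take the argmax label."""
    summed = np.zeros((seq_len, num_labels))
    counts = np.zeros((seq_len, 1))
    for logits, offset in zip(window_logits, window_offsets):
        summed[offset : offset + len(logits)] += logits
        counts[offset : offset + len(logits)] += 1
    return (summed / counts).argmax(axis=-1)
```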
Both implementations follow more or less the same spirit but differ slightly in how the final NER tag gets chosen. So I am wondering whether it is still valuable for us to create a PR that adds `SlidingWindowTokenClassificationPipeline`, given that the current `TokenClassificationPipeline` can already perform somewhat similar functionality. @Narsil <|||||>It depends.
This is an advanced feature, if you can provide a simple PR that adds support for the way you handle this, it would be a nice addition.
Also you will find https://github.com/huggingface/transformers/pull/21771 which contains more details about choices, and more importantly scripts to compare various implementations. |
transformers | 14,630 | closed | bert-base-uncased weights for BertForPreTraining | Hi,
According to the model card: https://huggingface.co/bert-base-uncased , these weights are obtained by training on two objectives (MLM and NSP) but the `config.json` has only the `BertForMaskedLM` tag. So when initializing `BertForPreTraining` model with `bert-base-uncased` weights, the following warning is printed:
`Some weights of BertForPreTraining were not initialized from the model checkpoint at bert-base-uncased and are newly initialized: ['cls.predictions.decoder.bias']`.
(My use-case would be to load `bert-base-uncased` weights and run the same pretraining process as the one described in the model card: https://huggingface.co/bert-base-uncased)
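For reference, the warning above can be reproduced with a minimal snippet like this:
```python
from transformers import BertForPreTraining

# Loads the MLM + NSP heads on top of the bert-base-uncased checkpoint
model = BertForPreTraining.from_pretrained("bert-base-uncased")
```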
@LysandreJik | 12-05-2021 19:03:54 | 12-05-2021 19:03:54 | This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored. |
transformers | 14,629 | closed | GPT-NeoX checkpoint conversion? | I found that there are conversion from gpt-neo and gpt-J to transformers. I wonder if the checkpoints trained using gpt-neox https://github.com/EleutherAI/gpt-neox could be loaded inside transformers? I think it's the same architecture as GPT-neo but I could be wrong. Thanks! | 12-05-2021 12:46:23 | 12-05-2021 12:46:23 | This is something we're working on at eleuther in preparation for an upcoming release, and already have some working code for specific configurations, but for other configurations it breaks.
If it would be useful to HF, we can post up the code :) <|||||>@sdtblck This sounds really useful and I'd like to take a look too! I wonder if you could share what configuration it works on? Right now I have a trained model that's using the 2.7B.yml and I'd like to do inference with HF (no training is fine). Thank you so much!<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored.<|||||>I wonder if there are any updates on this @sdtblck ?
Any help would be greatly appreciated!
|
transformers | 14,628 | closed | load_best_model_at_end failed due to "size mismatch" when DeepSpeed is used | ## Environment info
- `transformers` version: 4.13.0.dev0
- Platform: Linux-5.4.0-91-generic-x86_64-with-glibc2.17
- Python version: 3.8.10
- PyTorch version (GPU?): 1.10.0+cu113 (True)
- Tensorflow version (GPU?): 2.7.0 (False)
- Flax version (CPU?/GPU?/TPU?): 0.3.6 (cpu)
- Jax version: 0.2.25
- JaxLib version: 0.1.74
- Using GPU in script?: Yes, 1 GPU
- Using distributed or parallel set-up in script?: No
### Who can help
@stas00
## Information
EleutherAI/gpt-neo-1.3B and EleutherAI/gpt-j-6B
The problem arises when using: run_clm.py with DeepSpeed and --load_best_model_at_end
The tasks I am working on is: a toy fine-tuning
## To reproduce
Steps to reproduce the behavior:
1. Run run_clm.py without DeepSpeed finished successfully:
```
+ export TOKENIZERS_PARALLELISM=false
+ TOKENIZERS_PARALLELISM=false
+ CUDA_VISIBLE_DEVICES=0
+ ./run_clm.py --model_name_or_path=EleutherAI/gpt-neo-1.3B --dataset_name=wikitext --dataset_config_name=wikitext-2-raw-v1 --output_dir=output.test --overwrite_output_dir=true --do_train=true --max_train_samples=100 --do_eval=true --max_eval_samples=100 --logging_strategy=steps --logging_steps=10 --evaluation_strategy=steps --eval_steps=3 --save_strategy=steps --save_steps=3 --save_total_limit=2 --load_best_model_at_end=true --per_device_train_batch_size=16 --per_device_eval_batch_size=4 --gradient_accumulation_steps=1 --gradient_checkpointing=true --num_train_epochs=2
```
2. Run similar command with DeepSpeed failed
```
+ export TOKENIZERS_PARALLELISM=false
+ TOKENIZERS_PARALLELISM=false
+ deepspeed --num_gpus 1 run_clm.py --deepspeed=zero3.json --model_name_or_path=EleutherAI/gpt-neo-1.3B --dataset_name=wikitext --dataset_config_name=wikitext-2-raw-v1 --output_dir=output.test --overwrite_output_dir=true --do_train=true --max_train_samples=100 --do_eval=true --max_eval_samples=100 --logging_strategy=steps --logging_steps=10 --evaluation_strategy=steps --eval_steps=3 --save_strategy=steps --save_steps=3 --save_total_limit=2 --load_best_model_at_end=true --per_device_train_batch_size=16 --per_device_eval_batch_size=4 --gradient_accumulation_steps=1 --gradient_checkpointing=true --num_train_epochs=2
```
Error stack:
```
Training completed. Do not forget to share your model on huggingface.co/models =)
[INFO|trainer.py:1431] 2021-12-05 01:29:52,800 >> Loading best model from output.test/checkpoint-9 (score: 2.5225963592529297).
Traceback (most recent call last):
File "run_clm.py", line 536, in <module>
main()
File "run_clm.py", line 484, in main
train_result = trainer.train(resume_from_checkpoint=checkpoint)
File "/home/meiyang/src/transformers_fork/src/transformers/trainer.py", line 1440, in train
self._load_state_dict_in_model(state_dict)
File "/home/meiyang/src/transformers_fork/src/transformers/trainer.py", line 1472, in _load_state_dict_in_model
load_result = self.model.load_state_dict(state_dict, strict=False)
File "/home/meiyang/bin/miniconda3/envs/gptj/lib/python3.8/site-packages/torch/nn/modules/module.py", line 1482, in load_state_dict
raise RuntimeError('Error(s) in loading state_dict for {}:\n\t{}'.format(
RuntimeError: Error(s) in loading state_dict for GPTNeoForCausalLM:
size mismatch for transformer.wte.weight: copying a param with shape torch.Size([50257, 2048]) from checkpoint, the shape in current model is torch.Size([1]).
size mismatch for transformer.wpe.weight: copying a param with shape torch.Size([2048, 2048]) from checkpoint, the shape in current model is torch.Size([1]).
size mismatch for transformer.h.0.attn.attention.k_proj.weight: copying a param with shape torch.Size([2048, 2048]) from checkpoint, the shape in current model is torch.Size([1]).
size mismatch for transformer.h.0.attn.attention.v_proj.weight: copying a param with shape torch.Size([2048, 2048]) from checkpoint, the shape in current model is torch.Size([1]).
size mismatch for transformer.h.0.attn.attention.q_proj.weight: copying a param with shape torch.Size([2048, 2048]) from checkpoint, the shape in current model is torch.Size([1]).
size mismatch for transformer.h.0.attn.attention.out_proj.weight: copying a param with shape torch.Size([2048, 2048]) from checkpoint, the shape in current model is torch.Size([1]).
size mismatch for transformer.h.0.mlp.c_fc.weight: copying a param with shape torch.Size([8192, 2048]) from checkpoint, the shape in current model is torch.Size([1]).
size mismatch for transformer.h.0.mlp.c_proj.weight: copying a param with shape torch.Size([2048, 8192]) from checkpoint, the shape in current model is torch.Size([1]).
size mismatch for transformer.h.1.attn.attention.k_proj.weight: copying a param with shape torch.Size([2048, 2048]) from checkpoint, the shape in current model is torch.Size([1]).
size mismatch for transformer.h.1.attn.attention.v_proj.weight: copying a param with shape torch.Size([2048, 2048]) from checkpoint, the shape in current model is torch.Size([1]).
size mismatch for transformer.h.1.attn.attention.q_proj.weight: copying a param with shape torch.Size([2048, 2048]) from checkpoint, the shape in current model is torch.Size([1]).
size mismatch for transformer.h.1.attn.attention.out_proj.weight: copying a param with shape torch.Size([2048, 2048]) from checkpoint, the shape in current model is torch.Size([1]).
size mismatch for transformer.h.1.mlp.c_fc.weight: copying a param with shape torch.Size([8192, 2048]) from checkpoint, the shape in current model is torch.Size([1]).
size mismatch for transformer.h.1.mlp.c_proj.weight: copying a param with shape torch.Size([2048, 8192]) from checkpoint, the shape in current model is torch.Size([1]).
size mismatch for transformer.h.2.attn.attention.k_proj.weight: copying a param with shape torch.Size([2048, 2048]) from checkpoint, the shape in current model is torch.Size([1]).
size mismatch for transformer.h.2.attn.attention.v_proj.weight: copying a param with shape torch.Size([2048, 2048]) from checkpoint, the shape in current model is torch.Size([1]).
size mismatch for transformer.h.2.attn.attention.q_proj.weight: copying a param with shape torch.Size([2048, 2048]) from checkpoint, the shape in current model is torch.Size([1]).
:
```
## Expected behavior
--load_best_model_at_end should not crash with DeepSpeed
| 12-05-2021 09:52:03 | 12-05-2021 09:52:03 | cc @stas00 <|||||>Thank you for this report, @dunalduck0 - I didn't have this "path" tested (or ever used it). I can reproduce the problem with a much faster:
```
rm -r output_dir; PYTHONPATH=src USE_TF=0 CUDA_VISIBLE_DEVICES=0 deepspeed --num_gpus=1 examples/pytorch/translation/run_translation.py --model_name_or_path hf-internal-testing/tiny-random-t5 --output_dir output_dir --overwrite_output_dir --per_device_train_batch_size 1 --sortish_sampler --source_lang en --target_lang ro --dataset_name wmt16 --dataset_config "ro-en" --source_prefix "translate English to Romanian: " --deepspeed tests/deepspeed/ds_config_zero3.json --do_train --max_train_samples 3 --do_eval --max_eval_samples 1 --logging_strategy steps --logging_steps 1 --evaluation_strategy steps --eval_steps 1 --save_strategy steps --save_steps 1 --load_best_model_at_end --per_device_train_batch_size 1 --per_device_eval_batch_size 1 --num_train_epochs 1
```
will work on solving it and report back when I have something to show.<|||||>Please try with this PR https://github.com/huggingface/transformers/pull/14652 and let me know if that fixes the issue for you.
Thank you.<|||||>Thank you @stas00. What do I do to get your fix into my local box? I have a Linux bot and a local copy of fork of Huggingface/transformer. It looks like your fix has not been merged into HuggingFace/transformers<|||||>Indeed, it's not merged yet. I need to add tests first.
Here are some of the ways you can try my PR's branch:
if you have `gh` installed (https://github.com/cli/cli)
```
git clone https://github.com/huggingface/transformers
cd transformers
gh pr checkout 14652
pip install -e .
```
Or you can clone my fork and switch to that branch:
```
git clone https://github.com/stas00/transformers
cd transformers
git checkout ds-load-best-model
pip install -e .
```
**update: The PR is ready to be merged, but I want to make sure you validate it first that it indeed solves the problem for you.**
<|||||>Will do it tonight<|||||>Perfect. I will merge as soon as you give me green light.<|||||>The fix worked on my box. Thank you stas00 for quick fix.
Minor comment: the logging confused me at first. I saw DeepSpeed re-initialize everything and I almost thought the program had restarted again :P.<|||||>Thank you for testing, @dunalduck0
Once Deepspeed fixes this issue the full restart will go away |
transformers | 14,627 | closed | How to separate multiturn dialog context in blenderbot? | Hello. I want to perform multiturn dialog with blenderbot model. I followed the [documentation](https://huggingface.co/docs/transformers/master/en/model_doc/blenderbot#transformers.BlenderbotForConditionalGeneration) and here's the code:
```python
from transformers import BlenderbotForConditionalGeneration, BlenderbotTokenizer

# Example checkpoint; the original report does not say which blenderbot checkpoint was used
model_name = "facebook/blenderbot-400M-distill"
tokenizer = BlenderbotTokenizer.from_pretrained(model_name)
model = BlenderbotForConditionalGeneration.from_pretrained(model_name)

chat_history = []
uttr_sep = '</s> <s>'
while True:
    UTTR_input = input('User: ')
    chat_history.append(UTTR_input)
    UTTR = uttr_sep.join(chat_history)
    print('Input: ' + UTTR)
    inputs = tokenizer([UTTR], return_tensors='pt')
    reply_ids = model.generate(**inputs)
    REPLY = tokenizer.batch_decode(reply_ids, skip_special_tokens=True)[0]
    print("Bot:\n" + REPLY)
    chat_history.append(REPLY)
```
But the model treats the context as the user's input, and sometimes `</s> <s>` shows up in the model's responses.
In [this issue](https://github.com/huggingface/transformers/issues/9365), there are discussions about what to use to separate multiturns including `</s> <s>`, `</sep>`, `\n`, `\t`, but no conclusions.
| 12-05-2021 08:11:29 | 12-05-2021 08:11:29 | This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored.<|||||>I'm facing the same. Maybe @patrickvonplaten or @sshleifer can give us a hint on which separator to use :) <|||||>Gently pinging @Narsil and @patil-suraj here<|||||>Hi @LiShaoyu5 ,
A good first approximation is to use the ConversationalPipeline actually https://huggingface.co/docs/transformers/master/en/main_classes/pipelines#transformers.ConversationalPipeline
```python
from transformers import Conversation, pipeline

pipe = pipeline(model="facebook/blenderbot-1B-distill")
conversation_1 = Conversation("Going to the movies tonight - any suggestions?")
pipe(conversation_1)
print(conversation_1)
conversation_1.add_user_input("And what about the next day ?")
pipe(conversation_1)
print(conversation_1)
```
`ConversationalPipeline` tries to use the very special (and private) method on the tokenizer `_build_conversation_input_ids`: https://github.com/huggingface/transformers/blob/master/src/transformers/models/blenderbot/tokenization_blenderbot_fast.py#L79
Which allows for model specific things like locution separator.
This code will work for `BlenderBot` not `BlenderBotSmall` (which has a very different architecture and separation).
The source for the information of `" "` (double space) separation of locutors was taken directly from `fairseq`. I couldn't however make it into a proper link, since the code is pretty spread out. Overall good care was taken at the time to make sure the raw outputs of the model ended up being exactly similar.
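For readers who still want to build the input by hand, a sketch of what that double-space separation looks like (illustrative only; the pipeline above does this for you):
```python
from transformers import BlenderbotForConditionalGeneration, BlenderbotTokenizer

name = "facebook/blenderbot-1B-distill"
tokenizer = BlenderbotTokenizer.from_pretrained(name)
model = BlenderbotForConditionalGeneration.from_pretrained(name)

# BlenderBot separates the turns of a conversation with two spaces
history = ["Going to the movies tonight - any suggestions?", "How about a comedy?"]
inputs = tokenizer(["  ".join(history)], return_tensors="pt")
reply = tokenizer.batch_decode(model.generate(**inputs), skip_special_tokens=True)[0]
print(reply)
```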
Keep in mind, that `transformers` does not contain the full code of `blenderbot`. And while the model is the same, the actual `fairseq` implementation contains many more hardcoded things like the turn separator, some are even trickier like a banned word list, or even smaller models to assess the output of the model and discard it if it is deemed "injurious" for instance.
On the other end, using `transformers` to fine-tune should be easier.
<|||||>Hello @Narsil and thanks for your help! <|||||>i dont think this should be closed yet. first sorry for bad writing, i only have 1 hand now due to surgery so i type as i can. i tried the suggested and i think there is a bug somewhere, when using conversational pipeline the response to: "Hey, how are you doing?" is " I'm doing well, thank you. How are you? I am doing?" with blenderbot3b while using the model without the pipeline returns "I'm doing well. How are you? I am good..". I think this is strange and we should look into it, specially because the answer provided by the pipeline is deceptive. I'm also using 2 spaces when working without the pipeline...
|
transformers | 14,626 | closed | fix a typo | # What does this PR do?
Fixes # (issue)
## Before submitting
- [x] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/master/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/master/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
| 12-04-2021 21:15:40 | 12-04-2021 21:15:40 | @patil-suraj |
transformers | 14,625 | closed | Updated deberta attention | # What does this PR do?
This PR simplifies the current DeBERTa implementation that handles an attention type that was eventually not implemented. It removes unused code, simplifies the model logic branching and avoids unnecessary calculations.
<!-- Remove if not applicable -->
Fixes https://github.com/huggingface/transformers/issues/14621
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [x] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/master/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/master/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
@LysandreJik
@BigBird01
| 12-04-2021 15:00:09 | 12-04-2021 15:00:09 | Thanks @guillaume-be!
I'm worried that this would break models that have `p2p` as a `pos_att_type`. I think putting the following in the configuration initialisation would be safer:
```py
self.pos_att_type = pos_att_type if pos_att_type != "p2p" else "p2c"
```
Or something like this, still taking into account the list creation. What do you think?<|||||>Hi @LysandreJik ,
`pos_att_type` is a list of strings, that may contain `p2c`, `c2p` and `p2p` initialized as
```python
self.pos_att_type = config.pos_att_type if config.pos_att_type is not None else []
```
In the proposed implementation, the `p2p` attribute may be parsed into the list of attention types if it is provided. It will however be ignored in the attention calculations (it was already not impacting the model output, although was leading to unnecessary calculations). Providing `p2p` in the `pos_att_type` is still allowed and will not break the model, but will not impact the results.
As an illustration all 3 configurations in the example below do not lead to an error. Config 1 and 2 lead to identical outputs. Config 3 will lead to a calculation without relative attention:
```python
from transformers import DebertaForSequenceClassification, DebertaTokenizer, DebertaConfig
if __name__ == '__main__':
config = DebertaConfig.from_pretrained("microsoft/deberta-base-mnli")
config.pos_att_type = ["c2p", "p2c", "p2p"] # case 1
# config.pos_att_type = ["c2p", "p2c"] # case 2
# config.pos_att_type = ["p2p"] # case 3
model = DebertaForSequenceClassification.from_pretrained("microsoft/deberta-base-mnli", config=config)
tokenizer = DebertaTokenizer.from_pretrained('microsoft/deberta-base-mnli')
input_context = ["[CLS] I love you. [SEP] I like you. [SEP]"]
input_ids = tokenizer(input_context, return_tensors="pt", padding=True)
# warmup
output = model(input_ids=input_ids.input_ids)
print(output.logits.softmax(-1))
```
Please let me know if I misunderstood the concern, or if there is a case that would lead to breaking changes.
transformers | 14,624 | closed | updated pytorch token-classification readme | # What does this PR do?
Adds proper arguments to the readme.
Fixes # (issue)
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/master/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/master/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
@sgugger | 12-04-2021 14:15:05 | 12-04-2021 14:15:05 | |
transformers | 14,623 | closed | Auto processor fix | Reverts the `AutoProcessor` magic. | 12-04-2021 10:00:17 | 12-04-2021 10:00:17 | I'm OOO on Monday/Tuesday so feel free to merge this so that it doesn't block Wednesday's release. |
transformers | 14,622 | closed | [Quick poll] Give your opinion on the future of the Hugging Face Open Source ecosystem! | Thanks to all of you, Transformers just passed 55k :star2: this week!
Since the last survey, we've expanded our efforts into vision and speech, and the community now actively contributes to the [model hub](https://hf.co/models) and [datasets hub](https://hf.co/datasets); so this survey isn't solely focused around `transformers`, but around the entire HF ecosystem!
If you have a couple of minutes and want to participate in shaping the future of the ecosystem, please share your thoughts:
[**hf.co/oss-survey**](https://hf.co/oss-survey)
(please reply in the above feedback form rather than to this thread)
Thank you all on behalf of the HuggingFace team! π€ | 12-04-2021 09:11:34 | 12-04-2021 09:11:34 | This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored. |
transformers | 14,621 | closed | DeBERTa `p2p` attention type is not used | ### Who can help
DeBERTa: @LysandreJik
## Information
The current implementation of DeBERTa handles 3 types of attentions: `p2c`, `c2p` and `p2p` with complex branching handling the different types of attentions that may be provided by the user. It is my current understanding that the `p2p` attention currently does not impact the output of the model but leads to unnecessary intermediate calculations and unused code.
The handling of the attention types is mainly done in the `disentangled_att_bias` method at https://github.com/huggingface/transformers/blob/73ec4340ec651ca1fe4f8ead9206297a4d4ed79c/src/transformers/models/deberta/modeling_deberta.py#L656
While the option of having `p2p` in the attention type is used on lines 673, 677 and 690, the scores are only updated if either `p2c` or `c2p` are in the attention types when these types are checked on lines 683 and 700. I am proposing to simplify the attention calculation to handle only the `c2p` and `p2c` attention types, as follows:
```python
def disentangled_att_bias(self, query_layer, key_layer, relative_pos, rel_embeddings, scale_factor):
if relative_pos is None:
q = query_layer.size(-2)
relative_pos = build_relative_position(q, key_layer.size(-2), query_layer.device)
if relative_pos.dim() == 2:
relative_pos = relative_pos.unsqueeze(0).unsqueeze(0)
elif relative_pos.dim() == 3:
relative_pos = relative_pos.unsqueeze(1)
# bxhxqxk
elif relative_pos.dim() != 4:
raise ValueError(f"Relative position ids must be of dim 2 or 3 or 4. {relative_pos.dim()}")
att_span = min(max(query_layer.size(-2), key_layer.size(-2)), self.max_relative_positions)
relative_pos = relative_pos.long().to(query_layer.device)
rel_embeddings = rel_embeddings[
self.max_relative_positions - att_span : self.max_relative_positions + att_span, :
].unsqueeze(0)
score = 0
if "c2p" in self.pos_att_type:
pos_key_layer = self.pos_proj(rel_embeddings)
pos_key_layer = self.transpose_for_scores(pos_key_layer)
c2p_att = torch.matmul(query_layer, pos_key_layer.transpose(-1, -2))
c2p_pos = torch.clamp(relative_pos + att_span, 0, att_span * 2 - 1)
c2p_att = torch.gather(c2p_att, dim=-1, index=c2p_dynamic_expand(c2p_pos, query_layer, relative_pos))
score += c2p_att
# position->content
if "p2c" in self.pos_att_type:
pos_query_layer = self.pos_q_proj(rel_embeddings)
pos_query_layer = self.transpose_for_scores(pos_query_layer)
pos_query_layer /= math.sqrt(pos_query_layer.size(-1) * scale_factor)
if query_layer.size(-2) != key_layer.size(-2):
r_pos = build_relative_position(key_layer.size(-2), key_layer.size(-2), query_layer.device)
else:
r_pos = relative_pos
p2c_pos = torch.clamp(-r_pos + att_span, 0, att_span * 2 - 1)
p2c_att = torch.matmul(key_layer, pos_query_layer.transpose(-1, -2))
p2c_att = torch.gather(
p2c_att, dim=-1, index=p2c_dynamic_expand(p2c_pos, query_layer, key_layer)
).transpose(-1, -2)
if query_layer.size(-2) != key_layer.size(-2):
pos_index = relative_pos[:, :, :, 0].unsqueeze(-1)
p2c_att = torch.gather(p2c_att, dim=-2, index=pos_dynamic_expand(pos_index, p2c_att, key_layer))
score += p2c_att
return score
```
I have tested this implementation on a minimal example, and the results are identical, with the benefit of a simpler attention calculation:
```python
from transformers import DebertaForSequenceClassification, DebertaTokenizer, DebertaConfig
if __name__ == '__main__':
config = DebertaConfig.from_pretrained("microsoft/deberta-base-mnli")
config.pos_att_type = ["c2p", "p2p"]
model = DebertaForSequenceClassification.from_pretrained("microsoft/deberta-base-mnli")
tokenizer = DebertaTokenizer.from_pretrained('microsoft/deberta-base-mnli')
input_context = ["[CLS] I love you. [SEP] I like you. [SEP]"]
input_ids = tokenizer(input_context, return_tensors="pt", padding=True)
output = model(input_ids=input_ids.input_ids)
print(output.logits.softmax(-1))
```
Simplifying this calculation also allows a marginally simpler instantiation of the `DisentangledSelfAttention` module
If you would be open to it, I'd be happy to submit these changes as part of a PR.
Thank you!
| 12-04-2021 08:43:12 | 12-04-2021 08:43:12 | Thanks for reporting! Pinging @BigBird01 for advice :)<|||||>Yes, P2P is not used actually. In our previous experiments, we run with P2P for ablation study and finally we didn't apply it due to the additional cost and marginal performance improvement. Please feel free to remove it. |
transformers | 14,620 | closed | Implement head_mask for Flax BERT and other models copied from BERT | # What does this PR do?
This PR implements attention head masking for Flax BERT models, and other models copied from BERT, i.e. Roberta, BigBird, Electra. This PR thus narrows the gap between Flax and other implementations.
## Before submitting
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [x] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [x] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/master/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/master/docs#writing-source-documentation).
- [x] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
@LysandreJik
| 12-03-2021 22:49:23 | 12-03-2021 22:49:23 | Hi @patil-suraj, thank you a lot for the review. I commented on the issue with BigBird head-masking. As I mentioned, I tried to follow the current `PyTorch` implementation, and I'm definitely willing to have a look at handling head masking for different attention mechanisms.<|||||>I forgot to write this earlier, do you think you could also update the flax templates, the templates test are failing,
If not let me know, I will take care of it , thanks :) <|||||>@patil-suraj Oh, good point. I'll update the template :] |
transformers | 14,619 | closed | Getting ambiguous messages for SummarizationPipeline | When working with the SummarizationPipeline, I get errors like the following for every input:
`Your max_length is set to 50, but you input_length is only 46. You might consider decreasing max_length manually, e.g. summarizer('...', max_length=50)` | 12-03-2021 21:28:58 | 12-03-2021 21:28:58 | I tried to fix this in this PR #14618<|||||>Fixed in #14618 |
transformers | 14,618 | closed | quick fix SummarizationPipeline error messages | Fix error messages to avoid spam errors, and errors of type:
`Your max_length is set to 50, but you input_length is only 46. You might consider decreasing max_length manually, e.g. summarizer('...', max_length=50)` | 12-03-2021 21:27:21 | 12-03-2021 21:27:21 | Pinging @Narsil and @patrickvonplaten <|||||>Thanks for your contribution @NouamaneTazi :)
You have an error in your code quality; do you mind running the following commands at the root of your clone?
```
pip install -e ".[quality]"
make fixup
```
This should fix most quality issues and let you know if there are some the scripts cannot resolve.<|||||>@NouamaneTazi .
I am not sure your PR achieves your desired goal:
You are changing a different error from the one you're posting in the description.
You report `max_length is set to ....` but are modifying another warning. Actually, the error you're modifying is different and linked to using `min_length` and having an input length well below `min_length`.
FWIW, summarization usually ingests large texts and tries to generate a summary whose length is between `min_length` and `max_length`.
If `input_length` < `min_length` or `input_length` < `max_length`, then the summary is likely not to be much of a summary (the original string already fits the requirements, or worse, the summary is supposed to expand on it, since the original text is shorter than the supposed `min_length`).
Does that clarify why the warnings are here in the first place? If yes, then maybe we could clarify the warnings instead to make them self-sufficient?
The warnings originated here 9c683ef01e19c4dc1216dcd1ae3c8e7c44d7b2b9 (tried to get a better rationale for the `// 2` but doesn't seem to be explained)
<|||||>Hello @Narsil , thanks for your clarification.
Indeed, since I tried to make the quick fix using Github web IDE, it didn't save all my changes. Here are my intended changes:
* assert `max_length > min_length`
* assert `input_length > max_length` (if not, suggest `max_length = input_length // 2`); a rough sketch of both checks is shown below
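A rough sketch of what those two checks could look like (hypothetical helper, not the actual pipeline code):
```python
def check_summary_lengths(input_length, min_length, max_length):
    if max_length <= min_length:
        raise ValueError(f"max_length ({max_length}) must be greater than min_length ({min_length}).")
    if input_length <= max_length:
        print(
            f"Your max_length is set to {max_length}, but your input_length is only {input_length}. "
            f"You might consider decreasing max_length manually, e.g. max_length={input_length // 2}."
        )
```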
Do you think there is more checks to do?<|||||>I think it should be fine to check both, but would that avoid the spam you were referring to ?<|||||>Yes, because before the solution proposed wasn't clear:
`Your max_length is set to 50, but you input_length is only 46. You might consider decreasing max_length manually, e.g. summarizer('...', max_length=50)`
Whereas now, we do propose a `max_length = input_length // 2` which should stop showing the error if fixed.<|||||>Makes sense. |
transformers | 14,617 | closed | [urls to hub] Replace outdated model tags with their now-canonical pipeline types | > ### [Internal tracker π ](https://github.com/huggingface/moon-landing/pull/1558)
#### Context:
On the hub side, we are using the metadata generated by transformers (hat/tip @sgugger) to replace the pattern matching, BUT I'm also proposing to take this opportunity to remove the `lm-head`, `seq2seq`, `causal-lm`, `masked-lm` auto-generated tags (it's still possible to add them manually to a model card, of course)
(they are superseded by the pipeline tag aka. what we call the Task on hf.co/models)
This might break some stuff though, which is why your feedback will be important. | 12-03-2021 18:59:47 | 12-03-2021 18:59:47 | also cc'ing @osanseviero for info before merging! |
transformers | 14,616 | closed | Fix doc builder | Hotfix to fix the doc-builder | 12-03-2021 17:03:20 | 12-03-2021 17:03:20 | Passing! Updated and merging |
transformers | 14,615 | closed | Bigger batch size decreases the throughput. | I'm training bert-base-uncased from scratch on a single node with 8x A100. If I increase the batch size, the throughput decreases; details:
- set batch size to 256: the throughput is 7000 samples/s, GPU utilization rate 80%
- set batch size to 384 (which fills the GPU memory): the throughput is 4000 samples/s, GPU utilization rate 30%
It looks like an I/O problem (or maybe not, you can correct me)?
Any help will be appreciated~ txs | 12-03-2021 12:43:01 | 12-03-2021 12:43:01 | This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored. |
transformers | 14,614 | closed | datasets doesn't support NSP? Should I implement my own DataCollator or a function for dataset.map? | As the title says, I haven't found an NSP dataset; there is only transformers.TextDatasetForNextSentencePrediction, but it doesn't work when loading data with `load_dataset(...)`.
So do I have to implement my own DataCollator or a function for dataset.map? Which one is better?
Or am I missing something? | 12-03-2021 12:02:40 | 12-03-2021 12:02:40 | Pinging @sgugger <|||||>Yes, you have to implement your own function for preprocessing the data.<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored. |
transformers | 14,613 | closed | Running Pipeline batching code from documentation returns TypeError: _batch_encode_plus() got an unexpected keyword argument 'batch_size' on Colab. | ## Environment info
- `transformers` version: 4.12.5
- Platform: Linux-5.4.104+-x86_64-with-Ubuntu-18.04-bionic
- Python version: 3.7.12
- PyTorch version (GPU?): 1.10.0+cu111 (True)
- Tensorflow version (GPU?): 2.7.0 (True)
- Flax version (CPU?/GPU?/TPU?): not installed (NA)
- Jax version: not installed
- JaxLib version: not installed
- Using GPU in script?: Yes
- Using distributed or parallel set-up in script?: No
I have Tokenizers version `0.10.3` installed.
### Who can help
@Narsil @sgugger
## Information
I was trying to run batch inference using pipelines and datasets. I was running into issues, so I tried to just run the [example](https://huggingface.co/docs/transformers/master/main_classes/pipelines#transformers.TextClassificationPipeline) from the docs in Colab. This returns the error `TypeError: _batch_encode_plus() got an unexpected keyword argument 'batch_size'`.
## To reproduce
Steps to reproduce the behavior:
1. Run example pipeline batching code from [docs](https://huggingface.co/docs/transformers/master/main_classes/pipelines#transformers.TextClassificationPipeline)
```python
from transformers import pipeline
from transformers.pipelines.base import KeyDataset
import datasets
import tqdm
dataset = datasets.load_dataset("imdb", name="plain_text", split="unsupervised")
pipe = pipeline("text-classification", device=0)
for out in pipe(KeyDataset(dataset, "text"), batch_size=8, truncation="only_first"):
print(out)
# [{'label': 'POSITIVE', 'score': 0.9998743534088135}]
# Exactly the same output as before, but the content are passed
# as batches to the model
```
Returns the following stack trace:
```python
---------------------------------------------------------------------------
TypeError Traceback (most recent call last)
<ipython-input-60-ef0c0f3faa82> in <module>()
6 dataset = datasets.load_dataset("imdb", name="plain_text", split="unsupervised")
7 pipe = pipeline("text-classification", device=0)
----> 8 for out in pipe(KeyDataset(dataset, "text"), batch_size=8, truncation="only_first"):
9 print(out)
10 # [{'label': 'POSITIVE', 'score': 0.9998743534088135}]
5 frames
/usr/local/lib/python3.7/dist-packages/torch/_utils.py in reraise(self)
432 # instantiate since we don't know how to
433 raise RuntimeError(msg) from None
--> 434 raise exception
435
436
TypeError: Caught TypeError in DataLoader worker process 0.
Original Traceback (most recent call last):
File "/usr/local/lib/python3.7/dist-packages/torch/utils/data/_utils/worker.py", line 287, in _worker_loop
data = fetcher.fetch(index)
File "/usr/local/lib/python3.7/dist-packages/torch/utils/data/_utils/fetch.py", line 49, in fetch
data = [self.dataset[idx] for idx in possibly_batched_index]
File "/usr/local/lib/python3.7/dist-packages/torch/utils/data/_utils/fetch.py", line 49, in <listcomp>
data = [self.dataset[idx] for idx in possibly_batched_index]
File "/usr/local/lib/python3.7/dist-packages/transformers/pipelines/base.py", line 616, in __getitem__
processed = self.process(item, **self.params)
File "/usr/local/lib/python3.7/dist-packages/transformers/pipelines/text_classification.py", line 130, in preprocess
return self.tokenizer(inputs, return_tensors=return_tensors, **tokenizer_kwargs)
File "/usr/local/lib/python3.7/dist-packages/transformers/tokenization_utils_base.py", line 2442, in __call__
**kwargs,
File "/usr/local/lib/python3.7/dist-packages/transformers/tokenization_utils_base.py", line 2512, in encode_plus
**kwargs,
File "/usr/local/lib/python3.7/dist-packages/transformers/tokenization_utils_fast.py", line 496, in _encode_plus
**kwargs,
TypeError: _batch_encode_plus() got an unexpected keyword argument 'batch_size'
```
## Expected behavior
That the pipeline successfully uses batching for inference.
I suspect this is some user error on my part but since others might try and run the documentation code in Colab I thought it was worth opening an issue.
I haven't had a chance to test this in a non-Colab environment yet. | 12-03-2021 11:56:03 | 12-03-2021 11:56:03 | Hello! Could you install from the `master` branch and give it another try?
```py
pip uninstall transformers
pip install git+https://github.com/huggingface/transformers
# Restart colab runtime
```
The current documentation is `master` as we're trying out the new frontend; thanks for your understanding!<|||||>> Hello! Could you install from the `master` branch and give it another try?
>
> ```python
> pip uninstall transformers
> pip install git+https://github.com/huggingface/transformers
> # Restart colab runtime
> ```
>
> The current documentation is `master` as we're trying out the new frontend; thanks for your understanding!
This works without any errors now. Thanks so much, and apologies; I should have spotted that the docs are for the `master` branch. <|||||>No problem at all, the docs are usually for the stable release, but for this week only we're switching to `master` to try out the new docs.
By the way, what's your feeling about the updated frontend of the documentation?<|||||>> No problem at all, the docs are usually for the stable release, but for this week only we're switching to `master` to try out the new docs.
>
> By the way, what's your feeling about the updated frontend of the documentation?
I mostly like the updated frontend. One thing I find a bit harder to read is the arguments sections. For example the arguments for `class transformers.TextClassificationPipeline`:
<img width="760" alt="Screenshot 2021-12-03 at 13 47 30" src="https://user-images.githubusercontent.com/8995957/144613158-153f2639-864b-4a61-ae23-defefe29088d.png">
I find it a bit harder to parse this because the `model`, `tokenizer`, `modelcard`, etc. arguments are not that separated. This makes it a bit harder for me to quickly see all of the arguments without reading through. I would find it easier to read if it were split into new lines like:
```
Arguments:
- model: ajfnjann
- tokenizer: hahdhf
- modelcard: thewhrh
```
This might be more to do with the formatting of the underlying docs than with the frontend, though, and this is obviously a bit subjective; others might find this format better.
<|||||>Ah that's a bug in the formatting of the docstrings of those classes, it should not be displayed like this! Will fix that this morning :-)<|||||>@Narsil @sgugger thanks both for your help. I will close this now - hopefully, anyone else running into the same issue this week will stumble on this issue. <|||||>Thanks for the bug report @davanstrien!<|||||>I also encountered the same problem with the new fast tokenizer. Use the old one (`use_fast=False`).
```python
tokenizer = AutoTokenizer.from_pretrained(model_type, use_fast=False)
pipe = TextClassificationPipeline(
model=model,
tokenizer=tokenizer,
batch_size=batch_size,
device=-1
)
```<|||||>`batch_size` needs to be used on the `pipe` call, not on the `pipeline` call.
```python
tokenizer = AutoTokenizer.from_pretrained(model_type, use_fast=False)
pipe = TextClassificationPipeline(
model=model,
tokenizer=tokenizer,
device=-1
)
for out in pipe(data, batch_size=batch_size):
# Do something
```
The error and overall UX could be improved for sure (either support this call type or raise proper error).<|||||>> `batch_size` needs to be used on the `pipe` call, not on the `pipeline` call.
>
> ```python
> tokenizer = AutoTokenizer.from_pretrained(model_type, use_fast=False)
> pipe = TextClassificationPipeline(
> model=model,
> tokenizer=tokenizer,
> device=-1
> )
> for out in pipe(data, batch_size=batch_size):
> # Do something
> ```
>
> The error and overall UX could be improved for sure (either support this call type or raise proper error).
Thanks! The documentation is a bit confusing. |
transformers | 14,612 | closed | Scores returned by beam search are not useful | ## Environment info
- `transformers` version: 4.12.3
- Platform: Linux-3.10.0-1160.45.1.el7.x86_64-x86_64-with-centos-7.9.2009-Core
- Python version: 3.7.11
- PyTorch version (GPU?): 1.9.0 (True)
- Tensorflow version (GPU?): not installed (NA)
- Flax version (CPU?/GPU?/TPU?): not installed (NA)
- Jax version: not installed
- JaxLib version: not installed
- Using GPU in script?: yes
- Using distributed or parallel set-up in script?: yes
-
### Who can help
- Text generation: @patrickvonplaten
## Information
The typical use case for specifying `output_scores` when using `model.generate` is to get the scores of each of the tokens returned in `sequences` (see #14086). However, that is not what is being returned.
@patrickvonplaten gives an example [here](https://discuss.huggingface.co/t/generation-probabilities-how-to-compute-probabilities-of-output-scores-for-gpt2/3175) of how to calculate the scores, but this is not correct.
In beam search, it is false to assume that the token `sequences[i, j]` was chosen from the probability distribution `scores[j-1][i]`. It could have come from any of the other beams. Therefore, to get the actual per-token probabilities, one would have to completely reimplement beam search, similar to #14065 (although as I point out below, that solution is also not correct).
For sampling search, finding the per-token probabilities after the fact is completely impossible.
The returned scores right now are only useful for visualizing the beam search algorithm and not for the use-case that most users would want, i.e. to know the probabilities for each token in `sequences`.
## To reproduce
Adapted from [Patrick's example](https://discuss.huggingface.co/t/generation-probabilities-how-to-compute-probabilities-of-output-scores-for-gpt2/3175)
```python
import transformers
import datasets
import torch
model = transformers.AutoModelForSeq2SeqLM.from_pretrained("lidiya/bart-base-samsum")
tok = transformers.AutoTokenizer.from_pretrained("lidiya/bart-base-samsum")
ds = datasets.load_dataset("samsum")["validation"]
bsz = 10
num_beams = 4
batch = tok(ds[:bsz]["dialogue"], padding=True, truncation=True, return_tensors="pt")
output = model.generate(
**batch, num_beams=num_beams, return_dict_in_generate=True, output_scores=True
)
tokens = output.sequences
seq_len = tokens.size(1)
assert list(tokens.size()) == [bsz, seq_len]
# No softmax is necessary, the scores are already logprobs (if the doc is correct)
scores = torch.stack(
output.scores, dim=1
) # Not dim=-1 so the dimensions are in the same order as in tokens
vocab_size = scores.size(2)
assert list(scores.size()) == [bsz * num_beams, seq_len - 1, vocab_size]
per_token_scores = torch.gather(
# Step size of num_beams, so at least the gather
# fetches probabilities from the correct batch index
input=scores[::num_beams],
# beam_search also returns the start token even though it is not an output from the model
# that token has no probability, so we need to start at sequence index 1
index=tokens[:, 1:, None],
dim=2,
).squeeze(2)
# This result has the correct shape: (bsz, seq_len-1),
# but it is *not* actually the cumulative probabilities for each token in tokens,
# because tokens[0, i] did not necessarily come from scores[0, i-1],
# it could have come from another beam that assigned it a higher probability.
# Without reimplementing beam search, we cannot get the per-token probabilities.
# To actually get the per-token probabilities (without reimplementing beam search):
logits = model.forward(
**batch,
decoder_input_ids=tokens,
decoder_attention_mask=tokens.ne(tok.pad_token_id)
).logits
actual_per_token_scores = torch.gather(
input=logits.log_softmax(-1), index=tokens[:, 1:, None], dim=2
).squeeze(2)
assert not torch.allclose(per_token_scores, actual_per_token_scores.cumsum(1))
# Two naive possibilities to get the probabilities which will work often, but not always:
# The reason is that the same token may be chosen by two different beams, so without
# more history (and effectively a full re-implementation of beam search), we cannot get an accurate
# result in that case.
# To actually get the per-token probabilities
max_scores = scores.view(bsz, num_beams, seq_len - 1, vocab_size).max(1).values
maybe_token_scores = torch.gather(
input=max_scores, index=tokens[:, 1:, None], dim=2
).squeeze(2)
# Alternative
maybe_token_scores = []
for i in range(1, seq_len):
values, indices = torch.topk(
scores[:, i - 1].view(bsz, num_beams * vocab_size), k=num_beams, dim=1
)
selected_tokens = indices % vocab_size
assert list(selected_tokens.size()) == list(values.size()) == [bsz, num_beams]
# Multiply instead of mask select because mask select flattens the array
maybe_token_scores.append((values * selected_tokens.eq(tokens[:, i])).max(1).values)
maybe_token_scores = torch.stack(maybe_token_scores, dim=1)
# Right now the only definitive way to get per-token probabilities
logits = model.forward(
**batch,
decoder_input_ids=tokens,
decoder_attention_mask=tokens.ne(tok.pad_token_id)
).logits
actual_per_token_scores = torch.gather(
input=logits.log_softmax(-1), index=tokens[:, 1:, None], dim=2
).squeeze(2)
assert not torch.allclose(per_token_scores, actual_per_token_scores.cumsum(1))
```
## Expected behavior
Either `scores` or a new field in the beam search output should return the probability for each token in `sequences`. Because of padding, these sequences would be of different length. | 12-03-2021 11:05:33 | 12-03-2021 11:05:33 | For anyone else looking for the same thing: I edited the original post to add some different ways to get the actual probabilities. I think the forward pass (the last option) is the only one guaranteed to be correct, but also the slowest.
It would be much easier if we just got this output out of beam search.<|||||>@felix-schneider thanks for your write-up here. Could you give more detail regarding the statement
> "but this is not correct"
In the example of the forum post, I do the following:
1. Return all output logits at every step (which can be seen as `softmax^(-1)(prob)`)
2. Compute the softmax to get the probability of token i being sampled at time-step t
3. Gather the probabilities of the generated tokens
4. Take the product of the probabilities to get the overall probability
=> which step is not correct here? And how?
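In code, for the sampling case, those four steps amount to roughly the following (illustrative sketch; the variable names are made up and `model`/`inputs` are assumed to exist):
```python
import torch

out = model.generate(**inputs, do_sample=True, max_new_tokens=20,
                     return_dict_in_generate=True, output_scores=True)
scores = torch.stack(out.scores, dim=1)             # step 1: one (batch, vocab) score tensor per generated token
logprobs = scores.log_softmax(-1)                   # step 2: turn the scores into (log-)probabilities
gen_tokens = out.sequences[:, -scores.shape[1]:]    # the freshly generated tokens (prompt stripped)
token_logprobs = logprobs.gather(2, gen_tokens[:, :, None]).squeeze(-1)  # step 3: gather chosen tokens
sequence_logprob = token_logprobs.sum(-1)           # step 4: product of probs == sum of log-probs
```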
Note that the example is just for sampling - not for beam search<|||||>For beam search it is true that this operation is not that simple because it's hard to know from which beam the output tokens are coming from at each time step - one could compute this by taking the `topk` values of the `scores` output.
However in order to facilitate the computation for the beam scores I'd be happy to also return the `beam_idx` to make it easier to map the generated tokens to the corresponding score.<|||||>See: https://discuss.huggingface.co/t/generation-probabilities-how-to-compute-probabilities-of-output-scores-for-gpt2/3175/15?u=patrickvonplaten
I'll improve the behavior for beam search<|||||>https://github.com/huggingface/transformers/pull/14654<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored. |
transformers | 14,611 | closed | Got extra_id_%d when generating texts, and %d is negative. | Mission: Translate Graph to English.
In generation_utils.py _generate_beam_search()
After getting next_scores and next_tokens, i was tring to generate texts using the next_tokens with the second highest next_scores.
But i got ids which converts to be <extra_id_%d>, and %d is a negative number.
Anyone got some ideas?
Thanks. | 12-03-2021 10:43:10 | 12-03-2021 10:43:10 | This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored. |
transformers | 14,610 | closed | 2022 is the year of multi-modality | Long overdue update of the README | 12-03-2021 10:34:59 | 12-03-2021 10:34:59 | Thank you all for your contributions to the README! |
transformers | 14,609 | closed | Unable to load mT5 with MT5Model.from_pretrained("google/mt5-small") | ## Environment info
- `transformers` version: 4.2.2
- Platform: win10
- Python version: 3.6
- PyTorch version (GPU?): pytorch==1.8.1
- Tensorflow version (GPU?):
- Using GPU in script?:
- Using distributed or parallel set-up in script?:
### Who can help
@LysandreJik, @patrickvonplaten, @patil-suraj
## Information
Model I am using: mT5. Code:
```python
from transformers import MT5Model
from transformers import T5Tokenizer
tokenizer = T5Tokenizer.from_pretrained("google/mt5-small")
model = MT5Model.from_pretrained("google/mt5-small")
```
When I use the above code, the following error appears:
Some weights of the model checkpoint at google/mt5-small were not used when initializing MT5Model: ['lm_head.weight']
- This IS expected if you are initializing MT5Model from the checkpoint of a model trained on another task or with another architecture (e.g. initializing a BertForSequenceClassification model from a BertForPreTraining model).
- This IS NOT expected if you are initializing MT5Model from the checkpoint of a model that you expect to be exactly identical (initializing a BertForSequenceClassification model from a BertForSequenceClassification model).
https://huggingface.co/docs/transformers/model_doc/mt5
How do I load the mt5-base (or mt5-small) model correctly? Thanks. | 12-03-2021 07:41:39 | 12-03-2021 07:41:39 | Hi @Xuanfang1121 !
You are loading `MT5Model` , which loads the base model without the `lm_head`, so when loading the pre-trained weights the `lm_head.weight` is ignored.
If you want to load the model for conditional generation training or inference then you should use the `MT5ForConditionalGeneration` class which has the `lm_head`.<|||||>> Hi @Xuanfang1121 !
>
> You are loading `MT5Model` , which loads the base model without the `lm_head`, so when loading the pre-trained weights the `lm_head.weight` is ignored.
>
> If you want to load the model for conditional generation training or inference then you should use the `MT5ForConditionalGeneration` class which has the `lm_head`.
ok,thanks |
transformers | 14,608 | open | [Benchmark] HF Trainer on RTX-3090 | # Benchmarking `transformers` w/ HF Trainer on RTX-3090
We are going to use a special benchmarking tool that will do all the work for us. https://github.com/huggingface/transformers/pull/14934
This is the index post and specific benchmarks are in their own posts below:
1. [fp16 vs bf16 vs tf32 vs fp32](https://github.com/huggingface/transformers/issues/14608#issuecomment-1004390803)
2. [gradient accumulation steps](https://github.com/huggingface/transformers/issues/14608#issuecomment-1004392537)
3. [gradient checkpointing](https://github.com/huggingface/transformers/issues/14608#issuecomment-1004422281)
4. [batch size](https://github.com/huggingface/transformers/issues/14608#issuecomment-1004392537)
5. [optimizers](https://github.com/huggingface/transformers/issues/14608#issuecomment-1005219385)
6. [combining winning strategies](https://github.com/huggingface/transformers/issues/14608#issuecomment-1005229426) **~2x speed improvement!**
7. [RTX-3090 vs A100](https://github.com/huggingface/transformers/issues/15026#issuecomment-1005235845)
See also the [same benchmarks for A100](https://github.com/huggingface/transformers/issues/15026)
TODO:
- other suggestions?
Note that each benchmark was run only once, so multiple runs and averaging is probably going to give slightly different results. The purpose here though is to see relative differences roughly and not try to give an exact number.
| 12-03-2021 05:56:53 | 12-03-2021 05:56:53 | ### Diagnostics of T5 slowness under TF32
By Eddie Yan (re-pasted from torch slack)
Here is update/summary after investigating the T5 model further:
The lack of speedup in TF32 can be attributed to bottlenecks in non-GEMM ops (e.g., pointwise ops in custom unfused LayerNorm and custom AdamW optimizer). Without optimization, we see that the custom LayerNorm is comparable in wall clock time to GEMMs due to this bottleneck (first image attachment).
Zoomed-in profile on A6000 showing the custom "T5LayerNorm" being very expensive compared to the cuBLAS GEMM (in green):
(profile screenshot)
When we zoom out in the profile, the ratio of pointwise ops in the optimizer to compute is further exacerbated by the small batch size of the large model on the 3090; this small batch size means that the GEMM compute intensity is low for the number of pointwise ops incurred by the optimizer, which will update every parameter in the model along with running statistics for a relatively small number of training examples. The second image attachment shows this issue, where the optimizer step wall clock time is comparable to an entire backward step (220+ms vs. 260 ms)!
Zoomed-out profile on A6000 showing the expensive optimizer step (AdamW) relative to the backward pass:
(profile screenshot)
On A6000, a GPU comparable to 3090 in terms of architecture, we've done a study to incrementally gauge the performance of optimizations for TF32 vs. fp32. First, we replaced the custom LayerNorm implementation with PyTorch's native implementation (which despite being different should be good for a rough estimate). While the native implementation is far from optimal, this change yields 38.3 samples/s with TF32 vs. 34.3 with fp32, a ~10% speedup. Turning on gradient accumulation improves performance dramatically as the optimizer to forward-backward compute ratio is abated, but more importantly TF32 is now ~20% faster than fp32 at 90.5 samples/s to 75.1 samples/s for fp32.
Additionally, replacing the custom AdamW with a fused implementation from apex (thanks to @Kevin Stephano for suggesting this) yields another small improvement to 92.4 samples/s with TF32 to 75.3 for fp32. As we've seen that the lack of improvement can be attributed to a low ratio of GEMMs to pointwise ops (especially in the optimizer), another way to improve this ratio is to increase the batch size vs. gradient accumulation. This approach really shines on A100, which even with 40GiB of memory allows the batch size to be increased to 64. Perhaps due to higher TF32 throughput, we see that the speedup here is dramatic: 207.4 samples/s to 73.1 samples/s for fp32, an over 2x speedup.
----------------------------
For reference here is the custom "T5LayerNorm" which also ignores the input data types and casts to float32: https://github.com/huggingface/transformers/blob/37ed3ab719f10dc00bf63ac343b441bf78bb1eee/src/transformers/models/t5/modeling_t5.py#L239 (this is RMS norm and not a normal layer norm, hence we don't have any fast kernels at the moment)
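Paraphrased, the gist of that custom RMS norm is roughly the following (not the verbatim source; see the link above for the real code):
```python
import torch
from torch import nn

class T5LayerNormSketch(nn.Module):
    """Rough paraphrase of the custom T5 RMS norm discussed above."""

    def __init__(self, hidden_size, eps=1e-6):
        super().__init__()
        self.weight = nn.Parameter(torch.ones(hidden_size))
        self.variance_epsilon = eps

    def forward(self, hidden_states):
        # RMS norm: no mean subtraction and no bias (unlike nn.LayerNorm),
        # and the variance is accumulated in float32 regardless of the input dtype
        variance = hidden_states.to(torch.float32).pow(2).mean(-1, keepdim=True)
        hidden_states = hidden_states * torch.rsqrt(variance + self.variance_epsilon)
        return self.weight * hidden_states
```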
-----
Here is another analysis of t5 but for fp16 https://github.com/NVIDIA/apex/issues/1271
----
There is also an attempt to address this by getting a fused RMSNorm kernel implemented here: https://github.com/NVIDIA/apex/issues/1271
### Notes on Ampere cards performance comparison
From Christian Sarofeen (re-pasted from torch slack)
Whitepapers on Ampere chips:
- [GA100](https://images.nvidia.com/aem-dam/en-zz/Solutions/data-center/nvidia-ampere-architecture-whitepaper.pdf) (A100)
- [GA102](https://images.nvidia.com/aem-dam/en-zz/Solutions/geforce/ampere/pdf/NVIDIA-ampere-GA102-GPU-Architecture-Whitepaper-V1.pdf) (RTX-3080, RTX-3090) (consumer grade)
- GA104 (RTX-3070)
The following numbers are TFLOPs
RTX-3080:
- FP32: 29.8
- TF32: 29.8
- FP16: 59.5 (forget sparsity at the moment)
- BF16: 59.5
RTX-3090: (~1.17 more powerful than 3080)
- FP32: 34.87 (29.8*1.17)
- TF32: 34.87 (29.8*1.17)
- FP16: 69.61 (59.5*1.17)
- BF16: 69.61 (59.5*1.17)
A100:
- FP32: 19.5
- TF32: 156
- FP16: 312
- BF16: 312
Performance comparison:
SM = Streaming multi-processors on the GPU.
[GeForce 30 series](https://en.wikipedia.org/wiki/GeForce_30_series)
If you just look at the SM count it's probably the easiest way to scale. So RTX 3080 has 68 SMs, and 3090 has 82 SMs. Then the clock speeds it's 1440 (1710 with boost) vs 1395 (1695). So the ratio of their compute is 68 * 1440 : 82 * 1395 if we just use the base clocks.
3090 should be more SMs, slightly slower clock. When you have a significantly bigger chip it's common to reduce the clock speed so the overall power consumption isn't over some set budget.
They're both GA102 chips, so more SMs equates pretty trivially to more compute. The 3070, on the other hand, is a GA104, so a comparison to that isn't as straightforward.
You can't straightforwardly compare RTX with A100 like you can within the same chip family. So Wikipedia was fine to go from 3080 -> 3090, because they're based on the same chip, GA102. A100 is a GA100, so you can't do a simple comparison like that.
Main interest: benchmarking the new --bf16 and --tf32 on Ampere/RTX-3090, comparatively to fp16 and fp32 modes.
- bf16 is `autocast(dtype=torch.bfloat16)`
- tf32 is `torch.backends.cuda.matmul.allow_tf32 = True`
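In plain PyTorch terms these switches roughly correspond to the following (sketch, not the Trainer's exact code; `model` and `batch` are assumed to be prepared already):
```python
import torch

# what --tf32 toggles: TensorFloat-32 matmuls on Ampere GPUs
torch.backends.cuda.matmul.allow_tf32 = True

# what --bf16 / --fp16 enable: mixed-precision autocast around the forward pass
with torch.autocast("cuda", dtype=torch.bfloat16):  # or dtype=torch.float16
    loss = model(**batch).loss
```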
```
Datetime : 2021-12-29 16:37:16
Software:
transformers: 4.16.0.dev0
torch : 1.10.1
cuda : 11.3
python : 3.8.11
Hardware:
1 GPUs : NVIDIA GeForce RTX 3090, 23.70GB
```
Note: to get the best performance make sure you have 2 independent 12V PCIe rails plugged into the card and not 2 splits of the same rail.
## Benchmark
The benchmark uses 3 different t5 models, and at the end of the section also gpt2. For t5 the main script is:
```
CUDA_VISIBLE_DEVICES=0 python \
examples/pytorch/translation/run_translation.py --model_name_or_path t5-small \
--output_dir output_dir --do_train --label_smoothing 0.1 --logging_strategy no \
--save_strategy no --per_device_train_batch_size 32 --max_source_length 512 \
--max_target_length 512 --num_train_epochs 1 --overwrite_output_dir \
--source_lang en --target_lang ro --dataset_name wmt16 --dataset_config "ro-en" \
--source_prefix "translate English to Romanian: " --warmup_steps 50 \
--max_train_samples 20000 --dataloader_num_workers 2
```
and now adding one of:
```
--tf32 0 # fp32
--tf32 0 --fp16
--tf32 0 --bf16
--tf32 1
--tf32 1 --fp16
--tf32 1 --bf16
```
But we are going to use a special benchmarking tool that will do all the work for us. https://github.com/huggingface/transformers/pull/14934
Important notes:
1. `--tf32 0 --fp16 0` combo is just fp32 (which is the default mode - we don't have this option per se)
2. I changed `--per_device_train_batch_size` in the base command from 32 (`t5-small`) to 16 (`t5-base`) to 8 (`t5-large`) to be able to fit into the GPU memory while keeping it as occupied as possible.
3. I changed `--max_train_samples` in the base command from 20k (`t5-small`) to 10k (`t5-base`) to 5k (`t5-large`) to give each run about 1-3min of run time so that the benchmark doesn't take too too long, but is long enough to put strain on the card.
## Benchmark 1: t5-small
```
CUDA_VISIBLE_DEVICES=0 python ./scripts/benchmark/trainer-benchmark.py \
--base-cmd \
' examples/pytorch/translation/run_translation.py --model_name_or_path t5-small \
--output_dir output_dir --do_train --label_smoothing 0.1 --logging_strategy no \
--save_strategy no --per_device_train_batch_size 32 --max_source_length 512 \
--max_target_length 512 --num_train_epochs 1 --overwrite_output_dir \
--source_lang en --target_lang ro --dataset_name wmt16 --dataset_config "ro-en" \
--source_prefix "translate English to Romanian: " --warmup_steps 50 \
--max_train_samples 20000 --dataloader_num_workers 2 ' \
--target-metric-key train_samples_per_second --repeat-times 1 --variations \
'|--fp16|--bf16' '--tf32 0|--tf32 1' --report-metric-keys train_loss \
--repeat-times 1 --base-variation '--tf32 0'
```
| Variation | Train<br>samples<br>per<br>second | Diff<br>% | Train<br>loss |
|:----------------|------------------------------------:|------------:|----------------:|
| --tf32 0 | 286.07 | 0 | 2.51 |
| --tf32 1 | 342.82 | 20 | 2.51 |
| --fp16 --tf32 0 | 422.07 | 48 | 2.51 |
| --fp16 --tf32 1 | 423.18 | 48 | 2.51 |
| --bf16 --tf32 0 | 415.93 | 45 | 2.52 |
| --bf16 --tf32 1 | 418.51 | 46 | 2.52 |
Conclusions:
- bf16 is 2-3% slower than fp16
- tf32 makes 0% impact on bf16 and fp16 modes
- tf32 is 20% faster than fp32, but otherwise doesn't help much with performance
## Benchmark 2: t5-base
```
CUDA_VISIBLE_DEVICES=0 python ./scripts/benchmark/trainer-benchmark.py \
--base-cmd \
' examples/pytorch/translation/run_translation.py --model_name_or_path t5-base \
--output_dir output_dir --do_train --label_smoothing 0.1 --logging_strategy no \
--save_strategy no --per_device_train_batch_size 16 --max_source_length 512 \
--max_target_length 512 --num_train_epochs 1 --overwrite_output_dir \
--source_lang en --target_lang ro --dataset_name wmt16 --dataset_config "ro-en" \
--source_prefix "translate English to Romanian: " --warmup_steps 50 \
--max_train_samples 10000 --dataloader_num_workers 2 ' \
--target-metric-key train_samples_per_second --repeat-times 1 --variations \
'|--fp16|--bf16' '--tf32 0|--tf32 1' --report-metric-keys train_loss \
--repeat-times 1 --base-variation '--tf32 0'
```
| Variation | Train<br>samples<br>per<br>second | Diff<br>% | Train<br>loss |
|:----------------|------------------------------------:|------------:|----------------:|
| --tf32 0 | 95.69 | 0 | 2.20 |
| --tf32 1 | 116.58 | 22 | 2.20 |
| --fp16 --tf32 0 | 131.98 | 38 | 2.20 |
| --fp16 --tf32 1 | 132.84 | 39 | 2.20 |
| --bf16 --tf32 0 | 135.47 | 42 | 2.21 |
| --bf16 --tf32 1 | 135.86 | 42 | 2.21 |
Conclusions:
- similar to t5-small
- but bf16 is 2-3% faster than fp16!
## Benchmark 3: t5-large
```
CUDA_VISIBLE_DEVICES=0 python ./scripts/benchmark/trainer-benchmark.py \
--base-cmd \
' examples/pytorch/translation/run_translation.py --model_name_or_path t5-large \
--output_dir output_dir --do_train --label_smoothing 0.1 --logging_strategy no \
--save_strategy no --per_device_train_batch_size 8 --max_source_length 512 \
--max_target_length 512 --num_train_epochs 1 --overwrite_output_dir \
--source_lang en --target_lang ro --dataset_name wmt16 --dataset_config "ro-en" \
--source_prefix "translate English to Romanian: " --warmup_steps 50 \
--max_train_samples 5000 --dataloader_num_workers 2 ' \
--target-metric-key train_samples_per_second --repeat-times 1 --variations \
'|--fp16|--bf16' '--tf32 0|--tf32 1' --report-metric-keys train_loss \
--repeat-times 1 --base-variation '--tf32 0'
```
| Variation | Train<br>samples<br>per<br>second | Diff<br>% | Train<br>loss |
|:----------------|------------------------------------:|------------:|----------------:|
| --tf32 0 | 31.88 | 0 | 2.03 |
| --tf32 1 | 35.66 | 12 | 2.03 |
| --fp16 --tf32 0 | 47.34 | 49 | 0.00 |
| --fp16 --tf32 1 | 48.08 | 51 | 0.00 |
| --bf16 --tf32 0 | 35.07 | 10 | 2.04 |
| --bf16 --tf32 1 | 35.13 | 10 | 2.04 |
Conclusions:
- **fp16 overflows here** (loss=0). I originally wasn't printing the loss and thus missed this and was getting a much faster outcome under fp16! But it was totally wrong. (And this is a very well-[known issue](https://github.com/huggingface/transformers/pull/10956) with many bf16-pretrained models that people attempt to finetune in fp16.)
- tf32 makes 0% impact on bf16 mode
- tf32 is only 12% faster than fp32
## Benchmark 4: gpt2
Let's try a different architecture.
```
CUDA_VISIBLE_DEVICES=0 python ./scripts/benchmark/trainer-benchmark.py \
--base-cmd \
' examples/pytorch/language-modeling/run_clm.py --model_name_or_path gpt2 \
--dataset_name wikitext --dataset_config_name wikitext-2-raw-v1 \
--logging_strategy no --save_strategy no --do_train --max_train_samples 2500 \
--per_device_train_batch_size 8 --num_train_epochs 1 --warmup_steps 8 \
--block_size 512 --report_to none ' \
--target-metric-key train_samples_per_second --repeat-times 1 --variations \
'|--fp16|--bf16' '--tf32 0|--tf32 1' --report-metric-keys train_loss \
--repeat-times 1 --base-variation '--tf32 0'
```
| Variation | Train<br>samples<br>per<br>second | Diff<br>% | Train<br>loss |
|:----------------|------------------------------------:|------------:|----------------:|
| --tf32 0 | 26.96 | 0 | 3.36 |
| --tf32 1 | 33.43 | 24 | 3.36 |
| --fp16 --tf32 0 | 42.46 | 58 | 3.36 |
| --fp16 --tf32 1 | 42.43 | 57 | 3.36 |
| --bf16 --tf32 0 | 42.43 | 57 | 3.37 |
| --bf16 --tf32 1 | 42.42 | 57 | 3.37 |
Conclusions:
- tf32 still far from suggested huge speedups - only 24%
- as before tf32 makes no difference for fp16/bf16
- fp16/bf16 perform on par here and are 57% faster than fp32
## Benchmark 5: gpt2-medium
and now with `gpt-medium` (~3x larger than `gpt`):
```
CUDA_VISIBLE_DEVICES=0 python ./scripts/benchmark/trainer-benchmark.py \
--base-cmd \
' examples/pytorch/language-modeling/run_clm.py --model_name_or_path gpt2-medium \
--dataset_name wikitext --dataset_config_name wikitext-2-raw-v1 \
--logging_strategy no --save_strategy no --do_train --max_train_samples 1200 \
--per_device_train_batch_size 4 --num_train_epochs 1 --warmup_steps 8 \
--block_size 512 --report_to none ' \
--target-metric-key train_samples_per_second --repeat-times 1 --variations \
'|--fp16|--bf16' '--tf32 0|--tf32 1' --report-metric-keys train_loss \
--repeat-times 1 --base-variation '--tf32 0'
```
| Variation | Train<br>samples<br>per<br>second | Diff<br>% | Train<br>loss |
|:----------------|------------------------------------:|------------:|----------------:|
| --tf32 0 | 9.23 | 0 | 3.02 |
| --tf32 1 | 11.48 | 24 | 3.01 |
| --fp16 --tf32 0 | 14.50 | 57 | 3.02 |
| --fp16 --tf32 1 | 14.52 | 57 | 3.02 |
| --bf16 --tf32 0 | 14.56 | 58 | 3.02 |
| --bf16 --tf32 1 | 14.55 | 58 | 3.02 |
Conclusions:
- % diff is same as the smaller `gpt2` model
<|||||># gradient accumulation steps
Let's choose `t5-base` model to test with as it's pretty large yet doesn't overflow like t5-large.
Let's measure `--gradient_accumulation_steps` 1,2,4,8,16 with different precision configurations.
*** Results:
| Variation | Train<br>samples<br>per<br>second | Diff<br>% | Train<br>loss |
|:-------------------------------------------------|------------------------------------:|------------:|----------------:|
| --gradient_accumulation_steps 1 --tf32 0 | 96.17 | 0 | 2.20 |
| --gradient_accumulation_steps 1 --tf32 1 | 116.57 | 21 | 2.20 |
| --gradient_accumulation_steps 1 --tf32 0 --fp16 | 132.64 | 38 | 2.20 |
| --gradient_accumulation_steps 1 --tf32 0 --bf16 | 136.35 | 42 | 2.21 |
| --gradient_accumulation_steps 2 --tf32 0 | 103.83 | 8 | 2.28 |
| --gradient_accumulation_steps 2 --tf32 1 | 130.11 | 35 | 2.28 |
| --gradient_accumulation_steps 2 --tf32 0 --fp16 | 153.09 | 59 | 2.28 |
| --gradient_accumulation_steps 2 --tf32 0 --bf16 | 156.70 | 63 | 2.29 |
| --gradient_accumulation_steps 4 --tf32 0 | 108.48 | 13 | 2.39 |
| --gradient_accumulation_steps 4 --tf32 1 | 137.75 | 43 | 2.40 |
| --gradient_accumulation_steps 4 --tf32 0 --fp16 | 164.48 | 71 | 2.40 |
| --gradient_accumulation_steps 4 --tf32 0 --bf16 | 170.01 | 77 | 2.42 |
| --gradient_accumulation_steps 8 --tf32 0 | 111.14 | 16 | 2.57 |
| --gradient_accumulation_steps 8 --tf32 1 | 141.59 | 47 | 2.57 |
| --gradient_accumulation_steps 8 --tf32 0 --fp16 | 170.77 | 78 | 2.57 |
| --gradient_accumulation_steps 8 --tf32 0 --bf16 | 177.59 | 85 | 2.62 |
| --gradient_accumulation_steps 16 --tf32 0 | 112.65 | 17 | 2.81 |
| --gradient_accumulation_steps 16 --tf32 1 | 143.89 | 50 | 2.81 |
| --gradient_accumulation_steps 16 --tf32 0 --fp16 | 173.69 | 81 | 2.81 |
| --gradient_accumulation_steps 16 --tf32 0 --bf16 | 181.04 | 88 | 2.86 |
Let's filter out just one subset so that it's easier to compare the gradient accumulation differences alone, so re-running with just bf16 enabled (` --tf32 0 --bf16`):
| Variation | Train<br>samples<br>per<br>second | Diff<br>% | Train<br>loss |
|:---------------------------------|------------------------------------:|------------:|----------------:|
| --gradient_accumulation_steps 1 | 135.85 | 0 | 2.21 |
| --gradient_accumulation_steps 2 | 156.95 | 16 | 2.29 |
| --gradient_accumulation_steps 4 | 167.65 | 23 | 2.42 |
| --gradient_accumulation_steps 8 | 175.02 | 29 | 2.62 |
| --gradient_accumulation_steps 16 | 179.15 | 32 | 2.86 |
Conclusions:
- that's a significant speed up for even 4 steps
- notice that the loss gets much bigger with the higher accumulation steps - my benchmark is very short and with less steps to take when the batches are larger, the model simply doesn't have a chance to step down far enough. The same can be observed with just [normal batch size changes](https://github.com/huggingface/transformers/issues/14608#issuecomment-1004392537).
Non-zero lr warm up too plays a role here since it's a very short run.
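For readers unfamiliar with the mechanism, gradient accumulation boils down to something like this (schematic sketch, not the Trainer's actual loop; `model`, `dataloader`, `optimizer` and `accumulation_steps` are assumed to exist):
```python
optimizer.zero_grad()
for step, batch in enumerate(dataloader):
    loss = model(**batch).loss / accumulation_steps   # scale so gradients average over the effective batch
    loss.backward()                                    # gradients accumulate across micro-batches
    if (step + 1) % accumulation_steps == 0:
        optimizer.step()                               # one optimizer update per `accumulation_steps` micro-batches,
        optimizer.zero_grad()                          # hence fewer updates for a fixed number of samples
```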
```
*** Setup:
Datetime : 2022-01-03 14:53:02
Software:
transformers: 4.16.0.dev0
torch : 1.10.1
cuda : 11.3
python : 3.8.11
Hardware:
1 GPUs : NVIDIA GeForce RTX 3090, 23.70GB
*** The benchmark command line was:
CUDA_VISIBLE_DEVICES=0 python ./scripts/benchmark/trainer-benchmark.py \
--base-cmd \
' examples/pytorch/translation/run_translation.py --model_name_or_path t5-base \
--output_dir output_dir --do_train --label_smoothing 0.1 --logging_strategy no \
--save_strategy no --per_device_train_batch_size 16 --max_source_length 512 \
--max_target_length 512 --num_train_epochs 1 --overwrite_output_dir \
--source_lang en --target_lang ro --dataset_name wmt16 --dataset_config "ro-en" \
--source_prefix "translate English to Romanian: " --warmup_steps 50 \
--max_train_samples 10000 --dataloader_num_workers 2 ' \
--target-metric-key train_samples_per_second --repeat-times 1 --variations \
'--gradient_accumulation_steps 1|--gradient_accumulation_steps 2|--gradient_accumulation_steps 4|--gradient_accumulation_steps 8|--gradient_accumulation_steps 16' \
'--tf32 0|--tf32 1|--tf32 0 --fp16|--tf32 0 --bf16' --report-metric-keys \
train_loss --repeat-times 1
```
<|||||># gradient checkpointing
Let's choose `t5-base` model to test with as it's pretty large yet doesn't overflow like t5-large.
Let's benchmark enabling `--gradient_checkpointing`
| Variation | Train<br>samples<br>per<br>second | Diff<br>% | Train<br>loss |
|:---------------------------|------------------------------------:|------------:|----------------:|
| --gradient_checkpointing 0 | 135.82 | 24 | 2.21 |
| --gradient_checkpointing 1 | 109.24 | 0 | 2.21 |
Conclusions:
- as expected since gradient checkpointing recalculates forward activations it should be slower - we get a 24% slowdown here.
Let's look at memory:
| Variation | Train<br>samples<br>per<br>second | Diff<br>% | Train<br>loss | Train<br>mem<br>gpu<br>alloc<br>delta | Train<br>mem<br>gpu<br>peaked<br>delta |
|:---------------------------|------------------------------------:|------------:|----------------:|----------------------------------------:|-----------------------------------------:|
| --gradient_checkpointing 0 | 63.42 | 32 | 2.17 | 2684MB | 3340MB |
| --gradient_checkpointing 1 | 47.96 | 0 | 2.17 | 2676MB | 1245MB |
We can clearly see that peak GPU memory is ~2/3 less.
note: I had to halve the BS in the 2nd benchmark as I was getting OOM. Plus the memory metrics slow things down.
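Outside of the example scripts, the same switch can be flipped either via `TrainingArguments` or directly on an already-loaded model (sketch; `model` is assumed to exist):
```python
from transformers import TrainingArguments

args = TrainingArguments(output_dir="output_dir", gradient_checkpointing=True, bf16=True)

# or, equivalently, on the model itself
model.gradient_checkpointing_enable()
```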
```
*** The benchmark command lines were:
1.
CUDA_VISIBLE_DEVICES=0 python ./scripts/benchmark/trainer-benchmark.py \
--base-cmd \
' examples/pytorch/translation/run_translation.py --model_name_or_path t5-base \
--output_dir output_dir --do_train --label_smoothing 0.1 --logging_strategy no \
--save_strategy no --per_device_train_batch_size 16 --max_source_length 512 \
--max_target_length 512 --num_train_epochs 1 --overwrite_output_dir \
--source_lang en --target_lang ro --dataset_name wmt16 --dataset_config "ro-en" \
--source_prefix "translate English to Romanian: " --warmup_steps 50 \
--max_train_samples 10000 --dataloader_num_workers 2 --bf16' \
--target-metric-key train_samples_per_second --repeat-times 1 --variations \
'--gradient_checkpointing 0|--gradient_checkpointing 1' --report-metric-keys \
train_loss --repeat-times 1
2.
CUDA_VISIBLE_DEVICES=0 python ./scripts/benchmark/trainer-benchmark.py \
--base-cmd \
' examples/pytorch/translation/run_translation.py --model_name_or_path t5-base \
--output_dir output_dir --do_train --label_smoothing 0.1 --logging_strategy no \
--save_strategy no --per_device_train_batch_size 8 --max_source_length 512 \
--max_target_length 512 --num_train_epochs 1 --overwrite_output_dir \
--source_lang en --target_lang ro --dataset_name wmt16 --dataset_config "ro-en" \
--source_prefix "translate English to Romanian: " --warmup_steps 50 \
--max_train_samples 5000 --dataloader_num_workers 2 --bf16 --skip_memory_metrics 0' \
--target-metric-key train_samples_per_second --repeat-times 1 --variations \
'--gradient_checkpointing 0|--gradient_checkpointing 1' \
--report-metric-keys 'train_loss train_mem_gpu_alloc_delta train_mem_gpu_peaked_delta' \
--repeat-times 1
```
<|||||># batch size
| Variation | Train<br>samples<br>per<br>second | Diff<br>% | Train<br>loss |
|:---------------------------------|------------------------------------:|------------:|----------------:|
| --per_device_train_batch_size 1 | 10.04 | 0 | 1.90 |
| --per_device_train_batch_size 2 | 19.39 | 93 | 2.01 |
| --per_device_train_batch_size 4 | 38.66 | 285 | 2.09 |
| --per_device_train_batch_size 8 | 77.52 | 672 | 2.17 |
| --per_device_train_batch_size 16 | 144.12 | 1335 | 2.26 |
Conclusions:
- No surprise here, the speed here is directly proportional to the gpu capacity utilization. In this particular configuration BS=16 is the highest BS we can fit. So when we use BS=1 we greatly underutilize the GPU. The speed up is linear and almost directly proportional to the batch-size.
- as with [gradient accumulation steps](https://github.com/huggingface/transformers/issues/14608#issuecomment-1004392537) lm loss gets worse with the increase in the batch size because my benchmark is very short and with less steps to take when the batches are larger, the model simply doesn't have a chance to step down far enough.
```
*** Setup:
Datetime : 2022-01-03 17:10:28
Software:
transformers: 4.16.0.dev0
torch : 1.10.1
cuda : 11.3
python : 3.8.11
Hardware:
1 GPUs : NVIDIA GeForce RTX 3090, 23.70GB
*** The benchmark command line was:
CUDA_VISIBLE_DEVICES=0 python ./scripts/benchmark/trainer-benchmark.py \
--base-cmd \
' examples/pytorch/translation/run_translation.py --model_name_or_path t5-base \
--output_dir output_dir --do_train --label_smoothing 0.1 --logging_strategy no \
--save_strategy no --max_source_length 512 \
--max_target_length 512 --num_train_epochs 1 --overwrite_output_dir \
--source_lang en --target_lang ro --dataset_name wmt16 --dataset_config "ro-en" \
--source_prefix "translate English to Romanian: " --warmup_steps 50 \
--max_train_samples 5000 --dataloader_num_workers 2 --bf16' \
--target-metric-key train_samples_per_second --repeat-times 1 --variations \
'--per_device_train_batch_size 1|--per_device_train_batch_size 2|--per_device_train_batch_size 4|--per_device_train_batch_size 8|--per_device_train_batch_size 16' \
--report-metric-keys train_loss --repeat-times 1
```<|||||># optimizers
Let's do fp32 first:
| Variation | Train<br>samples<br>per<br>second | Diff<br>% | Train<br>loss |
|:------------------------|------------------------------------:|------------:|----------------:|
| --optim adamw_hf | 116.95 | 4 | 2.20 |
| --optim adamw_torch | 112.60 | 0 | 2.20 |
| --optim adafactor | 90.55 | -20 | 2.20 |
| --optim adamw_apex_fused | 126.38 | 12 | 2.20 |
Observations:
- apex's FusedAdam is the fastest.
fp16:
| Variation | Train<br>samples<br>per<br>second | Diff<br>% | Train<br>loss |
|:------------------------|------------------------------------:|------------:|----------------:|
| --optim adamw_hf | 132.49 | 4 | 2.20 |
| --optim adamw_torch | 126.84 | 0 | 2.20 |
| --optim adafactor | 101.91 | -20 | 2.20 |
| --optim adamw_apex_fused | 144.54 | 14 | 2.20 |
bf16:
| Variation | Train<br>samples<br>per<br>second | Diff<br>% | Train<br>loss |
|:------------------------|------------------------------------:|------------:|----------------:|
| --optim adamw_hf | 136.49 | 4 | 2.21 |
| --optim adamw_torch | 130.66 | 0 | 2.21 |
| --optim adafactor | 104.65 | -20 | 2.22 |
| --optim adamw_apex_fused | 148.51 | 14 | 2.21 |
Observations:
- The relative speed up is the same
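For reference, outside of the example scripts the optimizer can be selected the same way via `TrainingArguments` (sketch; the strings match the `--optim` flag values above):
```python
from transformers import TrainingArguments

args = TrainingArguments(
    output_dir="output_dir",
    optim="adamw_apex_fused",  # requires NVIDIA apex to be installed
    bf16=True,
)
```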
```
# fp32
CUDA_VISIBLE_DEVICES=0 python \
/hf/transformers-trainer-benchmark/scripts/benchmark/trainer-benchmark.py \
--base-cmd \
' \
examples/pytorch/translation/run_translation.py --model_name_or_path t5-base --output_dir output_dir \
--do_train --label_smoothing 0.1 --logging_strategy no --save_strategy no --per_device_train_batch_size 16 \
--max_source_length 512 --max_target_length 512 --num_train_epochs 1 --overwrite_output_dir \
--source_lang en --target_lang ro --dataset_name wmt16 --dataset_config "ro-en" \
--source_prefix "translate English to Romanian: " --warmup_steps 50 \
--max_train_samples 10000 --dataloader_num_workers 2 \
' \
--target-metric-key train_samples_per_second --repeat-times 1 --variations \
'--optim adamw_hf|--optim adamw_torch|--optim adafactor|--optim adamw_apex_fused' \
--report-metric-keys train_loss --base-variation '--optim adamw_torch'
# fp16 - just add --fp16 to base-cmd
# bf16 - just add --bf16 to base-cmd
```
<|||||># combining winning strategies
Now let's combine the winning strategies from each individual benchmark above and compare with the baseline:
| Variation | Train<br>samples<br>per<br>second | Diff<br>% | Train<br>loss |
|:----------------------------------------------------------------------|------------------------------------:|------------:|----------------:|
| --optim adamw_torch --gradient_accumulation_steps 1 --tf32 0 | 93.40 | 0 | 2.20 |
| --optim adamw_apex_fused --gradient_accumulation_steps 8 --tf32 --bf16 | 178.90 | 92 | 2.62 |
**Getting an almost 2x improvement in speed!**
```
CUDA_VISIBLE_DEVICES=0 python \
/hf/transformers-trainer-benchmark/scripts/benchmark/trainer-benchmark.py \
--base-cmd \
' \
examples/pytorch/translation/run_translation.py --model_name_or_path t5-base --output_dir output_dir \
--do_train --label_smoothing 0.1 --logging_strategy no --save_strategy no --per_device_train_batch_size 16 \
--max_source_length 512 --max_target_length 512 --num_train_epochs 1 --overwrite_output_dir \
--source_lang en --target_lang ro --dataset_name wmt16 --dataset_config "ro-en" \
--source_prefix "translate English to Romanian: " --warmup_steps 50 \
--max_train_samples 10000 --dataloader_num_workers 2 \
' \
--target-metric-key train_samples_per_second --repeat-times 1 --variations \
'--optim adamw_torch --gradient_accumulation_steps 1 --tf32 0|--optim adamw_apex_fused --gradient_accumulation_steps 8 --tf32 --bf16' \
--report-metric-keys train_loss
``` |
transformers | 14,607 | closed | [CI] move env print to util, add pt, nccl versions | This PR:
- moves multiple python one liners to print pt env into a script - this is both faster and easier to maintain
- adds torch version
- adds NCCL version (could remove if it's useless info, i just thought it might be useful in certain situations)
@LysandreJik | 12-03-2021 01:26:41 | 12-03-2021 01:26:41 | |
transformers | 14,606 | closed | [trainer] add tf32-mode control | This PR adds tr32-mode control support for HF Trainer for Ampere cards. RFC: https://github.com/huggingface/transformers/issues/14450
pytorch had this mode on by default since pt-1.7, but are discussing to turn it off in the coming new release. https://github.com/pytorch/pytorch/issues/67384
Here is the proposed logic:
1. By default, HF Trainer will set it to enabled. This is marked as experimental in case we discover down the road that this is not a safe default.
2. and `--tf32 0` will disable it.
3. If the setup uses the wrong GPU or too low a torch version, it will silently do nothing, as it's irrelevant (a rough sketch of this logic is below).
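Hypothetical sketch of the control flow (not the PR's exact code):
```python
import torch

def apply_tf32_flag(tf32: bool) -> None:
    # only relevant on Ampere+ GPUs (compute capability >= 8) with torch >= 1.7
    if torch.cuda.is_available() and torch.cuda.get_device_capability(0)[0] >= 8:
        torch.backends.cuda.matmul.allow_tf32 = tf32
    # otherwise silently do nothing, as the setting is irrelevant
```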
The PR adds:
- `is_torch_tf32_available` and `require_torch_tf32` helper utils
- adds basic test
Fixes: https://github.com/huggingface/transformers/issues/14450
@sgugger, @LysandreJik
| 12-03-2021 00:19:05 | 12-03-2021 00:19:05 | The benchmarks are terrible: https://github.com/huggingface/transformers/issues/14608
going to ask for advice from the pytorch experts |
transformers | 14,605 | closed | [Flax] Add Flax implementation of `RoFormer` | # π Feature request
Add Flax implementation of `RoFormer` models.
## Motivation
Improve the level of Flax/Jax support.
## Your contribution
I'll be glad to work on this :] | 12-02-2021 20:02:33 | 12-02-2021 20:02:33 | This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored.<|||||>active #15005 |
transformers | 14,604 | closed | HF space specific issue: can't import `AutoModelForSeq2SeqLM` from `transformers==4.6.1` with `torch` installed | For some reason I can't import `AutoModelForSeq2SeqLM` when running on hf space AND having installed `torch==1.10.0`.
## Environment info
- `transformers` version: 4.6.1
- Platform: HF Space β https://huggingface.co/spaces/aseifert/AutoModelForSeq2SeqLM-not-working
- Python version: 3.8
- PyTorch version (GPU?): 1.10.0
- Using GPU in script?: no
- Using distributed or parallel set-up in script?: no
## To reproduce
Here is a minimal example illustrating the problem, with the only difference being the installation of `torch` through `requirements.txt`:
Works: https://huggingface.co/spaces/aseifert/AutoModelForSeq2SeqLM-working
Doesn't work: https://huggingface.co/spaces/aseifert/AutoModelForSeq2SeqLM-not-working
As far as I can tell this only happens on HF Space. I can't reproduce the problem on Google Colab: https://colab.research.google.com/drive/1SfNk-YjwqxYC_92y-652NoYrCi-vBEHd?usp=sharing
Not sure what's happening here, hopefully someone can help me figure this out!
Thanks,
Alex
| 12-02-2021 18:13:58 | 12-02-2021 18:13:58 | cc @cbensimon <|||||>Thank you for flagging this issue @aseifert !
~I was able to reproduce this locally in `conda` using:~
```
conda create -n buggy-space python=3.8 && conda activate bugg-space
pip install transformers==4.6.1 torch==1.10.0
```
~followed by running the following inside the Python repl:~
```
from transformers import AutoModelSeq2SeqLM
```
~The _really_ strange aspect is that the following runs without error:~
```
python -c "from transformers import AutoModelSeq2SeqLM"
```
~I was also able to reproduce the error up to `transformers` v4.8, after which the `AutoModelSeeq2SeqLM` class was replaced separate classes for encoder / decoder LM.~
~I'm not entirely sure what is causing the problem here, but don't think it's specifically connected to Spaces~
Edit: the above is not true because of a typo in the import. Using `AutoModelForSeq2SeqLM` works fine locally, so the problem does indeed appear to reside on the Spaces side<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored. |
transformers | 14,603 | closed | Recover when the `sha` of the local file is not correct. | ## Environment info
<!-- You can run the command `transformers-cli env` and copy-and-paste its output below.
Don't forget to fill out the missing fields in that output! -->
- `transformers` version: `master` (probably any version)
- Platform: ubuntu
- Python version: 3.9
- PyTorch version (GPU?): 1.10
- Tensorflow version (GPU?): X
- Using GPU in script?:no
- Using distributed or parallel set-up in script?:no
### Who can help
@LysandreJik
<!-- Your issue will be replied to more quickly if you can figure out the right person to tag with @
If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**.
Please tag fewer than 3 people.
Models:
- ALBERT, BERT, XLM, DeBERTa, DeBERTa-v2, ELECTRA, MobileBert, SqueezeBert: @LysandreJik
- encoder-decoder models (For example, BlenderBot, BART, Marian, Pegasus, T5, ByT5): @patrickvonplaten, @patil-suraj
- Longformer, Reformer, TransfoXL, XLNet, FNet: @patrickvonplaten
- FSMT: @stas00
- Funnel: @sgugger
- GPT-2, GPT: @patrickvonplaten, @LysandreJik
- RAG, DPR: @patrickvonplaten, @lhoestq
- TensorFlow: @Rocketknight1
- JAX/Flax: @patil-suraj @patrickvonplaten
- TAPAS, LayoutLM, LayoutLMv2, LUKE, ViT, BEiT, DEiT, DETR, CANINE: @NielsRogge
- GPT-Neo, GPT-J, CLIP: @patil-suraj
- Wav2Vec2, HuBERT, SpeechEncoderDecoder: @patrickvonplaten, @anton-l
If the model isn't in the list, ping @LysandreJik who will redirect you to the correct contributor.
Library:
- Benchmarks: @patrickvonplaten
- Deepspeed: @stas00
- Ray/raytune: @richardliaw, @amogkam
- Text generation: @patrickvonplaten
- Tokenizers: @LysandreJik
- Trainer: @sgugger
- Pipelines: @Narsil
- Speech: @patrickvonplaten, @anton-l
- Vision: @NielsRogge, @sgugger
Documentation: @sgugger
Model hub:
- for issues with a model, report at https://discuss.huggingface.co/ and tag the model's creator.
HF projects:
- datasets: [different repo](https://github.com/huggingface/datasets)
- rust tokenizers: [different repo](https://github.com/huggingface/tokenizers)
Examples:
- maintained examples (not research project or legacy): @sgugger, @patil-suraj
For research projects, please ping the contributor directly. For example, on the following projects:
- research_projects/bert-loses-patience: @JetRunner
- research_projects/distillation: @VictorSanh
-->
## Information
Model I am using (Bert, XLNet ...):
The problem arises when using:
* [x] the official example scripts: (give details below)
* [ ] my own modified scripts: (give details below)
The tasks I am working on is:
* [ ] an official GLUE/SQUaD task: (give the name)
* [x] my own task or dataset: (give details below)
## To reproduce
Steps to reproduce the behavior:
1. If a model gets incorrectly downloaded, or gets corrupted later, you will end up in an incorrect state where `from_pretrained` won't work anymore.
Script to create incorrect state artificially:
```python
import os
from transformers import AutoModel
from transformers.file_utils import cached_path, hf_bucket_url
model_id = "Narsil/small2"
model = AutoModel.from_pretrained(model_id)
path = cached_path(hf_bucket_url(model_id, "pytorch_model.bin"))
# Simulate a corrupted cache entry by replacing the cached weights with an empty file
os.remove(path)
with open(path, "w") as f:
    f.write("")
model = AutoModel.from_pretrained(model_id) # Triggers an unrecoverable error
```
<!-- If you have code snippets, error messages, stack traces please provide them here as well.
Important! Use code tags to correctly format your code. See https://help.github.com/en/github/writing-on-github/creating-and-highlighting-code-blocks#syntax-highlighting
Do not use screenshots, as they are hard to read and (more importantly) don't allow others to copy-and-paste your code.-->
## Expected behavior
Since we still have the `.json` metadata file that contains the expected `etag`, we can check the local file against it and trigger a redownload in such situations instead.
Roughly, I expect to get a warning that something was wrong with my local file compared to what was expected, followed by an automatic re-download that then works (if there's nothing obviously wrong on the remote side).
(I actually expect the sha check to be run regularly too, even if it's expensive: if someone modifies my weights under me and they are still loadable, I can end up with big mistakes, no? I can see users doing `cp mymodel/pytorch_model.bin ~/.cache/..../232323.232aedc`.)
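Roughly, the kind of check I have in mind looks like this (a sketch on top of the snippet above; `local_etag` is a made-up helper that would recompute the local file's hash):
```python
import json

# Compare the cached file against the etag recorded in its companion ".json" metadata file
with open(path + ".json") as meta_file:
    expected_etag = json.load(meta_file)["etag"]

if local_etag(path) != expected_etag:
    # Mismatch: warn and force a fresh download instead of failing
    model = AutoModel.from_pretrained(model_id, force_download=True)
```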
<!-- A clear and concise description of what you would expect to happen. -->
| 12-02-2021 14:42:03 | 12-02-2021 14:42:03 | That's interesting. We have the `force_download` option which you can use to force a redownload in case of a corrupted file, but your approach doesn't have any drawbacks that I am aware of.<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored. |
transformers | 14,602 | closed | Can you add a fast tokenizer for the ByT5 model? | null | 12-02-2021 13:47:41 | 12-02-2021 13:47:41 | Hi @vitalyshalumov ,
Do you have any suggestions as to why we should do that ? `ByT5` operates on bytes directly, so no real tokenization is needed there, offloading work to rust might actually be counterproductive there.<|||||>>
There are several features of the Fast tokenizers that are utilised in the Q-A task and are missing from the non-fast implementation.
<|||||>You mean `offsets` ?
Would have to check, but keeping those with `ByT5` is probably doable.
Any other features you thought of ?<|||||>> You mean `offsets` ?
>
> Would have to check, but keeping those with `ByT5` is probably doable.
>
> Any other features you thought of ?
Features such as:
- `truncation="only_second"`
- `return_overflowing_tokens=True`
- `return_offsets_mapping=True`
- `stride=doc_stride`
- a `sequence_ids()` method
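Concretely, the preprocessing call I would like to be able to make looks roughly like this (mirroring the notebook; `examples` would be a batch from a SQuAD-style dataset, and the length/stride values are just the notebook defaults):
```python
tokenized = tokenizer(
    examples["question"],
    examples["context"],
    truncation="only_second",
    max_length=384,
    stride=128,
    return_overflowing_tokens=True,
    return_offsets_mapping=True,
    padding="max_length",
)
sequence_ids = tokenized.sequence_ids(0)  # currently only available on "fast" tokenizers
```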
Basically I want to be able to run this example
https://colab.research.google.com/github/huggingface/notebooks/blob/master/examples/question_answering.ipynb
Thank you in advance!
<|||||>I see, there's a workaround because not all tokenizers are fast and we can still run question-answering on those (you just lose offsets):
Take a look at `transformers.data.processors.squad.squad_convert_examples_to_features` function:
https://huggingface.co/docs/transformers/master/main_classes/processors#transformers.data.processors.squad.SquadProcessor.get_train_examples
This should enable you to use `ByT5` tokenizer already. (Using the `question-answering` pipeline could work out of the box too ! )<|||||>> I see, there's a workaround because not all tokenizers are fast and we can still run question-answering on those (you just loose offsets):
>
> Take a look at `transformers.data.processors.squad.squad_convert_examples_to_features` function:
>
> https://huggingface.co/docs/transformers/master/main_classes/processors#transformers.data.processors.squad.SquadProcessor.get_train_examples
>
> This should enable you to use `ByT5` tokenizer already. (Using the `question-answering` pipeline could work out of the box too ! )
Thank you.
Is there still a chance you will introduce Fast for Byt5 to get the "offsets" functionality?
<|||||>It's definitely not off the table, but unlikely.
As I mentioned, there's an overhead to go from Python to Rust (`tokenizers`), and in the case of pure bytes that overhead is likely to exceed the cost of the actual processing done by the tokenizer in pure Python (defeating the purpose of the Rust implementation, which is to go fast). Of course, tests should be run to prove this, but since there's at least a UTF-8 reprocessing step it seems really probable.
That being said, what you want is to get the `offset_mapping` and that might be solvable directly on the python `Tokenizer`. Since there's no real treatment of the strings aside from special tokens, it should really be doable. If you feel like doing a PR that would be welcome.<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored. |
transformers | 14,601 | closed | Error loading Speech2Text2Processor.from_pretrained("facebook/wav2vec2-xls-r-2b-21-to-en") | - `transformers` version: 4.12.5
- Platform: Linux-5.4.0-90-generic-x86_64-with-Ubuntu-18.04-bionic
- Python version: 3.6.9
- PyTorch version (GPU?): 1.9.0+cu102 (True)
- Tensorflow version (GPU?): 2.6.0 (False)
- Flax version (CPU?/GPU?/TPU?): not installed (NA)
- Jax version: not installed
- JaxLib version: not installed
- Using GPU in script?: <fill in>
- Using distributed or parallel set-up in script?: <fill in>
The tokenizer class you load from this checkpoint is not the same type as the class this function is called from. It may result in unexpected tokenization.
The tokenizer class you load from this checkpoint is 'MBart50Tokenizer'.
The class this function is called from is 'Speech2Text2Tokenizer'.
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "/home/shivam/.local/lib/python3.6/site-packages/transformers/models/speech_to_text_2/processing_speech_to_text_2.py", line 106, in from_pretrained
tokenizer = Speech2Text2Tokenizer.from_pretrained(pretrained_model_name_or_path, **kwargs)
File "/home/shivam/.local/lib/python3.6/site-packages/transformers/tokenization_utils_base.py", line 1750, in from_pretrained
**kwargs,
File "/home/shivam/.local/lib/python3.6/site-packages/transformers/tokenization_utils_base.py", line 1872, in _from_pretrained
tokenizer = cls(*init_inputs, **init_kwargs)
File "/home/shivam/.local/lib/python3.6/site-packages/transformers/models/speech_to_text_2/tokenization_speech_to_text_2.py", line 85, in __init__
with open(vocab_file, encoding="utf-8") as vocab_handle:
TypeError: expected str, bytes or os.PathLike object, not NoneType
| 12-02-2021 12:28:31 | 12-02-2021 12:28:31 | This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored.<|||||>Hey @ShRajSh,
Sorry we only recently fixed this error. The model should actually be loaded as follows:
```python
Wav2Vec2Processor.from_pretrained("facebook/wav2vec2-xls-r-2b-21-to-en")
```<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored.<|||||>Think we can close this one now |
transformers | 14,600 | closed | change tf.math.divide with int(/) in distilbert model |
# What does this PR do?
This PR replaces `tf.math.divide` with a plain `int( / )` division for the variable `dim_per_head`. Originally, `dim_per_head` was returned as a `float64` tensor and was included in the TF graph. This is not supported on some machines, and there is no solid reason to include it in the graph. This PR therefore keeps it out of the TF graph, allowing for better compatibility with the user's device.
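For illustration, the change is essentially the following (a simplified sketch of the shape computation, not the exact file contents):
```python
import tensorflow as tf

dim, n_heads = 768, 12

# Before: tf.math.divide on Python ints returns a float64 tensor that ends up in the TF graph
dim_per_head = tf.math.divide(dim, n_heads)

# After: a plain Python int, kept out of the graph entirely
dim_per_head = int(dim / n_heads)
```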
<!--
Congratulations! You've made it this far! You're not quite done yet though.
Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution.
Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change.
Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost.
-->
<!-- Remove if not applicable -->
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/master/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/master/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
@Rocketknight1
<!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @
If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**.
Please tag fewer than 3 people.
Models:
- albert, bert, xlm: @LysandreJik
- blenderbot, bart, marian, pegasus, encoderdecoder, t5: @patrickvonplaten, @patil-suraj
- longformer, reformer, transfoxl, xlnet: @patrickvonplaten
- fsmt: @stas00
- funnel: @sgugger
- gpt2: @patrickvonplaten, @LysandreJik
- rag: @patrickvonplaten, @lhoestq
- tensorflow: @LysandreJik
Library:
- benchmarks: @patrickvonplaten
- deepspeed: @stas00
- ray/raytune: @richardliaw, @amogkam
- text generation: @patrickvonplaten
- tokenizers: @n1t0, @LysandreJik
- trainer: @sgugger
- pipelines: @LysandreJik
Documentation: @sgugger
HF projects:
- datasets: [different repo](https://github.com/huggingface/datasets)
- rust tokenizers: [different repo](https://github.com/huggingface/tokenizers)
Examples:
- maintained examples (not research project or legacy): @sgugger, @patil-suraj
- research_projects/bert-loses-patience: @JetRunner
- research_projects/distillation: @VictorSanh
-->
| 12-02-2021 11:29:59 | 12-02-2021 11:29:59 | This is a simple change with a clear rationale, and all tests pass, so I'm happy to merge it. Thank you for this PR! |
transformers | 14,599 | closed | Add Flax example tests | # What does this PR do?
This PR adds tests for flax examples.
A test for the image-classification example is not added, since that script uses a manual dataset with `torchvision`. The test will be added once the script is refactored to use `datasets`.
| 12-02-2021 10:46:36 | 12-02-2021 10:46:36 | |
transformers | 14,598 | closed | Python 3.6 -> Python 3.7 for TF runs | Fix the last CircleCI job that was not updated | 12-02-2021 09:09:11 | 12-02-2021 09:09:11 | Thanks for fixing! |
transformers | 14,597 | closed | Adds a git pull instruction to the documentation builder | If two consecutive commits happen in rapid succession, then the clone of the doc-builder repository might not be updated by the time the pip installations are over. This greatly reduces the risk this happens by putting a `git pull` instruction after the environment setup. | 12-02-2021 08:27:00 | 12-02-2021 08:27:00 | you could do it even later, i.e. 1/ building the doc to a temp folder, 2/ pulling, 3/ mv'ing the doc (instant), 4/ pushing |
transformers | 14,596 | closed | Hubert-base Model with new tokenizer is not converging | ## Environment info
- transformers version: 4.12.2
- Platform: Mac
- Python version: 3.7
- PyTorch version (GPU?): 1.9
- Tensorflow version (GPU?): No
- Using GPU in script?: No
- Using distributed or parallel set-up in script?: No
I just run this simple code to load the pretrained Hubert base model:
```
import string

from transformers import (Wav2Vec2CTCTokenizer, Wav2Vec2FeatureExtractor,
                          Wav2Vec2Processor, HubertForCTC)

# Build the character vocab: special tokens first, then a-z
vocab_dict = {"[PAD]": 0, "[UNK]": 1, "|": 2}
for letter in string.ascii_lowercase:
    vocab_dict[letter] = len(vocab_dict)
print("Vocab size:" + str(len(vocab_dict)))
import json
with open('vocab.json', 'w') as vocab_file:
json.dump(vocab_dict, vocab_file)
tokenizer = Wav2Vec2CTCTokenizer("./vocab.json", unk_token="[UNK]", pad_token="[PAD]", word_delimiter_token="|")
feature_extractor = Wav2Vec2FeatureExtractor(feature_size=1, sampling_rate=16000, padding_value=0.0, do_normalize=True, return_attention_mask=False)
PROCESSOR = Wav2Vec2Processor(feature_extractor=feature_extractor, tokenizer=tokenizer)
model = HubertForCTC.from_pretrained('facebook/hubert-base-ls960', ctc_loss_reduction="mean",pad_token_id=PROCESSOR.tokenizer.pad_token_id,vocab_size=len(PROCESSOR.tokenizer))
model.freeze_feature_extractor()
#model.gradient_checkpointing_enable()
```
This model is not able to converge. I have been training for a day, but the loss is not decreasing and the prediction is always '0'.
It must be some silly mistake, as the same code runs if I use `hubert-large-ls960-ft`, where I don't touch the tokenizer. I suspect something is wrong with my tokenizer initialization, as the rest of the code works for the fine-tuned model where we have a pretrained tokenizer.
@patrickvonplaten, @anton-l: Do you have any idea
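For reference, this is roughly how I sanity-check the tokenizer (an encode/decode round trip; I lower-case the labels to match the vocab):
```python
text = "adam andreasson"
ids = PROCESSOR.tokenizer(text).input_ids
print(ids)
print(PROCESSOR.tokenizer.decode(ids))
```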
| 12-02-2021 07:36:55 | 12-02-2021 07:36:55 | I have checked the labels which I feed in and they seem to be OK
```
ADAM ANDREASSON
[3, 6, 3, 15, 2, 3, 16, 6, 20, 7, 3, 21, 21, 17, 16]
```<|||||>After 20,000 iterations, it only learns char '3', which must be quite unusual or wrong configuration somewhere.
My training and test data is same for simplicity.
```
***** Running Evaluation *****
Num examples = 2
Batch size = 16
[[3 3 3 3 3 3 3 3 3 3 3 3 3 3 3 3 3 3 3 3 3 3 3 3 3 3 3 3 3 3 3 3 3 3 3 3 | 0/1 [00:00<?, ?it/s]
3 3 3 3 3 3 3 3 3 3 3 3 3 3 3 3 3 3 3 3 3 3 3 3 3 3 3 3 3 3 3 3 3 3 3 3
3 3 3 3 3 3 3 3 3 3 3 3 3 3 3 3 3 3 3 3 3 3 3 3 3 3 3 3 3 3 3 3 3 3 3 3
3 3 3 3 3 3 3 3 3 3 3 3 3 3 3 3 3 3 3 3 3 3 3 3 3 3 3 3 3 3 3 3 3 3 3 3
3 3 3 3 3 3 3 3 3 3 3 3 3 3 3 3 3 3 3 3 3 3 3 3 3 3 3 3 3 3 3 3 3 3 3]
[3 3 3 3 3 3 3 3 3 3 3 3 3 3 3 3 3 3 3 3 3 3 3 3 3 3 3 3 3 3 3 3 3 3 3 3
3 3 3 3 3 3 3 3 3 3 3 3 3 3 3 3 3 3 3 3 3 3 3 3 3 3 3 3 3 3 3 3 3 3 3 3
3 3 3 3 3 3 3 3 3 3 3 3 3 3 3 3 3 3 3 3 3 3 3 3 3 3 3 3 3 3 3 3 3 3 3 3
3 3 3 3 3 3 3 3 3 3 3 3 3 3 3 3 3 3 3 3 3 3 3 3 3 3 3 3 3 3 3 3 3 3 3 3
3 3 3 3 3 3 3 3 3 3 3 3 3 3 3 3 3 3 3 3 3 3 3 3 3 3 3 3 3 3 3 3 3 3 3]]
['A', 'A']
['ADAM ANDREASSON', 'ADAM ANDREASSON']
1.0
{'eval_loss': 4.131505489349365, 'eval_score': 1.0, 'eval_runtime': 0.442, 'eval_samples_per_second': 4.524, 'eval_steps_per_second': 2.262, 'epoch': 24400.0}
24%|ββββββββββββββββββββββββββββββββ | 24400/100000 [7:02:28<21:13:01, 1.01s/it]
Saving model checkpoint to ./output/checkpoint-24400
```<|||||>Hey @harrypotter90 - could you upload your model somewhere on the hub so that I can compare the model's config and the model's tokenizer file?
I think that the tokenizer config is wrong<|||||>Thanks @patrickvonplaten : I have uploaded it here : https://huggingface.co/HarryPotter09/hubert-base-tokenizer<|||||>I have compared hubert-base new tokenizer with the hubert-large pretrained tokenizer (which is working) and feature extractors, couldn't find anything wrong.<|||||>More information, something wrong with the `hubert-base` model, as soon as I just change the model to `"facebook/hubert-large-ll60k"` same code works.
Thoughts ?<|||||>@harrypotter90 - I would need your model in the same repo id as well to be able to compare the two. Could you please upload your model as well? (`pytorch_model.bin` + `config.json`)<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored.<|||||>Hi there, I'm having a similar problem that the hubert-base model training does not work for ASR, but the large model works. @harrypotter90 did you solve the problem in the end? Also, I notice that for base model extractor the `return_attention_mask` is set to False (it is true for large and xlarge models). Is there a reason for that?
(Setting that to true still does not solve the problem though.) |
transformers | 14,595 | closed | GPT-2 Model wrapped in DataParallel hangs immediately | ## Environment info
- `transformers` version: 4.12.5
- Platform: Linux-3.10.0-1160.24.1.el7.x86_64-x86_64-with-glibc2.17
- Python version: 3.9.5
- PyTorch version (GPU?): 1.10.0 (True)
- Tensorflow version (GPU?): not installed (NA)
- Flax version (CPU?/GPU?/TPU?): not installed (NA)
- Jax version: not installed
- JaxLib version: not installed
- Using GPU in script?: True
- Using distributed or parallel set-up in script?: True
### Who can help
@sgugger
Models:
- GPT-2, GPT: @patrickvonplaten, @LysandreJik
## Information
The process hangs indefinitely and cannot be killed. Afterwards, other processes that try to interact with the GPUs (e.g. `nvidia-smi`) become stuck/uninterruptible as well. The node then requires a restart. The problem does not occur when the DataParallel wrapper is removed from the script below.
GPUs: 4 x Nvidia Quadro RTX 5000
CUDA 11.3 (also tested with 10.2)
Model I am using: GPT2-medium
The problem arises when using:
* [ ] the official example scripts: (give details below)
* [X] my own modified scripts: (give details below)
The tasks I am working on is:
* [ ] an official GLUE/SQUaD task: (give the name)
* [X] my own task or dataset: (give details below)
Py-Spy Output:
```Python
Thread 10623 (active): "MainThread"
adam (torch/optim/_functional.py:87)
step (torch/optim/adam.py:133)
decorate_context (torch/autograd/grad_mode.py:28)
wrapper (torch/optim/optimizer.py:88)
main (minimal_gpu.py:38)
<module> (minimal_gpu.py:43)
Thread 10689 (idle)
Thread 10688 (idle)
Thread 10690 (idle)
Thread 10691 (idle)
```
## To reproduce
Steps to reproduce the behavior:
run
```python
import torch
import transformers
device = 'cuda' if torch.cuda.is_available() else 'cpu'
def main():
''' python minimal_gpu.py'''
    # load tokenizer and model
tokenizer = transformers.GPT2Tokenizer.from_pretrained("gpt2-medium")
model = transformers.GPT2LMHeadModel.from_pretrained("gpt2-medium")
model = torch.nn.DataParallel(model)
model = model.to(device)
# load dataset
dataset = tokenizer(["This is a test sentence",
"And so is this one",
"One two three four banana"], return_tensors="pt").to(device)
dataset.update({"labels": dataset["input_ids"]})
# set up optimizer
optimizer = torch.optim.Adam(params=model.parameters(), lr=0.00001)
# forward pass
outputs = model(**dataset) # <----- gets stuck here
loss = outputs[0]
# backward pass and update weights
optimizer.zero_grad()
loss.sum().backward()
optimizer.step()
if __name__ == '__main__':
main()
```
## Expected behavior
Model gets trained on dummy data.
| 12-01-2021 23:26:58 | 12-01-2021 23:26:58 | The script executes without any problem on my side (two Titans, PyTorch 1.10 and CUDA 10.2), so there is nothing wrong with the model per se. How are you launching the script?<|||||>I was afraid that would be the case and I'm facing a driver issue or something. I am launching simply with `python script.py`, where script.py contains the code above. Same thing happens when I run this from an interactive shell.
How would I debug this? This has cost me quite some time already, this is really becoming an issue.<|||||>Did you try `DistributedDataParallel`? It might work and avoid you the debugging.
Otherwise checking you can do a simple gather of tensors might be a nice first step (but that will require launching the script in distributed mode AFAIK).<|||||>So, running it on a different server worked fine. There must be something wrong at a different level, maybe at some point I can come back to it and figure it out.
Anyway, with DataParallel I can't use AMP anymore, which reduces the sample size we can run by about 20 percent. I saw that it should work if the forward method of the model is wrapped in autocast, but how can I do that with a pretrained GPT-2 model?
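For example, would a thin wrapper along these lines be the right approach? (Untested sketch; the wrapper class is something I made up.)
```python
import torch
import transformers

class AutocastWrapper(torch.nn.Module):
    # Applies autocast inside forward, so mixed precision runs in each DataParallel replica's thread
    def __init__(self, model):
        super().__init__()
        self.model = model

    def forward(self, *args, **kwargs):
        with torch.cuda.amp.autocast():
            return self.model(*args, **kwargs)

model = AutocastWrapper(transformers.GPT2LMHeadModel.from_pretrained("gpt2-medium"))
model = torch.nn.DataParallel(model).to("cuda")
```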
I tried running DDP as well. It seems to work on the server where DP doesn't work, but not without problems. The biggest one is that utilization is so low that it is slower than using a single GPU (I don't even know how to do profiling with multiprocessing and multiple GPUs). The other one is that it crashes during evaluation, but only on the last evaluation pass (evaluation is done every half epoch). Except if I reduce the size of the train dataset, then it crashes during the first evaluation pass already. It's really odd. This happens with gloo as the backend - with nccl it hangs forever at initialization. I'll open a new ticket about this at some point, if I can come up with a minimal version that reproduces the problem.<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored. |
transformers | 14,594 | closed | Rename toctree.yml -> _toctree.yml | # What does this PR do?
As suggested by @Pierrci
> it was easier to distinguish "special" files in the build output before IMO
<!--
Congratulations! You've made it this far! You're not quite done yet though.
Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution.
Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change.
Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost.
-->
<!-- Remove if not applicable -->
Fixes # (issue)
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/master/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/master/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
<!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @
If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**.
Please tag fewer than 3 people.
Models:
- albert, bert, xlm: @LysandreJik
- blenderbot, bart, marian, pegasus, encoderdecoder, t5: @patrickvonplaten, @patil-suraj
- longformer, reformer, transfoxl, xlnet: @patrickvonplaten
- fsmt: @stas00
- funnel: @sgugger
- gpt2: @patrickvonplaten, @LysandreJik
- rag: @patrickvonplaten, @lhoestq
- tensorflow: @LysandreJik
Library:
- benchmarks: @patrickvonplaten
- deepspeed: @stas00
- ray/raytune: @richardliaw, @amogkam
- text generation: @patrickvonplaten
- tokenizers: @n1t0, @LysandreJik
- trainer: @sgugger
- pipelines: @LysandreJik
Documentation: @sgugger
HF projects:
- datasets: [different repo](https://github.com/huggingface/datasets)
- rust tokenizers: [different repo](https://github.com/huggingface/tokenizers)
Examples:
- maintained examples (not research project or legacy): @sgugger, @patil-suraj
- research_projects/bert-loses-patience: @JetRunner
- research_projects/distillation: @VictorSanh
-->
| 12-01-2021 23:18:50 | 12-01-2021 23:18:50 | |
transformers | 14,593 | closed | Update doc img links | # What does this PR do?
Companion PR for https://github.com/huggingface/doc-builder/pull/34
TLDR: all doc img links (both rst & md) should be in format `/imgs/xyz.png`
<!--
Congratulations! You've made it this far! You're not quite done yet though.
Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution.
Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change.
Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost.
-->
<!-- Remove if not applicable -->
Fixes # (issue)
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/master/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/master/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
<!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @
If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**.
Please tag fewer than 3 people.
Models:
- albert, bert, xlm: @LysandreJik
- blenderbot, bart, marian, pegasus, encoderdecoder, t5: @patrickvonplaten, @patil-suraj
- longformer, reformer, transfoxl, xlnet: @patrickvonplaten
- fsmt: @stas00
- funnel: @sgugger
- gpt2: @patrickvonplaten, @LysandreJik
- rag: @patrickvonplaten, @lhoestq
- tensorflow: @LysandreJik
Library:
- benchmarks: @patrickvonplaten
- deepspeed: @stas00
- ray/raytune: @richardliaw, @amogkam
- text generation: @patrickvonplaten
- tokenizers: @n1t0, @LysandreJik
- trainer: @sgugger
- pipelines: @LysandreJik
Documentation: @sgugger
HF projects:
- datasets: [different repo](https://github.com/huggingface/datasets)
- rust tokenizers: [different repo](https://github.com/huggingface/tokenizers)
Examples:
- maintained examples (not research project or legacy): @sgugger, @patil-suraj
- research_projects/bert-loses-patience: @JetRunner
- research_projects/distillation: @VictorSanh
-->
| 12-01-2021 22:37:13 | 12-01-2021 22:37:13 | |
transformers | 14,592 | closed | Version 4.12.5 doesn't work on sagemaker | ## Environment info
- `transformers` version: 4.12.5
- Platform: sagemaker
- Python version: 3.6
- PyTorch version (GPU?): 1.8.1 GPU
- Using GPU in script?: yes, using sagemaker instance
- Using distributed or parallel set-up in script?: no
## Information
I have code that I am using to train on sagemaker. Previously this code has worked just fine but today I initiated a new training run with new data and no changes to the code. The training run on sagemaker fails when it hits the trainer.train() using transformers.Trainer. Code below:
```
logging.info(f"Loading Tokenizer")
tokenizer = T5Tokenizer.from_pretrained(args.model_base)
# Generate tokenized dataset
logging.info(f"Tokenizing dataset")
train_dataset = ParaphraseDataset(
tokenizer=tokenizer,
file_path=training_file,
)
eval_dataset = ParaphraseDataset(
tokenizer=tokenizer,
file_path=testing_file,
)
# Initialize model
logging.info(f"Initializing model from {args.model_base}.")
model = T5ForConditionalGeneration.from_pretrained(args.model_base)
# Training arguments
training_args = TrainingArguments(
**_get_training_args(args),
report_to="wandb"
)
trainer = Trainer(
model=model,
args=training_args,
train_dataset=train_dataset,
eval_dataset=eval_dataset,
data_collator=DataCollator(tokenizer),
callbacks=[EarlyStoppingCallback(early_stopping_patience=3, early_stopping_threshold=0.01)],
)
logging.info(f"Beginning model training using the following params:\n{training_args}.")
output = trainer.train()
```
The error occurs during the trainer.train(). The error is the following:
```
0% 0/7680 [00:00<?, ?it/s]Traceback (most recent call last):
  File "sm_train_deploy.py", line 271, in <module>
    train(parser.parse_args())
  File "sm_train_deploy.py", line 210, in train
    output = trainer.train()
  File "/opt/conda/lib/python3.6/site-packages/transformers/trainer.py", line 1316, in train
    tr_loss_step = self.training_step(model, inputs)
  File "/opt/conda/lib/python3.6/site-packages/transformers/trainer.py", line 1849, in training_step
    loss = self.compute_loss(model, inputs)
  File "/opt/conda/lib/python3.6/site-packages/transformers/trainer.py", line 1881, in compute_loss
    outputs = model(**inputs)
  File "/opt/conda/lib/python3.6/site-packages/torch/nn/modules/module.py", line 903, in _call_impl
    self._forward_pre_hooks.values()):
RuntimeError: OrderedDict mutated during iteration
```
I'm calling this on sagemaker using the following command:
```
from sagemaker.pytorch import PyTorch  # role, params, output_path and the S3 paths are defined earlier

# Instantiate estimator with parameters
estimator = PyTorch(
entry_point="sm_train_deploy.py",
source_dir="code",
role=role,
framework_version="1.8.1",
py_version="py3",
    instance_count=1,  # this script only supports distributed training for GPU instances
instance_type="ml.p3.2xlarge",
output_path=output_path,
module_dir=output_path,
code_location=output_path,
# checkpoint_s3_uri=output_path,
hyperparameters={
**params
},
disable_profiler=True # disable debugger
)
estimator.fit({"training": training_file_s3, "testing": test_file_s3})
```
## Expected behavior
This behavior is not seen when I run the code locally on my own GPU. The training completes as expected.
If I revert to using transformers 4.12.2 (the last version of transformers where this code worked on sagemaker), the code runs as expected on sagemaker without changing anything.
Sorry if I missed anything important, first time submitting a bug report like this.
| 12-01-2021 20:04:59 | 12-01-2021 20:04:59 | Hmm @philschmid <|||||>And @sgugger <|||||>+1, ran into the exact same issue yesterday on `4.12.5`. Didn't try other versions exhaustively, but `4.12.2` worked.<|||||>This is the same error as with DeepSpeed, I'm guessing they are using a similar thing with the hooks not being allowed to be mutated. This is fixed on master so we should just hurry up to do a release and clearly document that this patch release won't work on SageMaker.<|||||>@setu4993 We have a documented overview of all HuggingFace DLCs on Amazon SageMaker [here](https://huggingface.co/docs/sagemaker/reference#deep-learning-container). We made sure that those versions are compatible with `transformers`.
@dylanpmorgan I suggest moving from the `PyTorch` to the `HuggingFace` estimator, which uses the DLCs we build, maintain and monitor<|||||>@sgugger : Yeah, could be a hook-related issue. Would appreciate a release soon.
@philschmid : Thanks for the links to the DLCs, didn't know they were so clearly documented and that's very useful.
Fair point that those are _well_ supported, but it would be unfortunate if those are the _only_ versions supported. There are cases when we want to use latest versions of libraries (with recent features, fixes, etc.) and not just a single version that's available on the DLC. I understand it takes more maintenance and so do not expect all permutations of DLCs being available (we considered building and maintaining our own inference images for all the models and versions of libraries we serve, so I get the pain π), but unexpected regressions over patch make it harder to debug and fix issues that occur only during training. Had it not been for this already open issue, I wouldn't have expected this to be a `transformers` error (I first thought it was a `datasets` error).
I'll also note that the HF DLCs cannot be used on SageMaker Studio which only have PyTorch DLCs, and this error didn't appear while training within a notebook, but a different error could.<|||||>Thank you for your response and great feedback!
> I'll also note that the HF DLCs cannot be used on SageMaker Studio which only have PyTorch DLCs, and this error didn't appear while training within a notebook, but a different error could.
We are working on enabling them for SM Studio as well.
<|||||>@setu4993 Next release will be this Wednesday normally.<|||||>@philschmid :
> We are working on enabling them for SM Studio as well.
That'll come in very handy, thank you!
@sgugger : Sounds good, will test once the next version is released.<|||||>@sgugger @philschmid : Confirming 4.13.0 fixed this for us.<|||||>Thanks for letting us know!<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored. |
transformers | 14,591 | closed | update deprecating `transformers-cli` to `huggingface-cli` in docs | null | 12-01-2021 19:23:45 | 12-01-2021 19:23:45 | looks good to me!<|||||>Thanks @fcakyon!
This page seriously needs an update, and I believe some methods here don't exist in `huggingface-cli` ...
We should probably just link to the following: https://huggingface.co/docs/hub/adding-a-model<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored.<|||||>let's merge this and we'll refine later? (but the doc format changed on the main branch in the meantime :) )<|||||>@julien-c revised version of the docs fixed this issue i believe π <|||||>cool, closing this then. Thanks! π |
transformers | 14,590 | closed | Doc new front | # What does this PR do?
This PR switches the current documentation to the new frontend. It is safe to merge as long as we deploy the new front soon enough, but in the meantime it will just stop deploying the doc using the current system.
| 12-01-2021 18:24:26 | 12-01-2021 18:24:26 | π€― |
transformers | 14,589 | closed | Fix doc interlinks | # What does this PR do?
<!--
Congratulations! You've made it this far! You're not quite done yet though.
Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution.
Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change.
Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost.
-->
<!-- Remove if not applicable -->
Fixes # (issue)
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/master/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/master/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
<!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @
If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**.
Please tag fewer than 3 people.
Models:
- albert, bert, xlm: @LysandreJik
- blenderbot, bart, marian, pegasus, encoderdecoder, t5: @patrickvonplaten, @patil-suraj
- longformer, reformer, transfoxl, xlnet: @patrickvonplaten
- fsmt: @stas00
- funnel: @sgugger
- gpt2: @patrickvonplaten, @LysandreJik
- rag: @patrickvonplaten, @lhoestq
- tensorflow: @LysandreJik
Library:
- benchmarks: @patrickvonplaten
- deepspeed: @stas00
- ray/raytune: @richardliaw, @amogkam
- text generation: @patrickvonplaten
- tokenizers: @n1t0, @LysandreJik
- trainer: @sgugger
- pipelines: @LysandreJik
Documentation: @sgugger
HF projects:
- datasets: [different repo](https://github.com/huggingface/datasets)
- rust tokenizers: [different repo](https://github.com/huggingface/tokenizers)
Examples:
- maintained examples (not research project or legacy): @sgugger, @patil-suraj
- research_projects/bert-loses-patience: @JetRunner
- research_projects/distillation: @VictorSanh
-->
| 12-01-2021 18:01:25 | 12-01-2021 18:01:25 | |
transformers | 14,588 | closed | Make DefaultDataCollator importable from root | This is a small PR to fix an oversight - the DefaultDataCollator class was not importable from root (this is separate from the `default_data_collator` function). It also adds some missing docstring arguments, and the missing docstring for DefaultDataCollator. | 12-01-2021 17:24:37 | 12-01-2021 17:24:37 | |
transformers | 14,587 | closed | Config overrides option working on any kind of config | # 🚀 Feature request
Allow the `--config_overrides` option in the language-modeling examples to override a config that was specified by `--config_name` or `--model_name_or_path`; currently this is blocked by:
```python
if self.config_overrides is not None and (self.config_name is not None or self.model_name_or_path is not None):
raise ValueError(
"--config_overrides can't be used in combination with --config_name or --model_name_or_path"
)
```
## Motivation
It is unclear why this is prevented, and allowing it would make things a lot easier when tweaking settings, compared to having to edit a JSON file every time. Is there a good reason for not allowing it?
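Concretely, what I have in mind is roughly the following (a sketch, not the exact example code; `model_args` is the parsed argument dataclass):
```python
from transformers import AutoConfig

config = AutoConfig.from_pretrained(model_args.config_name or model_args.model_name_or_path)
if model_args.config_overrides is not None:
    # e.g. --config_overrides "n_embd=1024,resid_pdrop=0.2"
    config.update_from_string(model_args.config_overrides)
```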
## Your contribution
I can create a PR and make the changes myself. | 12-01-2021 15:09:05 | 12-01-2021 15:09:05 | If I remember correctly, it's because the transformers logic prevents us from overriding config if either of those 2 are defined.
I'm sure you will see it immediately if you remove the constraint.
If you find a good way to fix it, by all means please do.<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored. |
transformers | 14,586 | closed | Add ONNX support for MarianMT models | # What does this PR do?
<!--
Congratulations! You've made it this far! You're not quite done yet though.
Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution.
Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change.
Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost.
-->
<!-- Remove if not applicable -->
This PR adds support to export MarianMT models in the ONNX format. The underlying logic builds on the awesome refactor / feature enhancement that @michaelbenayoun has implemented in #14358 & #14700 - ~we should rebase this branch on `master` once that PR is merged to simplify the diff in this PR.~ (Done)
Currently, this PR supports ONNX exports for the following "tasks" (i.e. uses):
* `default`, `default-with-past` => equivalent to exporting a pretrained `MarianModel`
* `seq2seq-lm`, `seq2seq-lm-with-past` => equivalent to exporting a pretrained `MarianMTModel`
* `causal-lm`, `causal-lm-with-past`=> equivalent to exporting a pretrained `MarianForCausalLM`
Note that in each case, the end user will have to implement their own `generate()` method with the ONNX model - see [this BART example](https://github.com/huggingface/transformers/tree/master/examples/onnx/pytorch/summarization) for what's involved.
I've also checked locally that the "slow" tests pass with:
```
RUN_SLOW=1 pytest tests/test_onnx_v2.py -k "marian" -rp
```
## Usage
Here's a quick example to show how this works:
```python
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM
from transformers.models.marian import MarianOnnxConfig
model_ckpt = "Helsinki-NLP/opus-mt-en-de"
tokenizer = AutoTokenizer.from_pretrained(model_ckpt)
ref_model = AutoModelForSeq2SeqLM.from_pretrained(model_ckpt)
# Export model
feature = "seq2seq-lm"
onnx_path = f"onnx/{model_ckpt}-{feature}/"
# Run this from a Jupyter notebook
!python -m transformers.onnx --model={model_ckpt} --atol=1e-4 --feature={feature} {onnx_path}
# Test export with inputs
batch_size = 4
encoder_inputs = tokenizer(
["Studies have been shown that owning a dog is good for you"] * batch_size,
return_tensors="np",
)
decoder_inputs = tokenizer(
["Studien haben gezeigt dass es hilfreich ist einen Hund zu besitzen"]
* batch_size,
return_tensors="np",
)
all_inputs = {
"input_ids": encoder_inputs["input_ids"],
"attention_mask": encoder_inputs["attention_mask"],
"decoder_input_ids": decoder_inputs["input_ids"],
"decoder_attention_mask": decoder_inputs["attention_mask"],
}
# Generate ONNX outputs
ort_session = ort.InferenceSession(f"{onnx_path}model.onnx")
onnx_config = MarianOnnxConfig(ref_model.config, task=feature)
onnx_named_outputs = list(onnx_config.outputs.keys())
onnx_outputs = ort_session.run(onnx_named_outputs, all_inputs)
```
**TODO**
- [x] Extend support for language modelling head
- [x] Investigate range of numerical tolerance between raw and ONNX models for a range of checkpoints
- [x] Ensure that ONNX models are compatible with ONNX Runtime
- [x] Verify whether past key values are supported
Closes #13823, #13854
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [x] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/master/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/master/docs#writing-source-documentation).
- [x] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
<!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @
If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**.
Please tag fewer than 3 people.
Models:
- albert, bert, xlm: @LysandreJik
- blenderbot, bart, marian, pegasus, encoderdecoder, t5: @patrickvonplaten, @patil-suraj
- longformer, reformer, transfoxl, xlnet: @patrickvonplaten
- fsmt: @stas00
- funnel: @sgugger
- gpt2: @patrickvonplaten, @LysandreJik
- rag: @patrickvonplaten, @lhoestq
- tensorflow: @LysandreJik
Library:
- benchmarks: @patrickvonplaten
- deepspeed: @stas00
- ray/raytune: @richardliaw, @amogkam
- text generation: @patrickvonplaten
- tokenizers: @n1t0, @LysandreJik
- trainer: @sgugger
- pipelines: @LysandreJik
Documentation: @sgugger
HF projects:
- datasets: [different repo](https://github.com/huggingface/datasets)
- rust tokenizers: [different repo](https://github.com/huggingface/tokenizers)
Examples:
- maintained examples (not research project or legacy): @sgugger, @patil-suraj
- research_projects/bert-loses-patience: @JetRunner
- research_projects/distillation: @VictorSanh
-->
| 12-01-2021 13:48:50 | 12-01-2021 13:48:50 | There seems to be some sort of race condition happening in `run_tests_torch`:
```
_____________________________ ERROR collecting gw1 _____________________________
Different tests were collected between gw0 and gw1. The difference is:
--- gw0
+++ gw1
```
[This issue](https://github.com/pytest-dev/pytest-xdist/issues/366) has similar problems - perhaps a solution lies there.<|||||>Feel free to merge once you have taken care of the docs and the `# Copied from` statements :)<|||||>Thanks for the reviews @LysandreJik and @michaelbenayoun π !
I've fixed the docs by rebasing on `master` and added the `# Copied from:` snippets to the functions (I did not know about that trick!)
Will merge once all the test pass :)<|||||>The outputs decoding cannot get the correct result. How do you get the translation result |
transformers | 14,585 | closed | Deberta's Enhanced Masked Decoder | # 🚀 Feature request
I don't know if this is a feature request or if the feature is already included, but I wasn't able to find it. Reviewing Deberta's code, I tried to look for the Enhanced Masked Decoder, which is supposed to take absolute positional embeddings into account. However, when looking at both DebertaModel and DebertaForMLM (and their submodules), I don't see where this is done... Could someone please tell me if the Enhanced Masked Decoder is applied somewhere? Reading the Deberta paper it is clear this is one of the key improvements with respect to BERT and RoBERTa, and I think the Transformers implementation of Deberta was carried out by the original author of the model, so I am sure this part is not missing and it's just me not finding it... If someone can help me with this I'd really appreciate it :)
@patrickvonplaten @sgugger @BigBird01
## Motivation
## Your contribution
| 12-01-2021 11:31:28 | 12-01-2021 11:31:28 | The implementation of Enhanced Mask Decoder was here, https://github.com/microsoft/DeBERTa/blob/master/DeBERTa/apps/models/masked_language_model.py
We will port the implementation to transformers.<|||||>Hi Alex, thanks for your interest in our work and sorry for the inconvenience. The missing EMD part will not affect the convergence of pre-training; it will slightly affect the perplexity of MLM training. The inconsistency between transformers and the official implementation is a bug due to historical reasons. We will fix it and improve our future practice to maintain higher quality bars on both sides. You are welcome to help us port the official EMD and the data loading/preparation part from our official repo to transformers.
If you have any other issues with pre-training, please feel free to reach me by mail directly. Thanks!
<|||||>Thank you so much for the understanding and for the kind reply. Of course I had interest in your work from the moment I first read the paper, it's so original and your results are impressive. I really appreciate you taking the time to fix this, I'll be looking forward to try the changes. From what you say, do you think that we could safely continue training from the last checkpoint with the new model updates without trouble? Or would we need to make an adjustment to make things work?
Again, thanks for the kind explanation, and for sharing your work with the community. <|||||>Thanks for your understanding and kind words, Alex :)
I think you can just continue training with your current settings as long as the training loss didn't go wild.
Basically, besides the DA and EMD described in our paper, there are a few other things that are important for pre-training.
1. Batch size, learning rate, warmup. For the base model, our best local settings are: 2k batch size, 1M steps with a learning rate of 3e-4 and warmup for 10k steps. You may also need to set beta2 of Adam to 0.98 to stabilize training with a large batch size. (A rough sketch of these values is given below.)
2. Model size
3. Vocabulary
4. Data volume and quality
We just added a [document](https://github.com/microsoft/DeBERTa/tree/master/experiments/language_model) on pre-training in our official repo; you may take it as a reference for your settings. And you are welcome to leave comments on it.
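If you are training with the `Trainer` from transformers, the settings in point 1 map to roughly the following sketch (the per-device batch size and accumulation steps are only illustrative; together with the number of GPUs they should give an effective batch size of about 2k):
```python
from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir="deberta-base-pretraining",
    per_device_train_batch_size=32,   # illustrative
    gradient_accumulation_steps=8,    # illustrative
    learning_rate=3e-4,
    warmup_steps=10_000,
    max_steps=1_000_000,
    adam_beta2=0.98,
)
```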
<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored. |
transformers | 14,584 | closed | Add PoolFormer Model | # 🌟 New model addition
I would like to add the recently announced [PoolFormer](https://github.com/sail-sg/poolformer) model to the Transformers library.
## Model description
The PoolFormer model was proposed in the paper "**MetaFormer is Actually What You Need for Vision**" by Sea AI Lab. The main argument is that the performance of transformer/MLP-like models primarily comes from the general architecture rather than from the specific token mixers they use (such as attention).
To show this, they use a basic non-parametric pooling operator as the token mixer. The resulting model outperforms DeiT and ResMLP.
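For reference, the pooling token mixer is roughly the following (paraphrased from the official repository, so treat it as a sketch):
```python
import torch.nn as nn

class Pooling(nn.Module):
    # Non-parametric token mixer: average pooling with the identity subtracted
    def __init__(self, pool_size=3):
        super().__init__()
        self.pool = nn.AvgPool2d(
            pool_size, stride=1, padding=pool_size // 2, count_include_pad=False
        )

    def forward(self, x):
        # Subtracting the input keeps only the "mixing" contribution
        return self.pool(x) - x
```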
Link to the Model Repo: [PoolFormer](https://github.com/sail-sg/poolformer)
Link to the Paper: [MetaFormer is Actually What You Need for Vision](https://arxiv.org/abs/2111.11418)
## Open-source status
* [x] the model implementation is available: In PyTorch on PoolFormer [repository](https://github.com/sail-sg/poolformer)
* [x] the model weights are available: In the same repository, [link](https://github.com/sail-sg/poolformer#2-poolformer-models)
* [x] who are the authors: [Sea AI Lab](https://github.com/sail-sg)
| 12-01-2021 11:03:41 | 12-01-2021 11:03:41 | |
transformers | 14,583 | closed | Doc misc fixes | # What does this PR do?
1. Fix docs/source/converting_tensorflow_models.rst
2. Delete versions.yml because https://github.com/huggingface/doc-builder/pull/31
3. Rm `pretrained_models` from toctree.yml
| 12-01-2021 10:35:57 | 12-01-2021 10:35:57 | |
transformers | 14,582 | closed | Add W&B backend for hyperparameter sweep | # Add support for W&B hyperparameter sweep
This PR:
* allows using wandb for running hyperparameter search.
* The runs are visualized on W&B sweeps dashboard
* This supports running sweeps on parallel devices, all reporting to the same central dashboard.
### Usage
**To run a new hyperparameter search:**
```
trainer.hyperparameter_search(
backend="wandb",
project="transformers_sweep", # name of the project
n_trials=5,
metric="eval/loss", # metric to be optimized, default 'eval/loss'. A warning is raised if the passed metric is not found
)
```
This outputs a sweep id. Eg. `my_project/sweep_id`
**To run sweeps on parallel devices:**
Just pass the sweep id you want to run in parallel:
```
trainer.hyperparameter_search(
backend="wandb",
sweep_id = "my_project/sweep_id"
)
```
## Example sweep [[Dashboard](https://wandb.ai/cayush/teansformers_sweep/sweeps/xrmhkad5?workspace=user-cayush)] - runs from 2 devices were reported in this dashboard.

## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/master/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/master/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
### Regarding tests
The testing suite doesn't include any tests, as this functionality is tested on wandb's side. The same can be done for the sweeps integration.
## Who can review?
Tagging Boris as he did the W&B integration and Sylvain as this PR introduces changes in integrations and trainer
@borisdayma @sgugger
| 12-01-2021 08:51:59 | 12-01-2021 08:51:59 | Thanks for your PR! I will take a look at your PR and help out on tests after the holiday break - first week of January! Thanks for your patience.<|||||>@LysandreJik Hey! any updates on this?<|||||>Thank you for your PR, @AyushExel! All that remains is testing.
You can see an example of how it was done for `ray` and `sigopt` here: https://github.com/huggingface/transformers/blob/f21bc4215aa979a5f11a4988600bc84ad96bef5f/tests/test_trainer.py#L1582-L1692
Ideally, we would move all of that to an `integrations` folder in the [tests](https://github.com/huggingface/transformers/tree/master/tests) folder. Then we could add a `test_wandb_integration.py` which would contain a similar integration test to those above.
The idea is that when the test fail, we ping the author(s) of the PRs, this way we share the burden of the maintenance.
Does that work for you?<|||||>@LysandreJik sounds good!
I've added the tests to the same file `test_trainer.py` for now. Do you want me to move it out to integrations before merging or is that a future project?<|||||>Thanks for adding the test! Can it run out of the box if wandb is installed? If I recall correctly there is some login to do no?<|||||>@sgugger I've kept anonymous mode on by default in the test so it should run. Please, Let me know if that's not the case and I'll think of a workaround<|||||>Having the test in is the important part, we can move to `integrations` in a future PR!
I'm getting the following error when running it:
```
test_trainer.py::TrainerHyperParameterWandbIntegrationTest::test_hyperparameter_search FAILED [100%]
tests/test_trainer.py:1724 (TrainerHyperParameterWandbIntegrationTest.test_hyperparameter_search)
self = <tests.test_trainer.TrainerHyperParameterWandbIntegrationTest testMethod=test_hyperparameter_search>
def test_hyperparameter_search(self):
class MyTrialShortNamer(TrialShortNamer):
DEFAULTS = {"a": 0, "b": 0}
def hp_space(trial):
return {
"method": "random",
"metric": {},
"parameters": {
"a": {"distribution": "uniform", "min": 1e-6, "max": 1e-4},
"b": {"distribution": "int_uniform", "min": 1, "max": 6},
},
}
def model_init(config):
if config is None:
a = 0
b = 0
else:
a = config["a"]
b = config["b"]
model_config = RegressionModelConfig(a=a, b=b, double_output=False)
return RegressionPreTrainedModel(model_config)
def hp_name(params):
return MyTrialShortNamer.shortname(params)
with tempfile.TemporaryDirectory() as tmp_dir:
trainer = get_regression_trainer(
output_dir=tmp_dir,
learning_rate=0.1,
logging_steps=1,
evaluation_strategy=IntervalStrategy.EPOCH,
save_strategy=IntervalStrategy.EPOCH,
num_train_epochs=4,
disable_tqdm=True,
load_best_model_at_end=True,
logging_dir="runs",
run_name="test",
model_init=model_init,
)
trainer.hyperparameter_search(
> direction="minimize", hp_space=hp_space, hp_name=hp_name, backend="wandb", n_trials=4, anonymous="must"
)
test_trainer.py:1769:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
../src/transformers/trainer.py:1813: in hyperparameter_search
best_run = backend_dict[backend](self, n_trials, direction, **kwargs)
../src/transformers/integrations.py:406: in run_hp_search_wandb
sweep_id = wandb.sweep(sweep_config, project=project, entity=entity) if not sweep_id else sweep_id
../../../../transformers/.env/lib/python3.6/site-packages/wandb/sdk/wandb_sweep.py:101: in sweep
wandb_login._login(_silent=True)
../../../../transformers/.env/lib/python3.6/site-packages/wandb/sdk/wandb_login.py:274: in _login
wlogin.prompt_api_key()
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
self = <wandb.sdk.wandb_login._WandbLogin object at 0x7efde99d17b8>
def prompt_api_key(self):
key, status = self._prompt_api_key()
if status == ApiKeyStatus.NOTTY:
directive = (
"wandb login [your_api_key]"
if self._settings._cli_only_mode
else "wandb.login(key=[your_api_key])"
)
> raise UsageError("api_key not configured (no-tty). call " + directive)
E wandb.errors.UsageError: api_key not configured (no-tty). call wandb.login(key=[your_api_key])
../../../../transformers/.env/lib/python3.6/site-packages/wandb/sdk/wandb_login.py:209: UsageError
```
Should we setup an API key?<|||||>@LysandreJik I guess there are 2 options here.
* Add wandb api key as a github secret and run `wandb login $secret_key` in the CI test workflow setup
* Or Setup a dummy account meant only for test and login directly `wandb login key` without having to worry about exposing the api key<|||||>Hey @LysandreJik were you able to run the tests?<|||||>Hey @AyushExel, I think we can go with the first solution:
- Add wandb api key as a github secret and run wandb login $secret_key in the CI test workflow setup
Would you be able to share an API key we could register in secrets to get this working?<|||||>@LysandreJik each wandb account has an api key which is accessible at [wandb.ai/authorize](https://wandb.ai/authorize)<|||||>@LysandreJik it probably makes more sense for you to create a dummy wandb account with a hugging face email that you control and use this accounts API key :) <|||||>Ok will give it a look!<|||||>I've added a key for a dummy account in the `WANDB_API_KEY` secret of this repo. Is it possible for you to leverage it in Github Actions and to pass it to this test in your PR?<|||||>@LysandreJik Yes..In github actions you can run `wandb login YOUR_SECRET_ WANDB_API_KEY` ,it will log you in on that system and the tests will execute fine<|||||>Right, I was asking if you could please add it to the GitHub action workflows in your PR. I've added the secret, you should be able to handle it from your PR now.
The jobs that run with integrations are the following:
- https://github.com/huggingface/transformers/blob/master/.github/workflows/self-scheduled.yml#L21
- https://github.com/huggingface/transformers/blob/master/.github/workflows/self-scheduled.yml#L253
If this is not possible, then I will open a PR on your PR.
Thank you.<|||||>Okay got it.. I'm trying to set it up<|||||>@LysandreJik I've added the wandb login command in both the tests you mentioned. Can you please take a look?
|
transformers | 14,581 | closed | Cannot run Deepspeed inference of GPT-Neo with low_cpu_mem_usage enabled | ## Environment info
- `transformers` version: 4.12.5
- Platform: Ubuntu 18.04.6 LTS
- Python version: Python 3.6.9
- PyTorch version (GPU?): 1.10.0+cu113
- Tensorflow version (GPU?): n/a
- Using GPU in script?: Yes
- Using distributed or parallel set-up in script?: Using Deepspeed 0.5.0
### Who can help
@stas00
## Information
Model I am using (Bert, XLNet ...): EleutherAI/gpt-neo-1.3B
The problem arises when using:
* [x] the official example scripts: (give details below)
* [ ] my own modified scripts: (give details below)
The tasks I am working on is:
* [ ] an official GLUE/SQUaD task: (give the name)
* [x] my own task or dataset: (give details below)
## To reproduce
Steps to reproduce the behavior:
1. Write the following code, which is adapted from https://www.deepspeed.ai/tutorials/inference-tutorial/#end-to-end-gpt-neo-27b-inference
```python
import os
import deepspeed
import torch
from transformers import pipeline, AutoModelForCausalLM, AutoTokenizer
local_rank = int(os.getenv('LOCAL_RANK', '0'))
world_size = int(os.getenv('WORLD_SIZE', '1'))
model = AutoModelForCausalLM.from_pretrained("EleutherAI/gpt-neo-1.3B", low_cpu_mem_usage=True)
#model = AutoModelForCausalLM.from_pretrained("EleutherAI/gpt-j-6B", low_cpu_mem_usage=True)
tokenizer_i = AutoTokenizer.from_pretrained("EleutherAI/gpt-neo-1.3B")
#tokenizer_i = AutoTokenizer.from_pretrained("EleutherAI/gpt-j-6B")
generator = pipeline('text-generation', model=model, device=local_rank, tokenizer=tokenizer_i)
generator.model = deepspeed.init_inference(generator.model,
mp_size=world_size,
dtype=torch.float,
replace_method='auto')
string = generator("DeepSpeed is", do_sample=True, min_length=50)
if not torch.distributed.is_initialized() or torch.distributed.get_rank() == 0:
print(string)
```
2. Execute the code using Deepspeed with the following command:
```shell
deepspeed --num_gpus 1 test.py
```
3. Execution fails with the following error:
```log
Traceback (most recent call last):
File "test.py", line 25, in <module>
replace_method='auto')
File "/home/ubuntu/env/lib/python3.6/site-packages/deepspeed/__init__.py", line 285, in init_inference
quantization_setting)
File "/home/ubuntu/env/lib/python3.6/site-packages/deepspeed/inference/engine.py", line 70, in __init__
self._apply_injection_policy()
File "/home/ubuntu/env/lib/python3.6/site-packages/deepspeed/inference/engine.py", line 148, in _apply_injection_policy
self.quantize_groups))
File "/home/ubuntu/env/lib/python3.6/site-packages/deepspeed/module_inject/replace_module.py", line 308, in replace_transformer_layer
_replace_policy=policy)
File "/home/ubuntu/env/lib/python3.6/site-packages/deepspeed/module_inject/replace_module.py", line 404, in replace_module
replaced_module, _ = _replace_module(model, policy)
File "/home/ubuntu/env/lib/python3.6/site-packages/deepspeed/module_inject/replace_module.py", line 429, in _replace_module
_, layer_id = _replace_module(child, policies, layer_id=layer_id)
File "/home/ubuntu/env/lib/python3.6/site-packages/deepspeed/module_inject/replace_module.py", line 429, in _replace_module
_, layer_id = _replace_module(child, policies, layer_id=layer_id)
File "/home/ubuntu/env/lib/python3.6/site-packages/deepspeed/module_inject/replace_module.py", line 425, in _replace_module
layer_id))
File "/home/ubuntu/env/lib/python3.6/site-packages/deepspeed/module_inject/replace_module.py", line 301, in replace_fn
layer_id=layer_id)
File "/home/ubuntu/env/lib/python3.6/site-packages/deepspeed/module_inject/replace_module.py", line 224, in replace_with_policy
dense_w = transpose(dense_w)
File "/home/ubuntu/env/lib/python3.6/site-packages/deepspeed/module_inject/replace_module.py", line 218, in transpose
data.view(-1).copy_(data.transpose(-1, -2).contiguous().view(-1))
RuntimeError: view size is not compatible with input tensor's size and stride (at least one dimension spans across two contiguous subspaces). Use .reshape(...) instead.
```
## Expected behavior
Run deepspeed inference successfully without any failure
## Comment
Hi all,
I'm trying to run GPT-Neo inference using Deepspeed. Because of my system environment, I need to reduce the peak RAM usage, so I added the argument low_cpu_mem_usage=True to from_pretrained, but it fails as described above.
I'm filing this case with HF because the failure goes away when I remove low_cpu_mem_usage or switch the model to gpt-j-6B.
Could you advise on this problem? If the low_cpu_mem_usage feature doesn't support GPT-Neo, I would appreciate it if you could say so.
Thanks, | 12-01-2021 08:49:38 | 12-01-2021 08:49:38 | Deepspeed Inference is not a completed product AFAIK, and it's not yet integrated into Transformers because of that.
As you can see from the trace the `transformers` library is not being used. So please re-file this issue with Deepspeed and tag @RezaYazdaniAminabadi.
Any reason why you are not using Deepspeed ZeRO Inference? https://huggingface.co/transformers/master/main_classes/deepspeed.html#deepspeed-zero-inference
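For reference, a minimal non-Trainer ZeRO-3 inference setup along the lines of that doc page looks roughly like the sketch below; the config values are illustrative placeholders rather than a tuned recipe, so double-check them against the docs:
```python
import deepspeed
from transformers import AutoModelForCausalLM
from transformers.deepspeed import HfDeepSpeedConfig

ds_config = {
    "zero_optimization": {
        "stage": 3,
        "offload_param": {"device": "cpu", "pin_memory": True},
    },
    "train_micro_batch_size_per_gpu": 1,
}
dschf = HfDeepSpeedConfig(ds_config)  # must be created *before* from_pretrained
model = AutoModelForCausalLM.from_pretrained("EleutherAI/gpt-neo-1.3B")
ds_engine = deepspeed.initialize(model=model, config_params=ds_config)[0]
ds_engine.module.eval()  # generation then goes through ds_engine.module
```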
Deepspeed Inference and Deepspeed ZeRO Inference are 2 completely different things. The former uses Tensor parallelism - the 2nd is ZeRO sharding.<|||||>@stas00
Thanks for your comment. I filed the case with transformers because the issue does not reproduce with low_cpu_mem_usage disabled. Do you think it needs to be handled by Deepspeed even so?
I'm eventually trying to load a bigger GPT-Neo-like model which doesn't fit on one GPU. That's why Deepspeed Inference is used. I appreciate your advice though. <|||||>@stas00
I have no idea which diff fixes the issue, but it was fixed after I updated the Deepspeed Inference code to the latest version. I really appreciate your advice, since I had only been focusing on transformers. Thanks!!!<|||||>It's an actively developed new product, so someone must have reported this issue recently and it got fixed.
I'm glad this is now working for you, @Jiyeon1230
> I'm eventually trying to load a bigger GPT-Neo like model which doesn't fit to one GPU. That's why Deepspeed Inference is used.
And I repeat that Deepspeed ZeRO is already a well-tested scalability solution that you can use today for models larger than one GPU, and it's fully integrated into Transformers. It has additional features like CPU Offload, which scales better, and which I don't think Deepspeed Inference supports at the moment. See the doc link in my last comment.
But, of course, it's up to you what you use.<|||||>@stas00
Oh, sure! I may have misunderstood Deepspeed ZeRO. I'll definitely look into it. Thanks for your advice. |
transformers | 14,580 | closed | [bf16 support] tweaks | This PR:
- validates that `--bf16`, `--bf16_full_eval` are supported by the user setup (a rough sketch of such a check is included below)
- doc improvements by @manuelciosici
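On the first bullet, the check is roughly of this shape; this is an assumed sketch for illustration, not the code that was merged:
```python
import torch

def validate_bf16_args(bf16: bool, bf16_full_eval: bool):
    # bf16 mixed precision needs CUDA, an Ampere-or-newer GPU (compute capability >= 8)
    # and a sufficiently recent PyTorch; anything else should fail fast with a clear error.
    if bf16 or bf16_full_eval:
        if not torch.cuda.is_available() or torch.cuda.get_device_properties(0).major < 8:
            raise ValueError("--bf16/--bf16_full_eval require an Ampere or newer GPU")
```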
@sgugger | 12-01-2021 03:19:24 | 12-01-2021 03:19:24 | |
transformers | 14,579 | closed | [doc] bf16/tf32 guide | This PR adds performance-specific docs as a follow up to https://github.com/huggingface/transformers/pull/13207
- adds the missing `--fp16_full_eval`
- adds `--bf16` and `--bf16_full_eval`
- adds discussion of `tf32`
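On that last point, the relevant PyTorch switches are just two flags, shown here for illustration (on Ampere GPUs the matmul one is enabled by default in current PyTorch releases):
```python
import torch

torch.backends.cuda.matmul.allow_tf32 = True  # use TF32 tensor cores for float32 matmuls
torch.backends.cudnn.allow_tf32 = True        # and for cuDNN convolutions
```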
This is the first pass, surely will add more down the road.
@sgugger | 12-01-2021 02:59:21 | 12-01-2021 02:59:21 | Thanks a lot for the proofreading and the corrections, @manuelciosici
You couldn't make suggestions since the PR was merged already.
I applied your corrections to this ongoing PR https://github.com/huggingface/transformers/pull/14580 that collects small fixes.
I even figured out how to make the right attribution to your suggestions ;)
```
git commit --author "Manuel R. Ciosici <[email protected]>" -am "corrections"
``` |
transformers | 14,578 | closed | Adapt build command to new CLI tools | # What does this PR do?
This adapts the command to the new CLI tool introduced in [this PR](https://github.com/huggingface/doc-builder/pull/32) (should be merged after as a result). | 11-30-2021 23:54:39 | 11-30-2021 23:54:39 | |
transformers | 14,577 | closed | fix pytorch division warning by using suggested torch.div rounding_mode | # What does this PR do?
This PR removes a warning that is repeatedly thrown with the latest releases of transformers and pytorch.
example:
```shell
lib/python3.7/site-packages/transformers/models/hubert/modeling_hubert.py:803: UserWarning: __floordiv__ is deprecated, and its behavior will change in a future version of pytorch. It currently rounds toward 0 (like the 'trunc' function NOT 'floor'). This results in incorrect rounding for negative values. To keep the current behavior, use torch.div(a, b, rounding_mode='trunc'), or for actual floor division, use torch.div(a, b, rounding_mode='floor').
```
The fact that this occurs in multiple places suggests there's an opportunity for a shared function. Instead of a larger refactor, this PR touches a few specific places using `//` to avoid causing other side effects.
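Since `rounding_mode` only exists from PyTorch 1.8.0 onward, such a shared helper would presumably need to be version-gated. A rough sketch (the name and placement are assumptions, not necessarily what gets merged):
```python
import torch
from packaging import version

if version.parse(torch.__version__) >= version.parse("1.8.0"):
    def torch_int_div(tensor, divisor):
        # floor division; for the non-negative sizes involved this matches the old `//` behavior
        return torch.div(tensor, divisor, rounding_mode="floor")
else:
    def torch_int_div(tensor, divisor):
        # older PyTorch: keep the original operator, which does not emit the warning there
        return tensor // divisor
```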
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md#start-contributing-pull-requests) Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link to it if that's the case.
No
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/master/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/master/docs#writing-source-documentation).
Docs should be unaffected
- [ ] Did you write any new necessary tests?
The test coverage should be unchanged. | 11-30-2021 21:47:37 | 11-30-2021 21:47:37 | Note that this argument of `torch.div` was only introduced in PyTorch 1.8.0, so this fix is not compatible with all the versions of torch we support (1.4.0 and onward). I think we will need to write a function that uses the old syntax (`//`) for older versions and only switches to the newer syntax in more recent versions, then use that function.<|||||>> Note that this argument of `torch.div` was only introduced in PyTorch 1.8.0, so this fix is not compatible with all the versions of torch we support (1.4.0 and onward). I think we will need to write a function that uses the old syntax (`//`) for older versions and only switches to the newer syntax in more recent versions, then use that function.
Agree! Let's put it in `src/transformers/file_utils.py` no? @mgoldey - would be interested in adapting the PR to include such a function? The function could look very similar to this syntax: https://github.com/huggingface/transformers/blob/bc8a9f415b15d0c8e2f01c6ad79988716704fd9d/src/transformers/activations.py#L45<|||||>>
I'd be happy to put forward something to that effect in the next few days<|||||>Hi @mgoldey did you have time to work on this? It looks like it's going to be needed in the PR mentioned above as well, so we can take over on the implementation of the custom function Patrick mentioned you don't have time :-)<|||||>> Hi @mgoldey did you have time to work on this? It looks like it's going to be needed in the PR mentioned above as well, so we can take over on the implementation of the custom function Patrick mentioned you don't have time :-)
Feel free to jump in if you have time. I'll notify if I wind up being free, but I had to switch directions from time-sensitive projects.<|||||>Hi @mgoldey, I couldn't push to your branch, so I opened a fresh PR at #15180. I put you as co-author.<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored.<|||||>Moot - resolved by #15180 |
transformers | 14,576 | closed | [Flax] Add FlaxBlenderbotSmall | # What does this PR do?
This PR adds `FlaxBlenderbotSmall` models.
**TODO:** Add model checkpoints for flax on the Hub.
Fixes #14188
## Before submitting
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [x] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [x] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/master/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/master/docs#writing-source-documentation).
- [x] Did you write any new necessary tests?
## Who can review?
@patrickvonplaten @patil-suraj | 11-30-2021 21:26:04 | 11-30-2021 21:26:04 | Good to merge for me! Sorry for being so late and causing the merge conflict now @stancld :-/ Could you fix it quickly? Then I think we're good to merge this PR<|||||>@patrickvonplaten No worries, it's okay :] Merge conflict should be resolved.<|||||>Thanks, @stancld! Will push the checkpoints shortly and merge :) |
transformers | 14,575 | closed | GPT2 large trains on 1 GPU but does not fit in two. | Hi all,
I am training GPT2 from scratch with the following command:
`torchrun --nproc_per_node=2 --nnodes=1 ./5.run_clm-post.py --model_name_or_path gpt2-large --train_file datasets/sample.txt --tokenizer_name myembeddings --do_train --do_eval --output_dir ./sample --evaluation_strategy epoch --num_train_epochs 100 --per_device_train_batch_size 24 --cache_dir .cache/`
When I train on a single A100, the model trains perfectly. When running on 2 GPUs (both A100s) I get a CUDA out-of-memory error. I tried decreasing the batch size to 16 but it still happens. Does this mean that I have to go down to batch size 8? Why does batch size 24 fit on a single GPU but not on two?
Below are the errors:
With batch size 16:
```
File "/path/to/miniconda3/lib/python3.6/site-packages/transformers/activations.py", line 42, in gelu_new
return 0.5 * x * (1.0 + torch.tanh(math.sqrt(2.0 / math.pi) * (x + 0.044715 * torch.pow(x, 3.0))))
RuntimeError: CUDA out of memory. Tried to allocate 320.00 MiB (GPU 0; 39.59 GiB total capacity; 36.81 GiB already allocated; 205.69 MiB free; 37.07 GiB reserved in total by PyTorch) If reserved memory is >> allocated memory try setting max_split_size_mb to avoid fragmentation. See documentation for Memory Management and PYTORCH_CUDA_ALLOC_CONF
```
With batch size 24:
```
File "/path/to/miniconda3/lib/python3.6/site-packages/torch/nn/functional.py", line 1169, in dropout
return _VF.dropout_(input, p, training) if inplace else _VF.dropout(input, p, training)
RuntimeError: CUDA out of memory. Tried to allocate 1.88 GiB (GPU 1; 39.59 GiB total capacity; 36.11 GiB already allocated; 909.69 MiB free; 36.38 GiB reserved in total by PyTorch) If reserved memory is >> allocated memory try setting max_split_size_mb to avoid fragmentation. See documentation for Memory Management and PYTORCH_CUDA_ALLOC_CONF
```
Any help would be appreciated. Also, any advice on making the model train faster would be very welcome.
Thanks for this great repository. | 11-30-2021 20:58:09 | 11-30-2021 20:58:09 | This [really great document](https://huggingface.co/docs/transformers/performance), written by @stas00 , may be of help :)<|||||>That's very interesting. There should be a bit of memory usage overhead for many gpus over 1 gpu because the latter has to sync DDP results, but not by much.
What was the command line for a single gpu?
What was the max gpu memory usage with 1 gpu?
> any advice to make the model train faster would be great to follow.
what Lysandre said. Specifically enabling https://huggingface.co/docs/transformers/master/performance#bf16
Additionally you may want to experiment with Deepspeed Stage-2 https://huggingface.co/docs/transformers/master/main_classes/deepspeed but it doesn't yet support bfloat - should happen soon.<|||||>Hi all,
Thank you for the suggestions. I didn't know about bf16. I'll check whether that's faster than deepspeed + fp16 which is what I'm finally running at the moment.
The command for one GPU was:
`python ./5.run_clm-post.py --model_name_or_path gpt2-large --train_file datasets/sample.txt --tokenizer_name myembeddings --do_train --do_eval --output_dir ./sample --evaluation_strategy epoch --num_train_epochs 100 --per_device_train_batch_size 24 --cache_dir .cache/`
I have switched over to a computer with 8 A40s and I cannot reproduce the error there. But I could re-test in the A100 if you're interested. Otherwise, feel free to close the issue.
Thanks<|||||>Unfortunately, I don't currently have access to A100, as it'd be much easier for me to test directly.
So let's keep it open and once I can test I will be happy to experiment and see if there are any problems. I will assign this to me so that it doesn't get lost.<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored. |
transformers | 14,574 | closed | Adafactor does not work with Resnets (or with MAML) | ## Environment info
- `transformers` version: 4.10.3
- Platform: Linux-3.10.0-1160.42.2.el7.x86_64-x86_64-with-glibc2.17
- Python version: 3.9.7
- PyTorch version (GPU?): 1.9.1+cu111 (True)
- Tensorflow version (GPU?): not installed (NA)
- Flax version (CPU?/GPU?/TPU?): not installed (NA)
- Jax version: not installed
- JaxLib version: not installed
- Using GPU in script?: yes (NVIDIA GeForce ...)
- Using distributed or parallel set-up in script?: no
## Information
Model I am using (Bert, XLNet ...):
The problem arises when using:
* [ ] the official example scripts: (give details below)
* [x] my own modified scripts: (give details below)
The tasks I am working on is:
* [ ] an official GLUE/SQUaD task: (give the name)
* [x] my own task or dataset: (give details below)
## To reproduce
I am running the MAML (with higher) meta-learning algorithm with a resnet. I see this gives issues in my script (error message pasted below).
Is Adafactor not supposed to work with Resnets or other models?
Steps to reproduce the behavior:
1. run this code: https://github.com/brando90/higher/blob/master/examples/maml-omniglot.py (it already has adafactor)
2. if that works uncomment the resnet12 line and ping me please
## Expected behavior
I expect training to go smoothly but instead get:
```
--------------------- META-TRAIN ------------------------
Starting training!
Traceback (most recent call last):
File "/home/miranda9/automl-meta-learning/automl-proj-src/experiments/meta_learning/main_metalearning.py", line 441, in <module>
main_resume_from_checkpoint(args)
File "/home/miranda9/automl-meta-learning/automl-proj-src/experiments/meta_learning/main_metalearning.py", line 403, in main_resume_from_checkpoint
run_training(args)
File "/home/miranda9/automl-meta-learning/automl-proj-src/experiments/meta_learning/main_metalearning.py", line 413, in run_training
meta_train_fixed_iterations(args)
File "/home/miranda9/automl-meta-learning/automl-proj-src/meta_learning/training/meta_training.py", line 233, in meta_train_fixed_iterations
args.outer_opt.step()
File "/home/miranda9/miniconda3/envs/metalearning_gpu/lib/python3.9/site-packages/torch/optim/optimizer.py", line 88, in wrapper
return func(*args, **kwargs)
File "/home/miranda9/miniconda3/envs/metalearning_gpu/lib/python3.9/site-packages/transformers/optimization.py", line 577, in step
update = self._approx_sq_grad(exp_avg_sq_row, exp_avg_sq_col)
File "/home/miranda9/miniconda3/envs/metalearning_gpu/lib/python3.9/site-packages/transformers/optimization.py", line 508, in _approx_sq_grad
return torch.mm(r_factor.unsqueeze(-1), c_factor.unsqueeze(0))
RuntimeError: mat1 must be a matrix, got 4-D tensor
```
----
full error output:
```
('PID', '25721')
('always_use_deterministic_algorithms', False)
('args_hardcoded_in_script', False)
('base_model_mode', 'resnet12_rsf')
('best_val_loss', inf)
('condor_jobid', -1)
('copy_initial_weights', False)
('current_logs_path', '/home/miranda9/data/logs/logs_Nov05_15-44-03_jobid_668')
('current_time', 'Nov30_08-42-53')
('data_path', 'miniimagenet')
('debug', False)
('debug_test', False)
('device', device(type='cuda'))
('epoch_num', -1)
('eval_iters', 2)
('experiment_name', 'debug')
('fo', False)
('force_log', True)
('githash', '9af491c')
('githash_long', '9af491ccd13fa88f4d07287f54305488ba4967fc')
('githash_short', '9af491c')
('gpu_name', 'NVIDIA GeForce GTX TITAN X')
('grad_clip_mode', None)
('grad_clip_rate', None)
('hostname', 'vision-02.cs.illinois.edu')
('inner_debug_eval', False)
('inner_debug_train', False)
('inner_lr', 0.1)
('it', 0)
('jobid', 10340)
('k_eval', 15)
('k_shots', 5)
('log_root', PosixPath('/home/miranda9/data/logs/logs_Nov30_08-42-53_jobid_10340'))
('log_to_wandb', True)
('log_train_freq', 200)
('log_val_freq', 200)
('logger', <uutils.logger.Logger object at 0x2b832f5eff70>)
('logging', True)
('mail_user', '[email protected]')
('master_port', '37126')
('meta_batch_size_eval', 2)
('meta_batch_size_train', 2)
('meta_learner', 'maml_fixed_inner_lr')
('metrics_as_dist', False)
('my_stdout_filepath', '/home/miranda9/data/logs/logs_Nov05_15-44-03_jobid_668/my_stdout.log')
('n_classes', 5)
('nb_inner_train_steps', 4)
('nccl', 2708)
('num_epochs', -1)
('num_its', 3)
('num_workers', 4)
('outer_debug', False)
('outer_lr', 0.001)
('path_to_checkpoint', PosixPath('/home/miranda9/data_folder_fall2020_spring2021/logs/nov_all_mini_imagenet_expts/logs_Nov05_15-44-03_jobid_668'))
('pin_memory', False)
('pw_path', '/home/miranda9/pw_app.config.json')
('rank', -1)
('run_name', 'debug (Adafactor) : args.jobid=10340')
('save_ckpt', True)
('seed', None)
('serial', False)
('show_layerwise_sims', False)
('sim_compute_parallel', False)
('slurm_array_task_id', -1)
('slurm_jobid', 10340)
('split', 'train')
('tb', True)
('track_higher_grads', True)
('train_iters', 500000)
('trainin_with_epochs', False)
('training_mode', 'iterations')
('wandb_entity', 'brando')
('wandb_group', 'experiment_debug')
('wandb_project', 'sl_vs_ml_iclr_workshop_paper')
------- Main Resume from Checkpoint --------
args.base_model=ResNet(
(layer1): Sequential(
(0): BasicBlock(
(conv1): Conv2d(3, 64, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False)
(bn1): BatchNorm2d(64, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
(relu): LeakyReLU(negative_slope=0.1)
(conv2): Conv2d(64, 64, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False)
(bn2): BatchNorm2d(64, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
(conv3): Conv2d(64, 64, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False)
(bn3): BatchNorm2d(64, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
(maxpool): MaxPool2d(kernel_size=2, stride=2, padding=0, dilation=1, ceil_mode=False)
(downsample): Sequential(
(0): Conv2d(3, 64, kernel_size=(1, 1), stride=(1, 1), bias=False)
(1): BatchNorm2d(64, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
)
(DropBlock): DropBlock()
)
)
(layer2): Sequential(
(0): BasicBlock(
(conv1): Conv2d(64, 160, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False)
(bn1): BatchNorm2d(160, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
(relu): LeakyReLU(negative_slope=0.1)
(conv2): Conv2d(160, 160, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False)
(bn2): BatchNorm2d(160, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
(conv3): Conv2d(160, 160, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False)
(bn3): BatchNorm2d(160, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
(maxpool): MaxPool2d(kernel_size=2, stride=2, padding=0, dilation=1, ceil_mode=False)
(downsample): Sequential(
(0): Conv2d(64, 160, kernel_size=(1, 1), stride=(1, 1), bias=False)
(1): BatchNorm2d(160, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
)
(DropBlock): DropBlock()
)
)
(layer3): Sequential(
(0): BasicBlock(
(conv1): Conv2d(160, 320, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False)
(bn1): BatchNorm2d(320, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
(relu): LeakyReLU(negative_slope=0.1)
(conv2): Conv2d(320, 320, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False)
(bn2): BatchNorm2d(320, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
(conv3): Conv2d(320, 320, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False)
(bn3): BatchNorm2d(320, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
(maxpool): MaxPool2d(kernel_size=2, stride=2, padding=0, dilation=1, ceil_mode=False)
(downsample): Sequential(
(0): Conv2d(160, 320, kernel_size=(1, 1), stride=(1, 1), bias=False)
(1): BatchNorm2d(320, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
)
(DropBlock): DropBlock()
)
)
(layer4): Sequential(
(0): BasicBlock(
(conv1): Conv2d(320, 640, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False)
(bn1): BatchNorm2d(640, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
(relu): LeakyReLU(negative_slope=0.1)
(conv2): Conv2d(640, 640, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False)
(bn2): BatchNorm2d(640, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
(conv3): Conv2d(640, 640, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False)
(bn3): BatchNorm2d(640, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
(maxpool): MaxPool2d(kernel_size=2, stride=2, padding=0, dilation=1, ceil_mode=False)
(downsample): Sequential(
(0): Conv2d(320, 640, kernel_size=(1, 1), stride=(1, 1), bias=False)
(1): BatchNorm2d(640, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
)
(DropBlock): DropBlock()
)
)
(avgpool): AdaptiveAvgPool2d(output_size=1)
(dropout): Dropout(p=0.0, inplace=False)
(classifier): Linear(in_features=640, out_features=5, bias=True)
)
args.outer_opt=Adafactor (
Parameter Group 0
beta1: None
clip_threshold: 1.0
decay_rate: -0.8
eps: (1e-30, 0.001)
lr: None
relative_step: True
scale_parameter: True
warmup_init: True
weight_decay: 0.0
)
args.meta_learner=MAMLMetaLearner(
(base_model): ResNet(
(layer1): Sequential(
(0): BasicBlock(
(conv1): Conv2d(3, 64, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False)
(bn1): BatchNorm2d(64, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
(relu): LeakyReLU(negative_slope=0.1)
(conv2): Conv2d(64, 64, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False)
(bn2): BatchNorm2d(64, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
(conv3): Conv2d(64, 64, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False)
(bn3): BatchNorm2d(64, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
(maxpool): MaxPool2d(kernel_size=2, stride=2, padding=0, dilation=1, ceil_mode=False)
(downsample): Sequential(
(0): Conv2d(3, 64, kernel_size=(1, 1), stride=(1, 1), bias=False)
(1): BatchNorm2d(64, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
)
(DropBlock): DropBlock()
)
)
(layer2): Sequential(
(0): BasicBlock(
(conv1): Conv2d(64, 160, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False)
(bn1): BatchNorm2d(160, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
(relu): LeakyReLU(negative_slope=0.1)
(conv2): Conv2d(160, 160, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False)
(bn2): BatchNorm2d(160, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
(conv3): Conv2d(160, 160, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False)
(bn3): BatchNorm2d(160, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
(maxpool): MaxPool2d(kernel_size=2, stride=2, padding=0, dilation=1, ceil_mode=False)
(downsample): Sequential(
(0): Conv2d(64, 160, kernel_size=(1, 1), stride=(1, 1), bias=False)
(1): BatchNorm2d(160, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
)
(DropBlock): DropBlock()
)
)
(layer3): Sequential(
(0): BasicBlock(
(conv1): Conv2d(160, 320, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False)
(bn1): BatchNorm2d(320, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
(relu): LeakyReLU(negative_slope=0.1)
(conv2): Conv2d(320, 320, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False)
(bn2): BatchNorm2d(320, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
(conv3): Conv2d(320, 320, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False)
(bn3): BatchNorm2d(320, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
(maxpool): MaxPool2d(kernel_size=2, stride=2, padding=0, dilation=1, ceil_mode=False)
(downsample): Sequential(
(0): Conv2d(160, 320, kernel_size=(1, 1), stride=(1, 1), bias=False)
(1): BatchNorm2d(320, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
)
(DropBlock): DropBlock()
)
)
(layer4): Sequential(
(0): BasicBlock(
(conv1): Conv2d(320, 640, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False)
(bn1): BatchNorm2d(640, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
(relu): LeakyReLU(negative_slope=0.1)
(conv2): Conv2d(640, 640, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False)
(bn2): BatchNorm2d(640, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
(conv3): Conv2d(640, 640, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False)
(bn3): BatchNorm2d(640, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
(maxpool): MaxPool2d(kernel_size=2, stride=2, padding=0, dilation=1, ceil_mode=False)
(downsample): Sequential(
(0): Conv2d(320, 640, kernel_size=(1, 1), stride=(1, 1), bias=False)
(1): BatchNorm2d(640, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
)
(DropBlock): DropBlock()
)
)
(avgpool): AdaptiveAvgPool2d(output_size=1)
(dropout): Dropout(p=0.0, inplace=False)
(classifier): Linear(in_features=640, out_features=5, bias=True)
)
)
args.scheduler=None
--------------------- META-TRAIN ------------------------
Starting training!
Traceback (most recent call last):
File "/home/miranda9/automl-meta-learning/automl-proj-src/experiments/meta_learning/main_metalearning.py", line 441, in <module>
main_resume_from_checkpoint(args)
File "/home/miranda9/automl-meta-learning/automl-proj-src/experiments/meta_learning/main_metalearning.py", line 403, in main_resume_from_checkpoint
run_training(args)
File "/home/miranda9/automl-meta-learning/automl-proj-src/experiments/meta_learning/main_metalearning.py", line 413, in run_training
meta_train_fixed_iterations(args)
File "/home/miranda9/automl-meta-learning/automl-proj-src/meta_learning/training/meta_training.py", line 233, in meta_train_fixed_iterations
args.outer_opt.step()
File "/home/miranda9/miniconda3/envs/metalearning_gpu/lib/python3.9/site-packages/torch/optim/optimizer.py", line 88, in wrapper
return func(*args, **kwargs)
File "/home/miranda9/miniconda3/envs/metalearning_gpu/lib/python3.9/site-packages/transformers/optimization.py", line 577, in step
update = self._approx_sq_grad(exp_avg_sq_row, exp_avg_sq_col)
File "/home/miranda9/miniconda3/envs/metalearning_gpu/lib/python3.9/site-packages/transformers/optimization.py", line 508, in _approx_sq_grad
return torch.mm(r_factor.unsqueeze(-1), c_factor.unsqueeze(0))
RuntimeError: mat1 must be a matrix, got 4-D tensor
```
----
related:
- https://stackoverflow.com/questions/70171427/adafactor-from-transformers-hugging-face-only-works-with-transfromers-does-it
- https://github.com/facebookresearch/higher/issues/124
- https://www.reddit.com/r/pytorch/comments/r5p2pk/adafactor_from_transformers_hugging_face_only/ | 11-30-2021 14:53:54 | 11-30-2021 14:53:54 | @LysandreJik can you help me ping the right person for this issues?
The summary is:
- using Adafactor with a Resnet results in a bug (also MAML is involved)<|||||>Hi @brando90, `transformers` is meant as a library of model architectures more than a library of optimizers, and we're actively moving away from maintaining optimizers. We'd rather you rely on a library that actively maintains them, as the support should be both broader (not tested only on `transformers`, like it is here) and more complete (not limited to the two optimizers that we support here).
Some that come to mind are [pytorch-optimizer](https://github.com/jettify/pytorch-optimizer) or [Fairseq](https://github.com/pytorch/fairseq/blob/main/fairseq/optim/adafactor.py).<|||||>@LysandreJik thank you! I will try that! That comment would be useful in the docs :)
I will close the issue with closing remarks of the solution I ended up using. Appreciate the response.<|||||>@LysandreJik I was reading the adafactor scheduler and it seems that it multiplies the lr by 0 which seems odd to me:
https://github.com/huggingface/transformers/blob/master/src/transformers/optimization.py#L604, https://huggingface.co/docs/transformers/master/main_classes/optimizer_schedules
https://pytorch.org/docs/stable/generated/torch.optim.lr_scheduler.LambdaLR.html
```
class AdafactorSchedule(LambdaLR):
"""
Since :class:`~transformers.optimization.Adafactor` performs its own scheduling, if the training loop relies on a
scheduler (e.g., for logging), this class creates a proxy object that retrieves the current lr values from the
optimizer.
It returns ``initial_lr`` during startup and the actual ``lr`` during stepping.
"""
def __init__(self, optimizer, initial_lr=0.0):
def lr_lambda(_):
return initial_lr
for group in optimizer.param_groups:
group["initial_lr"] = initial_lr
super().__init__(optimizer, lr_lambda)
for group in optimizer.param_groups:
del group["initial_lr"]
def get_lr(self):
opt = self.optimizer
lrs = [
opt._get_lr(group, opt.state[group["params"][0]])
for group in opt.param_groups
if group["params"][0].grad is not None
]
if len(lrs) == 0:
lrs = self.base_lrs # if called before stepping
return lrs
```
can you help me figure out what the scheduler for adafactor is doing?
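For reference, my reading of the snippet above is that `AdafactorSchedule` is only a logging proxy: with `relative_step=True`, Adafactor computes its own lr internally, `get_lr` just reads that value back out of the optimizer state, and `initial_lr=0.0` is merely what it reports before the first step (nothing is multiplied by 0). The intended usage seems to be roughly:
```python
import torch
from transformers.optimization import Adafactor, AdafactorSchedule

model = torch.nn.Linear(4, 2)  # stand-in for the real model
optimizer = Adafactor(model.parameters(), scale_parameter=True, relative_step=True, warmup_init=True, lr=None)
lr_scheduler = AdafactorSchedule(optimizer)  # only echoes the lr Adafactor picks internally
```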
<|||||>seems like the fair one ran without errors so far, other one had a bug.<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored. |
transformers | 14,573 | closed | Adding call for contribution for the S4 model | Looking for community contributors to add the S4 model! ([repo](https://github.com/HazyResearch/state-spaces), [paper](https://arxiv.org/abs/2111.00396)) | 11-30-2021 13:56:03 | 11-30-2021 13:56:03 | Hi @Rocketknight1,
I would like to work on this S4 model
<|||||>Hi @kamalkraj, sure thing! I thought this was quite an unusual and challenging model, but you've done some really good work in other PRs, so I think you should be able for it. Note that since the original repository is PyTorch-only, we'll probably want a PyTorch version of the model only to start, and we can leave a TF port for a future PR (which you don't have to do, of course!). Is that okay with you?<|||||>Yes, @Rocketknight.
<|||||>Sure! In that case I'll leave this PR un-merged for now. You can see most of the details in the .md file. The best place to get started is probably to get their [standalone example](https://github.com/HazyResearch/state-spaces/blob/main/example.py) working with `pykeops`. If you encounter any difficulties, or you're not sure how to proceed, you can let me know here or on Slack. We have direct communication with the original paper authors too, so we should be able to ask them if we encounter any issues with the underlying theory. Thank you for this!<|||||>Okay<|||||>@Rocketknight1
Got the standalone version working with `pykeops`
https://github.com/kamalkraj/S4-Standalone<|||||>@kamalkraj Nice!<|||||>Hi @Rocketknight1,
I just skimmed through the paper and code. If I understood correctly, I only need to port the autoregressive Language Model to HF, right?
<|||||>Hi @kamalkraj - for now, that should be fine, yes. I think it's best to start with porting a single model correctly and then we can decide what to do afterwards!<|||||>Okay
<|||||>I have started working on the model.
make fixup is throwing an error ``` S4Config in TYPE_HINT but not in _import_structure.```
```python
from typing import TYPE_CHECKING
# rely on isort to merge the imports
from ...file_utils import _LazyModule, is_torch_available
_import_structure = {"configuration_s4": ["S4Config"]}
if is_torch_available():
_import_structure["modeling_s4"] = ["S4ForCausalLM", "S4Layer", "S4Model", "S4PreTrainedModel"]
if TYPE_CHECKING:
from .configuration_s4 import S4Config
if is_torch_available():
from .modeling_s4 import S4ForCausalLM, S4Layer, S4Model, S4PreTrainedModel
else:
import sys
sys.modules[__name__] = _LazyModule(__name__, globals()["__file__"], _import_structure)
```

`parse_init` failed to parse the file properly. Is there anything wrong with this `__init__.py` file?
@Rocketknight1 @sgugger <|||||>The script is not used to having the declaration of `_import_structure` fit on one line. Will fix it one day, but in the meantime you should replace it by
```
# fmt: off
_import_structure = {
"configuration_s4": ["S4Config"]
}
# fmt: on
```
The flags fmt will tell black to be a good boy and leave the style as is, and the script will parse the init properly.<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored.<|||||>Closing this call for contributions PR since @kamalkraj is already working on the actual model PR. |
transformers | 14,572 | closed | Canβt load config for β/data/sentence-transformers_all-mpnet-base-v2β | Hi,
I am trying to use the Accelerated Inference API and am facing this issue while using multiple sentence-transformer models. This happens when I use any input other than the auto-filled one.

@patil-suraj @patrickvonplaten @LysandreJik
| 11-30-2021 11:48:31 | 11-30-2021 11:48:31 | cc @nreimers @osanseviero <|||||>This should be fixed now, it was a data corruption issue cc @Narsil <|||||>@osanseviero @Narsil Still facing this issue.
<img width="1685" alt="Screenshot 2021-12-01 at 9 47 09 AM" src="https://user-images.githubusercontent.com/24825646/144170950-f58f8496-e551-4534-9fa1-82c5b8f1107b.png">
<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored.<|||||>@RohanBS sorry for the delay. The data corruption issue was fixed but the results are cached so that would explain why you were still seeing the error. I just tried this query and it seems to be working fine. |
transformers | 14,571 | closed | Delete versions.yml from transformers doc | # What does this PR do?
With https://github.com/huggingface/doc-builder/pull/31, the versions.yml file & logic are handled on the doc-builder side.
| 11-30-2021 09:14:48 | 11-30-2021 09:14:48 | |
transformers | 14,570 | closed | Add mLUKE | # What does this PR do?
This PR adds mLUKE, [mLUKE: The Power of Entity Representations in Multilingual Pretrained Language Models](https://arxiv.org/abs/2110.08151), by Studio Ousia.
The model is a multilingual extension of [LUKE](https://huggingface.co/transformers/model_doc/luke.html), which is trained on the Wikipedia articles in 24 languages.
We have uploaded the model weights on [the hugging face hub](https://huggingface.co/studio-ousia/mluke-base).
The only difference from LUKE is the tokenizer, so I've just added `MLukeTokenizer`, which is a mix of `LukeTokenizer` and `XLMRobertaTokenizer`.
I've also added `LukeForMaskedLM` to perform the cloze-prompt task experimented with in [the original paper](https://arxiv.org/abs/2110.08151).
The example can be found in [this notebook](https://colab.research.google.com/drive/1FAJimyuzUefZyWS1ckSEitglLfFRofi4?hl=ja#scrollTo=jIXV3onW2ITm).
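For readers who want a quick look before opening the notebook, here is a minimal usage sketch (the checkpoint name comes from the link above; the entity-aware inputs used in the cloze-prompt experiments are deliberately left out, so treat this as an approximation rather than the final API):
```python
import torch
from transformers import MLukeTokenizer, LukeForMaskedLM

tokenizer = MLukeTokenizer.from_pretrained("studio-ousia/mluke-base")
model = LukeForMaskedLM.from_pretrained("studio-ousia/mluke-base")

inputs = tokenizer("Tokyo is the capital of <mask>.", return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits

# pick the most likely token at the masked position
mask_pos = (inputs["input_ids"][0] == tokenizer.mask_token_id).nonzero(as_tuple=True)[0]
print(tokenizer.decode(logits[0, mask_pos].argmax(dim=-1)))
```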
## Who can review?
@NielsRogge, could you help us review the code as you have worked on LUKE? | 11-30-2021 07:30:16 | 11-30-2021 07:30:16 | Thank you for reviewing, @NielsRogge!
I've incorporated the review feedback into the code and also added mLUKE to the Multilingual-models section in the doc.<|||||>Thanks for taking into account the comments. Can you rebase with master? The index.rst file is now replaced by an [index.mdx](https://github.com/huggingface/transformers/blob/master/docs/source/index.mdx) file because of the new front-end website.
Also, in the main README, make sure the links are correct (they should not include the .html and have a docs in the url as we will move to https://huggingface.co/docs/transformers). You can then run make fix-copies to fix the other READMEs.<|||||>I think there was something wrong in your rebase, and the diff now shows much more than what you are actually changing. Could you close your PR and reopen a fresh one from your branch?<|||||>Sorry, seems like I messed it up.
I will reopen a new pull request. |
transformers | 14,569 | closed | [Deepspeed] add support for bf16 mode | This PR:
- adds support for bfloat16 for ZeRO-1, ZeRO-2 and ZeRO-3 to HF/DS integration.
- most functional tests are now run for bf16 as well - so this PR almost doubles the number of tests. model zoo tests are left with fp16 for now, as both should work the same.
- docs are updated to document the new feature
Requirements:
1. [x] merged https://github.com/huggingface/transformers/pull/13207
2. [x] ZeRO-3 support merged https://github.com/microsoft/DeepSpeed/pull/1453
3. [x] Need new deepspeed release after the above is merged
4. [x] update version dependency table
---------------------------
for users who want to try this early:
1. add `--bf16` to your previous fp16 or fp32 deepspeed command line
done ;)
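For reference, a minimal sketch of the matching DeepSpeed config section, written as a Python dict with the same keys that would go into `ds_config.json` (field names follow the usual DeepSpeed/HF integration conventions; check the documentation mentioned above for the authoritative schema):
```python
# ZeRO-2 with bf16 enabled; "auto" lets the HF integration fill in the values
ds_config = {
    "bf16": {"enabled": True},
    "zero_optimization": {"stage": 2},
    "train_micro_batch_size_per_gpu": "auto",
    "gradient_accumulation_steps": "auto",
}
```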
@sgugger | 11-30-2021 03:59:02 | 11-30-2021 03:59:02 | Thanks a lot for catching those few development process leftovers, Sylvain! Much appreciating. All cleaned up now. |
transformers | 14,568 | closed | Fixes Loss for TransfoXL when using Trainer API | # What does this PR do?
This PR addresses the problem of using TransfoXL with the Trainer API by introducing straightforward changes, as follows:
- Changes the `losses` parameter from TransfoXL output to `loss`, allowing its usage with the Trainer API;
- Reduces the loss by mean calculation and by ignoring pad tokens;
- Adds a check that the input labels are not composed solely of pad tokens, which would otherwise break the loss calculation (tested on a full wikitext-103 run).
I have been working and training TransfoXL from scratch for the past months, and so far this is the most stable code that I could get with huggingface/transformers.
Let me know if there is anything else that should be changed!
PS: fast testing script: `python examples/pytorch/language-modeling/run_clm.py --model_name_or_path transfo-xl-wt103 --dataset_name wikitext --dataset_config_name wikitext-103-raw-v1 --output_dir out`
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [X] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [X] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/master/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/master/docs#writing-source-documentation).
- [X] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
@patrickvonplaten
🚨🚨 **Breaking change TransfoXL** 🚨🚨
This PR introduces a breaking change.
Previously, the output attribute ``losses`` could be found at index 0. Now it is at index 2 when `labels` are passed.
This means if you code previously used:
```python
losses = model(input_ids, labels=labels)[0]
```
it should now change to:
```python
losses = model(input_ids, labels=labels)[2]
``` | 11-29-2021 19:10:11 | 11-29-2021 19:10:11 | Thanks a lot for the very in-detail explanations @gugarosa ! I like your proposals and IMO we could do the following:
1) Output both `loss` and `losses` whereas `losses` is still in "vector"-form and we introduce a new `loss` output that has your proposed form. So the following use cases should work:
```python
from transformers import TransfoXLLMHead
model = TransfoXLLMHead.from_pretrained(...)
# 1) keep backward compatibility -> this should be an output in vector form
losses = model(input_ids).losses
# 2) allow model to be used in trainer -> this should be in scalar form
loss = model(input_ids).loss
```
2) Then regarding the tuple order we will probably have to do a breaking change with the new tuple order looking as follows:
(loss, ...., lossses) <- this means that
```python
losses = model(input_ids)[0]
# is now changed to
loss = model(input_ids)[0]
```
which is ok IMO given that most people use `outputs.losses` ...
What do you think @sgugger @LysandreJik ?
<|||||>Awesome! @gugarosa - feel free to go ahead with the discussed approach if you want :-)<|||||>Sorry for taking too long, I had a family issue and went off for the week.
Nevertheless, I have updated the PR with the requested changes (thanks so much!), so please feel free to criticize them or request for newer changes. I have also tested with the proposed snippet and everything seems to be working!
```
import torch
from transformers import TransfoXLLMHeadModel
# Constants
BATCH_SIZE = 1
SEQ_LEN = 32
# Loads model from Hub and defines random inputs
model = TransfoXLLMHeadModel.from_pretrained('transfo-xl-wt103')
input_ids = torch.randint(0, 10000, (BATCH_SIZE, SEQ_LEN))
# 1) keep backward compatibility -> this should be an output in vector form
losses = model(input_ids, labels=input_ids).losses
# 2) allow model to be used in trainer -> this should be in scalar form
loss = model(input_ids, labels=input_ids).loss
# Asserts the outputs' shapes
assert losses.shape == (BATCH_SIZE, SEQ_LEN - 1)
assert loss.shape == ()
```
Best regards,
Gustavo.<|||||>Thanks for the feedback! I have squashed the commit with the styling changes and all tests seems to be passing now.<|||||>Cool looks good to me! Left a π¨π¨ breaking π¨π¨ message at the top of the PR to make users aware <|||||>@LysandreJik - feel free to merge if you're ok with the small breaking change<|||||>I am perfectly fine with adding a `loss` output so that it's compatible with the `Trainer`, less-so to push `losses` to be moved to another place in the returned output.
I checked the usage for TransfoXL and we had 17k uses in the past month with the latest `master`, so this is bound to affect some users.
Is the path here the only one we can take? Can't we make the switch from `losses` to `loss` as a first return value opt-in, so that it works only if users explicitly mention that they're okay with the new behavior? This way we could switch for the v5, while preventing the breaking change.
Something like a `trainer_compatible` flag for the transfo-XL configuration, which would put `loss` as the first return instead of `losses`. WDYT?
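A hypothetical sketch of how that opt-in could look from the user side (names are illustrative, not a committed API):
```python
import torch
from transformers import TransfoXLConfig, TransfoXLLMHeadModel

# `trainer_compatible` is the flag being discussed; treat it as illustrative only
config = TransfoXLConfig.from_pretrained("transfo-xl-wt103", trainer_compatible=True)
model = TransfoXLLMHeadModel.from_pretrained("transfo-xl-wt103", config=config)

input_ids = torch.randint(0, 1000, (1, 32))
outputs = model(input_ids, labels=input_ids)
# with the flag on, the scalar `loss` would sit at index 0;
# with it off, the legacy vector `losses` would keep its old position
```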
<|||||>@LysandreJik - that sounds good to me. Think it does make sense to aim for 100% backward compatibility here. @sgugger what do you think?<|||||>That works for me. Also wants to highlight that this would only be necessary if the user specifies `return_dict=False`, as when the outputs is one of our `ModelOutput`, the `Trainer` indexes with the key "loss" and not the first index.<|||||>@gugarosa, would you like to take a stab at this? Otherwise let me know and I'm happy to help out :)<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored.<|||||>I'll try and make time for this this week if nobody beats me to it.<|||||>Couldn't push directly to your fork so I opened a PR here: https://github.com/gugarosa/transformers/pull/1<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored.<|||||>@gugarosa please let me know if you'd like me to take over the PR. I believe all that remains is to merge the PR I opened here: https://github.com/gugarosa/transformers/pull/1 and we can merge this current PR then.
Thanks!<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored. |
transformers | 14,567 | closed | Fix index links | # What does this PR do?
This PR fixes the links in the README (and their localized versions) and the tooling around them to have the index be properly generated.
It also fixes the script that checks if objects are all properly documented to handle the mdx files. | 11-29-2021 18:25:22 | 11-29-2021 18:25:22 | |
transformers | 14,566 | closed | Fix backend regex | # What does this PR do?
This PR fixes the regex for potential backends (which can include an _ as from the QDQBert model) in other scripts. | 11-29-2021 17:04:30 | 11-29-2021 17:04:30 | |
transformers | 14,565 | closed | BertForSequenceClassification : 'module 'Torch' has no attribute 'BoolTensor' | Hi there,
error on line:
```python
from transformers import BertForSequenceClassification
```
Resulting in the following error:
```
---------------------------------------------------------------------------
AttributeError Traceback (most recent call last)
C:\ProgramData\Anaconda3\lib\site-packages\transformers\file_utils.py in _get_module(self, module_name)
2149 try:
-> 2150 return importlib.import_module("." + module_name, self.__name__)
2151 except Exception as e:
C:\ProgramData\Anaconda3\lib\importlib\__init__.py in import_module(name, package)
126 level += 1
--> 127 return _bootstrap._gcd_import(name[level:], package, level)
128
C:\ProgramData\Anaconda3\lib\importlib\_bootstrap.py in _gcd_import(name, package, level)
C:\ProgramData\Anaconda3\lib\importlib\_bootstrap.py in _find_and_load(name, import_)
C:\ProgramData\Anaconda3\lib\importlib\_bootstrap.py in _find_and_load_unlocked(name, import_)
C:\ProgramData\Anaconda3\lib\importlib\_bootstrap.py in _load_unlocked(spec)
C:\ProgramData\Anaconda3\lib\importlib\_bootstrap_external.py in exec_module(self, module)
C:\ProgramData\Anaconda3\lib\importlib\_bootstrap.py in _call_with_frames_removed(f, *args, **kwds)
C:\ProgramData\Anaconda3\lib\site-packages\transformers\models\bert\modeling_bert.py in <module>
49 )
---> 50 from ...modeling_utils import (
51 PreTrainedModel,
C:\ProgramData\Anaconda3\lib\site-packages\transformers\modeling_utils.py in <module>
47 )
---> 48 from .generation_utils import GenerationMixin
49 from .utils import logging
C:\ProgramData\Anaconda3\lib\site-packages\transformers\generation_utils.py in <module>
26 from .generation_beam_search import BeamScorer, BeamSearchScorer
---> 27 from .generation_logits_process import (
28 EncoderNoRepeatNGramLogitsProcessor,
C:\ProgramData\Anaconda3\lib\site-packages\transformers\generation_logits_process.py in <module>
346
--> 347 class NoBadWordsLogitsProcessor(LogitsProcessor):
348 """
C:\ProgramData\Anaconda3\lib\site-packages\transformers\generation_logits_process.py in NoBadWordsLogitsProcessor()
397
--> 398 def _calc_static_bad_word_mask(self, scores: torch.FloatTensor) -> torch.BoolTensor:
399 static_bad_words_mask = torch.zeros(scores.shape[1])
AttributeError: module 'torch' has no attribute 'BoolTensor'
The above exception was the direct cause of the following exception:
RuntimeError Traceback (most recent call last)
<ipython-input-5-b88f39dd1009> in <module>
----> 1 model = transformers.BertForSequenceClassification
C:\ProgramData\Anaconda3\lib\site-packages\transformers\file_utils.py in __getattr__(self, name)
2139 elif name in self._class_to_module.keys():
2140 module = self._get_module(self._class_to_module[name])
-> 2141 value = getattr(module, name)
2142 else:
2143 raise AttributeError(f"module {self.__name__} has no attribute {name}")
C:\ProgramData\Anaconda3\lib\site-packages\transformers\file_utils.py in __getattr__(self, name)
2138 value = self._get_module(name)
2139 elif name in self._class_to_module.keys():
-> 2140 module = self._get_module(self._class_to_module[name])
2141 value = getattr(module, name)
2142 else:
C:\ProgramData\Anaconda3\lib\site-packages\transformers\file_utils.py in _get_module(self, module_name)
2152 raise RuntimeError(
2153 f"Failed to import {self.__name__}.{module_name} because of the following error (look up to see its traceback):\n{e}"
-> 2154 ) from e
2155
2156 def __reduce__(self):
RuntimeError: Failed to import transformers.models.bert.modeling_bert because of the following error (look up to see its traceback):
module 'torch' has no attribute 'BoolTensor'
```
Unsure as to how to proceed.
PyTorch = 1.0.1
Unsure of the version of transformers as it's not appearing in my anaconda packages. But the rest of transformers can import and work fine so i know it's there.
Any help greatly appreciated, thanks!
Jakob | 11-29-2021 14:55:59 | 11-29-2021 14:55:59 | Hi @jakobbrown , the same is working for me with this version of packages.
transformers - '4.6.0'
torch - '1.10.0+cu102'<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored. |
transformers | 14,564 | closed | [Generate] Fix generate with inputs_embeds on GPU | # What does this PR do?
When `inputs_embeds` are passed to `generate(...)` without an attention_mask the attention_mask was not created correctly on GPU leading to a failing test.
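For context, a minimal sketch of the kind of fix involved (illustrative only; the real change lives in `generate`'s input preparation): the default mask has to be created on the same device as the embeddings.
```python
import torch

def default_attention_mask(inputs_embeds):
    # build the all-ones mask on the embeddings' device so CPU/GPU tensors don't get mixed
    batch_size, seq_len = inputs_embeds.shape[:2]
    return torch.ones((batch_size, seq_len), dtype=torch.long, device=inputs_embeds.device)
```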
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/master/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/master/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
| 11-29-2021 14:34:19 | 11-29-2021 14:34:19 | |
transformers | 14,563 | closed | [Bug] Issue in AutoModelForSequenceClassification while initialization | ## Issue:
If we import Sequence Classification model like this,
```python
from transformers import AutoModelForSequenceClassification
num_labels=28
model = AutoModelForSequenceClassification.from_pretrained("distilbert-base-uncased",
num_labels=num_labels,
problem_type='multi_label_classification',
label2id=label2id,
id2label=id2label)
```
it will always generate a classification head like this
```
(classifier): Linear(in_features=768, out_features=1, bias=True)
```
Ideally `out_features` should be like same as `num_labels` specified. Like this
```
(classifier): Linear(in_features=768, out_features=28, bias=True)
```
The same thing also happens for `single_label_classification`.
If we remove below two lines while initialization it works fine.
```python
label2id=label2id,
id2label=id2label
```
Note: I have initialized `label2id` and `id2label` with their respective values properly; this can be seen in the Colab notebook below.
## Environment info
- `transformers` version: 4.12
- Platform: colab
### Who can help
@LysandreJik
## To reproduce
[colab](https://github.com/bhadreshpsavani/UnderstandingNLP/blob/master/ErrorWhileImportingMultilabelClassifier.ipynb)
| 11-29-2021 11:54:29 | 11-29-2021 11:54:29 | A couple of things are strange in your code cell # 3:
```python
id2label = { "0": "admiration", "1": "amusement", "2": "anger", "3": "annoyance", "4": "approval", "5": "caring", "6": "confusion", "7": "curiosity", "8": "desire",
"9": "disappointment", "10": "disapproval", "11": "disgust", "12": "embarrassment", "13": "excitement", "14": "fear", "15": "gratitude", "16": "grief",
"17": "joy", "18": "love", "19": "nervousness", "20": "optimism", "21": "pride", "22": "realization", "23": "relief", "24": "remorse", "25": "sadness",
"26": "surprise", "27": "neutral"
},
label2id ={ "admiration": 0, "amusement": 1, "anger": 2, "annoyance": 3, "approval": 4, "caring": 5, "confusion": 6, "curiosity": 7, "desire": 8, "disappointment": 9,
"disapproval": 10, "disgust": 11, "embarrassment": 12, "excitement": 13, "fear": 14, "gratitude": 15, "grief": 16, "joy": 17, "love": 18, "nervousness": 19,
"neutral": 27, "optimism": 20, "pride": 21, "realization": 22, "relief": 23, "remorse": 24, "sadness": 25, "surprise": 26
}
num_labels=28
```
The datatype is not consistent. `id2label` has strings and `label2id` has integers (both should be integers). More importantly there is a strange `,` making `id2label` actually a tuple. This is where the bug comes from and by that I mean bug in your code. The library works fine. Remove the comma and you get `out_features=28`.<|||||>Hi @shabie,
Thanks for your help,
the comma doesn't make it a tuple; it is still a dictionary that can be assigned to a JSON variable.
Can you please share code that works directly? I tried the above suggestion and it's not working.<|||||>So here's the code you can use:
```python
from transformers import AutoModelForSequenceClassification
labels = [
'admiration',
'amusement',
'anger',
'annoyance',
'approval',
'caring',
'confusion',
'curiosity',
'desire',
'disappointment',
'disapproval',
'disgust',
'embarrassment',
'excitement',
'fear',
'gratitude',
'grief',
'joy',
'love',
'nervousness',
'neutral',
'optimism',
'pride',
'realization',
'relief',
'remorse',
'sadness',
'surprise',
]
label2id = {k: i for i, k in enumerate(labels)}
id2label = {i: k for i, k in enumerate(labels)}
num_labels = len(labels)
model = AutoModelForSequenceClassification.from_pretrained("distilbert-base-uncased",
num_labels=num_labels,
problem_type='multi_label_classification',
label2id=label2id,
id2label=id2label)
print(model.classifier)
# Linear(in_features=768, out_features=28, bias=True)
```<|||||>Thanks @shabie for your help,
This worked! |
transformers | 14,562 | closed | Fix doc interlinks | # What does this PR do?
Fixes inter link references in the doc
Fixes # (issue)
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/master/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/master/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
| 11-29-2021 11:32:48 | 11-29-2021 11:32:48 | |
transformers | 14,561 | closed | [Bug] `tokenizer.model_max_length` is different when loading model from shortcut or local path | ## To reproduce
When loading model from local path, the `tokenizer.model_max_length` is `VERY_LARGE_INTEGER`:
```py
from transformers import GPT2Tokenizer
tokenizer = GPT2Tokenizer.from_pretrained("gpt2")
print(tokenizer.model_max_length)
# 1024
tokenizer = GPT2Tokenizer.from_pretrained("path/to/local/gpt2")
print(tokenizer.model_max_length)
# 1000000000000000019884624838656
```
## Related code
https://github.com/huggingface/transformers/blob/25156eb296ae88c7b810235a368c953b7a4b9af9/src/transformers/tokenization_utils_base.py#L1858-L1864
https://github.com/huggingface/transformers/blob/ebbe8cc3fe7a2553e924353ab454bd026fd23135/src/transformers/models/gpt2/tokenization_gpt2.py#L153
https://github.com/huggingface/transformers/blob/ebbe8cc3fe7a2553e924353ab454bd026fd23135/src/transformers/models/gpt2/tokenization_gpt2.py#L56-L62
## Expected behavior
Assign correct `model_max_length` when loading model from local path:
```py
from transformers import GPT2Tokenizer
tokenizer = GPT2Tokenizer.from_pretrained("path/to/local/gpt2")
print(tokenizer.model_max_length)
# 1024
```
| 11-29-2021 09:28:04 | 11-29-2021 09:28:04 | I've also encountered the same problem specially obvious working on clusters having no internet access. The problem stems from the fact that other than the string passed to the `from_pretrained`, there really is no good way right now of knowing what model is in the folder. I thought `model_type` might help but that's not the case.
Which means as a quick fix, we can resort to a solution that works in "many" cases.
So the key bit of code that is skipped if path to the model folder is different is this:
https://github.com/huggingface/transformers/blob/25156eb296ae88c7b810235a368c953b7a4b9af9/src/transformers/tokenization_utils_base.py#L1858-L1864
If modified to rely on the heuristic that most configs, if available, do have either `max_position_embeddings` or `n_positions`, it solves the problem for those models (and doesn't hurt those who dont):
```python
try:
    config
except NameError:
    config = None
# Set max length if needed
if pretrained_model_name_or_path in cls.max_model_input_sizes:
    model_max_length = cls.max_model_input_sizes[pretrained_model_name_or_path]
    if model_max_length is not None and isinstance(model_max_length, (int, float)):
        init_kwargs["model_max_length"] = min(init_kwargs.get("model_max_length", int(1e30)), model_max_length)
elif hasattr(config, "max_position_embeddings") or hasattr(config, "n_positions"):
    model_max_length = config.n_positions if hasattr(config, "n_positions") else config.max_position_embeddings
    if model_max_length is not None and isinstance(model_max_length, (int, float)):
        init_kwargs["model_max_length"] = min(init_kwargs.get("model_max_length", int(1e30)), model_max_length)
```
Another way is to raise a warning if its not found:
```python
# Set max length if needed
if pretrained_model_name_or_path in cls.max_model_input_sizes:
    model_max_length = cls.max_model_input_sizes[pretrained_model_name_or_path]
    if model_max_length is not None and isinstance(model_max_length, (int, float)):
        init_kwargs["model_max_length"] = min(init_kwargs.get("model_max_length", int(1e30)), model_max_length)
else:
    # raise warning about passing the `model_max_length`
```
This option has the downside that people will need to look into the `config.json` itself or try to find it somehow, which may not be immediately obvious.<|||||>@SaulLu, could you take a look at this issue?<|||||>Thank you very much for this detailed issue! Indeed, I completely understand that this behavior is not satisfactory.
We added a new feature some time ago that saves the tokenizer type in the `tokenizer_config.json` file in the `tokenizer_class` key. This new key allows retrieving the value of `model_max_length`. The problem you are experiencing is related to the fact that the `tokenizer_config` file of gpt2 was created before this feature existed.
While waiting for me to come back to you with a more satisfactory solution, I advise you to load the GPT2 tokenizer then save it in a local folder and use these local files for the tokenizer you want to use locally.
```python
from transformers import GPT2TokenizerFast
tokenizer = GPT2TokenizerFast.from_pretrained("gpt2")
tokenizer.save_pretrained("gpt2_tokenizer_fixed")
```
```python
from transformers import GPT2Tokenizer
tokenizer = GPT2Tokenizer.from_pretrained("gpt2_tokenizer_fixed")
print(tokenizer.model_max_length)
# 1024
```
You can even push it to your account on HF if you prefer to retrieve it easily with `.from_pretrained` :blush: .
```python
tokenizer = GPT2TokenizerFast.from_pretrained("gpt2")
tokenizer.push_to_hub("SaulLu/gpt2_tokenizer_fixed") # with your HF username
tokenizer = GPT2Tokenizer.from_pretrained("SaulLu/gpt2_tokenizer_fixed")
print(tokenizer.model_max_length)
# 1024
```
<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored.<|||||>Ideally it would be better to save all the related attributes into the config file so that users can load the exact same model that's saved.
In my case I used the legal-bert tokenizer but customized the model_max_length:
`tokenizer = BertTokenizer.from_pretrained('nlpaueb/legal-bert-base-uncased')`
`tokenizer.model_max_length=512`
but this attr won't be saved when calling the `save_pretrained` function, when I load the model from my local directory, the model length will still be the original value<|||||>I understand that this is not ideal. At the moment, if you want this to be saved in the `tokenizer_config.json` you need to pass the argument in the `from_pretrained` method:
```python
tokenizer = BertTokenizer.from_pretrained('nlpaueb/legal-bert-base-uncased', model_max_length=512)
```<|||||>This issue still persists.
Downloaded files from https://huggingface.co/bert-base-multilingual-cased/tree/main and tried to create tokenizer using from_pretrained from local files
<img width="947" alt="Screenshot 2023-06-01 at 17 26 29" src="https://github.com/huggingface/transformers/assets/22324507/943d1b83-8801-4b61-8bd0-ae7ef639a926">
|
transformers | 14,560 | closed | [Probably a bug] - T5TokenizerFast is not the same as T5Tokenizer - it adds id 1 when using batch encode plus | Ubuntu 18.04
Transformers 4.12.5
I tried T5TokenizerFast and it is much faster but the generation output is different. Encoding part seems like adds one addition 1 (id) while the decoder seem to work fine. But as results output is very different.
1)
from transformers import T5ForConditionalGeneration,T5Tokenizer, T5TokenizerFast
model1a = T5ForConditionalGeneration.from_pretrained(my_model_path)
tokenizer1 = T5Tokenizer.from_pretrained('t5-large')
encoding = tokenizer1_decode.batch_encode_plus(text,pad_to_max_length=True)
encoding {'input_ids': [[9, 5, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0]
2)
from transformers import T5ForConditionalGeneration,T5Tokenizer, T5TokenizerFast
model1a = T5ForConditionalGeneration.from_pretrained(my_model_path)
tokenizer1 = T5TokenizerFast.from_pretrained('t5-large')
encoding = tokenizer1.batch_encode_plus(text,pad_to_max_length=True)
encoding {'input_ids': [[9, 5, 1, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0]
Is this normal behavior, i expected the results to be exactly the same and they are very different.
Solution:
1) for encoding use standard tokenizer
2) for decoding use fast tokenizer (it seems to work fine so far)
| 11-29-2021 08:28:24 | 11-29-2021 08:28:24 | Hi @Oxi84,
I'm not able to reproduce this error.
In the code cell you gave, at the first block you define `tokenizer1` but you are using `tokenizer1_decode` as a tokenizer instance.
The error may come from this line : `encoding = tokenizer1_decode.batch_encode_plus(text,pad_to_max_length=True)` ?<|||||>Thanks for the answer, i solved it. Tokenizerfast already adds s so i had this character 2 times.
from transformers import T5ForConditionalGeneration,T5Tokenizer, T5TokenizerFast
tokenizer = T5Tokenizer.from_pretrained('t5-large')
tokenizerFast = T5TokenizerFast.from_pretrained('t5-large')
sentence_list = ["I really like beer. It's better than whiskey."]
print("sentence_list",sentence_list)
text = []
text1 = []
for sentence in sentence_list:
text.append("paraphrase: " + sentence + " </s>")
text1.append("paraphrase: " + sentence)
encoding = tokenizer.batch_encode_plus(text,pad_to_max_length=True)
encoding1 = tokenizerFast.batch_encode_plus(text,pad_to_max_length=True)
encoding1a = tokenizerFast.batch_encode_plus(text1,pad_to_max_length=True)
input_ids, attention_masks = encoding["input_ids"], encoding["attention_mask"]
input_ids1, attention_masks1 = encoding1["input_ids"], encoding1["attention_mask"]
input_ids1a, attention_masks1a = encoding1a["input_ids"], encoding1a["attention_mask"]
print("encoding",input_ids)
print("encoding",input_ids1)
print("encoding",input_ids1a)
print("attention_masks",attention_masks)
print("attention_masks",attention_masks1)
print("attention_masks",attention_masks1a)
So the results was:
encoding [[3856, 27111, 10, 27, 310, 114, 6061, 5, 94, 31, 7, 394, 145, 27971, 5, 1]]
encoding [[3856, 27111, 10, 27, 310, 114, 6061, 5, 94, 31, 7, 394, 145, 27971, 5, 1, 1]]
encoding [[3856, 27111, 10, 27, 310, 114, 6061, 5, 94, 31, 7, 394, 145, 27971, 5, 1]]
attention_masks [[1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1]]
attention_masks [[1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1]]
attention_masks [[1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1]]
<|||||>I sometimes even get better results with <s> added 2 times which is interesting :)<|||||>@Oxi84 - is there still an issue?<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored. |
transformers | 14,559 | closed | Question about an error occurring while running hf_argparser.py | ## Environment info
<!-- You can run the command `transformers-cli env` and copy-and-paste its output below.
Don't forget to fill out the missing fields in that output! -->
- `transformers` version: 4.3.2
- Platform: Google Colab
- Python version: Python 3.7.12
- PyTorch version (GPU?): 1.8.0+cu111 (using Colab Pro - GPU/high-RAM)
- Tensorflow version (GPU?): 2.7.0
- Using GPU in script?:
- Using distributed or parallel set-up in script?:
### Who can help
<!-- Your issue will be replied to more quickly if you can figure out the right person to tag with @
If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**.
Please tag fewer than 3 people.
Models:
- encoder-decoder models (For example, BlenderBot, BART, Marian, Pegasus, T5, ByT5): @patrickvonplaten, @patil-suraj
If the model isn't in the list, ping @LysandreJik who will redirect you to the correct contributor.
Library:
- Benchmarks: @patrickvonplaten
- Deepspeed: @stas00
- Ray/raytune: @richardliaw, @amogkam
- Text generation: @patrickvonplaten
- Tokenizers: @LysandreJik
- Trainer: @sgugger
- Pipelines: @Narsil
- Speech: @patrickvonplaten, @anton-l
- Vision: @NielsRogge, @sgugger
Documentation: @sgugger
Model hub:
- for issues with a model, report at https://discuss.huggingface.co/ and tag the model's creator.
HF projects:
- datasets: [different repo](https://github.com/huggingface/datasets)
- rust tokenizers: [different repo](https://github.com/huggingface/tokenizers)
Examples:
- maintained examples (not research project or legacy): @sgugger, @patil-suraj
For research projetcs, please ping the contributor directly. For example, on the following projects:
- research_projects/bert-loses-patience: @JetRunner
- research_projects/distillation: @VictorSanh
-->
## Information
Model I am using (Bert, XLNet ...):
ET5 (pre-trained Korean language model using T5 and BART training pattern) developed by ETRI
I have downloaded the pre-trained model from ETRI(https://aiopen.etri.re.kr/service_dataset.php), and they suggests to use the model via PyTorch and the files are consist of HuggingFace model(.model) and SentencePiece tokenizer model(.bin) file.
I keep getting an error message from running this part of a Python file (seq2seq_finetune_t5_ynat.py) that was provided with the ET5 model:
The problem arises when using: HfArgumentParser
```python
def main():
    parser = HfArgumentParser((ModelArguments, DataTrainingArguments, Seq2SeqTrainingArguments))
    if len(sys.argv) == 3 and sys.argv[-1].endswith(".json"):  ### jihee
        # If we pass only one argument to the script and it's the path to a json file,
        # let's parse it to get our arguments.
        model_args, data_args, training_args = parser.parse_json_file(json_file=os.path.abspath(sys.argv[-1]))  ### jihee
    else:
        model_args, data_args, training_args = parser.parse_args_into_dataclasses()
```
* the error message I get:
```
Traceback (most recent call last):
  File "seq2seq_finetune_t5_ynat.py", line 422, in <module>
    main()
  File "seq2seq_finetune_t5_ynat.py", line 199, in main
    parser = HfArgumentParser((ModelArguments, DataTrainingArguments, Seq2SeqTrainingArguments))
  File "/usr/local/lib/python3.7/dist-packages/transformers/hf_argparser.py", line 66, in __init__
    self._add_dataclass_arguments(dtype)
  File "/usr/local/lib/python3.7/dist-packages/transformers/hf_argparser.py", line 116, in _add_dataclass_arguments
    elif hasattr(field.type, "__origin__") and issubclass(field.type.__origin__, List):
  File "/usr/lib/python3.7/typing.py", line 721, in __subclasscheck__
    return issubclass(cls, self.__origin__)
TypeError: issubclass() arg 1 must be a class
```
- I have checked that ModelArguments, DataTrainingArguments, and Seq2SeqTrainingArguments are declared as classes in the Python file (seq2seq_finetune_t5_ynat.py), so I do not understand why it raises the TypeError.
- Please excuse that I am very new to this field so if this post is irrelevant, please let me know, I will remove!
- Thank you for your time reading this.
The tasks I am working on is:
* Trying to build text summarization system in Korean
## To reproduce
Steps to reproduce the behavior:
1.
2.
3.
<!-- If you have code snippets, error messages, stack traces please provide them here as well.
Important! Use code tags to correctly format your code. See https://help.github.com/en/github/writing-on-github/creating-and-highlighting-code-blocks#syntax-highlighting
Do not use screenshots, as they are hard to read and (more importantly) don't allow others to copy-and-paste your code.-->
## Expected behavior
<!-- A clear and concise description of what you would expect to happen. --> | 11-29-2021 07:02:59 | 11-29-2021 07:02:59 | I solved the problem by editing `/usr/lib/python3.7/typing.py`, line 721, in `__subclasscheck__`:
I changed `return issubclass(cls, self.__origin__)` to `return issubclass(type(cls), self.__origin__)`.
<|||||>same issue to me. |
transformers | 14,558 | closed | XLMForSequenceClassification does not work | ## Environment info
<!-- You can run the command `transformers-cli env` and copy-and-paste its output below.
Don't forget to fill out the missing fields in that output! -->
- `transformers` version: 4.12.5
- Platform: Linux 3.10.0-957.21.3.el7.x86_64
- Python version: 3.8.5
- PyTorch version (GPU?): 1.8.1-gpu
- Tensorflow version (GPU?): 2.6.0
- Using GPU in script?: Yes
- Using distributed or parallel set-up in script?: No
### Who can help
@LysandreJik
## Information
Model I am using : XLMForSequenceClassification
The problem arises when using:
* [ ] the official example scripts: (give details below)
* [x] my own modified scripts: (give details below)
The tasks I am working on is:
* [ ] an official GLUE/SQUaD task: (give the name)
* [x] my own task or dataset: (give details below)
I use `XLMForSequenceClassification` to train my downstream text classification tasks.
The loss does not converge and the training-set accuracy barely changes. The dataset used is the English data in `xnli-1.0 (language='en')`.
When I switch to `BertForSequenceClassification`, it can be trained normally and the accuracy reaches 1.0 on the training set. (The training method is exactly the same.)
The loading method is:
```python
model = XLMForSequenceClassification.from_pretrained('xlm-mlm-tlm-xnli15-1024', num_labels=3)
```
The following prompt appears when loading the model:
```
Some weights of XLMForSequenceClassification were not initialized from the model checkpoint at xlm-mlm-enfr-1024 and are newly initialized: ['sequence_summary.summary.bias','transformer.position_ids','sequence_summary.summary.weight']
```
Is it caused by not including position_ids in the pre-training model?
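Not an answer to the question above, but one thing worth double-checking with this particular checkpoint: `xlm-mlm-tlm-xnli15-1024` uses language embeddings, so a `langs` tensor is usually passed along with `input_ids`. A rough sketch (the `lang2id` lookup through the config is an assumption here):
```python
import torch
from transformers import XLMTokenizer, XLMForSequenceClassification

tokenizer = XLMTokenizer.from_pretrained("xlm-mlm-tlm-xnli15-1024")
model = XLMForSequenceClassification.from_pretrained("xlm-mlm-tlm-xnli15-1024", num_labels=3)

inputs = tokenizer("This is a premise and a hypothesis.", return_tensors="pt")
# fill a `langs` tensor with the id of the input language
inputs["langs"] = torch.full_like(inputs["input_ids"], model.config.lang2id["en"])

outputs = model(**inputs, labels=torch.tensor([0]))
print(outputs.loss)
```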
## To reproduce
1. Load the model.
2. Train the model.
3. Print the acc.
<!-- If you have code snippets, error messages, stack traces please provide them here as well.
Important! Use code tags to correctly format your code. See https://help.github.com/en/github/writing-on-github/creating-and-highlighting-code-blocks#syntax-highlighting
Do not use screenshots, as they are hard to read and (more importantly) don't allow others to copy-and-paste your code.-->
## Expected behavior
The accuracy after training XLMForSequenceClassification should normally reach the figures reported in the XLM paper.
Thanks ~~~
<!-- A clear and concise description of what you would expect to happen. -->
| 11-29-2021 05:28:19 | 11-29-2021 05:28:19 | This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored. |
transformers | 14,557 | closed | Fix a Bug, modeling_bert.py, erroneously switched BCE and CrossEntropy | _In BertForSequenceClassification, when detecting which type of loss to use based on self.config.problem_type, BCEWithLogitsLoss and CrossEntropyLoss appear to be erroneously switched. It works correctly when self.config.problem_type is not defined, because they are switched in both the detection and the application code, but if problem_type is correctly defined by the model it fails (or, if "single_label_classification" is selected, CrossEntropy works as a replacement for BCE without obvious errors). I found it when trying to fine-tune the "unitary/toxic-bert" model._
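For context, a paraphrased sketch of the dispatch being discussed (simplified, not a verbatim copy of the modeling code): `single_label_classification` maps to `CrossEntropyLoss` over class indices, while `multi_label_classification` maps to `BCEWithLogitsLoss` over independent sigmoid outputs.
```python
from torch.nn import BCEWithLogitsLoss, CrossEntropyLoss, MSELoss

def compute_loss(logits, labels, problem_type, num_labels):
    if problem_type == "single_label_classification":
        # one class per example -> cross entropy over class indices
        return CrossEntropyLoss()(logits.view(-1, num_labels), labels.view(-1))
    if problem_type == "multi_label_classification":
        # several independent labels per example -> BCE with logits
        return BCEWithLogitsLoss()(logits, labels.float())
    # regression fallback (num_labels == 1)
    return MSELoss()(logits.squeeze(), labels.squeeze())
```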
Update:
Sorry, I incorrectly read multi-label vs. multi-class. I just used the "unitary/toxic-bert" model with the head removed to fine-tune a multi-class task and got errors because the loss was defined for multi-label. :(
##
@LysandreJik | 11-28-2021 18:33:29 | 11-28-2021 18:33:29 | @LysandreJik Can you please review this PR. And help how to fix torch tests and check_code_quality? |
transformers | 14,556 | closed | Target customized languages using multi-lingual sentence transformer models like "stsb-xlm-r-multilingual" | I have seen these models with multi-lingual ability and they are very nice.
But, I would like to ask that is there any possibility to have the specific target language / languages from the multi-language data search?
Like source language >> input, target language << output?
Main motivation is that, I have data in several languages. But I want only specific languages to be searched and get the results from that. | 11-28-2021 18:28:11 | 11-28-2021 18:28:11 | |
transformers | 14,555 | closed | Add LayoutLMv2 to models exportable with ONNX | This PR adds the code for converting LayoutLMv2 model to onnx format. | 11-28-2021 15:32:52 | 11-28-2021 15:32:52 | @michaelbenayoun Can you please review this PR and highlight why these circleci tests are failing ?<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored.<|||||>Hi! Thanks for working on this @fadi212 π I am interested in this work, but I see it has been quite some time since you've had an opportunity to work on it. Are you still working on this? If you no longer have the time or resources to do so, would you be able to provide any next steps or advice on what is necessary for completion? Thank you for your time and effort π€ |
transformers | 14,554 | closed | Loading from checkpoint without skipping requires the same number of GPUs | ## Environment info
<!-- You can run the command `transformers-cli env` and copy-and-paste its output below.
Don't forget to fill out the missing fields in that output! -->
- `transformers` version: 4.12.5
- Platform: Linux-4.4.0-210-generic-x86_64-with-glibc2.10
- Python version: 3.8.8
- PyTorch version (GPU?): 1.9.0a0+2ecb2c7 (True)
- Tensorflow version (GPU?): not installed (NA)
- Flax version (CPU?/GPU?/TPU?): not installed (NA)
- Jax version: not installed
- JaxLib version: not installed
When changing the number of GPUs available, the Trainer fails to load from a checkpoint that was saved with a different number of GPUs.
Note that this does not happen when the flag ignore_data_skip is used.
The "per_device_train_batch_size" remains unchanged between runs.
I use Trainer API
The error looks like this:
File "/opt/conda/lib/python3.8/site-packages/transformers/trainer.py", line 1298, in train
self._load_rng_state(resume_from_checkpoint)
File "/opt/conda/lib/python3.8/site-packages/transformers/trainer.py", line 1529, in _load_rng_state
torch.cuda.random.set_rng_state_all(checkpoint_rng_state["cuda"])
File "/opt/conda/lib/python3.8/site-packages/torch/cuda/random.py", line 73, in set_rng_state_all
set_rng_state(state, i)
File "/opt/conda/lib/python3.8/site-packages/torch/cuda/random.py", line 64, in set_rng_state
_lazy_call(cb)
File "/opt/conda/lib/python3.8/site-packages/torch/cuda/__init__.py", line 114, in _lazy_call
callable()
File "/opt/conda/lib/python3.8/site-packages/torch/cuda/random.py", line 61, in cb
default_generator = torch.cuda.default_generators[idx]
IndexError: tuple index out of range
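For anyone hitting the same wall, here is one possible stop-gap, sketched under the assumption that `Trainer._load_rng_state` (the method in the traceback above) is the only place that trips over the changed GPU count. It simply skips restoring the RNG state in that case, trading exact reproducibility for the ability to resume:
```python
from transformers import Trainer


class RngTolerantTrainer(Trainer):
    # sketch only: swallow the failure when the checkpoint was saved with a
    # different number of GPUs, so training can still resume
    def _load_rng_state(self, checkpoint):
        try:
            super()._load_rng_state(checkpoint)
        except (IndexError, RuntimeError):
            print("Skipping RNG state restore: checkpoint was saved with a different GPU setup.")
```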
Thank you
| 11-28-2021 14:03:28 | 11-28-2021 14:03:28 | Yes, in general the API to resume training requires you to have the exact same setup. You won't get to the same results otherwise.<|||||>> Yes, in general the API to resume training requires you to have the exact same setup. You won't get to the same results otherwise.
My aim is not to reproduce the results.
From time to time I have to switch between different hardware setups and I wish to continue from a checkpoint. Is it at all possible or I cannot work around this error?<|||||>This would require some digging into the Trainer to fix the error you get and see if there are others that arise. It's fixable but not high priority on our side so it might take a little bit of time.<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored.<|||||>Unstale, will whip something around soon to avoid the error.<|||||>>
Thank you! |
transformers | 14,553 | closed | to support 3 dim attention mask in tf version | support 3 dim attention mask
# What does this PR do?
This PR adds support for 3-dimensional attention masks in the TensorFlow models.
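A short sketch of what a 3-dimensional mask means here: a per-example `[batch, from_seq, to_seq]` mask that gets broadcast to the additive 4-D mask used inside the attention layers. This mirrors the usual TF mask expansion and is illustrative rather than the exact diff:
```python
import tensorflow as tf

def expand_attention_mask(attention_mask):
    if len(attention_mask.shape) == 3:
        # already [batch, from_seq, to_seq]: just add the head axis
        extended = attention_mask[:, tf.newaxis, :, :]
    else:
        # classic [batch, to_seq] padding mask
        extended = attention_mask[:, tf.newaxis, tf.newaxis, :]
    extended = tf.cast(extended, tf.float32)
    # 1 -> no bias, 0 -> large negative bias
    return (1.0 - extended) * -10000.0
```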
Fixes # (issue)
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/master/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/master/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
| 11-28-2021 09:24:21 | 11-28-2021 09:24:21 | This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored. |
transformers | 14,552 | closed | Can't run Parallel inference | - `transformers` version: 4.12.3
- Platform: Darwin-20.6.0-x86_64-i386-64bit
- Python version: 3.7.0
- PyTorch version (GPU?): 1.10.0 (False)
- Tensorflow version (GPU?): not installed (NA)
- Flax version (CPU?/GPU?/TPU?): not installed (NA)
- Jax version: not installed
- JaxLib version: not installed
- Using GPU in script?: No
- Using distributed or parallel set-up in script?: parallel
Hi,
I have warning message
`[W ParallelNative.cpp:214] Warning: Cannot set number of intraop threads after parallel work has started or after set_num_threads call when using native parallel backend (function set_num_threads)
`
while using pipeline
```
pipe = pipeline("sentiment-analysis", model=model, padding=True, max_length=200, truncation=True)
results = pipe(texts)
```
It happened on both models:
- "distilbert-base-uncased-finetuned-sst-2-english"
- "cardiffnlp/twitter-roberta-base-sentiment"
Only one CPU is being used!
Any suggestions?
Thanks
@Narsil @LysandreJik | 11-28-2021 08:42:15 | 11-28-2021 08:42:15 | Hi @dabasmoti,
Can you share the full reproducible script ?
It seems that something early on is messing things up.
If you remove `model=model` then it works perfectly. The model creation is likely to be the issue.<|||||>@Narsil
Thank you for your comment.
Here is the script
```
import pandas as pd
from transformers import AutoModelForSequenceClassification
from transformers import AutoTokenizer
import numpy as np
from scipy.special import softmax
import time
from transformers import pipeline
def using_pip(texts, model):
print('Starting Pipe')
pipe = pipeline("sentiment-analysis", model=model, padding=True, max_length=200, truncation=True)
time0 = time.time()
results = pipe(texts)
print(f'Using Pipe: {time.time() - time0}')
def inference(texts, model):
print('Starting Casual')
cls = AutoModelForSequenceClassification.from_pretrained(model)
tokenizer = AutoTokenizer.from_pretrained(model)
time0 = time.time()
preds = []
for i in texts:
encoded_input = tokenizer(i, return_tensors='pt', padding=True, max_length=200, truncation=True)
output = cls(**encoded_input)
scores = output[0][0].detach().cpu().numpy()
scores = softmax(scores)
preds.append(np.argmax(scores))
print(f'Using Casual: {time.time() - time0}')
def main(texts, model):
using_pip(texts, model)
#inference(texts, model)
if __name__ == '__main__':
df = pd.read_csv('tagged_data/Sentiment_TrainSet_Review_PosNeg_20190107.xlsx - a1401422.csv',nrows=100)
#df = df[df.label=='neutral']
print(f'DataFrame shape: {df.shape}')
model = None #cardiffnlp/twitter-roberta-base-sentiment
main(df.Description.to_list(), model)
```
As I said above, I checked it with the default (model=None) and with 'cardiffnlp/twitter-roberta-base-sentiment'.
and now the transformer-version is - `4.12.5`
```
DataFrame shape: (100, 21)
Starting Pipe
[W ParallelNative.cpp:214] Warning: Cannot set number of intraop threads after parallel work has started or after set_num_threads call when using native parallel backend (function set_num_threads)
[W ParallelNative.cpp:214] Warning: Cannot set number of intraop threads after parallel work has started or after set_num_threads call when using native parallel backend (function set_num_threads)
[W ParallelNative.cpp:214] Warning: Cannot set number of intraop threads after parallel work has started or after set_num_threads call when using native parallel backend (function set_num_threads)
[W ParallelNative.cpp:214] Warning: Cannot set number of intraop threads after parallel work has started or after set_num_threads call when using native parallel backend (function set_num_threads)
[W ParallelNative.cpp:214] Warning: Cannot set number of intraop threads after parallel work has started or after set_num_threads call when using native parallel backend (function set_num_threads)
[W ParallelNative.cpp:214] Warning: Cannot set number of intraop threads after parallel work has started or after set_num_threads call when using native parallel backend (function set_num_threads)
[W ParallelNative.cpp:214] Warning: Cannot set number of intraop threads after parallel work has started or after set_num_threads call when using native parallel backend (function set_num_threads)
[W ParallelNative.cpp:214] Warning: Cannot set number of intraop threads after parallel work has started or after set_num_threads call when using native parallel backend (function set_num_threads)
Using Pipe: 8.930904865264893
```
<|||||>Hi, I couldn't reproduce with pandas since it triggers an error too `AttributeError: 'DataFrame' object has no attribute 'Description'
`. (pandas==1.3.4)
So I modified to
`model = "cardiffnlp/twitter-roberta-base-sentiment"`
`main(["This is a test", "This is another test"], model)`
And I have as a result
```bash
Starting Pipe
Using Pipe: 0.07680535316467285
Starting Casual
Using Casual: 0.08454775810241699
```
What seems super odd is that if `texts` is indeed a list of `str`, then both should be equivalent.
Can you tweak your example to reproduce without pandas (or share the file + pandas version necessary to reproduce)?<|||||>Hey,
Tweaked version
```
from transformers import AutoModelForSequenceClassification
from transformers import AutoTokenizer
import numpy as np
from scipy.special import softmax
import time
from transformers import pipeline
def using_pip(texts, model):
print('Starting Pipe')
pipe = pipeline("sentiment-analysis", model=model, padding=True, max_length=200, truncation=True)
time0 = time.time()
results = pipe(texts, num_workers=8)
print(f'Using Pipe: {time.time() - time0}')
def inference(texts, model):
print('Starting Inference')
cls = AutoModelForSequenceClassification.from_pretrained(model)
tokenizer = AutoTokenizer.from_pretrained(model)
time0 = time.time()
preds = []
for i in texts:
encoded_input = tokenizer(i, return_tensors='pt', padding=True, max_length=200, truncation=True)
output = cls(**encoded_input)
scores = output[0][0].detach().cpu().numpy()
scores = softmax(scores)
preds.append(np.argmax(scores))
print(f'Using Casual: {time.time() - time0}')
def main(texts, model):
using_pip(texts, model)
inference(texts, model)
if __name__ == '__main__':
texts = ["I am in Love"] *1000
#default
model = "distilbert-base-uncased-finetuned-sst-2-english" #'cardiffnlp/twitter-roberta-base-sentiment'
main(texts, model)
```
<|||||>I get:
`distilbert-base-uncased-finetuned-sst-2-english`
```bash
Starting Pipe
Using Pipe: 25.281177520751953
Starting Inference
Using Casual: 23.430365562438965
```
`cardiffnlp/twitter-roberta-base-sentiment`
```
Starting Pipe
Using Pipe: 45.36654043197632
Starting Inference
Using Casual: 46.86234784126282
```
Could you share your pytorch version + hardware specifications maybe? (This ran on `pytorch 1.10 + ubuntu 20.04 on i7-4790`, so nothing too beefy)
TBH, aside from the `DataLoader` the `pipe` and `inference` should be extremely similar in performance since, they do basically the same thing. (and `DataLoader` shines only on GPU)<|||||>torch - 1.10.0
Processor - 2.3 GHz Quad-Core Intel Core i7<|||||>Is it a custom build of pytorch ?
Python version maybe (I am in 3.9) ?
We need to single out what makes your environment not working otherwise it'll be hard for me to help.<|||||>As I wrote in the first place
transformers version: 4.12.3
Platform: Darwin-20.6.0-x86_64-i386-64bit
Python version: 3.7.0
PyTorch version (GPU?): 1.10.0 (False)
Thanks for your help!<|||||>Okay, I tried every single thing I could but I am unable to reproduce.
I am guessing the issue lies in `Darwin` at that point.
I looked into pytorch issues which seem relevant (but cannot try to confirm at this time)
https://github.com/pytorch/pytorch/issues/58585
https://github.com/pytorch/pytorch/issues/46409<|||||>@Narsil
Thanks
I suspected that the issue was with `fork` or `spawn`, but I couldn't find any related issues
Now, I can't find how to change the constructor to `spawn`<|||||>This comment seemed helpful to some users:https://github.com/pytorch/pytorch/issues/46409#issuecomment-726495303
If you can manage to make the fix, we'll try to integrate back into pipeline too (the trick part will be identifying to necessary conditions to trigger if we don't understand the bug properly :( )<|||||>What a strange behavior!
I have updated the python version to `3.8.9`
and now the 3.8 version is much slower with `pipe` but the warning is gone
```
Loading model took: 10.609
Starting Pipe
Using Pipe on distilbert-base-uncased-finetuned-sst-2-english-model: 27.492
Loading Model - distilbert-base-uncased-finetuned-sst-2-english
Loading model took: 12.097
Starting Loop Inference
Using loop on distilbert-base-uncased-finetuned-sst-2-english-model: 2.857
```
On 3.7
```
Running Python Version : 3.7.0
Loading Model - distilbert-base-uncased-finetuned-sst-2-english
Loading model took: 10.049
Starting Pipe
[W ParallelNative.cpp:214] Warning: Cannot set number of intraop threads after parallel work has started or after set_num_threads call when using native parallel backend (function set_num_threads)
[W ParallelNative.cpp:214] Warning: Cannot set number of intraop threads after parallel work has started or after set_num_threads call when using native parallel backend (function set_num_threads)
[W ParallelNative.cpp:214] Warning: Cannot set number of intraop threads after parallel work has started or after set_num_threads call when using native parallel backend (function set_num_threads)
[W ParallelNative.cpp:214] Warning: Cannot set number of intraop threads after parallel work has started or after set_num_threads call when using native parallel backend (function set_num_threads)
[W ParallelNative.cpp:214] Warning: Cannot set number of intraop threads after parallel work has started or after set_num_threads call when using native parallel backend (function set_num_threads)
[W ParallelNative.cpp:214] Warning: Cannot set number of intraop threads after parallel work has started or after set_num_threads call when using native parallel backend (function set_num_threads)
[W ParallelNative.cpp:214] Warning: Cannot set number of intraop threads after parallel work has started or after set_num_threads call when using native parallel backend (function set_num_threads)
[W ParallelNative.cpp:214] Warning: Cannot set number of intraop threads after parallel work has started or after set_num_threads call when using native parallel backend (function set_num_threads)
Using Pipe on distilbert-base-uncased-finetuned-sst-2-english-model: 3.945
Loading Model - distilbert-base-uncased-finetuned-sst-2-english
Loading model took: 8.762
Starting Loop Inference
Using loop on distilbert-base-uncased-finetuned-sst-2-english-model: 2.617
```
Script to reproduce
```
from transformers import AutoModelForSequenceClassification
from transformers import AutoTokenizer
import numpy as np
from scipy.special import softmax
import time
from transformers import pipeline
import platform
def using_pip(texts, model):
print(f'Loading Model - {model}')
time0 = time.time()
pipe = pipeline("sentiment-analysis",
model=model,
padding=True,
max_length=200,
truncation=True)
print(f'Loading model took: {round(time.time() - time0,3)}')
time0 = time.time()
print('Starting Pipe')
results = pipe(texts)
print(f'Using Pipe on {model}-model: {round(time.time() - time0,3)}')
def inference(texts, model):
print(f'Loading Model - {model}')
time0 = time.time()
cls = AutoModelForSequenceClassification.from_pretrained(model)
tokenizer = AutoTokenizer.from_pretrained(model)
print(f'Loading model took: {round(time.time() - time0,3)}')
time0 = time.time()
print('Starting Loop Inference')
preds = []
for i in texts:
encoded_input = tokenizer(i,
return_tensors='pt',
padding=True,
max_length=200,
truncation=True)
output = cls(**encoded_input)
scores = output[0][0].detach().cpu().numpy()
scores = softmax(scores)
preds.append(np.argmax(scores))
print(f'Using loop on {model}-model: {round(time.time() - time0,3)}')
def main(texts, model):
using_pip(texts, model)
inference(texts, model)
if __name__ == '__main__':
print(f'Running Python Version : {platform.python_version()}')
texts = ["I am in Love"] *100
model = "distilbert-base-uncased-finetuned-sst-2-english" #'cardiffnlp/twitter-roberta-base-sentiment'
main(texts, model)
```
<|||||>I ran the same experiments on my MacBook (BigSur, 2.3 GHz Dual-Core Intel Core i5):
```
Running Python Version : 3.8.5
Loading Model - distilbert-base-uncased-finetuned-sst-2-english
Loading model took: 10.064
Starting Pipe
/Users/leandro/git/test-pipe-speed/env/lib/python3.8/site-packages/torch/utils/data/dataloader.py:478: UserWarning: This DataLoader will create 8 worker processes in total. Our suggested max number of worker in current system is 4 (`cpuset` is not taken into account), which is smaller than what this DataLoader is going to create. Please be aware that excessive worker creation might get DataLoader running slow or even freeze, lower the worker number to avoid potential slowness/freeze if necessary.
warnings.warn(_create_warning_msg(
Using Pipe on distilbert-base-uncased-finetuned-sst-2-english-model: 27.524
Loading Model - distilbert-base-uncased-finetuned-sst-2-english
Loading model took: 7.503
Starting Loop Inference
Using loop on distilbert-base-uncased-finetuned-sst-2-english-model: 3.213
```
The warning made me try `pipe(texts, num_workers=4)`:
```
Running Python Version : 3.8.5
Loading Model - distilbert-base-uncased-finetuned-sst-2-english
Loading model took: 8.089
Starting Pipe
Using Pipe on distilbert-base-uncased-finetuned-sst-2-english-model: 23.554
Loading Model - distilbert-base-uncased-finetuned-sst-2-english
Loading model took: 7.388
Starting Loop Inference
Using loop on distilbert-base-uncased-finetuned-sst-2-english-model: 2.979
```
A little faster. So I also tried `pipe(texts, num_workers=1)`:
```
Running Python Version : 3.8.5
Loading Model - distilbert-base-uncased-finetuned-sst-2-english
Loading model took: 8.23
Starting Pipe
Using Pipe on distilbert-base-uncased-finetuned-sst-2-english-model: 5.636
Loading Model - distilbert-base-uncased-finetuned-sst-2-english
Loading model took: 6.898
Starting Loop Inference
Using loop on distilbert-base-uncased-finetuned-sst-2-english-model: 2.977
```
Finally, `num_workers=0` seems to resolve the issue:
```
Running Python Version : 3.8.5
Loading Model - distilbert-base-uncased-finetuned-sst-2-english
Loading model took: 9.538
Starting Pipe
Using Pipe on distilbert-base-uncased-finetuned-sst-2-english-model: 2.921
Loading Model - distilbert-base-uncased-finetuned-sst-2-english
Loading model took: 6.953
Starting Loop Inference
Using loop on distilbert-base-uncased-finetuned-sst-2-english-model: 2.798
```
So it seems to be an issue with parallelism - potentially in the `DataLoader`?
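Summarizing the experiments above as a minimal sketch (model name and call arguments taken from this thread), the workaround is simply to call the pipeline with `num_workers=0`:

```python
# Minimal sketch of the workaround that emerges from the experiments above:
# keep the pipeline, but disable the DataLoader worker processes.
from transformers import pipeline

pipe = pipeline(
    "sentiment-analysis",
    model="distilbert-base-uncased-finetuned-sst-2-english",
    padding=True,
    max_length=200,
    truncation=True,
)
results = pipe(["I am in Love"] * 100, num_workers=0)   # num_workers=0 avoids the slowdown/warning
```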
Yes, it seems an error in the `DataLoader` is somehow interfering with the Intel parallelism used for tensor calculation during inference.
I guess at that point disabling DataLoader is the most sensible thing to do (num_workers=0 achieves that).<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored. |
transformers | 14,551 | closed | Question about the <mask> | When fine-tuning BERT, can I add some kind of constraint on the pre-trained **BERT model** so that BERT knows that the **\<mask\>** tokens at different locations should predict the same content? For example, **xxx \<mask1\> xxx xxx, .... xxx \<mask1\>.**. I know BERT can usually predict the **MASK** at both positions as the same word when the sentence is not very long. However, if the sentence is long, is there any way to tell the model that the two positions should be predicted as the same word? Please answer at your convenience, thank you. :) | 11-28-2021 08:24:00 | 11-28-2021 08:24:00 | This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored. |
transformers | 14,550 | closed | Huggingface Missing Larger T5 Flax Models | ## Environment info
<!-- You can run the command `transformers-cli env` and copy-and-paste its output below.
Don't forget to fill out the missing fields in that output! -->
- `transformers` version: 4.12.5
- Platform: Linux-5.4.0-90-generic-x86_64-with-glibc2.29
- Python version: 3.8.10
- PyTorch version (GPU?): 1.8.1+cu102 (True)
- Tensorflow version (GPU?): not installed (NA)
- Flax version (CPU?/GPU?/TPU?): 0.3.5 (gpu)
- Jax version: 0.2.17
- JaxLib version: 0.1.65
- Using GPU in script?: yes
- Using distributed or parallel set-up in script?: no
### Who can help
@patrickvonplaten, @patil-suraj
## Information
Model I am using (Bert, XLNet ...): T5
The problem arises when using:
* [ ] the official example scripts: (give details below)
* [x] my own modified scripts: (give details below)
`404 Client Error: Not Found for url: https://huggingface.co/t5-large/resolve/main/flax_model.msgpack`
## To reproduce
Steps to reproduce the behavior:
```
import jax.numpy as jnp
import transformers
model = transformers.FlaxT5ForConditionalGeneration.from_pretrained("t5-large", dtype=jnp.dtype("bfloat16"))
```
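(A possible interim workaround, not taken from this thread: Flax models can usually convert the PyTorch checkpoint on the fly with `from_pt=True`; this needs `torch` installed and enough RAM for the larger checkpoints.)

```python
# Hedged workaround sketch: convert the PyTorch weights on the fly until a native
# flax_model.msgpack is uploaded. Requires `torch` to be installed.
import jax.numpy as jnp
from transformers import FlaxT5ForConditionalGeneration

model = FlaxT5ForConditionalGeneration.from_pretrained(
    "t5-3b", from_pt=True, dtype=jnp.dtype("bfloat16")
)
```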
<!-- If you have code snippets, error messages, stack traces please provide them here as well.
Important! Use code tags to correctly format your code. See https://help.github.com/en/github/writing-on-github/creating-and-highlighting-code-blocks#syntax-highlighting
Do not use screenshots, as they are hard to read and (more importantly) don't allow others to copy-and-paste your code.-->
## Expected behavior
<!-- A clear and concise description of what you would expect to happen. -->
For `https://huggingface.co/t5-large/resolve/main/flax_model.msgpack` to exist and be used to load the model.
As the corresponding URLs for `t5-small` and `t5-base` work, I'm assuming the files for larger models just haven't been uploaded yet. I only need `t5-11b`, although I would prefer to perform sanity checks on the large or 3B versions first. | 11-28-2021 01:37:53 | 11-28-2021 01:37:53 | cc @patil-suraj @patrickvonplaten<|||||>Thanks for the issue - just uploaded it: https://huggingface.co/t5-large/blob/main/flax_model.msgpack<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored.<|||||>I just noticed that the Flax checkpoints for t5-3b and t5-11b are also missing.
@patrickvonplaten is it also possible that you upload them? Thanks! |
transformers | 14,549 | closed | Update TAPAS tokenization's add_numeric_table_values | Using `table.iloc[row_index, col_index]` will use the index label, whereas `table.loc[row_index, col_index]` will use the location. So if the original dataframe had an index, using `iloc` would try to access the incorrect index and fall out of range. | 11-27-2021 20:28:18 | 11-27-2021 20:28:18 | @NielsRogge could you take a look at this?<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored.<|||||>@LysandreJik is that something useful to merge? It solves the following issues https://github.com/huggingface/transformers/issues/14544 and https://github.com/huggingface/transformers/issues/10265
They are marked as closed since they were auto-closed by the bot, but the issues still persist. I'm also not sure why the tests fail, but i can look into it if someone is willing to review & merge this PR (if no one is available for that then i can just close it).<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored.<|||||>Hi @xhlulu,
sorry for the very late reply. Could you rebase your PR with master, such that I can take a look to merge it?
Thanks,
Niels<|||||>_The documentation is not available anymore as the PR was closed or merged._<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md) are likely to be ignored.<|||||>I'm not sure if iloc even works here and not familiar with the tests. Will close this as this isn't a priority and there's an easy workaround. |
transformers | 14,548 | closed | Does the word embedding matrix of GPT2 load from the checkpoint, during the fine-tuning? | Does the word embedding matrix of GPT2 load from the checkpoint, during the fine-tuning?
(transformer.wte.weight, the shared token embedding layer / shared weights logic)
I tried to do the following:
model = TFGPT2LMHeadModel.from_pretrained("gpt2-medium")
model.get_output_embeddings().weight
But the following error occurred, probably because the parameters of wte.weight were not initialized.
tensorflow.python.framework.errors_impl.FailedPreconditionError: 2 root error(s) found.
(0) Failed precondition: Error while reading resource variable tfgp_t2lm_head_model/transformer/wte/weight from Container: localhost. This could mean that the variable was uninitialized. Not found: Container localhost does not exist. (Could not find resource: localhost/tfgp_t2lm_head_model/transformer/wte/weight)
[[node tfgp_t2lm_head_model/transformer/wte/weight/Read/ReadVariableOp (defined at home/nak/cho/.local/lib/python3.7/site-packages/tensorflow_core/python/framework/ops.py:1751) ]]
[[tfgp_t2lm_head_model/transformer/wte/weight/Read/ReadVariableOp/_1]]
(1) Failed precondition: Error while reading resource variable tfgp_t2lm_head_model/transformer/wte/weight from Container: localhost. This could mean that the variable was uninitialized. Not found: Container localhost does not exist. (Could not find resource: localhost/tfgp_t2lm_head_model/transformer/wte/weight)
[[node tfgp_t2lm_head_model/transformer/wte/weight/Read/ReadVariableOp (defined at home/nak/cho/.local/lib/python3.7/site-packages/tensorflow_core/python/framework/ops.py:1751) ]]
I am very confused. Shouldn't the parameters of the word embedding table be restored directly from the checkpoint during fine-tuning? Why is it not initialized?
Moreover, when adding special tokens through the following operations, the number of tokens in the dictionary changes, which means the dimension of the word embedding table is changed. Does this mean that the wte.weight needs to be reinitialized? Will the fine-tuning effect be good in this case?
num_added_toks = tokenizer.add_special_tokens(special_tokens_dict)
model.resize_token_embeddings(len(tokenizer))
Many thanks!
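A small, hypothetical check of how `resize_token_embeddings` behaves (shown with the PyTorch GPT-2 classes purely because weights are easy to inspect there; the added token name is made up): the pretrained rows are kept and only the rows for newly added tokens are freshly initialized, so fine-tuning after adding special tokens normally still works well.

```python
# Hedged illustration, not from this thread.
from transformers import GPT2Tokenizer, GPT2LMHeadModel

tokenizer = GPT2Tokenizer.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2")          # for GPT-2 the wte and lm_head weights are tied

old = model.get_input_embeddings().weight.detach().clone()
tokenizer.add_special_tokens({"additional_special_tokens": ["<my_new_token>"]})  # hypothetical token
model.resize_token_embeddings(len(tokenizer))
new = model.get_input_embeddings().weight.detach()

print(bool((new[: old.shape[0]] == old).all()))          # True: pretrained rows are preserved
print(new.shape[0] - old.shape[0])                       # 1: only the new token's row is new
```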
| 11-27-2021 12:42:55 | 11-27-2021 12:42:55 | I was mistaken. Sorry. |
transformers | 14,547 | closed | [Flax] token-classification model steps enumerate start from 1 | # What does this PR do?
Model saving at the last step was skipped because `enumerate` starts counting steps at 0; a small sketch of the off-by-one is shown below.
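A tiny illustrative sketch of the off-by-one (hypothetical numbers, not the actual `run_flax_ner.py` code):

```python
# With `step` starting at 0, the save condition is never hit on the very last step.
steps_per_epoch, num_epochs, save_steps = 4, 2, 8
for epoch in range(num_epochs):
    for step in range(steps_per_epoch):                # step runs 0 .. 3
        cur_step = epoch * steps_per_epoch + step      # final value is 7, never 8
        if cur_step % save_steps == 0 and cur_step > 0:
            print(f"saving at step {cur_step}")        # never reached

# Counting from 1 fixes it: cur_step = epoch * steps_per_epoch + (step + 1)
```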
## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md#start-contributing-pull-requests),
Pull Request section?
- [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link
to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the
[documentation guidelines](https://github.com/huggingface/transformers/tree/master/docs), and
[here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/master/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?
## Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
@sgugger @patil-suraj | 11-27-2021 11:54:04 | 11-27-2021 11:54:04 | You mean here
https://github.com/huggingface/transformers/blob/2b87fdac6c3ff6483eb4d8f98bbd4e1374ac4958/examples/flax/token-classification/run_flax_ner.py#L602
Like this `cur_step = (epoch * step_per_epoch) + (step + 1) `
<|||||>Yes, exactly! |
transformers | 14,546 | closed | Fix a Bug, trainer_seq2seq.py, in the else branch at Line 172, generation_inputs should be a dict | # Fixing Bug
In `trainer_seq2seq.py / Seq2SeqTrainer / prediction_step`, Line 174 reads:
```python
generated_tokens = self.model.generate(
**generation_inputs,
**gen_kwargs,
)
```
which requires `generation_inputs` to be a `dict`. However, in the `else` branch at Line 171, `generation_inputs` is created as a plain `Tensor` object, which causes a problem (a minimal sketch of the described fix is shown below).
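A minimal sketch of the kind of change described (illustrative only, not the exact diff; the stand-in `inputs` dict is hypothetical):

```python
import torch

inputs = {"input_ids": torch.tensor([[0, 1, 2]])}       # stand-in for the batch the Trainer receives
gen_kwargs = {"max_length": 20}

# previously: generation_inputs = inputs["input_ids"]   # a bare Tensor cannot be **-unpacked
generation_inputs = {"input_ids": inputs["input_ids"]}  # wrap it in a dict instead

# generated_tokens = self.model.generate(**generation_inputs, **gen_kwargs)  # now unpacks cleanly
```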
Fix this by creating `generation_inputs` as a dict, and add a key called `input_ids`. | 11-27-2021 07:18:53 | 11-27-2021 07:18:53 | Hey @TranSirius,
Thanks a lot for your PR here! It looks good to me - @sgugger can you maybe take a look as well?<|||||>Should we maybe write some tests for this use case as well?<|||||>Oops didn't see your comment @patrickvonplaten. Adding a test would be nice to have indeed @TranSirius if you want to work on it on a separate PR. |
transformers | 14,545 | closed | fix bug for trainer_seq2seq, generation_inputs should be a dict befor… | …e sending to model.generate (Around Line 165 - Line 185)
# Fixing Bug
In `trainer_seq2seq.py / Seq2SeqTrainer / prediction_step`, Line 174 - Line 177 read:
```python
generated_tokens = self.model.generate(
**generation_inputs,
**gen_kwargs,
)
```
which requires `generation_inputs` to be a `dict`. However, in the `else` branch at Line 171, `generation_inputs` is created as a plain `Tensor` object, which causes a problem.
Fix this by creating `generation_inputs` as a dict, and add a key called `input_ids`. | 11-27-2021 06:52:53 | 11-27-2021 06:52:53 | |
transformers | 14,544 | closed | TAPAS tokenizer is unable to handle indexed dataframes | Running the following code will result in an `iloc` error:
```python
from transformers import TapasTokenizer
import pandas as pd
model_name = 'google/tapas-base'
tokenizer = TapasTokenizer.from_pretrained(model_name)
data = {'Actors': ["Brad Pitt", "Leonardo Di Caprio", "George Clooney"], 'Number of movies': ["87", "53", "69"]}
queries = ["What is the name of the first actor?", "How many movies has George Clooney played in?", "What is the total number of movies?"]
answer_coordinates = [[(0, 0)], [(2, 1)], [(0, 1), (1, 1), (2, 1)]]
answer_text = [["Brad Pitt"], ["69"], ["209"]]
table = pd.DataFrame.from_dict(data)
# Let's add random years - this will break the tokenizer
table.index = [2000, 2010, 2020]
inputs = tokenizer(table=table, queries=queries, answer_coordinates=answer_coordinates, answer_text=answer_text, padding='max_length', return_tensors='pt')
inputs
``` | 11-27-2021 04:30:22 | 11-27-2021 04:30:22 | Hi,
This has been reported before in #10265. Would you mind opening a PR to fix this?<|||||>@NielsRogge I started a PR here: https://github.com/huggingface/transformers/pull/14549
I'm not sure if it totally solves the problem, but it's a start<|||||>@NielsRogge I'm looking at this comment right now:
> Here's a notebook illustrating the issue, and fixing it:
>
> https://colab.research.google.com/drive/10MbZiMKyEWUGk2Y1fvIj0Y0_lB42NO38?usp=sharing
>
> The reason why you're getting the error is because in the part where each cell of the table is replaced by a Cell object:
>
> https://github.com/huggingface/transformers/blob/97e688bc220514cd5ea072f06b186401c9cfbbd0/src/transformers/models/tapas/tokenization_tapas.py#L2742-L2745
> , the row indices are used.
>
> This can be fixed by replacing `table` with `table.reset_index(drop=True)` in the first line (or resetting the index of the table before providing it to the tokenizer). Another solution is to replace the final line by `table.iloc[row_index, col_index] = Cell(text=table.iloc[row_index, col_index])`. Will make a small PR to add this.
>
> Thank you for spotting the error!
I'm wondering if there's any performance improvement in iterating this way through the dataframe over something like:
```python
df.applymap(lambda cell: Cell(text=cell))
```<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored.<|||||>@NielsRogge has this been resolved? |
transformers | 14,543 | closed | TAPAS tanh activation on the pooling layer | I noticed the following in the TAPAS pooling layer:
https://github.com/huggingface/transformers/blob/d83b0e0c079f0826d186270a86622ff5f1efd9c1/src/transformers/models/tapas/modeling_tapas.py#L696-L709
I'm curious about the use of `nn.Tanh()`. I wasn't able to find more information about that activation in [the paper](https://arxiv.org/abs/2004.02349). Is it possible to know where it comes from? Thanks! | 11-27-2021 03:45:53 | 11-27-2021 03:45:53 | Hi,
The TAPAS authors borrowed this from the original BERT paper, which decided to apply a tanh layer.
The BERT author explains why he did that [here](https://github.com/google-research/bert/issues/43).<|||||>Ah thanks, you are right. They indeed use tanh in the code: https://github.com/google-research/tapas/blob/f3d9f068e6eedb252883049b582516a1294ff951/tapas/models/bert/modeling.py#L269-L277
Wish it was mentioned in the appendix of the TAPAS paper 🤷 Thanks for clarifying! |
transformers | 14,542 | closed | CUDA OOM at `self.optimizer.consolidate_state_dict()` in Trainer when using sharded_ddp | ## Environment info
<!-- You can run the command `transformers-cli env` and copy-and-paste its output below.
Don't forget to fill out the missing fields in that output! -->
- `transformers` version: 4.12.3
- Platform: Linux-5.4.0-1057-aws-x86_64-with-debian-buster-sid
- Python version: 3.7.10
- PyTorch version (GPU?): 1.7.1 (True)
- Tensorflow version (GPU?): not installed (NA)
- Flax version (CPU?/GPU?/TPU?): not installed (NA)
- Jax version: not installed
- JaxLib version: not installed
- Using GPU in script?: 8 GPUs
- Using distributed or parallel set-up in script?: sharded_ddp (fairscale 0.4.2)
### Who can help
@sgugger
## Information
Model I am using (Bert, XLNet ...): BART-base
The problem arises when using:
* my own modified scripts: (give details below)
* I'm using my own code which is mainly modified from `run_mlm.py`(https://github.com/huggingface/transformers/blob/v4.12.3/examples/pytorch/language-modeling/run_mlm.py) for pretraining with huggingface trainer
The tasks I am working on is:
* my own task or dataset: (give details below)
* I'm using wikipedia corpus.
## To reproduce
Steps to reproduce the behavior:
1. run the script `run_mlm.py`(https://github.com/huggingface/transformers/blob/v4.12.3/examples/pytorch/language-modeling/run_mlm.py)
2. run the script with the following command line
```
python -m torch.distributed.launch --nproc_per_node=8 --master_port=10000 run_mlm.py \
--model_name_or_path roberta-base \
--dataset_name wikitext \
--dataset_config_name wikitext-2-raw-v1 \
--do_train \
--do_eval \
--cache_dir /tmp/test-mlm \
--output_dir /tmp/test-mlm \
--sharded_ddp simple \
--overwrite_output_dir \
--per_device_train_batch_size 2 \
--per_device_eval_batch_size 4
```
<!-- If you have code snippets, error messages, stack traces please provide them here as well.
Important! Use code tags to correctly format your code. See https://help.github.com/en/github/writing-on-github/creating-and-highlighting-code-blocks#syntax-highlighting
Do not use screenshots, as they are hard to read and (more importantly) don't allow others to copy-and-paste your code.-->
Traceback (most recent call last):
File "run_mlm.py", line 538, in <module>
main()
File "run_mlm.py", line 487, in main
train_result = trainer.train(resume_from_checkpoint=checkpoint)
File "/home/ubuntu/anaconda3/envs/pytorch_p37/lib/python3.7/site-packages/transformers/trainer.py", line 1383, in train
self._maybe_log_save_evaluate(tr_loss, model, trial, epoch, ignore_keys_for_eval)
File "/home/ubuntu/anaconda3/envs/pytorch_p37/lib/python3.7/site-packages/transformers/trainer.py", line 1495, in _maybe_log_save_evaluate
self._save_checkpoint(model, trial, metrics=metrics)
File "/home/ubuntu/anaconda3/envs/pytorch_p37/lib/python3.7/site-packages/transformers/trainer.py", line 1565, in _save_checkpoint
self.optimizer.consolidate_state_dict()
File "/home/ubuntu/anaconda3/envs/pytorch_p37/lib/python3.7/site-packages/fairscale/optim/oss.py", line 358, in consolidate_state_dict
obj_list, src=self._local_to_global_rank[rank], group=self.group,
File "/home/ubuntu/anaconda3/envs/pytorch_p37/lib/python3.7/site-packages/torch/distributed/distributed_c10d.py", line 1403, in broadcast_object_list
object_list[i] = _tensor_to_object(obj_view, obj_size)
File "/home/ubuntu/anaconda3/envs/pytorch_p37/lib/python3.7/site-packages/torch/distributed/distributed_c10d.py", line 1187, in _tensor_to_object
out = pickle.loads(buf)
File "/home/ubuntu/anaconda3/envs/pytorch_p37/lib/python3.7/site-packages/torch/storage.py", line 141, in _load_from_bytes
return torch.load(io.BytesIO(b))
File "/home/ubuntu/anaconda3/envs/pytorch_p37/lib/python3.7/site-packages/torch/serialization.py", line 595, in load
return _legacy_load(opened_file, map_location, pickle_module, **pickle_load_args)
File "/home/ubuntu/anaconda3/envs/pytorch_p37/lib/python3.7/site-packages/torch/serialization.py", line 774, in _legacy_load
result = unpickler.load()
File "/home/ubuntu/anaconda3/envs/pytorch_p37/lib/python3.7/site-packages/torch/serialization.py", line 730, in persistent_load
deserialized_objects[root_key] = restore_location(obj, location)
File "/home/ubuntu/anaconda3/envs/pytorch_p37/lib/python3.7/site-packages/torch/serialization.py", line 175, in default_restore_location
result = fn(storage, location)
File "/home/ubuntu/anaconda3/envs/pytorch_p37/lib/python3.7/site-packages/torch/serialization.py", line 155, in _cuda_deserialize
return storage_type(obj.size())
File "/home/ubuntu/anaconda3/envs/pytorch_p37/lib/python3.7/site-packages/torch/cuda/__init__.py", line 462, in _lazy_new
return super(_CudaBase, cls).__new__(cls, *args, **kwargs)
RuntimeError: CUDA error: out of memory
## Expected behavior
<!-- A clear and concise description of what you would expect to happen. -->
Could you please tell me how to fix this issue? | 11-26-2021 20:26:27 | 11-26-2021 20:26:27 | You should remove the checkpointing by using `save_strategy="no"` to avoid getting out of memory: since the optimizer states are sharded to avoid spending too much GPU memory, you get the OOM error when the `Trainer` tries to save the optimizer state.<|||||>Thank you for your reply! But if I remove the checkpointing, then I cannot get any checkpoint until the end of the training or resume training from the checkpoint. Do you have any idea if I still want to the checkpoints?<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored.<|||||>> Thank you for your reply! But if I remove the checkpointing, then I cannot get any checkpoint until the end of the training or resume training from the checkpoint. Do you have any idea if I still want to the checkpoints?
So are there any solutions?<|||||>At the end, I rewrite the Trainer saving function to skip saving the optimizer weights. Please let me know if there's any better solution! Thx!<|||||>Same here. I just commented that line which saves the optimizer<|||||>Any update? @sgugger <|||||>Probably need to add an option to `training_argument` that we want to disable the saving of optimizer state_dict, and default is `True` <|||||>>
Yeah, I think that could be the "unsatisfactory" solution for us, but at least it is a simple one to implement. And I assume it won't affect much if we don't save the optimizer, right?<|||||>@allanj @sgugger I found the solution.
You need to update to fairscale==0.4.6, then in the Hugging Face Transformers Trainer code (in the `create_optimizer` function) add `force_broadcast_object=True`; that solves the problem. Now it can correctly store the optimizer state_dict, compared to just skipping the optimizer-weight save that @yana-xuyan suggested.
You can find the `force_broadcast_object=True` in the latest code in [fairscale](https://github.com/facebookresearch/fairscale/blob/main/fairscale/optim/oss.py)
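A hedged sketch of where the flag would go (assumes fairscale >= 0.4.6 exposes `force_broadcast_object` on `OSS`, as the link above suggests; the real `Trainer.create_optimizer` contains much more logic):

```python
# Sketch only: must run under a distributed launcher (torch.distributed initialized);
# the model and learning rate here are stand-ins, not the values used by the Trainer.
import torch
from fairscale.optim.oss import OSS

model = torch.nn.Linear(8, 8)
optimizer = OSS(
    params=model.parameters(),
    optim=torch.optim.AdamW,
    lr=5e-5,
    force_broadcast_object=True,   # intended to avoid the CUDA OOM in consolidate_state_dict()
)
```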
<|||||>Do you want to make a PR with the fix?<|||||>> Do you want to make a PR with the fix?
Yeah sure, with pleasure. Will do it before this weekend. Cheers! |
transformers | 14,541 | closed | Out of the box GPT-2 CLM hits out of memory on an AWS 8x Nvidia A100 VM | Running the command located here https://github.com/huggingface/transformers/tree/master/examples/pytorch/language-modeling
python run_clm.py \
--model_name_or_path gpt2 \
--dataset_name wikitext \
--dataset_config_name wikitext-2-raw-v1 \
--do_train \
--do_eval \
--output_dir /tmp/test-clm
Produces an OOM error on the AWS p4d.24xlarge VM. Surely this should run out of the box on a VM like this without me having to fiddle with the training params...
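(As an aside, a lower-memory variant of the same command; the values below are illustrative and not benchmarked:)
python run_clm.py \
--model_name_or_path gpt2 \
--dataset_name wikitext \
--dataset_config_name wikitext-2-raw-v1 \
--per_device_train_batch_size 4 \
--gradient_accumulation_steps 2 \
--block_size 512 \
--do_train \
--do_eval \
--output_dir /tmp/test-clm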
See stack trace below:
```
(pytorch_latest_p37) ubuntu@ip-172-31-20-157:~/transformers/examples/pytorch/language-modeling$ python run_clm.py \
> --model_name_or_path gpt2 \
> --dataset_name wikitext \
> --dataset_config_name wikitext-2-raw-v1 \
> --do_train \
> --do_eval \
> --output_dir /tmp/test-clm
11/26/2021 16:35:03 - WARNING - __main__ - Process rank: -1, device: cuda:0, n_gpu: 8distributed training: False, 16-bits training: False
11/26/2021 16:35:03 - INFO - __main__ - Training/evaluation parameters TrainingArguments(
_n_gpu=8,
adafactor=False,
adam_beta1=0.9,
adam_beta2=0.999,
adam_epsilon=1e-08,
dataloader_drop_last=False,
dataloader_num_workers=0,
dataloader_pin_memory=True,
ddp_find_unused_parameters=None,
debug=[],
deepspeed=None,
disable_tqdm=False,
do_eval=True,
do_predict=False,
do_train=True,
eval_accumulation_steps=None,
eval_steps=None,
evaluation_strategy=IntervalStrategy.NO,
fp16=False,
fp16_backend=auto,
fp16_full_eval=False,
fp16_opt_level=O1,
gradient_accumulation_steps=1,
gradient_checkpointing=False,
greater_is_better=None,
group_by_length=False,
hub_model_id=None,
hub_strategy=HubStrategy.EVERY_SAVE,
hub_token=<HUB_TOKEN>,
ignore_data_skip=False,
label_names=None,
label_smoothing_factor=0.0,
learning_rate=5e-05,
length_column_name=length,
load_best_model_at_end=False,
local_rank=-1,
log_level=-1,
log_level_replica=-1,
log_on_each_node=True,
logging_dir=/tmp/test-clm/runs/Nov26_16-35-03_ip-172-31-20-157,
logging_first_step=False,
logging_nan_inf_filter=True,
logging_steps=500,
logging_strategy=IntervalStrategy.STEPS,
lr_scheduler_type=SchedulerType.LINEAR,
max_grad_norm=1.0,
max_steps=-1,
metric_for_best_model=None,
mp_parameters=,
no_cuda=False,
num_train_epochs=3.0,
output_dir=/tmp/test-clm,
overwrite_output_dir=False,
past_index=-1,
per_device_eval_batch_size=8,
per_device_train_batch_size=8,
prediction_loss_only=False,
push_to_hub=False,
push_to_hub_model_id=None,
push_to_hub_organization=None,
push_to_hub_token=<PUSH_TO_HUB_TOKEN>,
remove_unused_columns=True,
report_to=[],
resume_from_checkpoint=None,
run_name=/tmp/test-clm,
save_on_each_node=False,
save_steps=500,
save_strategy=IntervalStrategy.STEPS,
save_total_limit=None,
seed=42,
sharded_ddp=[],
skip_memory_metrics=True,
tpu_metrics_debug=False,
tpu_num_cores=None,
use_legacy_prediction_loop=False,
warmup_ratio=0.0,
warmup_steps=0,
weight_decay=0.0,
xpu_backend=None,
)
11/26/2021 16:35:04 - INFO - datasets.info - Loading Dataset Infos from /home/ubuntu/.cache/huggingface/modules/datasets_modules/datasets/wikitext/a241db52902eaf2c6aa732210bead40c090019a499ceb13bcbfa3f8ab646a126
11/26/2021 16:35:04 - INFO - datasets.builder - Overwrite dataset info from restored data version.
11/26/2021 16:35:04 - INFO - datasets.info - Loading Dataset info from /home/ubuntu/.cache/huggingface/datasets/wikitext/wikitext-2-raw-v1/1.0.0/a241db52902eaf2c6aa732210bead40c090019a499ceb13bcbfa3f8ab646a126
11/26/2021 16:35:04 - WARNING - datasets.builder - Reusing dataset wikitext (/home/ubuntu/.cache/huggingface/datasets/wikitext/wikitext-2-raw-v1/1.0.0/a241db52902eaf2c6aa732210bead40c090019a499ceb13bcbfa3f8ab646a126)
11/26/2021 16:35:04 - INFO - datasets.info - Loading Dataset info from /home/ubuntu/.cache/huggingface/datasets/wikitext/wikitext-2-raw-v1/1.0.0/a241db52902eaf2c6aa732210bead40c090019a499ceb13bcbfa3f8ab646a126
100%|ββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββ| 3/3 [00:00<00:00, 802.17it/s]
[INFO|configuration_utils.py:602] 2021-11-26 16:35:04,918 >> loading configuration file https://huggingface.co/gpt2/resolve/main/config.json from cache at /home/ubuntu/.cache/huggingface/transformers/fc674cd6907b4c9e933cb42d67662436b89fa9540a1f40d7c919d0109289ad01.7d2e0efa5ca20cef4fb199382111e9d3ad96fd77b849e1d4bed13a66e1336f51
[INFO|configuration_utils.py:639] 2021-11-26 16:35:04,919 >> Model config GPT2Config {
"_name_or_path": "gpt2",
"activation_function": "gelu_new",
"architectures": [
"GPT2LMHeadModel"
],
"attn_pdrop": 0.1,
"bos_token_id": 50256,
"embd_pdrop": 0.1,
"eos_token_id": 50256,
"initializer_range": 0.02,
"layer_norm_epsilon": 1e-05,
"model_type": "gpt2",
"n_ctx": 1024,
"n_embd": 768,
"n_head": 12,
"n_inner": null,
"n_layer": 12,
"n_positions": 1024,
"reorder_and_upcast_attn": false,
"resid_pdrop": 0.1,
"scale_attn_by_inverse_layer_idx": false,
"scale_attn_weights": true,
"summary_activation": null,
"summary_first_dropout": 0.1,
"summary_proj_to_labels": true,
"summary_type": "cls_index",
"summary_use_proj": true,
"task_specific_params": {
"text-generation": {
"do_sample": true,
"max_length": 50
}
},
"transformers_version": "4.13.0.dev0",
"use_cache": true,
"vocab_size": 50257
}
[INFO|tokenization_auto.py:344] 2021-11-26 16:35:05,205 >> Could not locate the tokenizer configuration file, will try to use the model config instead.
[INFO|configuration_utils.py:602] 2021-11-26 16:35:05,783 >> loading configuration file https://huggingface.co/gpt2/resolve/main/config.json from cache at /home/ubuntu/.cache/huggingface/transformers/fc674cd6907b4c9e933cb42d67662436b89fa9540a1f40d7c919d0109289ad01.7d2e0efa5ca20cef4fb199382111e9d3ad96fd77b849e1d4bed13a66e1336f51
[INFO|configuration_utils.py:639] 2021-11-26 16:35:05,784 >> Model config GPT2Config {
"_name_or_path": "gpt2",
"activation_function": "gelu_new",
"architectures": [
"GPT2LMHeadModel"
],
"attn_pdrop": 0.1,
"bos_token_id": 50256,
"embd_pdrop": 0.1,
"eos_token_id": 50256,
"initializer_range": 0.02,
"layer_norm_epsilon": 1e-05,
"model_type": "gpt2",
"n_ctx": 1024,
"n_embd": 768,
"n_head": 12,
"n_inner": null,
"n_layer": 12,
"n_positions": 1024,
"reorder_and_upcast_attn": false,
"resid_pdrop": 0.1,
"scale_attn_by_inverse_layer_idx": false,
"scale_attn_weights": true,
"summary_activation": null,
"summary_first_dropout": 0.1,
"summary_proj_to_labels": true,
"summary_type": "cls_index",
"summary_use_proj": true,
"task_specific_params": {
"text-generation": {
"do_sample": true,
"max_length": 50
}
},
"transformers_version": "4.13.0.dev0",
"use_cache": true,
"vocab_size": 50257
}
[INFO|tokenization_utils_base.py:1742] 2021-11-26 16:35:07,791 >> loading file https://huggingface.co/gpt2/resolve/main/vocab.json from cache at /home/ubuntu/.cache/huggingface/transformers/684fe667923972fb57f6b4dcb61a3c92763ad89882f3da5da9866baf14f2d60f.c7ed1f96aac49e745788faa77ba0a26a392643a50bb388b9c04ff469e555241f
[INFO|tokenization_utils_base.py:1742] 2021-11-26 16:35:07,791 >> loading file https://huggingface.co/gpt2/resolve/main/merges.txt from cache at /home/ubuntu/.cache/huggingface/transformers/c0c761a63004025aeadd530c4c27b860ec4ecbe8a00531233de21d865a402598.5d12962c5ee615a4c803841266e9c3be9a691a924f72d395d3a6c6c81157788b
[INFO|tokenization_utils_base.py:1742] 2021-11-26 16:35:07,791 >> loading file https://huggingface.co/gpt2/resolve/main/tokenizer.json from cache at /home/ubuntu/.cache/huggingface/transformers/16a2f78023c8dc511294f0c97b5e10fde3ef9889ad6d11ffaa2a00714e73926e.cf2d0ecb83b6df91b3dbb53f1d1e4c311578bfd3aa0e04934215a49bf9898df0
[INFO|tokenization_utils_base.py:1742] 2021-11-26 16:35:07,792 >> loading file https://huggingface.co/gpt2/resolve/main/added_tokens.json from cache at None
[INFO|tokenization_utils_base.py:1742] 2021-11-26 16:35:07,792 >> loading file https://huggingface.co/gpt2/resolve/main/special_tokens_map.json from cache at None
[INFO|tokenization_utils_base.py:1742] 2021-11-26 16:35:07,792 >> loading file https://huggingface.co/gpt2/resolve/main/tokenizer_config.json from cache at None
[INFO|configuration_utils.py:602] 2021-11-26 16:35:08,370 >> loading configuration file https://huggingface.co/gpt2/resolve/main/config.json from cache at /home/ubuntu/.cache/huggingface/transformers/fc674cd6907b4c9e933cb42d67662436b89fa9540a1f40d7c919d0109289ad01.7d2e0efa5ca20cef4fb199382111e9d3ad96fd77b849e1d4bed13a66e1336f51
[INFO|configuration_utils.py:639] 2021-11-26 16:35:08,370 >> Model config GPT2Config {
"_name_or_path": "gpt2",
"activation_function": "gelu_new",
"architectures": [
"GPT2LMHeadModel"
],
"attn_pdrop": 0.1,
"bos_token_id": 50256,
"embd_pdrop": 0.1,
"eos_token_id": 50256,
"initializer_range": 0.02,
"layer_norm_epsilon": 1e-05,
"model_type": "gpt2",
"n_ctx": 1024,
"n_embd": 768,
"n_head": 12,
"n_inner": null,
"n_layer": 12,
"n_positions": 1024,
"reorder_and_upcast_attn": false,
"resid_pdrop": 0.1,
"scale_attn_by_inverse_layer_idx": false,
"scale_attn_weights": true,
"summary_activation": null,
"summary_first_dropout": 0.1,
"summary_proj_to_labels": true,
"summary_type": "cls_index",
"summary_use_proj": true,
"task_specific_params": {
"text-generation": {
"do_sample": true,
"max_length": 50
}
},
"transformers_version": "4.13.0.dev0",
"use_cache": true,
"vocab_size": 50257
}
[INFO|modeling_utils.py:1352] 2021-11-26 16:35:08,733 >> loading weights file https://huggingface.co/gpt2/resolve/main/pytorch_model.bin from cache at /home/ubuntu/.cache/huggingface/transformers/752929ace039baa8ef70fe21cdf9ab9445773d20e733cf693d667982e210837e.323c769945a351daa25546176f8208b3004b6f563438a7603e7932bae9025925
[INFO|modeling_utils.py:1619] 2021-11-26 16:35:11,081 >> All model checkpoint weights were used when initializing GPT2LMHeadModel.
[INFO|modeling_utils.py:1628] 2021-11-26 16:35:11,082 >> All the weights of GPT2LMHeadModel were initialized from the model checkpoint at gpt2.
If your task is similar to the task the model of the checkpoint was trained on, you can already use GPT2LMHeadModel for predictions without further training.
11/26/2021 16:35:11 - WARNING - datasets.arrow_dataset - Loading cached processed dataset at /home/ubuntu/.cache/huggingface/datasets/wikitext/wikitext-2-raw-v1/1.0.0/a241db52902eaf2c6aa732210bead40c090019a499ceb13bcbfa3f8ab646a126/cache-87a8d9e859906bd4.arrow
11/26/2021 16:35:11 - WARNING - datasets.arrow_dataset - Loading cached processed dataset at /home/ubuntu/.cache/huggingface/datasets/wikitext/wikitext-2-raw-v1/1.0.0/a241db52902eaf2c6aa732210bead40c090019a499ceb13bcbfa3f8ab646a126/cache-7e63090e8c713f4a.arrow
11/26/2021 16:35:11 - WARNING - datasets.arrow_dataset - Loading cached processed dataset at /home/ubuntu/.cache/huggingface/datasets/wikitext/wikitext-2-raw-v1/1.0.0/a241db52902eaf2c6aa732210bead40c090019a499ceb13bcbfa3f8ab646a126/cache-fec244142b111ff5.arrow
11/26/2021 16:35:11 - WARNING - datasets.arrow_dataset - Loading cached processed dataset at /home/ubuntu/.cache/huggingface/datasets/wikitext/wikitext-2-raw-v1/1.0.0/a241db52902eaf2c6aa732210bead40c090019a499ceb13bcbfa3f8ab646a126/cache-d3b2bb82e64457e6.arrow
11/26/2021 16:35:11 - WARNING - datasets.arrow_dataset - Loading cached processed dataset at /home/ubuntu/.cache/huggingface/datasets/wikitext/wikitext-2-raw-v1/1.0.0/a241db52902eaf2c6aa732210bead40c090019a499ceb13bcbfa3f8ab646a126/cache-8badde48a528711e.arrow
11/26/2021 16:35:11 - WARNING - datasets.arrow_dataset - Loading cached processed dataset at /home/ubuntu/.cache/huggingface/datasets/wikitext/wikitext-2-raw-v1/1.0.0/a241db52902eaf2c6aa732210bead40c090019a499ceb13bcbfa3f8ab646a126/cache-4c30c758881dc770.arrow
[INFO|trainer.py:1196] 2021-11-26 16:35:15,018 >> ***** Running training *****
[INFO|trainer.py:1197] 2021-11-26 16:35:15,018 >> Num examples = 2318
[INFO|trainer.py:1198] 2021-11-26 16:35:15,018 >> Num Epochs = 3
[INFO|trainer.py:1199] 2021-11-26 16:35:15,018 >> Instantaneous batch size per device = 8
[INFO|trainer.py:1200] 2021-11-26 16:35:15,018 >> Total train batch size (w. parallel, distributed & accumulation) = 64
[INFO|trainer.py:1201] 2021-11-26 16:35:15,018 >> Gradient Accumulation steps = 1
[INFO|trainer.py:1202] 2021-11-26 16:35:15,018 >> Total optimization steps = 111
0%| | 0/111 [00:00<?, ?it/s]/home/ubuntu/anaconda3/envs/pytorch_latest_p37/lib/python3.7/site-packages/torch/nn/parallel/_functions.py:65: UserWarning: Was asked to gather along dimension 0, but all input tensors were scalars; will instead unsqueeze and return a vector.
warnings.warn('Was asked to gather along dimension 0, but all '
Traceback (most recent call last):
File "run_clm.py", line 526, in <module>
main()
File "run_clm.py", line 474, in main
train_result = trainer.train(resume_from_checkpoint=checkpoint)
File "/home/ubuntu/transformers/src/transformers/trainer.py", line 1317, in train
tr_loss_step = self.training_step(model, inputs)
File "/home/ubuntu/transformers/src/transformers/trainer.py", line 1857, in training_step
loss = self.compute_loss(model, inputs)
File "/home/ubuntu/transformers/src/transformers/trainer.py", line 1889, in compute_loss
outputs = model(**inputs)
File "/home/ubuntu/anaconda3/envs/pytorch_latest_p37/lib/python3.7/site-packages/torch/nn/modules/module.py", line 889, in _call_impl
result = self.forward(*input, **kwargs)
File "/home/ubuntu/anaconda3/envs/pytorch_latest_p37/lib/python3.7/site-packages/torch/nn/parallel/data_parallel.py", line 168, in forward
return self.gather(outputs, self.output_device)
File "/home/ubuntu/anaconda3/envs/pytorch_latest_p37/lib/python3.7/site-packages/torch/nn/parallel/data_parallel.py", line 180, in gather
return gather(outputs, output_device, dim=self.dim)
File "/home/ubuntu/anaconda3/envs/pytorch_latest_p37/lib/python3.7/site-packages/torch/nn/parallel/scatter_gather.py", line 76, in gather
res = gather_map(outputs)
File "/home/ubuntu/anaconda3/envs/pytorch_latest_p37/lib/python3.7/site-packages/torch/nn/parallel/scatter_gather.py", line 70, in gather_map
for k in out))
File "<string>", line 9, in __init__
File "/home/ubuntu/transformers/src/transformers/file_utils.py", line 2027, in __post_init__
for element in iterator:
File "/home/ubuntu/anaconda3/envs/pytorch_latest_p37/lib/python3.7/site-packages/torch/nn/parallel/scatter_gather.py", line 70, in <genexpr>
for k in out))
File "/home/ubuntu/anaconda3/envs/pytorch_latest_p37/lib/python3.7/site-packages/torch/nn/parallel/scatter_gather.py", line 63, in gather_map
return Gather.apply(target_device, dim, *outputs)
File "/home/ubuntu/anaconda3/envs/pytorch_latest_p37/lib/python3.7/site-packages/torch/nn/parallel/_functions.py", line 72, in forward
return comm.gather(inputs, ctx.dim, ctx.target_device)
File "/home/ubuntu/anaconda3/envs/pytorch_latest_p37/lib/python3.7/site-packages/torch/nn/parallel/comm.py", line 235, in gather
return torch._C._gather(tensors, dim, destination)
RuntimeError: CUDA out of memory. Tried to allocate 12.27 GiB (GPU 0; 39.59 GiB total capacity; 28.63 GiB already allocated; 9.00 GiB free; 28.71 GiB reserved in total by PyTorch)
0%| | 0/111 [00:27<?, ?it/s]
(pytorch_latest_p37) ubuntu@ip-172-31-20-157:~/transformers/examples/pytorch/language-modeling$
``` | 11-26-2021 16:56:07 | 11-26-2021 16:56:07 | cc @sgugger maybe we should reduce the default batch size and/or sequence length here<|||||>The defaults are pretty good, but we should definitely update the example command given to either use a smaller `block_size` or some gradient accumulation with a smaller batch size.<|||||>This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the [contributing guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) are likely to be ignored. |
transformers | 14,540 | closed | cannot import name 'SpeechEncoderDecoder' from 'transformers' - wav2vec2-xls-r-2b-22-to-16 | Hi,
I am currently trying to run this model - facebook/wav2vec2-xls-r-2b-22-to-16
https://huggingface.co/facebook/wav2vec2-xls-r-2b-22-to-16
The example code given for the pipeline produces significantly different results from the API hosted on Hugging Face. I recorded the same audio and sent it to the API through the website, and also ran the sample code on Colab; the outputs are quite different.
I tried the second, step-by-step method too; it fails with "cannot import name 'SpeechEncoderDecoder' from 'transformers'".
I tried with the latest transformers library as well as 4.11.3.
Could you check what could be wrong? I can share my colab if needed.
Thanks for your help in advance. | 11-26-2021 16:51:13 | 11-26-2021 16:51:13 | Hi,
1) The pretrained model 'xls_r_2b_22_16.pt' is 19.6 GB on the fairseq GitHub repo, while the xls_r_2b_22_16 model on the Hugging Face Hub is 9.8 GB. Could this be the issue? Are these different models that were uploaded?
2) The second code sample shows `SpeechEncoderDecoder`; it should be `SpeechEncoderDecoderModel` (a minimal corrected call is sketched below).
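For reference, a minimal sketch of the corrected call (assuming a transformers version that exports the class; loading this checkpoint downloads several GB):
```python
from transformers import SpeechEncoderDecoderModel

# `SpeechEncoderDecoder` is not an exported class; the exported name is `SpeechEncoderDecoderModel`
model = SpeechEncoderDecoderModel.from_pretrained("facebook/wav2vec2-xls-r-2b-22-to-16")
```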
Thanks.<|||||>Hey @programmeddeath1,
Thanks for noticing the import bug. I just corrected all the model cards to import `SpeechEncoderDecoderModel` instead.
Regarding the different results - could you give me an example so that I can debug the API vs. a code snippet given by you? :-)
Thanks!<|||||>Regarding 1.) The fairseq checkpoint is so large because all the training states are included which are unnecessary for inference. I made sure that the HF checkpoint behaves exactly like the fairseq one by running multiple integration tests.<|||||>Hi,
1) I ran the sample code using the ASR pipeline on colab -
```
import torch
import torchaudio
import matplotlib.pyplot as plt
import IPython
print(torch.__version__)
print(torchaudio.__version__)
torch.random.manual_seed(0)
device = "cuda" if torch.cuda.is_available() else "cpu"
MAPPING = {
"en": 250004,
"de": 250003,
"tr": 250023,
"fa": 250029,
"sv": 250042,
"mn": 250037,
"zh": 250025,
"cy": 250007,
"ca": 250005,
"sl": 250052,
"et": 250006,
"id": 250032,
"ar": 250001,
"ta": 250044,
"lv": 250017,
"ja": 250012,
}
from datasets import load_dataset
from transformers import pipeline
# select correct `forced_bos_token_id`
forced_bos_token_id = MAPPING["en"]
# replace following lines to load an audio file of your choice
librispeech_en = load_dataset("patrickvonplaten/librispeech_asr_dummy", "clean", split="validation")
audio_file = librispeech_en[0]["file"]
asr = pipeline("automatic-speech-recognition", model="facebook/wav2vec2-xls-r-2b-22-to-16", feature_extractor="facebook/wav2vec2-xls-r-2b-22-to-16")
translation = asr(audio_file, forced_bos_token_id=forced_bos_token_id)
# This translation is different from the audio file
print(translation)
IPython.display.Audio(audio_file)
```
2) I also used the same Hugging Face API code from - https://huggingface.co/spaces/facebook/XLS-R-2B-22-16/tree/main
Here is the colab link if you want to see the code - ( https://colab.research.google.com/drive/1keVShJfrB68IeXn44UYB4OPoC2qDIGk1?usp=sharing ). I ran it on Colab Pro with a GPU.
I recorded the audio and ran it on the Hugging Face Spaces demo, and it translated perfectly. I then downloaded the audio and uploaded it to my instance of the same API on Colab and on AWS, but it shows random repetitive text like this: "The amendment number one hundred and twenty-eight from the amendment number two hundred and twenty-eight from the".
I am not sure where I am going wrong.
Thanks for your help!<|||||>Hey @programmeddeath1,
You might have not correctly resampled the audio. Essentially what I would try is to just copy paste the code of the spaces here: https://huggingface.co/spaces/facebook/XLS-R-2B-22-16/blob/main/app.py<|||||>@patrickvonplaten Only slightly related to this issue, but I am not able to initialise the processor like in the example on the model card:
```python
import torch
from transformers import Speech2Text2Processor, SpeechEncoderDecoderModel
model = SpeechEncoderDecoderModel.from_pretrained("facebook/wav2vec2-xls-r-300m-en-to-15")
processor = Speech2Text2Processor.from_pretrained("facebook/wav2vec2-xls-r-300m-en-to-15")
```
I get the following error on `transformers v4.12.5`:
```
The tokenizer class you load from this checkpoint is not the same type as the class this function is called from. It may result in unexpected tokenization.
The tokenizer class you load from this checkpoint is 'MBart50Tokenizer'.
The class this function is called from is 'Speech2Text2Tokenizer'.
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "/Users/nithinholla/opt/anaconda3/lib/python3.8/site-packages/transformers/models/speech_to_text_2/processing_speech_to_text_2.py", line 106, in from_pretrained
tokenizer = Speech2Text2Tokenizer.from_pretrained(pretrained_model_name_or_path, **kwargs)
File "/Users/nithinholla/opt/anaconda3/lib/python3.8/site-packages/transformers/tokenization_utils_base.py", line 1744, in from_pretrained
return cls._from_pretrained(
File "/Users/nithinholla/opt/anaconda3/lib/python3.8/site-packages/transformers/tokenization_utils_base.py", line 1872, in _from_pretrained
tokenizer = cls(*init_inputs, **init_kwargs)
File "/Users/nithinholla/opt/anaconda3/lib/python3.8/site-packages/transformers/models/speech_to_text_2/tokenization_speech_to_text_2.py", line 85, in __init__
with open(vocab_file, encoding="utf-8") as vocab_handle:
TypeError: expected str, bytes or os.PathLike object, not NoneType
```<|||||>hi @patrickvonplaten,
Thank you for your reply
That is what I have tried: on Colab and on an AWS GPU instance, I copy-pasted and ran the same Spaces code that you shared. Sampling shouldn't be the problem, since the audio is resampled with librosa in your code.
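(For context, a minimal sketch of the resampling step meant here, assuming librosa is installed and that the model expects 16 kHz mono input; the file name is a placeholder:)
```python
import librosa

# Decode the recording and resample it to 16 kHz mono, the rate the XLS-R feature extractor expects
speech, sample_rate = librosa.load("my_recording.wav", sr=16_000, mono=True)
```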
I have not changed anything, and I have hosted the same code on colab right now - ( https://13310.gradio.app ) is the link. It is decoding to something quite random compared to the audio input.<|||||>@Nithin-Holla,
Good catch! Yeah at the moment it's actually not possible to create a processor for `"facebook/wav2vec2-xls-r-300m-en-to-15"`. I have an open PR that will enable this - hope to get it merged by next week. But it should be the `Wav2Vec2Processor` then :-)<|||||>@programmeddeath1
The model behaves correctly for me locally for an input waveform. Could you maybe send me a link to an audio file which gives different results for you? <|||||>Hii @patrickvonplaten
Does the model hub have different instances from which it serves the models to different regions in the world?
This audio file (https://github.com/programmeddeath1/webhost/blob/master/ghwsa-1f9vw.wav), when run on my local instance
(https://16618.gradio.app), gives this output:

But if I use the API, it gives this output (translation: "these courses I am able to write much more complex code in python").
The code is exactly the same; the only possible difference is in the resources being downloaded from the hub.
Can you check on an instance where the resources are downloaded afresh from the hub? The gradio app I have attached is currently functioning.
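(If it helps rule out a stale or corrupted local cache, a minimal sketch; `force_download` is a standard `from_pretrained` argument, and the model name is the one used earlier in this thread:)
```python
from transformers import SpeechEncoderDecoderModel

# Re-fetch the checkpoint files from the Hub instead of reusing the local cache
model = SpeechEncoderDecoderModel.from_pretrained(
    "facebook/wav2vec2-xls-r-2b-22-to-16", force_download=True
)
```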
<|||||>There is only one model that is being used. I can't open your file with `soundfile` - see: https://colab.research.google.com/drive/1fVd18B1lwKeoTqw9ucMzjuS5wE-sIMA0?usp=sharing<|||||>Also let's try to solve this together - @programmeddeath1 could you create a google colab in which you re-create the spaces demo so that we can see together how the output could be different. I just checked again and the model works as expected for me locally<|||||>Hi @patrickvonplaten
I have added the spaces code and shared the colab with you.
https://colab.research.google.com/drive/1Bk9XGoDnxg3wadKVecREXjUth5dMkTky?authuser=2#scrollTo=oKv64CiwHni5
I have uploaded the same audio file and the output can be seen on the colab display (It's a nice beach, a nice beach and a nice beach.)
Please run the same and upload the audio file or a similar audio file on the colab.
We could get on a short call; I can share my screen and show the execution while I download and run it.
Thanks!<|||||>Hey @programmeddeath1,
I sadly can't open the colab. It says:
```bash
Notebook loading error
There was an error loading this notebook. Ensure that the file is accessible and try again.
Invalid Credentials
```
=> Can you make sure the google colab is accessible by everyone?<|||||>Hi @patrickvonplaten
Here is the open link to view
https://colab.research.google.com/drive/1Bk9XGoDnxg3wadKVecREXjUth5dMkTky?usp=sharing
I had shared the editor access to your account - [email protected].
I have now shared it to your gmail account too.
Tell me if I should give edit access to the open link, but it may be edited by someone else, so I just gave comment access.<|||||>Hey @programmeddeath1,
I can now access the google colab, but this doesn't really help me to find the problem. Sorry, I've probably not been very clear in the previous message.
What I need to efficiently find a possible difference is:
a) an audio file that I can run the spaces demo with. You already provided that here: wget https://github.com/programmeddeath1/webhost/blob/master/ghwsa-1f9vw.wav . However, this audio file is not readable - it's broken - so I cannot work with it. BTW, I now added the possibility to directly upload an audio file to the demo: https://huggingface.co/spaces/facebook/XLS-R-2B-22-16 . So I just need an audio file that I can then upload to the demo now.
b) A simple Python script (just transformers code - no gradio app that has to start in a colab) that runs the same code as the demo: https://huggingface.co/spaces/facebook/XLS-R-2B-22-16 but gives a different result - something along the lines of the sketch below.
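(For illustration, a minimal sketch of the kind of standalone script meant here; it just reuses the pipeline call from the model card sample earlier in this thread, with a placeholder path to a local audio file:)
```python
from transformers import pipeline

asr = pipeline(
    "automatic-speech-recognition",
    model="facebook/wav2vec2-xls-r-2b-22-to-16",
    feature_extractor="facebook/wav2vec2-xls-r-2b-22-to-16",
)
# 250004 is the MBart token id for English, as in the MAPPING dict used earlier in this thread
print(asr("path/to/recording.wav", forced_bos_token_id=250004))
```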
I sadly cannot help otherwise.
Could you please correct a) & b)?<|||||>Hi @patrickvonplaten, I was wondering if we could get on a Google Meet/Zoom call together and debug; I will make things quite clear. Should I send an invite?
I'm very sorry but I don't have the time to schedule google meets for specific issues. We're trying to tackle hundreds of issues every day at HF and have to try to be as efficient as possible. Could you maybe take a look at this document: https://github.com/huggingface/transformers/blob/master/ISSUES.md explaining how to best ask for help? :-)
Thanks!<|||||>Hey, sorry I could not reply due to a few deliveries. I am trying to set up the colab as per your previous comment. When I converted it to a simple Python script that loads the model and runs on a local file, it works properly, but in the same colab or on AWS, if I run your gradio app as is, it keeps giving junk responses. It seems like it's an issue with the `forced_bos_token_id`. I hardcoded the token id to English - 250004 - and it's working fine now. Thank you for your help!<|||||>I uploaded these two files
https://github.com/programmeddeath1/webhost/blob/master/7_4.wav
https://github.com/programmeddeath1/webhost/blob/master/9_4.wav
It is giving quite weird results with these audio files. This is Indian English, and the model gives quite good results for other audio files with a similar accent.
Can you listen to the audio and tell me whether this is an issue with the audio samples (on listening, they seem quite legible), or whether this is
a model issue and I need to fine-tune the model further? If so, can you guide me to the right resources to improve the model?
Thanks! |